What should I specify at listener address when running Boundary in Kubernetes?

Hi, I’m curious what to specify as the listener address on both the controller and the worker. I’m using Kubernetes, and I’m running the controller and the worker each as a single Deployment.

The Boundary controller logs don’t give much information (see below) when I run boundary connect ssh -target-id ttcp_eRx54ee62G. On stdout I see kex_exchange_identification: Connection closed by remote host. The server log is below.

boundary-controller 2021-03-05T09:13:35.606Z [INFO]  controller.worker-handler: session activated: session_id=s_MnZCuOmyCK target_id=ttcp_eRx54ee62G user_id=u_LXuxNOgL2E host_set_id=hsst_bVu9KkZys6 host_id=hst_Z9dfMSixay
boundary-controller 2021-03-05T09:13:35.629Z [INFO]  controller.worker-handler: authorized connection: session_id=s_MnZCuOmyCK connection_id=sc_bXWkeQXqJS connections_left=-1
boundary-controller 2021-03-05T09:13:35.675Z [INFO]  controller.worker-handler: connection established: session_id=s_MnZCuOmyCK connection_id=sc_bXWkeQXqJS client_tcp_address=127.0.0.1 client_tcp_port=46414 endpoint_tcp_address=172.20.84.75 endpoint_tcp_port=5432
boundary-controller 2021-03-05T09:13:35.809Z [INFO]  controller.worker-handler: connection closed: connection_id=sc_bXWkeQXqJS
boundary-controller 2021-03-05T09:14:12.049Z [INFO]  controller: terminating completed sessions successful: sessions_terminated=2

I suspect I have set the listener address incorrectly… The Boundary worker configuration is below.

disable_mlock = true
worker {
    # Name should be unique across workers
    name = "kubernetes-boundary-worker"
    description = "Boundary worker running in k8s"
    controllers = ["boundary-controller.{{ .Release.Namespace }}.svc.cluster.local:9201"]

    # public host or IP address (and optionally port) at which the worker can be reached by clients for proxying
    # This defaults to the address of the listener marked for proxy purpose
    public_addr = "inside-np.dev.mydomain.cloud:31577"
}
listener "tcp" {
    address = "127.0.0.1"
    purpose = "proxy"
    tls_disable = true
    tls_min_version = "tls12"
}
kms "aead" {
    purpose = "root"
    aead_type = "aes-gcm"
    key = "sP1fnF5Xz85RrXyELHFeZg9Ad2qt4Z4bgNHVGtD6ung="
    key_id = "global_root"
}
kms "aead" {
    purpose = "worker-auth"
    aead_type = "aes-gcm"
    key = "8fZBjCUfN0TzjEGLQldGY4+iE9AkOvCfjh7+p0GtRBQ="
    key_id = "global_worker-auth"
}
kms "aead" {
    purpose = "recovery"
    aead_type = "aes-gcm"
    key = "8fZBjCUfN0TzjEGLQldGY4+iE9AkOvCfjh7+p0GtRBQ="
    key_id = "global_recovery"
}

The Boundary controller configuration is below.

disable_mlock = true

controller {
    name = "kubernetes-controller"
    description = "A controller for a kubernetes demo!"
    database {
        url = "env://BOUNDARY_PG_URL"
    }
    # public host or IP address (and optionally port) at which the controller can be reached by workers
    # This will be used by workers after initial connection to controllers via the worker's controllers block
    public_cluster_addr = "boundary-controller.{{ .Release.Namespace }}.svc.cluster.local:9201"
}

listener "tcp" {
    address = "127.0.0.1"
    purpose = "api"
    tls_disable = true
    tls_min_version = "tls12"
}
listener "tcp" {
    address = "127.0.0.1"
    purpose = "cluster"
    tls_disable = true
    tls_min_version = "tls12"
}

kms "aead" {
    purpose = "root"
    aead_type = "aes-gcm"
    key = "sP1fnF5Xz85RrXyELHFeZg9Ad2qt4Z4bgNHVGtD6ung="
    key_id = "global_root"
}
kms "aead" {
    purpose = "worker-auth"
    aead_type = "aes-gcm"
    key = "8fZBjCUfN0TzjEGLQldGY4+iE9AkOvCfjh7+p0GtRBQ="
    key_id = "global_worker-auth"
}
kms "aead" {
    purpose = "recovery"
    aead_type = "aes-gcm"
    key = "8fZBjCUfN0TzjEGLQldGY4+iE9AkOvCfjh7+p0GtRBQ="
    key_id = "global_recovery"
}

Any suggestions or help would be appreciated. What should I specify as the listener address when running Boundary in Kubernetes?

Just looking at your config: the worker’s public address uses port 31577 - is that port being mapped back to 9202 on the worker?

One other question is what “boundary-controller.{{ .Release.Namespace }}.svc.cluster.local” resolves to - I’m assuming it returns an address from service discovery, which is not going to be 127.0.0.1. I’m using Consul with Envoy, and had to set my worker block as below, with an upstream configured for 9201 to my cluster service:

worker {
  name = "worker-{{ env "NOMAD_ALLOC_INDEX" }}"
  description = "A default worker created demonstration"
  controllers = [
    "127.0.0.1:9201"
  ]

  public_addr = "env://BOUNDARY_PUBLIC_WORKER_ADDR"

  tags {
    type   = ["dev"]
    region = ["humblelab"]
  }
}

This allowed me to listen on 127.0.0.1, while straight host networking required me to listen on the host address.


IMHO, in general in any containerized environment you should be listening on 0.0.0.0 inside a container that needs to accept connections from outside (with a few exceptions and you’ll know what they are if they apply to you). If you listen on 127.0.0.1 then port forwards from outside the container (e.g. k8s NodePorts/service network ports) may not work as you expect.
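
In your case that would look something like this for the worker’s proxy listener (just a sketch; 9202 is Boundary’s default proxy port, and I’m assuming your 31577 NodePort maps to it):

listener "tcp" {
    # Bind on all pod interfaces so the NodePort / Service can forward traffic in;
    # 127.0.0.1 is only reachable from inside the pod itself.
    address = "0.0.0.0:9202"
    purpose = "proxy"
    tls_disable = true
    tls_min_version = "tls12"
}

public_addr then stays the externally reachable host and NodePort (inside-np.dev.mydomain.cloud:31577 in your config), and the same idea applies to the controller’s api and cluster listeners.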

Also, make sure that both your worker and controller are accessible via their advertised addresses both to each other and from your SSH client host. One thing that tripped me up early on was that after the worker auths to the controller, the controller redirects it to connect to the controller’s public address if that’s configured. If there’s a mismatch you’ll see the worker auth successfully in the logs on both sides, then the worker will log errors about connecting to the controller’s public address forever. In Kubernetes this can be tricky if you’re using a LoadBalancer service as you may not even know the public address when Boundary comes up if it hasn’t created the load balancer yet. I’m exploring using Terraform to resolve this ordering issue like so:

  • create the needed LoadBalancer service(s) and wait for them to come up (via the wait_for feature of the new alpha Kubernetes provider)
  • start Boundary controller/worker with templated config file(s) with the load-balancer public address rendered in

I don’t have sample code written yet, but the theory is sound… :grin:
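
Roughly, the shape I have in mind is something like the sketch below. It’s untested: the resource names and the worker.hcl.tpl template are placeholders, the load-balancer status attribute path varies by Kubernetes provider version, and it uses the stock provider’s wait_for_load_balancer argument rather than the alpha provider’s wait_for.

# Create the LoadBalancer Service first and block until it has an external address.
resource "kubernetes_service" "boundary_worker" {
  metadata {
    name = "boundary-worker"
  }

  spec {
    type = "LoadBalancer"
    selector = {
      app = "boundary-worker"
    }
    port {
      port        = 9202
      target_port = 9202
    }
  }

  # Wait for the cloud load balancer to get an ingress IP/hostname.
  wait_for_load_balancer = true
}

# Render the worker config with the now-known public address and publish it
# for the Deployment (here via a ConfigMap; a Secret works the same way).
resource "kubernetes_config_map" "boundary_worker_config" {
  metadata {
    name = "boundary-worker-config"
  }

  data = {
    # The status attribute path below is an assumption for newer provider
    # versions; older versions expose load_balancer_ingress instead.
    "worker.hcl" = templatefile("${path.module}/worker.hcl.tpl", {
      public_addr = "${kubernetes_service.boundary_worker.status[0].load_balancer[0].ingress[0].hostname}:9202"
    })
  }
}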

Can you provide any reference for the solution you mentioned above? I’m kind of stuck deploying Boundary into Kubernetes with two workers and a controller, and I’m getting the error below. I’m using the Kubernetes service DNS name as the public address for both the controller and the worker.

{"id":"416hyWNeMB","source":"https://hashicorp.com/boundary/boundary-worker-5787dc9557-7g4xz/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).sendWorkerStatus","data":{"msg":"worker is not authenticated to an upstream, not sending status"}},"datacontentype":"application/cloudevents","time":"2022-10-06T06:57:34.766090063Z"}

It’s hard to tell what might be going on without some config info. Can you post the listener part of your controller config, the listener and worker part of your worker config, and your Kubernetes manifests for them?

My worker and controller config

worker {
	name = "kubernetes-worker"
	description = "A worker for a kubernetes demo"
	address = "localhost"
	controllers = ["boundary-controller.default.svc.cluster.local"]
	public_addr = "localhost"
}

listener "tcp" {
	address = "0.0.0.0"
	purpose = "proxy"
	tls_disable = true
}

Controller

controller {
	name = "kubernetes-controller"
	description = "A controller for a kubernetes demo!"
	database {
		url = "env://BOUNDARY_PG_URL"
	}
	public_cluster_addr = "boundary-controller.default.svc.cluster.local"
}

listener "tcp" {
	address = "0.0.0.0"
	purpose = "api"
	tls_disable = true
}

listener "tcp" {
	address = "0.0.0.0"
	purpose = "cluster"
	tls_disable = true
}

And I am exposing the Kubernetes services using ClusterIP.

What does your Kubernetes manifest for the worker and controller look like?

Also, I don’t think localhost is going to work for the worker listen address or public address. Try 0.0.0.0 for the listen address, and you need to expose the worker to clients outside the cluster somehow (at least with a NodePort, not a ClusterIP service) and set its public address appropriately, depending on how it’s exposed. Otherwise clients outside the cluster can’t reach it.
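
Something along these lines for the worker, as a sketch (the node hostname worker-node.example.com and NodePort 30202 are made-up placeholders; use whatever actually fronts the proxy listener):

worker {
	name = "kubernetes-worker"
	description = "A worker for a kubernetes demo"
	# Cluster port 9201 is the controller default.
	controllers = ["boundary-controller.default.svc.cluster.local:9201"]

	# Must be reachable by clients outside the cluster, e.g. a NodePort or
	# LoadBalancer in front of the proxy listener below.
	public_addr = "worker-node.example.com:30202"
}

listener "tcp" {
	# Listen on all interfaces inside the pod; the NodePort forwards to 9202.
	address = "0.0.0.0:9202"
	purpose = "proxy"
	tls_disable = true
}

The NodePort Service for the worker would then need its target port pointed at 9202 on the pod.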