CLI - Attempting to connect to localhost for worker

I’m currently doing some testing in my environment with a Boundary dev box, and when trying to connect to a target, the error I get indicates that the client is attempting to connect to the worker at localhost instead of the worker in AWS. Here is the command and error:

BOUNDARY_ADDR='http://boundary.xxxxx.org:9200' boundary connect ssh -target-id=<target-id> -host-id=<host-id>
Unable to connect to worker at 127.0.0.1:9202
kex_exchange_identification: read: Connection reset by peer
error fetching connection to send session teardown request to worker: Unable to connect to worker at 127.0.0.1:9202

I’m pretty sure the issue is that I need to set an environment variable for the worker’s public IP, but I can’t find the right setting anywhere in the documentation. I’ve even looked through the code but can’t find it. I can confirm that port 9202 is listening on the host and that the security group allows port 9202 from where I’m connecting.

Any help with this is definitely appreciated!

What’s your setup currently – is dev-mode Boundary running locally on your laptop/desktop, or on an instance in AWS?

If the worker’s in AWS, you’ll want to use the public_addr option in the worker config.
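
For reference, in a non-dev config file that option lives in the worker stanza, alongside a proxy listener bound to all interfaces. A minimal sketch (the name and IP are placeholders, and a real config needs the rest of the controller/worker settings as well):

worker {
  name        = "aws-worker-1"   # placeholder name
  public_addr = "<public IP>"    # address advertised to clients for proxy connections
}

listener "tcp" {
  address = "0.0.0.0:9202"       # listen on all interfaces
  purpose = "proxy"
}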

Boundary is running in dev mode in AWS, on a single instance there.

I’m also running Boundary on my laptop for the CLI commands, but I didn’t run the “dev” command there since I didn’t need a local environment. I had previously tried just using the Boundary Desktop app (the DMG), but I was having issues connecting to the local port (a kex exchange error I couldn’t get around).

I can confirm that port 9202 is open on the AWS instance and listening on 0.0.0.0. The issue seems to be that the CLI is trying to connect to a local worker.

Did I install the wrong Boundary app for CLI?

What options did you give to the boundary dev command on your AWS instance?

Boundary in dev mode runs with a very minimal configuration that, by default, listens only on localhost for everything, e.g.:

$ boundary dev
==> Boundary server configuration:
[...]
Controller Public Cluster Addr: 127.0.0.1:9201
[...]
Listener 1: tcp (addr: "127.0.0.1:9200", cors_allowed_headers: "[]", cors_allowed_origins: "[*]", cors_enabled: "true", max_request_duration: "1m30s", purpose: "api")
Listener 2: tcp (addr: "127.0.0.1:9201", max_request_duration: "1m30s", purpose: "cluster")
Listener 3: tcp (addr: "127.0.0.1:9202", max_request_duration: "1m30s", purpose: "proxy")
[...]
Worker Public Proxy Addr: 127.0.0.1:9202
[...]
{
  "id": "jwepRLH8GY",
  "source": "https://hashicorp.com/boundary/dev-controller/boundary-dev",
  "specversion": "1.0",
  "type": "system",
  "data": {
    "version": "v0.1",
    "op": "worker.(Worker).createClientConn",
    "data": {
      "address": "127.0.0.1:9201",
      "msg": "connected to controller"
    }
  },
  "datacontentype": "text/plain",
  "time": "2021-08-17T16:25:11.711083034-04:00"
}

What’s probably happening is that the worker is listening only on localhost and reporting 127.0.0.1:9202 as its address to the controller. The controller then passes that address along to the client for proxy connections, but your client isn’t on the AWS instance, so it can’t reach that localhost-only worker.

You can add arguments to boundary dev to make the controller and worker threads listen on all interfaces and advertise the public IP, like so:

$ boundary dev -controller-public-cluster-address=[public IP] -worker-public-address=[public IP] -api-listen-address=0.0.0.0 -cluster-listen-address=0.0.0.0 -proxy-listen-address=0.0.0.0
==> Boundary server configuration:
[...]
Controller Public Cluster Addr: [public IP]:9201
[...]
Listener 1: tcp (addr: "0.0.0.0:9200", cors_allowed_headers: "[]", cors_allowed_origins: "[*]", cors_enabled: "true", max_request_duration: "1m30s", purpose: "api")
Listener 2: tcp (addr: "0.0.0.0:9201", max_request_duration: "1m30s", purpose: "cluster")
Listener 3: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy")
[...]
Worker Public Proxy Addr: [public IP]:9202
[...]
{
  "id": "r81iL3aKo1",
  "source": "https://hashicorp.com/boundary/dev-controller/boundary-dev",
  "specversion": "1.0",
  "type": "system",
  "data": {
    "version": "v0.1",
    "op": "worker.(Worker).createClientConn",
    "data": {
      "address": "0.0.0.0:9201",
      "msg": "connected to controller"
    }
  },
  "datacontentype": "text/plain",
  "time": "2021-08-17T16:36:26.958596943-04:00"
}

Then worker connections from the client on your laptop should work.
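
For example, re-running the same connect command from the original post on the laptop should now get proxied through [public IP]:9202 rather than 127.0.0.1:9202:

$ BOUNDARY_ADDR='http://boundary.xxxxx.org:9200' boundary connect ssh -target-id=<target-id> -host-id=<host-id>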

Thank you, that was it. When I was initially troubleshooting and looking at the listening ports, I mistakenly read the remote (peer) address, which said 0.0.0.0, and thought everything was good.
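
In case it helps anyone else: on a Linux host, something like the following is the check I should have done (output shown is illustrative). The Local Address column is the one that matters; the 0.0.0.0:* I was looking at is just the wildcard in the Peer Address column.

$ ss -lnt 'sport = :9202'
State   Recv-Q  Send-Q   Local Address:Port    Peer Address:Port
LISTEN  0       4096         127.0.0.1:9202              0.0.0.0:*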

Thanks again! I assume this will also fix the issue I was having with the Desktop thick client.

Yep, if you can connect with the CLI, you should be fine with the desktop client as well.
