Hi, I set up an SSH target with brokered credentials in HCP Boundary.
I am now trying to connect to this target from the macOS CLI.
After authenticating to the cluster, I try to run
boundary connect ssh -target-id=
And I get in response -
kex_exchange_identification: Connection closed by remote host
Connection closed by 127.0.0.1 port 57201
When I SSH directly to the target (an EC2 instance in AWS) without Boundary, I succeed.
Any idea what the issue could be? Disabling the local firewall and allowing Remote Login on macOS didn't help (though as a long-term solution I would not want to allow SSH connectivity to my macOS machine just to SSH to remote hosts).
I am using static username and keypair credentials (stored in Boundary, not Vault), and I am trying those same credentials when connecting directly. Direct SSH works when I point to the local .pem file; when running boundary connect, I assume it uses the brokered credentials, where I pasted the private key contents into the "SSH Private Key" field.
The server side is a plain AWS EC2 instance (Amazon Linux 2), which generates a key pair automatically upon creation; I believe it lets me download the private key and automatically places the public key in authorized_keys on the server.
I don't believe I have a security group issue, as I also tried allowing SSH access from all source addresses and it still didn't work. For future reference, though, since I don't want to allow all addresses: how can I tell what source IPs the HCP workers use? I'd like to allowlist them all in the firewall.
I also tried allowing Remote Login on my local macOS and turning off the host firewall, and neither helped. Though I assume boundary connect doesn't need to proxy via a local SSH server, but rather proxies through the remote Boundary worker and from there to the target server?
To test that, you can issue the boundary connect command without the ssh subcommand. It will output the credentials to the screen and still proxy the TCP connection, but without attempting to run ssh for you. You'd be responsible for copying the username and key and passing them to ssh yourself.
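A minimal sketch of that test (the target ID, username, and proxy port below are placeholders; Boundary prints the actual credentials and the local proxy port when the session starts):

$ boundary connect -target-id=ttcp_1234567890
(brokered credentials and the local 127.0.0.1 proxy port are printed here)
$ ssh -i /path/to/brokered-key.pem -p <proxy_port> ec2-user@127.0.0.1

If the manual ssh through the proxy succeeds, the TCP path is fine and the problem is in the credential handling; if it fails the same way, the issue is between the worker and the target.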
I'm not an HCP Boundary user, but I believe they don't publish those (or at least I couldn't find them). One thing you can do, though, is run a self-managed worker in the same VPC as the instance you're trying to access.
That's correct. It does not connect to any local service. It creates an outbound connection to the worker on port tcp/9202, so that would only be a problem if you have egress firewall rules. You can easily test that if you have Netcat installed:
$ nc -v -z <boundary_worker_hostname> 9202
Connection to ****** port 9202 [tcp/wap-wsp-s] succeeded!
It would still be good if you could set the sshd LogLevel to DEBUG on the target and share the connection logs here.
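On the target that would look something like the following (a sketch assuming the stock OpenSSH server on Amazon Linux 2; the unit name and config path may differ on other distributions):

$ sudo sed -i 's/^#\?LogLevel.*/LogLevel DEBUG3/' /etc/ssh/sshd_config
$ sudo systemctl restart sshd
$ sudo journalctl -u sshd -f

Then retry boundary connect ssh from the client and capture the lines sshd logs for the failed key exchange.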