Connecting a worker in another cloud provider to a controller in AWS

I’ve set up an HA Boundary environment in AWS, and it works within the AWS estate. However, I now need to connect a worker in another cloud provider to it. The AWS estate was provisioned via Terraform; the worker in the other cloud was created manually (Ubuntu) and has Boundary installed. We have opened the ports to the designated IP addresses at both ends, and I can SSH to the worker, so I know that connection is working. What I don’t know is how to connect that worker to the controllers in AWS. I have tried setting the worker up as a target, but I cannot connect to it; I get the following error: kex_exchange_identification: Connection closed by remote host

Any ideas on how I connect it via the controllers in AWS, or does it connect in some other way?

Any help appreciated.

What you’ll want to do first is make sure the worker’s proxy port (9202 by default) is open for clients to access, that the controllers’ cluster port (9201 by default) is accessible by the workers, and that the workers have tags in their configuration that mark which environment they’re in. Then use those tags in filters on your targets so that a given target is only accessed by the workers in its environment.
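For reference, here’s a minimal sketch of what that part of the worker config might look like (names, addresses, and tag values are placeholders, and I’ve left out the worker-auth piece, KMS or otherwise, since that depends on your setup):

```hcl
# worker.hcl -- minimal sketch with placeholder values
listener "tcp" {
  address = "0.0.0.0:9202"   # proxy port that clients connect to
  purpose = "proxy"
}

worker {
  name        = "other-cloud-worker-1"   # example worker name
  public_addr = "203.0.113.10"           # address clients can reach the worker on
  tags {
    cloud = ["other-cloud"]              # example tag to reference in target filters
  }
}
```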

If you’ve got the worker config and connectivity right, you should see log entries on the controller about the worker registering when you start boundary server on the worker.
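For example, something like this on the worker (the config path is just an assumption about where you’ve put the file):

```shell
# start the worker with its config file; path is a placeholder
boundary server -config=/etc/boundary.d/worker.hcl
```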

Hi, thanks for that answer. However, I still don’t understand how the new worker in the other network (not AWS) knows where the controller is, and vice versa. Is there something I need to do on the worker to register it with the controller? I have a target on the new network (private IP), but how does the controller know to use the external worker instead of the AWS workers?

As it stands, the target gives me a connection refused message when I try to connect with ssh 127.0.0.1 -p , and I get kex_exchange_identification: Connection closed by remote host when I try to connect via boundary connect ssh -target-id ttcp_ -username ubuntu

Workers have an initial_upstreams item in their config that tells them an initial set of controllers to try to connect to. When a worker connects to one of them, that controller will tell it about any other controllers it doesn’t already know about.
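In the worker stanza that looks roughly like this (the addresses are placeholders; point them at your controllers’ cluster port, 9201 by default):

```hcl
worker {
  name = "other-cloud-worker-1"
  initial_upstreams = [
    "controller-a.example.internal:9201",   # placeholder controller addresses
    "controller-b.example.internal:9201",
  ]
}
```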

Also in your worker config, you’ll want to assign tags to each worker (these are Boundary-specific, separate from tags in the cloud provider) so that you can use filters on your targets to make sure that targets are accessed only through workers that can provide access to them.
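Then on the target you set a worker filter that matches those tags, something along these lines (the target ID and tag values are placeholders, and depending on your Boundary version the flag may be -egress-worker-filter rather than -worker-filter):

```shell
# restrict this target to workers tagged cloud=other-cloud (placeholder values)
boundary targets update tcp \
  -id ttcp_1234567890 \
  -worker-filter '"other-cloud" in "/tags/cloud"'
```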

A further note: since you’re crossing cloud boundaries, which makes KMS-based worker auth difficult to impossible to do securely, you may want to investigate PKI workers (that link goes to the registration process for HCP Boundary, but it should be almost the same for OSS Boundary, except you’ll still use initial_upstreams instead of hcp_boundary_cluster_id in the worker config). They don’t need KMS access to communicate securely with the controllers; they go through a token-based registration flow instead.
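A rough sketch of what that looks like for a self-managed worker (paths, addresses, and tag values are placeholders):

```hcl
# pki-worker.hcl -- sketch with placeholder values; note there's no KMS worker-auth block
worker {
  auth_storage_path = "/var/lib/boundary/worker"   # where the worker stores its auth credentials
  initial_upstreams = ["controller-a.example.internal:9201"]
  tags {
    cloud = ["other-cloud"]
  }
}

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}
```

When the worker starts, it prints a worker auth registration request token, which someone authenticated to the controllers then submits with something like the following (double-check the flag name against your Boundary version):

```shell
# token value is a placeholder copied from the worker's startup output
boundary workers create worker-led \
  -worker-generated-auth-token=<token printed by the worker>
```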