Why does the worker in HCP Boundary have to be publicly available?

I have been following the provided reference architecture for HCP Boundary to test it thoroughly and grasp its concepts before considering adoption within our organization.

While the Terraform configuration in the repository is quite old, I managed to make it work by making a few adjustments to align it with the latest provider.

A couple of things caught my attention. First, the provisioning places the controller and worker instances in a public subnet. One reason for this is to allow SSH access so that files can be copied over and the remote-exec provisioner can run. Beyond that, however, I couldn’t find any valid justification for keeping the controllers in a public subnet, especially since they sit behind a public load balancer.

Therefore, I attempted to relocate the controllers to a private subnet, and that adjustment worked well. However, when I tried moving the workers to a private subnet as well, I could no longer establish a connection to the target. Could this be because the worker proxies the private target to the client, without the client needing to communicate directly with the controller?

As long as these connections can be made, everything should work:

  • Client to the control plane (you can use a public LB fronting private controllers for this as you already saw)
  • Client to the proxy port on the worker
  • Worker(s) to the control plane hosts – if you’re hosting your own controllers, this should be direct connectivity from each ingress worker to all controllers, with no load balancers in between.
  • Worker to the target host(s) on the target port (a rough security-group sketch of these paths follows this list).
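
For concreteness, here is a minimal Terraform sketch of what those paths look like as AWS security group rules, assuming self-managed controllers on AWS. Ports are the Boundary defaults; the security group names, variables, and CIDRs are placeholders, not anything taken from the reference architecture. The client-to-control-plane path isn’t shown because the public load balancer already handles it.

```hcl
# Client -> worker proxy listener (Boundary's default proxy port is 9202).
resource "aws_security_group_rule" "client_to_worker_proxy" {
  type              = "ingress"
  from_port         = 9202
  to_port           = 9202
  protocol          = "tcp"
  cidr_blocks       = [var.client_cidr] # wherever your clients connect from
  security_group_id = aws_security_group.worker.id
}

# Worker -> controllers on the cluster port (default 9201). The worker
# initiates this connection, so it's an ingress rule on the controller side.
resource "aws_security_group_rule" "worker_to_controller" {
  type                     = "ingress"
  from_port                = 9201
  to_port                  = 9201
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.worker.id
  security_group_id        = aws_security_group.controller.id
}

# Worker -> target host(s) on the target port (SSH shown as an example).
resource "aws_security_group_rule" "worker_to_target" {
  type                     = "ingress"
  from_port                = 22
  to_port                  = 22
  protocol                 = "tcp"
  source_security_group_id = aws_security_group.worker.id
  security_group_id        = aws_security_group.target.id
}
```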

Either of the last two could be the issue you’re having – it sounds like either the workers can’t reach the target, or maybe the workers aren’t able to check in with the controllers when they start up so there are no workers available for proxying. What happens when you try to connect? What do the worker logs say?

@omkensey thank you for the detailed response.
My understanding was that the client can’t communicate directly with the worker, and that it was the controller that forwarded requests to the workers on the proxy port.

But you mentioned:

  • Client to the proxy port on the worker

From that, I am assuming the workers need to be publicly accessible, since the desktop client is running on my local machine.

Yeah, the clients are what need to connect to the workers. The control plane actually doesn’t connect out to the workers at all; the workers connect in to the control plane (unless, I think, the control plane is using a worker to reach a private Vault, in which case the control plane is behaving as a Boundary client itself).
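
To make that direction of connectivity concrete, here’s a minimal sketch of a self-managed worker config for an HCP Boundary control plane. The cluster ID, address, and path are placeholders and the exact values depend on your setup; the point is that the worker dials out to the control plane, and the only thing that needs to reach the worker is the client hitting its proxy listener.

```hcl
# Sketch of a self-managed worker config; values are placeholders.
disable_mlock = true

# The worker dials out to the HCP Boundary control plane; nothing on the
# control plane side connects in to the worker.
hcp_boundary_cluster_id = "<your-hcp-cluster-id>"

# The proxy listener is the one port clients need to reach.
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  # Address advertised to clients; it must be resolvable and reachable from
  # wherever the desktop/CLI client runs (a public DNS name or IP if your
  # clients are outside the VPC).
  public_addr = "worker.example.com:9202"

  # Where the worker stores its authentication credentials.
  auth_storage_path = "/var/lib/boundary/worker"

  # If you were running your own controllers instead of HCP Boundary, you
  # would point the worker at them here instead, e.g.:
  # initial_upstreams = ["controller-1.internal:9201"]
}
```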

A slightly off-topic question:

Does Boundary offer a web client that can be hosted in our VPC? Currently, we use Teleport to secure our internal tools through the Teleport HTTPS proxy, and the prospect of installing the client on every machine in our organization seems impractical.
Additionally, is HCP Boundary suitable for the use case I described?