I am engineering a system in which colleagues check out time-bound SSH certificates to access hosts in our environment, using the methodology laid out in SSH Certificate Injection with HCP Boundary | Boundary | HashiCorp Developer.
A key dependency of the solution we are designing is a set of network-accessible management hosts that act as “jump boxes” for network-inaccessible destinations, which is where my colleagues will ultimately connect to do the necessary work.
In theory, such a connection should be easy to establish with a correctly signed SSH certificate, but I am not sure how certificate injection by Boundary might affect this workflow.
After the certificate is brokered to the user/machine initiating the connection and the connection to the management host is established, can we present the same certificate to connect from there to the final host my colleagues need to reach?
If not, how might I construct an access scheme that works within the network-accessible management host / network-inaccessible destination topology we have in place?
Thanks in advance.
With HCP Boundary, you can use a downstream multi-hop “egress” worker on that jump host. Instead of two separate connections in sequence, the user makes a single connection that is proxied through the upstream “ingress” worker to that egress worker.
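A rough sketch of what the self-managed worker on the jump host might look like. Every path, address, and tag below is a placeholder for your environment, not a prescribed value:

```sh
# Hypothetical egress worker config for the jump host; all paths,
# addresses, and tags are illustrative placeholders.
cat > /etc/boundary.d/egress-worker.hcl <<'EOF'
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  # Chain to the network-accessible ingress worker instead of
  # dialing the HCP Boundary control plane directly.
  initial_upstreams = ["ingress-worker.example.internal:9202"]
  auth_storage_path = "/var/lib/boundary/egress-worker"
  tags {
    type = ["egress"]
  }
}
EOF

boundary server -config=/etc/boundary.d/egress-worker.hcl
```

The key line is `initial_upstreams`: pointing it at the ingress worker’s proxy address rather than at the control plane is what makes this worker a downstream hop.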
Thanks! That answers the question!
@omkensey - two additional questions.
1 - Am I correct in assuming that, just as the machine initiating the chain of connections has a localhost SSH port opened by the Boundary local client, the management/jump host would also have a similar SSH port established by its local Boundary instance?
2 - How is that local connection secured? Is it incumbent on the end user to access it promptly after establishing the connection? What happens if an unauthorized user or machine attempts to connect to one of these local listeners?
Thank you again!
The local Boundary client opens a random (by default) port on localhost. Clients on the same system can access that port, but it is not remotely exposed by default (and of course I wouldn’t recommend manually setting up any kind of port forward that would make it publicly accessible).
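To make that concrete, the client side looks roughly like this (the target ID is made up):

```sh
# Boundary picks a random localhost port for the session proxy and
# execs ssh against it (target ID is hypothetical).
boundary connect ssh -target-id=tssh_1234567890

# If your tooling needs a stable port, you can pin the local listener;
# it still binds to 127.0.0.1 by default.
boundary connect ssh -target-id=tssh_1234567890 -listen-port=50022
```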
The workers in the worker chain listen on the Boundary proxy port (9202 by default) and authenticate incoming connections as Boundary clients or multi-hop workers: if a client or multi-hop worker hasn’t authenticated to the Boundary control plane, or doesn’t present its Boundary credentials to the worker, the worker won’t proxy for it.
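That worker-side trust is established when the worker first registers. With the worker-led registration flow it looks something like this (the worker name is made up):

```sh
# On first start the worker logs a worker auth registration request
# token; an admin uses it to authorize the worker into the cluster
# (worker name is hypothetical).
boundary workers create worker-led \
  -worker-generated-auth-token="<token from the worker's startup log>" \
  -name="jump-host-egress"
```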
Note that the workers talk to one another and to the HCP Boundary control plane via their own APIs to carry your inbound traffic from the client to the ultimate SSH destination. Only the last worker in the chain (what our docs call the egress worker) needs to be able to reach the target system over SSH.
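To route sessions for the unreachable hosts through the jump-host worker, you’d set the target’s egress worker filter to match the tag from the config sketched earlier. Something like this, where the scope ID and address are placeholders:

```sh
# Hypothetical SSH target for a network-inaccessible destination; the
# egress worker filter matches the "egress" tag on the jump-host worker.
boundary targets create ssh \
  -scope-id=p_1234567890 \
  -name="inaccessible-destination" \
  -address="10.0.2.15" \
  -default-port=22 \
  -egress-worker-filter='"egress" in "/tags/type"'
```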