Is it possible to run Boundary controllers and workers on Kubernetes with multiple cloud resources as targets?

Hello experts!

My team is doing a POC of HashiCorp Boundary for access management, and so far I love the product.

I have one question about setting up Boundary. For a bit of context before I ask it: we are running on Microsoft Azure and will likely use Azure AD as the identity provider.

We also make extensive use of Azure PaaS services such as Function Apps, managed databases, Redis, queues, and Kubernetes clusters.

I did find this demo from Ned Bellavance on how to set up Boundary on Azure (Ned, if you are reading this, thank you for the brilliant demo).

What I want to know is whether it is possible to run the controllers and workers on a Kubernetes cluster instead of on virtual machines.

I did see the K8s references in the Boundary reference architecture (boundary-reference-architecture/deployment/kube in the hashicorp/boundary-reference-architecture repo on GitHub), but it looks like that setup is specifically for resources running on the Kubernetes cluster?

Is it possible to set up controllers and workers on Kubernetes to access all cloud resources, irrespective of whether they are running on the Kubernetes cluster?

All inputs and suggestions are appreciated.

Hey there and thanks for trying out Boundary!

Boundary can manage access across cloud and runtime platforms (in fact, that’s a big reason why we built it!). As long as your k8s cluster has networking configured to allow ingress and egress from the workers and controllers, you should be good to go. The configuration here can get a little tricky, but at the end of the day, as long as a TCP connection can be made from the client to the worker and from the worker to the target, you shouldn’t have any issues.
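
To make that TCP path concrete, here is a minimal sketch of what a KMS-authenticated worker config could look like. Every address, name, and key below is a placeholder I made up for illustration, not something from a real deployment:

```hcl
# Minimal Boundary worker config sketch. All addresses, names,
# and keys are placeholders.

listener "tcp" {
  # The proxy listener is what Boundary clients connect to.
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  name        = "k8s-worker-1"
  # Controllers the worker dials out to (egress from the cluster):
  controllers = ["boundary-controller.example.com:9201"]
  # The address clients must be able to reach (ingress to the cluster),
  # e.g. a node IP or a load balancer in front of the worker:
  public_addr = "203.0.113.10:30202"
}

kms "aead" {
  purpose   = "worker-auth"
  aead_type = "aes-gcm"
  # Must match the key the controller uses for its worker-auth purpose:
  key = "<base64-encoded 32-byte key>"
}
```

The client-to-worker leg goes through public_addr and the proxy listener; the worker-to-target leg is just ordinary outbound TCP from wherever the worker runs.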

Adding to what @malnick said, I have a demo environment that runs a Boundary worker inside Kubernetes (k3s in my case), exposed as a NodePort service with a fixed port number, and it works great (there’s a sketch of that Service after the list below). Some additional things to consider:

  • Controller/worker access out of Kubernetes to the Boundary database and KMS keys – not just network connectivity, but how will the Boundary processes authenticate to those resources? Explore using some type of instance profile credentials (on Azure, a managed identity) if you can; there’s a KMS sketch below as well.
  • How do you want to pass the config to the Boundary processes? I did it with a mounted Kubernetes Secret just for demo convenience, but that’s not the only way to do it (a snippet of that mount is below, too).
  • Add worker tags to the config of the workers inside Kubernetes if there are also workers that aren’t – this is not required, but it will let you create targets that reference Kubernetes services by native cluster DNS name, with a worker filter so they are always forwarded through an in-cluster worker (sketched at the end of this post). (Note that services in namespaces other than the worker’s will need to be referred to by their K8s service FQDN, e.g. myapp.namespace.svc.cluster.local.)
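
Here’s roughly what the NodePort Service in front of my worker looks like. The names, labels, and port numbers are my own placeholders, not anything canonical:

```yaml
# Exposes the worker's proxy listener on a fixed port on every node,
# so Boundary clients outside the cluster can reach it.
apiVersion: v1
kind: Service
metadata:
  name: boundary-worker
spec:
  type: NodePort
  selector:
    app: boundary-worker      # must match the worker pod's labels
  ports:
    - name: proxy
      port: 9202              # in-cluster Service port
      targetPort: 9202        # the worker's proxy listener
      nodePort: 30202         # fixed; must fall in the cluster's NodePort range
```

The worker’s public_addr then points at a node IP (or a load balancer in front of the nodes) plus that fixed nodePort, which is why pinning the port number matters.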
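
On the KMS point, and since the original question is about Azure: Boundary’s kms blocks support Azure Key Vault, and when the process runs with a managed identity the client credentials can be picked up from the environment rather than written into the config. A sketch, with made-up vault and key names:

```hcl
# One of the controller's KMS stanzas, backed by Azure Key Vault.
# With a managed identity attached to the VM/node, tenant_id,
# client_id, and client_secret can be omitted and are resolved
# from the environment instead.
kms "azurekeyvault" {
  purpose    = "root"
  vault_name = "my-boundary-vault"   # placeholder
  key_name   = "boundary-root"       # placeholder
}
```

The controller also needs a worker-auth purpose (and optionally recovery), each of which can be a separate Key Vault key.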
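
For passing the config, the mounted-Secret approach is just this (shown as a bare Pod for brevity; in practice you’d wrap it in a Deployment or StatefulSet, and all the names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: boundary-worker
  labels:
    app: boundary-worker
spec:
  containers:
    - name: boundary-worker
      image: hashicorp/boundary:latest
      command: ["boundary", "server"]
      args: ["-config", "/boundary/worker.hcl"]
      volumeMounts:
        - name: boundary-config
          mountPath: /boundary
          readOnly: true
  volumes:
    - name: boundary-config
      secret:
        secretName: boundary-worker-config   # Secret with a "worker.hcl" key
```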
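
Finally, the tag/filter combination from the last bullet: the tag half lives in the worker stanza, and the filter half lives on the target. The tag name and value here are my own convention, not anything Boundary mandates:

```hcl
# In the in-cluster worker's config: tag the worker so targets
# can be filtered onto it.
worker {
  name        = "k8s-worker-1"
  controllers = ["boundary-controller.example.com:9201"]
  public_addr = "203.0.113.10:30202"
  tags {
    type = ["kubernetes"]
  }
}
```

Then give the target an address like myapp.namespace.svc.cluster.local and set its worker filter to `"kubernetes" in "/tags/type"`, and sessions to that target will always be proxied through an in-cluster worker.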

(For my next trick, I plan to try tying in-cluster and out-of-cluster app instances together with Consul and using the Consul service discovery DNS name as the target address, so the worker filtering for in-cluster services will no longer be necessary. In theory this is fairly trivial; I just want to fully work out an example.)