Access to kubeapi and apps in k8s

Hi all,

I am evaluating Boundary at the moment and I really love it. :partying_face:

I want to set it up in the cloud to give users access to the Kube API via kubectl and to apps running as pods within k8s (Argo CD, Lens, etc.).
My best guess right now would be to set up a controller/worker in the same VPC as the k8s cluster but outside of it (to access the Kube API), and another worker inside k8s as a pod (to access the apps).

My question now is: do I have to publish two endpoints to the internet? One to reach the controller/worker in the VPC and one to reach the worker inside of k8s? As far as I understand, the user connects directly to the worker, not via the controller or another hop.
If I wanted to expose only one endpoint to the internet, what would a possible architecture look like?

Thank you very much
Mark

That’s correct, the worker in k8s needs to be exposed someplace the client can connect to it directly. I’ve been thinking of it as a sort of IngressController – you can use it to layer-4-expose various Kubernetes apps with Boundary authn/authz integration, without needing the apps themselves to be exposed directly.
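To make the in-cluster worker idea concrete, here is a rough sketch of what its configuration might look like (all addresses, names, and the KMS key are placeholders I made up, not from this thread, and field names like `initial_upstreams` vary by Boundary version):

```hcl
# boundary-worker.hcl -- sketch only, assuming KMS-based worker auth

listener "tcp" {
  purpose = "proxy"
  address = "0.0.0.0:9202"
}

worker {
  # Address clients use to reach this worker, e.g. via a
  # LoadBalancer Service in front of the worker pod (placeholder)
  public_addr = "boundary-worker.example.com:9202"

  # Controller reachable inside the VPC (placeholder)
  initial_upstreams = ["boundary-controller.internal:9201"]

  tags {
    type = ["kubernetes"]
  }
}

kms "aead" {
  purpose   = "worker-auth"
  aead_type = "aes-gcm"
  key       = "<base64 key shared with the controller>"
  key_id    = "global_worker-auth"
}
```

The `public_addr` is the one endpoint that has to be reachable by clients; the upstream connection to the controller can stay private to the VPC.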

That said, if the client isn’t coming across the public Internet, the controller/worker endpoints don’t need to be fully public, just available to the clients.

Incidentally, you should also be able to access the kube-api through the in-cluster worker as well, by using the Kubernetes DNS name as a Boundary target address. So if you only needed the out-of-cluster worker for API access, maybe you don’t need it at all :slight_smile:
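An untested sketch of what that could look like with the CLI (the scope ID is a placeholder, direct `-address` targets and egress worker filters require a reasonably recent Boundary version, and the exact filter flag name differs across releases):

```shell
# Create a TCP target pointing at the in-cluster API server DNS name;
# the worker filter pins sessions to the in-cluster worker by its tag.
boundary targets create tcp \
  -scope-id <project scope id> \
  -name kube-api \
  -address kubernetes.default.svc \
  -default-port 443 \
  -egress-worker-filter '"kubernetes" in "/tags/type"'
```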

Thank you very much for the answer.
The clients always come from the internet, as we don’t have a VPN or jump host in the VPC, so I have to publish Boundary to the internet.
Thank you for the hint, I’ll give it a try. It would be great if this works. :+1:

Update: This was a nice idea, thank you. It is working. If the worker runs in a namespace other than default, you have to use the full service DNS name kubernetes.default.svc.
After adding it as a target you can then use boundary connect kube -target-id <k8s target id> :partying_face: :+1:
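For completeness, if I understand the `connect kube` helper correctly, it can also pass arguments straight through to kubectl in one step (target ID is a placeholder):

```shell
# Proxies the session and runs kubectl against the proxied endpoint
boundary connect kube -target-id <k8s target id> -- get pods -A
```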