Kubernetes Secrets Engine for multiple Kubernetes clusters

Greetings,

We are using HashiCorp Vault in our company and we would love to use the brand new Kubernetes Secrets Engine to manage authentication for several Bash scripts that currently authenticate with the Kubernetes API server using x509 certificates.

We have built a proof of concept with the Kubernetes Secrets Engine against the local Kubernetes cluster (the one where Vault is running) and everything works like clockwork. However, we would like to use this secrets engine with every single Kubernetes cluster on our premises. The procedure seems as simple as creating one engine instance per cluster and configuring the API server endpoint accordingly in each engine configuration. The JWT is another story, though: that token comes from the Service Account the Vault server is using, which is bound to the local cluster.

One possible way to achieve this might be to create tokens within all target clusters and mount them into the Vault server Pods as volumes, but we dislike this approach because it involves manual configuration.

So, in light of the above, is there a way to achieve this goal? That is, can Vault be configured to generate tokens in remote clusters, not only in the one where Vault is running?

Thank you very much.


Wouldn’t you just have a unique mount path per cluster for each secrets engine?

Exactly @mister2d, the idea is to have one Kubernetes secrets engine instance per Kubernetes cluster, each mounted at a different mount path and therefore with a completely separate configuration. For each cluster, this configuration includes the API server endpoint, the root CA certificate, and the JSON Web Token of the Service Account Vault uses to authenticate with that cluster. This Service Account will have, at minimum, permission to create tokens in the target cluster.
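For what it's worth, the per-cluster layout could be sketched like this (the cluster names and mount paths are illustrative, not from the thread):

```shell
# One Kubernetes secrets engine instance per cluster, each at its
# own mount path, so each keeps a fully separate configuration.
# "cluster-a" and "cluster-b" are hypothetical names.
vault secrets enable -path=kubernetes-cluster-a kubernetes
vault secrets enable -path=kubernetes-cluster-b kubernetes

# Listing the mounts shows the two independent engine instances.
vault secrets list
```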

That’s our idea, but we don’t know for sure if this is achievable.

Have you attempted it? It should be straightforward. I’ve used multiple mount paths for other secrets engines without issue. Each mount path is basically a configuration silo.

@mister2d You’re absolutely right: using multiple mount paths for multiple secrets engines or auth methods is straightforward, and I’ve also done it many times without trouble. So I don’t think running multiple instances of the Kubernetes Secrets Engine, each at its own mount path, is a problem whatsoever.

The thing is that I’d love to know whether or not it’s possible to use one instance of the Kubernetes Secrets Engine to connect to a remote cluster, using its remote API server endpoint, its root CA certificate, and a JWT bound to a Service Account created in that remote cluster with the appropriate permissions to create tokens. This CA certificate and JWT should be mounted into the Vault server’s filesystem by using a ConfigMap or a Secret.
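As a sketch of the remote-cluster side, assuming a hypothetical namespace `vault-integration` and Service Account `vault-sa` (neither name comes from the thread), the minimum setup in the target cluster might look like:

```shell
# Run against the remote (target) cluster.
kubectl create namespace vault-integration
kubectl create serviceaccount vault-sa -n vault-integration

# The engine needs, at minimum, permission to create ServiceAccount
# tokens (the TokenRequest API) in the target cluster.
kubectl create clusterrole vault-token-creator \
  --verb=create \
  --resource=serviceaccounts/token
kubectl create clusterrolebinding vault-token-creator \
  --clusterrole=vault-token-creator \
  --serviceaccount=vault-integration:vault-sa
```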

In my mind it should work, but I don’t know for sure. On my next working day I’ll carry out the definitive attempt, and I’ll write back with the results.

Thank you!

Most of what you said is possible. The bit that isn’t is:

> This CA certificate and JWT should be mounted into the Vault Server’s filesystem by using a ConfigMap or a Secret.

Vault requires these be passed to the Vault HTTP API, not supplied via Kubernetes concepts.

Absolutely. My production Vault instance is external to my Kubernetes and Nomad clusters and I use it for PKI and secrets rendering. Why wouldn’t it work?

> Vault requires these be passed to the Vault HTTP API, not supplied via Kubernetes concepts.

Of course! I meant using a ConfigMap or a Secret so that Vault’s Pods can reach the root CA file and the JWT file once they are mounted into their filesystem, that is, running the secrets engine configuration command with the vault binary inside the Pod and pointing it at both files on the Pod’s filesystem. However, I guess the same goal can also be achieved with the binary installed on my own laptop, can’t it? That way, no ConfigMap or Secret would be needed.
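Since the configuration goes through the Vault HTTP API, it can indeed be written from any workstation that has the `vault` CLI and network access to the Vault server; the CA file and JWT only need to exist locally. A hedged sketch, where the addresses, mount path, role, and file names are all illustrative:

```shell
# Point the CLI at the Vault server's HTTP API.
export VAULT_ADDR="https://vault.example.com:8200"

# Configure the engine instance for the remote cluster. The @file
# syntax reads file contents, so ca.crt and token.jwt only need to
# exist on this workstation, not inside the Vault Pods.
vault write kubernetes-cluster-a/config \
  kubernetes_host="https://cluster-a.example.com:6443" \
  kubernetes_ca_cert=@ca.crt \
  service_account_jwt=@token.jwt \
  disable_local_ca_jwt=true

# A role tied to a pre-existing remote Service Account, plus a
# credentials request that makes Vault mint a token in that cluster.
vault write kubernetes-cluster-a/roles/script-role \
  allowed_kubernetes_namespaces="vault-integration" \
  service_account_name="vault-sa"
vault write kubernetes-cluster-a/creds/script-role \
  kubernetes_namespace=vault-integration
```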

> Absolutely. My production Vault instance is external to my kubernetes and Nomad clusters and I use it for pki and secrets rendering. Why wouldnt it work?

I don’t know at all! I have never used an external Kubernetes cluster to integrate with Vault, that’s the reason why I’m not clear with the procedure. I’ve always been using the Service Account mounted at Vault’s Pods in order for Vault to authenticate with Kubernetes, and using a JWT bound to a Service Account created in an external cluster is something that seems to me achievable but never tried.

Moreover, there is the new behaviour of JWTs and Service Accounts in Kubernetes 1.24+. The fact that tokens now always expire adds even more complexity, but I do hold the view that creating the Service Account Secret manually and binding it to the Service Account via an annotation should be a solution.
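For reference, the manual long-lived token Secret described above would look roughly like this (the namespace and names are hypothetical); the token controller fills in the token after the Secret is created:

```shell
# Kubernetes 1.24+ no longer auto-creates a token Secret for each
# ServiceAccount; create one explicitly, bound via annotation.
kubectl apply -n vault-integration -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: vault-sa-token
  annotations:
    kubernetes.io/service-account.name: vault-sa
type: kubernetes.io/service-account-token
EOF

# Once the controller has populated it, extract the non-expiring
# token for use in the secrets engine configuration.
kubectl get secret vault-sa-token -n vault-integration \
  -o jsonpath='{.data.token}' | base64 -d > token.jwt
```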