I am injecting secrets from Vault running in a Kubernetes cluster, and I am the cluster-admin.
Those secrets are highly confidential customer data (e.g. API keys/secrets for other platforms), to which the cluster-admin must have no access whatsoever.
After injection, those secrets are available in plain text, either as files (e.g. in a mounted volume) or as environment variables. Both are visible to the cluster-admin if he simply execs into the container. I need only the app to be able to read them, and nobody else.

One could store the secrets encrypted in Vault, but the app would need a key for decryption, and the cluster-admin would get access to that key as well; in the end it seems to be a chicken-and-egg problem. In addition, the cluster-admin can impersonate the service account to authenticate against Vault. I also thought of adding a securityContext to the container, but that doesn't prevent the cluster-admin from changing the YAML and exec'ing into the pod anyway.
Does anybody have a solution for this (likely common) problem? Can certain Vault-related tools assist here?
@muzzy So you probably mean something like the following workflow:
1. Encrypt the customer secrets and store the encrypted secrets in Vault.
2. Inject them with Kubernetes auth and the Vault init container into a file in a mounted volume; those secrets are still encrypted, so the cluster-admin can't do anything with them.
3. Now we need to decrypt those secrets: store the decryption key in Vault too, and use a further init or sidecar container with an app that authenticates against Vault using a token with a use limit of one, checking the creation path in the Vault reply to verify that the wrapped token (cubbyhole) was not intercepted along the way by a man-in-the-middle (e.g. the cluster-admin). The decryption key, once retrieved, is held in memory in the init/sidecar container and can be used to decrypt the injected encrypted secrets in memory as well (see the sketch below).
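A minimal sketch of that decrypt step in Python, assuming a Kubernetes auth role named customer-app, a KV v2 secret at secret/data/customer-app/dek holding a Fernet key, and the encrypted payload injected at /vault/secrets/customer-keys.enc - all of these names are placeholders, not anything Vault mandates:

```python
import requests
from cryptography.fernet import Fernet

VAULT_ADDR = "https://vault.example:8200"  # placeholder address
SA_JWT_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def vault_login() -> str:
    """Authenticate with the Kubernetes auth method and return a client token."""
    jwt = open(SA_JWT_PATH).read()
    resp = requests.post(
        f"{VAULT_ADDR}/v1/auth/kubernetes/login",
        json={"role": "customer-app", "jwt": jwt},
    )
    resp.raise_for_status()
    return resp.json()["auth"]["client_token"]

def fetch_dek(token: str) -> bytes:
    """Read the decryption key from KV v2; it never touches disk."""
    resp = requests.get(
        f"{VAULT_ADDR}/v1/secret/data/customer-app/dek",
        headers={"X-Vault-Token": token},
    )
    resp.raise_for_status()
    return resp.json()["data"]["data"]["key"].encode()

def decrypt_secrets() -> bytes:
    """Decrypt the injected ciphertext entirely in memory."""
    dek = fetch_dek(vault_login())
    ciphertext = open("/vault/secrets/customer-keys.enc", "rb").read()
    return Fernet(dek).decrypt(ciphertext)  # plaintext stays in process memory
```

The point of the sketch is that the plaintext only ever exists in process memory; nothing decrypted is written back to the volume or exported as an environment variable.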
There should be a simpler solution. But I think even this does not prevent the cluster-admin from exec'ing into the containers, dumping the memory, and retrieving the decrypted secrets from there.
Also, the cluster-admin can impersonate any running service account. Is it possible in Vault to identify that a request was made by an impersonated service account, with an option to deny that request?
Why do you need them locally? Keep secrets in Vault until you need to use them.
If an intruder intercepts and uses the wrapping token once, it will expire, and the next attempt to use it will cause an "access denied" error; the intruder can then be identified using the audit logs.
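A small sketch of what that looks like against the HTTP API (Python with requests; the Vault address is a placeholder). Unwrapping consumes the token, so any replay fails:

```python
import requests

VAULT_ADDR = "https://vault.example:8200"  # placeholder address

def unwrap(wrapping_token: str) -> dict:
    """Exchange a wrapping token for the wrapped payload; works exactly once."""
    resp = requests.post(
        f"{VAULT_ADDR}/v1/sys/wrapping/unwrap",
        headers={"X-Vault-Token": wrapping_token},
    )
    # A second unwrap of the same token returns HTTP 400, and both the
    # successful and the failed attempt show up in the audit log.
    resp.raise_for_status()
    return resp.json()["data"]
```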
If possible, you should also try to use short-lived/dynamic credentials. Just generate short-lived API keys, e.g. with a 2-minute TTL, and/or revoke them immediately after use.
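As a hedged illustration of the revoke-after-use pattern, using the database secrets engine as a stand-in (the mount name "database" and role "customer-ro" depend entirely on your setup):

```python
import requests

VAULT_ADDR = "https://vault.example:8200"  # placeholder address

def with_short_lived_creds(client_token: str) -> None:
    headers = {"X-Vault-Token": client_token}
    # Vault generates fresh credentials with the role's (short) TTL.
    resp = requests.get(
        f"{VAULT_ADDR}/v1/database/creds/customer-ro", headers=headers
    )
    resp.raise_for_status()
    body = resp.json()
    creds, lease_id = body["data"], body["lease_id"]
    try:
        pass  # ... use creds["username"] / creds["password"] here ...
    finally:
        # Revoke immediately after use instead of waiting for the TTL.
        requests.put(
            f"{VAULT_ADDR}/v1/sys/leases/revoke",
            headers=headers,
            json={"lease_id": lease_id},
        ).raise_for_status()
```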
These are just ideas; the implementation depends on your orchestrators and pipeline.
Thank you.
Those are customer API keys for external services which the customers have stored within Vault. The app running in a pod/container in Kubernetes requires those API keys in plain text to connect to the APIs of those external services after the container boots up. So I need those credentials locally, at least in the memory of the running container application.

Injecting those secrets at container startup is nice because one does not have to modify the application: the API keys are just there where the app expects them (file or env var). What bothers me in this regard is that the Kubernetes cluster-admin can gain full access to those credentials by exec'ing into the container, which is a security problem.

So it seems I would rather need to modify the app so that it authenticates with Vault and gets a wrapped token with one-time use. This would allow it to retrieve the customer API keys and keep them in memory rather than in a file or env var. That enhances security for sure, but it does not prevent the cluster-admin from intercepting the request from the app to Vault as a man-in-the-middle: he could take the wrapped token the app should receive, use it once, get the API keys, generate a new wrapped token, and send that to the app. The app should then check on which path this new wrapped token was generated and identify that the path is likely not valid. That way one could at least generate alerts - but by then the API keys are already in plain text in the hands of the cluster-admin.
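For what it's worth, that creation-path check can be done with the documented sys/wrapping/lookup endpoint before unwrapping. A minimal sketch, where "auth/kubernetes/login" stands in for whichever path your wrapped responses are actually created on:

```python
import requests

VAULT_ADDR = "https://vault.example:8200"  # placeholder address
EXPECTED_CREATION_PATH = "auth/kubernetes/login"  # assumption for illustration

def wrapping_token_looks_genuine(client_token: str, wrapping_token: str) -> bool:
    """Look up a wrapping token's metadata before unwrapping it.

    A man-in-the-middle who unwraps the original token and re-wraps the
    payload ends up with a token whose creation_path is sys/wrapping/wrap
    rather than the expected login path, so a mismatch signals interception.
    """
    resp = requests.post(
        f"{VAULT_ADDR}/v1/sys/wrapping/lookup",
        headers={"X-Vault-Token": client_token},
        json={"token": wrapping_token},
    )
    resp.raise_for_status()
    return resp.json()["data"]["creation_path"] == EXPECTED_CREATION_PATH
```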
Another intrusion vector would be the cluster-admin dumping the memory of the container while it retrieves the API keys from Vault - but this is a Kubernetes RBAC problem that Vault likely cannot solve.