Vault-helm and Kubernetes environment variables

Is it possible to inject a secret into an environment variable in the container?

I don’t see an annotation that supports this at https://www.vaultproject.io/docs/platform/k8s/injector/index.html

Thanks!

I also created a question about this.

If someone has already found a solution, please share it.

I think the documentation you’re looking for is the following one:

I tried that example but it doesn't work. After I exec into a pod I'm able to source the file and print the env vars, but it doesn't seem to work on startup.

Any luck from anybody who managed to get this working?
I am facing the same issue and I'm completely stuck because of it. I would really love to avoid rewriting the application to read its configuration from files, since templating the whole catalina file instead of using env vars would make the templates really unreadable.

I was able to get this to work. I believe the issue with using this example and then trying to exec into the pod is that you're spawning a new shell (which hasn't sourced the file), since kubectl exec ultimately bypasses the entrypoint. Basically, you need to source the secret file ahead of your entrypoint for the automation to succeed. From the example (shortened for convenience):

spec:
  template:
    spec:
      containers:
        - args: ["sh", "-c", "source /vault/secrets/config && <entrypoint script>"]

If you want to exec into the pod to confirm it works, you can use this (the example is an Alpine container, so it uses the ash shell):

kubectl exec -it <pod_name> -c <container_name> -- ash -c "source /vault/secrets/<secret_name> && ash"

That'll get you into your pre-existing pod with the k8s injector agent's tmpfs mount point, /vault/secrets, and your current shell will have the environment variables set. You can confirm with env | grep <var_name>.

Note: the variable name you used in export <var_name> inside the vault secret does not have to match <secret_name> in /vault/secrets/<secret_name>.
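
For illustration, a rendered /vault/secrets/<secret_name> might look like this (variable names and values are made up); the exported names are independent of the file name:

export DB_USERNAME=db-readonly-user
export DB_PASSWORD=db-secret-password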

Alternatively, you could edit /etc/profile (Bourne shell) or whatever default file your shell sources on login, which would require building a new image.
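
A minimal sketch of that approach, run during the image build (the secret path is a placeholder):

echo '. /vault/secrets/<secret_name>' >> /etc/profile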

With all that said, what I would really prefer is a way to let K8s inject this secret as an environment variable (so that it only exists in memory), instead of letting the vault injector agent mount it to disk. If anyone has an idea on how to do that, I'd love to hear it.
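
For reference, this is the kind of injection I mean, using a native Kubernetes Secret (hypothetical name db-creds); as far as I can tell the Vault injector offers no equivalent annotation:

env:
  - name: DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: db-creds
        key: username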

apiVersion: apps/v1
kind: Deployment
metadata:
  name: basic-secret
  labels:
    app: basic-secret
spec:
  selector:
    matchLabels:
      app: basic-secret
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/tls-skip-verify: "true"
        vault.hashicorp.com/agent-inject-secret-helloworld: "secret/basic-secret/helloworld"
        vault.hashicorp.com/agent-inject-template-helloworld: |
          {{- with secret "secret/basic-secret/helloworld" -}}
          {
             export DB_USERNAME={{ .Data.username }}
             export DB_PASSWORD={{ .Data.password }}
          }
          {{- end }}
        vault.hashicorp.com/role: "basic-secret-role"
        vault.hashicorp.com/agent-inject-command-helloworld: "chmod +x /vault/secrets/helloworld"
      labels:
        app: basic-secret
    spec:
      serviceAccountName: basic-secret
      containers:
        - name: app
          image: jweissig/app:0.0.1
          args: ["sh", "-c", "source /vault/secrets/helloworld && printenv >> /app/secret.txt && /app/web"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: basic-secret
  labels:
    app: basic-secret
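
Once that's applied, a quick way to confirm the agent actually rendered the file (container name and path taken from the manifest above):

kubectl exec -it <pod_name> -c app -- cat /vault/secrets/helloworld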

I did try to run it inside of a single shell and verify what you wrote, but no luck. The file is never created and I don't see any errors anywhere. I'll have to look for another workaround then.

Basically, I think that running the above-mentioned command is the same as performing:
kubectl exec -it <pod_name> -c <container_name> -- /bin/ash
source /vault/secrets/<secret_name>
printenv

In your example, did it work if you added a step after the source, as I mentioned in the comment above?
Meaning adding:
&& printenv >> <someFile>
and then just running
kubectl exec -it <pod_name> -c <container_name> -- ash -c "cat /" to see if that would really be added?
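
For what it's worth, when the rendered file never appears at all, the injector's own containers are usually where errors show up (assuming the default container names the injector uses):

kubectl logs <pod_name> -c vault-agent-init
kubectl logs <pod_name> -c vault-agent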