Vault-helm and kubernetes environment variables

Is it possible to inject a secret into an environment variable in the container?

I don’t see an annotation that supports this at https://www.vaultproject.io/docs/platform/k8s/injector/index.html

Thanks!


I also created a question on this.

If someone has already found a solution, please share.

I think the documentation you’re looking for is the following one:

I tried that example but it doesn’t work. After I exec into a pod I’m able to source the file and print the env var, but it doesn’t seem to work on startup.


Has anybody had any luck getting this working?
I am facing the same issue and am completely stuck because of it. I would really love to avoid rewriting the application to get its configuration from files, as writing the whole catalina file instead of using the env var would make the templates really unreadable.


I was able to get this to work. I believe the issue with using this example, then trying to exec into the pod, is that you’re spawning a new shell (which doesn’t yet source this file) - as you’re ultimately bypassing entrypoint with kubectl exec. Basically you need to source the secret file ahead of your entrypoint to make automation successful. From the example (shortened for convenience):

spec:
  template:
    spec:
      containers:
        - args: ["sh", "-c", "source /vault/secrets/config && <entrypoint script>"]

If you want to exec into the pod to confirm it works, you can use this (example is an alpine container, so using the ash shell):

kubectl exec -it <pod_name> -c <container_name> -- ash -c "source /vault/secrets/<secret_name> && ash"

That’ll get you into your pre-existing pod with the k8s injector agent tmpfs mount point: /vault/secrets, and your current shell will have this environment variable set. You can confirm with env | grep <var_name>.

Note: whatever you used for export <var_name> inside the vault secret does not have to match /vault/secrets/<secret_name>.

Alternatively, you could edit /etc/profile (bourne shell) or whatever default file your shell sources on new login, which would require building a new image.
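A minimal sketch of that image-build approach, assuming an Alpine-style base image and the default /vault/secrets/config render path (both are illustrative assumptions, not from the thread):

```dockerfile
# Hypothetical sketch -- base image tag and secret path are assumptions.
FROM alpine:3.18

# Append a guarded source line to /etc/profile so that any *login*
# shell sources the injected secret file once the sidecar has rendered it.
# The [ -f ... ] guard keeps shells working before the file exists.
RUN echo '[ -f /vault/secrets/config ] && . /vault/secrets/config' >> /etc/profile
```

Note this only takes effect for login shells (e.g. `kubectl exec ... -- ash -l`); a plain `sh -c` entrypoint still needs to source the file explicitly.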

With all that said, what I would really prefer is to have a way to let K8s inject this secret as an environment variable (so that it only exists in memory), instead of letting the vault injector agent mount it to the disk. If anyone has an idea on how to do so, I’d love to hear it.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: basic-secret
  labels:
    app: basic-secret
spec:
  selector:
    matchLabels:
      app: basic-secret
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/tls-skip-verify: "true"
        vault.hashicorp.com/agent-inject-secret-helloworld: "secret/basic-secret/helloworld"
        vault.hashicorp.com/agent-inject-template-helloworld: |
          {{- with secret "secret/basic-secret/helloworld" -}}
          {
             export DB_USERNAME={{ .Data.username }}
             export DB_PASSWORD={{ .Data.password }}
          }
          {{- end }}
        vault.hashicorp.com/role: "basic-secret-role"
        vault.hashicorp.com/agent-inject-command-helloworld: "chmod +x /vault/secrets/helloworld"
      labels:
        app: basic-secret
    spec:
      serviceAccountName: basic-secret
      containers:
        - name: app
          image: jweissig/app:0.0.1
          args: ["sh", "-c", "source /vault/secrets/helloworld && printenv >> /app/secret.txt && /app/web"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: basic-secret
  labels:
    app: basic-secret

I did try to run it inside a single shell and verify what you wrote, but no luck. The file is not created and I don’t see any errors anywhere. I’ll have to look for another workaround then.

Basically, I think that running the above-mentioned command is the same as performing:
kubectl exec -it <pod_name> -c <container_name> -- /bin/ash
source /vault/secrets/<secret_name>
printenv

In your example, did it work if you added something after the source, as I mentioned in the above comment?
Meaning adding:
&& printenv >> <someFile>
and then just running
kubectl exec -it <pod_name> -c <container_name> -- ash -c "cat <someFile>"
to see if the variable was really added?

Try using this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orgchart
  labels:
    app: orgchart
  namespace: internal-app
spec:
  selector:
    matchLabels:
      app: orgchart
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "internal-app"
        vault.hashicorp.com/agent-inject-secret-config: "internal/data/database/config"
        vault.hashicorp.com/agent-inject-template-config: |
          {{ with secret "internal/data/database/config" -}}
          export DB_USERNAME="{{ .Data.data.username }}"
          export DB_PASSWORD="{{ .Data.data.password }}"
          {{- end }}
      labels:
        app: orgchart
    spec:
      serviceAccountName: default
      containers:
        - name: orgchart
          image: jweissig/app:0.0.1
          command: ["/bin/sh", "-c"]
          args: [". /vault/secrets/config && env >> /app/secret.txt && sleep 600"]

With other images you should replace the `&& env >> /app/ …` part with the entrypoint from the original image.

example, a simple dockerfile with nginx:alpine:

FROM nginx:1.12-alpine

COPY --from=build-deps /usr/src/app/build /usr/share/nginx/html

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

you have to do it like this:

command: ["/bin/sh", "-c"]
args: [". /vault/secrets/config && nginx -g 'daemon off;'"]

Works like charm!

Now I can finally move on to setting it up for all the other applications that depend on env variables.

I am new to Vault secrets. I am moving our Kubernetes secrets to Vault. I have configured it like below:
selector:
  matchLabels:
    app: app-name
replicas: 1
template:
  metadata:
    labels:
      app: app-name
    annotations:
      vault.hashicorp.com/agent-inject: 'true'
      vault.hashicorp.com/role: 'app-name'
      vault.hashicorp.com/agent-inject-secret-config: "secret/data/modules/app-name/config"
      vault.hashicorp.com/agent-inject-template-config: |
        {{ with secret "secret/data/modules/app-name/config" -}}
        export TEMP_KEY="{{ .Data.data.TEMP_KEY }}"
        {{- end }}
  spec:
    serviceAccountName: default
    containers:
      - name: orgchart
        image: jweissig/app:0.0.1
        command: ["/bin/sh", "-c"]
        args: [". /vault/secrets/config && env >> /app/secret.txt && sleep 600"]

I don’t know how to check the image name, like jweissig/app:0.0.1, and change it in the args accordingly. Can you please help me here?

I know this thread is focussed heavily on Vault Agent injector, but the only officially supported way to sync Vault secrets to environment variables in Kubernetes is using the Vault CSI provider. There’s an example of how to set that up here, and installation instructions here.
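To give a rough idea of what the CSI approach looks like, here is a minimal sketch of a SecretProviderClass that syncs a Vault secret into a native Kubernetes Secret; the role name, secret path, and key names below are all illustrative assumptions, not values from this thread:

```yaml
# Illustrative sketch only -- roleName, paths, and key names are assumptions.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-db-creds
spec:
  provider: vault
  parameters:
    roleName: "app"                  # Vault Kubernetes auth role (assumed)
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/db" # KV-v2 path (assumed)
        secretKey: "password"
  # secretObjects syncs the CSI-mounted secret into a K8s Secret,
  # which can then back an env var via secretKeyRef.
  secretObjects:
    - secretName: db-creds
      type: Opaque
      data:
        - objectName: "db-password"
          key: password
```

The pod then mounts the CSI volume and sets the env var with a normal `secretKeyRef` to `db-creds`/`password`; note the sync only happens for pods that actually mount the CSI volume.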

@tomhjp “officially supported way” doesn’t work; that’s why people are looking for workarounds here.

I’m having the same problem. I have a Java app that receives environment variables, and I’m having trouble adding the Vault variables to the environment variables.

When looking at the guide it suggests:

args:
     ['sh', '-c', 'source /vault/secrets/mysecret && <entrypoint_script>']

Which I’ve applied to my nodejs container:

args:
     ["sh", "-c", "source /vault/secrets/test && /opt/app-root/src/communities/entry.sh && node /opt/app-root/src/communities/main.js"]

The openshift pod spins up, and the secrets are mounted at /vault/secrets. However the environment variables are not created!

Is there anything I am missing? Thanks


Did you manage to solve this problem?