We use HashiCorp Vault to store our secrets, and our application runs in Kubernetes. The authentication method is AppRole: the RoleID and SecretID are stored in a Kubernetes Secret, which the Vault Agent then uses to authenticate with the Vault server and obtain a token.
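For context, the exchange the agent performs at startup is a plain AppRole login, which can be reproduced by hand with the Vault CLI. This is only a sketch: the server address is a placeholder, and the file paths mirror the auth-config-*-file-path annotations in the manifest below.

```shell
# Placeholder address; the agent takes the real one from the
# vault.hashicorp.com/agent-vault-addr annotation.
export VAULT_ADDR='https://vault.example.com:8200'

# AppRole login: trade role_id + secret_id for a client token.
# The file paths match what the injector mounts from the extra secret.
vault write auth/approle/login \
  role_id="$(cat /vault/custom/role-id)" \
  secret_id="$(cat /vault/custom/secret-id)"
```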
Below are the annotations we use in our K8s Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-pod
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-vault-addr: 'URL-Of-Vault-Server'
        vault.hashicorp.com/auth-type: 'approle'
        vault.hashicorp.com/auth-path: 'auth/approle'
        vault.hashicorp.com/auth-config-role-id-file-path: '/vault/custom/role-id'
        vault.hashicorp.com/auth-config-secret-id-file-path: '/vault/custom/secret-id'
        vault.hashicorp.com/agent-extra-secret: 'my-approle'
        vault.hashicorp.com/role: 'myrole'
        vault.hashicorp.com/auth-config-remove_secret_id_file_after_reading: 'false'
        vault.hashicorp.com/agent-inject-secret-config.env: 'kv/mysecrets/secrets'
        vault.hashicorp.com/agent-inject-template-config.env: |
          {{ with secret "kv/mysecrets/secrets/" -}}
          export JAVA_TOOL_OPTIONS="-Dzookeeper.ssl.keyStore.password={{ .Data.data.SECRET }}"
          {{- end }}
    spec:
      containers:
        - name: mycontainer
          image: someimage
          command: [ "/bin/bash" ]
          args: [ '-c', 'source /vault/secrets/config.env && some_command_here' ]
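The args line only works because the sidecar has already rendered the template to a file on the shared volume. Here is a minimal local simulation of that hand-off (the secret value and the /tmp path are made up for illustration; in the pod the agent writes to /vault/secrets/config.env):

```shell
# Simulate the file the Vault Agent sidecar renders from the template
# above (the value is a placeholder; the real path in the pod is
# /vault/secrets/config.env on the shared volume).
mkdir -p /tmp/vault-demo/secrets
cat > /tmp/vault-demo/secrets/config.env <<'EOF'
export JAVA_TOOL_OPTIONS="-Dzookeeper.ssl.keyStore.password=s3cr3t"
EOF

# The container entrypoint sources the file before starting the app,
# so the JVM inherits the option via the environment.
source /tmp/vault-demo/secrets/config.env
echo "$JAVA_TOOL_OPTIONS"   # prints: -Dzookeeper.ssl.keyStore.password=s3cr3t
```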
The Vault server runs in another K8s cluster, and I've installed the Vault Agent Injector in our cluster. This setup used to work, but our cluster has autoscaling enabled. When a new node spins up and the application pod is rescheduled onto it, the sidecar container no longer seems to be injected automatically: the pod ends up showing 1/1 containers, which means the sidecar was not added when the pod was rescheduled onto the newly provisioned node.
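Since the sidecar is added by the injector's mutating admission webhook, a few checks should narrow down whether the webhook fired at all for the rescheduled pod. The resource names and namespace below are assumptions based on a default Helm install of the injector, not taken from my setup:

```shell
# Is the mutating webhook still registered? Note its failurePolicy:
# the chart default is Ignore, so a pod created while the injector is
# unreachable (e.g. during a node scale-up) is admitted WITHOUT the
# sidecar instead of being rejected.
kubectl get mutatingwebhookconfiguration vault-agent-injector-cfg -o yaml

# Is the injector itself healthy, and did it log the mutation request?
kubectl -n vault get pods -l app.kubernetes.io/name=vault-agent-injector
kubectl -n vault logs deploy/vault-agent-injector

# A successfully mutated pod carries an agent-inject-status annotation;
# a 1/1 pod without it was admitted without the webhook mutating it.
# (my-pod-xxxxx is a placeholder for the rescheduled pod's name.)
kubectl get pod my-pod-xxxxx -o yaml | grep agent-inject-status
```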
Any idea what is wrong here?