Vault injector issue

Hi, we have an issue with Vault and would appreciate some help.

We are running Vault 1.6.0. Vault itself seems to be working:
  • The web interface is up.
  • Our playground k8s cluster connects to it and secrets are injected.
On the WORKING playground k8s cluster, the Vault injector is installed using Helm, as suggested on the Vault website.

Issue with production
  • The Vault injector had NOT been installed following best practice (no Helm), but it had nevertheless been working for months with no problem.
  • Without apparent reason, it stopped working (JWT authentication problems).
  • I stopped the Vault injector (scaled to 0) and scaled back to 1, but that failed.
  • I removed the current, non-Helm Vault injector and tried to reinstall using Helm. The Helm installation goes through, but the Vault injector refuses to come up (permission denied on /var/secrets/……/token).

There is probably not enough information here for people to make any educated guesses about what is wrong. You would need to add as much specifics as you can (actual log messages, error messages, exact commands that fail, etc.) to have a good chance of a helpful reply.

One thing I would say though: Vault 1.6 is substantially past end of life these days. It’s definitely time to start looking at upgrading, whether related to this issue or not.

This is the log on the k8s side when it tries to connect to Vault:

Error loading in-cluster K8S config: open /var/run/secrets/ permission denied

This is the log on Vault itself when k8s tries to connect:

auth.kubernetes.auth_kubernetes_22127372: login unauthorized due to: lookup failed: service account unauthorized; this could mean it has been deleted or recreated with a new token

Then I tried to recreate the access for the k8s cluster with instructions like

vault write auth/kubernetes/config token_reviewer_jwt="$SA_JWT_TOKEN" kubernetes_host="$K8S_HOST" kubernetes_ca_cert="$SA_CA_CRT" issuer="$ISSUER"

but I get

no handler for route 'auth/kubernetes/config'

OK, there are at least two separate problems here:

Taking the "permission denied" error first: that token file is where the necessary credentials live, so if it can’t be read, this will prevent authentication working even if everything else is fine.

So, exec into the pod, work out what the file permissions actually are, and which user and groups the process trying to access it runs as.
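As a local sketch of what that check is looking for (the real commands would run via kubectl exec inside the injector pod; the path here is a stand-in):

```shell
# Stand-in for the in-pod check: create a token file readable only by its
# owner, then compare its owner and mode against the current process uid.
token=/tmp/demo-token
echo "fake-jwt" > "$token"
chmod 0600 "$token"   # owner read/write only

ls -ln "$token"       # numeric uid/gid of the file's owner, plus the mode
id -u                 # uid the current process runs as

# If the file's owner uid differs from the process uid and the mode grants
# no group/other read bit, open() fails with EACCES ("permission denied"),
# which is exactly the injector's symptom.
```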

The "no handler for route" error indicates that you have no auth method mounted at 'kubernetes' in your Vault server at all.

Use vault auth list to display your auth methods.
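A small sketch of reading that listing mechanically (the JSON here is a hand-made sample, not real server output; on a live server you would pipe `vault auth list -format=json` instead of reading a file):

```shell
# Sample of the shape that `vault auth list -format=json` returns,
# hand-written for illustration:
cat <<'EOF' > auth_list.json
{
  "token/": {"type": "token"},
  "production-europe-west3/": {"type": "kubernetes"}
}
EOF

# Print every mount whose type is "kubernetes". Note it is the mount PATH
# (the key), not the type, that goes into auth/<path>/config.
jq -r 'to_entries[] | select(.value.type == "kubernetes") | .key' auth_list.json
```

If nothing comes back, there is no kubernetes auth mount, which matches the "no handler for route" error.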

Many thanks for your reply, I really appreciate it

I am trying to do this from scratch now.
I am doing this in a different namespace called vault-next.


This is the detailed list of operations, which worked perfectly on another cluster:

  • Create a namespace called vault-next
  • Create a service account called vault-auth (in namespace vault-next)
  • Create a clusterrolebinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault-auth
  namespace: vault-next

Then I get these values, which I will need to create the access:

VAULT_SA_NAME=$(kubectl get sa vault-auth -n vault-next --output jsonpath="{.secrets[*]['name']}")

SA_JWT_TOKEN=$(kubectl get secret -n vault-next $VAULT_SA_NAME --output 'go-template={{ .data.token }}' | base64 --decode)

SA_CA_CRT=$(kubectl config view --raw --minify --flatten --output 'jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)
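For what it’s worth, that extraction step can be checked in isolation; the kubeconfig below is a hand-made sample (the CA data is just base64 of a placeholder string), with jq standing in for kubectl’s jsonpath:

```shell
# Sample of the JSON shape `kubectl config view --raw --minify --flatten -o json`
# produces; "ZGVtby1jYQ==" is base64 of the placeholder "demo-ca".
cat <<'EOF' > kubeconfig.json
{"clusters": [{"name": "demo",
               "cluster": {"certificate-authority-data": "ZGVtby1jYQ=="}}]}
EOF

# Pull the CA bundle out and decode it, mirroring the jsonpath + base64 step:
jq -r '.clusters[0].cluster["certificate-authority-data"]' kubeconfig.json | base64 --decode
```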

Then I bring up kubectl-proxy in order to get ISSUER as

ISSUER=$(curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)
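The issuer value comes from the cluster’s OIDC discovery document served by the API server; a hand-made sample document stands in here so the jq step can be shown end to end (the issuer URL shown is a common default, not necessarily yours):

```shell
# Sample OIDC discovery document of the shape the API server returns at
# /.well-known/openid-configuration (values invented for illustration):
cat <<'EOF' > discovery.json
{"issuer": "https://kubernetes.default.svc.cluster.local",
 "jwks_uri": "https://10.0.0.1:443/openid/v1/jwks"}
EOF

# The same jq extraction as in the ISSUER= command above:
jq -r .issuer discovery.json
```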

At this point I have all the values I need to create the access

vault auth enable -path="production-europe-west3" kubernetes

vault write auth/production-europe-west3/config token_reviewer_jwt="$SA_JWT_TOKEN" kubernetes_host="$K8S_HOST" kubernetes_ca_cert="$SA_CA_CRT" issuer="$ISSUER"

As a result, I have the access in the screenshot

Also, I create a test pod:

apiVersion: v1
kind: Pod
metadata:
  name: testdevel
  namespace: vault-next
  labels:
    app: testdevel
spec:
  serviceAccountName: vault-auth
  containers:
    - name: testdevel
      env:
        - name: VAULT_ADDR
          value: ""
        - name: VAULT_TOKEN
          value: root

I log in to the newly created pod, and this works:

curl -k --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "testdevel", "namespace": "vault-next"}' $VAULT_ADDR/v1/auth/production-europe-west3/login

It returns all the values of the lease, so I don’t see a problem so far.
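For reference, those lease values come back as JSON under .auth, and the client token can be pulled out the usual way; the response below is invented for illustration, not a real server reply:

```shell
# Hand-made sample of the shape a successful kubernetes-auth login returns:
cat <<'EOF' > login.json
{"auth": {"client_token": "s.EXAMPLE", "lease_duration": 3600,
          "renewable": true,
          "metadata": {"role": "testdevel",
                       "service_account_name": "vault-auth"}}}
EOF

# Extract the client token from the lease:
jq -r '.auth.client_token' login.json
```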

Also, on the vault everything seems fine:

core: enabled credential backend: path=production-europe-west3/ type=kubernetes

THEN I install the vault injector using Helm, and that’s where the trouble starts

helm install vault hashicorp/vault --namespace vault-next --values ./values.yaml

and in my Helm values I have (among other settings)

authPath: "auth/production-europe-west3"
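For context, the injector-relevant part of such a values.yaml would look roughly like this; the key names follow the hashicorp/vault chart, but verify them against your chart version’s values reference, and the address is a placeholder:

```yaml
# Sketch of a values.yaml for an injector-only install pointing at an
# external Vault (not a definitive configuration):
injector:
  enabled: true
  externalVaultAddr: "https://vault.example.local"   # placeholder address
  authPath: "auth/production-europe-west3"
server:
  enabled: false   # Vault itself runs elsewhere; only deploy the injector
```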

Helm installs vault, but the deployment fails to start with the “permission denied” error

Error loading in-cluster K8S config: open /var/run/secrets/ permission denied

I cannot look into the pod, because

kubectl exec -it -n vault-next vault-agent-injector-77d7db77db-4svjh -- /bin/sh


error: unable to upgrade connection: container not found ("sidecar-injector")

If I am understanding correctly, you have freshly deployed an injector, but it immediately errors and exits with the "permission denied" message, which comes from very early on in the injector code.

The error states that the injector didn’t have permission to read the service account token under /var/run/secrets/, which is injected by Kubernetes into all pods (that don’t explicitly turn this off), and which, IIUC, is set by Kubernetes to be owned by the same user that processes in the pod run as.
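The mechanism being described is Kubernetes’ automatic service account token mount; the illustrative spec below shows the field that controls it (pod name and image are made up):

```yaml
# Illustrative pod spec: automountServiceAccountToken controls whether
# Kubernetes projects the service account token under /var/run/secrets/...
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  automountServiceAccountToken: false   # opt out of the automatic mount
  containers:
    - name: main
      image: busybox:1.36
```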

You are unable to exec into the pod because it exits once its main process errors and dies.

I am quite confused by this, as I can’t think how you’d be able to launch the injector so that it didn’t have permission to read its own injected /var/run/secrets/ token.

Indeed, I even deployed the latest release of the Helm chart, with a minimal Helm values file:

injector:
  externalVaultAddr: "https://vault.example.local"
  logLevel: "trace"

server:
  enabled: false

into a fresh minikube cluster, and confirmed it all just works.

About the only possibilities I can think of are:

  • Are you using the current release of the Helm chart?
  • Are you doing anything unusual in your customisation of values of the chart?
  • Are there unusual restrictive additions on your Kubernetes cluster which seek to constrain file permissions inside pods?

Beyond that, I’m out of ideas.