Kubernetes vault-agent-init sidecar error "context deadline exceeded"

Hi all, I was testing out the vault-agent-injector and following one of the guides until I got stuck at this particular stage: Injecting Secrets into Kubernetes Pods via Vault Agent Containers | Vault - HashiCorp Learn

The issue I am facing is that the vault-agent-init sidecar container gets injected, but it never reaches a “ready” state. From the vault-agent-init logs, I can see it is having difficulty communicating with the Vault server.

2021-05-20T10:33:21.828Z [INFO]  auth.handler: authenticating
2021-05-20T10:34:21.829Z [ERROR] auth.handler: error authenticating: error="context deadline exceeded" backoff=4m36.53s

My k8s environment (default namespace):

root@kubemaster1:/home/vagrant# kubectl get po
NAME                                    READY   STATUS     RESTARTS   AGE
consul-consul-fqf4t                     1/1     Running    0          26h
consul-consul-kz7t7                     1/1     Running    0          26h
consul-consul-server-0                  1/1     Running    0          7h33m
consul-consul-server-1                  1/1     Running    0          26h
node-app-5bbfcff-vf2p6                  0/2     Init:0/1   0          19m     <------- faulty pods
node-app-5bbfcff-xkdt2                  0/2     Init:0/1   0          19m
vault-0                                 1/1     Running    0          82m
vault-1                                 1/1     Running    0          82m
vault-agent-injector-586c568bcb-cbkqd   1/1     Running    0          82m

What I have verified so far:

  • relevant Vault policy created as per the guide
  • relevant serviceaccount created and used
  • relevant Vault role created and used (roughly the commands sketched below)
  • shelled into the vault-agent-init sidecar and verified there is no connectivity issue between it and Vault or the vault-agent-injector
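
For completeness, these are roughly the policy and role commands I ran inside vault-0. The names (internal-app, internal/data/database/config) are the ones from the guide; swap in yours if they differ:

# Policy granting read on the example secret path from the guide
vault policy write internal-app - <<EOF
path "internal/data/database/config" {
  capabilities = ["read"]
}
EOF

# Role binding the Kubernetes service account to that policy
vault write auth/kubernetes/role/internal-app \
    bound_service_account_names=internal-app \
    bound_service_account_namespaces=default \
    policies=internal-app \
    ttl=24h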

What else could I be missing? Where can I look further into this? I will definitely appreciate any help on this.

I am having the EXACT same problem. I’m working through the same tutorial and get stuck at the same spot, although I’m not using minikube but rather a fresh 5-node (3x master, 2x worker) k3s cluster.

$ kubectl get pods
NAME                                   READY   STATUS     RESTARTS   AGE
orgchart-7457f8489d-w5xwc              1/1     Running    0          4m55s
orgchart-798cbc6c76-xxc8t              0/2     Init:0/1   0          3m59s
vault-0                                1/1     Running    0          8m47s
vault-agent-injector-6f87dd499-qswnt   1/1     Running    0          8m47s

The log output from the vault-agent-init container also shows the error below:

==> Vault agent started! Log data will stream in below:
==> Vault agent configuration:
Cgo: disabled
Log Level: info
Version: Vault v1.7.0
Version Sha: 4e222b85c40a810b74400ee3c54449479e32bb9f
2021-05-20T14:02:09.083Z [INFO] sink.file: creating file sink
2021-05-20T14:02:09.083Z [INFO] sink.file: file sink configured: path=/home/vault/.vault-token mode=-rw-r-----
2021-05-20T14:02:09.166Z [INFO] template.server: starting template server
2021-05-20T14:02:09.166Z [INFO] auth.handler: starting auth handler
[INFO] (runner) creating new runner (dry: false, once: false)
2021-05-20T14:02:09.166Z [INFO] sink.server: starting sink server
2021-05-20T14:02:09.166Z [INFO] auth.handler: authenticating
[INFO] (runner) creating watcher
2021-05-20T14:03:09.167Z [ERROR] auth.handler: error authenticating: error="context deadline exceeded" backoff=1s
2021-05-20T14:03:10.168Z [INFO] auth.handler: authenticating
2021-05-20T14:04:10.169Z [ERROR] auth.handler: error authenticating: error="context deadline exceeded" backoff=1.56s

I managed to get this working. I found the following command works when writing auth/kubernetes/config:

vault write auth/kubernetes/config \
  issuer="https://kubernetes.default.svc.cluster.local" \
  token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
  kubernetes_host="$KUBE_HOST" \
  kubernetes_ca_cert="$KUBE_CA_CERT"

Note the extra issuer="https://kubernetes.default.svc.cluster.local" line. I cannot for the life of me find where I read that is needed though!
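
For anyone else hunting for the right value: the issuer a cluster uses can be read from its OIDC discovery endpoint. A rough sketch (assumes kubectl proxy access and jq, and that issuer discovery is enabled, which it is by default on recent versions):

# Proxy the API server locally, then read the issuer from the OIDC discovery document
kubectl proxy --port=8001 &
curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer
kill %1   # stop the proxy again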


I also was having the exact same problem!

The problem was that I had two instances of Vault in my cluster (not minikube). Each Vault injector uses a mutating webhook, which is a cluster-level resource, so the webhook for the first instance was interfering with the webhook of my new instance.

I found the problem by adding the following annotation and seeing in the logs that the Vault Address was for the wrong namespace.

vault.hashicorp.com/log-level: "debug"

I temporarily resolved the issue by deleting the first webhook.
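
In case anyone wants to check for the same clash: the webhook configurations are cluster-scoped, so you can list them and see which namespace's injector service each one points at. Sketch below; vault-agent-injector-cfg is the default name from the Helm chart with release name vault, yours may differ:

# List the mutating webhooks and see which injector service each one calls
kubectl get mutatingwebhookconfiguration
kubectl get mutatingwebhookconfiguration vault-agent-injector-cfg \
  -o jsonpath='{.webhooks[0].clientConfig.service.namespace}{"/"}{.webhooks[0].clientConfig.service.name}{"\n"}'

# Delete the stale one if it points at the wrong namespace (what I did as a temporary fix)
kubectl delete mutatingwebhookconfiguration vault-agent-injector-cfg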


I have the same problem and none of the previous answers worked for me. What else can I do?


@MushiTheMoshi I’m facing a similar issue. Did you find any solution?

I had faced similar issues, but in my case it was Kubernetes version 1.21. I had to either downgrade to 1.18 or follow the updated token structure.
More Info: External vault init container stuck at Init:0/1 with context deadline exceeded error
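
For context on the 1.21 change: the token mounted into pods is now a projected, expiring token whose iss claim no longer matches the old default. A rough way to see what yours contains, run from inside any pod (the payload is base64url, so base64 may grumble about padding; that can be ignored):

# Decode the payload segment of the mounted service account token and look at "iss" and "exp"
cat /var/run/secrets/kubernetes.io/serviceaccount/token \
  | cut -d '.' -f2 | tr '_-' '/+' | base64 -d 2>/dev/null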

I am getting this error when I try this on the vault-0 container.

Error writing data to auth/kubernetes/config: Error making API request.

URL: PUT http://127.0.0.1:8200/v1/auth/kubernetes/config
Code: 400. Errors:

* missing client token

@ukreddy-erwin Are you passing the JWT token properly? It seems the JWT is missing!
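
For reference, this is roughly how those values are usually set inside the vault-0 pod. Note that the CLI also needs a Vault token of its own (e.g. via vault login with the root token, as in the guide), otherwise the write fails with "missing client token":

# Inside vault-0: log in first, then the reviewer JWT and CA cert come from the mounted service account
vault login
TOKEN_REVIEW_JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
KUBE_CA_CERT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt)
KUBE_HOST="https://${KUBERNETES_PORT_443_TCP_ADDR}:443"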

Hi Prashant, I have done it exactly as in the getting started guide, but I get the same issue. I tried on Azure Kubernetes and also on kind. You could try it on kind and check.

This still seems to be the problem even with the “issuer” specified. I’m following the guide to the letter and am on GKE 1.19.


Yes, in the Vault UI it shows as specified: [image]

The same problem was fixed for me after applying:

vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    disable_iss_validation=true \
    issuer="https://kubernetes.default.svc.cluster.local"
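
A quick way to confirm the config actually works is to try the login by hand with a pod's service account token. Sketch below; internal-app is the role name from the guide, so swap in yours, and run it from a pod that uses the bound service account so the JWT matches the role:

# Manually exercise the Kubernetes auth login that the agent performs
vault write auth/kubernetes/login \
    role=internal-app \
    jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"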

Disable iptables on the master and worker nodes:

iptables --flush

This worked for me. The post above with a similar update to auth/kubernetes/config did not; now my devwebapp pods are Running 2/2. Thanks.

This works, but I guess we do not need issuer: only kubernetes_host, token_reviewer_jwt, kubernetes_ca_cert, and disable_iss_validation need to be provided.