Please help me debug "claim iss is invalid"

I have HA vault/consul pods running in a local Kubernetes cluster on minikube. I get "claim iss is invalid" when exec-ing into another app pod and curling Vault's Kubernetes auth login endpoint with the JWT token. Not sure where to go from here.

deployment.yaml for my app code where I want to read the secret:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-pod
  template:
    metadata:
      labels:
        app: web-pod
    spec:
      containers:
      - name: web
        image: kahunacohen/hello-k8s
        imagePullPolicy: IfNotPresent
        env:
          - name: VAULT_ADDR
            value: 'http://vault:8200'
          - name: JWT_PATH
            value: '/var/run/secrets/kubernetes.io/serviceaccount/token'
          - name: SERVICE_PORT
            value: '8080'
        envFrom:
          - configMapRef:
              name: web-configmap
        ports:
        - containerPort: 3000
          protocol: TCP

I created a ClusterRoleBinding:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault-auth
  namespace: default

and I think I need this for the TokenReview API, whatever that is…

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: vault-auth
    namespace: default

All my pods are up and running. vault is initialized and unsealed.

  1. Create the service account vault-auth: $ kubectl create serviceaccount vault-auth
  2. Get the name of the service account's token secret: export VAULT_SA_NAME=$(kubectl get sa vault-auth --output jsonpath="{.secrets[*]['name']}")
  3. Get the SA_JWT_TOKEN: export SA_JWT_TOKEN=$(kubectl get secret $VAULT_SA_NAME --output 'go-template={{ .data.token }}' | base64 --decode)
  4. Get the cluster CA cert: export SA_CA_CRT=$(kubectl config view --raw --minify --flatten --output 'jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)
  5. Get the k8s host: export K8S_HOST=$(kubectl config view --raw --minify --flatten --output 'jsonpath={.clusters[].cluster.server}')
  6. Get the root token: root_token=$(cat cluster-keys.json | jq -r ".root_token")
  7. I then pass these all to a script that I run on the vault pod.
  8. The script logs in with the root token and does the following steps; after each of them it sleeps for 20 seconds to "ensure" the previous step completed.
  9. Enable secrets: $ vault secrets enable -path=secret kv-v2
  10. Set some dummy secrets: vault kv put secret/webapp/config username="static-user" password="static-password"
  11. Enable kubernetes auth: vault auth enable kubernetes
  12. Configure the auth: vault write auth/kubernetes/config token_reviewer_jwt="$SA_JWT_TOKEN" kubernetes_host="$K8S_HOST" kubernetes_ca_cert="$SA_CA_CERT"
  13. Create a policy for accessing the secret: vault policy write myapp-kv-ro - <<EOF path "secret/data/webapp/config" { capabilities = ["read"] } EOF
  14. Create a role with the policy to access the secret: vault write auth/kubernetes/role/webapp bound_service_account_names=vault-auth bound_service_account_namespaces=default policies=myapp-kv-ro ttl=24h

All the steps above are successful as far as I can see.
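To see which issuer a token actually carries, you can decode its payload segment locally. A minimal, self-contained sketch — the token here is a hypothetical stand-in assembled inline, not a real service account token; inside the pod you would read the token from $JWT_PATH instead:

```shell
# Assemble a hypothetical JWT-shaped token whose payload carries an "iss"
# claim (a real token would come from $JWT_PATH inside the pod).
HEADER=$(printf '{"alg":"RS256"}' | base64 | tr -d '=\n' | tr '/+' '_-')
PAYLOAD=$(printf '{"iss":"kubernetes/serviceaccount"}' | base64 | tr -d '=\n' | tr '/+' '_-')
JWT="${HEADER}.${PAYLOAD}.fake-signature"

# Take the second (payload) segment, undo the base64url alphabet, and
# re-pad to a multiple of 4 so base64 -d accepts it.
SEG=$(echo "$JWT" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="${SEG}="; done
CLAIMS=$(echo "$SEG" | base64 -d)
ISS=$(echo "$CLAIMS" | sed 's/.*"iss":"\([^"]*\)".*/\1/')
echo "$ISS"
```

Whatever this prints for the real token is the value Vault compares against its configured issuer.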

If I create this script on the app pod:

JWT=$(cat $JWT_PATH);
echo $JWT_PATH;
echo "$JWT";
curl -s http://vault:8200/v1/sys/health?standbyok=true
curl --header "Content-Type: application/json" -v --trace-ascii /dev/stdout --request POST --data "{\"jwt\": \"$JWT\", \"role\": \"webapp\"}" http://vault:8200/v1/auth/kubernetes/login
  1. The health check looks good.
  2. The JWT matches what curl sends
  3. When I execute the script the login fails with claim iss is invalid. I am truncating the actual JWT token:
/var/run/secrets/kubernetes.io/serviceaccount/token

eyJ...

{"initialized":true,"sealed":false,"standby":true,"performance_standby":false,"replication_performance_mode":"disabled","replication_dr_mode":"disabled","server_time_utc":1628146564,"version":"1.8.0","cluster_name":"vault-cluster-b323e711","cluster_id":"084375b5-b89e-6ef8-05d3-d850ac24bdc9"}

Warning: --trace-ascii overrides an earlier trace/verbose option
Note: Unnecessary use of -X or --request, POST is already inferred.
== Info:   Trying 10.97.255.66...
== Info: TCP_NODELAY set
== Info: Connected to vault (10.97.255.66) port 8200 (#0)
=> Send header, 175 bytes (0xaf)
0000: POST /v1/auth/kubernetes/login HTTP/1.1
0029: Host: vault:8200
003b: User-Agent: curl/7.52.1
0054: Accept: */*
0061: Content-Type: application/json
0081: Content-Length: 1054
0097: Expect: 100-continue
00ad: 
<= Recv header, 23 bytes (0x17)
0000: HTTP/1.1 100 Continue
=> Send data, 1054 bytes (0x41e)
0000: {"jwt": "eyJ", "role": "webapp"}
== Info: We are completely uploaded and fine
<= Recv header, 36 bytes (0x24)
0000: HTTP/1.1 500 Internal Server Error
<= Recv header, 25 bytes (0x19)
0000: Cache-Control: no-store
<= Recv header, 32 bytes (0x20)
0000: Content-Type: application/json
<= Recv header, 37 bytes (0x25)
0000: Date: Thu, 05 Aug 2021 06:56:04 GMT
<= Recv header, 20 bytes (0x14)
0000: Content-Length: 40
<= Recv header, 2 bytes (0x2)
0000: 
<= Recv data, 40 bytes (0x28)
0000: {"errors":["claim \"iss\" is invalid"]}.
{"errors":["claim \"iss\" is invalid"]}
== Info: Curl_http_done: called premature == 0
== Info: Connection #0 to host vault left intact

I’m not sure what to try. Can someone tell me what I might be doing wrong in my setup? Is /var/run/secrets/kubernetes.io/serviceaccount/token the right path to this service account’s JWT?

I added disable_iss_validation=true to the auth/kubernetes/config, but now get:
{"errors":["service account name not authorized"]}

"With the recent updates, you will need to make sure to add the issuer parameter. If you miss this step, you will get a claim "iss" is invalid error when attempting to start your pod."
This is how I got the issuer:

kubectl proxy &
curl --silent http://127.0.0.1:8001/api/v1/namespaces/default/serviceaccounts/default/token \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest"}' \
  | jq -r '.status.token' \
  | cut -d. -f2 \
  | base64 -D

Vault CSI Provider | Vault by HashiCorp (vaultproject.io)

I’ve followed the documentation to get the issuer which was computed to be https://master.cfcr.internal:8443 for a TKGI cluster and set it on the Vault endpoint. However I’m still getting the claim "iss" is invalid message. My Vault instance is running on a Kubernetes 1.20 cluster while the pods connecting to Vault are on a 1.21 cluster. Would this matter? I’m getting the issuer for the 1.21 cluster, although it appears to be the same for both clusters. Do the roles need to be recreated, or just the auth endpoint needs to be reconfigured to add the issuer? Any thoughts?

Update: I created a vanilla Vault 1.8.1 instance on a 1.21 cluster. I created service accounts there and am still receiving {"errors":["claim \"iss\" is invalid"]} after setting the appropriate issuer.

Have you tried without the issuer? I was getting this error before; I added disable_iss_validation=true and it worked without the issuer.

vault write auth/.../config \
   token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
   kubernetes_host="$KUBE_HOST" \
   kubernetes_ca_cert="$KUBE_CA_CERT" \
   disable_iss_validation=true \
   disable_local_ca_jwt=true

I tried without the issuer, setting disable_iss_validation=true and disable_local_ca_jwt=true, and it throws {"errors":["permission denied"]}. Still not sure how to resolve this issue.

One interesting thing I’ve found. When I decode the jwt of the service account that is set on the role I see "iss": "kubernetes/serviceaccount", but when I curl http://127.0.0.1:8001/api/v1/namespaces/default/serviceaccounts/default/token the issuer shows as https://master.cfcr.internal:8443.

I just tried setting the issuer on the Kubernetes config to kubernetes/serviceaccount and it throws the same {"errors":["permission denied"]} message.

When I was getting "permission denied", I was able to solve it by removing the double quotes from the SecretProviderClass.
Retrieve secrets from Vault using csi driver returning "permission denied" - Vault - HashiCorp Discuss

Hi @kahunacohen,

When you configured k8s auth, you ran:

vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="$K8S_HOST" \
    kubernetes_ca_cert="$SA_CA_CERT"

As you point out, setting disable_iss_validation=true fixes your issue, and the alternative would be to set issuer to a value you can find following these instructions similar to what has been linked above. However, as the API docs note, those settings are now deprecated as of Vault 1.9, and disable_iss_validation will default to true for new mounts created in 1.9+.
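For example, setting the issuer explicitly on a pre-1.9 mount would look something like the following — the issuer value here is purely illustrative; use whatever your cluster actually reports via the TokenRequest trick linked above:

```shell
# Sketch only: the issuer below is an example value, not a recommendation.
vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="$K8S_HOST" \
    kubernetes_ca_cert="$SA_CA_CRT" \
    issuer="https://kubernetes.default.svc.cluster.local"
```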

For your service account name not authorized error, it looks like the issue is the service account your pod is running as does not match the service account you configured for the k8s auth role in Vault:

vault write auth/kubernetes/role/webapp \
    bound_service_account_names=vault-auth \
    bound_service_account_namespaces=default \
    policies=myapp-kv-ro \
    ttl=24h

AFAICT it looks like you’re using the default service account, so you should set bound_service_account_names to default instead of (or as well as) vault-auth.
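For example (assuming the web pod runs as the default service account in the default namespace):

```shell
# Bind the role to the service account the pod actually runs as.
vault write auth/kubernetes/role/webapp \
    bound_service_account_names=default \
    bound_service_account_namespaces=default \
    policies=myapp-kv-ro \
    ttl=24h
```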

Let us know if you’re still having issues after updating that.

I think the most likely candidates for permission denied are probably that the JWT given as token_reviewer_jwt does not have permission to the TokenReview API or the bound_service_account_names argument specifies the wrong service account name. If nothing existing in this thread helps you troubleshoot your issue, it might be worth starting a new thread detailing all your setup steps as the OP did, and we should be able to help.
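One way to check the reviewer JWT directly is to call the TokenReview API yourself — a sketch, reusing $K8S_HOST, $SA_JWT_TOKEN, and $JWT from the setup steps earlier in the thread:

```shell
# POST a TokenReview as the reviewer service account. A 403 here means the
# reviewer JWT lacks the system:auth-delegator permissions; a 201 with
# "authenticated": true means token review itself is working.
curl -sk "$K8S_HOST/apis/authentication.k8s.io/v1/tokenreviews" \
  -H "Authorization: Bearer $SA_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"apiVersion\":\"authentication.k8s.io/v1\",\"kind\":\"TokenReview\",\"spec\":{\"token\":\"$JWT\"}}"
```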

Hi @tomhjp, my colleagues and I were able to resolve this issue. We had been using a Terraform resource to create the service account, and a data resource to pull the token from its secret. Basically this:

resource "kubernetes_service_account" "service_account" {
  metadata {
    name      = var.token_reviewer_service_account
    namespace = var.token_reviewer_namespace
  }
  automount_service_account_token = true
}

And then using data "kubernetes_secret" "token_reviewer" {} to get the token and ca.crt from that service account's secret to configure the Kubernetes auth backend. Something like this:

resource "vault_kubernetes_auth_backend_config" "kube_auth" {
  backend            = vault_auth_backend.kubernetes.path
  kubernetes_host    = var.kube_api_host
  kubernetes_ca_cert = data.kubernetes_secret.token_reviewer.data["ca.crt"]
  token_reviewer_jwt = data.kubernetes_secret.token_reviewer.data["token"]
  issuer             = "https://master.cfcr.internal:8443"
}

The problem was that the token in the secret had a different issuer than the one returned when we call:

curl --silent http://127.0.0.1:8001/api/v1/namespaces/system/serviceaccounts/token-reviewer/token -H "Content-Type: application/json" -X POST -d '{"apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest"}' | jq -r '.status.token'

So instead of using data.kubernetes_secret.token_reviewer.data["token"] we’re just pulling the token and using a variable to pass it in from a vars file. It’s not ideal, but at least things are working.

Thanks,
Weldon

The last method I described did not actually work because those tokens are short-lived. I believe the official documentation was updated after the above discussion, and we ended up using the Vault client's own JWT as the reviewer JWT by following Kubernetes - Auth Methods | Vault by HashiCorp.

The difference was to set disable_local_ca_jwt=true on the auth endpoint and to create a cluster role binding for each application's service account. See Kubernetes - Auth Methods | Vault by HashiCorp.

We also removed token_reviewer_jwt from the kube auth endpoint so that it defaults to the client's JWT.
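Roughly, the final config looked like this — a sketch from our setup; the host and CA variables are ours:

```shell
# No token_reviewer_jwt: Vault falls back to using the client's own JWT
# for the TokenReview call, so each application's service account needs
# its own system:auth-delegator ClusterRoleBinding.
vault write auth/kubernetes/config \
    kubernetes_host="$KUBE_HOST" \
    kubernetes_ca_cert="$KUBE_CA_CERT" \
    disable_local_ca_jwt=true
```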