Getting 403 permission denied when connecting to Vault cluster from a pod running on an external Kubernetes cluster

Hello All,

I am facing a problem where I cannot connect to Vault from a pod, or via curl using a service account token, from a different Kubernetes cluster. It gives me "permission denied".

Below is the config I have:

I have deployed an HA Vault cluster (3 nodes) on one Kubernetes cluster (let's call it cluster-A). I have exposed the endpoint via an ingress, and I can access the UI in a browser and via curl with root credentials.

Then I deployed a pod on the same cluster-A with a new service account "app", and it is able to access secrets with no issues.

Then I tried deploying the same pod with a service account on a different Kubernetes cluster-B, and it fails with "permission denied".

Below are the steps I have followed on Cluster-B:

  1. Deployed the Vault agent injector using the Helm chart with "injector.externalVaultAddr" set, as below:
 helm install -f vault-custom-ha.yaml vault -n vault  . --set injector.externalVaultAddr=http://<vault_ingress_url>
  2. Created a new service account named "vault-auth" in the "vault" namespace, plus a cluster role binding so its token can be used for token review by the Vault cluster on cluster-A (roughly the commands sketched below).
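Sketch only; the binding name is just what I would call it, and the important part is the system:auth-delegator cluster role:

kubectl create serviceaccount vault-auth -n vault
kubectl create clusterrolebinding vault-auth-tokenreview \
   --clusterrole=system:auth-delegator \
   --serviceaccount=vault:vault-auth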
  3. Got the secret token of "vault-auth" as below (exported this variable inside the Vault pod):
SA_JWT_TOKEN=$(kubectl get secret/vault-auth-token-xmhbx -n vault -o jsonpath='{.data.token}' | base64 -d)
  4. Got the K8s host and CA cert using the commands below (stored the CA file inside the Vault pod):
kubectl config view --raw -o jsonpath='{.clusters[*].cluster.certificate-authority-data}' | base64 -d > /tmp/ca-cert
K8S_HOST=$(kubectl config view --raw -o jsonpath='{.clusters[*].cluster.server}')
  5. Enabled Kubernetes auth and added the config below in the Vault pod:
    vault auth enable -path=test-new kubernetes
vault write auth/test-new/config kubernetes_host="${K8S_HOST}" kubernetes_ca_cert=@/tmp/ca-cluster token_reviewer_jwt="${SA_JWT_TOKEN}"  

vault write auth/test-new/role/k8s-app \
   bound_service_account_names=vault-auth \
   bound_service_account_namespaces=vault \
   policies=k8s-secrets \
   ttl=10d

I am using the same secret policy (k8s-secrets) that I used for the pod deployed on cluster-A; a sketch of what that policy needs is below.
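Sketch only, assuming "internal" is a kv-v2 mount (which the .Data.data template further down implies); the actual policy content wasn't posted:

vault secrets enable -path=internal kv-v2
vault policy write k8s-secrets - <<EOF
path "internal/data/config1" {
  capabilities = ["read"]
}
EOF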

  6. Created the secret below:
vault kv put internal/config1 \
      username='appuser' \
      password='suP3rsec(et!' \
      ttl='30h'
  7. Deployed the application with a new service account using the deployment YAML below, and I am getting "permission denied" in the vault-agent-init log:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: vault
  labels:
    app: vault-agent-demo
spec:
  selector:
    matchLabels:
      app: vault-agent-demo
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-database: "internal/config1"
        vault.hashicorp.com/role: "k8s-app"
        vault.hashicorp.com/auth-path: "/auth/test-new"
        vault.hashicorp.com/agent-inject-template-database: |
          {{- with secret "internal/config1" -}}
          postgresql://{{ .Data.data.username }}:{{ .Data.data.password }}@postgres:5432/wizard
          {{- end }}
      labels:
        app: vault-agent-demo
    spec:
      serviceAccount: vault-auth
      serviceAccountName: vault-auth
      containers:
      - name: app
        image: public/nginx:1.19.2
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app
  namespace: vault
  labels:
    app: vault-agent-demo

Below is the log from “vault-agent-init”:

2022-02-03T22:01:06.768Z [INFO]  auth.handler: authenticating
2022-02-03T22:01:06.791Z [ERROR] auth.handler: error authenticating:
  error=
  | Error making API request.
  |
  | URL: PUT http://vault-ui.com/v1/auth/test-new/login
  | Code: 403. Errors:
  |
  | * permission denied
  backoff=1.8s

I see the same error when I execute the login via curl with the same JWT token:

 curl --insecure --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "k8s-app"}' http://<vault_ingress>/v1/auth/test-new/login
{"errors":["permission denied"]}

What am I missing here? Any help would be really appreciated.

Below are the versions I am using:

Kubernetes (deployed via Rancher): 1.18.20
Vault: 1.9.3

It sounds like you’re pretty close. There might be some useful hints in the Vault server logs in cluster A. Have you checked whether Vault in cluster A can reach the k8s API via the URL that you’ve set kubernetes_host to? It’s a little hard to understand that config line because it looks like the markdown renderer has mangled it a bit, but Vault will return permission denied if its TokenReview API call to kubernetes fails for any reason.
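For example, you could replicate the TokenReview call Vault makes from a shell in one of the Vault pods in cluster A. This is only a sketch, reusing the variables and file paths from your steps; `$KUBE_TOKEN` is the JWT you're trying to log in with:

curl --cacert /tmp/ca-cert \
   --header "Authorization: Bearer ${SA_JWT_TOKEN}" \
   --header "Content-Type: application/json" \
   --request POST \
   --data '{"apiVersion":"authentication.k8s.io/v1","kind":"TokenReview","spec":{"token":"'"$KUBE_TOKEN"'"}}' \
   "${K8S_HOST}/apis/authentication.k8s.io/v1/tokenreviews"

If that fails (TLS error, 401/403, or "authenticated": false in the response), the Vault login will return permission denied.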

Also, you mentioned applying the token-review cluster role binding to the vault-auth service account, but if you are using that route (i.e. following this part of the docs: Kubernetes - Auth Methods | Vault by HashiCorp) then you don’t need to set token_reviewer_jwt when configuring the Kubernetes auth mount in Vault. Given that Vault is on a different Kubernetes cluster, you should either set kubernetes_host and disable_local_ca_jwt=true, or drop the cluster role binding for vault-auth and set kubernetes_host and token_reviewer_jwt.
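In other words, roughly one of these two shapes (sketches only, reusing your variable names; the CA path should point at wherever you stored the cluster-B CA):

# Option 1: no reviewer JWT; the client's own JWT is used for the TokenReview,
# which is why vault-auth then needs the system:auth-delegator binding
vault write auth/test-new/config \
   kubernetes_host="${K8S_HOST}" \
   kubernetes_ca_cert=@/tmp/ca-cert \
   disable_local_ca_jwt=true

# Option 2: a dedicated reviewer JWT (whichever service account issued it
# needs TokenReview permission in cluster B)
vault write auth/test-new/config \
   kubernetes_host="${K8S_HOST}" \
   kubernetes_ca_cert=@/tmp/ca-cert \
   token_reviewer_jwt="${SA_JWT_TOKEN}"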

Lastly, to stop the forum mangling your commands/config, you can wrap inline text in single backticks: `. Or you can wrap multi-line blocks in triple-backticks on their own line. It’s mostly pretty similar to GitHub’s markdown syntax.

Thanks a lot for the info. I tested the connection with nc from the Vault pod and the port is open. I tried both of the options you suggested and got the same response (403 permission denied).

Given that Vault is on a different Kubernetes cluster, you should either set `kubernetes_host` and `disable_local_ca_jwt=true`, or drop the cluster role binding for `vault-auth` and set `kubernetes_host` and `token_reviewer_jwt`.

I enabled audit logs and I see the following entries:

{"time":"2022-02-04T14:21:06.419528693Z","type":"request","auth":{"token_type":"default"},"request":{"id":"9021b62b-90ab-e0d2-5baf-565e0f90baa1","operation":"update","mount_type":"kubernetes","namespace":{"id":"root"},"path":"auth/test-new1/login","data":{"jwt":"hmac-sha256:c80dc11e2422ae3171fe5d3d3798fb1958372a4929456a4391fb545dd258af6a","role":"hmac-sha256:b2d6f86db8f07c89db712013e0f2f003e1b77956b7f8433c9fb53ab3c47cf2ce"},"remote_address":"10.42.0.0"}}
{"time":"2022-02-04T14:21:06.435846098Z","type":"response","auth":{"token_type":"default"},"request":{"id":"9021b62b-90ab-e0d2-5baf-565e0f90baa1","operation":"update","mount_type":"kubernetes","namespace":{"id":"root"},"path":"auth/test-new1/login","data":{"jwt":"hmac-sha256:c80dc11e2422ae3171fe5d3d3798fb1958372a4929456a4391fb545dd258af6a","role":"hmac-sha256:b2d6f86db8f07c89db712013e0f2f003e1b77956b7f8433c9fb53ab3c47cf2ce"},"remote_address":"10.42.0.0"},"response":{"mount_type":"kubernetes"},"error":"permission denied"}

And sorry for the formatting of the deployment YAML in the previous post; I have updated the original post with proper formatting.

Any other suggestions, please? Also, I see the same 403 permission denied when I try curl with the same JWT token. Is this because the token is invalid, or something else?

If you’re trying to authenticate to Vault in cluster A from an app in cluster B, then Vault needs to have a Kubernetes auth method set up for cluster B. It sounds like you set the auth method up for cluster A, so cluster B clients can’t authenticate using that auth method path. If that’s the case, you either need to recreate the auth method using the host and cert for cluster B, or add a new auth method for cluster B at a different path.
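For example, something along these lines (the path and placeholders are just illustrative):

vault auth enable -path=cluster-b kubernetes
vault write auth/cluster-b/config \
   kubernetes_host="https://<cluster-b-apiserver>:<port>" \
   kubernetes_ca_cert=@/path/to/cluster-b-ca.crt \
   token_reviewer_jwt="<jwt of a cluster-b service account with TokenReview permission>"
vault write auth/cluster-b/role/k8s-app \
   bound_service_account_names=vault-auth \
   bound_service_account_namespaces=vault \
   policies=k8s-secrets \
   ttl=10d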

Thanks for the reply.

I created the service account (vault-auth) on cluster-B and I am using it for the auth method in the Vault cluster deployed on cluster-A, as below:

vault write auth/test-new/config kubernetes_host="${K8S_HOST}" kubernetes_ca_cert=@/tmp/ca-cluster token_reviewer_jwt="${SA_JWT_TOKEN}"  

vault write auth/test-new/role/k8s-app \
   bound_service_account_names=vault-auth \
   bound_service_account_namespaces=vault \
   policies=k8s-secrets \
   ttl=10d

Then I deploy the application on cluster-B using the same service account (vault-auth). Do I need to create a service account in cluster-A too? If so, how do I use it?

Before answering your question I see a couple of things that make me wonder:

  1. Why do you create a service account named app when you are not using it? I was under the assumption that this service account would be used for your deployment on cluster-B.
  2. I didn’t see this asked before, but double-check whether the option disable_iss_validation is set to true (see the quick check below). This is especially important if you have been working with an older version and are migrating to 1.9.3.
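A quick way to check and, if needed, set it (sketch only, using the mount path and variables from your steps):

vault read auth/test-new/config
vault write auth/test-new/config \
   kubernetes_host="${K8S_HOST}" \
   kubernetes_ca_cert=@/tmp/ca-cert \
   token_reviewer_jwt="${SA_JWT_TOKEN}" \
   disable_iss_validation=true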

Answer to your reply:

Then I deploy the application on cluster-B using the same service account (vault-auth). Do I need to create a service account in cluster-A too? If so, how do I use it?

You are authenticating from cluster-B to cluster-A. You do not have to create any service account for this on cluster-A.

Vault uses the service account token to review the JWT supplied in the login request against cluster-B.
A token from cluster-A cannot do this; only tokens from cluster-B can.

Thanks a lot for the info. Then it looks like I am doing it the right way: I created the "vault-auth" service account on cluster-B and I am using it for the Vault auth method on cluster-A, where the Vault cluster is deployed. Somehow I am still seeing the "403 permission denied" error.

Any idea what I am missing that would cause this error? I went through a lot of blogs but couldn't find what is causing it. Any help would be really appreciated.

In order to be able to say what might go wrong I would have to see your configuration setup for the auth method. Could you share that?

I think the issue might be that you configured the auth method with the host and cert info for cluster A, but it needs them for cluster B if the apps authenticating to Vault are in cluster B. Do you have two different kubeconfig files for the two different clusters, or are they different entries in the same kubeconfig?
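If they are separate entries in one kubeconfig, note that `{.clusters[*]...}` returns every cluster, so it's worth pinning the query to the cluster-B entry; something like this (the cluster name is a placeholder):

kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="<cluster-b-name>")].cluster.server}'
kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="<cluster-b-name>")].cluster.certificate-authority-data}' | base64 -d > /tmp/ca-cert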

Below is the config I am using:

Get the host and cert from cluster-B using the commands below (the CA file is stored inside the Vault pod):

kubectl config view --raw -o jsonpath='{.clusters[*].cluster.certificate-authority-data}' | base64 -d > /tmp/ca-cert
K8S_HOST=$(kubectl config view --raw -o jsonpath='{.clusters[*].cluster.server}')

Enabled Kubernetes auth and added the config below in the Vault pod, which is running on cluster-A:

vault auth enable -path=test-new kubernetes

vault write auth/test-new/config kubernetes_host="${K8S_HOST}" kubernetes_ca_cert=@/tmp/ca-cluster token_reviewer_jwt="${SA_JWT_TOKEN}"  

vault write auth/test-new/role/k8s-app \
   bound_service_account_names=vault-auth \
   bound_service_account_namespaces=vault \
   policies=k8s-secrets \
   ttl=10d

Any suggestions about what is wrong would be really helpful.

No. I have separate kubeconfig files for the two Kubernetes clusters, and I am running them on two different machines. I am using the host and cert of cluster-B.

Thanks a lot to everyone for the feedback. I got it resolved by pointing directly at the control-plane master node where kube-api is running, and using the CA cert from within it.

Until now I was using the Rancher URL directly, and that was causing the failure. After switching to the kube-api master node URL and CA, it worked.
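The working config ended up being shaped roughly like this (host and CA path are placeholders for the control-plane values rather than the Rancher proxy URL):

# port is typically 6443, but use whatever your kube-apiserver listens on
K8S_HOST="https://<controlplane-node-ip>:6443"
vault write auth/test-new/config \
   kubernetes_host="${K8S_HOST}" \
   kubernetes_ca_cert=@/tmp/ca-cert \
   token_reviewer_jwt="${SA_JWT_TOKEN}"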

Also noticed that the Vault agent injector needs to be deployed only after creating the service account on cluster-B. These steps helped in my case.

We can consider this topic closed.
