Configure Vault k8s auth

Hello,

I’m working on migrating an EKS cluster and need to configure the Vault Kubernetes auth method.

Can I reuse the same auth mount path on my new cluster?

My old cluster config:

vault auth enable --path="old-k8s-dev" kubernetes

vault write auth/old-k8s-dev/config \
        token_reviewer_jwt="$SA_JWT_TOKEN" \
        kubernetes_host="$K8S_HOST" \
        kubernetes_ca_cert="$SA_CA_CRT"

For the new cluster, can I do:

export VAULT_SA_NAME=$(kubectl get sa vault-auth -o jsonpath="{.secrets[*]['name']}")
export SA_JWT_TOKEN=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data.token}" | base64 --decode; echo)
export SA_CA_CRT=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
export K8S_HOST=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
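One caveat with those lookups: on Kubernetes 1.24+, token Secrets are no longer auto-created for ServiceAccounts, so `.secrets[*]` on the `vault-auth` ServiceAccount may come back empty on a new EKS cluster. In that case you can create the token Secret yourself first (the Secret name `vault-auth-token` below is just an example):

```shell
# Sketch for Kubernetes 1.24+, where ServiceAccount token Secrets are not
# auto-generated. Assumes the vault-auth ServiceAccount already exists.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: vault-auth-token
  annotations:
    kubernetes.io/service-account.name: vault-auth
type: kubernetes.io/service-account-token
EOF

# Then point the jsonpath lookups at this Secret instead of .secrets[*].
export VAULT_SA_NAME=vault-auth-token
```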


vault write auth/old-k8s-dev/config \
        token_reviewer_jwt="$SA_JWT_TOKEN" \
        kubernetes_host="$K8S_HOST" \
        kubernetes_ca_cert="$SA_CA_CRT" 
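Before running that write, it may be worth snapshotting the existing config so you can roll back if the old cluster still needs access:

```shell
# Save the current mount config (minus the token_reviewer_jwt, which Vault
# never returns) before overwriting it with the new cluster's values.
vault read -format=json auth/old-k8s-dev/config > old-k8s-dev-config.json
```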

Or do I have to create a new path and migrate my secrets?
Does this overwrite my old config and stop my old cluster from accessing Vault?

The k8s auth config API just stores the specified values and doesn’t touch any other data stored in the mount, so this should be fine, as long as your bound service account names and namespaces stay the same for any roles you have set up. You may run into problems, though, if you’ve specified audiences for any of your k8s auth roles, since tokens issued by the new cluster may not carry the same audience.
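You can check for that by reading each role back before migrating (the role name `my-role` below is just a placeholder for whatever roles you have):

```shell
# List the roles on the mount, then inspect each one.
vault list auth/old-k8s-dev/role
vault read auth/old-k8s-dev/role/my-role

# If the output shows a non-empty "audience" field, confirm that service
# account tokens from the new cluster are issued with that same audience
# before cutting over.
```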

Having said that, I would definitely test it out first by creating a test mount configured for something like a local kind cluster, and then migrating it to point to your new EKS cluster using the commands you suggest.
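For example, a dry run could look something like this (the mount name `k8s-test`, the kind API endpoint, the `KIND_*` variables, and the role name are all placeholders):

```shell
# Enable a throwaway mount pointed at a local kind cluster first.
vault auth enable -path="k8s-test" kubernetes
vault write auth/k8s-test/config \
    token_reviewer_jwt="$KIND_SA_JWT" \
    kubernetes_host="https://127.0.0.1:6443" \
    kubernetes_ca_cert="$KIND_CA_CRT"

# Then re-point the same mount at the new EKS cluster, exactly as you
# propose doing for old-k8s-dev...
vault write auth/k8s-test/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="$K8S_HOST" \
    kubernetes_ca_cert="$SA_CA_CRT"

# ...and confirm a login against an existing role still succeeds.
vault write auth/k8s-test/login role=my-role jwt="$SA_JWT_TOKEN"
```

If the login on the test mount works after re-pointing, repeating the same write against `auth/old-k8s-dev/config` should behave the same way.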