Hello everyone,
I’m struggling to run Vault in a Kubernetes cluster using the official Helm chart, specifically in conjunction with Azure Key Vault for auto-unseal.
This is my values file with a config that IS working:
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: true
  image:
    repository: "hashicorp/vault-k8s"
    tag: "1.1.0"
    pullPolicy: IfNotPresent
  resources:
    requests:
      memory: 50Mi
      cpu: 50m
    limits:
      memory: 256Mi
      cpu: 250m

server:
  enabled: true
  image:
    repository: "hashicorp/vault"
    tag: "1.12.1"
    pullPolicy: IfNotPresent
  resources:
    requests:
      memory: 50Mi
      cpu: 50m
    limits:
      memory: 256Mi
      cpu: 250m
  ingress:
    enabled: true
    annotations: |
      cert-manager.io/cluster-issuer: wildcard-k8s
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    pathType: Prefix
    activeService: false
    hosts:
      - host: vault.k8s.local
        paths:
          - /
    tls:
      - secretName: ingress-vault-tls
        hosts:
          - vault.k8s.local
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt
  volumes:
    - name: tls-server
      secret:
        secretName: k8s-vault-secret
    - name: ca-pemstore
      configMap:
        name: ca-pemstore
  volumeMounts:
    - mountPath: /vault/userconfig/tls-ca
      name: tls-server
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-pemstore
      readOnly: false
  standalone:
    enabled: false
  service:
    enabled: true
    active:
      enabled: true
    standby:
      enabled: false
    instanceSelector:
      enabled: false
  ha:
    enabled: true
    replicas: 1
    config: |
      ui = true

      listener "tcp" {
        tls_disable     = 0
        tls_cert_file   = "/vault/userconfig/tls-ca/tls.crt"
        tls_key_file    = "/vault/userconfig/tls-ca/tls.key"
        tls_min_version = "tls12"
        address         = "0.0.0.0:8200"
      }

      storage "postgresql" {
        connection_url = "postgres://redacted/vault01?sslmode=verify-full"
        table          = "vault_kv_store"
        ha_enabled     = "true"
        ha_table       = "vault_ha_locks"
      }

ui:
  enabled: true
  externalPort: 8200
Now I want to use the auto-unseal feature, so I added the following environment variables to the 'server' object in my Helm values:
server:
  extraEnvironmentVars:
    VAULT_SEAL_TYPE: azurekeyvault
  extraSecretEnvironmentVars:
    - envName: AZURE_TENANT_ID
      secretName: az-kv-secret
      secretKey: AZURE_TENANT_ID
    - envName: AZURE_CLIENT_ID
      secretName: az-kv-secret
      secretKey: AZURE_CLIENT_ID
    - envName: AZURE_CLIENT_SECRET
      secretName: az-kv-secret
      secretKey: AZURE_CLIENT_SECRET
    - envName: VAULT_AZUREKEYVAULT_VAULT_NAME
      secretName: az-kv-secret
      secretKey: VAULT_AZUREKEYVAULT_VAULT_NAME
    - envName: VAULT_AZUREKEYVAULT_KEY_NAME
      secretName: az-kv-secret
      secretKey: VAULT_AZUREKEYVAULT_KEY_NAME
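As I understand it, the same settings could alternatively be expressed as a seal stanza inside the ha.config block instead of environment variables; here is a rough sketch with placeholder values (not my real tenant, client, or Key Vault names):

seal "azurekeyvault" {
  tenant_id     = "REDACTED_TENANT_ID"
  client_id     = "REDACTED_CLIENT_ID"
  client_secret = "REDACTED_CLIENT_SECRET"
  vault_name    = "my-azure-key-vault"
  key_name      = "vault-unseal-key"
}

I went with the environment-variable approach so the credentials come from the az-kv-secret Kubernetes secret rather than being rendered into the config.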
At this point the Vault service inside the pod won’t even start.
When I run vault status inside the pod I get this error:
Error checking seal status: Get "https://127.0.0.1:8200/v1/sys/seal-status": dial tcp 127.0.0.1:8200: connect: connection refused
And that’s it; I can’t find any other error that would help me investigate where the problem might be.
I tried enabling trace and debug logging (nothing), searching for any logs inside the pod (nothing), and running the vault server and vault operator diagnose commands against the config file (nothing; the command just hangs).
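For reference, these are roughly the commands I ran while debugging (pod and namespace names are placeholders for my actual ones, and /tmp/storageconfig.hcl is where the chart's startup script appears to copy the rendered config):

kubectl logs vault-0 -n vault                            # nothing useful
kubectl exec -it vault-0 -n vault -- sh
vault operator diagnose -config=/tmp/storageconfig.hcl   # just hangs
vault server -config=/tmp/storageconfig.hcl              # also just hangs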
I also double-checked that the provided variables are correct and can authenticate to Key Vault: I deployed a standalone container, passed the same variables with the same values, and auto-unseal indeed works there (although instead of the Postgres backend I used the file one, if that makes any difference).
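For what it’s worth, the standalone test used roughly this kind of config (a sketch, not the exact file; the listener and storage path are illustrative), with VAULT_SEAL_TYPE and the same AZURE_* / VAULT_AZUREKEYVAULT_* variables passed in as environment variables:

ui = true

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}

storage "file" {
  path = "/vault/data"
}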
Hope you can help me figure out what’s going on.
Thank you