Vault auto-unseal process



I deployed Vault on an on-premise Kubernetes cluster with three nodes. Manual unsealing works fine when I follow the instructions in this tutorial.

But I would like to handle this unsealing automatically, after deploying Vault in my cluster.

I would also like to confirm that I have understood the process of setting up the transit secrets engine.

My understanding: you need at least two deployed Vault instances, one (vault-0) which will host the transit engine and the other (vault-1) which will be unsealed via this transit engine.


  1. Does this assume vault-0 must be unsealed manually?
  2. Is this unsealing procedure valid for all Vault deployments on Kubernetes (on premise)?
  3. In my case, I have three pods (vault-0, vault-1, vault-2).
    On my vault-0 instance, I enabled the transit engine and generated the wrapping_token.
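For reference, the transit side of the setup on vault-0 follows the usual auto-unseal tutorial steps; sketched below, with the key and policy name `autounseal` being the tutorial's example name, not anything required:

```shell
# On the (unsealed) vault-0 instance that acts as the unseal provider:
vault secrets enable transit
vault write -f transit/keys/autounseal

# Policy allowing the other cluster to encrypt/decrypt with that key
vault policy write autounseal - <<EOF
path "transit/encrypt/autounseal" {
  capabilities = ["update"]
}
path "transit/decrypt/autounseal" {
  capabilities = ["update"]
}
EOF

# Wrapped, periodic orphan token handed to the cluster being auto-unsealed
vault token create -orphan -policy="autounseal" -wrap-ttl=120 -period=24h
```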

But on vault-1 and vault-2, when I run

VAULT_TOKEN="hvs.CAESIAdlH3P-PviQbHGyI" vault unwrap

I get this message:

Error unwrapping: Error making API request.

Code: 503. Errors:

  • Vault is sealed

However, I can connect to vault-1 and vault-2 just fine with:

kubectl -n vault exec --stdin=true --tty=true vault-1 -- sh
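One thing worth checking: `vault unwrap` must be served by an *unsealed* Vault, and inside the vault-1/vault-2 pods the CLI defaults to the local (sealed) node, which would explain the 503 "Vault is sealed" response. A sketch of pointing the CLI at vault-0 instead, using the addresses and CA path from the config below:

```shell
# From inside the vault-1 pod: unwrap against vault-0, which is unsealed
VAULT_ADDR="https://vault-0.vault-internal:8200" \
VAULT_CACERT="/vault/userconfig/tls-ca/tls.crt" \
VAULT_TOKEN="hvs.CAESIAdlH3P-PviQbHGyI" vault unwrap

# Check which node the CLI is talking to and its seal status
vault status
```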

My Vault Helm chart value overrides:

global:
  enabled: true
  tlsDisable: false

injector:
  enabled: false
  image:
    repository: "hashicorp/vault-k8s"
    tag: "latest"
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m

server:
  enabled: true
  # (a second "enabled: true" followed here in the original paste; its parent key was lost)
  resources:
    requests:
      memory: 1Gi
      cpu: 1000m
    limits:
      memory: 1Gi
      cpu: 2000m
  image:
    repository: "hashicorp/vault"
    tag: "latest"
    pullPolicy: IfNotPresent
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-ca/tls.crt
  volumes:
    - type: secret
      name: tls-server
    - type: secret
      name: tls-ca
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true

  config: |
    ui = true
    api_addr = "https://POD_IP:8200"

    listener "tcp" {
      tls_disable = 1
      address = "[::]:8200"
      cluster_address = "[::]:8201"
      tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
      tls_key_file  = "/vault/userconfig/tls-server/tls.key"
    }

    storage "raft" {
      path = "/vault/data"

      retry_join {
        leader_api_addr = "https://vault-0.vault-internal:8200"
        leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
        leader_client_key_file  = "/vault/userconfig/tls-server/tls.key"
      }
      retry_join {
        leader_api_addr = "https://vault-1.vault-internal:8200"
        leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
        leader_client_key_file  = "/vault/userconfig/tls-server/tls.key"
      }
      retry_join {
        leader_api_addr = "https://vault-2.vault-internal:8200"
        leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
        leader_client_key_file  = "/vault/userconfig/tls-server/tls.key"
      }

      autopilot {
        cleanup_dead_servers = "true"
        last_contact_threshold = "200ms"
        last_contact_failure_threshold = "10m"
        max_trailing_logs = 250000
        min_quorum = 5
        server_stabilization_time = "10s"
      }
    }

    service_registration "kubernetes" {}

ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 8200

Using the transit engine for unsealing is for when you have two totally separate Vault clusters, with one cluster using the other to handle unsealing. Within each of those clusters you could have multiple instances. For the cluster you use for unsealing, it is up to you how you unseal it: you could use manual unsealing, a cloud system, an HSM, etc.

It sounds like you are trying to use this mechanism with only a single Vault cluster?
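For illustration, the cluster being auto-unsealed points at the *other* cluster via a `seal "transit"` stanza in its server config; the address, token, key name, and CA path below are placeholders:

```hcl
seal "transit" {
  # Address of the other (unsealing) Vault cluster
  address     = "https://unseal-vault.example.com:8200"
  # Token with update capability on the transit encrypt/decrypt paths
  token       = "hvs.example-token"
  key_name    = "autounseal"
  mount_path  = "transit/"
  tls_ca_cert = "/etc/vault/tls/ca.crt"
}
```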

I actually didn’t quite understand that aspect of things, thank you.

I have another concern, I have clustered applications that will use agents to retrieve secrets.

But I wonder how to retrieve the secrets for non-clustered applications.

I have non-containerized legacy apps that need to communicate with Vault deployed in Kubernetes.
Do you have an idea of the flows needed to establish communication between these two parties?

There are likely to be a few differences between an application that runs in Kubernetes compared with one that runs elsewhere.

Firstly the authentication mechanism used to login to Vault will likely be different. For applications running within Kubernetes you’d often use the Kubernetes auth engine, which uses Service Accounts. Outside of Kubernetes you’d often use AppRole, which uses an ID and secret.
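A sketch of the AppRole flow described above; the role name, policy, and TTLs are made-up examples:

```shell
# One-time setup by an operator
vault auth enable approle
vault write auth/approle/role/legacy-app \
    token_policies="legacy-app" token_ttl=1h token_max_ttl=4h

# Credentials handed to the application
vault read auth/approle/role/legacy-app/role-id
vault write -f auth/approle/role/legacy-app/secret-id

# The application logs in with them to obtain a Vault token
vault write auth/approle/login role_id="<role-id>" secret_id="<secret-id>"
```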

Secondly you may use different tools for communicating with Vault. If your application directly communicates with the Vault API nothing changes, but if not you might use the Vault Agent mechanism within Kubernetes that automatically populates files within your pod. Outside of Kubernetes you might use the Vault CLI agent command which can fetch secrets into files.
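Outside Kubernetes, that agent mode is driven by a config file; a minimal sketch is below (the file paths and the choice of AppRole auth are assumptions), run with `vault agent -config=agent.hcl`:

```hcl
vault {
  address = "https://vault.example.com:8200"
}

auto_auth {
  method "approle" {
    config = {
      role_id_file_path   = "/etc/vault/role-id"
      secret_id_file_path = "/etc/vault/secret-id"
    }
  }
  sink "file" {
    config = {
      path = "/etc/vault/token"
    }
  }
}

# Render a secret into a file the legacy app can read
template {
  source      = "/etc/vault/app-secrets.ctmpl"
  destination = "/etc/app/secrets.env"
}
```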

Thanks stuart-c for your replies,

I assume that, for communication via the API or the agent CLI command, the host is the one given by the address parameter.

listener "tcp" {
  tls_disable = 0
  address = "my-host-or-ip-address:8200"
  cluster_address = ""
  tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
  tls_key_file = "/vault/userconfig/tls-server/tls.key"
}

Port 8200 on Vault is what you need to point all your CLI/API clients at. Your configuration has TLS enabled, so the URL would start with https://

How you handle that connectivity/naming is up to you. For example, within a Kubernetes cluster you could use the built-in DNS (for example vault.vault.svc.cluster.local), assuming you include that DNS name within your TLS certificate.

Outside of the cluster you’d need a way of getting to the Vault service, which could be using an Ingress or something like an extended service mesh. You’d probably want some sort of DNS to allow the access to be discovered, again ensuring it is included within the TLS certificate.
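As one example of the Ingress option, with ingress-nginx you could pass TLS straight through to Vault so it keeps terminating its own certificate (this requires the controller's `--enable-ssl-passthrough` flag; the hostname is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault
  namespace: vault
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: vault.example.com   # must also be a SAN in Vault's TLS certificate
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vault
                port:
                  number: 8200
```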


Thank you stuart-c for your answer.