Stored unseal keys are supported, but none were found

I’m running vault on GKE using the official helm chart. This is my config:

ui = true

listener "tcp" {
  tls_disable     = 1
  address         = "[::]:8200"
  cluster_address = "[::]:8201"
}

storage "gcs" {
  bucket     = "<>"
  ha_enabled = "true"
}

service_registration "kubernetes" {}

# Example configuration for auto-unseal using Google Cloud KMS. The
# KMS keys must already exist, and the cluster must have a service account
# that is authorized to access GCP KMS.
seal "gcpckms" {
  project    = "<>"
  region     = "<>"
  key_ring   = "<>"
  crypto_key = "<>"
}

I think I configured everything correctly, but I'm still getting this unclear error:

==> Vault server configuration:

  GCP KMS Crypto Key: <>
    GCP KMS Key Ring: <>
     GCP KMS Project: <>
      GCP KMS Region: <>
           Seal Type: gcpckms
         Api Address: http://<>:8200
                 Cgo: disabled
     Cluster Address: https://vault-0.vault-internal:8201
          Listener 1: tcp (addr: "[::]:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
           Log Level: trace
               Mlock: supported: true, enabled: false
       Recovery Mode: false
             Storage: gcs (HA available)
             Version: Vault v1.4.2

==> Vault server started! Log data will stream in below:

2020-06-03T14:51:59.943Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=
2020-06-03T14:51:59.944Z [DEBUG] storage.gcs: configuring backend
2020-06-03T14:51:59.944Z [DEBUG] storage.gcs: configuration: bucket=<> chunk_size=8388608 ha_enabled=true max_parallel=0
2020-06-03T14:51:59.944Z [DEBUG] storage.gcs: creating client
2020-06-03T14:52:05.934Z [DEBUG] service_registration.kubernetes: "namespace": "<>"
2020-06-03T14:52:05.934Z [DEBUG] service_registration.kubernetes: "pod_name": "vault-0"
2020-06-03T14:52:06.056Z [DEBUG] storage.cache: creating LRU cache: size=0
2020-06-03T14:52:06.056Z [DEBUG] cluster listener addresses synthesized: cluster_addresses=[[::]:8201]
2020-06-03T14:52:06.075Z [INFO] core: stored unseal keys supported, attempting fetch
2020-06-03T14:52:06.093Z [WARN] failed to unseal core: error="stored unseal keys are supported, but none were found"
2020-06-03T14:52:06.419Z [INFO] core.autoseal: seal configuration missing, but cannot check old path as core is sealed: seal_type=recovery
2020-06-03T14:52:09.420Z [INFO] core.autoseal: seal configuration missing, but cannot check old path as core is sealed: seal_type=recovery
2020-06-03T14:52:11.093Z [INFO] core: stored unseal keys supported, attempting fetch

What am I missing? This is a clean install from scratch, not a restore.
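For anyone landing here later: this warning is typically what Vault logs when the cluster has never been initialized, since auto-unseal can only fetch stored keys after `vault operator init` has run once. A minimal first-time init, assuming the default Helm pod name `vault-0`, might look like:

```shell
# First-time initialization (sketch). With an auto-unseal seal
# configured, init produces recovery keys rather than unseal keys,
# and the seal wraps the stored key material via KMS.
kubectl exec -ti vault-0 -- vault operator init \
    -recovery-shares=5 \
    -recovery-threshold=3
```

After a successful init, the "attempting fetch" step should find the stored keys and the node unseals itself automatically.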


I’m running into the same problem. Were you able to resolve it?



I’m facing the same issue, any help please?

Hi,

Does anyone have any idea about this? I am facing the same issue.

This might be a permissions issue. The default GKE / Cloud Run service accounts don’t have access rights to GCP KMS keys.
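If it is permissions, granting the workload's service account the encrypter/decrypter role on the key is usually what's needed. A sketch with placeholder project, key, and account names:

```shell
# Placeholder names throughout. roles/cloudkms.cryptoKeyEncrypterDecrypter
# is the role the gcpckms seal needs to wrap and unwrap the stored keys.
gcloud kms keys add-iam-policy-binding vault-unseal-key \
    --keyring vault-keyring \
    --location global \
    --member "serviceAccount:vault-sa@my-project.iam.gserviceaccount.com" \
    --role roles/cloudkms.cryptoKeyEncrypterDecrypter
```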

Did this ever get resolved for anyone? I am seeing this same issue trying to get nodes to join my initialized and unsealed first node, but it always fails. I'm using AWS KMS for auto-unseal.

Please check that the KMS key and keyring you have provided are the correct ones.

Hi @PayalSasmal10,

Thanks for responding. Just to check, I arbitrarily switched my AWS profile to a user which didn’t have access to the KMS key, and changed the key to include a typo and observed the errors Vault output as a result; both prevented the server agents from starting. Switching them back, the initialized node started up, unsealed, and vault status reported a healthy node.

The second node I’m trying to get to join also spins up correctly with the same AWS KMS config used on the primary node, but won’t join, reporting the following:

Jan 07 01:08:38 security2 build-env-vars[4838]: 2021-01-07T01:08:38.176Z [INFO]  core: attempting to join possible raft leader node: leader_addr=
Jan 07 01:08:38 security2 build-env-vars[4838]: 2021-01-07T01:08:38.250Z [WARN]  core: join attempt failed: error="error during raft bootstrap init call: Error making API request.
Jan 07 01:08:38 security2 build-env-vars[4838]: URL: PUT
Jan 07 01:08:38 security2 build-env-vars[4838]: Code: 503. Errors:
Jan 07 01:08:38 security2 build-env-vars[4838]: * Vault is sealed"
Jan 07 01:08:38 security2 build-env-vars[4838]: 2021-01-07T01:08:38.250Z [ERROR] core: failed to join raft cluster: error="failed to join any raft leader node"

A little context, in case it’s relevant: I had a cluster overflow its CRL, exhausting the available resources and making it unreasonably arduous and expensive to properly revoke and tidy the number of certs produced (still not entirely sure how so many got created; there were 735k, but that’s a topic for another day). Regardless, I’m essentially recycling the previous config, which had no trouble joining other nodes. I’m starting to think that I’m skipping some important steps required when starting the cluster up. I have the certificates that were issued for the previous nodes, which are still valid, and all nodes and agents have the CA for verification, so I thought I could just turn on TLS. I have not seen any TLS errors, but I’m starting to suspect that Vault is returning an error that steers me in the wrong direction, and that I simply can’t use TLS certs, however valid they might be, until the PKI engine is in place.

Does anyone know if one can just spin up using TLS if each node has its appropriate certs? Do I need to get the leader node's certs onto the other nodes before the join will work? I’ve seen the docs around using leader certs, but the “why” of it all and what it buys me to have those in place are a little fuzzy to me.
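For what it's worth, when TLS is on, the joining node has to be able to verify the leader's API certificate; with raft storage, that is what the leader TLS fields in the `retry_join` stanza are for. A sketch, with file paths and addresses that would vary per deployment:

```hcl
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "security2"

  retry_join {
    leader_api_addr = "https://security1:8200"
    # CA bundle the joining node uses to verify the leader's server cert
    leader_ca_cert_file = "/opt/vault/tls/ca.pem"
    # Client cert/key presented to the leader, if it requires mutual TLS
    leader_client_cert_file = "/opt/vault/tls/node.pem"
    leader_client_key_file  = "/opt/vault/tls/node-key.pem"
  }
}
```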

Similarly, even if the KMS auto-unseal setup is working, is it normal to go straight to auto-unseal on initialization, or do I have to set up the network using the stock Shamir seal first and then migrate? I feel like I read, when the feature was first released, that you had to migrate to KMS, but I’m not seeing that now as I google around, and I feel like I’ve seen a few examples do it without comment.
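On that last point: a fresh cluster can be initialized directly against the auto-unseal seal; migration is only the path for a cluster that already holds a Shamir seal. If a migration is needed, it is driven with the `-migrate` flag, roughly:

```shell
# Only for an existing Shamir-sealed cluster: after adding the seal
# stanza and restarting, feed each old unseal key with -migrate until
# the threshold is reached and the seal switches over.
vault operator unseal -migrate
```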