Unsealing error after migration

Hello,
I am migrating a Vault 1.7.10 deployment with a file backend to a Helm-deployed Vault with a Raft backend.

I used this migration config:

storage_source "file" {
  path = "/vault/file/"
}
storage_destination "raft" {
  path = "/vault/data/"
}
cluster_addr = "https://127.0.0.1:8201"

vault operator migrate -config migrate.hcl

Config used in the Helm values:

 ui = true
   listener "tcp" {
     address = "[::]:8200"
     cluster_address = "[::]:8201"
     tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
     tls_key_file = "/vault/userconfig/tls-server/tls.key"
     tls_ca_cert_file = "/vault/userconfig/root-ca/rootCACert.pem"
   }
   storage "raft" {
     path = "/vault/data"
     retry_join {
       leader_api_addr = "https://vault-0.vault-internal:8200"
       leader_ca_cert_file = "/vault/userconfig/root-ca/rootCACert.pem"
       leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
       leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
     }
     retry_join {
       leader_api_addr = "https://vault-1.vault-internal:8200"
       leader_ca_cert_file = "/vault/userconfig/root-ca/rootCACert.pem"
       leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
       leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
     }
     retry_join {
       leader_api_addr = "https://vault-2.vault-internal:8200"
       leader_ca_cert_file = "/vault/userconfig/root-ca/rootCACert.pem"
       leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
       leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
     }
     autopilot {
       cleanup_dead_servers = "true"
       last_contact_threshold = "200ms"
       last_contact_failure_threshold = "10m"
       max_trailing_logs = 250000
       min_quorum = 5
       server_stabilization_time = "10s"
     }
   }
   service_registration "kubernetes" {}

The migration finishes successfully, but when I try to unseal Vault I get
Error unsealing: context deadline exceeded after I enter the last unseal key.
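
For completeness, the unseal itself is just the standard loop (shown here with an assumed threshold of 3 keys, run via kubectl exec against the vault-0 pod):

kubectl exec -ti vault-0 -- vault operator unseal   # key 1, ok
kubectl exec -ti vault-0 -- vault operator unseal   # key 2, ok
kubectl exec -ti vault-0 -- vault operator unseal   # key 3, fails with: Error unsealing: context deadline exceeded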

You’ve not described the migration procedure you used in detail, which makes me think you have not taken special precautions to ensure that Vault is not running whilst you perform the migration. This is mentioned, but not explained very well, in operator migrate - Command | Vault | HashiCorp Developer.

Running migration with Kubernetes in the picture is complicated. You need to be substantially comfortable with Kubernetes and Helm to improvise temporary changes to the Kubernetes objects, to be in a situation where you have access to the target persistent volume, but not have a running Vault. (If you want to see just how complicated, here is a post from me, with a lot of discussion following on from it: Vault backend migration from Consul to Raft (K8S, Helm, KMS auto-unseal) - #2 by maxb)
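
As a very rough sketch, the "Vault not running, but the data volume still reachable" state usually looks something like this on Kubernetes (object names assume the chart's default release name of vault; adjust for yours):

# Stop the Vault pods; the PVCs and their PVs stay behind
kubectl scale statefulset vault --replicas=0

# Perform the migration from a temporary pod that mounts the same PVC
# (or outside the cluster against a copy of the data)

# Bring Vault back up once the migration is done
kubectl scale statefulset vault --replicas=3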

For a one-time migration, you might find it easier to just run the migration entirely outside of Kubernetes, on a regular server or workstation. Then start up that migrated Raft Vault, and use the Raft snapshot save/restore functionality to save a snapshot and restore it into a new empty (just initialized) Vault cluster on Kubernetes.
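
Sketching that approach (addresses and token handling omitted; run the first command against the temporary migrated Vault and the second against the new cluster):

# On the temporary Raft Vault, once unsealed
vault operator raft snapshot save migrated.snap

# Against the freshly initialized and unsealed cluster on Kubernetes;
# -force is needed because the snapshot comes from a different cluster
vault operator raft snapshot restore -force migrated.snap

Keep in mind that after the restore, the cluster expects the unseal keys and root token from the source Vault, not the ones generated when the new cluster was initialized.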

I did the vault operator migrate offline, then copied the data into the PV using another pod, then did the Helm install. vault status gives me an initialized, sealed Vault that I cannot unseal.
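
For context, the copy step looked roughly like this (the helper pod name and PVC wiring are specific to my setup and only illustrative):

# pvc-helper is a temporary pod mounting the same PVC as vault-0 (data-vault-0) at /vault/data
kubectl cp ./vault-data-migrated/. pvc-helper:/vault/data/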

After countless tries I figured it out. For people finding this, use this migrate.hcl:

storage_source "file" {
  path = "/vault/file/"
}
storage_destination "raft" {
  path = "/vault/data/"
}
cluster_addr = "https://vault-0.vault-internal:8201"

cluster_addr should be the same as the VAULT_CLUSTER_ADDR environment variable in the pod where you're doing the migration,
and you should not define a node_id, as mentioned in the official documentation.
With that, the unseal works fine.
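
To find the right value, you can read it straight from the pod's environment (assuming the pod you're migrating on is vault-0):

kubectl exec vault-0 -- env | grep VAULT_CLUSTER_ADDR
# VAULT_CLUSTER_ADDR=https://vault-0.vault-internal:8201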

That’s interesting… the cluster_addr is used to seed the Raft configuration’s list of nodes, but I’m surprised that having it set to something unexpected causes issues at unseal time.