Hello, I’ve recently taken over managing Vault and am automating its installation onto a K8s cluster via the Helm chart. We have an existing, working Vault install on Cluster A, and I am trying to bring up a new Vault install on Cluster B. I have successfully installed Vault in HA mode and my 3 server pods are “running”. I am using a fresh PostgreSQL backend, with the database set up as described in the docs. We are also using GCP KMS, and the “seal” config is loaded into the pods, along with TLS enabled and the appropriate certs mounted as secrets.
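For reference, here is roughly how I have been confirming that the config actually made it into the pods (the /vault/config path is what the official chart mounts for me; it may differ by chart version):

# Show the rendered server config: the seal "gcpkms", storage "postgresql",
# and TLS-enabled listener stanzas should all be present.
$ kubectl exec -ti vault-0 -- sh -c 'cat /vault/config/*.hcl'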
When I run vault status immediately after an install, I see that the servers are already initialized but sealed. The pod logs say the server is in a seal-migration state, but the migration never seems to complete successfully.
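For context, this is how I have been checking the state on each pod; the JSON output includes the seal type and a migration flag, which I assume is what reflects the seal-migration state:

# "initialized", "sealed", "type" and "migration" in the JSON output show
# whether the server thinks a seal migration is in progress.
$ kubectl exec -ti vault-0 -- vault status -format=json

# Watch the server logs for seal/migration messages.
$ kubectl logs -f vault-0 | grep -iE 'seal|migrat'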
There are many moving parts here and I’m sure TLS is playing a role (I am using a self-signed cert with 3 replicas and an LB service with an external IP).
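To try to rule TLS in or out for the local CLI calls, I have been doing something like the following (VAULT_SKIP_VERIFY only affects the client, so it only tells me about the CLI side, not anything server-to-server; the second check assumes openssl is available in the image):

# Retry status with client-side certificate verification disabled.
$ kubectl exec -ti vault-0 -- sh -c 'VAULT_SKIP_VERIFY=true vault status'

# Inspect the certificate actually served on 8200.
$ kubectl exec -ti vault-0 -- sh -c \
    'echo | openssl s_client -connect 127.0.0.1:8200 2>/dev/null | openssl x509 -noout -subject -dates'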
I’d like to know what kicks off initialization automatically, and to better understand the flow of the seal-migration process. I’d also like to know how to troubleshoot it.
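One check I have sketched out, but would appreciate confirmation on, is whether the “already initialized” state is simply being read from the storage backend rather than being triggered by anything the chart runs (the vault_kv_store table name comes from the schema in the docs; the pod, user, and database names below are placeholders for my setup):

# If the backend already holds entries under core/, the servers are picking
# up an existing initialization from storage.
$ kubectl exec -ti <postgres-pod> -- psql -U <vault-user> -d <vault-db> \
    -c "SELECT path, key FROM vault_kv_store WHERE path LIKE 'core/%' LIMIT 10;"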
I am using Vault 1.6.1.
I have tried running the following command and entering 3 of the 5 existing recovery keys (from the original install on Cluster A), only to get the following error:
$ kubectl exec -ti vault-0 -- vault operator unseal -migrate
Unseal Key (will be hidden):
Error unsealing: Error making API request.
URL: PUT https://127.0.0.1:8200/v1/sys/unseal
Code: 500. Errors:
* unable to retrieve stored keys: failed to encrypt keys for storage: cipher: message authentication failed
command terminated with exit code 2
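In case it is useful, this is the extra detail I can gather around that failure:

# Server-side context from around the failed unseal attempt.
$ kubectl logs vault-0 --since=15m | grep -iE 'seal|kms|migrat|error'

# Check whether every replica fails the same way, or just vault-0.
$ for p in vault-0 vault-1 vault-2; do echo "== $p"; kubectl exec $p -- vault status; done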
Regards,
Matt