First, please don't post your unseal keys to a public forum.
The prompt asking you for your second key isn't a confirmation; it's asking for the second of your three keys. You need to provide 3 different keys.
When you initialized Vault, the default is to generate 5 keys, and you need 3 of them to unseal. A minimal sketch of that flow, assuming the default share/threshold values:
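$ vault operator init -key-shares=5 -key-threshold=3   # prints 5 unseal keys and a root token
$ vault operator unseal   # prompts for one key; repeat with 3 different keys
$ vault status            # "Unseal Progress" shows how far along you are, e.g. 2/3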
Hi @aram - Thank you.
Yes, I tried to provide all 5 of the keys. The issue is that it fails every time I enter the 3rd key, and I see the error below.
However, the unseal keys worked on one of the nodes (vault-2) but failed on vault-1.
==>
$ vault operator unseal
Unseal Key (will be hidden):
Error unsealing: Error making API request.
Then you're sending the wrong key (or a special character or space before or after the key). The keys are not validated until the 3rd key is entered.
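If you suspect a stray character from a paste, you can discard the partially entered keys and pass the key as a command argument instead of at the hidden prompt. A sketch, where <key-1> is a placeholder for one of your actual keys (note that keys passed as arguments end up in shell history):

$ vault operator unseal -reset     # throw away any unseal keys entered so far
$ vault operator unseal <key-1>    # key as an argument, so no hidden-prompt paste issues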
There isn't much to the unseal process, so it's possible that the claim the pod has is corrupted, or the pod itself has some odd communication issue. If the keys work in other pods, then it's not a Vault issue. Maybe delete the pod and any claims it has, then create a new one (see the sketch below); as long as you have a healthy cluster with at least 3 nodes, the new node will just join the cluster and replicate the data.
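For a typical Helm-chart deployment that cleanup might look roughly like this; the pod name, claim name, and namespace are assumptions based on the chart's default naming, so adjust for your setup:

$ kubectl delete pvc data-vault-1 -n vault   # claim stays "Terminating" until the pod releases it
$ kubectl delete pod vault-1 -n vault        # StatefulSet recreates the pod with a fresh volume
$ kubectl get pods -n vault -w               # watch the new pod come up and rejoin the cluster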
I tried deleting the pods and recreating them.
After investigating the vault-csi-provider logs, I found the errors below, and I assume this is what caused the failure to unseal.
2022-07-29T07:03:14.691Z [INFO] server: Finished unary gRPC call: grpc.method=/v1alpha1.CSIDriverProvider/Mount grpc.time=3.850639192s grpc.code=Unknown
err=
| error making mount request: failed to login: Error making API request.
|
| URL: POST http://vault:8200/v1/auth/kubernetes/login
| Code: 503. Errors:
|
| * Vault is sealed
2022-07-29T07:03:38.546Z [INFO] server: Finished unary gRPC call: grpc.method=/v1alpha1.CSIDriverProvider/Mount grpc.time=3.819827817s grpc.code=Unknown
err=
| error making mount request: couldn't read secret "xxx": Error making API request.
|
| URL: GET http://vault:8200/v1/xxx/data/configuration/xxx_config
| Code: 503. Errors:
|
| * Vault is sealed
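Worth noting: those 503s from the CSI provider are a symptom rather than a cause, since any request to a sealed Vault node returns 503 until it is unsealed. You can confirm which node is sealed directly; a sketch, assuming the Helm chart's default pod names and namespace:

$ kubectl exec -n vault vault-1 -- vault status   # shows "Sealed: true" until 3 keys are entered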
I got the same issue when trying to restore the cluster using the snapshot-force endpoint (bypassing the checks that ensure the auto-unseal or Shamir keys are consistent with the snapshot data).
So it is always better to use the same auto-unseal config whenever you restore from a snapshot.
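On Raft storage that forced restore would look roughly like this (a sketch; backup.snap is a placeholder file name):

$ vault operator raft snapshot restore -force backup.snap   # -force skips the auto-unseal/Shamir consistency check

After a forced restore, keep in mind you must unseal with the keys that match the snapshot's origin cluster, not the keys of the cluster you restored into.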
Hi EravinDar,
I have the same issue. Could you share your final solution?
I am not sure whether the cause is the absence of TLS (/home/ubuntu/tls/*.ca, .cert, ...), since my first deployment followed the OpenStack/Juju guide and I haven't set up the root CA (see "Managing TLS certificates" in the charm-guide 0.0.1.dev717 documentation).
Could you give me any hints about it?
Thanks!
William
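One quick way to rule TLS in or out is to check how your client is pointed at the server. A sketch using Vault's standard environment variables; the address and CA path are placeholders for your environment:

$ export VAULT_ADDR=https://vault.example.com:8200
$ export VAULT_CACERT=/home/ubuntu/tls/ca.pem   # CA bundle used to verify the server certificate
$ vault status                                  # a TLS mismatch fails outright; a sealed Vault still answers with "Sealed: true"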
You mention that this "solved the issue", but isn't /vault/data the persistent data store for Vault? Don't you lose all of your secrets by doing this?