Unable to unseal: "unable to retrieve stored keys: failed to decrypt keys from storage: cipher: message authentication failed"

I am trying to join new nodes to the Vault cluster, and when I run the unseal command I get the error below.

How can I debug and solve this issue, please?

URL: PUT http://127.0.0.1:8200/v1/sys/unseal
Code: 500. Errors:

  • unable to retrieve stored keys: failed to decrypt keys from storage: cipher: message authentication failed
command terminated with exit code 2

---->

First, please don't post your unseal keys to a public forum.

The prompt asking you for your second key isn't a confirmation; it's asking for the second of three keys. You need to provide 3 different keys.

When you initially initialized Vault, the default is to generate 5 key shares, and you need 3 of those to unseal.
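For reference, those counts are set at initialization time; the defaults look like this (5/3 are just the default values, nothing specific to your cluster):

$ vault operator init -key-shares=5 -key-threshold=3
# prints 5 unseal key shares plus an initial root token;
# any 3 of the 5 shares are enough to unseal each node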

Hi @aram - Thank you.
Yes, I tried providing all 5 of the keys. The issue is that it fails every time I pass the 3rd key, and I see the error below.

However, the same unseal keys worked on one of the nodes (vault-2) but failed on vault-1.
==>
$ vault operator unseal
Unseal Key (will be hidden):
Error unsealing: Error making API request.

URL: PUT http://127.0.0.1:8200/v1/sys/unseal
Code: 500. Errors:

  • unable to retrieve stored keys: failed to decrypt keys from storage: cipher: message authentication failed

==>

Node    Address                        State       Voter
----    -------                        -----       -----
xxxx    vault-0.vault-internal:8201    leader      true
xxxx    vault-2.vault-internal:8201    follower    true
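For context, a peer list like the above can be pulled from any unsealed node (assuming integrated raft storage, which the vault-N-internal addresses suggest):

$ vault operator raft list-peers
# prints each node's ID, cluster address, leader/follower state, and voter status
# note: vault-1 does not appear above, consistent with it never completing the unseal/join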

Then you're sending the wrong key (or there is a special character or space before or after the key). The keys are not validated until the 3rd key is entered.
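If you want to see how far the unseal has progressed before the threshold is reached, check the status output (standard vault status fields, nothing specific to your setup):

$ vault status
# while partially unsealed it reports, for example:
#   Sealed             true
#   Unseal Progress    2/3
# the shares entered so far are only combined and verified once the threshold (3) is met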

But the same keys worked on another node.

Even on the same node, I tried re-entering the same keys again.
I don't think the keys are wrong, or that there is any stray character or space issue.

Just FYI:

I'm not sure what is causing this; when I pass the key on the third attempt, it simply throws an error.

I even skipped unseal key 3 and tried unseal keys 4 and 5 instead. Still no luck; it fails on the 3rd attempt no matter what I try.

There isn't much to the unseal process. It's possible that the claim the pod has is corrupted, or that the pod itself has some odd communication issue. If the keys work in other pods, then it's not a Vault issue. Maybe delete the pod, delete any claims it has, and create a new one; as long as you have a healthy cluster with at least 3 nodes, the new node will just join the cluster and replicate the data, as sketched below.
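Roughly, on Kubernetes with the official Helm chart, that would look something like this (the vault-1 pod name and data-vault-1 PVC name are assumptions based on the default chart naming; adjust for your namespace and release):

$ kubectl get pvc                     # find the claim backing the broken node
$ kubectl delete pvc data-vault-1     # drop its storage; it stays Terminating until the pod is gone
$ kubectl delete pod vault-1          # the StatefulSet recreates the pod with a fresh volume
$ kubectl exec -ti vault-1 -- vault operator unseal   # repeat with 3 shares; the node re-joins and replicates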

After investigating the logs, I found the error below. How can I fix this?

2022-08-02T20:53:54.478Z [INFO] server: Processing unary gRPC call: grpc.method=/v1alpha1.CSIDriverProvider/Mount
2022-08-02T20:53:58.024Z [INFO] server: Finished unary gRPC call: grpc.method=/v1alpha1.CSIDriverProvider/Mount grpc.time=3.546322504s grpc.code=Unknown
err=
| error making mount request: failed to login: Error making API request.
|
| URL: POST http://vault:8200/v1/auth/kubernetes/login
| Code: 503. Errors:
|
| * Vault is sealed

I tried deleting the pods and recreating new ones.
After investigating the vault-csi-provider logs, I found the errors below, and I assume these are related to the failure to unseal.

2022-07-29T07:03:14.691Z [INFO] server: Finished unary gRPC call: grpc.method=/v1alpha1.CSIDriverProvider/Mount grpc.time=3.850639192s grpc.code=Unknown
err=
| error making mount request: failed to login: Error making API request.
|
| URL: POST http://vault:8200/v1/auth/kubernetes/login
| Code: 503. Errors:
|
| * Vault is sealed

2022-07-29T07:03:38.546Z [INFO] server: Finished unary gRPC call: grpc.method=/v1alpha1.CSIDriverProvider/Mount grpc.time=3.819827817s grpc.code=Unknown
err=
| error making mount request: couldn't read secret "xxx": Error making API request.
|
| URL: GET http://vault:8200/v1/xxx/data/configuration/xxx_config
| Code: 503. Errors:
|
| * Vault is sealed

The issue was resolved after deleting the files under /vault/data and restarting the pod.
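In command form, that recovery looks roughly like this (the pod name and data path are assumptions based on the default Helm chart; wiping /vault/data discards that node's local raft data, so only do it while the rest of the cluster is healthy):

$ kubectl exec -ti vault-1 -- sh -c 'rm -rf /vault/data/*'   # remove the corrupted local storage
$ kubectl delete pod vault-1                                 # let the StatefulSet recreate it
$ kubectl exec -ti vault-1 -- vault operator unseal          # repeat with 3 key shares; the node re-joins and replicates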

I got the same issue when trying to restore the cluster using the snapshot-force endpoint (which bypasses the checks ensuring that the auto-unseal or Shamir keys are consistent with the snapshot data).
So it is always a better approach to use the same auto-unseal config whenever you restore from a snapshot.
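For reference, the two restore variants differ only in that consistency check (standard raft snapshot CLI; backup.snap is just a placeholder filename):

$ vault operator raft snapshot restore backup.snap
# refuses the restore if the snapshot's seal/unseal key configuration doesn't match this cluster

$ vault operator raft snapshot restore -force backup.snap
# skips that check (the snapshot-force endpoint); the restored data can only be decrypted
# if the cluster uses the same auto-unseal key or Shamir key shares the snapshot was taken under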

Hi EravinDar,
I have the same issue. Could you share your final solution?
I am not sure whether the cause is the absence of TLS (/home/ubuntu/tls/*.ca .cert …), since my first deployment followed the OpenStack/Juju guide and I haven't processed the root CA yet (Managing TLS certificates — charm-guide 0.0.1.dev717 documentation).
Could you give me any hints about it?
Thanks!
William

@WilliamLee1970 TLS is completely unrelated to the subject of this topic.

I strongly recommend you start a new topic for your issue, and carefully explain your own problem, without assuming it is related to past topics.