Vault's auto-unseal behaviour when we change the KMS key ID (Auto Unseal with KMS)

Hey Team,

We have been running Vault in our k8s clusters via the Helm chart. To keep things simple, we opted for the auto-unseal mechanism using AWS KMS keys.

Something rather odd we have noticed: after Vault is initialised and unsealed, updating the key ID in the “seal” section of the ConfigMap still results in Vault unsealing, even though it was originally initialised and unsealed with a different KMS key.
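
For reference, this is roughly what our seal stanza looks like (the region and key ARN here are placeholders):

```hcl
# "seal" section of the Vault ConfigMap (values are placeholders)
seal "awskms" {
  region     = "eu-west-1"
  kms_key_id = "arn:aws:kms:eu-west-1:111122223333:key/KMS-KEY-A"
}
```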

Steps to reproduce:

  • Initialise Vault using “KMS KEY A” (with auto-unseal, it unseals as part of init)
  • Add some data just as a test case
  • Update the ConfigMap’s “seal” section to a different key ID
  • Restart the Vault pods (rough commands below)
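
In rough command form (a sketch rather than a copy-paste script; resource names like vault-config and the pod label are placeholders for whatever your Helm release creates):

```shell
# 1. Initialise Vault; with an awskms seal it auto-unseals, so init
#    returns recovery keys rather than unseal keys.
kubectl exec -ti vault-0 -- vault operator init

# 2. Write some test data (assumes a kv secrets engine mounted at secret/).
kubectl exec -ti vault-0 -- vault kv put secret/test hello=world

# 3. Change kms_key_id in the seal stanza to "KMS KEY B".
kubectl edit configmap vault-config

# 4. Restart the Vault pods so they re-read the config.
kubectl delete pod -l app.kubernetes.io/name=vault
```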

We then see the pods come back unsealed, with all the data accessible, exactly as it was with the original KMS key.

Is this expected behaviour? It does not feel like it.

If this is expected, would it not defeat the purpose of having a seal migration process from one KMS key to another?

I’m not totally sure about this, but it looks like the key ID from the Vault config file is only used for new encryption operations. The key ID used for any particular decryption operation is stored with the saved encrypted data.
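
As far as I know that matches how AWS KMS itself behaves: the ciphertext blob produced by Encrypt embeds the key’s metadata, so Decrypt of a symmetric-key ciphertext doesn’t even require a key ID, which is presumably why the configured ID can be ignored on the read path. An illustrative (untested) AWS CLI sketch:

```shell
# Encrypt has to name a key explicitly...
aws kms encrypt --key-id "$KMS_KEY_A" \
  --plaintext fileb://root-key.bin \
  --output text --query CiphertextBlob

# ...but Decrypt of a symmetric ciphertext needs no --key-id:
# KMS resolves the key from metadata embedded in the blob.
aws kms decrypt --ciphertext-blob fileb://stored-blob.bin \
  --output text --query Plaintext
```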

So I think your old KMS key is still being used for unseal, but if you rekeyed Vault now, it would switch to using the new KMS key. (Untested!)

Are you talking about individual nodes having a different key in their config, or the entire cluster? Because that would make no sense. On your “steps”: can you be more literal and provide the actual commands and output of what you’re doing and seeing?

Sorry for the delayed reply.

I think I found the issue.
It appears that after the initial setup, the KMS key ID is picked up from storage rather than from the ConfigMap.
I confirmed this by denying access to the specific key that was originally used to unseal: with that key blocked, Vault failed to unseal even though the config pointed at the new key.
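
For anyone wanting to reproduce the check, this is roughly what I did (the key ID is a placeholder; remember to re-enable the key afterwards):

```shell
# Temporarily disable the original key so Vault can no longer use it.
aws kms disable-key --key-id "$KMS_KEY_A_ID"

# Restart a pod; it now fails to auto-unseal even though the
# ConfigMap points at the new key.
kubectl delete pod vault-0
kubectl logs vault-0 | grep -i seal

# Re-enable the original key once done testing.
aws kms enable-key --key-id "$KMS_KEY_A_ID"
```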