Consul to Raft backend migration with existing audit storage (helm chart)

I have an HA Vault (v1.12) instance (3 replicas) with a Consul (v1.13) backend (3 replicas). It is deployed using the official vault-helm chart.

I am trying to migrate from Consul to Raft integrated storage, but I'm hitting a problem: when I enable data storage in the chart values, the upgrade fails because the StatefulSet's volumeClaimTemplates cannot be changed. Here's what I'm encountering:

```
helm upgrade vault . --values values.yaml --values values/vault.yaml -n $esimw --install
Error: UPGRADE FAILED: cannot patch "vault" with kind StatefulSet: StatefulSet.apps "vault" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
```
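For reference, the change that triggers this is enabling the persistent data volume alongside raft in the chart values, roughly like the sketch below (key names follow the standard vault-helm options; the storage size and listener details are illustrative, not my exact config):

```yaml
server:
  dataStorage:
    enabled: true          # adds a volumeClaimTemplate to the StatefulSet -> forbidden patch
    size: 10Gi
  ha:
    enabled: true
    raft:
      enabled: true        # switch from the Consul backend to integrated storage
      config: |
        ui = true

        listener "tcp" {
          address         = "[::]:8200"
          cluster_address = "[::]:8201"
        }

        storage "raft" {
          path = "/vault/data"
        }

        service_registration "kubernetes" {}
```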

Based on the Storage Migration tutorial - Consul to Integrated Storage | Vault | HashiCorp Developer, I assumed I could update my chart configuration, deploy, and delete one pod at a time to get the migration rolling.

I got around the StatefulSet issue by uninstalling and reinstalling the updated chart. However, Vault is now sealed and the pods restart faster than I can work with them; by the time I exec in to do anything, the pod has already restarted.

Does anyone have any recommendations on how to successfully migrate from Consul to Raft using the Helm chart?

To do this sort of migration, you need to scale your StatefulSet down to zero replicas and manually create a pod that does not run a Vault server, in which you can run the migrate operation.

Link to last time I thought this through in detail: Vault backend migration from Consul to Raft (K8S, Helm, KMS auto-unseal) - #2 by maxb
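Roughly, the sequence looks like the sketch below. Treat it as an outline only: the `vault` namespace, the `data-vault-0` PVC name, the image tag, and the Consul address are assumptions that depend on your release name and chart values.

```shell
# 1. Stop every Vault server so nothing writes to Consul during the migration
#    (leave Consul itself running)
kubectl -n vault scale statefulset vault --replicas=0

# 2. Start a throwaway pod with the Vault binary that does NOT run a server,
#    mounting the PVC that the new raft vault-0 will use for its data
cat <<'EOF' | kubectl -n vault apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: vault-migrate
spec:
  containers:
    - name: vault
      image: hashicorp/vault:1.12.3       # match your Vault version
      command: ["sleep", "infinity"]      # keep the pod alive; no Vault server
      volumeMounts:
        - name: data
          mountPath: /vault/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-vault-0           # PVC name depends on your release/chart values
EOF

# 3. Exec in, write a migrate config, and run the offline migration
kubectl -n vault exec -it vault-migrate -- sh

cat > /tmp/migrate.hcl <<'EOF'
storage_source "consul" {
  address = "consul-server.vault.svc:8500"   # point at your Consul client/server service
  path    = "vault/"
}

storage_destination "raft" {
  path    = "/vault/data"
  node_id = "vault-0"
}

cluster_addr = "https://vault-0.vault-internal:8201"
EOF

vault operator migrate -config=/tmp/migrate.hcl
```

Once the migrate command finishes, delete the helper pod, deploy the raft-enabled chart values, unseal vault-0, and have the remaining replicas join with `vault operator raft join` (or via `retry_join` in the raft config) before unsealing them.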

Thank you very much for the quick reply and the link. What you've outlined seems like it may work. I thought I could use one of the running Vault pods, but I will try your method instead. I'm hoping to have something working in the next couple of days and will report back if so.

I replied on this thread since it's still active and covers the same issue. Thanks!