Hey there,
I have just deployed Vault on Kubernetes using the Helm chart, with these values:
injector:
  enabled: true
csi:
  enabled: true
server:
  dataStorage:
    enabled: true
    size: 5Gi
    storageClass: cephfs
    accessMode: ReadWriteMany
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        storage "raft" {
          path = "/vault/data"
          retry_join {
            leader_api_addr = "http://vault-0.vault-internal:8200"
          }
          retry_join {
            leader_api_addr = "http://vault-1.vault-internal:8200"
          }
          retry_join {
            leader_api_addr = "http://vault-2.vault-internal:8200"
          }
        }
        service_registration "kubernetes" {}
ui:
  enabled: true
All the pods are deployed correctly. I then issue vault operator init on vault-0 and unseal all the pods in order (vault-0, then vault-1, and so on).
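For reference, the init/unseal sequence I run is sketched below. It only echoes the kubectl commands rather than executing them, so it is safe to run anywhere; the pod names assume the 3-replica StatefulSet in the vault namespace, and in practice each unseal is repeated once per required key share:

```shell
# Dry-run sketch: print the init/unseal commands instead of running them.
# Pod names assume the default 3-replica StatefulSet (vault-0..vault-2).
echo "kubectl exec -it vault-0 -n vault -- vault operator init"
for pod in vault-0 vault-1 vault-2; do
  # Each unseal prompts for a key; repeat until the unseal threshold is met.
  echo "kubectl exec -it $pod -n vault -- vault operator unseal"
done
```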
After that, all the pods are running and healthy, and the Raft cluster reports all three peers:
kubectl exec -it vault-0 -n vault -- vault operator raft list-peers
Node Address State Voter
---- ------- ----- -----
vault-0 vault-0.vault-internal:8201 leader true
vault-1 vault-1.vault-internal:8201 follower true
vault-2 vault-2.vault-internal:8201 follower true
But when I intentionally kill any of the Vault pods, it fails to start up again. I would expect it to come back in a sealed state and simply wait to be unsealed, but instead the pod goes into CrashLoopBackOff with this error:
kubectl logs vault-0 -n vault
Error initializing storage of type raft: failed to create fsm: failed to open bolt file: invalid database
2026-01-03T16:52:15.140Z [INFO] proxy environment: http_proxy="" https_proxy="" no_proxy=""
2026-01-03T16:52:15.140Z [WARN] storage.raft.fsm: raft FSM db file has wider permissions than needed: needed=-rw------- existing=-rw-rw----
I have completely redeployed the Vault Helm chart 3-4 times now, and each time the deployment and initialization work perfectly - but as soon as any pod is restarted or killed, it never comes back up.
What am I missing here?