Investigate and fix whatever is causing PVCs to be provisioned with incorrect file permissions, such that the services intended to use them cannot write to them.
dataStorage:
  enabled: true
  # Size of the PVC created
  size: 10Gi
  # Location where the PVC will be mounted.
  mountPath: "/vault/data"
  # Name of the storage class to use. If null it will use the
  # configured default Storage Class.
  storageClass: null
  # Access Mode of the storage device being used for the PVC
  accessMode: ReadWriteOnce
  # Annotations to apply to the PVC
  annotations: {}
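For context, this block is from the values of the HashiCorp Vault Helm chart, where it sits under the server: key. If the cluster's default StorageClass is NFS-backed - which appears to be the case here - an override along these lines is what steers the PVC onto NFS. The class name nfs-client is a hypothetical example; substitute whatever name your NFS provisioner actually registers:

# Sketch of a values override; "nfs-client" is an assumed
# StorageClass name, not something from this thread.
server:
  dataStorage:
    enabled: true
    storageClass: nfs-client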
Today I learned: it seems that Kubernetes is inconsistent about whether it performs permission initialization during volume setup - hostPath and NFS are among the volume types for which it does not.
Unfortunately the Kubernetes documentation fails to make this clear - I had to dig around in the source code to confirm.
As a result, when using volumes of these types with any deployment that expects to run its pods as non-root and relies on securityContext.fsGroup to take care of the permissions - which is probably a lot of them these days - you’ll likely see failures of this sort and have to adjust the permissions manually.
I have no experience of OpenShift, only Kubernetes.
The Kubernetes setting within securityContext that triggers automatic permission setting is fsGroup.
However, during the course of this thread, I learnt that the Kubernetes built-in volume support for NFS does not implement that.
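To make that concrete, here is a minimal sketch of the pattern in question - a non-root pod counting on fsGroup to make its volume writable. All names here are illustrative, not taken from the chart:

# Minimal sketch of a pod relying on fsGroup for volume permissions.
# On supported volume types, Kubernetes applies group ownership (GID 1000)
# to the volume at mount time; on hostPath and NFS it silently does not,
# so the touch below would fail with "Permission denied".
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo          # illustrative name
spec:
  securityContext:
    runAsUser: 100            # non-root user
    runAsNonRoot: true
    fsGroup: 1000             # triggers permission initialization (where supported)
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "touch /data/probe && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-claim   # illustrative PVC name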
You mentioned using NFS.
As I have never used NFS with either Kubernetes or OpenShift, I’m not certain what the permissions would look like by default, or whether the default permissions would even allow further chown-ing.
Ultimately, the issue is that the permissions do not allow Vault to write to its storage.
I suggest that you first attempt to solve it manually - by accessing the NFS volume and seeing whether you can change the permissions; that will prove the concept of what needs to be automated.
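Once the manual chown proves out, the usual way to automate it is an initContainer that runs as root and fixes ownership before Vault starts. The following is only a sketch: UID 100 / GID 1000 are assumptions based on the Vault image's defaults, and I believe the chart exposes server.extraInitContainers and names its data volume "data", but verify both against your chart version:

# Sketch of an initContainer that fixes ownership on the data volume
# before Vault starts. UID 100 / GID 1000 are assumed Vault defaults -
# check what your Vault image actually runs as.
server:
  extraInitContainers:
    - name: fix-data-permissions
      image: busybox
      command:
        - sh
        - -c
        - chown -R 100:1000 /vault/data && chmod -R g+rwX /vault/data
      securityContext:
        runAsUser: 0          # must run as root to chown
      volumeMounts:
        - name: data          # assumed name of the chart's data volume
          mountPath: /vault/data

One caveat: NFS root squashing can prevent even a root container from chown-ing the export, which is exactly why proving the manual change works first is worthwhile - if root is squashed, the fix has to happen on the NFS server side instead.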