Trying to deploy the vault helm chart in k8s as HA with integrated storage

I’m trying to deploy vault-helm 0.8.0 onto a 5 node cluster.

I cannot find where to override the name of the PV and PVC to point them to the vSphere disk I have created. I also tried the standalone mode and the same issue occurs.

My assumption is that the pods are never assigned to a node because the PVC is unbound.

Any help would be greatly appreciated.

$ kubectl version --short 
Client Version: v1.20.1
Server Version: v1.20.1
$ helm version --short 

$ kubectl get nodes
NAME                  STATUS   ROLES                  AGE     VERSION
kub-01.basement.lab   Ready    infra                  3d11h   v1.20.1
kub-02.basement.lab   Ready    infra                  3d11h   v1.20.1
kub-03.basement.lab   Ready    compute                3d11h   v1.20.1
kub-04.basement.lab   Ready    compute                3d11h   v1.20.1
kub-05.basement.lab   Ready    compute                3d11h   v1.20.1
kub-master            Ready    control-plane,master   3d11h   v1.20.1

$ cat vault-ha.yaml
server:
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
  dataStorage:
    enabled: true
    annotations: |

$ helm install vault hashicorp/vault --namespace vault -f vault-ha.yaml
NAME: vault
LAST DEPLOYED: Tue Dec 29 06:26:34 2020
STATUS: deployed
Thank you for installing HashiCorp Vault!

$ kubectl describe statefulset vault -n vault
Volume Claims:
  Name:          data
  Labels:        <none>
  Annotations:   <none>
  Capacity:      10Gi
  Access Modes:  [ReadWriteOnce]
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  65s   statefulset-controller  create Pod vault-0 in StatefulSet vault successful
  Normal  SuccessfulCreate  64s   statefulset-controller  create Pod vault-1 in StatefulSet vault successful
  Normal  SuccessfulCreate  63s   statefulset-controller  create Pod vault-2 in StatefulSet vault successful

$ kubectl describe pods vault-0 -n vault 
  Type           Status
  PodScheduled   False
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)                                               
    ClaimName:  data-vault-0
    ReadOnly:   false
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      vault-config
    Optional:  false
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)                                                                      
    SizeLimit:  <unset>
    Type:        Secret (a volume populated by a Secret)
    SecretName:  vault-token-glfkp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
  Type     Reason            Age                From               Message                                                                         
  ----     ------            ----               ----               -------                                                                         
  Warning  FailedScheduling  20s (x2 over 20s)  default-scheduler  0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.

$ kubectl exec -n vault -ti vault-0 -- vault status
Error from server (BadRequest): pod vault-0 does not have a host assigned

$ k describe pv vaultpv
Name:            vaultpv
Labels:          <none>
Annotations:     <none>
Finalizers:      []
Status:          Available
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
    Type:               vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:         [ds0] /vault
    FSType:             ext4
Events:                 <none>

I also tried pre-creating the claim to see if it’ll use it …

$ kubectl describe pvc pvc0001
Name:          pvc0001
Namespace:     default
Status:        Bound
Volume:        pv0001
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
Finalizers:    []
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>

Hi @aram535,

I am assuming you want to provision storage manually because no dynamic storage provisioner is available. You can provision the persistent volumes manually, but the chart always creates the persistent volume claims.
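Based on the StatefulSet describe output in the question, the claim template the chart renders looks roughly like this (a sketch reconstructed from that output, not copied from the chart source):

```yaml
# Sketch of the volumeClaimTemplates section the Vault chart renders
# into its StatefulSet (field values taken from the describe output).
volumeClaimTemplates:
  - metadata:
      name: data            # yields PVCs named data-vault-0, data-vault-1, ...
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi     # comes from dataStorage.size
      # storageClassName is only set when dataStorage.storageClass is non-null
```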

Take a look at these options in the chart’s values.yaml:

  # This configures the Vault Statefulset to create a PVC for data
  # storage when using the file or raft backend storage engines.
  # See the Vault storage documentation to know more.
  dataStorage:
    enabled: true
    # Size of the PVC created
    size: 10Gi
    # Location where the PVC will be mounted.
    mountPath: "/vault/data"
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: null
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce
    # Annotations to apply to the PVC
    annotations: {}

If a persistent volume that matches the specifications of the PVC (size, storageClass, accessMode) already exists, the existing PV will be bound to the PVC created by the Vault chart.
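For example, if the manually created PVs carry a named storage class, the chart's PVCs can be pointed at the same class through the values file (a sketch; `vault-storage` is a hypothetical class name):

```yaml
# vault-ha.yaml (excerpt) -- hypothetical storageClass name
server:
  dataStorage:
    enabled: true
    size: 10Gi
    accessMode: ReadWriteOnce
    storageClass: vault-storage   # must match storageClassName on the PVs
```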



Hi @Nick-Triller,
Thanks for the reply. I wasn’t sure of the assumption, as I had no idea what other options I had. On my first try I had no PV or PVC created and wanted to use each node’s local storage. That didn’t work, and that’s when I saw the message: “6 pod has unbound immediate PersistentVolumeClaims”.

I then looked up how to create the PV and the PVC and tried to match the requirements, but the pods are still not binding to the PVC.

I’d appreciate any other avenues to try, or a pointer to anything I missed in creating the PV/PVC.


Hi @aram535,

the Vault Helm chart creates the PVCs. You should only create the PVs.

If your Kubernetes cluster had a dynamic storage provisioner, the provisioner would have detected PVCs without a bound PV and provisioned PVs for them automatically. You wouldn’t get the error message “6 pod has unbound immediate PersistentVolumeClaims” if that were the case.

Kubernetes continuously checks whether any existing PV satisfies the requirements (size, access mode, storage class) of a PVC that has no bound PV. When it finds one, Kubernetes binds the PVC to that PV. Kubernetes is designed this way so that cluster admins can pre-provision PVs; developers can then create PVCs at a later point in time and the pre-provisioned storage is bound to them automatically. None of this is specific to Vault; it is general Kubernetes functionality.

To summarize, you have to create the PVs manually. It’s important the storageClass and accessMode of the PVs is the same as in the PVCs the Vault chart creates, otherwise Kubernetes won’t bind the PVs to the PVCs. You can customize the storageClass and accessMode of the PVCs with the chart values from the post above. The size of the PVs must be equal or greater the size specified in the PVCs.