Trying to deploy the Vault Helm chart in k8s as HA with integrated storage

I’m trying to deploy vault-helm 0.8.0 onto a 5 node cluster.

I cannot find where to override the name of the PV and PVC to point them to the vSphere disk I have created. I also tried standalone mode, and the same issue occurs.

The pods are never assigned to a node, presumably because the PVC is never bound (my assumption).

Any help would be greatly appreciated.

$ kubectl version --short 
Client Version: v1.20.1
Server Version: v1.20.1
$ helm version --short 
v3.4.2+g23dd3af

$ kubectl get nodes
NAME                  STATUS   ROLES                  AGE     VERSION
kub-01.basement.lab   Ready    infra                  3d11h   v1.20.1
kub-02.basement.lab   Ready    infra                  3d11h   v1.20.1
kub-03.basement.lab   Ready    compute                3d11h   v1.20.1
kub-04.basement.lab   Ready    compute                3d11h   v1.20.1
kub-05.basement.lab   Ready    compute                3d11h   v1.20.1
kub-master            Ready    control-plane,master   3d11h   v1.20.1

$ cat vault-ha.yaml
server:
    ha:
        enabled: true 
        replicas: 3
        raft:
            enabled: true
            setNodeId: true
    ingress:
        enabled: true
        externalVaultAddr: 192.168.1.3
        annotations: |
            kubernetes.io/ingress.class: nginx
            kubernetes.io/tls-acme: 'true'

$ helm install vault hashicorp/vault --namespace vault -f vault-ha.yaml
NAME: vault
LAST DEPLOYED: Tue Dec 29 06:26:34 2020
NAMESPACE: vault
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing HashiCorp Vault!

$ kubectl describe statefulset vault -n vault
....
Volume Claims:
  Name:          data
  StorageClass:  
  Labels:        <none>
  Annotations:   <none>
  Capacity:      10Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  65s   statefulset-controller  create Pod vault-0 in StatefulSet vault successful
  Normal  SuccessfulCreate  64s   statefulset-controller  create Pod vault-1 in StatefulSet vault successful
  Normal  SuccessfulCreate  63s   statefulset-controller  create Pod vault-2 in StatefulSet vault successful

$ kubectl describe pods vault-0 -n vault 
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)                                               
    ClaimName:  data-vault-0
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      vault-config
    Optional:  false
  home:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)                                                                      
    Medium:
    SizeLimit:  <unset>
  vault-token-glfkp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  vault-token-glfkp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s                                                                         
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s                                                                       
Events:
  Type     Reason            Age                From               Message                                                                         
  ----     ------            ----               ----               -------                                                                         
  Warning  FailedScheduling  20s (x2 over 20s)  default-scheduler  0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.

$ kubectl exec -n vault -ti vault-0 -- vault status
Error from server (BadRequest): pod vault-0 does not have a host assigned

$ k describe pv vaultpv
Name:            vaultpv
Labels:          <none>
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    
Status:          Available
Claim:           
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:         
Source:
    Type:               vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:         [ds0] /vault
    FSType:             ext4
    StoragePolicyName:  
Events:                 <none>

I also tried pre-creating the claim to see if it’ll use it …

$ kubectl describe pvc pvc0001
Name:          pvc0001
Namespace:     default
StorageClass:  
Status:        Bound
Volume:        pv0001
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>

Hi @aram535,

I am assuming you want to provision storage manually because no dynamic storage provisioner is available. You can provision the persistent volumes manually, but the chart always creates the persistent volume claims.

Take a look at these options in the chart’s values.yaml:

  # This configures the Vault Statefulset to create a PVC for data
  # storage when using the file or raft backend storage engines.
  # See https://www.vaultproject.io/docs/configuration/storage/index.html to know more
  dataStorage:
    enabled: true
    # Size of the PVC created
    size: 10Gi
    # Location where the PVC will be mounted.
    mountPath: "/vault/data"
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: null
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce
    # Annotations to apply to the PVC
    annotations: {}
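
For example, your vault-ha.yaml could set these options explicitly. This is only a sketch: the storage class name below is a placeholder, and you can leave storageClass unset (null) if your PVs have no storage class.

  server:
    dataStorage:
      enabled: true
      size: 10Gi
      accessMode: ReadWriteOnce
      # placeholder name; must match the storageClassName on your PVs,
      # or leave it null if your PVs are classless
      storageClass: vsphere-vault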

If a persistent volume that matches the specifications of the PVC (size, storageClass, accessMode) already exists, the existing PV will be bound to the PVC created by the Vault chart.
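
For example, a PV matching the chart's default PVC (10Gi, ReadWriteOnce, no storage class) could look roughly like the sketch below; the name is a placeholder and the volumePath is copied from your describe output:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: vault-data-0              # placeholder name
  spec:
    capacity:
      storage: 10Gi                 # must be >= the size requested by the chart's PVC
    accessModes:
      - ReadWriteOnce               # must match the PVC accessMode
    persistentVolumeReclaimPolicy: Retain
    vsphereVolume:
      volumePath: "[ds0] /vault"    # from your PV; each replica needs its own disk
      fsType: ext4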

Best
Nick

Hi @Nick-Triller,
Thanks for the reply. I'm not sure about the assumption, since I didn't know what other options I had. On my first try I had no PV or PVC created and wanted to use each node's local storage. That didn't work, and that's when I saw the message: "6 pod has unbound immediate PersistentVolumeClaims".

I then looked up how to create the PV and the PVC and tried to match the requirements, but the pods are still not binding to the PVC.

I’d appreciate any other avenues that I can try or if you can see anything I missed in creating the PV/PVC.

Thanks.

Hi @aram535,

The Vault Helm chart creates the PVCs; you should only create the PVs.

If your Kubernetes cluster had a dynamic storage provisioner, the provisioner would have detected the PVCs that don't have PVs bound and would have provisioned PVs automatically. You wouldn't get the error message "6 pod has unbound immediate PersistentVolumeClaims" if that were the case.
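
You can quickly check whether a default StorageClass (and therefore a dynamic provisioner) is configured:

$ kubectl get storageclass

If no class is marked (default), the PVCs will stay Pending until matching pre-provisioned PVs appear.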

Kubernetes continuously checks whether a PV exists that satisfies the requirements (size, access mode, storage class) of a PVC that has no PV bound yet. If it finds one, it binds the PVC to that PV. Kubernetes is designed this way to let cluster admins pre-provision PVs; developers can create PVCs at a later point in time, and the pre-provisioned storage will be bound to them automatically. None of this is directly Vault related; it is general Kubernetes functionality.
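
You can watch this happening once matching PVs exist:

$ kubectl get pvc -n vault   # each claim should go from Pending to Bound
$ kubectl get pv             # the matching PVs should go from Available to Bound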

To summarize, you have to create the PVs manually. It's important that the storageClass and accessMode of the PVs are the same as in the PVCs the Vault chart creates; otherwise Kubernetes won't bind the PVs to the PVCs. You can customize the storageClass and accessMode of the PVCs with the chart values from the post above. The size of the PVs must be equal to or greater than the size specified in the PVCs.
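
Also note that with replicas: 3 the StatefulSet creates one claim per pod (data-vault-0, data-vault-1, data-vault-2), so you need three PVs that each satisfy these requirements, each backed by its own vSphere disk. Roughly (the manifest file names below are placeholders):

$ kubectl apply -f vault-pv-0.yaml -f vault-pv-1.yaml -f vault-pv-2.yaml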

Best
Nick