Vault pod going into CrashLoopBackOff even though it is unsealed

I have a HashiCorp Vault HA-mode deployment running with 1 replica. I completed the unseal procedure by providing 3 keys, but despite this the pod keeps restarting.

To obtain the keys I ran:

kubectl exec -it vault-0 -- sh
vault operator init

To unseal, I ran the following (once for each of 3 unique keys):

vault operator unseal

and on the 3rd attempt Vault confirms that it is unsealed:

Unseal Key (will be hidden): 
Key                    Value
---                    -----
Seal Type              shamir
Initialized            true
Sealed                 false
Total Shares           5
Threshold              3
Version                1.9.2
Storage Type           consul
Cluster Name           vault-cluster-1g78888f
Cluster ID             8cc8888c-c88d-5858-bd88-8888873f88k4
HA Enabled             true
HA Cluster             n/a
HA Mode                standby
Active Node Address    <none>
/ $ command terminated with exit code 137

The output shows that Vault has been unsealed (Sealed: false), but the pod restarts immediately afterwards.
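For what it's worth, the seal state can also be re-checked at any time from inside the pod with the standard status command, which prints the same table:

vault status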

I have scaled the deployment down from the standard 3 replicas to 1 because of resource constraints on my cluster. The resource requests/limits are also reduced from the chart defaults for the same reason.
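Roughly, the Helm values override I used for the scale-down looks like the following (reconstructed from memory rather than copied from the actual file, so treat the key names as approximate):

server:
  ha:
    enabled: true
    replicas: 1
  resources:
    requests:
      cpu: 250m
      memory: 50Mi
    limits:
      cpu: 250m
      memory: 1Gi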

Logs for the pod:

kubectl logs vault-0
    Api Address: http://10.222.222.22:8200
                     Cgo: disabled
         Cluster Address: https://vault-0.vault-internal:8201
              Go Version: go1.17.5
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: consul (HA available)
                 Version: Vault v1.9.2
             Version Sha: 873f88k48888873f88k48888873f88k4

==> Vault server started! Log data will stream in below:

2022-06-24T12:02:02.334Z [INFO]  proxy environment: http_proxy="\"\"" https_proxy="\"\"" no_proxy="\"\""
2022-06-24T12:02:02.334Z [WARN]  storage.consul: appending trailing forward slash to path
2022-06-24T12:02:02.394Z [INFO]  core: Initializing VersionTimestamps for core
==> Vault shutdown triggered
2022-06-24T12:03:16.005Z [INFO]  service_registration.consul: shutting down consul backend

My Vault is deployed with this config: vault.yml

Consul is deployed with this config: consul.yml

Does the unseal procedure require a quorum of exactly 3 replicas?

My expectation is that Vault should still unseal even when running with 1 replica.

What am I missing?

Please post the contents of your values.yml file; I'm not going to click on an unknown link.

Most likely you have ha.replicas set to a value that forces a minimum number of healthy nodes before the cluster becomes available.
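In the chart's values.yaml that setting would look something like this (illustrative only; check what you actually have):

server:
  ha:
    enabled: true
    replicas: 3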

@aram here is my values.yml:

---
# Source: vault/templates/server-disruptionbudget.yaml
# PodDisruptionBudget to prevent degrading the server cluster through
# voluntary cluster changes.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: vault
  namespace: labs
  labels:
    helm.sh/chart: vault-0.19.0
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: vault
      app.kubernetes.io/instance: vault
      component: server
---
# Source: vault/templates/injector-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-agent-injector
  namespace: labs
  labels:
    app.kubernetes.io/name: vault-agent-injector
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
---
# Source: vault/templates/server-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault
  namespace: labs
  labels:
    helm.sh/chart: vault-0.19.0
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
---
# Source: vault/templates/server-config-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-config
  namespace: labs
  labels:
    helm.sh/chart: vault-0.19.0
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
data:
  extraconfig-from-values.hcl: |-
    disable_mlock = true
    ui = true
     
    listener "tcp" {
      tls_disable = 1
      address     = "0.0.0.0:8200"
    }
    
    storage "consul" {
      path = "vault"
      address = "consul-consul-server:8500"
    }
---
# Source: vault/templates/injector-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vault-agent-injector-clusterrole
  labels:
    app.kubernetes.io/name: vault-agent-injector
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
rules:
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["mutatingwebhookconfigurations"]
  verbs: 
    - "get"
    - "list"
    - "watch"
    - "patch"
---
# Source: vault/templates/injector-clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-agent-injector-binding
  labels:
    app.kubernetes.io/name: vault-agent-injector
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vault-agent-injector-clusterrole
subjects:
- kind: ServiceAccount
  name: vault-agent-injector
  namespace: labs
---
# Source: vault/templates/server-clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-server-binding
  labels:
    helm.sh/chart: vault-0.19.0
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault
  namespace: labs
---
# Source: vault/templates/server-discovery-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: labs
  name: vault-discovery-role
  labels:
    helm.sh/chart: vault-0.19.0
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "update", "patch"]
---
# Source: vault/templates/server-discovery-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vault-discovery-rolebinding
  namespace: labs
  labels:
    helm.sh/chart: vault-0.19.0
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vault-discovery-role
subjects:
- kind: ServiceAccount
  name: vault
  namespace: labs
---
# Source: vault/templates/injector-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: vault-agent-injector-svc
  namespace: labs
  labels:
    app.kubernetes.io/name: vault-agent-injector
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
  
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app.kubernetes.io/name: vault-agent-injector
    app.kubernetes.io/instance: vault
    component: webhook
---
# Source: vault/templates/server-ha-active-service.yaml
# Service for active Vault pod
apiVersion: v1
kind: Service
metadata:
  name: vault-active
  namespace: labs
  labels:
    helm.sh/chart: vault-0.19.0
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm

spec:
  publishNotReadyAddresses: true
  ports:
    - name: http
      port: 8200
      targetPort: 8200
    - name: http-internal
      port: 8201
      targetPort: 8201
  selector:
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    component: server
    vault-active: "true"
---
# Source: vault/templates/server-ha-standby-service.yaml
# Service for standby Vault pod
apiVersion: v1
kind: Service
metadata:
  name: vault-standby
  namespace: labs
  labels:
    helm.sh/chart: vault-0.19.0
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm

spec:
  publishNotReadyAddresses: true
  ports:
    - name: http
      port: 8200
      targetPort: 8200
    - name: http-internal
      port: 8201
      targetPort: 8201
  selector:
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    component: server
    vault-active: "false"
---
# Source: vault/templates/server-headless-service.yaml
# Service for Vault cluster
apiVersion: v1
kind: Service
metadata:
  name: vault-internal
  namespace: labs
  labels:
    helm.sh/chart: vault-0.19.0
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm

spec:
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: "http"
      port: 8200
      targetPort: 8200
    - name: http-internal
      port: 8201
      targetPort: 8201
  selector:
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    component: server
---
# Source: vault/templates/server-service.yaml
# Service for Vault cluster
apiVersion: v1
kind: Service
metadata:
  name: vault
  namespace: labs
  labels:
    helm.sh/chart: vault-0.19.0
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm

spec:
  # We want the servers to become available even if they're not ready
  # since this DNS is also used for join operations.
  publishNotReadyAddresses: true
  ports:
    - name: http
      port: 8200
      targetPort: 8200
    - name: http-internal
      port: 8201
      targetPort: 8201
  selector:
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    component: server
---
# Source: vault/templates/ui-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: vault-ui
  namespace: labs
  labels:
    helm.sh/chart: vault-0.19.0
    app.kubernetes.io/name: vault-ui
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    component: server
  publishNotReadyAddresses: true
  ports:
    - name: http
      port: 8200
      targetPort: 8200
  type: ClusterIP
---
# Source: vault/templates/server-statefulset.yaml
# StatefulSet to run the actual vault server cluster.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vault
  namespace: labs
  labels:
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
spec:
  serviceName: vault-internal
  podManagementPolicy: Parallel
  replicas: 1
  updateStrategy:
    type: OnDelete
  selector:
    matchLabels:
      app.kubernetes.io/name: vault
      app.kubernetes.io/instance: vault
      component: server
  template:
    metadata:
      labels:
        helm.sh/chart: vault-0.19.0
        app.kubernetes.io/name: vault
        app.kubernetes.io/instance: vault
        component: server
    spec:
      
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app.kubernetes.io/name: vault
                  app.kubernetes.io/instance: "vault"
                  component: server
              topologyKey: kubernetes.io/hostname
  
      
      
      terminationGracePeriodSeconds: 10
      serviceAccountName: vault
      
      securityContext:
        runAsNonRoot: true
        runAsGroup: 1000
        runAsUser: 100
        fsGroup: 1000
      volumes:
        
        - name: config
          configMap:
            name: vault-config

        - name: home
          emptyDir: {}
      containers:
        - name: vault
          resources:
            limits:
              cpu: 250m
              memory: 1Gi
            requests:
              cpu: 250m
              memory: 50Mi
  
          image: hashicorp/vault:1.9.2
          imagePullPolicy: IfNotPresent
          command:
          - "/bin/sh"
          - "-ec"
          args: 
          - |
            cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
            [ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
            [ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
            [ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
            [ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
            [ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
            [ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
            /usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl 
   
          securityContext:
            allowPrivilegeEscalation: false
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: VAULT_K8S_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: VAULT_K8S_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: VAULT_ADDR
              value: "http://127.0.0.1:8200"
            - name: VAULT_API_ADDR
              value: "http://$(POD_IP):8200"
            - name: SKIP_CHOWN
              value: "true"
            - name: SKIP_SETCAP
              value: "true"
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: VAULT_CLUSTER_ADDR
              value: "http://$(HOSTNAME).vault-internal:8201"
            - name: HOME
              value: "/home/vault"

          volumeMounts:
          
  
  
            - name: config
              mountPath: /vault/config
 
            - name: home
              mountPath: /home/vault
          ports:
            - containerPort: 8200
              name: http
            - containerPort: 8201
              name: http-internal
            - containerPort: 8202
              name: http-rep
          readinessProbe:
            httpGet:
              path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
              port: 8200
              scheme: HTTP
            failureThreshold: 2
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 3
          livenessProbe:
            httpGet:
              path: "/v1/sys/health?standbyok=true"
              port: 8200
              scheme: HTTP
            failureThreshold: 2
            initialDelaySeconds: 60
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 3
          lifecycle:
            # Vault container doesn't receive SIGTERM from Kubernetes
            # and after the grace period ends, Kube sends SIGKILL.  This
            # causes issues with graceful shutdowns such as deregistering itself
            # from Consul (zombie services).
            preStop:
              exec:
                command: [
                  "/bin/sh", "-c",
                  # Adding a sleep here to give the pod eviction a
                  # chance to propagate, so requests will not be made
                  # to this pod while it's terminating
                  "sleep 5 && kill -SIGTERM $(pidof vault)",
                ]

---
# Source: vault/templates/tests/server-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "vault-server-test"
  namespace: labs
  annotations:
    "helm.sh/hook": test
spec:
  
  containers:
    - name: vault-server-test
      image: hashicorp/vault:1.9.2
      imagePullPolicy: IfNotPresent
      env:
        - name: VAULT_ADDR
          value: http://vault.vault.svc:8200
        
      command:
        - /bin/sh
        - -c
        - |
          echo "Checking for sealed info in 'vault status' output"
          ATTEMPTS=10
          n=0
          until [ "$n" -ge $ATTEMPTS ]
          do
            echo "Attempt" $n...
            vault status -format yaml | grep -E '^sealed: (true|false)' && break
            n=$((n+1))
            sleep 5
          done
          if [ $n -ge $ATTEMPTS ]; then
            echo "timed out looking for sealed info in 'vault status' output"
            exit 1
          fi

          exit 0

  restartPolicy: Never

@aram here is the Consul manifest as well:

---
# Source: consul/templates/server-disruptionbudget.yaml
# PodDisruptionBudget to prevent degrading the server cluster through
# voluntary cluster changes.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: consul-consul-server
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: server
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: consul
      release: "consul"
      component: server
---
# Source: consul/templates/client-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul-consul-client
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: client
---
# Source: consul/templates/server-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul-consul-server
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: server
---
# Source: consul/templates/client-config-configmap.yaml
# ConfigMap with extra configuration specified directly to the chart
# for client agents only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: consul-consul-client-config
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: client
data:
  extra-from-values.json: |-
    {}
    
  central-config.json: |-
    {
      "enable_central_service_config": true
    }
---
# Source: consul/templates/server-config-configmap.yaml
# StatefulSet to run the actual Consul server cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: consul-consul-server-config
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: server
data:
  extra-from-values.json: |-
    {}
    
  central-config.json: |-
    {
      "enable_central_service_config": true
    }
---
# Source: consul/templates/client-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: consul-consul-client
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: client
rules: []
---
# Source: consul/templates/server-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: consul-consul-server
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: server
rules: []
---
# Source: consul/templates/client-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: consul-consul-client
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: client
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: consul-consul-client
subjects:
  - kind: ServiceAccount
    name: consul-consul-client
---
# Source: consul/templates/server-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: consul-consul-server
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: consul-consul-server
subjects:
  - kind: ServiceAccount
    name: consul-consul-server
---
# Source: consul/templates/dns-service.yaml
# Service for Consul DNS.
apiVersion: v1
kind: Service
metadata:
  name: consul-consul-dns
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: dns
spec:
  type: ClusterIP
  ports:
    - name: dns-tcp
      port: 53
      protocol: "TCP"
      targetPort: dns-tcp
    - name: dns-udp
      port: 53
      protocol: "UDP"
      targetPort: dns-udp
  selector:
    app: consul
    release: "consul"
    hasDNS: "true"
---
# Source: consul/templates/server-service.yaml
# Headless service for Consul server DNS entries. This service should only
# point to Consul servers. For access to an agent, one should assume that
# the agent is installed locally on the node and the NODE_IP should be used.
# If the node can't run a Consul agent, then this service can be used to
# communicate directly to a server agent.
apiVersion: v1
kind: Service
metadata:
  name: consul-consul-server
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: server
  annotations:
    # This must be set in addition to publishNotReadyAddresses due
    # to an open issue where it may not work:
    # https://github.com/kubernetes/kubernetes/issues/58662
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None
  # We want the servers to become available even if they're not ready
  # since this DNS is also used for join operations.
  publishNotReadyAddresses: true
  ports:
    - name: http
      port: 8500
      targetPort: 8500
    - name: serflan-tcp
      protocol: "TCP"
      port: 8301
      targetPort: 8301
    - name: serflan-udp
      protocol: "UDP"
      port: 8301
      targetPort: 8301
    - name: serfwan-tcp
      protocol: "TCP"
      port: 8302
      targetPort: 8302
    - name: serfwan-udp
      protocol: "UDP"
      port: 8302
      targetPort: 8302
    - name: server
      port: 8300
      targetPort: 8300
    - name: dns-tcp
      protocol: "TCP"
      port: 8600
      targetPort: dns-tcp
    - name: dns-udp
      protocol: "UDP"
      port: 8600
      targetPort: dns-udp
  selector:
    app: consul
    release: "consul"
    component: server
---
# Source: consul/templates/ui-service.yaml
# UI Service for Consul Server
apiVersion: v1
kind: Service
metadata:
  name: consul-consul-ui
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: ui
spec:
  selector:
    app: consul
    release: "consul"
    component: server
  ports:
    - name: http
      port: 80
      targetPort: 8500
---
# Source: consul/templates/client-daemonset.yaml
# DaemonSet to run the Consul clients on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: consul-consul
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: client
spec:
  selector:
    matchLabels:
      app: consul
      chart: consul-helm
      release: consul
      component: client
      hasDNS: "true"
  template:
    metadata:
      labels:
        app: consul
        chart: consul-helm
        release: consul
        component: client
        hasDNS: "true"
      annotations:
        "consul.hashicorp.com/connect-inject": "false"
        "consul.hashicorp.com/config-checksum": 797b3593a73b78fc74f3b1e3b978107b3022d4649802185631f959f000234331
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: consul-consul-client
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 100

      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: consul-consul-client-config
      containers:
        - name: consul
          image: "hashicorp/consul:1.11.1"
          imagePullPolicy: "IfNotPresent"
          env:
            - name: ADVERTISE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: NODE
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: CONSUL_DISABLE_PERM_MGMT
              value: "true"
            
          command:
            - "/bin/sh"
            - "-ec"
            - |
              CONSUL_FULLNAME="consul-consul"

              mkdir -p /consul/extra-config
              cp /consul/config/extra-from-values.json /consul/extra-config/extra-from-values.json
              [ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /consul/extra-config/extra-from-values.json
              [ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /consul/extra-config/extra-from-values.json
              [ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /consul/extra-config/extra-from-values.json

              exec /usr/local/bin/docker-entrypoint.sh consul agent \
                -node="${NODE}" \
                -advertise="${ADVERTISE_IP}" \
                -bind=0.0.0.0 \
                -client=0.0.0.0 \
                -node-meta=host-ip:${HOST_IP} \
                -node-meta=pod-name:${HOSTNAME} \
                -hcl='leave_on_terminate = true' \
                -hcl='ports { grpc = 8502 }' \
                -config-dir=/consul/config \
                -datacenter=vault-kubernetes-guide \
                -data-dir=/consul/data \
                -retry-join="${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc:8301" \
                -config-file=/consul/extra-config/extra-from-values.json \
                -domain=consul
          volumeMounts:
            - name: data
              mountPath: /consul/data
            - name: config
              mountPath: /consul/config
          ports:
            - containerPort: 8500
              hostPort: 8500
              name: http
            - containerPort: 8502
              hostPort: 8502
              name: grpc
            - containerPort: 8301
              protocol: "TCP"
              name: serflan-tcp
            - containerPort: 8301
              protocol: "UDP"
              name: serflan-udp
            - containerPort: 8600
              name: dns-tcp
              protocol: "TCP"
            - containerPort: 8600
              name: dns-udp
              protocol: "UDP"
          readinessProbe:
            # NOTE(mitchellh): when our HTTP status endpoints support the
            # proper status codes, we should switch to that. This is temporary.
            exec:
              command:
                - "/bin/sh"
                - "-ec"
                - |
                  curl http://127.0.0.1:8500/v1/status/leader \
                  2>/dev/null | grep -E '".+"'
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 100Mi
---
# Source: consul/templates/server-statefulset.yaml
# StatefulSet to run the actual Consul server cluster.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul-consul-server
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
    component: server
spec:
  serviceName: consul-consul-server
  podManagementPolicy: Parallel
  replicas: 1
  selector:
    matchLabels:
      app: consul
      chart: consul-helm
      release: consul
      component: server
      hasDNS: "true"
  template:
    metadata:
      labels:
        app: consul
        chart: consul-helm
        release: consul
        component: server
        hasDNS: "true"
      annotations:
        "consul.hashicorp.com/connect-inject": "false"
        "consul.hashicorp.com/config-checksum": c9b100f895d5bda6a5c8bbebac73e1ab5bdc4cad06b04e72eb1b620677bfe41d
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: consul
                  release: "consul"
                  component: server
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 30
      serviceAccountName: consul-consul-server
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 100
      volumes:
        - name: config
          configMap:
            name: consul-consul-server-config
      containers:
        - name: consul
          image: "hashicorp/consul:1.11.1"
          imagePullPolicy: "IfNotPresent"
          env:
            - name: ADVERTISE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CONSUL_DISABLE_PERM_MGMT
              value: "true"
            
          command:
            - "/bin/sh"
            - "-ec"
            - |
              CONSUL_FULLNAME="consul-consul"

              mkdir -p /consul/extra-config
              cp /consul/config/extra-from-values.json /consul/extra-config/extra-from-values.json
              [ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /consul/extra-config/extra-from-values.json
              [ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /consul/extra-config/extra-from-values.json
              [ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /consul/extra-config/extra-from-values.json

              exec /usr/local/bin/docker-entrypoint.sh consul agent \
                -advertise="${ADVERTISE_IP}" \
                -bind=0.0.0.0 \
                -bootstrap-expect=1 \
                -client=0.0.0.0 \
                -config-dir=/consul/config \
                -datacenter=vault-kubernetes-guide \
                -data-dir=/consul/data \
                -domain=consul \
                -hcl="connect { enabled = true }" \
                -ui \
                -retry-join="${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc:8301" \
                -serf-lan-port=8301 \
                -config-file=/consul/extra-config/extra-from-values.json \
                -server
          volumeMounts:
            - name: data-vault
              mountPath: /consul/data
            - name: config
              mountPath: /consul/config
          ports:
            - name: http
              containerPort: 8500
            - name: serflan-tcp
              containerPort: 8301
              protocol: "TCP"
            - name: serflan-udp
              containerPort: 8301
              protocol: "UDP"
            - name: serfwan-tcp
              containerPort: 8302
              protocol: "TCP"
            - name: serfwan-udp
              containerPort: 8302
              protocol: "UDP"
            - name: server
              containerPort: 8300
            - name: dns-tcp
              containerPort: 8600
              protocol: "TCP"
            - name: dns-udp
              containerPort: 8600
              protocol: "UDP"
          readinessProbe:
            # NOTE(mitchellh): when our HTTP status endpoints support the
            # proper status codes, we should switch to that. This is temporary.
            exec:
              command:
                - "/bin/sh"
                - "-ec"
                - |
                  curl http://127.0.0.1:8500/v1/status/leader \
                  2>/dev/null | grep -E '".+"'
            failureThreshold: 2
            initialDelaySeconds: 5
            periodSeconds: 3
            successThreshold: 1
            timeoutSeconds: 5
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 100Mi
  volumeClaimTemplates:
    - metadata:
        name: data-vault
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
---
# Source: consul/templates/tests/test-runner.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "consul-consul-test"
  namespace: labs
  labels:
    app: consul
    chart: consul-helm
    heritage: Helm
    release: consul
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: consul-test
      image: "hashicorp/consul:1.11.1"
      imagePullPolicy: "IfNotPresent"
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: CONSUL_HTTP_ADDR
          value: http://$(HOST_IP):8500
      command:
        - "/bin/sh"
        - "-ec"
        - |
            consul members | tee members.txt
            if [ $(grep -c consul-server members.txt) != $(grep consul-server members.txt | grep -c alive) ]
            then
              echo "Failed because not all consul servers are available"
              exit 1
            fi

  restartPolicy: Never

This is not a values.yaml. values.yaml is the configuration you provide to Helm to customize the deployment of a chart, not the rendered output of Helm.
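If you no longer have the original file, you can usually recover the user-supplied values from the release itself; assuming your release is named vault and installed in the labs namespace (which is what your manifests suggest), that would be:

helm get values vault -n labs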

Here are some commands you should run to find out why your pod is restarting:

kubectl describe pod vault-0
kubectl logs vault-0
kubectl logs --previous vault-0
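
Since your exec session was terminated with exit code 137 (128 + 9, i.e. SIGKILL), it is also worth checking what reason the kubelet recorded for the last termination, for example OOMKilled. Assuming the Vault container is the first (and only) container in the pod, as in the StatefulSet you posted:

kubectl get pod vault-0 -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'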