Consul Helm deployment error on Kubernetes

Hi, I’m trying to deploy Consul on a cluster with 3 master nodes (CentOS) and 17 worker nodes (Ubuntu).

I’m running the Helm commands on the first master.

Consul server pod: consul-server-0.txt (119.1 KB)

Consul client pod: consul-qxm8b.txt (48.1 KB)

Connect injector pod: consul-connect-injector-webhook-deployment-5c59d679b4-gvmgb.txt (778 Bytes)

```
$ kubectl get pods -o wide
NAME                                                          READY   STATUS    RESTARTS   AGE   NODE              NOMINATED NODE   READINESS GATES
consul-22r2x                                                  0/1     Running   0          58m   va-k8s-apps-w12   <none>           <none>
consul-2bsn9                                                  0/1     Running   0          58m   va-k8s-apps-w17   <none>           <none>
consul-46ssz                                                  0/1     Running   0          58m   va-k8s-apps-w4    <none>           <none>
consul-47sbg                                                  0/1     Running   0          58m   va-k8s-apps-w10   <none>           <none>
consul-5rz8m                                                  0/1     Running   0          58m   va-k8s-apps-w2    <none>           <none>
consul-8cw6g                                                  0/1     Running   0          58m   va-k8s-apps-w11   <none>           <none>
consul-9jr7b                                                  0/1     Running   0          58m   va-k8s-apps-w16   <none>           <none>
consul-bnrx5                                                  0/1     Running   0          58m   va-k8s-apps-w3    <none>           <none>
consul-connect-injector-webhook-deployment-5c59d679b4-gvmgb   1/1     Running   0          58m   va-k8s-apps-w6    <none>           <none>
consul-connect-injector-webhook-deployment-5c59d679b4-q4ntm   1/1     Running   0          58m   va-k8s-apps-w15   <none>           <none>
consul-controller-84cc448cb5-zdgwm                            1/1     Running   0          58m   va-k8s-apps-w1    <none>           <none>
consul-hh7cl                                                  0/1     Running   0          58m   va-k8s-apps-w8    <none>           <none>
consul-j4qdp                                                  0/1     Running   0          58m   va-k8s-apps-w7    <none>           <none>
consul-jm7bz                                                  0/1     Running   0          58m   va-k8s-apps-w15   <none>           <none>
consul-mprln                                                  0/1     Running   0          58m   va-k8s-apps-w13   <none>           <none>
consul-pnqch                                                  0/1     Running   0          58m   va-k8s-apps-w14   <none>           <none>
consul-qvz65                                                  0/1     Running   0          58m   va-k8s-apps-w5    <none>           <none>
consul-qxm8b                                                  0/1     Running   0          58m   va-k8s-apps-w1    <none>           <none>
consul-rz9xg                                                  0/1     Running   0          58m   va-k8s-apps-w6    <none>           <none>
consul-server-0                                               0/1     Running   0          58m   va-k8s-apps-w5    <none>           <none>
consul-server-1                                               0/1     Running   0          58m   va-k8s-apps-w14   <none>           <none>
consul-server-2                                               0/1     Running   0          58m   va-k8s-apps-w6    <none>           <none>
consul-webhook-cert-manager-65b8bb9785-ddcft                  1/1     Running   0          58m   va-k8s-apps-w5    <none>           <none>
consul-z6k8p                                                  0/1     Running   0          58m   va-k8s-apps-w9    <none>           <none>
```


Here is my values.yaml (top-level section names restored from the standard consul-helm chart layout; the original paste lost the indentation):

```yaml
global:
  name: consul
  datacenter: AcuityDC1
  # Generate the gossip key and store it as a secret:
  # kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(consul keygen)
#  gossipEncryption:
#    secretName: 'consul-gossip-encryption-key'
#    secretKey: 'key'
#  tls:
#    enabled: true
#    # This configuration sets `verify_outgoing`, `verify_server_hostname`,
#    # and `verify_incoming` to `false` on servers and clients,
#    # which allows TLS-disabled nodes to join the cluster.
#    enableAutoEncrypt: true
#    verify: true
#  acls:
#    manageSystemACLs: true
server:
  replicas: 3
  bootstrapExpect: 3
  disruptionBudget:
    enabled: true
    maxUnavailable: 0
  securityContext:
    runAsNonRoot: false
    runAsUser: 0
ui:
  enabled: true
  # Expose the Consul UI through a LoadBalancer service on a random port;
  # check under `kubectl get services`.
  service:
    type: "LoadBalancer"
connectInject:
  enabled: true
controller:
  enabled: true
client:
  enabled: true
dns:
  enabled: true
```
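On the commented-out `gossipEncryption` section: the referenced secret has to hold the base64-encoded key that `consul keygen` emits (recent Consul releases generate 32 random bytes; older ones used 16). As a minimal sketch, assuming `openssl` is available, you can generate an equivalent key even when the `consul` binary is not on the master:

```shell
# `consul keygen` outputs 32 random bytes, base64-encoded (44 characters).
# openssl produces an equivalent key when the consul binary is unavailable.
key=$(openssl rand -base64 32)
echo "${#key}"   # 44
```

The key is then stored exactly as in the comment in the values file, via `kubectl create secret generic consul-gossip-encryption-key --from-literal=key=...`.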

Please let me know what I’m missing in this setup.
Thank you

Hi, sorry that you’re having trouble with this! Is this a duplicate of Consul not starting with helm in K8 Client Pods: Reason: BadRequest (400)?
This looks like a similar I/O timeout issue, so it is possibly an underlying networking problem in your Kubernetes cluster.
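One way to start ruling that out is to check whether a client pod can reach the servers at all. A rough sketch, using bash’s built-in `/dev/tcp` so it works even in minimal containers without `curl` or `nc` (the host and port below are placeholders; inside a client pod you would target a consul-server pod IP and the server RPC port 8300 or gossip port 8301):

```shell
# Probe a TCP port; prints "open" or "closed".
probe() {
  if timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Placeholder target; substitute a consul-server pod IP when run in-cluster.
probe 127.0.0.1 8300
```

Note this only exercises TCP; Serf gossip on 8301 also needs UDP between nodes, which a TCP probe cannot confirm.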