Consul clients not deployed on every Kubernetes node

values.yaml:

```
global:
  domain: consul
  datacenter: dc1
  enabled: true
syncCatalog:
  # True if you want to enable the catalog sync. Set to "-" to inherit from
  # global.enabled.
  enabled: true
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 64Mi
  storageClass: local-path
  extraVolumes:
    - type: secret
      name: vault-config
      load: true
      items:
        - key: config
          path: vault-config.json
    - type: secret
      name: vault-ca
      load: false
client:
  enabled: true
  grpc: true
```

```
helm install -f values-consul.yaml consul hashicorp/consul
```
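As an aside, the manifests the chart will create (including the client DaemonSet) can be previewed without touching the cluster, e.g. (a sketch; Helm 3 syntax and the same values file are assumed):

```shell
# Render the chart locally and confirm a client DaemonSet is part of the release
helm template consul hashicorp/consul -f values-consul.yaml | grep -B2 -A2 "kind: DaemonSet"
```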

```
k get pods
NAME                                                              READY   STATUS             RESTARTS   AGE
consul-consul-connect-injector-webhook-deployment-5796fc9774sf4   1/1     Running            0          10m
consul-consul-server-0                                            1/1     Running            0          9m28s
consul-consul-server-1                                            1/1     Running            0          10m
consul-consul-server-2                                            1/1     Running            0          10m
consul-consul-sync-catalog-7f7fb45954-9sf6w                       0/1     CrashLoopBackOff   7          10m
```

```
k exec -it consul-consul-server-0 /bin/sh
consul members
Node                    Address             Status  Type    Build  Protocol  DC   Segment
consul-consul-server-0  172.23.8.147:8301   alive   server  1.8.2  2         dc1
consul-consul-server-1  172.23.2.67:8301    alive   server  1.8.2  2         dc1
consul-consul-server-2  172.23.11.131:8301  alive   server  1.8.2  2         dc1
```

No Consul client agents are created on the Kubernetes nodes. Please help.
Thanks

Hi, can you paste your yaml config again but code formatted?

like this:
```
global:
  domain: consul
  datacenter: dc1
  enabled: true
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 64Mi
  storageClass: local-path
  extraVolumes:
     - type: secret
       name: vault-config
       load: true
       items:
       - key: config
         path: vault-config.json
     - type: secret
       name: vault-ca
       load: false
```

Hi lkysow,
Even without providing a custom values.yaml, I still don't see the Consul client agents created:

```
helm install consul hashicorp/consul
```

Thanks for your help.

Can you run `kubectl get daemonset` and `kubectl describe daemonset consul-consul`?

Lkysow,

```
k describe daemonset consul-consul
Name:           consul-consul
Selector:       app=consul,chart=consul-helm,component=client,hasDNS=true,release=consul
Node-Selector:
Labels:         app=consul
                app.kubernetes.io/managed-by=Helm
                chart=consul-helm
                heritage=Helm
                release=consul
Annotations:    deprecated.daemonset.template.generation: 1
                meta.helm.sh/release-name: consul
                meta.helm.sh/release-namespace: consulns
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=consul
                    chart=consul-helm
                    component=client
                    hasDNS=true
                    release=consul
  Annotations:      consul.hashicorp.com/config-checksum: ca3d163bab055381827226140568f3bef7eaac187cebd76878e0b63e9e442356
                    consul.hashicorp.com/connect-inject: false
  Service Account:  consul-consul-client
  Containers:
   consul:
    Image:       consul:1.8.2
    Ports:       8500/TCP, 8502/TCP, 8301/TCP, 8301/UDP, 8302/TCP, 8300/TCP, 8600/TCP, 8600/UDP
    Host Ports:  8500/TCP, 8502/TCP, 0/TCP, 0/UDP, 0/TCP, 0/TCP, 0/TCP, 0/UDP
    Command:
      /bin/sh
      -ec
      CONSUL_FULLNAME="consul-consul"

      exec /bin/consul agent \
        -node="${NODE}" \
        -advertise="${ADVERTISE_IP}" \
        -bind=0.0.0.0 \
        -client=0.0.0.0 \
        -node-meta=pod-name:${HOSTNAME} \
        -hcl='leave_on_terminate = true' \
        -hcl='ports { grpc = 8502 }' \
        -config-dir=/consul/config \
        -datacenter=dc1 \
        -data-dir=/consul/data \
        -retry-join="${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc" \
        -retry-join="${CONSUL_FULLNAME}-server-1.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc" \
        -retry-join="${CONSUL_FULLNAME}-server-2.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc" \
        -domain=consul
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Readiness:  exec [/bin/sh -ec curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | grep -E '".+"'] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      ADVERTISE_IP:  (v1:status.podIP)
      NAMESPACE:     (v1:metadata.namespace)
      NODE:          (v1:spec.nodeName)
      HOST_IP:       (v1:status.hostIP)
    Mounts:
      /consul/config from config (rw)
      /consul/data from data (rw)
  Volumes:
   data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
   config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      consul-consul-client-config
    Optional:  false
Events:
  Type     Reason        Age                 From                  Message
  ----     ------        ----                ----                  -------
  Warning  FailedCreate  25s (x14 over 66s)  daemonset-controller  Error creating: pods "consul-consul-" is forbidden: unable to validate against any pod security policy: [spec.containers[0].hostPort: Invalid value: 8500: Host port 8500 is not allowed to be used. Allowed ports: spec.containers[0].hostPort: Invalid value: 8502: Host port 8502 is not allowed to be used. Allowed ports: ]
```

Thanks. (Pro-tip: enclose any pasted output within three backticks, without the leading \:)

\```
\```

So this is your issue:

```
Warning FailedCreate 25s (x14 over 66s) daemonset-controller Error creating: pods "consul-consul-" is forbidden: unable to validate against any pod security policy: [spec.containers[0].hostPort: Invalid value: 8500: Host port 8500 is not allowed to be used. Allowed ports: spec.containers[0].hostPort: Invalid value: 8502: Host port 8502 is not allowed to be used. Allowed ports: ]
```

It looks like you have pod security policies in your cluster, so you'll need to set:

```
global:
  enablePodSecurityPolicies: true
```
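If it helps, the change can then be rolled out to the existing release with `helm upgrade` (a sketch; the values file name from earlier in the thread is assumed, and `--set` is just an alternative to editing the file):

```shell
# Re-apply the release with PSP support enabled so the chart creates
# PodSecurityPolicies that allow the client's host ports (8500/8502).
helm upgrade consul hashicorp/consul \
  -f values-consul.yaml \
  --set global.enablePodSecurityPolicies=true
```

After the upgrade, `kubectl get daemonset consul-consul` should show the desired pod count rise to the number of schedulable nodes.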

lkysow,
Thanks a ton.