Installing Consul via Helm chart - readiness probe failing

Hello,

I'm deploying a fresh Consul install on an on-premises Kubernetes cluster using the Helm chart:

helm install consul hashicorp/consul --set global.name=consul --create-namespace -n consul10 --set connectInject.enabled=false

The PVC is created and the pod starts, but the readiness probe keeps failing.

kubectl describe pod consul-server
Name: consul-server-0
Namespace: consul10
Priority: 0
Service Account: consul-server
Node: node3/10.10.10.22
Start Time: Mon, 08 Apr 2024 13:18:12 +0200
Labels: app=consul
chart=consul-helm
component=server
controller-revision-hash=consul-server-5fdb95d677
hasDNS=true
release=consul
statefulset.kubernetes.io/pod-name=consul-server-0
Annotations: cni.projectcalico.org/containerID: 0847d9509c912cd4a5ce468c21e1c6d14339f2b791a9fb483b6b8d2506bd9aa9
cni.projectcalico.org/podIP: 10.233.92.27/32
cni.projectcalico.org/podIPs: 10.233.92.27/32
consul.hashicorp.com/config-checksum: 54b5fcc4b9196fa402fe5a46c7fe03ff7b9bc52035f59085ddc320df9b826a02
consul.hashicorp.com/connect-inject: false
consul.hashicorp.com/mesh-inject: false
container.seccomp.security.alpha.kubernetes.io/consul: runtime/default
container.seccomp.security.alpha.kubernetes.io/locality-init: runtime/default
Status: Running
IP: 10.233.92.27
IPs:
IP: 10.233.92.27
Controlled By: StatefulSet/consul-server
Init Containers:
locality-init:
Container ID: containerd://2ed23c1c49603f778d4c0d7e6baac3ac98b51812a62a41b02c442ebee3ba0ec1
Image: hashicorp/consul-k8s-control-plane:1.4.1
Image ID: docker.io/hashicorp/consul-k8s-control-plane@sha256:9bcbf07a652553ca17921f9f5b880f7ef774d23a475586d2bea53005aede626c
Port:
Host Port:
SeccompProfile: RuntimeDefault
Command:
/bin/sh
-ec
exec consul-k8s-control-plane fetch-server-region -node-name "$NODE_NAME" -output-file /consul/extra-config/locality.json

State:          Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Mon, 08 Apr 2024 13:18:14 +0200
  Finished:     Mon, 08 Apr 2024 13:18:14 +0200
Ready:          True
Restart Count:  0
Environment:
  NODE_NAME:   (v1:spec.nodeName)
Mounts:
  /consul/extra-config from extra-config (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kgcr6 (ro)

Containers:
consul:
Container ID: containerd://9b273928218b8f65302ece217c5e866023ddb1e52e0f327a6df3113fdbbaf716
Image: hashicorp/consul:1.18.1
Image ID: docker.io/hashicorp/consul@sha256:cec07d239898707600ce3d502e72e803759950d0645009ff0156ca32ec5854c1
Ports: 8500/TCP, 8502/TCP, 8301/TCP, 8301/UDP, 8302/TCP, 8302/UDP, 8300/TCP, 8600/TCP, 8600/UDP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/UDP, 0/TCP, 0/UDP, 0/TCP, 0/TCP, 0/UDP
SeccompProfile: RuntimeDefault
Command:
/bin/sh
-ec

  cp /consul/tmp/extra-config/extra-from-values.json /consul/extra-config/extra-from-values.json
  [ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /consul/extra-config/extra-from-values.json
  [ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /consul/extra-config/extra-from-values.json
  [ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /consul/extra-config/extra-from-values.json

  exec /usr/local/bin/docker-entrypoint.sh consul agent \
    -advertise="${ADVERTISE_IP}" \
    -config-dir=/consul/config \
    -config-dir=/consul/extra-config \

State:          Running
  Started:      Mon, 08 Apr 2024 13:18:14 +0200
Ready:          False
Restart Count:  0
Limits:
  cpu:     100m
  memory:  200Mi
Requests:
  cpu:      100m
  memory:   200Mi
Readiness:  exec [/bin/sh -ec curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | grep -E '".+"'] delay=50s timeout=50s period=30s #success=1 #failure=20
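For reference, that probe is just a curl against the local agent's leader endpoint: it only passes once the response contains a quoted leader address (e.g. "10.233.92.27:8300"); an empty reply means no leader has been elected yet. The grep part of the check can be sketched standalone like this (the `check_leader` helper and the sample strings are mine, just simulating possible curl output):

```shell
# The probe passes only when /v1/status/leader returns a quoted address.
# An empty string or empty quotes means the servers have no leader.
check_leader() {
  # $1 simulates the body returned by http://127.0.0.1:8500/v1/status/leader
  printf '%s' "$1" | grep -E '".+"' >/dev/null
}

check_leader '"10.233.92.27:8300"' && echo "leader elected"   # prints "leader elected"
check_leader '""' || echo "no leader yet"                     # prints "no leader yet"
```

So a probe failing for 43 minutes straight strongly suggests the server cluster never elected a leader, rather than a transient startup delay.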
Environment:
ADVERTISE_IP: (v1:status.podIP)
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
CONSUL_DISABLE_PERM_MGMT: true
Mounts:
/consul/config from config (rw)
/consul/data from data-consul10 (rw)
/consul/extra-config from extra-config (rw)
/consul/tmp/extra-config from tmp-extra-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kgcr6 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data-consul10:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-consul10-consul-server-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: consul-server-config
Optional: false
extra-config:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit:
tmp-extra-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: consul-server-tmp-extra-config
Optional: false
kube-api-access-kgcr6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 44m default-scheduler Successfully assigned consul10/consul-server-0 to node3
Normal Pulled 44m kubelet Container image "hashicorp/consul-k8s-control-plane:1.4.1" already present on machine
Normal Created 44m kubelet Created container locality-init
Normal Started 44m kubelet Started container locality-init
Normal Pulled 44m kubelet Container image "hashicorp/consul:1.18.1" already present on machine
Normal Created 44m kubelet Created container consul
Normal Started 44m kubelet Started container consul
Warning Unhealthy 3m46s (x111 over 43m) kubelet Readiness probe failed:
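Here is a sketch of the commands I can run to dig further (namespace and pod names taken from the describe output above; the replica-count check is an assumption on my part, since the chart defaults to 3 server replicas and a lone server cannot elect a leader on its own):

```shell
# Show the Consul agent's own logs - a "No cluster leader" message here
# would confirm the servers never formed a quorum.
kubectl logs consul-server-0 -n consul10

# Run the readiness check by hand inside the pod to see the raw response;
# an empty body means no leader has been elected.
kubectl exec consul-server-0 -n consul10 -- \
  curl -s http://127.0.0.1:8500/v1/status/leader

# Compare how many server pods the StatefulSet expects vs. how many are up.
kubectl get statefulset consul-server -n consul10
kubectl get pods -n consul10 -l component=server
```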

Does anyone have any idea what's going wrong?