Name:         consul-server-0
Namespace:    consul
Priority:     0
Node:         xxx-k8s-devworker1/137.237.137.163
Start Time:   Tue, 09 Nov 2021 14:32:25 -0500
Labels:       app=consul
              chart=consul-helm
              component=server
              controller-revision-hash=consul-server-5f46698944
              hasDNS=true
              release=consul
              statefulset.kubernetes.io/pod-name=consul-server-0
Annotations:  consul.hashicorp.com/config-checksum: 0b3381f06ef605a76b464c6966396e4eeb62f920b9ba61dd1be867fe39e14930
              consul.hashicorp.com/connect-inject: false
Status:       Running
IP:           10.254.3.69
IPs:
  IP:  10.254.3.69
Controlled By:  StatefulSet/consul-server
Containers:
  consul:
    Container ID:  docker://f7879e3bbcf7086bfd04ce58efb035fcbe0b9a70895e99104cf9921f32b85bb1
    Image:         hashicorp/consul:1.10.0
    Image ID:      docker-pullable://hashicorp/consul@sha256:94acb77f733edd4e1b68317c517321d8ce6c914e78ee5fb0d1db06a12fd1da46
    Ports:         8500/TCP, 8301/TCP, 8301/UDP, 8302/TCP, 8302/UDP, 8300/TCP, 8600/TCP, 8600/UDP
    Host Ports:    0/TCP, 0/TCP, 0/UDP, 0/TCP, 0/UDP, 0/TCP, 0/TCP, 0/UDP
    Command:
      /bin/sh
      -ec
      CONSUL_FULLNAME="consul"

      exec /bin/consul agent \
        -advertise="${ADVERTISE_IP}" \
        -bind=0.0.0.0 \
        -bootstrap-expect=3 \
        -client=0.0.0.0 \
        -config-dir=/consul/config \
        -datacenter=xxx-k8s \
        -data-dir=/consul/data \
        -domain=consul \
        -encrypt="${GOSSIP_KEY}" \
        -hcl="connect { enabled = true }" \
        -ui \
        -retry-join="${CONSUL_FULLNAME}-server-0.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc:8301" \
        -retry-join="${CONSUL_FULLNAME}-server-1.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc:8301" \
        -retry-join="${CONSUL_FULLNAME}-server-2.${CONSUL_FULLNAME}-server.${NAMESPACE}.svc:8301" \
        -serf-lan-port=8301 \
        -server
    State:          Running
      Started:      Tue, 09 Nov 2021 14:32:28 -0500
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Readiness:  exec [/bin/sh -ec curl http://127.0.0.1:8500/v1/status/leader 2>/dev/null | grep -E '".+"'] delay=5s timeout=5s period=3s #success=1 #failure=2
    Environment:
      ADVERTISE_IP:   (v1:status.podIP)
      POD_IP:         (v1:status.podIP)
      NAMESPACE:      consul (v1:metadata.namespace)
      GOSSIP_KEY:     Optional: false
    Mounts:
      /consul/config from config (rw)
      /consul/data from data-consul (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jxcqb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data-consul:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-consul-consul-server-0
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      consul-server-config
    Optional:  false
  kube-api-access-jxcqb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  50m                   default-scheduler  Successfully assigned consul/consul-server-0 to wtl-k8s-devworker1
  Normal   Pulled     50m                   kubelet            Container image "hashicorp/consul:1.10.0" already present on machine
  Normal   Created    50m                   kubelet            Created container consul
  Normal   Started    50m                   kubelet            Started container consul
  Warning  Unhealthy  15s (x1037 over 50m)  kubelet            Readiness probe failed:
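The Warning event shows the readiness probe failing repeatedly (x1037 over 50m). The probe is just the curl | grep pipeline from the container spec above, so it can be reproduced by hand. A minimal sketch, assuming kubectl access to the consul namespace and using only the pod and container names from the output above:

# Run the readiness check manually inside the consul container.
# An empty body (or "") from /v1/status/leader means no Raft leader has
# been elected yet, which is what makes the probe's grep -E '".+"' fail.
kubectl exec -n consul consul-server-0 -c consul -- \
  curl -s http://127.0.0.1:8500/v1/status/leader

Note that the command runs with -bootstrap-expect=3, so a leader is normally not elected until all three servers from the retry-join list have joined; a lone server will keep failing this probe.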