I have a consul-k8s (1.13.1) cluster of 3 server nodes running on EKS 1.22. The Pods run fine for about 20 minutes, then they all get killed and recreated; the new Pods run for another 20-plus minutes and then get killed again.
I watched the events with:

kubectl get events -n consul --watch

as well as:

kubectl describe pod/consul-server-2 -n consul
Type     Reason     Age                    From     Message
Normal   Created    4m42s                  kubelet  Created container consul
Normal   Started    4m42s                  kubelet  Started container consul
Warning  Unhealthy  4m36s (x2 over 4m37s)  kubelet  Readiness probe failed:
However, when I hop onto the node and run the readiness probe command manually, it succeeds and returns the leader.
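For context, the consul-helm chart's server readiness probe is (approximately) an exec probe that curls the local agent's /v1/status/leader endpoint and greps for a non-empty quoted "host:port" address; a minimal sketch of that pass/fail logic, using an illustrative response rather than output from my cluster:

```shell
#!/bin/sh
# Sketch of the consul server readiness check (assumption: your chart's
# probe follows the stock pattern of curl + grep on the leader endpoint).

# In the real probe this would be:
#   leader=$(curl -s http://127.0.0.1:8500/v1/status/leader)
# Here we use an illustrative value instead of a live cluster.
leader='"10.0.1.23:8300"'

# The probe passes only if the response contains a non-empty quoted
# address; an empty string ("") means no leader has been elected.
if echo "$leader" | grep -E '".+"' >/dev/null; then
    echo "ready"
else
    echo "not ready"
fi
```

When no leader is elected the endpoint returns `""`, which fails the grep, so a probe failure usually means the agent briefly lost its leader rather than that the HTTP endpoint was down.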
Does anyone know why the kubelet's readiness probe check fails, and why the consul Pods end up getting killed?