Consul-k8s servers and Consul clients crash / get killed every ~20 minutes of running

Hi,
I have a consul-k8s (1.13.1) cluster of 3 nodes running on EKS 1.22. Roughly every 20 minutes of running fine, they all get killed, all Pods are recreated and run again, and then they get killed again after another 20-plus minutes.
Running:
kubectl get events -n consul --watch
as well as:
kubectl describe pod/consul-server-2 -n consul
shows:
Events:
  Type     Reason     Age                    From     Message
  ----     ------     ----                   ----     -------
  Normal   Created    4m42s                  kubelet  Created container consul
  Normal   Started    4m42s                  kubelet  Started container consul
  Warning  Unhealthy  4m36s (x2 over 4m37s)  kubelet  Readiness probe failed:
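(The probe message is cut off in the events output, so it is not obvious what check the kubelet actually runs or how tight its timeout is. Something like the following should dump the configured readiness probe for the server container; the container is named consul per the Created event above, and the jsonpath filter is just one way to pull it out:)
kubectl get pod consul-server-2 -n consul -o jsonpath='{.spec.containers[?(@.name=="consul")].readinessProbe}'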

However, when I hop onto the node and run the readiness probe's check manually, it returns the leader:
curl http://127.0.0.1:8500/v1/status/leader
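(For what it's worth, that curl runs on the node itself; if the chart's probe is an exec-style check, the kubelet runs it inside the server container, so checking from within the container may be more representative. Assuming curl is present in the consul container image, something like:)
kubectl exec -n consul consul-server-2 -c consul -- curl -sS --max-time 2 http://127.0.0.1:8500/v1/status/leader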

Does anyone know why the readiness probe fails when the kubelet runs it, and why the consul Pods end up getting killed?
Thanks!

Update:
Found the issue in our consul-k8s deployment. All good now.
Thanks!