Hi Nico, thank you for raising this question!
Could you expand a bit more on your configuration and your expectations?
The current design of the health checks system only addresses connect-injected pods. It works by registering a new health check with Consul that reflects the Kubernetes readiness status of that pod.
When a Kubernetes readiness probe fails for that pod, the new health check in Consul is marked failing (critical), which in turn causes the service instance's health to be marked critical. This does not modify the serf health check, which may still be passing; however, because the service instance's health is critical, it will no longer participate in service mesh traffic for that service.
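For reference, this is the kind of readiness probe that drives the Consul check. This is just a minimal sketch: the pod name, image, port, and `/health` path are placeholders for your own app's health endpoint.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                                      # placeholder name
  annotations:
    "consul.hashicorp.com/connect-inject": "true"   # opt in to connect injection
spec:
  containers:
    - name: my-app
      image: my-app:latest                          # placeholder image
      readinessProbe:
        httpGet:
          path: /health                             # assumes your app exposes a health endpoint here
          port: 8080
        periodSeconds: 10
```

While this probe fails, the pod stays running but its Consul service instance is critical, so it is taken out of mesh traffic.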
There is a known issue where, rarely, a pod is terminated by Kubernetes in a way that skips its preStop hook, which is where we deregister the service and the health check. This can leave stray services in your UI, but it is a fairly rare corner case.
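For context, that deregistration happens via a preStop hook roughly along these lines. This is a sketch, not the exact injected configuration; the service ID shown is a placeholder.

```yaml
lifecycle:
  preStop:
    exec:
      command:
        - /bin/sh
        - -c
        # "my-app-sidecar-proxy" is a hypothetical service ID
        - consul services deregister -id=my-app-sidecar-proxy
```

If the pod is killed before this hook runs, the service and its health check remain registered in Consul until cleaned up.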
The health checking system will never remove or restart the pod itself, but you can enforce that behaviour by adding a Kubernetes liveness probe to your pods.
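If you do want Kubernetes to restart an unhealthy container, a liveness probe along these lines would do it. Again, the path and port are assumptions about your app, and the thresholds are just illustrative values.

```yaml
livenessProbe:
  httpGet:
    path: /health           # assumed health endpoint
    port: 8080
  initialDelaySeconds: 10   # give the app time to start
  periodSeconds: 10
  failureThreshold: 3       # kubelet restarts the container after 3 consecutive failures
```

Unlike the readiness probe, a failing liveness probe causes the kubelet to restart the container rather than just withholding traffic.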
I hope this helps clear up things a bit, do let me know if you have other questions!