Auto re-register application pods to consul cluster

Hi Team,

Consider the following scenario: when the consul agent (client) pod is deleted from a k8s worker node, we see that the application pod gets de-registered.
When the consul agent comes back up and running, we noticed that the application pod does not get re-registered.

This also means the application pod will not be able to accept any traffic until it gets re-registered with consul.

While this may be tolerable for applications running with multiple replicas, it means downtime for any application running with only 1 replica.

This becomes a hard stop for us until we know the right way to mitigate this issue.
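For reference, the behavior described above can be reproduced roughly as follows. This is a sketch only: the pod names (`consul-client-xxxxx`, `consul-client-yyyyy`) and the assumption that the app is registered through the consul client daemonset are placeholders for illustration, not taken from the actual environment.

```shell
# 1. Confirm the application service is currently registered
#    (run against the consul client pod on the app's node).
kubectl exec consul-client-xxxxx -- consul catalog services

# 2. Delete that consul client pod from the worker node.
kubectl delete pod consul-client-xxxxx

# 3. After the daemonset recreates the client pod, check the catalog again.
kubectl exec consul-client-yyyyy -- consul catalog services
# Observed behavior per this report: the application service is no longer
# listed and is not re-registered, so the app pod stays unready.
```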

hey @ashwinkupatkar

If you’re using the official consul-k8s helm chart, we should re-register the service when the agent pod goes down. Which version of the helm chart are you using? Does the re-registration fail every time the client agent pod goes down, or only intermittently?

Hi @ishustava1, thanks for responding. We are using version 0.42.0 of the consul-k8s helm chart.

This issue happens every time. It’s not intermittent.

@ishustava1 could you reproduce the issue?

Hi @kschoche, can you please help look into this issue? Thanks.

Hi @ashwinkupatkar, could you file an issue on the consul-k8s repo and provide details on how to reproduce it exactly? We would need the helm config and deployment manifests to take a look.

Sure @david-yu I will do that and update here on the issue filed.

Hi @david-yu, I have filed the issue on the consul-k8s project. Here are the details: