Consul login failed

Hi everyone, I need some help here. I'm currently trying to deploy a new pod in k8s with Consul, but it fails in the init container. Here are the logs I got from the pod's init container (consul-connect-inject-init); my pod runs on a consul-client node:

$ kubectl logs POD_NAME -n POD_NAMESPACE -c consul-connect-inject-init

2022-01-06T08:18:03.685Z [ERROR] Consul login failed; retrying: error="error logging in: Unexpected response code: 500 (rpc error making call: rpc error making call: lookup failed: [invalid bearer token, unknown])"

2022-01-06T08:18:04.699Z [ERROR] Consul login failed; retrying: error="error logging in: Unexpected response code: 500 (rpc error making call: lookup failed: [invalid bearer token, unknown])"

2022-01-06T08:18:05.860Z [ERROR] Consul login failed; retrying: error="error logging in: Unexpected response code: 500 (rpc error making call: rpc error making call: lookup failed: [invalid bearer token, unknown])"

2022-01-06T08:18:06.874Z [ERROR] Consul login failed; retrying: error="error logging in: Unexpected response code: 500 (rpc error making call: lookup failed: [invalid bearer token, unknown])"

2022-01-06T08:18:06.874Z [ERROR] Hit maximum retries for consul login: error="error logging in: Unexpected response code: 500 (rpc error making call: lookup failed: [invalid bearer token, unknown])"

I also get many WARN logs and some INFO logs from my Consul server:

[WARN]agent.server.rpc: RPC request for DC is currently failing as no path was found: datacenter= method=Internal.ServiceDump

[INFO]agent.server.memberlist.lan: memberlist: Suspect NODE_NAME has failed, no acks received

Currently, my Consul client connects to a Consul server on a different k8s cluster.

I've experienced this before, and I can fix it by reinstalling the Consul client, but it happens again after 3-4 days. How do I fix this permanently so I don't need to reinstall Consul every 3-4 days?

Thank you for your replies.

Hi @wendy.thedy, could you post the YAML file for this deployment/pod?

The WARN message you're getting is not an issue per se. Consul first tries to reach other Consul nodes over UDP; when that fails (perhaps a firewall?), Consul tries TCP next, hence the warning.
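A UDP probe can't be meaningfully tested from outside (there is no handshake), but you can at least confirm that the Serf LAN port is reachable over TCP. A minimal sketch, assuming the default Serf LAN port 8301 and a placeholder host you'd replace with your Consul server address:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Serf LAN defaults to 8301; replace the host with your Consul server address.
print(tcp_reachable("127.0.0.1", 8301))
```

If TCP also fails here, the memberlist "no acks received" suspicion is almost certainly a network/firewall problem rather than a Consul one.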

Hey @wendy.thedy

What is the k8s version you’re using?

I know that as of k8s 1.21, service account tokens expire and are rotated for you by Kubernetes. It looks like the token you're using to log in may have expired, and when Consul tries to validate it with Kubernetes, it gets this error.

The issue disappeared when I upgraded my Consul server to the latest version. I was using k8s version 1.19, BTW. Thanks for the response.