Healthcheck failed on main container after integrating Consul


I have Consul installed on my k8s cluster through Helm, and have one service currently running with the connect-inject annotation set to ‘true’.
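For context, this is roughly how the service opts into the mesh — a minimal sketch of the pod template, where only the `consul.hashicorp.com/connect-inject` annotation is taken from my setup and the resource names are placeholders:

```yaml
# Hypothetical Deployment fragment; only the annotation is significant.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api            # placeholder name
spec:
  template:
    metadata:
      annotations:
        # Tells the Consul injector to add the Envoy sidecar to this pod
        consul.hashicorp.com/connect-inject: "true"
```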

Every few minutes, at random, the main container’s healthcheck fails with a 503, eventually leading to a CrashLoopBackOff.

Looking at kubectl describe pod, the healthchecks appear to have been rewritten by Consul to ports 20300/20400, instead of port 8080 as defined in my deployment YAML.

Do you know what triggers the 503 error? Looking at the app logs, there’s no indication of any error or crash, and the app’s own health check ALWAYS returns 200 under all conditions.
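For reference, the probes in the deployment YAML look roughly like this (reconstructed sketch: only port 8080 and the /health path are taken from the manifest and the describe output below; the rest is illustrative):

```yaml
# Probes as defined before Consul rewrites them to 20300/20400.
livenessProbe:
  httpGet:
    path: /health
    port: 8080
readinessProbe:
  httpGet:
    path: /health
    port: 8080
```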


    Container ID:   docker://24f9d1934f64bb2c8b44029a215c97d5d100290a86305c93a73442c534f957e7
    Image:          API-IMAGE:e2541c9
    Image ID:       docker-pullable://API-IMAGE@sha256:ae1d51c07d69d15651bfdc1f4bb7a59e74bd78dc9a08fbbed34ff531fac3f0f0
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 28 Jul 2022 19:03:14 +0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 28 Jul 2022 19:00:10 +0700
      Finished:     Thu, 28 Jul 2022 19:02:53 +0700
    Ready:          True
    Restart Count:  2
    Limits:
      cpu:  750m
    Requests:
      cpu:  150m
    Liveness:     http-get http://:20300/health delay=5s timeout=5s period=5s #success=1 #failure=1
    Readiness:    http-get http://:20400/health delay=5s timeout=5s period=5s #success=1 #failure=1
    Environment:  <none>
    Mounts:
      /var/run/secrets/ from kube-api-access-59j6t (ro)
    Container ID:  docker://6c0fb1e86f619144ccb23205117dbdc79ceda8351c19ef84f16ce798cec635c0
    Image:         envoyproxy/envoy:v1.22.2
    Image ID:      docker-pullable://envoyproxy/envoy@sha256:1f343072a58e74644b7adc8d2d877071f846fc77166295a6d2686aee6cf58162
    Port:          <none>
    Host Port:     <none>
    State:          Running
      Started:      Thu, 28 Jul 2022 18:56:07 +0700
    Ready:          True
    Restart Count:  0
    Environment:
      HOST_IP:   (v1:status.hostIP)
    Mounts:
      /consul/connect-inject from consul-connect-inject-data (rw)
      /var/run/secrets/ from kube-api-access-59j6t (ro)

Back to square one.

I deleted the annotation, and the healthcheck is back in Envoy’s hands.
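For anyone who wants to keep connect-inject but stop the injector from rewriting the kubelet probes: as far as I can tell, consul-k8s has a per-pod annotation for exactly this — a hedged sketch, assuming a recent consul-k8s version:

```yaml
# Pod template annotations. transparent-proxy-overwrite-probes controls
# whether the injector redirects liveness/readiness probes to Envoy's
# exposed probe ports (the 20300/20400 seen above).
annotations:
  consul.hashicorp.com/connect-inject: "true"
  consul.hashicorp.com/transparent-proxy-overwrite-probes: "false"
```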

I was able to determine why the healthcheck returned 503; this is the error message:

upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111
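For what it’s worth, the trailing “111” is the Linux errno for “connection refused”, i.e. Envoy could not open a TCP connection to the upstream application (presumably port 8080 here) at that moment, so the probe’s 503 comes from Envoy rather than from the app. A quick check:

```python
import errno
import os

# errno 111 on Linux is ECONNREFUSED: the target port had no listener
# (or actively refused the connection) when Envoy dialed upstream.
print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))
```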

ANY help is appreciated. I don’t know what to do here…


Same here, any news @gvccvwangmingn2?

Why the fk have you made a product you can’t support? It has fewer features than it has bugs.
