Hi,
I have Consul installed on my k8s cluster via Helm, and one service is currently running with the annotation consul.hashicorp.com/connect-inject: "true".
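For context, this is roughly how the annotation is applied in the pod template (names and ports are placeholders standing in for my real manifest):

```yaml
# Sketch of the Deployment pod template; "api" and port 8080 are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  template:
    metadata:
      annotations:
        consul.hashicorp.com/connect-inject: "true"
    spec:
      containers:
        - name: api
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
```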
At seemingly random intervals, the main container fails its health check with a 503, eventually putting the pod into CrashLoopBackOff.
Looking at kubectl describe pod, the health checks appear to have been rewritten by Consul to ports 20300/20400 rather than the port 8080 defined in my deployment YAML.
Do you know what triggers the 503 error? The app logs show no sign of any error or crash, and the app's own health checks ALWAYS return 200 under all conditions.
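This is how I've been comparing what the injector rewrote against my own manifest (pod name is a placeholder, and the curl check assumes curl exists in the image):

```shell
# Show the probes on the live pod spec (pod name is a placeholder)
kubectl get pod api-xxxx -o jsonpath='{.spec.containers[0].livenessProbe}'
kubectl get pod api-xxxx -o jsonpath='{.spec.containers[0].readinessProbe}'

# Hit the rewritten health endpoint from inside the main container
# (assumes curl is available in the image)
kubectl exec api-xxxx -c api -- curl -sv http://localhost:20300/health
```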
Thanks!
Containers:
  api:
    Container ID:   docker://24f9d1934f64bb2c8b44029a215c97d5d100290a86305c93a73442c534f957e7
    Image:          API-IMAGE:e2541c9
    Image ID:       docker-pullable://API-IMAGE@sha256:ae1d51c07d69d15651bfdc1f4bb7a59e74bd78dc9a08fbbed34ff531fac3f0f0
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 28 Jul 2022 19:03:14 +0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 28 Jul 2022 19:00:10 +0700
      Finished:     Thu, 28 Jul 2022 19:02:53 +0700
    Ready:          True
    Restart Count:  2
    Limits:
      cpu:  750m
    Requests:
      cpu:  150m
    Liveness:     http-get http://:20300/health delay=5s timeout=5s period=5s #success=1 #failure=1
    Readiness:    http-get http://:20400/health delay=5s timeout=5s period=5s #success=1 #failure=1
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59j6t (ro)
  envoy-sidecar:
    Container ID:  docker://6c0fb1e86f619144ccb23205117dbdc79ceda8351c19ef84f16ce798cec635c0
    Image:         envoyproxy/envoy:v1.22.2
    Image ID:      docker-pullable://envoyproxy/envoy@sha256:1f343072a58e74644b7adc8d2d877071f846fc77166295a6d2686aee6cf58162
    Port:          <none>
    Host Port:     <none>
    Command:
      envoy
      --config-path
      /consul/connect-inject/envoy-bootstrap.yaml
      --concurrency
      2
    State:          Running
      Started:      Thu, 28 Jul 2022 18:56:07 +0700
    Ready:          True
    Restart Count:  0
    Environment:
      HOST_IP:   (v1:status.hostIP)
    Mounts:
      /consul/connect-inject from consul-connect-inject-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59j6t (ro)