I moved to Consul 1.8.0 for my k8s deployment. Since the upgrade, I see that the destination alias proxy health check is failing. Any clue what might have gone wrong?
I am still using the old consul-helm charts with the Envoy deployment.
Hi, unfortunately you need to upgrade to the latest version of consul-k8s (https://github.com/hashicorp/consul-k8s/releases/tag/v0.16.0) and re-roll all your connect-injected pods. We weren’t creating the alias check properly before, and it now fails on 1.8.0.
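For the re-roll, one sketch (the namespace and Deployment name below are placeholders, assuming your connect-injected workloads are plain Deployments):

```shell
# Restart every Deployment in the namespace so its pods are recreated
# and re-injected by the upgraded consul-k8s injector.
NAMESPACE="default"   # placeholder; substitute your app namespace
kubectl rollout restart deployment -n "$NAMESPACE"

# Optionally watch a specific rollout finish ("my-app" is a placeholder).
kubectl rollout status deployment/my-app -n "$NAMESPACE"
```

`kubectl rollout restart` recreates the pods without editing the Deployment spec, which is enough to pick up the new injector.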
Ah, I see… Is there any workaround that I can use, like disabling the alias check at a global level?
Also, if I use consul-k8s v0.16.0 to inject the Envoy sidecar into my pod, I see the resources are automatically set as shown below.
```
Limits:
  cpu: 50m
  memory: 25Mi
Requests:
  cpu: 50m
  memory: 25Mi
```
I'm wondering if I can reduce the requested CPU and memory via any override?
Hi, there’s no workaround other than maybe exec’ing into every connect pod and deleting the alias check from the service config file. I’m not sure that will actually remove the alias check from the Consul client, so you might also need to make an API call to delete the check even after changing the file.
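If you do go the API route, a hedged sketch against the local client agent's check-deregister endpoint (the service ID and the `service:<id>:2` check-ID pattern here are assumptions; verify the real check ID with `/v1/agent/checks` first):

```shell
# List all checks on the local agent to find the failing alias check's ID.
curl --silent http://127.0.0.1:8500/v1/agent/checks

# Deregister the alias check. The ID below is a guess at the naming
# pattern; replace it with the ID you found above.
SERVICE_ID="my-app-sidecar-proxy"   # placeholder sidecar service ID
CHECK_ID="service:${SERVICE_ID}:2"  # assumed alias check ID
curl --request PUT \
  "http://127.0.0.1:8500/v1/agent/check/deregister/${CHECK_ID}"
```

Note this only removes the check from that one client, so you would need to repeat it per pod, and re-injection would recreate the check.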
For the resources, there is no way to override these right now. Note that the resources you posted are for the init container only. The lifecycle sidecar container only requests 25Mi/10m, and the Envoy proxy sidecar has no resources set by default.
If this is a problem for you, we’d like to hear why; we’re open to making it configurable.
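To confirm which container carries which resources on your side, you can dump them per container from the pod spec (`my-app-pod` is a placeholder pod name):

```shell
POD="my-app-pod"   # placeholder; substitute your injected pod's name

# Resources on init containers (this is where the 50m/25Mi values live).
kubectl get pod "$POD" \
  -o jsonpath='{range .spec.initContainers[*]}{.name}: {.resources}{"\n"}{end}'

# Resources on the main and sidecar containers (Envoy, lifecycle sidecar).
kubectl get pod "$POD" \
  -o jsonpath='{range .spec.containers[*]}{.name}: {.resources}{"\n"}{end}'
```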