Consul 1.8.0: proxy health check fails at destination alias

I moved to Consul 1.8.0 for my Kubernetes deployment, and since then the destination alias proxy health check has been failing. Any clue what might have gone wrong?
I am still using the old consul-helm charts with the Envoy deployment.

Destination Alias

ServiceName:  apiserver-sidecar-proxy
CheckID:      service:apiserver-6dbcb68646-jfp4z-apiserver-sidecar-proxy:2
Type:         alias
Notes:
Output:       Service apiserver could not be found on node

Hi, unfortunately you need to upgrade to the latest version of consul-k8s (https://github.com/hashicorp/consul-k8s/releases/tag/v0.16.0) and re-roll all of your connect-injected pods. We weren't creating the alias check properly before, and it now fails on Consul 1.8.0.
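For reference, re-rolling the connect-injected pods just means restarting them so the upgraded injector re-creates the sidecars and their checks. A minimal sketch, assuming the workload is a Deployment named apiserver (inferred from the pod name in your check output) in the default namespace:

# Trigger a rolling restart so the upgraded injector re-injects the pods
kubectl rollout restart deployment apiserver

# Wait for the re-injected pods to become Ready
kubectl rollout status deployment apiserver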

Ah, I see… Is there any workaround I can use, like disabling the alias check at a global level?

Also, if I use consul-k8s v0.16.0 to inject the Envoy sidecar into my pod, I see the resources are automatically set as shown below.
Limits:
  cpu:     50m
  memory:  25Mi
Requests:
  cpu:     50m
  memory:  25Mi

I'm wondering if I can reduce the CPU and memory requests with any override?

Hi, there's no workaround other than maybe exec'ing into every connect pod and deleting the alias check from the service config file. I'm not sure that will actually remove the alias check from the Consul client, so you might also need to make an API call to deregister the check even after changing the file.
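If you do try that, the alias check lives on the Consul client agent the pod is registered with, so the API call goes to that agent. A minimal sketch, assuming the client agent is reachable from the pod on the node IP at the default HTTP port 8500, ACLs are not enforced, and HOST_IP holds the node's IP (for example via the Kubernetes downward API); the check ID is the one from your failing check:

# Deregister the alias check on the local Consul client agent
curl --request PUT \
  "http://${HOST_IP}:8500/v1/agent/check/deregister/service:apiserver-6dbcb68646-jfp4z-apiserver-sidecar-proxy:2"

Keep in mind the agent may re-create the check from the service config file on a reload or restart, which is why editing that file as well was suggested above.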

For the resources, there is no way to override this right now. Note that the resources you posted are for the init container only. The lifecycle sidecar container only requires 25Mi of memory and 10m of CPU, and the Envoy proxy sidecar has no resources set by default.
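If you want to confirm which container a given limit belongs to, you can print the resources per container. A minimal sketch, assuming the pod name from your check output and the default namespace:

# List each init container and regular container with its resource spec
kubectl get pod apiserver-6dbcb68646-jfp4z -o \
  jsonpath='{range .spec.initContainers[*]}init {.name}: {.resources}{"\n"}{end}{range .spec.containers[*]}{.name}: {.resources}{"\n"}{end}'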

If this is a problem for you, we'd like to hear why, and we're open to making it configurable.

I am hitting OOMs on my pods that have Envoy injected via connect inject. Are the resource limits correct?

Here is an example:
Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137
  Started:      Thu, 25 Jun 2020 19:01:00 -0700
  Finished:     Thu, 25 Jun 2020 19:01:47 -0700
Ready:          False
Restart Count:  1
Limits:
  cpu:     50m
  memory:  25Mi
Requests:
  cpu:     50m
  memory:  25Mi

From kubectl describe node:

Warning  SystemOOM  63s  kubelet, fn1  System OOM encountered, victim process: cp, pid: 155829

From dmesg:

[ 4208.499935] Memory cgroup stats for /kubepods/burstable/pod96d1cc35-d499-4d1a-bed7-1226b704a709/9187086be842ca23ef51532123e36f62d648161f73fa6a10796950acc3308396:
[ 4208.499948] anon 0
file 23957504
kernel_stack 73728
slab 1556480
sock 0
shmem 0
file_mapped 0
file_dirty 23924736
file_writeback 0
anon_thp 0
inactive_anon 0
active_anon 0
inactive_file 11833344
active_file 12152832
unevictable 0
slab_reclaimable 962560
slab_unreclaimable 593920
pgfault 13794
pgmajfault 0
workingset_refault 462
workingset_activate 198
workingset_nodereclaim 0
pgrefill 46799
pgscan 67568
pgsteal 14548
pgactivate 50952
pgdeactivate 46799
pglazyfree 1089
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0
[ 4208.499949] Tasks state (memory values in pages):
[ 4208.499950] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
[ 4208.499953] [ 159035] 0 159035 381 3 32768 0 999 cp
[ 4208.499954] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=9187086be842ca23ef51532123e36f62d648161f73fa6a10796950acc3308396,mems_allowed=0,oom_memcg=/kubepods/burstable/pod96d1cc35-d499-4d1a-bed7-1226b704a709/9187086be842ca23ef51532123e36f62d648161f73fa6a10796950acc3308396,task_memcg=/kubepods/burstable/pod96d1cc35-d499-4d1a-bed7-1226b704a709/9187086be842ca23ef51532123e36f62d648161f73fa6a10796950acc3308396,task=cp,pid=159035,uid=0
[ 4208.499964] Memory cgroup out of memory: Killed process 159035 (cp) total-vm:1524kB, anon-rss:12kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:32kB oom_score_adj:999
[ 4208.502317] oom_reaper: reaped process 159035 (cp), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB