But I can’t make it work with “transparent proxy” after commenting out the upstream configuration and using a ServiceResolver (e.g. for failover).
I tried a lot of serviceB DNS names, but they all return:
curl: (6) Could not resolve host: serviceB
command terminated with exit code 6
Cross-datacenter DNS resolution (Consul DNS lookups across WAN-federated datacenters) is not working, as stated here.
A sample objective is to deploy serviceA in dc1 and serviceB in both dc1 and dc2, allowing serviceA to use serviceB in dc1 with failover (via a ServiceResolver) to serviceB in dc2 if serviceB in dc1 is down.
serviceA in dc1 will point by default to serviceB in dc1; serviceA in dc2 will point by default to serviceB in dc2.
If serviceB in one of the datacenters goes down, serviceA in that datacenter will automatically fail over to serviceB in the other one. This works fine!
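For reference, a minimal sketch of the kind of ServiceResolver that produces this failover on Consul for Kubernetes — assuming the CRD install and the service name `serviceB`; the mirror resolver in dc2 would list `dc1` instead:

```yaml
# Applied in dc1: send serviceB traffic to dc2 when no healthy
# local instances remain ("*" matches the default subset).
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: serviceB
spec:
  failover:
    "*":
      datacenters: ["dc2"]
```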
But if I remove the upstreams and try to resolve serviceB using “transparent proxy” (pointing to “serviceB.virtual.consul”), I get intermittent resolution: sometimes the right response from serviceB (also respecting the cross-dc ServiceResolver rule), and sometimes:
curl: (6) Could not resolve host: serviceB
command terminated with exit code 6
The error you are getting seems to be due to name resolution issues. Are you using an Alpine container for serviceA? Could you switch to a non-Alpine container and see if you still have the intermittent resolution issue?
Hi @Ranjandas,
You’re right about Alpine. I now use a newer image in which search domains and TCP DNS queries are supported.
I ran many tests around Alpine/musl’s parallel nameserver queries, which cannot be disabled and which I initially thought were the issue; however, glibc also queries nameservers in parallel by default without showing this resolution issue.
Finally, I was able to simulate and reproduce the issue using “options rotate” in glibc. The /etc/resolv.conf configuration generated by Consul for Kubernetes (Helm chart) with DNS enabled and dns.enableRedirection enabled (DNS forwarding/redirection) shows the intermittent resolution issue; removing rotate makes the issue disappear (even though glibc seems to use parallel queries to the nameservers by default).
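To illustrate the reproduction, here is a sketch of a pod resolv.conf of the shape involved — all addresses and search domains are hypothetical examples, not the actual generated file:

```
# Hypothetical pod /etc/resolv.conf (addresses are made up)
nameserver 127.0.0.1       # Consul DNS redirection target
nameserver 10.96.0.10      # cluster CoreDNS service IP
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5 rotate     # "rotate" round-robins the nameservers above
```

With `rotate`, queries alternate which nameserver is tried first; if only one of them reliably answers `.consul` names, resolution of `serviceB.virtual.consul` succeeds only intermittently, matching the symptom above.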
That said, in the end I opted for a clean CoreDNS setup with the standard Kubernetes local DNS resolver config (only the Kubernetes CoreDNS service as nameserver).
It’s not a super-Kubernetes-native solution (since CoreDNS does not support referencing the Consul DNS service by name, only by a static Consul DNS service IP), but it avoids many issues arising from the differing behaviors of the various Linux resolver implementations.
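A sketch of the kind of Corefile stanza this setup relies on, per the usual Consul-on-Kubernetes DNS forwarding pattern — the ClusterIP shown is a placeholder, which is exactly the non-native part: it must be hard-coded, since CoreDNS cannot look it up by service name:

```
# Hypothetical CoreDNS Corefile fragment: forward the consul
# domain to the Consul DNS service's static ClusterIP.
consul:53 {
    errors
    cache 30
    forward . 10.96.0.53
}
```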