Hello,
My team and I are trying to establish communication between [K8s service using Consul Connect] → [Nomad service using Consul Connect]. Services in both systems are using Consul Connect, with K8s following How does Consul Service Mesh Work on Kubernetes? | Consul | HashiCorp Developer.
We verified that [K8s service using Consul Connect] → [K8s service using Consul Connect] works great, just as in the docs, but when we try to communicate with the [Nomad service using Consul Connect], we get DNS lookup failures.
We have verified:
- ServiceSync has services registered in K8s with type: ExternalName (see the sketch after this list)
- Intentions are allowed between [K8s service] → [Nomad service]
- The K8s service's Envoy config has values for the Nomad service, but there are more values present for the other K8s upstream that it can connect to, so something appears to be missing.
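For reference, the synced entry in K8s looks roughly like this (names are placeholders, and the .service.consul target is what catalog sync creates by default as far as we can tell):

apiVersion: v1
kind: Service
metadata:
  name: nomad-service                          # placeholder name for the synced Nomad service
spec:
  type: ExternalName
  externalName: nomad-service.service.consul   # resolved via Consul DNS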
Using the static-client from the official docs, we were hoping to run a simple:
kubectl exec deploy/static-client -- curl --silent http://<CONSUL_NAME_FOR_NOMAD_SERVICE>/
…but no good.
Anyone have any ideas what we might be missing…?
Thanks in advance!
(Additional context: Nomad services using Consul Connect are configured in the standard way, following Consul Service Mesh | Nomad | HashiCorp Developer; our current production system runs entirely on this.)
Hi @djenriquez,
Could you please share the Helm values file used to deploy consul-k8s, and the version of consul-k8s? First, considering that transparent proxy requires DNS resolution to work, the fact that you are hitting a DNS resolution error is itself a major blocker.
Here are some options you can try:
- Try without transparent proxy and see if everything works fine. This would help validate whether transparent proxy is the leading cause, and whether connectivity exists between the Envoy running on K8s and Nomad. (A sketch of the annotations for this test follows below.)
- If the above works, then switch back to using transparent proxy and use one of the approaches described in the EDIT below.
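For the first bullet, a minimal sketch assuming the standard consul-k8s connect-inject annotations (the upstream name and port are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: static-client-no-tproxy
  annotations:
    consul.hashicorp.com/connect-inject: "true"
    # Disable transparent proxy for this pod only, for testing:
    consul.hashicorp.com/transparent-proxy: "false"
    # Without transparent proxy, upstreams have to be declared explicitly (service:local-port):
    consul.hashicorp.com/connect-service-upstreams: "<nomad-service>:1234"
spec:
  containers:
    - name: static-client
      image: curlimages/curl:latest
      command: ["sleep", "infinity"]

For this test the client would then reach the Nomad service via http://localhost:1234 rather than by name.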
I hope this helps.
EDIT: the dialedDirectly option to connect to Nomad services won’t work, as Nomad at the moment doesn’t support transparent proxy.
The possible options are to define an explicit upstream or to use virtual tagged addresses. This is similar to dialing services across K8s clusters, documented here: Enable transparent proxy mode | Consul | HashiCorp Developer
Hi @Ranjadas,
So it turns out what we were able to get working was adding an annotation that explicitly defines the upstream: Annotations and Labels | Consul | HashiCorp Developer.
However, this solution is not ideal, as we have to call the service via localhost:<port>, where the port is set in the upstream annotation. Our goal was to call the service using just the service name, with http://<service_name>, just like how http://static-server/ is called in the example in the docs.
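Concretely, with the upstream annotation set to something like <nomad-service>:1234 (the port here is just a placeholder we chose), the call that works today is:
kubectl exec deploy/static-client -- curl --silent http://localhost:1234/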
From your reply, it sounds like we won’t be able to do this until transparent proxies are supported in Nomad, is that right? I’m curious why Nomad support would be needed, since the Nomad service is just like any other Consul Connect service. I could understand it if we wanted to call [Nomad service using Consul Connect] → [K8s service using Consul Connect], but we’re just looking for the other way around.
Thanks!
Hi @djenriquez,
You can still talk from a K8s Consul Connect service to a Nomad Consul Connect service using the virtual tagged address (e.g. http://static-server.virtual.consul).
If this works for you, as a workaround you can give your K8s deployments a custom search domain (e.g. virtual.consul) and start calling services like http://static-server (which would expand to static-server.virtual.consul).
ref: DNS for Services and Pods | Kubernetes
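A minimal sketch of that workaround, assuming Consul DNS is already reachable from the cluster DNS (the client pod here is just an example):

apiVersion: v1
kind: Pod
metadata:
  name: static-client
  annotations:
    consul.hashicorp.com/connect-inject: "true"
spec:
  dnsConfig:
    searches:
      - virtual.consul    # appended to the pod's search list, so http://static-server expands to static-server.virtual.consul
  containers:
    - name: static-client
      image: curlimages/curl:latest
      command: ["sleep", "infinity"]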
Could you try the above and see if it works for you?
In the case of transparent proxy with the dialedDirectly scenario (where you are not using one of the tagged addresses), when a source service calls the destination in the form of http://static-server (without even specifying a port), the request reaches the destination pod on port 80 (the default port for HTTP; you may not have anything actually listening on port 80). Due to the iptables rules on the pod, the request is re-routed to the public_listener of Envoy, and Envoy fulfils the request.
This diagram may help to understand it a bit better: Transparent proxy overview | Consul | HashiCorp Developer
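As a rough illustration of what those iptables rules do (simplified, not the exact chains consul-k8s installs, and 20000 is assumed here as the default inbound listener port):

# Inbound TCP to the pod is redirected to Envoy's inbound (public) listener:
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 20000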
Now if you try to apply the same concept for K8s → Nomad, when you call the Nomad service using http://<svc-in-nomad> (and let us assume your K8s cluster resolves the name correctly), what will happen is that the request will reach the Nomad alloc on port 80. There is nothing there that handles the re-routing of that request to the Envoy running inside the alloc.
I hope this helps.