Hi,
I am new to Consul and, as usual, have bitten off more than I can chew!
I have a Consul service mesh installation across AWS and GCP Kubernetes clusters with each cluster being a separate datacenter.
I can see the services across the mesh from each datacenter (consul catalog services) and I have confirmed that DNS routing is working.
The application is deployed using a StatefulSet that creates 3 pods in each cluster (6 in total) and I have added the annotation consul.hashicorp.com/connect-inject: "true" to the pod template. I can see that each of the pods has the appropriate Consul sidecars.
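In case it helps, the relevant part of the StatefulSet looks roughly like this (my-app, the image and the port are placeholders, not my real names):

```yaml
# Sketch of the StatefulSet pod template (names and ports are placeholders).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Tells the Consul injector to add the Connect sidecar proxy.
        consul.hashicorp.com/connect-inject: "true"
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          ports:
            - containerPort: 8080
```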
Each pod connects to each of the other pods in a peer-to-peer network. In order to form the network I need to connect each of the pods in dc2 to each of the pods in dc1.
The problem is that each pod is exposed in Consul as an instance of a service, not as a separate service. Consequently, I cannot use a reference such as <service_name>.service.dc1.consul from a pod in dc2.
I have also tried the consul.hashicorp.com/connect-service-upstreams annotation, providing a local port, and then changed the pod to connect to that port. The result is that the application is sent the FQDN of the local pod in dc1 (i.e. .default.svc.cluster.local) which, naturally, will not resolve in the dc2 cluster.
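Roughly what I tried (the service name and port here are placeholders):

```yaml
# Sketch of the upstream annotation I tried (service name and port are placeholders).
annotations:
  consul.hashicorp.com/connect-inject: "true"
  # Exposes the upstream service on localhost:1234 inside the pod.
  consul.hashicorp.com/connect-service-upstreams: "my-app:1234"
```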
How do I connect from a pod in dc2 to a pod in dc1?
Hi and thanks for using Consul. Could you provide more context around your use case? Specifically, why do you want to reach the service in the remote datacenter rather than the local one?
The pods in the local datacenter (say dc1) peer nicely (as you would expect) but when the pods from the other datacenter (say dc2) attempt to connect to the pods in dc1 they cannot.
All the pods need to connect in order to form a cluster of 6 pods. This is part of a pod-failure survival use case. The cluster of pods can then survive the loss of a pod in either datacenter and continue to operate.
The connect-service-upstreams annotation takes an optional third argument that specifies the datacenter in which to resolve the upstream service.
A pod in dc1 configured with the annotation consul.hashicorp.com/connect-service-upstreams: "backend:1234:dc2" will always route connections received on port 1234 to a service named backend in dc2. Mesh gateways can be used to provide service-to-service connectivity between the two datacenters.
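On the Kubernetes side that would look roughly like this (backend, 1234 and dc2 are just the example values from above):

```yaml
# Sketch of the upstream annotation with the optional datacenter argument.
annotations:
  consul.hashicorp.com/connect-inject: "true"
  # <service>:<local-port>:<datacenter> -- connections to localhost:1234
  # in this pod are routed to the "backend" service in dc2.
  consul.hashicorp.com/connect-service-upstreams: "backend:1234:dc2"
```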
You can also configure explicit redirection or failover to dc2 by creating a service-resolver. See the Redirect and Failover stanzas for more info.
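If you manage Consul config entries through the Kubernetes CRDs, a service-resolver with failover to dc2 might look roughly like this (the service name backend is a placeholder; a redirect stanza could be used instead of failover for an unconditional redirect):

```yaml
# Sketch of a ServiceResolver config entry that fails over to dc2
# (the service name "backend" is a placeholder).
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: backend
spec:
  failover:
    # "*" applies the failover policy to all service subsets.
    "*":
      datacenters: ["dc2"]
```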
I have tried using consul.hashicorp.com/connect-service-upstreams. What I see is that the pod receives back a list of IPs/FQDNs local to the dc1 service which, naturally, are of no use in the dc2 pod.
I have the same problem or requirement. More specifically, I want to connect MongoDB pods across Kubernetes clusters so that they form a single ReplicaSet.
I could expose every pod via a NodePort Service (see the sketch at the end of this post). I would prefer to use Consul federation and be able to address each pod individually, as @cjireland suggested: <instance>.<service>.service.dc1.consul.
Is this somehow possible? Otherwise, Consul would not really be of use for me, I'm afraid.
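For reference, the NodePort fallback I mentioned would mean one Service per pod, selecting on the pod-name label that the StatefulSet controller adds, roughly like this (names and ports are placeholders):

```yaml
# Sketch of a per-pod NodePort Service (names and ports are placeholders).
apiVersion: v1
kind: Service
metadata:
  name: mongodb-0
spec:
  type: NodePort
  selector:
    # Label set automatically by the StatefulSet controller on each pod.
    statefulset.kubernetes.io/pod-name: mongodb-0
  ports:
    - port: 27017
      targetPort: 27017
      nodePort: 30017
```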