Kubernetes - Service Mesh - Access to service instances

Hi,
I am new to Consul and, typically, have bitten off more than I can chew!

I have a Consul service mesh installation across AWS and GCP Kubernetes clusters with each cluster being a separate datacenter.

I can see the services across the mesh from each datacenter (consul catalog services) and I have confirmed that DNS routing is working.

The application is deployed using a StatefulSet that creates 3 pods in each cluster (6 in total), and I have added the annotation `consul.hashicorp.com/connect-inject: 'true'`. I can see that each of the pods has the appropriate Consul sidecars.
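For reference, the annotation sits in the StatefulSet's pod template; a rough sketch (the names and image here are illustrative, not my actual manifest):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-aws                # illustrative name
spec:
  replicas: 3
  serviceName: app-aws
  selector:
    matchLabels:
      app: app-aws
  template:
    metadata:
      labels:
        app: app-aws
      annotations:
        # Tells the Consul injector to add the Connect sidecar proxies
        consul.hashicorp.com/connect-inject: 'true'
    spec:
      containers:
        - name: app
          image: example/app:latest   # placeholder image
```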

Each pod connects to each of the other pods in a peer-to-peer network. In order to form the network I need to connect each of the pods in dc2 to each of the pods in dc1.

The problem is that each pod is exposed in Consul as an instance of a service not as a separate service. Consequently I cannot use a reference such as <service_name>.service.dc1.consul from a pod in dc2.

I have also tried `consul.hashicorp.com/connect-service-upstreams`, providing a local port and then changing the pod to connect to that port. The result is that the application is sent the FQDN of the local pod in dc1 (i.e. .default.svc.cluster.local) which, naturally, will not resolve in the dc2 cluster.

How do I connect from a pod in dc2 to a pod in dc1?

Thanks!


Hi and thanks for using Consul. Could you provide more context around your use case? Specifically, why do you want to reach the service in the remote datacenter rather than the local one?

Hi Derek,

Thanks for responding.

The pods in the local datacenter (say dc1) peer nicely (as you would expect) but when the pods from the other datacenter (say dc2) attempt to connect to the pods in dc1 they cannot.

All the pods need to connect in order to form a cluster of 6 pods. This is part of a pod-failure survival use case. The cluster of pods can then survive the loss of a pod in either datacenter and continue to operate.

Thanks,
Chris

The connect-service-upstreams annotation takes an optional third argument to specify the data center in which to resolve the upstream service.

A pod in DC1 configured with the annotation `consul.hashicorp.com/connect-service-upstreams: 'backend:1234:dc2'` will always route connections received on port 1234 to a service named backend in dc2. Mesh gateways can be used to provide service-to-service connectivity between the two datacenters.
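For example, in the pod template of a Deployment or StatefulSet the annotation might look like this (the service name, port, and datacenter name are placeholders):

```yaml
template:
  metadata:
    annotations:
      consul.hashicorp.com/connect-inject: 'true'
      # Route connections to local port 1234 to the "backend"
      # service in dc2, via the mesh gateways between the datacenters.
      consul.hashicorp.com/connect-service-upstreams: 'backend:1234:dc2'
```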

You can also configure explicit redirection or failover to DC2 by creating a service-resolver. See the Redirect and Failover stanzas for more info.

Example config entries which utilize these stanzas can be found in the service resolver docs, or in this post on Implicit connection across datacenters - #4 by blake.
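As a sketch, a service-resolver that fails over to dc2 could look like the following, written here as a consul-k8s ServiceResolver CRD (available in consul-k8s versions that support config-entry CRDs; the service and datacenter names are placeholders):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: backend              # must match the name of the service being resolved
spec:
  failover:
    '*':                     # applies to all subsets of the service
      datacenters:
        - dc2                # try dc2 when no healthy local instances remain
```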

Thanks Blake.

I have tried using consul.hashicorp.com/connect-service-upstreams. What I see is that the pod receives back a list of IPs/FQDNs local to the dc1 service which, naturally, are of no use in the dc2 pod.

So, for example:

'consul.hashicorp.com/connect-service-upstreams': '<service>:26999:eks_consul_dc1'

and then changing my app (pod) to connect to port 26999:

--join 127.0.0.1:26999

The app receives back an address such as:

app-aws-1.app-aws.default.svc.cluster.local:26257

Where

  1. app-aws-1 is an instance of the service app-aws; and
  2. the FQDN is local to the dc1 pod so cannot be used in the dc2 pod

If I instead try <service>.service.eks_consul_dc1.consul, e.g.:

--join app.service.eks_consul_dc1.consul

The pod in dc2 gets back the IP of the pod in dc1 rather than an FQDN. This again is of no use to the pod in dc2 because the IP is local to dc1.

I’d really like to be able to reference a pod in dc1, as part of its StatefulSet, directly from dc2, e.g.:

<instance>.<service>.service.dc1.consul

Perhaps this is an L7 concept, along with ServiceResolver?

Also, can I define a ServiceResolver as part of my Kubernetes YAML, and if so, how would I do that?

Thanks.


I have the same problem or requirement. More specifically, I want to connect MongoDB pods across Kubernetes clusters so that they form a single ReplicaSet.

I could expose every pod via a NodePort Service, but I would prefer using Consul federation and being able to address each pod individually, as @cjireland suggested: <instance>.<service>.service.dc1.consul.

Is this somehow possible? Otherwise, Consul would not really be of use to me, I’m afraid.

EDIT: let’s see 🙂