Connecting to an existing external Consul cluster from a new Kubernetes datacenter

Running into an issue where my existing cluster cannot see inside the Kubernetes cluster I have. I get a 500 error, and in my logs the GET request simply times out.
I have attempted to open ports on the ingress, and the datacenter shows up in the list in the web UI of the external cluster. If I open the UI from my Kubernetes cluster, it is able to see services listed in the external cluster. So the requests are not making it back in.

I notice it is trying to connect to the internal Pod IP, so perhaps that is the issue?

Using nginx ingress.

Any help or direction on adding a cluster in kubernetes to an existing external cluster would be great.

Hi Brandon, if you’ve deployed your servers inside of Kubernetes, then they use their Pod IP as their WAN address. The WAN address is what the external cluster will try to use, so it makes sense that this isn’t working: the Pod IP is not routable from outside Kubernetes.
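You can confirm this from inside the Kubernetes cluster. This is a sketch assuming your server pods follow the chart's default naming (e.g. a pod named `consul-server-0` in the `default` namespace — adjust names to your install):

```shell
# List WAN-joined members and the addresses they advertise.
# If the addresses in the output are Pod IPs (e.g. 10.x.x.x),
# the external datacenter has no route to them.
kubectl exec consul-server-0 -- consul members -wan
```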

Unfortunately you aren’t going to be able to use an NGINX ingress for this. Each Consul server needs to be directly routable from the other Consul servers. This means you need to make each node routable with a public IP and use a hostPort for the WAN server ports (8300 and 8302), then set the -advertise-wan flag to the node IP on each Consul server. Alternatively, you can create a LoadBalancer in front of each Consul server and set -advertise-wan to its IP.
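To illustrate the hostPort approach, here's a hypothetical fragment of the server StatefulSet spec (container name, flag list, and the `HOST_IP` variable are illustrative, not the chart's actual template). It uses the downward API to pass the node IP into the agent's -advertise-wan flag:

```yaml
# Sketch only: merge into the chart's server StatefulSet template.
spec:
  containers:
    - name: consul
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # the node's IP
      command:
        - "consul"
        - "agent"
        - "-advertise-wan=$(HOST_IP)"    # Kubernetes expands $(VAR) in args
        # ...the chart's existing flags...
      ports:
        - containerPort: 8300
          hostPort: 8300                 # server RPC
        - containerPort: 8302
          hostPort: 8302                 # WAN gossip (also needs UDP)
```

Note that WAN gossip on 8302 uses both TCP and UDP, so a matching UDP hostPort entry is needed as well.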

Unfortunately there’s currently no way to do any of this through the Helm chart. You’re going to have to edit the chart manually yourself.

I’m actually working on this as we speak and will update this thread with my progress: https://github.com/hashicorp/consul-helm/issues/28.


Is there an example you could share, or perhaps someone else who has done this manually successfully?
Would there need to be a LoadBalancer for each server pod, or for the service as a whole?

There’s this PR for using a load balancer: https://github.com/hashicorp/consul-helm/pull/27. You need one LB per Consul server pod, because each server needs its own address for gossip and RPC purposes.
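A per-pod Service can be sketched like this (names are hypothetical; see the PR above for the actual implementation). It relies on the `statefulset.kubernetes.io/pod-name` label that Kubernetes adds to each StatefulSet pod, so one Service can target exactly one server:

```yaml
# Sketch: one of these per server pod (consul-server-0, -1, -2, ...).
apiVersion: v1
kind: Service
metadata:
  name: consul-server-0-wan
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: consul-server-0
  ports:
    - name: serverrpc
      port: 8300
    - name: wan-gossip-tcp
      port: 8302
```

Each server would then advertise its own load balancer's IP via -advertise-wan. Keep in mind WAN gossip also needs UDP 8302, and some Kubernetes versions don't allow mixing TCP and UDP ports on a single LoadBalancer Service.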

If you’re able to make the node IPs routable then I’ve put up this PR: https://github.com/hashicorp/consul-helm/pull/332. This will advertise the node IPs and make the WAN ports hostPorts.