Multi-datacenter Consul with Kubernetes

Hello, I’m having difficulties setting up a multi-datacenter Consul cluster where the servers run inside the Kubernetes cluster. I have a load balancer that sends requests to the servers inside the cluster, but the IPs of the individual servers are not exposed, only the load balancer’s IP. How is WAN join supposed to work with Kubernetes? Do I need to expose the servers as NodePorts and/or use hostPort? I’m asking because consul-helm sets the servers up behind a headless service that is not exposed externally.
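For context, WAN federation expects each server in one datacenter to be able to reach the Serf WAN port (8302 TCP and UDP by default) of the servers in the other datacenter. A minimal sketch of the server config involved, with hypothetical addresses for the remote datacenter’s servers:

```json
{
  "datacenter": "dc1",
  "server": true,
  "retry_join_wan": ["203.0.113.10", "203.0.113.11", "203.0.113.12"],
  "ports": {
    "serf_wan": 8302
  }
}
```

The addresses in `retry_join_wan` must be reachable from the other datacenter, which is exactly the problem when the servers sit behind a single shared load balancer or a headless service.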

BTW, I was referred here from GitHub to ask questions, but I haven’t gotten any answers so far.

Thanks


Hi Filinto,
There’s no built-in way in the Helm chart to make this work (at least right now). You can either use NodePorts/hostPorts or create a LoadBalancer for each server. If you create a LoadBalancer per server, note that there will be errors in the logs about UDP not working, but Consul falls back to TCP, so the join will still succeed.
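A per-server LoadBalancer could be sketched roughly like this, assuming the chart’s StatefulSet names the pods `consul-server-0`, `consul-server-1`, etc. (the Service and port names here are hypothetical). Each Service selects exactly one pod via the `statefulset.kubernetes.io/pod-name` label:

```yaml
# One of these Services per server pod; this one exposes consul-server-0.
# Only TCP is exposed here, since UDP over a LoadBalancer often fails
# and Consul falls back to TCP for WAN gossip anyway.
apiVersion: v1
kind: Service
metadata:
  name: consul-server-0-wan
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: consul-server-0
  ports:
    - name: serf-wan
      protocol: TCP
      port: 8302
      targetPort: 8302
```

The load balancer addresses these Services get would then be the addresses you use for the WAN join from the other datacenter.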


Watching this for future reference.

I started playing with Consul a month ago and ran into the same problem. Which approach would you actually recommend? I can’t really use LoadBalancer because I am hosting it in an on-premises Kubernetes cluster, and I am trying to federate with a Consul cluster in a Kubernetes cluster on AWS EKS. And hostPort doesn’t seem to make much sense, since following HashiCorp’s recommendation of 5 servers would require 5 worker nodes. Please help! Thank you very much!

I actually went the hostPort route. The main problem is Consul’s expectation of a flat network. If you have the Enterprise edition, they offer a partitioning feature that might work, but I’m not sure, as I’m not using it. They are supposedly working on a solution based on Consul Connect (service mesh), but I don’t know much about it either.

In AWS EKS you would need a load balancer per server, as Ikysow mentioned above, so no hostPort is needed there; on-prem, hostPort would be required unless the Consul Connect solution solves the problem somehow. On-prem you could also use something like kube-router or another CNI that could do the load balancing for you. Even though 5 servers are recommended, 3 would also work; it depends on your load. You could start with 3 and add the other two later if you don’t have enough nodes to start with.
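For the hostPort approach, a minimal sketch of the fragment you’d patch into the server pod spec (the container name is whatever the chart uses; everything here is illustrative, not the chart’s actual values):

```yaml
# Pod spec fragment: bind Serf WAN on the worker node's own IP,
# so servers in the other datacenter can reach it directly.
containers:
  - name: consul
    ports:
      - containerPort: 8302
        hostPort: 8302
        protocol: TCP
      - containerPort: 8302
        hostPort: 8302
        protocol: UDP
```

Since each hostPort can be bound only once per node, this is why running 5 servers this way requires 5 worker nodes.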


Thanks for the reply, filintod! Unfortunately that seems like the way to go, although it’s not very elegant. Are you currently using this workaround in production?

I am, and it works fine. The only issue I had was with UDP and conntrack, but that is related to the CNI plugin we are using not removing old UDP entries from the conntrack table; since Consul servers ping each other very frequently, the entries never time out and can stay there forever, creating a black hole. Again, you might be able to use Consul Connect and HashiCorp’s upcoming solution based on it (it is in a branch of the consul-helm repo: https://github.com/hashicorp/consul-helm/tree/wan-federation-base).


Also, you can check out my PoC of a multi-DC service mesh in Kubernetes here.