Hi,
I have a Kubernetes cluster with 2 master nodes and 3 worker nodes. I have used Helm to install a Consul setup with 3 Consul servers and 5 Consul clients running.
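For context, the Helm values were along these lines (a trimmed sketch; a values file like this, with server.replicas: 3 and client.enabled: true, matches the pod layout below, and the tolerations block is what lets the client DaemonSet also cover the master nodes):

# values.yaml (trimmed sketch)
global:
  datacenter: dc1
server:
  replicas: 3
  bootstrapExpect: 3
client:
  enabled: true
  # tolerate the master taint so clients also run on the masters
  tolerations: |
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule

[root@k8masterg2m1 autoinstall]# helm repo add hashicorp https://helm.releases.hashicorp.com
[root@k8masterg2m1 autoinstall]# helm install consul hashicorp/consul -f values.yaml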
Here is how the Consul server pods and Consul client pods are placed on the Kubernetes nodes:
[root@k8masterg2m1 autoinstall]# kubectl get po -o wide | grep consul
consul-consul-4lxtr      1/1   Running   0   103m    192.168.139.139   k8masterg2m1
consul-consul-6wv9w      1/1   Running   0   103m    192.168.118.215   k8workerg2w3
consul-consul-pc562      1/1   Running   0   103m    192.168.108.162   k8workerg2w2
consul-consul-server-0   1/1   Running   0   107m    192.168.118.214   k8workerg2w3
consul-consul-server-1   1/1   Running   0   9m15s   192.168.227.91    k8workerg2w1
consul-consul-server-2   1/1   Running   0   107m    192.168.108.161   k8workerg2w2
consul-consul-tg4kz      1/1   Running   0   103m    192.168.139.72    k8masterg2m2
consul-consul-tj7h5      1/1   Running   0   103m    192.168.227.90    k8workerg2w1
On the other side, I have installed a Consul client on a local VM, which is on the same network as the Kubernetes nodes.
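The agent on the VM runs with a minimal client configuration along these lines (a sketch; the data_dir and bind address are specific to my VM, and since retry_join is not set, the join below is done manually from one of the server pods):

# /etc/consul.d/client.hcl (sketch)
datacenter = "dc1"
data_dir   = "/opt/consul"
bind_addr  = "10.0.20.102"

[root@k8masterg1m2 autoinstall]# consul agent -config-dir=/etc/consul.d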
From one of the Consul server pods running in Kubernetes, I used the below command to join the local VM (10.0.20.102):
/ # consul join 10.0.20.102
Successfully joined cluster by contacting 1 nodes.
I can see the below output both on the VM and in the Consul pods in Kubernetes:
/ # consul members
Node                    Address               Status  Type    Build  Protocol  DC   Segment
consul-consul-server-0  192.168.118.214:8301  alive   server  1.8.1  2         dc1
consul-consul-server-1  192.168.227.91:8301   alive   server  1.8.1  2         dc1
consul-consul-server-2  192.168.108.161:8301  alive   server  1.8.1  2         dc1
k8masterg1m2            10.0.20.102:8301      alive   client  1.8.1  2         dc1
k8masterg2m1            192.168.139.139:8301  alive   client  1.8.1  2         dc1
k8masterg2m2            192.168.139.72:8301   alive   client  1.8.1  2         dc1
k8workerg2w1            192.168.227.90:8301   alive   client  1.8.1  2         dc1
k8workerg2w2            192.168.108.162:8301  alive   client  1.8.1  2         dc1
k8workerg2w3            192.168.118.215:8301  alive   client  1.8.1  2         dc1
Now, when I try to list the services from the Consul pods in Kubernetes, it works fine, as shown below:
/ # consul catalog services
consul
consul-consul-dns-default
consul-consul-server-default
consul-consul-ui-default
ha-rabbitmq-rabbitmq-ha-default
ha-rabbitmq-rabbitmq-ha-discovery-default
kubernetes-default
vault-agent-injector-svc-default
vault-internal-default
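For completeness, the CLI here is just wrapping the local agent's HTTP API, so the equivalent call from inside a pod returns the same list (assuming curl is available in the container):

/ # curl -s http://127.0.0.1:8500/v1/catalog/services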
But when I try to run the same command on the local VM, it gives the below error:
[root@k8masterg1m2 autoinstall]# consul catalog services
Error listing services: Unexpected response code: 500 (rpc error getting client: failed to get conn: rpc error: lead thread didn't get connection)
So the Consul agent running on the local VM is able to list the members, but not the services/nodes.
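From what I understand, consul members is answered from the agent's local gossip (Serf LAN) state on port 8301, while consul catalog services goes through the local agent's HTTP API and is forwarded as an RPC to one of the servers over TCP port 8300. So the error above might mean the VM can gossip with the pods but cannot open an RPC connection to port 8300 on the server pod IPs. As a quick check (a sketch; the pod IPs are taken from the output above):

[root@k8masterg1m2 autoinstall]# nc -zv 192.168.118.214 8300
[root@k8masterg1m2 autoinstall]# nc -zv 192.168.227.91 8300
[root@k8masterg1m2 autoinstall]# nc -zv 192.168.108.161 8300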
Is this the expected behavior, or is there some other configuration that has to be done to get this working?
Also, I would like to know how the communication happens between the Consul servers and a Consul agent that is outside the Kubernetes cluster.
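My current understanding of the default ports involved (please correct me if this is wrong) maps to the agent's ports configuration:

# default agent ports (for reference)
ports {
  server   = 8300   # server RPC (TCP); catalog/KV queries are forwarded here
  serf_lan = 8301   # LAN gossip (TCP + UDP); what `consul members` reflects
  http     = 8500   # local HTTP API that the CLI talks to
}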
Any help is appreciated.
Thanks in advance!!