I’m trying to connect a VM to a Consul cluster running on Kubernetes, both in the same datacenter, for a PoC.
I have already read the guide you provide (Consul Clients Outside of Kubernetes - Kubernetes | Consul by HashiCorp), but it is light on details.
Without any additional configuration, none of the ports used by Consul are reachable from outside the Kubernetes cluster, so something extra is needed to make this work (hostPort? NodePort? an ingress controller?).
The guide doesn’t give any clue about which approach is expected.
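For context, this is the kind of Helm override I was expecting to need. It is only a sketch based on the consul-helm values reference (please correct me if the key names differ in the chart version shipping Consul 1.8.4): `client.exposeGossipPorts` is supposed to bind each client agent's gossip port on the node's host IP via hostPort, so an agent outside the cluster can reach it.

```yaml
# values.yaml for the official consul-helm chart (sketch; keys assumed
# from the chart's values reference -- verify against your chart version)
global:
  name: consul
  datacenter: dc1
client:
  enabled: true
  # Bind each client agent's gossip port (8301) on the node's
  # host IP via hostPort, so external agents can reach it.
  exposeGossipPorts: true
```

Is this the intended mechanism, or is a NodePort/LoadBalancer service preferred here?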
I’ve got the following architecture:
- 1 Kubernetes cluster (1 master: kube1, 2 workers: kube2 & kube3) with Calico as the CNI
  - kube1: VM=CentOS 7, IP=192.168.131.101
  - kube2: VM=CentOS 7, IP=192.168.131.102
  - kube3: VM=CentOS 7, IP=192.168.131.103
- 1 external VM: VM=CentOS 7, IP=192.168.131.8
- Consul 1.8.4 installed on the Kubernetes cluster (with your official Helm chart) and on the VM (official HashiCorp repo)
All VMs can communicate with each other: they are on the same network, in the same DC, and there is no firewall between them.
However, pods and the VM can’t see each other: the pod network (10.244.0.0/16) can’t reach 192.168.131.0/24, and vice versa:
$ kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
consul-consul-server-0   1/1     Running   0          17d   10.244.2.15   kube3
consul-consul-vxzn6      1/1     Running   0          17d   10.244.1.18   kube2
consul-consul-xxchw      1/1     Running   0          17d   10.244.2.14   kube3
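Assuming the gossip port becomes reachable on the node IPs, I would expect the VM-side agent config to look roughly like this (a sketch only; the node name and paths are my own choices, and the `retry_join` values are the worker node IPs from above):

```hcl
# /etc/consul.d/consul.hcl on the external VM (sketch)
datacenter = "dc1"            # must match the cluster's datacenter
data_dir   = "/opt/consul"
node_name  = "external-vm"    # arbitrary name for this agent
bind_addr  = "192.168.131.8"  # the VM's own IP

# Join via the host-exposed client gossip ports on the worker nodes
retry_join = ["192.168.131.102", "192.168.131.103"]
```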
In this configuration, what can be done to let the VM’s agent join the cluster?