Communication between a K8s cluster and a legacy VM

I’m trying to connect a VM to a K8s cluster inside the same data center for a POC.
I have already read the guide you provide (Consul Clients Outside of Kubernetes - Kubernetes | Consul by HashiCorp), but it doesn’t go into much detail.
However, without any additional configuration, none of the ports used by Consul are open outside the cluster at the K8s level, so something has to be done to make this work (use NodePort? add an ingress controller?).
The guide doesn’t give any clue about that part.

I’ve got the following architecture:

  • 1 Kubernetes cluster (1 master: kube1, 2 workers: kube2 & kube3) with Calico as CNI
    kube1: VM=CentOS 7, IP=192.168.131.101
    kube2: VM=CentOS 7, IP=192.168.131.102
    kube3: VM=CentOS 7, IP=192.168.131.103
  • 1 standalone VM: CentOS 7, IP=192.168.131.8
  • Consul 1.8.4 installed on the Kubernetes cluster (with your official Helm chart) and on the VM (official HashiCorp repo).
  • All VMs can communicate with each other: they are in the same network and the same DC, and there is no firewall between them.
  • However, the pods and the VM can’t see each other (10.244.0.0/16 can’t reach 192.168.131.0/24):
    $ kubectl get pod -o wide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    consul-consul-server-0 1/1 Running 0 17d 10.244.2.15 kube3
    consul-consul-vxzn6 1/1 Running 0 17d 10.244.1.18 kube2
    consul-consul-xxchw 1/1 Running 0 17d 10.244.2.14 kube3

In this configuration, what can be done?

Hi @lebrisg,

Consul requires that pods and VMs have direct IP connectivity to one another in order for service-to-service communication to function within the service mesh.
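
A quick way to check whether that requirement is already met, using the pod and VM IPs from your output (this assumes ICMP isn’t filtered and that the container image includes ping):

    # From inside the cluster: can a Consul server pod reach the VM?
    kubectl exec consul-consul-server-0 -- ping -c 3 192.168.131.8

    # From the VM: can it reach a pod IP directly (not via a node IP or NodePort)?
    ping -c 3 10.244.2.15    # pod IP of consul-consul-server-0

If both directions work, the mesh has the connectivity it needs; if not, the pod network is not yet routable from the VM.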

Calico does have the ability to provide Pod IP routability outside of the cluster. You’ll want to configure Calico appropriately for your network environment. Once this is done, and the IPs are reachable, pod-to-VM communication should work.
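
For reference, here is a minimal sketch of what that Calico configuration could look like, peering the cluster with an external BGP speaker so the 10.244.0.0/16 routes get advertised outside the cluster. The AS numbers and peer IP are placeholders; the peer could be a top-of-rack router or, for a POC on a flat L2 network like yours, BIRD/FRR running on the VM itself:

    # Cluster-wide BGP settings (applied with calicoctl apply -f).
    apiVersion: projectcalico.org/v3
    kind: BGPConfiguration
    metadata:
      name: default
    spec:
      asNumber: 64512              # placeholder AS number for the cluster nodes
    ---
    # Peer every node with an external BGP speaker that serves the VM's network.
    apiVersion: projectcalico.org/v3
    kind: BGPPeer
    metadata:
      name: external-peer
    spec:
      peerIP: 192.168.131.8        # placeholder: IP of the external BGP speaker
      asNumber: 64513              # placeholder: the peer's AS number

The external side then needs a BGP daemon (BIRD, FRR, or a router) to accept those routes, and the Calico IP pool settings matter too: with natOutgoing enabled, pod traffic to 192.168.131.0/24 is SNATed to the node IP rather than arriving with its pod IP.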

Configuring Calico with BGP isn’t easy (at least not for me).
I don’t want to use NodePort (it’s relatively insecure). I tried MetalLB, but it doesn’t allow exposing TCP and UDP at the same time (Consul requires both).
Is there any other network configuration you’re aware of that provides the same kind of capability? What do you use when you build a POC on this subject?

Hi @lebrisg,

This type of network connectivity can be achieved with various cloud providers’ managed Kubernetes distributions when using the right CNI.

There are other CNI plugins that offer L2 connectivity and can be used on-premises. See Cluster Networking | Kubernetes.