Is Consul Transparent Proxy supported in a VM service mesh?

Hi, we find Transparent Proxy really useful, but the docs only show how to use it in Kubernetes. We want to enable Consul service mesh on VM-based services, so we want to confirm: is Consul Transparent Proxy supported on VMs?
And to double-check: when using transparent proxy in Kubernetes, is it correct that a downstream service can connect to an upstream by its KubeDNS hostname, without needing explicit upstream configuration in the downstream service's code?

Hi @hxidkd, it is possible to use transparent proxy on VMs, but it’s not currently documented as the user experience needs improvement to be on par with Consul on Kubernetes.

I gave a demo of this during the last Consul community office hours https://youtu.be/pJCprMwfUPw. The code shown in the video can be found at https://github.com/blake/ansible-collection-consul and https://github.com/blake/vagrant-consul-tproxy/tree/fake-service-l7/examples/vagrant/prebuilt-image.

I plan to eventually write a blog post or short docs showing how to use tproxy on VMs, and how to access K8s services by their KubeDNS names.


Hi @blake, I’m trying to get this working with VMs. I’ve deployed a single test service for now, but this VM will later host multiple services together. I’m confused about how to configure consul connect redirect-traffic for my test service. The service listens on port 5000 and its sidecar Envoy proxy has a listener on port 20000. I’m running Consul as the root user for now.

Hi @narendrapatel,

At the moment the process I’ve documented only works when there is a single application running on a VM. I’ve done some exploration on how to run multiple apps on a single machine, with each app separated by namespaces. I don’t yet have a working solution for this, but I’ll publish some info on my GitHub once I figure it out.
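For the single-app case, here is a hedged sketch of the redirection step, using the port numbers from the question above (app on 5000, sidecar inbound listener on 20000). The proxy ID and the `envoy` user are assumptions; substitute the names from your own registration:

```shell
# Hedged sketch: redirect all of this VM's traffic through the sidecar Envoy.
# Assumes a registered sidecar proxy named "web-sidecar-proxy" and an "envoy"
# user running the proxy. Must run as root, since it rewrites iptables rules.
consul connect redirect-traffic \
  -proxy-id="web-sidecar-proxy" \
  -proxy-uid="$(id -u envoy)" \
  -proxy-inbound-port=20000 \
  -proxy-outbound-port=15001
```

The `-proxy-uid` exclusion is what lets Envoy's own traffic bypass the redirect rules instead of looping back into itself.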


Hi @blake ,

Thanks for the reply :slight_smile:
Eagerly awaiting your working solution, but namespaces are an Enterprise feature. Is there anything we can do on the OSS front?

Also, is there any roadmap to make transparent proxy generally available for non-Kubernetes, VM-based services? At present the documentation around it and consul connect redirect-traffic is a bit unclear.

Hi @narendrapatel,

My apologies, I wasn’t clear in my previous post. I was actually referring to Linux network namespaces, not Consul Enterprise namespaces. Kubernetes provisions a separate network namespace for each pod, and Consul’s redirect-traffic command modifies the iptables rules within that namespace to redirect traffic to Envoy.

I am looking to document a method to provision individual applications into their own net NS when deploying the apps on a single Linux VM.
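As a rough, untested sketch of that direction: each app gets its own network namespace, and the iptables redirection is applied inside that namespace only, so the rules for one app never touch another's traffic. All names, addresses, and ports below are illustrative:

```shell
# Untested sketch: isolate one app in its own Linux network namespace,
# then apply Consul's iptables redirection inside that namespace only.
ip netns add app1
ip link add veth-app1 type veth peer name eth0 netns app1
ip netns exec app1 ip addr add 10.0.0.2/24 dev eth0
ip netns exec app1 ip link set eth0 up

# Run the redirect command within the namespace so the rules are scoped to it.
ip netns exec app1 consul connect redirect-traffic \
  -proxy-id="app1-sidecar-proxy" \
  -proxy-uid="$(id -u envoy)" \
  -proxy-inbound-port=20000 \
  -proxy-outbound-port=15001
```

This mirrors what Kubernetes does per pod; the hard part left out here is routing between the namespaces and the host, which is what still needs working out.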


@blake Thx for these explanations. There are three deployment scenarios here:

  • services running only in Kubernetes (covered by the documentation)
  • services running between VMs (helped by your explanations, but not in the documentation)
  • services running on Kubernetes that need to reach a service such as a database on a VM.

My problem is with the last case. I enabled transparent proxy on my test database, but my pods are not able to reach the database service.
Do we have to stick to explicit Consul upstreams in this case, or is there a solution?
Any plans for official documentation?

Thx.

@Lord-Y Did you assign a virtual IP to the database service? Are your services inside of Kubernetes trying to access this database by its assigned virtual IP? If so, they should be able to successfully connect to that service running on the VM.

FYI, Consul 1.11 now automatically allocates virtual IPs (https://www.consul.io/docs/discovery/dns#service-virtual-ip-lookups) to services registered in the mesh. You can use these IPs to reach services running on VMs and K8s, in addition to using the Kube-allocated ClusterIPs for Kubernetes-based services.
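To check which virtual IP Consul allocated to a mesh service, you can query the agent's DNS interface. This assumes the default DNS port 8600 and a mesh-registered service named "db" (substitute your own service name):

```shell
# Look up the Consul-allocated virtual IP for a mesh service (Consul 1.11+).
dig @127.0.0.1 -p 8600 db.virtual.consul +short
```

If tproxy is working, connecting to the returned IP from a downstream in the mesh should transparently route through Envoy to the VM-hosted service.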

hey @blake this is fantastic work! All the scripts and systemd processes, wow. Thank you!
I’m trying to set this up myself with 2 VMs (clients) and a Consul server. It integrates with a somewhat production system guarded by firewalls, so I need to be careful. Currently I’m failing to understand, at a high level, how this is supposed to work. With gRPC, tagged addresses, and transparent mode all in the mix, I’m missing an overview of what talks to what, and through which port. The Envoy setup is very different from the built-in consul connect proxy. Is there any documentation you can send me so I can understand this better?

Hi @carlos-lehmann,

I don’t believe we have any documentation that currently covers this in sufficient detail. I’ll try my best to answer these questions here in this thread.

The grpc port exposes the xDS API to Envoy proxies. When Envoy starts, it connects to the local Consul agent (e.g., localhost:8502) on this port in order to receive its configuration from the mesh. The TCP connections to this port are always initiated by the proxy to the control plane, not the other way around. Although there is bi-directional communication between the control plane and proxy once the connection is open.
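Concretely, enabling this on a VM client agent is just a matter of opening the gRPC port in its configuration. A minimal fragment (the file path is illustrative; 8502 is the conventional port):

```shell
# Minimal config fragment enabling the xDS (gRPC) port on a client agent,
# which the sidecar Envoy dials at startup to fetch its configuration.
cat > /etc/consul.d/grpc.hcl <<'EOF'
ports {
  grpc = 8502
}
EOF
```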

The virtual tagged address is used to specify a fixed IP address that downstream applications within the mesh can contact to reach one or more instances of an upstream service.

For example, when Consul is deployed on Kubernetes, a Kubernetes Service’s ClusterIP is copied into this virtual tagged address. A downstream app will typically connect to an upstream using its in-cluster DNS hostname. The application will issue a DNS query for that hostname, and the KubeDNS server will return the ClusterIP in the DNS response.
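On a VM, before Consul 1.11 you could pin this tagged address yourself in the service registration. A hedged sketch, where the service name, port, and the 240.0.0.1 address are all illustrative:

```shell
# Hypothetical VM registration pinning a fixed "virtual" tagged address that
# downstream apps in the mesh can dial. With Consul 1.11+ an address is
# allocated automatically under the "consul-virtual" tagged address instead.
cat > /etc/consul.d/db.hcl <<'EOF'
service {
  name = "db"
  port = 5432
  tagged_addresses {
    virtual {
      address = "240.0.0.1"
      port    = 5432
    }
  }
  connect {
    sidecar_service {}
  }
}
EOF
```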

The downstream will then initiate an outgoing TCP connection to the upstream IP. That connection will be redirected using iptables rules to the local Envoy proxy which is listening on port 15001 (consul connect redirect-traffic -proxy-outbound-port). Envoy will match the destination IP (i.e., cluster IP) to the correct upstream service (or cluster, in Envoy terminology), and then initiate a mTLS connection to the Envoy proxy associated with one of the available upstream service instances (or endpoints, in Envoy terminology).

As I mentioned earlier, Consul 1.11 now automatically allocates virtual IPs to services, which it then stores in the new consul-virtual tagged address, so you no longer need to manually configure a virtual address if you are attempting to use tproxy on VMs. (That is, unless you have a specific reason to define your own upstream service IPs.)

I hope this makes sense. Let me know if you have any other questions. If things still aren’t clear, I can try to draw up some diagrams to help you understand these concepts.