Hi, we feel the transparent proxy feature is really useful, but the docs only show how to use it in Kubernetes. We want to enable Consul service mesh for services running on VMs, so we want to confirm: is Consul's transparent proxy supported on VMs?
And to make sure we understand: when using transparent proxy in Kubernetes, is it right that a downstream service can connect to an upstream service by its KubeDNS hostname, without needing any changes in the downstream service's code?
Hi @hxidkd, it is possible to use transparent proxy on VMs, but it's not currently documented because the user experience still needs improvement to be on par with Consul on Kubernetes.
I gave a demo of this during the last Consul community office hours https://youtu.be/pJCprMwfUPw. The code shown in the video can be found at https://github.com/blake/ansible-collection-consul and https://github.com/blake/vagrant-consul-tproxy/tree/fake-service-l7/examples/vagrant/prebuilt-image.
I plan to eventually write a blog post or short docs showing how to use tproxy on VMs, as well as how to access K8s services by their KubeDNS names.
Hi @blake, I'm trying to get this working with VMs. I've deployed a single test service for now, but this VM will host multiple additional services later on. I'm confused about how I should set up `consul connect redirect-traffic` for my test service. The service listens on port 5000, and its sidecar Envoy proxy has a listener on port 20000. I'm running Consul as root for now.
Hi @narendrapatel,
At the moment the process I’ve documented only works when there is a single application running on a VM. I’ve done some exploration on how to run multiple apps on a single machine, with each app separated by namespaces. I don’t yet have a working solution for this, but I’ll publish some info on my GitHub once I figure it out.
Hi @blake,
Thanks for the reply.
I'm eagerly waiting for your working solution, but namespaces are an enterprise feature. Is there anything we can do on the OSS front?
Also, is there any roadmap for making transparent proxy generally available for non-Kubernetes, VM-based services? At present the documentation around it and `consul connect redirect-traffic` is a bit unclear.
Hi @narendrapatel,
My apologies, I wasn't clear in my previous post. I was actually referring to Linux network namespaces, not Consul namespaces. Kubernetes provisions a separate network namespace for each pod, and Consul's `redirect-traffic` command modifies the iptables rules within that namespace to redirect traffic to Envoy.
I am looking to document a method for provisioning individual applications into their own network namespaces when deploying multiple apps on a single Linux VM.
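In the meantime, for the single-application case that the demo covers, the redirect rules are installed with something like the sketch below. The proxy ID and the `envoy` user are assumptions; adjust them, along with the ports, to match your registration (in your case the sidecar's inbound listener is on 20000):

```shell
# Minimal sketch: install the tproxy iptables rules for one app on a VM.
# Must run as root, since it rewrites iptables.
#
# -proxy-uid exempts Envoy's own traffic from redirection so it doesn't loop.
# -proxy-inbound-port is the sidecar's inbound (public) listener.
# -proxy-outbound-port is where all outbound traffic is redirected.
consul connect redirect-traffic \
  -proxy-id="web-sidecar-proxy" \
  -proxy-uid="$(id -u envoy)" \
  -proxy-inbound-port=20000 \
  -proxy-outbound-port=15001
```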
@blake Thanks for these explanations. There are a few deployment scenarios here:
- services running only in Kubernetes (covered by the documentation)
- services running between VMs (helped by your explanations, but not in the documentation)
- services running on Kubernetes but trying to reach a service, such as a database, on a VM
My problem is with the last scenario. I enabled transparent proxy on my test database, but my pods are not able to reach the database service.
Do we have to stick to explicit Consul upstreams in this case, or is there a solution?
Are there any plans for official documentation?
Thanks.
@Lord-Y Did you assign a virtual IP to the database service? Are your services inside of Kubernetes trying to access this database by its assigned virtual IP? If so, they should be able to successfully connect to that service running on the VM.
FYI, Consul 1.11 now automatically allocates virtual IPs (https://www.consul.io/docs/discovery/dns#service-virtual-ip-lookups) to services registered in the mesh. You can use these IPs to reach services running on VMs and K8s, in addition to using the Kube-allocated ClusterIPs for Kubernetes-based services.
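As a quick check, you can look up a service's virtual IP through the Consul DNS interface using the `.virtual.consul` lookup format described in that doc. A minimal sketch, using a placeholder service name:

```shell
# Ask the local agent's DNS interface (default port 8600) for the virtual IP
# that Consul 1.11+ automatically allocates to a mesh-registered service.
# "database" is a placeholder service name.
dig @127.0.0.1 -p 8600 database.virtual.consul +short
```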
hey @blake this is fantastic work! All the scripts and systemd processes, wow. Thank you!
I'm trying to set this up myself with two VMs (clients) and a Consul server. It's being integrated into a somewhat production system guarded by firewalls, so I need to be careful. Currently I'm failing to understand, at a high level, how this is supposed to work. Given that we have gRPC, tagged addresses, and transparent mode in the mix, I'm missing an overview of what talks to what, and through which ports. The Envoy setup is very different from the built-in consul connect proxy. Is there any documentation you can send me so I can understand this better?
Hi @carlos-lehmann,
I don’t believe we have any documentation that currently covers this in sufficient detail. I’ll try my best to answer these questions here in this thread.
The `grpc` port exposes the xDS API to Envoy proxies. When Envoy starts, it connects to the local Consul agent on this port (e.g., `localhost:8502`) in order to receive its configuration from the mesh. The TCP connections to this port are always initiated by the proxy to the control plane, not the other way around, although there is bi-directional communication between the control plane and proxy once the connection is open.
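For example (a minimal sketch; the service name is a placeholder), the gRPC port is enabled in the agent's configuration, and the sidecar is pointed at it when launched:

```shell
# The agent must expose its gRPC (xDS) port, e.g. in the agent's HCL config:
#   ports { grpc = 8502 }
#
# Start the sidecar for a service registered as "web". Envoy dials the
# agent's gRPC port (proxy -> control plane) to fetch its configuration.
consul connect envoy -sidecar-for=web -grpc-addr=localhost:8502
```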
The `virtual` tagged address is used to specify a fixed IP address that downstream applications within the mesh can contact to reach one or more instances of an upstream service.
For example, when Consul is deployed on Kubernetes, a Kubernetes Service's ClusterIP is copied into this `virtual` tagged address. A downstream app will typically connect to an upstream using its in-cluster DNS hostname. The application will issue a DNS query for that hostname, and the KubeDNS server will return the ClusterIP in the DNS response.
The downstream will then initiate an outgoing TCP connection to the upstream IP. That connection will be redirected by iptables rules to the local Envoy proxy, which is listening on port 15001 (`consul connect redirect-traffic -proxy-outbound-port`). Envoy will match the destination IP (i.e., the cluster IP) to the correct upstream service (or cluster, in Envoy terminology), and then initiate an mTLS connection to the Envoy proxy associated with one of the available upstream service instances (or endpoints, in Envoy terminology).
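On VMs, a `virtual` tagged address can be set by hand in the service registration. A minimal sketch, with a made-up service name and IP:

```shell
# Hypothetical registration that pins a fixed virtual IP for a "db" service.
# Downstreams in the mesh can then dial 10.99.0.10:5432, and the connection
# is transparently redirected through their local Envoy proxy.
cat <<'EOF' > /etc/consul.d/db.hcl
service {
  name = "db"
  port = 5432

  tagged_addresses {
    virtual = {
      address = "10.99.0.10"
      port    = 5432
    }
  }

  connect {
    sidecar_service {}
  }
}
EOF
consul reload
```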
As I mentioned earlier, Consul 1.11 now automatically allocates virtual IPs to services, which it then stores in the new `consul-virtual` tagged address, so you no longer need to manually configure a virtual address if you are attempting to use tproxy on VMs. (That is, unless you have a specific reason to define your own upstream service IPs.)
I hope this makes sense. Let me know if you have any other questions. If things still aren’t clear, I can try to draw up some diagrams to help you understand these concepts.
Hi @blake, thanks for posting your video and GitHub links; they have really helped me out.
But I'm still struggling with one thing, and I'm not sure if you would mind helping me out. I'm trying to run a primary Kubernetes datacenter and a secondary datacenter on VMs, using mesh gateways to connect the two together. I have set up the services on both to use transparent proxy, and it works 100% when services call each other within the same DC.
But it does not work when services in different DCs try to call each other. I know at the end of your video you mentioned creating a service-resolver; I'm not sure if that's my missing piece, and if so, I don't really understand how to set that part up.
Any help would be appreciated.
Hi @zane007bloom,
Cross-DC service connectivity is not currently supported. This is mentioned under the known limitations section in the documentation for transparent proxy.
> Traffic can only be transparently proxied when the address dialed corresponds to the address of a service in the transparent proxy's datacenter. Services can also dial explicit upstreams in other datacenters without transparent proxy, for example, by adding an annotation such as `"consul.hashicorp.com/connect-service-upstreams": "my-service:1234:dc2"` to reach an upstream service called `my-service` in the datacenter `dc2`.
We are looking to add support for cross-DC transparent proxy in the second half of this year.
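For completeness, here's a sketch of where that annotation lives when applied to an existing deployment's pod template (the deployment name and port are placeholders):

```shell
# Hypothetical: add an explicit cross-DC upstream to the pod template of a
# deployment named "web". Re-created pods get the annotation, and the
# injected sidecar exposes my-service (in dc2) on localhost:1234.
kubectl patch deployment web --type merge -p '
spec:
  template:
    metadata:
      annotations:
        consul.hashicorp.com/connect-service-upstreams: "my-service:1234:dc2"
'
```

The application then dials `localhost:1234` to reach `my-service` in `dc2`.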
Are there any updates on documenting the use of transparent proxy on VMs?
Hello folks!
For reference, we have published a blog post with details on how to configure transparent proxy on VMs.
If you have any additional questions, we’ll continue to monitor this thread.