@banks I've forked @nic's consul-demo-traffic-splitting repo and added an Edge proxy example.
The two key configuration pieces of the example are:
- the custom envoy.yaml that configures the public listeners on ports 80 / 443, plus a static list of HTTP host / path routes.
- the services.hcl file that defines the edge-proxy service in Consul and all of its upstream clusters (see the sketch after this list).
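For reference, a registration like that generally takes this shape. This is an illustrative sketch only - the ports, bind ports, and the sidecar layout are my assumptions, not a copy of the repo's services.hcl:

    # Illustrative sketch - not the repo's actual services.hcl.
    # Ports and local_bind_port values are assumptions.
    service {
      name = "edge-proxy"
      port = 8080

      connect {
        sidecar_service {
          proxy {
            upstreams = [
              {
                destination_name = "web"
                local_bind_port  = 9091
              },
              {
                destination_name = "api"
                local_bind_port  = 9092
              }
            ]
          }
        }
      }
    }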
It had been months since I first worked on this Edge proxy configuration, and when I went back and looked at the config I realized I had not used any Consul escape hatches; instead I created a custom envoy.yaml to bootstrap Envoy. In the envoy.yaml, the Envoy frontend listener is given an HTTP routing configuration that maps hostnames and/or paths to named upstream clusters. The cluster definitions, however, are not provided statically - Consul provides them dynamically. This also allows Consul to configure Envoy with the mTLS certs & keys for communication with the rest of the service mesh.
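To make that concrete, here is a trimmed sketch of such a bootstrap in Envoy v3 syntax. It is not the repo's exact file - the listener/route names are mine, and the web/api services and trust-domain UUID are taken from the examples later in this post:

    # Trimmed sketch - not the repo's exact envoy.yaml.
    static_resources:
      listeners:
      - name: public_listener
        address:
          socket_address: { address: 0.0.0.0, port_value: 443 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: edge_https
              route_config:
                name: edge_routes
                virtual_hosts:
                - name: all_hosts
                  domains: ["*"]
                  routes:
                  # These cluster names must match what Consul will push over xDS
                  - match: { prefix: "/api" }
                    route: { cluster: "api.default.dc1.internal.f9dd0678-47c1-dc0a-b2cd-6d424cf73955.consul" }
                  - match: { prefix: "/" }
                    route: { cluster: "web.default.dc1.internal.f9dd0678-47c1-dc0a-b2cd-6d424cf73955.consul" }
      # Deliberately no clusters defined here: the Consul agent supplies them
      # (along with the mTLS certs & keys) dynamically over xDS.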
The tricky part is knowing in advance what names Consul will give the upstream clusters when it generates the Envoy cluster configuration.
In Consul versions before 1.6 this was fairly predictable: an upstream service named web would be represented by an Envoy cluster configuration also called web. In Consul 1.6+, however, the cluster name includes the Consul datacenter and cluster UUID,
e.g. web.default.dc1.internal.f9dd0678-47c1-dc0a-b2cd-6d424cf73955.consul
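The UUID portion is the cluster's Connect trust domain. One convenient way to look it up - my suggestion, not necessarily how the repo does it - is the agent's CA roots API (requires jq); alternatively, list the cluster names Envoy actually received via its admin port (19000 by default):

    $ curl -s http://localhost:8500/v1/connect/ca/roots | jq -r .TrustDomain
    f9dd0678-47c1-dc0a-b2cd-6d424cf73955.consul

    $ curl -s http://localhost:19000/clusters | cut -d':' -f1 | sort -u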
It would almost be possible to achieve what I've done by configuring a custom envoy_bootstrap_json_tpl escape hatch - however, the cluster UUID is not one of the interpolated variables when Consul renders envoy_bootstrap_json_tpl, making it difficult to generate the correct upstream cluster names.
Running the Example
Steps for spinning up the example stack using docker-compose:
1 - clone the repo
git clone https://github.com/sl1pm4t/consul-demo-traffic-splitting.git
2 - use docker-compose to bring up all containers:
cd consul-demo-traffic-splitting
git checkout edge-proxy
docker-compose up -d
3 - while testing this example I almost always hit this Consul bug, which means the edge-proxy service doesn't get registered with Consul at startup, and consequently the Consul agent never supplies cluster configuration to Envoy. This shows up as a 503 response in step 4 below.
As a workaround, trigger a consul configuration reload:
docker exec -it consul-demo-traffic-splitting_edge_consul_1 consul reload
4 - attempt to browse / curl the edge proxy (listening on ports 80 & 443)
Web Service
$ curl -k https://localhost:443/
Hello World
###Upstream Data: localhost:9091###
Service V1
API Service
$ curl -k https://localhost:443/api
Service V1