Routing Traditional Hostnames to Consul Connect Services

I'm curious what the best way is to route traffic addressed with traditional hostnames to the containers behind them. For example, I have a setup where systems in the environment refer to, let's say, an FTP service using the convention dev-ftp-101.dev.example.com. The FTP service is running as a Docker container in Nomad, among other services. Now, I could give this FTP service a static port on the host, but that would not be ideal, and it would limit the number of instances of the service I could run on each Nomad client. Alternatively, I could use Nginx and have it route traffic to the containers using the service keyword in a Consul Template file, like so:

stream {
  upstream ftp {
    # One server entry per healthy instance registered in Consul
    {{- range service "ftp" }}
    server {{ .Address }}:{{ .Port }};
    {{- end }}
  }

  server {
    listen 21;
    proxy_pass ftp;
  }
}

But this bypasses Consul Connect / Service Mesh and goes straight to whatever dynamic port Nomad allocated to the FTP container. It also requires that the service be up and running before Nginx starts, or Nginx will complain that no server is defined in the upstream.

Or, I can leverage Consul Connect by running:

consul connect proxy -service forwarder -upstream ftp:21

And setting the Nginx Consul Template as such:

stream {
  upstream ftp {
    # Local listener opened by `consul connect proxy`
    server localhost:21;
  }

  server {
    # Bind to the host IP; localhost:21 is already taken by the proxy
    listen <host_ip>:21;
    proxy_pass ftp;
  }
}

which allows us to route external traffic on this port to the same port bound to localhost.

With the latter approach, you are limited to one service per port, because Nginx doesn't let you route stream (TCP/UDP) traffic by destination name the way you can with HTTP requests.

I'm trying to figure out the best approach, perhaps one I have not illustrated, for bridging systems that call traditional hostnames with the containers behind them.

You can make Nginx part of the mesh and configure it to connect directly to Connect-enabled services. The tutorial Proxy Ingress to Consul Service Mesh walks through this in detail. The connect, caRoots, and caLeaf parameters referenced in the tutorial are documented at https://github.com/hashicorp/consul-template/blob/main/docs/templating-language.md.
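
For reference, a minimal sketch of that approach as it would apply to the ftp example; the service name nginx used for the leaf certificate and the file paths are illustrative assumptions. Three templates, each rendered to its own file (e.g. ca.pem, cert.pem, cert.key), produce the Connect CA roots plus a leaf certificate and key that identify Nginx to the mesh:

{{ range caRoots }}{{ .RootCertPEM }}{{ end }}

{{ with caLeaf "nginx" }}{{ .CertPEM }}{{ end }}

{{ with caLeaf "nginx" }}{{ .PrivateKeyPEM }}{{ end }}

The stream template then uses connect instead of service, which returns the Connect-capable endpoints (the sidecar proxies) rather than the raw container ports, and originates mutual TLS using the rendered certs:

stream {
  upstream ftp {
    # Sidecar proxy endpoints for the ftp service
    {{- range connect "ftp" }}
    server {{ .Address }}:{{ .Port }};
    {{- end }}
  }

  server {
    listen 21;

    # Originate mTLS to the sidecars with the rendered Connect certs
    proxy_ssl on;
    proxy_ssl_certificate         /etc/nginx/certs/cert.pem;
    proxy_ssl_certificate_key     /etc/nginx/certs/cert.key;
    proxy_ssl_trusted_certificate /etc/nginx/certs/ca.pem;
    # proxy_ssl_verify is left off here: Connect certs identify services
    # by SPIFFE URI, which Nginx's hostname check doesn't understand

    proxy_pass ftp;
  }
}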

You could potentially use a conditional statement here to only render the server block if there are registered instances of the ftp service. An example config can be found here: https://github.com/hashicorp/consul-template/tree/main#multi-phase-execution.
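
A sketch of what that could look like, built on the earlier stream config (untested, but standard Go template logic):

stream {
  {{- if service "ftp" }}
  upstream ftp {
    {{- range service "ftp" }}
    server {{ .Address }}:{{ .Port }};
    {{- end }}
  }

  server {
    listen 21;
    proxy_pass ftp;
  }
  {{- end }}
}

With no registered instances, the template renders an empty stream block, so Nginx starts cleanly instead of failing on an empty upstream.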

@blake, thanks! This was very helpful and I was able to get an example working. However, I noticed that when I route HTTPS to an HTTPS upstream via the proxy, which itself uses HTTPS, I get odd results.

For example, the flow would look something like: frontend.example.com:443 (Nginx) => connect-frontend-proxy (Sidecar Proxy) => localhost:443 (Frontend container). When I do this over HTTP, everything works fine, but it breaks down over HTTPS. Not exactly sure what is going on there, but I am guessing the SSL certificate and key are not getting propagated to the container when localhost:443 is called internally via the connect-frontend-proxy sidecar proxy.

Here is an example Nginx config:

upstream frontend {
  # Connect-capable endpoints (the sidecar proxies) for the frontend service
  {{- range connect "frontend" }}
  server {{.Address}}:{{.Port}};
  {{- end }}
}

server {
  listen 443 ssl;
  server_name frontend.example.com;

  ssl_certificate frontend/certificate.crt;
  ssl_certificate_key frontend/certificate.key;

  location / {
    # Connect leaf cert, key, and CA used for mutual TLS to the sidecars
    proxy_ssl_certificate proxy/cert.pem;
    proxy_ssl_certificate_key proxy/cert.key;
    proxy_ssl_trusted_certificate proxy/ca.crt;

    proxy_pass https://frontend;
  }
}

With this configuration, Nginx reports "upstream sent no valid HTTP/1.0 header while reading response header from upstream" in its error logs.

Here is the request information I am seeing across the service logs:

Nginx receives the inbound request:

10.50.20.244 - - [05/Feb/2023:19:08:13 +0000] "GET / HTTP/1.1" 009 229 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"

Frontend (which is also listening on HTTPS) receives the proxied request from Nginx:

- 127.0.0.1 - - [05/Feb/2023:19:09:28 +0000] "GET /" 302 229 "-" "-" - -

It is almost like all the information was stripped from the request, and passing the headers explicitly via proxy_set_header does not resolve it.

However, if I forward the request to the Frontend on port 80 instead, I get the expected results:

10.50.20.244 127.0.0.1 - - [05/Feb/2023:18:46:20 +0000] "GET / HTTP/1.0" 200 2886 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36" frontend.example.com klgrc1hsnqvbhu5aqa06d3k2e3

The HTTP and HTTPS requests are both going through the Sidecar Proxy, so that doesn’t seem to be the culprit here.

@blake, I'm not sure how to solve the client (HTTPS) => proxy (HTTPS) => frontend (HTTPS) issue. But since only a few services fall into this category, I can fall back on one of the other methods I outlined initially for now. Note that client (HTTPS) => proxy (HTTPS) => frontend (HTTP) works just fine.

Marking your response as the solution. Thanks again.

Hi there, sorry for the late reply. For this scenario, you'll want to look at using ngx_stream_ssl_preread_module on the public-facing listener. It allows you to match on SNI and route the connection to a specific backend, which then performs the TLS termination itself.
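
A minimal sketch of that pattern; the hostname is taken from the earlier example, and 127.0.0.1:21000 is a stand-in for whatever local listener forwards into the mesh:

stream {
  # Read the SNI from the ClientHello without terminating TLS here
  map $ssl_preread_server_name $backend {
    frontend.example.com  frontend;
  }

  upstream frontend {
    server 127.0.0.1:21000;
  }

  server {
    listen 443;
    ssl_preread on;

    # Pass the raw TLS stream through untouched; the frontend terminates
    # TLS itself, so its own certificate and key are used end to end
    proxy_pass $backend;
  }
}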