Curious what the best way is to route traffic addressed by traditional hostnames to their containers. For example, I have a setup where systems in the environment refer to, let's say, an FTP service using the convention dev-ftp-101.dev.example.com. The FTP service is running as a Docker container in Nomad, among other services. Now, I could set up this FTP service to use a static port on the host, but that would not be ideal, and it would limit the number of instances of the service I could run on each Nomad client. Alternatively, I could use Nginx and have it route traffic to the containers using the service keyword in a Consul Template file, like so:
stream {
    upstream ftp {
        {{- range service "ftp" }}
        server {{ .Address }}:{{ .Port }};
        {{- end }}
    }

    server {
        listen 21;
        proxy_pass ftp;
    }
}
But this bypasses Consul Connect / the service mesh and sends traffic straight to whatever dynamic port Nomad allocated to the FTP container. It also requires that the service be up and running before Nginx starts; otherwise Nginx complains that no servers are defined in the upstream.
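One way to soften the startup-ordering problem (a sketch, not something from my current setup) is to use the template's range/else branch to emit a placeholder server marked down whenever Consul has no healthy instances, so the upstream block stays syntactically valid and Nginx can start:

stream {
    upstream ftp {
        {{- range service "ftp" }}
        server {{ .Address }}:{{ .Port }};
        {{- else }}
        # No healthy instances registered yet; keep the block valid.
        server 127.0.0.1:65535 down;
        {{- end }}
    }

    server {
        listen 21;
        proxy_pass ftp;
    }
}

Connections made before the service registers still fail, of course, but Nginx no longer refuses to boot.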
Or, I can leverage Consul Connect by running:
consul connect proxy -service forwarder -upstream ftp:21
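If the proxy-running host is itself a Nomad client, the equivalent of that CLI command can also be declared in the Nomad job's service stanza (a sketch reusing the names from my example, assuming Connect-native sidecars are available):

service {
  name = "forwarder"

  connect {
    sidecar_service {
      proxy {
        upstreams {
          # Bind the Connect upstream "ftp" to port 21 on localhost,
          # mirroring: consul connect proxy -service forwarder -upstream ftp:21
          destination_name = "ftp"
          local_bind_port  = 21
        }
      }
    }
  }
}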
And setting the Nginx Consul Template as such:
stream {
    upstream ftp {
        server localhost:21;
    }

    server {
        listen <host_ip>:21;
        proxy_pass ftp;
    }
}
which lets us route external traffic arriving on that port to the same port bound to localhost.
With the latter approach, though, each listen port can serve only one service, because Nginx cannot route stream (TCP/UDP) traffic by destination name the way it can route HTTP requests by Host header.
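There is a partial escape hatch for TLS-wrapped protocols: the ngx_stream_ssl_preread module can route on the SNI server name, so several services could share one listen port, provided clients speak TLS (plain FTP on port 21 would not benefit). A sketch, assuming hostnames like dev-ftp-101.dev.example.com are carried in SNI and a second hypothetical service for illustration:

stream {
    # Pick an upstream based on the SNI name in the TLS ClientHello.
    map $ssl_preread_server_name $backend {
        dev-ftp-101.dev.example.com   ftps;
        dev-sftp-101.dev.example.com  sftp;   # hypothetical second service
        default                       ftps;
    }

    upstream ftps { server 127.0.0.1:21; }
    upstream sftp { server 127.0.0.1:2222; }  # hypothetical bind port

    server {
        listen 990;
        ssl_preread on;
        proxy_pass $backend;
    }
}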
I'm trying to figure out the best approach, perhaps one I have not illustrated, that will bridge systems calling traditional hostnames with dynamically scheduled containers.