Hello!
I am loving the Consul integration with Nomad, however I am struggling with how to effectively resolve Consul services from a Docker container running in Nomad.
Let’s say I have a service, webserver.service.dc1.consul, that lives in Nomad and has a dynamic port.
group "webserver" {
network {
port "http" {
to = 8080
}
[...]
service {
name = "webserver"
port = "http"
}
Using the template stanza I can write out the IP/port dynamically, which is great for small config files, e.g.:
template {
  data = <<EOF
middleware:
  network:
    server:
{{- range service "webserver"}}
      - {{ .Address }}:{{ .Port }}
{{- end }}
EOF
}
However, let’s say I need to use this configuration in multiple configuration files, or in a very large configuration file with multiple references that is mounted from an NFS share. In those cases I am finding this templating approach pretty limiting.
I then looked into DNS resolution, where I could simply reference the service as webserver.service.dc1.consul. However, since we run with bridge networking, I am unable to resolve Consul DNS queries against the Consul agent (127.0.0.1, port 8600) from inside the container.
There are a few parts here I am struggling to understand while trying to tackle this issue:
1. Best practice for accessing Consul services from a container. Both DNS and templating seem limiting based on my experience above, but I could be missing something. Basically, what is the “Nomadic” solution to this?
2. If I did get DNS working, how can I resolve the port for a service from this definition in the case of dynamic ports?
I struggled with these same questions, @brettpatricklarson, and came up with a few strategies depending on the application.
On our HAProxy VMs I’m using consul-template to continuously rewrite the frontend/backend mappings, using templating code similar to what you have above.
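For illustration, a minimal sketch of what such a backend template can look like (the backend and service names here are placeholders, not our actual config):

backend webserver
  balance roundrobin
{{- range service "webserver" }}
  server {{ .Node }}-{{ .Port }} {{ .Address }}:{{ .Port }} check
{{- end }}

consul-template re-renders the file whenever the catalog changes and can reload HAProxy through its command option.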
For our container deployments I’m using levant to recursively render the job definitions to get the dynamic data I need in there. This lets me keep my job templates in git and use them in CI to generate the job files. You also have the option of breaking the job files into pieces for easier management.
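As a rough sketch of that flow (the file names, the variable, and the exact flags are illustrative, and Levant’s default [[ ]] delimiters are assumed):

# job.nomad.tpl - kept in git
job "webserver" {
  datacenters = ["[[ .datacenter ]]"]
  [...]
}

# vars.yaml
datacenter: dc1

# in CI: render the final job file, then submit it
levant render -var-file=vars.yaml -out=job.nomad job.nomad.tpl
nomad job run job.nomad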
As for DNS, I configured forwarding for the .consul domain in our DNS servers, so it’s transparent from the client’s perspective and works inside the containers without a problem. I believe that’s the best-practice approach.
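On your second question (dynamic ports): an A record only gives you the IP, but Consul also serves SRV records, which include the allocated port. Once resolution works you can check it with something like:

# ask the local Consul agent directly
dig @127.0.0.1 -p 8600 webserver.service.consul SRV

# or, with .consul forwarding in place, just
dig webserver.service.consul SRV

Whether that helps in practice depends on the client, since most applications only do A lookups.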
root@csi-cluster-node02:~# ping redis.service.consul
PING redis.service.consul (192.168.64.46) 56(84) bytes of data.
64 bytes from 192.168.64.46 (192.168.64.46): icmp_seq=1 ttl=64 time=0.656 ms
64 bytes from 192.168.64.46 (192.168.64.46): icmp_seq=2 ttl=64 time=0.651 ms
But inside the container, no:
/ # ping redis.service.consul
ping: bad address 'redis.service.consul'
The Docker container does not use Consul Connect.
I didn’t change any DNS configuration in the Nomad job, because I thought the Docker container would use the host’s DNS configuration (systemd-resolved).
Do I need to install CoreDNS to solve my problem, or is there another magic solution?
Once dig/nslookup is working on the base machine, to make things work inside the container you need to pass the Docker bridge IP (usually 172.17.0.1) as the first DNS server.
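A minimal sketch of that, assuming something on the host (dnsmasq, systemd-resolved, etc.) is actually listening on the bridge IP and forwarding .consul to the agent, is to set it in /etc/docker/daemon.json (the 8.8.8.8 fallback is just an example):

{
  "dns": ["172.17.0.1", "8.8.8.8"]
}

Alternatively you can set it per job with the network dns block, as shown further down in this thread.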
In our architecture we decided to set up a dnsmasq server on each Consul server that forwards lookups to Consul.
NB: our Consul servers have static local IPs on our private LAN.
/etc/dnsmasq.d/10-consul
# Enable forward lookup of the 'consul' domain:
server=/consul/127.0.0.1#8600
Since our DNS servers have fixed IPs, we can set up our Nomad task DNS as such:
network {
  dns {
    servers = ["consul server 1", "consul server 2", "consul server 3"]
  }
}
So our containers use dnsmasq for DNS resolution and can resolve any service in our architecture.
That has worked well so far.