How to resolve Consul services from inside a Nomad Docker container

I am loving the Consul integration with Nomad, but I am struggling to effectively resolve Consul services from a Docker container running in Nomad.

Let’s say I have a service, webserver.service.dc1.consul, that lives in Nomad and has a dynamic port:

  group "webserver" {
    network {
      port "http" {
        to = 8080
      }
    }

    service {
      name = "webserver"
      port = "http"
    }
  }
Using the template stanza I can write out the IP/port dynamically, which is great for small config files:

      template {
        data = <<-EOF
          {{- range service "webserver" }}
          - {{ .Address }}:{{ .Port }}
          {{- end }}
        EOF
        # destination is required; this path is illustrative
        destination = "local/servers.yml"
      }

However, suppose I need to use this data across multiple configuration files, or in a very large configuration file with many references that is mounted from an NFS share. In those cases I am finding this template approach pretty limiting.
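One way to soften that limitation is to render the address into environment variables once instead of templating every config file; the template block's `env` option makes the rendered keys available to the task's environment. A minimal sketch, where the destination path and the `WEBSERVER_ADDR` variable name are illustrative:

```hcl
# Sketch: expose the service address as an environment variable.
# `env = true` tells Nomad to load the rendered file into the
# task environment; the path and variable name are placeholders.
template {
  data        = <<-EOF
    {{- range service "webserver" }}
    WEBSERVER_ADDR={{ .Address }}:{{ .Port }}
    {{- end }}
  EOF
  destination = "local/env.txt"
  env         = true
}
```

This keeps one small template per job instead of threading addresses through every mounted config file.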

I then looked into DNS resolution, where I could possibly just reference the service as webserver.service.dc1.consul. However, since we run in bridge networking, I am unable to resolve Consul DNS queries against the Consul agent (port 8600) from inside the container.

There are a few parts here I am struggling to understand while trying to tackle this issue.

  1. What is the best practice for accessing Consul from inside a container? Both DNS and templating seem limiting based on my experience above, but I could be missing something. Basically: what is the “Nomadic” solution to this?
  2. If I did get DNS working, how can I resolve the port for a service with this definition in the case of dynamic ports?
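On question 2: Consul's DNS interface does expose dynamic ports, just not through A records. The allocated port is carried in the service's SRV record, so a lookup along these lines (against a local agent, whose DNS interface defaults to port 8600) returns both the node address and the port:

```shell
# Query the SRV record; the answer section carries the dynamic port.
# Assumes a Consul agent serving DNS on localhost:8600.
dig @127.0.0.1 -p 8600 webserver.service.consul SRV
```

The catch is that most applications only perform A-record lookups, so SRV-based port discovery mainly helps tooling that understands SRV records.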

I struggled with these same questions, @brettpatricklarson, and came up with a few strategies depending on the application.

In our HAProxy VMs I’m using consul-template to continuously rewrite the frontend/backend mappings, with templating code similar to what you have above.

For our container deployments I’m using levant to recursively render the job definitions to get the dynamic data I need in there. This lets me keep my job templates in git and use them in CI to generate the job files. You also have the option of breaking the job files into pieces for easier management.

As for DNS, I configured forwarding for .consul on our DNS servers, so it’s transparent from the client’s perspective and works inside the containers without a problem. I believe that’s the best-practice approach.
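For reference, the forwarding setup from the HashiCorp tutorial boils down to a systemd-resolved drop-in along these lines. The file path follows the usual drop-in convention; note that the inline `:8600` port syntax needs systemd 246 or newer, and older versions instead need the agent listening on port 53 or an iptables redirect:

```
# /etc/systemd/resolved.conf.d/consul.conf (sketch)
[Resolve]
DNS=127.0.0.1:8600
Domains=~consul
```

`Domains=~consul` routes only *.consul queries to the agent, leaving all other lookups on the normal resolvers.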


Hello :wave:

Sorry to reuse this topic, but I think I’m in the same situation.

I followed the HashiCorp Learn tutorial on forwarding DNS.
Ubuntu 20.04
Nomad 1.3.5 / Consul 1.13.2
5 nodes

With Ansible, I deploy the systemd-resolved consul.conf drop-in.


Pinging services from the host works:

root@csi-cluster-node02:~# ping redis.service.consul
PING redis.service.consul ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=64 time=0.656 ms
64 bytes from ( icmp_seq=2 ttl=64 time=0.651 ms

But inside the container, it does not:

/ # ping redis.service.consul
ping: bad address 'redis.service.consul'

The Docker container does not use Consul Connect.
I didn’t change any DNS configuration in the Nomad job, because I think the Docker container will use the host’s DNS configuration (systemd-resolved).

Do I need to install CoreDNS to solve my problem, or is there another magic solution?


Once dig/nslookup is working on the base machine, to make things work inside the container you need to pass the Docker bridge IP (usually 172.17.0.1) as the first DNS server for the container.
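A sketch of that in a Nomad job, assuming Docker's default bridge address: the network block can hand the task an explicit DNS server list. For the host's stub resolver to answer on that address, systemd-resolved also has to listen on the bridge IP (the `DNSStubListenerExtra=` option exists in systemd 247 and newer; older hosts need another listener there):

```hcl
network {
  mode = "bridge"
  port "http" {
    to = 8080
  }

  # Point container DNS at the host's resolver on the Docker bridge.
  # 172.17.0.1 is Docker's default bridge IP; adjust if customized.
  dns {
    servers = ["172.17.0.1"]
  }
}
```

With this in place, .consul lookups from inside the container go to the host resolver, which forwards them to the Consul agent as described above.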
