Correct way to share DNS entries with tasks

I have a simple configuration that launches two tasks, one redis and one httpd; both are bare-bones setups.

I’m trying to find the correct way to have the httpd task connect to the redis port of the redis task. After starting the job and running nomad alloc exec into the httpd task, the container has no idea how to find the redis task.

What is the Nomad way to make tasks aware of other tasks?

Here is my dns-example.nomad:

job "dns-example" {
  region      = "global"
  datacenters = ["dc1"]
  type        = "service"

  group "services" {
    count = 1

    network {
      mode = "host"

      port "https" {}
      port "http" {}

      port "redis" {
        static = 6379
      }
    }

    task "redis" {
      driver = "docker"
      config {
        image      = "redis:6.2.1-alpine3.13"
        ports      = ["redis"]
        force_pull = false
        args       = ["redis-server"]
      }
    }

    task "httpd" {
      driver = "docker"
      config {
        image      = "httpd:2.4-alpine"
        ports      = ["http", "https"]
        force_pull = false
      }
    }
  }
}
Hi @hashicorp5,

Nomad makes a number of environment variables available to tasks; the most relevant here are the network-related variables. These include the IP address and port details of each entry within the network stanza you have defined. Using your example job, you could use ${NOMAD_ADDR_redis} to obtain the IP:port address assigned to the Redis server, passing this to httpd on startup as needed.
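For example, the httpd task could pick up the Redis address through an env block, which Nomad interpolates before the task starts (a sketch; the REDIS_ADDR variable name is illustrative, not something httpd reads by default):

    task "httpd" {
      driver = "docker"
      config {
        image      = "httpd:2.4-alpine"
        ports      = ["http", "https"]
        force_pull = false
      }

      # Nomad interpolates ${NOMAD_ADDR_redis} (the ip:port of the
      # "redis" port from the group's network stanza) into the task's
      # environment at start time.
      env {
        REDIS_ADDR = "${NOMAD_ADDR_redis}"
      }
    }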

A follow-up question: is there a particular reason to use host network mode rather than bridge?

jrasell and the Nomad team

@jrasell I’m just throwing spaghetti at the wall to see what “sticks” and when I threw my hands up, it happened that I had the network configuration as host.

I did try bridge in the past but I get the error message:

=> Evaluation "21847c5c" finished with status "complete" but failed to place all allocations:
    Task Group "services" (failed to place 1 allocation):
      * Constraint "missing network": 1 nodes excluded by filter
    Evaluation "0aaab9da" waiting for additional capacity to place remainder

And trying to get to the root cause of this error message led me down rabbit holes that yielded no working examples.
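For reference, a bridge-mode version of the network stanza would look something like the sketch below. Note that bridge mode requires the CNI reference plugins to be installed on each client node; a client without them is filtered out of scheduling, which is a common cause of the "missing network" constraint error above.

    network {
      mode = "bridge"

      port "https" {}
      port "http" {}

      # In bridge mode, tasks in the same group share a network
      # namespace, so httpd can reach redis on localhost:6379
      # without exposing a static host port.
      port "redis" {
        to = 6379
      }
    }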

  • Mike D.