Referring to IP Addresses of other Tasks/Containers

So first - I’m new to Nomad (obviously). I’m trying to convert from docker-compose to Nomad and so far so good. The only problem I’m having right now is referring to the IP of other tasks/containers in the env variables.

I have a group with two tasks, both containers: the first is a database container and the second is the web app. In the web app’s environment variables I need to pass it the connection string. However, something I’m doing is obviously wrong: the variable interpolation doesn’t seem to be working. The job fails, and when I check the logs it shows the literal “$NOMAD_IP_myapp-db_db” string instead of the task/container’s IP.

I’m sure I’m missing something really obvious but I’ve been searching and reading and I haven’t found an answer.

Basic example:

job "networktest" {
  datacenters = ["dc1"]

  group "networktest" {
    network {
      mode = "bridge"
      port "app1" {}
      port "app2" {}
    }

    service {
      name = "networktest"
      tags = ["default"]

      connect {
        sidecar_service {}
      }
    }

    task "app1" {
      driver = "docker"

      service {
        name = "app1"
      }

      env {
        APP1_ID = "${NOMAD_IP_app1}"
      }

      config {
        network_mode = "bridge"
        image = "wbitt/network-multitool"
        ports = [ "app1" ]
      }
    }

    task "app2" {
      driver = "docker"

      service {
        name = "app2"
      }

      env {
        APP2_ID = "${NOMAD_IP_app2}"
      }

      config {
        network_mode = "bridge"
        image = "wbitt/network-multitool"
        ports = [ "app2" ]
      }
    }
  }
}

Thanks!

EDIT - So it appears as though NOMAD_IP_task_label is deprecated anyway, so I shouldn’t be using it.

I could take a deep dive into upstreams, but from the docs I’ve read it looks like those are more for communication between tasks in different groups. In my example both the app and db tasks are in the same group.
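For completeness, the upstreams setup I was looking at looks roughly like this, based on my reading of the Connect docs (sketch only; the “database” service name and the 5432 port are placeholders, not from my actual job):

```hcl
group "web" {
  network {
    mode = "bridge"
  }

  service {
    name = "web"
    port = "8080"

    connect {
      sidecar_service {
        proxy {
          # Ask the sidecar to open a local listener that proxies
          # to the upstream "database" service
          upstreams {
            destination_name = "database"
            local_bind_port  = 5432
          }
        }
      }
    }
  }

  task "app" {
    driver = "docker"

    env {
      # The Envoy sidecar listens on this local address and forwards
      # traffic to the upstream service
      DB_ADDR = "${NOMAD_UPSTREAM_ADDR_database}"
    }

    config {
      image = "wbitt/network-multitool"
    }
  }
}
```

Like I said though, this seems aimed at cross-group communication, which feels like overkill here.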

I could use $NOMAD_IP_ but that returns the host IP address. I’d like the tasks/containers to communicate internally, and I don’t need the DB port exposed externally on the host.

Using the Docker hostname (like I would in a standard docker-compose setup) doesn’t work either. For whatever reason the app task/container tries a DNS lookup on that hostname rather than resolving it via the Docker network.

EDIT 2 - Alright, hostname will work if I create a Docker network and specify it as the network_mode for both tasks. That seems like a hacky solution though. Since both tasks are part of the same group, aren’t they run in the same Docker network together?
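For anyone curious, the workaround is roughly this (“shared-net” is a placeholder name; the network has to be created out-of-band on the host first with `docker network create shared-net`):

```hcl
task "app1" {
  driver = "docker"

  config {
    image        = "wbitt/network-multitool"
    network_mode = "shared-net"  # same user-defined network on both tasks
  }
}

task "app2" {
  driver = "docker"

  config {
    image        = "wbitt/network-multitool"
    network_mode = "shared-net"
  }
}
```

It works, but managing an extra Docker network outside of Nomad is exactly the kind of thing I was hoping Nomad would handle for me.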

EDIT 3 - I’m making some progress. I got Consul installed on the host and it seems to be configured now. If I check the UI I can see that both app1 and app2 are registered services.

If I look at the details of either service and check its instance, it shows the IP address at the top-right as the host IP address!? Shouldn’t this be the internal bridge IP of the container? If I docker inspect either container, they have the standard Docker 172.XX IP addresses. Looking at the env variables available in the containers, there are NOMAD_IP_app1 and NOMAD_IP_app2 variables, but both show the host IP address.

I’m guessing I have something misconfigured and that’s why it’s showing the host IP?
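One thing I still want to try, if I’m reading the service stanza docs right, is the address_mode setting — it apparently controls which address gets advertised to Consul (sketch; I haven’t tested this yet):

```hcl
service {
  name         = "app1"
  port         = "app1"
  address_mode = "driver"  # advertise the driver-assigned (container) IP
                           # instead of the default host address
}
```

If that’s what it does, it might at least explain why Consul is showing the host IP for both services.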

I’ve also edited my Nomad job config to use simple network-tool containers so I can diagnose any network issues.
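The next thing I plan to test, based on my reading of the bridge networking docs, is whether the tasks can just talk to each other over localhost — as I understand it, all tasks in a bridge-mode group share one network namespace (sketch; the 8080 port is a placeholder for whatever app2 actually listens on):

```hcl
group "networktest" {
  network {
    mode = "bridge"  # one shared network namespace for the whole group
  }

  task "app1" {
    driver = "docker"

    config {
      image = "wbitt/network-multitool"
    }

    env {
      # If the tasks really do share a namespace, app2's listener
      # should be reachable over loopback
      APP2_ADDR = "localhost:8080"
    }
  }

  task "app2" {
    driver = "docker"

    config {
      image = "wbitt/network-multitool"
    }
  }
}
```

If that works it would remove the need for IP interpolation entirely, since the address would always just be localhost.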