How to connect jobs (like docker-compose networking and DNS resolution)

Hi, this is a newbie question. I'm trying to deploy a web app, so I need to run several services: a database, workers, Redis, the app itself, and some others.

I don't know how to make them communicate with each other. I come from Kubernetes and docker-compose, where you could just connect to "db" and it was automatically resolved.

I think I have three options:

  1. Create env files or export env templates from consul and read them in the app.
  2. Use a DNS server like dnsmasq on the nodes where Nomad and Consul run.
  3. Use traefik?

I've tried 1 & 2. For (1), I don't think it's the best way. Do I need to reload the app if the job moves to another node?

I'm trying to figure out how to do it with (2). I have dnsmasq and I have configured this:

# Enable forward lookup of the 'consul' domain:
server=/consul/127.0.0.1#8600

On the node I can run dig db.service.consul and it resolves successfully, but inside the task (I'm using the Docker driver) it doesn't work. The DNS servers in resolv.conf aren't the ones I expected.

And about (3), I don't know if it's possible or if it's an acceptable way of doing it.




It is a great question. Options 1 and 2 are possible, but you can also manage it much like in k8s or docker-compose. I am a beginner too, but it has now been almost two months since I ran my first cluster with an application, so maybe you can reuse my example.

Here is a working example:

job "cool-app" {
  datacenters = ["dc1"]

  type = "service"

  group "deployment" {
    count = 1

    network {
      # this is IMPORTANT
      mode = "bridge"

      port "http" {
        to = 8080
      }
    }

    service {
      name = "${NOMAD_JOB_NAME}-http"

      tags = [] # tags elided in the original post

      port = "http"

      check {
        name     = "${NOMAD_JOB_NAME} - alive"
        type     = "http"
        path     = "/_health"
        interval = "1m"
        timeout  = "10s"
      }
    }

    task "app" {
      driver = "docker"

      config {
        image   = "my-cool-app:stable"
        command = "server"

        ports = ["http"]
      }

      env {
        DB_HOST        = "127.0.0.1" # both tasks share the group's network namespace
        MYSQL_DATABASE = "appdb"
        MYSQL_USER     = "app"
        MYSQL_PASSWORD = "secretPassword:-)"
      }

      kill_timeout = "30s"
    } # END app task

    task "mariadb" {
      driver       = "docker"
      kill_timeout = "30s"

      lifecycle {
        hook    = "prestart" # lifecycle requires a hook; start the DB before the app
        sidecar = true
      }

      # a way to restore your db
      artifact {
        source      = "" # dump URL elided in the original post
        destination = "local/"
        options {
          filename = "db_dump.sql.gz"
          archive  = false
        }
      }

      config {
        image = "mariadb:10.5.5"

        volumes = [] # volume mounts elided in the original post
      }

      env {
        MYSQL_ROOT_PASSWORD = "rootPassword"
        MYSQL_DATABASE      = "appdb"
        MYSQL_USER          = "app"
        MYSQL_PASSWORD      = "secretPassword:-)"
      }
    } # END mariadb task
  }
}
This deployment runs 2 containers sharing localhost, so the app container will see port 3306 of the MariaDB container on its loopback IP (127.0.0.1). You will have to install the CNI plugins before running this job.
The advantage of this solution is that the DB's TCP port is not open to the world.
In the example I also left a mechanism for exposing the app port on a node and a Traefik example (maybe it helps you get started).


Thanks @msirovy!

If the app and the db are on different nodes, can the app still access the db on 127.0.0.1? Shouldn't they run on the same node for this to work?


This works only inside the group stanza, and a group is always placed on the same node. If you need cross-node connections, a sidecar proxy is the solution for you. I don't have experience with it, but this could be a great source for you.


I had a look at it and it’s not easy.

Is there an easy way to configure linked services, like in Swarm? I mean, with automatic name resolution.

I'm trying to do it with dnsmasq and DNS, but it's not easy either. I really don't know what the intended way of doing it here is.

I haven't found any example of how to build a simple app with two services communicating, either.

I think I'm missing something here, because this seems like something very basic that most people would need.

Hi Adrian,

I have experience only with k8s and docker-compose. In k8s, service discovery is provided by etcd; in the HashiStack you can do the same with Consul. In k8s you use a networking layer across nodes which allows you to connect services together, and you can do the same in a Nomad cluster with CNI plugins, but in both k8s and Nomad it is definitely not easy.

As I've said before, you basically have the following options.

You can connect services together via localhost (bridge networking), but it only works between tasks in the same group, running on the same node.

The other option is to make services able to communicate across the whole cluster by binding them to the node's ports. The simplest way to achieve this is to bind all the services in the job to the node's ports and then use Consul for service discovery. Then you can consume the registered services via dnsmasq, as you are trying, or via Consul templates (have you already read this article?)
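Roughly, the node-port variant could look like this (the service name "db" and the static port are just illustrative, not from your setup):

```hcl
group "db" {
  network {
    # Bind a static port on the node instead of a private bridge,
    # so the service is reachable on the node's IP from other hosts.
    port "db" {
      static = 3306
    }
  }

  service {
    # Registered in Consul; resolvable as db.service.consul
    name = "db"
    port = "db"
  }

  task "mariadb" {
    driver = "docker"

    config {
      image = "mariadb:10.5.5"
      ports = ["db"]
    }
  }
}
```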

Underlay networks like Calico are also a good way, and you can use them with Nomad too, but the service-mesh approach with Consul Connect is a bit more modern and gives you more control. So I really recommend you go this way. It is not as complicated as it seems to be.

Hi @AdrianRibao :wave:

I think (1) would be a good way to start getting a better understanding of how things work in Nomad. As you've noticed, they are a bit different from how docker-compose and Kubernetes do things.

For (1), I don't think it's the best way. Do I need to reload the app if the job moves to another node?

It depends on what you are running. Some applications, like nginx, can reload their configuration without a restart. You can control what happens when your config template changes by using change_mode. Here’s an example where we use Consul to find the IP and port of a set of redis servers.
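A minimal sketch of such a template stanza (the service name "redis", the image, and the env var name are illustrative):

```hcl
task "app" {
  driver = "docker"

  config {
    image = "my-app:latest"
  }

  # Render Consul service data into an env file; restart the task
  # whenever the set of redis instances changes.
  template {
    destination = "local/redis.env"
    env         = true      # load the rendered file as environment variables
    change_mode = "restart" # or "signal"/"noop" for apps that reload on their own

    data = <<EOF
{{ range service "redis" }}
REDIS_ADDR={{ .Address }}:{{ .Port }}
{{ end }}
EOF
  }
}
```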

2 should also work, but I think it would require a bit more setup. You can define the DNS configuration for your group within network -> dns.
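For example, something like this (the server address is just a placeholder for your dnsmasq instance):

```hcl
group "deployment" {
  network {
    mode = "bridge"

    # Written into the task's /etc/resolv.conf by Nomad,
    # so *.consul lookups reach your forwarding resolver.
    dns {
      servers = ["10.0.0.10"]
    }
  }
}
```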

The DNS servers in resolv.conf aren't the ones expected.

Could you elaborate a bit more on this?

As @msirovy mentioned, you can also use Consul Connect for service mesh. Nomad is well integrated with Consul Connect, so it's actually quite easy to get started. The link posted will give you a quick introduction, but basically, when you add this to your service:

connect {
  sidecar_service {}
}

Nomad will automatically deploy a sidecar proxy for you that uses Consul to find other services. Since the sidecar will be deployed in the same group, everything can be accessed via localhost. Behind the scenes, the Consul Connect proxy will automatically encrypt and send traffic across your nodes to other proxies, so this will work even if your jobs are running in different clients.
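On the consuming side, you declare upstreams, which the proxy binds on localhost for you. A sketch (the service names and ports are illustrative):

```hcl
group "app" {
  network {
    mode = "bridge"
  }

  service {
    name = "cool-app"
    port = "8080"

    connect {
      sidecar_service {
        proxy {
          # The remote "database" service becomes reachable at
          # localhost:3306 inside this group, on any node.
          upstreams {
            destination_name = "database"
            local_bind_port  = 3306
          }
        }
      }
    }
  }
}
```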


Thanks a lot for your help! I didn’t have time last week to have a look at this, but I’ll give it a try this week.

I hope I can figure out how it works. Whatever happens, I’ll let you know how it went :slight_smile:



This is good to know. TIL. :+1: :+1:

BTW, just out of curiosity, when did this happen? Was it part of moving the network stanza to the group?

I ask because I can't find any mention of the network -> dns parameters in the CHANGELOG. :grinning:

BTW, just out of curiosity, when did this happen? Was it part of moving the network stanza to the group?

This is fairly recent, but I am not sure if it was part of moving the network stanza. It was added in PR #7661.
