How to connect two Docker containers in two different jobs?

Hi,

I am struggling to do something that is conceptually simple: allow the backend of an app deployed with job “backend” to communicate with a database deployed with job “postgres”.

I would rather not have to deploy Consul etc. Isn’t there a “simple” way to do that only with Nomad?

I have read A LOT of documentation and it seems that I should use the CNI plugin, but I’m still missing some piece of the puzzle…

Here are my two job files (simplified for readability):

job "postgres" {

  group "postgres" {

    network {
      mode = "bridge"

      port "db" {
        static       = 5432
        host_network = "private"
      }
    }

    task "postgres" {
      driver = "docker"

      config {
        image    = "postgres:15.2-alpine3.17"
        ports    = ["db"]
      }

      service {
        name     = "postgres-server"
        port     = "db"
        provider = "nomad"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
job "backend" {
  
  group "backend" {

    network {
      mode = "bridge"

      port "http" {
        to           = "3000"
        host_network = "private"
      }
    }

    task "backend" {
      driver = "docker"

      env = {
        "DATABASE_URL"  = "postgresql://user:pass@<WHAT SHOULD I PUT HERE??>/mydb"
      }

      config {
        image = "backend:latest"
        ports = ["http"]
      }

      service {
        name     = "backend"
        port     = "http"
        provider = "nomad"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}

Note: I use a host_network to bind to cidr = 127.0.0.0/8 to keep things “private”, but maybe with CNI I could do this as well?

Thank you for your help!

Not sure I can get you all the way there, but let me give this a stab.

You should be able to use Nomad service discovery, but I think the trick is to put the two groups in the same job. That way, Nomad will allocate ports to each of the tasks and expose them through the NOMAD_ADDR_* variables (e.g. NOMAD_ADDR_db and NOMAD_ADDR_http), and you will be able to look up the service since it’s registered in the same job.
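
For what it’s worth, here’s a rough, untested sketch of that idea. (I’ve gone a step further and put both tasks in a single group: in bridge mode, tasks in the same group share the allocation’s network namespace, so the backend can reach Postgres on localhost. The job name, image tags and credentials are just placeholders.)

job "app" {

  group "app" {

    network {
      mode = "bridge"

      port "http" {
        to = 3000
      }
    }

    task "postgres" {
      driver = "docker"

      config {
        image = "postgres:15.2-alpine3.17"
        # No published port needed: the backend task shares this
        # allocation's network namespace and reaches 5432 directly.
      }
    }

    task "backend" {
      driver = "docker"

      env = {
        # Inside the shared network namespace, Postgres is on localhost.
        "DATABASE_URL" = "postgresql://user:pass@127.0.0.1:5432/mydb"
      }

      config {
        image = "backend:latest"
        ports = ["http"]
      }
    }
  }
}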

If you really want to keep them separate, and you’re using a newer version of Nomad, you could also use a Consul-style template to look up the Nomad service, as in the example below.

Hope this hops you along the path to success :rabbit2::carrot:

job "backend" {
  
  group "backend" {

    network {
      mode = "bridge"

      port "http" {
        to           = "3000"
        host_network = "private"
      }
    }

    task "backend" {
      driver = "docker"

      env = {
        "DATABASE_URL"  = "postgresql://user:pass@{{ with nomadService 'postgres-server' }}{{ with index . 0 }}{{.Address}}:{{.Port}}{{ end }}{{ end }}/mydb"
      }

      config {
        image = "backend:latest"
        ports = ["http"]
      }

      service {
        name     = "backend"
        port     = "http"
        provider = "nomad"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}

This should do the trick. I haven’t tested it myself, so let me know if it works. If it doesn’t, you’ll have to use the template block in a similar way, with the env = true attribute.

Note that this solution is basic, as it only takes the first instance of the postgres-server service. If you have more than one server, it will always return the first IP, never the others. For more advanced features, check Consul.

Thank you for your reply!

I have tried to do that like so:

job "backend" {
  
  group "backend" {

    network {
      mode = "bridge"

      port "http" {
        to           = "3000"
        host_network = "private"
      }
    }

    task "backend" {
      driver = "docker"

      template {
        data        = <<EOH
{{ range nomadService "postgres-server" }}
DATABASE_URL=postgresql://user:pass@{{ .Address }}:{{ .Port }}/mydb
{{ end }}
EOH
        destination = "local/env.txt"
        env         = true
      }

      env = {
        "DATABASE_URL"  = "$DATABASE_URL"
      }

      config {
        image = "backend:latest"
        ports = ["http"]
      }

      service {
        name     = "backend"
        port     = "http"
        provider = "nomad"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}

but the DATABASE_URL variable gets set to postgresql://user:pass@127.0.0.1:5432/mydb and it doesn’t work…

I think (hope ^^) I am missing something minor in the CNI configuration, but can’t figure out what :confused:

It seems your Docker container is being exposed on 127.0.0.1.

Could you check this by running docker ps?

If you are just testing, I would remove mode = "bridge" and host_network = "private" and check again with docker ps.

I’m guessing (it’s just a guess) that you only have one interface (the public one), and when you force the service to be exposed on the private one, it ends up on the loopback interface.

I’m guessing (it’s just a guess) that you only have one interface (the public one), and when you force the service to be exposed on the private one, it ends up on the loopback interface.

You’re absolutely right :slight_smile:

Interestingly (maybe), running docker ps while in bridge mode shows empty port bindings on the created containers (also true for the automatically created “nomad_init_…” containers), whereas if I remove bridge mode I see “normal” bindings (i.e. 127.0.0.1:5432->5432/tcp with the private host_network, or #.#.#.#:5432->5432/tcp otherwise).

Awesome! :star_struck:

I don’t have much experience with Nomad, though. We solved this easily because we use the DigitalOcean cloud provider, and all the droplets have two interfaces: public and private. So we expose the services only on the private interface.

This can be done globally in the client stanza of the agent settings, so you don’t have to declare it in every job.

client {
   network_interface = "eth1"
}

This forces jobs to be exposed on the eth1 interface. If you only have one interface, I’m afraid that won’t work; you have to expose your containers somewhere to be able to reach them.

We run Nginx as a reverse proxy to do this. You could run Traefik or Nginx as a Nomad job, much like a Kubernetes ingress controller.
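
To give you an idea, a bare-bones version of that could look like the job below (untested sketch; the image tag, static port and the backend service name are placeholders, and it assumes a recent Nomad with native service discovery for the nomadService lookup):

job "ingress" {

  group "nginx" {

    network {
      port "http" {
        static = 80
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image   = "nginx:1.25-alpine"
        ports   = ["http"]
        # Mount the rendered config over the default server block.
        volumes = ["local/default.conf:/etc/nginx/conf.d/default.conf"]
      }

      # Render an upstream from the Nomad-registered "backend" service.
      template {
        data        = <<EOH
upstream backend {
{{ range nomadService "backend" }}
  server {{ .Address }}:{{ .Port }};
{{ end }}
}

server {
  listen 80;

  location / {
    proxy_pass http://backend;
  }
}
EOH
        destination = "local/default.conf"
      }
    }
  }
}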


Thank you that’s good to know!

But that doesn’t solve my problem :innocent:

If you have just one interface (the public one), I only see two solutions:

  1. Expose the Docker containers on the public interface and add a firewall.
  2. Create a virtual interface on your host with a private range, then expose your containers there (sketch below).

These are my bets haha :laughing:
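
For option 2, the Nomad side would just be a host_network in the client config pointing at whatever private interface or range you create, something like this (untested; dummy0 and 10.99.0.0/24 are placeholders for an interface and range you’d set up yourself):

client {
  # Usable in jobs as host_network = "private" on port blocks.
  host_network "private" {
    interface = "dummy0"
    # or match it by address range instead:
    # cidr = "10.99.0.0/24"
  }
}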


Ah gotcha! Thanks :slight_smile:

If you eventually find a suitable solution, please mark this as solved and explain what you did.

Thank you for your help, I finally managed to get what I wanted!

So in the end:

  • I configured the Nomad client with
    host_network "private" {
      cidr = "127.0.0.0/8"
    }
    
    network_interface = "lo"
    
  • The database job has (the important part is the address_mode):
    network {
      port "db" {
        to           = 5432
        host_network = "private"
      }
    }
    
    ...
    
    service {
        name         = "postgres"
        port         = "db"
        provider     = "nomad"
        address_mode = "driver"
    
        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    
  • And the backend:
    network {
      port "http" {
        to           = 3000
        host_network = "private"
      }
    }
    
    ...
    
    template {
        data        = <<EOH
    {{ range nomadService "postgres" }}
    {{ if .Address | regexMatch "^172" }}
    DATABASE_URL=postgresql://${var.postgres_user}:${var.postgres_password}@{{ .Address }}:{{ .Port }}/umami
    {{ end }}
    {{ end }}
    EOH
        destination = "local/env.txt"
        env         = true
      }
    

Works great!
