Accessing Nomad's Consul from Applications

Hi! I’m currently migrating a bunch of internal applications from Docker Swarm to Nomad. Some of these applications use Consul for leader election. Currently they use a simple Consul instance deployed into the Docker Swarm.

My question is: given that Nomad already comes with a Consul instance, can applications use that instance as well? (“Can” as in: is it technically possible and a good idea?)

And a follow-up question, if it can be used: how can I dynamically discover its address? For Nomad services I can use templates to get their address and port; can I do the same for the Consul address?

AFAIK, it is best practice to run a Consul client instance on every Nomad client instance.

Therefore, your application should just query Consul on localhost:8500 and go from there. No need for complex discovery.
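For a task that can actually reach the host’s loopback interface (e.g. host networking), this could be as simple as an `env` block in the job file. A sketch, assuming your application honors the standard `CONSUL_HTTP_ADDR` variable:

```hcl
# Sketch: point the app at the local Consul agent on its default HTTP port.
# Assumes the task shares the host's network namespace and the app reads
# the standard CONSUL_HTTP_ADDR environment variable.
env {
  CONSUL_HTTP_ADDR = "http://127.0.0.1:8500"
}
```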


I assume localhost wouldn’t work from within tasks using the docker driver, would it?

If you are using host network mode, you should be fine.

For bridge you have to configure your app to use the Consul host at NOMAD_IP_xxx, see the network stanza.

I’ve checked the env inside the container and I haven’t seen an entry for consul. Can you elaborate?

Well, if you define a network port “xxx” in your Nomad job file, Nomad will create an environment variable NOMAD_IP_xxx, which contains the external IP of the Nomad host.

As long as Consul is running on the same machine, this is also the IP address of the Consul instance.
Just add the Consul port and your app should be able to communicate with Consul. Something along the lines of
{{ env "NOMAD_IP_xxx" }}:8500
when you create the config file for your app via templating.
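Putting those pieces together, a minimal sketch of the job fragment (the port label `xxx`, the config file name, and its contents are placeholders for illustration):

```hcl
group "app" {
  network {
    # Defining a port labelled "xxx" makes Nomad export NOMAD_IP_xxx
    # (the host IP chosen for that port) into the task environment.
    port "xxx" {}
  }

  task "app" {
    driver = "docker"

    # Render the Consul address into a config file for the app.
    # Assumes Consul runs on the same host and listens on port 8500.
    template {
      data        = "consul_addr = \"{{ env \"NOMAD_IP_xxx\" }}:8500\""
      destination = "local/app.conf"
    }
  }
}
```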

So this would “abuse” the fact that Nomad injects the current node’s IP into the env when exposing services from a task? And this would also only work in host network mode, right? In bridge mode I’d expect the IP to be in the bridge network.

If Consul were deployed as a Nomad task itself, then I could use Nomad’s service discovery to find the address, but in our deployment that’s not the case.

Given the central role of Consul, it’s kind of surprising that there isn’t an easier way to use it.

For the Docker driver, from inside the container you can use the Docker bridge IP to query Consul, assuming the Consul process has been configured to listen on that interface as well.

That assumes knowledge of the bridge network, which I don’t want to assume. In that sense I might as well just hardcode the IP of the Consul server.

Would there be a way for a Nomad operator to define “global environment variables” that are injected into every task? That way I could inject the address of the Consul instance into all applications.

Ummm, not really … you can use the “docker bridge ip” attribute …
ref: Drivers: Docker | Nomad | HashiCorp Developer

Example: (HCL2 syntax)


OR (older HCL1 syntax)


For example, in the env section of your job you could do:

env {
  DB_IP = "${attr.driver.docker.bridge_ip}"
}
or to directly use in the dns_servers list you can try like this (that syntax works in the latest versions)
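A sketch of what that could look like in the Docker driver config (the image name is a placeholder, and this assumes a Nomad version recent enough to support the attribute interpolation there):

```hcl
task "app" {
  driver = "docker"

  config {
    image = "my-app:latest"  # placeholder image

    # Point the container's DNS at the Nomad client's Docker bridge IP,
    # assuming Consul's DNS interface is reachable on that address.
    dns_servers = ["${attr.driver.docker.bridge_ip}"]
  }
}
```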

That at least seems like a viable solution. Are all the available attributes documented somewhere?

In my opinion this would be a starting point:

a bunch of links are present in the opening paragraph.

The driver-specific ones require a bit of discovery … example: Task Drivers | Nomad | HashiCorp Developer

I just realized that Consul registers itself as the service consul, so something like this should just work:

template {
  data = <<EOH
{{ range service "consul" }}
CONSUL_ADDRESS=http://{{ .Address }}:{{ .Port }}
{{ end }}
EOH
  destination = "consul.env"
  env         = true
}
When there is more than one Consul server, this would expand to multiple lines, right?

With some Go template magic, you may want to use only the first one, I think.
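For example (a sketch in consul-template syntax), `with` plus `index` picks just the first instance of the service, so the variable is only rendered once:

```hcl
template {
  data = <<EOH
{{ with service "consul" }}
CONSUL_ADDRESS=http://{{ (index . 0).Address }}:{{ (index . 0).Port }}
{{ end }}
EOH
  destination = "consul.env"
  env         = true
}
```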

I assume, since env vars must be unique, that some decision would be made about which of the values to actually use if the variable shows up multiple times. I guess the better (more robust) way would probably be to use Consul Connect’s service mesh features.

I realized that the “consul” service only contains the Consul servers, not the agents, and weirdly the “consul-agent” service we had only contained a small subset of the available agents.

Even though I think it’s not a good assumption to make, I now do what @matthias suggested here: Accessing Nomad's Consul from Applications - #6 by matthias.
I guess it’s at least a common deployment scheme to have nomad and consul agents colocated.

IIRC, running the Nomad and Consul clients on the same instance is considered a best practice by HashiCorp. Pretty sure I read that somewhere.

And it actually makes sense. I’m currently in the process of building a Node-RED plugin which registers a watch on Consul and runs flows based on topology changes.
In my case, a change to the Traefik service should reconfigure the internet HTTP(S) forward on my UniFi router via REST to point to the new location of the Traefik instance.

Connecting the plugin to Consul becomes so much easier if you can just assume that Consul is running on the same machine on a well-known port.