Dispatching Nomad jobs across nodes dynamically

Hello! Do Nomad/Consul address the following problems:

  1. I need a catalog of nodes in my network.
  2. I need to assign labels/tags to nodes dynamically so I can deploy applications based on those tags.

It seems 1. is addressed by both Consul and Nomad, as each offers an endpoint returning all detected nodes.
However, I don’t really see how they can address point 2. Consul allows adding metadata to nodes dynamically, but I haven’t seen any example of how Nomad can check Consul’s metadata in a job’s constraints. I saw that you can specify metadata in the Nomad agent and check it in constraints, but it can’t be changed dynamically - changing it requires restarting the agent.

Bonus question: if a job has a constraint requiring label A, and it is running on host X because host X has label A, and label A later disappears from host X, does Nomad automatically stop the job and move it to another machine?

Hi @hejcz! Interesting situation you find yourself in!

It seems to me like we need some more details here. The question is: who or what is assigning these tags, and based on what conditions?
If you follow some sort of operator pattern, you could write a small application that updates the node metadata in Consul via the Consul API.

However, since you are using Nomad to run the jobs, you may as well use Nomad’s own meta stanza in the client configuration. This will allow you to create constraints based on that metadata, as sketched below.
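A minimal sketch of what that could look like, assuming a made-up `app_group` key (the meta block goes in the agent’s client configuration):

```hcl
# client.hcl – Nomad agent (client) configuration on the node
client {
  enabled = true

  meta {
    # arbitrary key/value pairs; "app_group" is just an example name
    "app_group" = "A"
  }
}
```

And the matching constraint in the job spec:

```hcl
# app-a.nomad – only place this job on nodes tagged app_group = "A"
job "app-a" {
  datacenters = ["dc1"]

  constraint {
    attribute = "${meta.app_group}"
    value     = "A"
  }

  group "app" {
    task "app" {
      driver = "docker"

      config {
        image = "app-a:latest" # placeholder image
      }
    }
  }
}
```

The catch, as you noted, is that changing those meta values means touching the agent configuration.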

I can’t tell from the documentation how one would use the API to update the agent metadata, which leads me to believe that it is static until the configuration is changed and reloaded.

Off the top of my head, I could suggest consul-template to generate a config file containing the node metadata, based on Consul KV data. You could then use a watch to send a SIGHUP to Nomad and reload the new configuration.
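A rough sketch of such a consul-template configuration – the KV key, file paths and reload command are all illustrative, not prescriptive:

```hcl
# consul-template config: render a Nomad client config fragment from Consul KV
template {
  # inline template; reads this node's group label from the KV store
  contents = <<-EOT
    client {
      meta {
        "app_group" = "{{ key "nodes/host-x/app_group" }}"
      }
    }
  EOT

  # drop the rendered fragment into Nomad's config directory
  destination = "/etc/nomad.d/meta.hcl"

  # then ask Nomad to reload its configuration (SIGHUP)
  command = "systemctl reload nomad"
}
```

Be aware that SIGHUP only reloads a limited subset of Nomad’s configuration keys (see below), so a full agent restart may be needed before the new meta takes effect.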

This sounds like you want a reschedule stanza (sketched below), which would move the job to another node when the constraint changes. However – this would require a new evaluation, which it seems is automatically triggered only on three types of events:

- job registrations/updates
- node updates
- allocation failures

You would likely have to trigger a job update event yourself, e.g. via the API, to make this work… although Nomad might be smart enough to re-evaluate the job itself when the node metadata changes.
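For reference, the reschedule stanza mentioned above looks roughly like this (the values are illustrative); note that, per the docs quoted further down, rescheduling is driven by failed allocations, so it won’t fire merely because node metadata changed:

```hcl
job "app-a" {
  datacenters = ["dc1"]

  # retry placement of failed allocations, potentially on other eligible nodes
  reschedule {
    attempts       = 3
    interval       = "24h"
    delay          = "30s"
    delay_function = "exponential"
    max_delay      = "1h"
    unlimited      = false
  }

  # ... groups/tasks ...
}
```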

Hope this helps, and looking forward to hearing other voices and/or how you eventually solved this!

It’s not that important. I imagine some side application assigning metadata based on a hostname pattern.

Docs say hot reload is possible only for a very limited subset of keys. Is it possible to alter Nomad’s config via the REST API? I can’t find anything in the following docs:

Docs say “Nomad will attempt to schedule the task on another node if any of its allocation statuses become failed”. What are the possible allocation statuses?

The actual problem I’m trying to solve: we have a group of around 100 servers, but we want to split them into logical groups, so one server can run apps A and B, another B, C and D, and so on. We want to be able to modify the information about which nodes are valid for a particular app type.

Hi @hejcz, did you find a solution for this? I came here to ask the exact same question. It looks like updating the job may be the best way - in my case I want to use this to perform a reschedule from one DC to another.

Then you want to look at the node_class and/or node_group configuration on your agent nodes. (obPedantic: servers do not run anything; they just coordinate and let the agents {or clients} do the hard work.)

You can use the node class or node group as a constraint in your job spec to indicate where a particular job is allowed to run. If your logical groups coincide with physical datacenters, it’s even better: then you just define an agent node as being in that datacenter, and you can constrain on it with the datacenters field in the job spec.
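A quick sketch of that setup – the class name, datacenter and job are made up for the example:

```hcl
# client.hcl on each agent node: tag the node with a class
client {
  enabled    = true
  node_class = "group-bcd" # e.g. the nodes allowed to run apps B, C and D
}
```

And the job spec restricting app B to that class (plus a datacenter, if relevant):

```hcl
job "app-b" {
  datacenters = ["dc1"] # illustrative datacenter name

  constraint {
    attribute = "${node.class}"
    value     = "group-bcd"
  }

  # ... groups/tasks ...
}
```

As with meta, node_class lives in the agent configuration, so re-grouping a node still means changing its config.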