Schedule one job at a time per node

Hi,

I am using Nomad 1.0.4 at the moment.

I have the following scenario/requirement:
There are nodes node1, node2, … nodeN of a distributed DB (say Cassandra).

I want to run a cleanup job on each node, but only on ONE node at a time, serially!

The order of the nodes is not important, but the cleanup CANNOT start on multiple nodes at the same time.

Also, if any cleanup were to fail, the job sequence should NOT proceed any further, as an operator should investigate why that particular node failed.

What could I use to achieve this?

What I have thought of so far is to register multiple parameterized jobs, one for each node (with a constraint matching the node name), and when the time comes for cleanup, run one job at a time, check the result, and continue.
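As a sketch of that idea (job names, datacenter, and the cleanup script path here are all hypothetical), each node could get its own batch job pinned in place by a hostname constraint:

```hcl
# Hypothetical per-node cleanup job; repeat for node2, node3, ...
job "cleanup-node1" {
  datacenters = ["dc1"]
  type        = "batch"

  # Parameterized: registered once, only runs when dispatched.
  parameterized {}

  # Pin this job to exactly one node by its hostname.
  constraint {
    attribute = "${attr.unique.hostname}"
    value     = "node1"
  }

  group "cleanup" {
    task "run" {
      driver = "exec"
      config {
        command = "/usr/local/bin/cassandra-cleanup.sh" # hypothetical script
      }
    }
  }
}
```

Dispatching `cleanup-node1`, waiting for it to finish, then `cleanup-node2`, and so on would give the serial behaviour; nothing in Nomad itself enforces the ordering.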

Is there any other simpler way?

Hi @shantanugadgil,

Nomad does not have a native way to do this, and I believe your approach is a good way of looking at it. Combined with a CI/CD pipeline to control the execution flow and therefore halt if a job fails, this would be my initial approach as well.
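A rough sketch of the serial control loop such a pipeline (or a plain script) could run. The `cleanup-<node>` job names and the node list are assumptions; `DISPATCH` defaults to a dry-run `echo` here so the loop can be tried safely, and would be pointed at the real CLI (`DISPATCH="nomad job dispatch"`) in the pipeline:

```shell
#!/usr/bin/env sh
set -eu

# Dry-run by default; override with DISPATCH="nomad job dispatch".
DISPATCH="${DISPATCH:-echo nomad job dispatch}"
NODES="node1 node2 node3"   # order does not matter; execution is serial
last=""

for node in $NODES; do
  # Each dispatch runs to completion before the next node starts.
  # A non-zero exit aborts the whole loop via 'set -e', leaving the
  # remaining nodes untouched for an operator to investigate.
  # (Assumption: the dispatch command's exit code reflects failure;
  # if it does not, check 'nomad job status' after each dispatch.)
  $DISPATCH "cleanup-${node}"
  last="$node"
done
echo "all nodes cleaned, last: ${last}"
```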

An outstanding question to double check would be whether it is possible to parameterise the constraints in such a way.

Thanks,
jrasell and the Nomad team

I am not sure if parameterizing the constraints is possible at all.

Between this:

and this:

… I am not able to piece together how to target (constrain) a certain node after job invocation.