Stopped allocation restarts automatically

I am trying to stop a healthy allocation in my three node cluster and it just keeps getting started again. Why is this happening? I did not set any restart or reschedule stanza configuration.

Many thanks,
bert

Hi @bert2002,

Is the allocation being restarted or is a new allocation being started in its place after you have run the nomad alloc stop <alloc_id> command?

On job registration, Nomad will configure default settings on the restart stanza as detailed here. You can also see the canonical Nomad representation of the running job by using nomad job inspect <job_id>.
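To illustrate, the defaults Nomad applies to a service job look roughly like the following (a sketch of the canonicalized restart stanza; check nomad job inspect on your own job for the exact values your version applies):

```hcl
group "example" {
  # Defaults Nomad injects for service jobs when no restart
  # stanza is specified (values may differ between versions).
  restart {
    attempts = 2
    interval = "30m"
    delay    = "15s"
    mode     = "fail"
  }
}
```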

Thanks,
jrasell and the Nomad team

Hi @jrasell, a new allocation is being started. I tried to convince Nomad not to start a new one by setting restart.attempts to 0, but a new allocation still started (does restart only apply when an allocation fails?). Does it need any other configuration?

Thanks,
bert

Hi @bert2002,

Nomad will start new allocations because the job specification group count is still set to 1. Once the running allocation is stopped, Nomad identifies a discrepancy between the number of running allocations and the desired count, and starts new allocations to resolve it.
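In other words, the group count is the declared desired state that the scheduler continually reconciles against. A minimal sketch of the relevant part of a job specification (the job and group names here are illustrative):

```hcl
job "example" {
  group "app" {
    # Desired number of allocations. As long as this is 1,
    # stopping an allocation leaves Nomad one short of the
    # desired state, so it schedules a replacement.
    count = 1
  }
}
```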

If you are looking to stop the job from running any allocations, you can use the nomad job stop <job_id> command or the nomad job scale <job_id> 0 command. Stopping the job will result in it eventually being garbage collected from the system, whereas scaling the job to zero allows it to survive garbage collection cycles.
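The two options above can be sketched as follows (run against a live cluster; "example" stands in for your job ID):

```shell
# Stop the job entirely; it will eventually be garbage collected.
nomad job stop example

# Or scale the task group to zero allocations; the job definition
# remains registered and can later be scaled back up.
nomad job scale example 0
nomad job scale example 1
```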

Thanks,
jrasell and the Nomad team