Yes - running nomad job eval <jobname> on the command line should get that working again. The allocations appearing when a new node is added are due to what I believe is an explicit evaluation of all jobs as soon as a node joins the cluster; that's why I misunderstood you in the GitHub issue comments, I think.
Hi @benvanstaveren!
The command nomad job eval <jobID> will re-evaluate the entire job, that is to say, it will re-schedule all allocations of the job. If a job has three allocations, two of them have stopped, and I want to start only one of the two stopped allocations, it does not work, because it will start both of them. That's not what I want.
@qqzeng, I’m interested in what you are trying to accomplish by starting one of two stopped allocations. Allocations were not really meant to be addressable as individual units, and the allocation stop behaviors were added to allow users to simulate restarts of an allocation by manually causing it to complete (which for service jobs is expected to reschedule it).
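For reference, the per-allocation stop behavior described above looks roughly like this on the CLI (the job name and allocation ID below are placeholders, not from this thread):

```shell
# List the job's allocations to find the ID of the one to act on.
nomad job status example

# Stop a single allocation by ID; for a service job the scheduler
# is expected to create a replacement allocation to satisfy the count.
nomad alloc stop 5f3b2a1c

# Alternatively, restart the allocation's tasks in place
# without rescheduling it onto another node.
nomad alloc restart 5f3b2a1c
```

Note that nomad alloc stop works by completing the allocation and letting the scheduler replace it, which is why it is a restart mechanism rather than a way to address allocations individually.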
A little more information about your use case might help me to see a workaround that fits in the Nomad paradigm.
It seems my use case is a bit of an exception. I accept that allocations are designed to be non-addressable, so I have solved my issue in another way.