How does Nomad decide which instances to kill when scaling in a job

As per the title!

When reducing the count of a group, which instances get killed and which are kept?
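
For concreteness, something like this (a minimal sketch; the job, group, and task names are made up):

```hcl
job "example" {
  datacenters = ["dc1"]

  group "web" {
    # Dropping this from 5 to 3 and re-running `nomad job run`
    # (or running `nomad job scale example web 3`) scales the group in.
    count = 5

    task "server" {
      driver = "docker"
      config {
        image = "nginx:stable"
      }
    }
  }
}
```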

It seems like newer allocations are killed off before older ones.

That’s right! The newer allocations are usually stopped first, but you shouldn’t count on that behavior: it’s an implementation detail and not guaranteed. If the leader of the server cluster changes, for example, the order may change. If you need a specific stop order, you probably want to deploy separate jobs.
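
For example, here’s a minimal sketch of the separate-jobs approach (the names and the Docker driver are just placeholders): instead of one group with `count = 2`, run two single-count jobs, so scaling in becomes an explicit `nomad job stop` of exactly the instance you choose.

```hcl
# web-primary.nomad -- run a second, identical job (e.g. "web-secondary")
# instead of setting count = 2 on one group. To remove a specific
# instance, stop its job directly: `nomad job stop web-secondary`.
job "web-primary" {
  datacenters = ["dc1"]

  group "web" {
    count = 1

    task "server" {
      driver = "docker"
      config {
        image = "nginx:stable"
      }
    }
  }
}
```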

Thanks for the responses! We’re adapting some Mesos/Marathon code and just wanted to confirm the expected behavior.