Will the scheduler over-allocate a node?

I would like to understand what the scheduler would do in the following setup:

  • 3 client nodes
  • 10 unique (long-running) services
    – each is its own job
    – each with a count of 3
    – the distinct_hosts constraint is set to true

=> so in theory this results in exactly 10 unique services per node
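A minimal sketch of one such job might look like this (job, group, and image names are placeholders I've made up for illustration):

```hcl
# service-1.nomad — one of the 10 long-running jobs
job "service-1" {
  datacenters = ["dc1"]
  type        = "service"

  group "service-1" {
    count = 3

    # Force the 3 allocations onto 3 distinct client nodes
    constraint {
      operator = "distinct_hosts"
      value    = "true"
    }

    task "service-1" {
      driver = "docker"

      config {
        image = "example/service-1:latest"
      }

      resources {
        memory = 1024 # 1 GB reserved per allocation
      }
    }
  }
}
```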

Now let’s assume the total required memory of the 10 services (based on the sum of the memory in the resources stanza) is higher than the available memory of each host.
e.g. each service has 1 GB defined in the resources stanza but the hosts have only 8 GB available each.
What will the scheduler do in such a case?

Based on everything I read in the documentation I’m currently assuming that the scheduler will allocate up to 8 services per node and, in the worst case, up to 2 unique services won’t be running on any host.
Is that assumption correct?

If it’s correct, is there a way to bypass it? A way to tell the scheduler “trust me, just do what I’m asking for”?

Hi @ConfiDev! You can’t overprovision memory, so you’ll end up with a plan error something like the following:

 ==> Evaluation "27c25f85" finished with status "complete" but failed to place all allocations:
    Task Group "cache" (failed to place 1 allocation):
      * Resources exhausted on 1 nodes
      * Dimension "memory" exhausted on 1 nodes
    Evaluation "05b685a8" waiting for additional capacity to place remainder

Memory overcommitment isn’t a feature any of the task drivers support. You’ll need to tune the memory reservations for your tasks so that they fit into the available space on the nodes.

Another cute hack is to purposely configure more memory in the agent config than the real memory of the machine. This will achieve over-provisioning in a roundabout manner.
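For reference, that override lives in the `client` stanza of the agent configuration. A sketch (the value is illustrative, assuming a machine that physically has 8 GB):

```hcl
# agent.hcl — make the client report more memory than it really has
client {
  enabled = true

  # Override the fingerprinted total memory (in MB).
  # The scheduler will now place against 12 GB even though
  # only 8 GB physically exist, so tasks can OOM at runtime.
  memory_total_mb = 12288
}
```

Note that this only fools the scheduler’s bin-packing; if the tasks actually use the memory they reserve, the kernel’s OOM killer will step in, so use this with care.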