I’m setting up a Nomad cluster for the first time, and I’d love to know if there are best practices to reason about, or measure, how much CPU to allocate to a task.
The best I can think of is to look at Wikipedia’s page on “Microprocessor Chronology” and try to guess at what seems reasonable on current hardware.
My initial reaction would be to set `cpu` to a low number, like 100, as that shows up often in the documentation. However, that doesn’t have a measurable basis.
Is there either a rule of thumb, or a set of measurement tools I could run, to evaluate this?
Is this for Docker or raw_exec or something else? I would take a look at the workload as it runs: the Nomad UI (or `nomad alloc status -stats <alloc-id>`) will show you how much CPU an allocation is actually using.
It’s a little tricky because `cpu` is not a hard limit by default: I believe a task is allowed to burst above its allocation if spare CPU is available. In some ways the value is used more for scheduling, as long as the server is not overutilized.
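For reference, here’s roughly what that looks like in a job spec (a sketch only; the task name and numbers are placeholders, not recommendations):

```hcl
task "web" {
  driver = "docker"

  resources {
    # Soft limit in MHz: used for bin packing at scheduling time,
    # and as a relative share when the node's CPU is saturated.
    # The task may burst above this when spare CPU is available.
    cpu    = 500
    memory = 256
  }
}
```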
My understanding is that the `cpu` value is used for:
- Scheduling workloads / bin packing
- Divvying up CPU shares if the CPU is overutilized (higher number = more priority)
I am not 100% sure this is the case but this is my understanding at the moment.
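To make the “higher number = more priority” part concrete, here’s a tiny sketch of proportional-shares arithmetic (my understanding is that on Linux the `cpu` value maps to cgroup CPU shares; the task list below is hypothetical):

```python
def cpu_share(task_shares: int, all_shares: list[int]) -> float:
    """Fraction of a saturated CPU a task receives under
    proportional cpu-shares scheduling: its shares divided by
    the total shares of all competing tasks."""
    return task_shares / sum(all_shares)

# Three hypothetical tasks with cpu = 100, 200, 700 competing
# for a fully utilized node.
shares = [100, 200, 700]
print([cpu_share(s, shares) for s in shares])  # → [0.1, 0.2, 0.7]
```

So when the node is *not* overutilized the numbers barely matter at runtime, but under contention a task with `cpu = 700` gets seven times the CPU time of one with `cpu = 100`.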