so the Nomad hardware requirements page specifies… " 4-8+ cores, 16-32 GB+ of memory, 40-80 GB+ of fast disk".
but… is the Nomad client or server actually multithreaded? what happens if I pin it to a single logical core, aka only allocate it a single hyperthread?
and will it really consume many gigabytes of memory? if so, do you have any ballpark estimates? I need to be very careful to avoid resource starvation for my application’s important binaries, which Nomad is meant to schedule.
and will it really consume that many gigabytes of disk space? I have the disk space to spare, but would still love some clarity so I can be prepared.
Hi @victorstewart! The answer to all these questions is “it depends” because it’s going to scale with (1) the number of client nodes in your cluster, and (2) the number of jobs and allocations in that cluster.
The hardware requirements we’ve provided are for typical enterprise use cases. If you’re running tiny clusters, you can probably get away with a lot less. If you’ve got a good monitoring culture and deployment pipeline for your team, you could also start smaller and then make adjustments over time as you expand the cluster.
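On the starvation concern specifically: one knob worth knowing about is the `reserved` block in the Nomad client configuration, which carves out CPU, memory, and disk headroom that the scheduler will never offer to allocations. A minimal sketch (the numbers here are placeholder values, not recommendations — tune them for your hardware):

```hcl
client {
  enabled = true

  # Resources the scheduler will NOT hand out to allocations,
  # leaving headroom for the Nomad agent itself and the host OS.
  reserved {
    cpu    = 500  # MHz
    memory = 512  # MB
    disk   = 1024 # MB
  }
}
```

With something like this in place, a noisy allocation can't eat the last of the node's memory out from under the agent.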
> but… is the Nomad client or server actually multithreaded? what happens if I pin it to a single logical core, aka only allocate it a single hyperthread?
It’s very multithreaded. Nomad is written in Go, so it uses userland threads (“goroutines”) that are M:N mapped onto OS threads. On clients, most of these goroutines are I/O-bound, but on servers the scheduler workers are heavily CPU-bound and will use as much CPU as you can give them. The less compute power you have, the slower scheduling will be.
If you’re trying to run on resource-constrained machines, one thing to watch out for in cloud environments is “burstable” VM types like AWS’s t2 instances. These machines can look like they’ll do the job for a while, then suddenly get starved of CPU badly enough that the servers go into leader-election flapping.