The use case:
Say I’ve got a homogeneous set of batch jobs (crunching data) and I’d like to execute them in parallel on a heterogeneous cluster of machines. Some of these machines support containerization/virtualization and some do not. Perhaps they are also running different operating systems. All agents are capable of processing the incoming data, but some might be executing a container while others are running an executable in raw_exec mode to do so.
My assumption is that the tasks for these jobs might need to look slightly different depending on which agent they’re running on: specifically, the task driver will need to differ based on the capabilities of the agent, and the task definition itself may need to vary as a result. However, since the agents are all capable of processing the data in one way or another, it makes more sense to treat them as a homogeneous pool during work assignment and to deal with the details of how the data is processed once the work has been assigned.
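For illustration, the closest thing I can picture is a single job with two task groups, each constrained on a fingerprinted driver attribute. This is only a sketch of the shape of the problem (the image name, command path, and group names are placeholders), and it doesn't really achieve what I want, since the scheduler would assign work per group rather than treating the agents as one pool:

```hcl
job "crunch" {
  datacenters = ["dc1"]
  type        = "batch"

  # Runs only on agents that fingerprint the docker driver.
  group "containerized" {
    constraint {
      attribute = "${attr.driver.docker}"
      value     = "1"
    }
    task "crunch" {
      driver = "docker"
      config {
        image = "example/cruncher:latest" # placeholder
      }
    }
  }

  # Runs only on agents that fingerprint raw_exec.
  group "bare" {
    constraint {
      attribute = "${attr.driver.raw_exec}"
      value     = "1"
    }
    task "crunch" {
      driver = "raw_exec"
      config {
        command = "/opt/cruncher/run.sh" # placeholder
      }
    }
  }
}
```

What I'd rather express is "run this work on any agent, and pick the driver/config appropriate to that agent," without fixing the split between groups up front.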
Does Nomad have facilities for dealing with this case? Am I thinking about this wrong, or asking the wrong questions? I imagine this could be solved by writing a custom task driver, but that seems a bit drastic for a problem that could presumably be handled by dynamic driver/task selection.