Thank you for the reply.
While thinking about these constraints, I realized I was actually trying to solve a slightly different problem: I want to manage cluster resources so that "system" nodes host all the stuff we need to run as additional services alongside our own. That could include queue managers (like RabbitMQ), maybe an SMTP service, etc. These services get their own resource pool and use it without affecting the main microservices we need to run. To keep Waypoint working reliably, there could be several nodes where the server and runner can be installed.
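For context, this is roughly how such "system" nodes can be tagged in Nomad today using client metadata; the `role` key and the `system` value here are just my own naming convention, not anything built into Nomad:

```hcl
# Nomad client configuration on a "system" node.
# The meta key "role" and value "system" are an example
# convention, not Nomad built-ins.
client {
  enabled = true

  meta {
    role = "system"
  }
}
```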
To repeat my main idea: I just want a more controllable cluster. I have nodes where I run my stuff, and I want them to be flexible: I should be able to start, stop, or restart them on demand. It is normal for system requirements to change, which means I have to update Nomad's configuration file (for example, to add a volume). If I then restart Nomad, I want to be sure the CD pipeline is still working.
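As an example of the kind of change I mean, adding a host volume is done in the client stanza and requires restarting the Nomad agent; the volume name and path below are purely illustrative:

```hcl
client {
  enabled = true

  # Adding a host volume like this requires restarting the
  # Nomad agent on the node. Name and path are examples only.
  host_volume "rabbitmq-data" {
    path      = "/opt/nomad/volumes/rabbitmq"
    read_only = false
  }
}
```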
In our system there are nodes where changes happen quite frequently, and there are also nodes allocated to a single service only, where no changes will be applied for a very long time. I think it would be really great to be able to control where Waypoint instances run. Nomad already has all the features needed for this.
As you mentioned, it would be great to have an additional optional parameter in the Waypoint install command that sets constraints (a node metadata attribute, for example) to be honored when deploying the Waypoint server and runner instances.
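Under the hood, such a parameter could translate into an ordinary Nomad constraint stanza on the generated server and runner jobs, something like the sketch below (this assumes nodes are tagged with a `meta.role` key in their client config; the key name and value are only an example, and any node attribute Nomad exposes could be used instead):

```hcl
job "waypoint-server" {
  # Pin the Waypoint server to nodes tagged role=system.
  # meta.role is a node metadata key set in the client config;
  # it is an example convention, not a Nomad built-in.
  constraint {
    attribute = "${meta.role}"
    value     = "system"
  }

  # ... rest of the generated job spec
}
```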