I've run into a strange problem. My Nomad jobs are built from templates, and before they can be used, `nomad plan` is run to check whether the configuration is OK.
I have a PostgreSQL server configuration that has not changed for a really long time, but Nomad recently started failing it with the message "missing compatible host volumes". The service uses a Docker volume, that volume is described in the Nomad client configuration, and the service is running in the same environment I run `nomad plan` against. So the volume configuration should be OK.
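For context, this is how the host volume wiring is supposed to match up (a minimal sketch with hypothetical names like `pg_data`, not my actual config): the client declares a `host_volume`, and the job requests it by the same name via `volume` and `volume_mount`:

```hcl
# Client configuration (hypothetical names for illustration)
client {
  host_volume "pg_data" {
    path      = "/opt/postgres/data"
    read_only = false
  }
}

# Job file: the group requests the volume by the same source name
job "postgres" {
  group "db" {
    volume "pg_data" {
      type      = "host"
      source    = "pg_data"   # must match the client's host_volume name
      read_only = false
    }

    task "postgres" {
      driver = "docker"

      volume_mount {
        volume      = "pg_data"
        destination = "/var/lib/postgresql/data"
      }

      config {
        image = "postgres:16"
      }
    }
  }
}
```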
The strangest thing is that I have another PostgreSQL server configuration (for a separate service) with a different volume, and that one plans with no problems.
Now the magic: if I paste my second service configuration into the Nomad UI "Run" window and push the "Plan" button, it plans with no problems: "Scheduler dry-run: All tasks successfully allocated."
But if I take that same configuration and change only the job name, it fails.
It is exactly the same configuration, with all the same settings, constraints, and volumes.
Why and how is the job name linked to the volumes?
I hope somebody can help with this.
Nomad version: 1.7.3.