I am attempting to create a Nomad job for a Redpanda HA cluster. The internal config files need to know how to reach the other nodes (I have a group with a count of 3), similar to how we would set up bootstrap addresses for other Consul or Nomad server nodes. I'm having trouble figuring out how to describe this in the Nomad templates; I would think I could reference the allocations by index (0-2), but I can't find any primitives that look like they would do the trick, and it seems like the references in the template would be hard to know at launch time.
I am able to see things like node.unique.id or NOMAD_IP_label, but those all refer to the current allocation. I need to be able to point to the other allocations.
I’m using Consul, so maybe there’s some service-level environment variables there I can tap into.
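For reference, this is roughly the shape of template I was imagining, using the Consul catalog rather than allocation indices. The service name `redpanda` and the YAML layout are placeholders, not from my actual job:

```hcl
# Nomad template stanza (consul-template syntax under the hood).
# Assumes the Redpanda tasks register a Consul service named
# "redpanda"; every healthy instance is rendered as a seed server.
template {
  destination = "local/redpanda.yaml"
  change_mode = "restart"
  data        = <<EOF
redpanda:
  seed_servers:
{{- range service "redpanda" }}
    - host:
        address: {{ .Address }}
        port: {{ .Port }}
{{- end }}
EOF
}
```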
Has anyone done this? Any help would be appreciated.
I also thought that was what I needed, but there is a Catch-22: you need to look up services in the catalog in order to register those same services in the catalog. This is a self-referential loop, and if I remember correctly, you can't ask the job to find all other instances of its own services; you can only find services that already exist in the catalog.
One of the hacks I thought of trying was to register so-called “pilot” services – dummy services which register themselves independently, reserving the place of the actual service which will be placed later. I thought of using the prestart lifecycle for this, but never actually got around to it.
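Just as a sketch of the pilot idea (untested; the task and service names here are hypothetical), something like a prestart sidecar that does nothing but hold a service registration:

```hcl
# Hypothetical "pilot" task: a prestart sidecar that registers a
# placeholder Consul service so catalog entries exist before the
# main task's template tries to render. Never actually tried this.
task "pilot" {
  lifecycle {
    hook    = "prestart"
    sidecar = true # keep running so the registration stays in the catalog
  }

  service {
    name = "redpanda-pilot"
    port = "rpc"
  }

  driver = "docker"
  config {
    image   = "busybox:1.36"
    command = "sleep"
    args    = ["infinity"]
  }
}
```

The main task's template would then range over `redpanda-pilot` instead of the real service, sidestepping the self-referential lookup.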
It seems like using the independent consul-template binary, as you originally suggested, would be better than what is baked into Nomad, since it can keep watching the catalog and update the config as task statuses change.
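A minimal standalone consul-template config for that approach might look like this (paths, service name, and the reload command are all placeholders; I'm not sure Redpanda supports a live reload, so a restart may be needed instead):

```hcl
# Standalone consul-template watching the Consul catalog directly.
# Re-renders the Redpanda config and runs the command whenever the
# set of healthy "redpanda" instances changes.
consul {
  address = "127.0.0.1:8500"
}

template {
  source      = "/etc/consul-template/redpanda.yaml.ctmpl"
  destination = "/etc/redpanda/redpanda.yaml"
  command     = "systemctl restart redpanda"
}
```

Running this outside Nomad decouples the config rendering from the allocation lifecycle, which is what avoids the Catch-22 above.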