Binding IPs to Docker Jobs?

I’ve set up a Nomad/Consul Raspberry Pi cluster here to play around with (as an aside, I worked on some Ansible to get it stood up quickly if anyone wants to try it as well). It gives me some good Nomad practice while also giving me the option to run some things I’d been running on my NAS as raw LXC containers.

One problem I’ve run into is how to expose services externally in a way that isn’t super complicated. Say I want to run Grafana in Nomad: I don’t choose which node it ends up on, and in the case of Grafana, like most of my jobs, I only need one instance running.
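For concreteness, the job is roughly this (the image, datacenter name, and static port are just example values):

```hcl
job "grafana" {
  datacenters = ["dc1"]

  group "grafana" {
    count = 1 # only ever need one instance

    network {
      # pin a static host port so it's reachable from outside the cluster
      port "http" {
        static = 3000
      }
    }

    task "grafana" {
      driver = "docker"

      config {
        image = "grafana/grafana:latest"
        ports = ["http"]
      }

      # register with Consul so the service is discoverable
      service {
        name = "grafana"
        port = "http"
      }
    }
  }
}
```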

I was trying to figure out if there’s a way to just assign a static IP to the job itself so that, no matter where it runs, it has the same IP on my local network, akin to adding another IP to a normal Linux machine or to giving an LXC container a static IP. I know that’s not really the right way to do it, but for my setup it’d be a lot simpler.
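In other words, the moral equivalent of this running on whatever node the job lands on (the address and interface are just examples):

```sh
# add a secondary address that "follows" the Grafana job around
ip addr add 192.168.1.50/24 dev eth0
```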

To work around this, I instead set up an HAProxy instance on my NAS that health-checks all the nodes on the port Grafana runs on (and also maps it over to port 80). But that means if my NAS is down, I can’t get to Grafana, which sort of invalidates part of the benefit.
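Roughly what that config looks like (node IPs and the backend port are examples from my setup):

```
frontend grafana_in
    mode http
    bind *:80
    default_backend grafana_nodes

backend grafana_nodes
    mode http
    # Grafana exposes /api/health; only the node actually
    # running the job passes the check
    option httpchk GET /api/health
    server pi1 192.168.1.11:3000 check
    server pi2 192.168.1.12:3000 check
    server pi3 192.168.1.13:3000 check
```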

I know in a real setup Consul would be involved, and I could leverage it to configure DNS (e.g. grafana.mynetwork.home, either updated from Consul’s catalog or pointed at a Consul DNS entry). But I haven’t figured out how to do that without a single point of failure, and likewise, if I’m injecting records, I’d need to write something that works with my MikroTik router.
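The Consul half of that seems simple enough, e.g. forwarding the .consul domain to an agent’s DNS interface with dnsmasq (the agent address is an example; 8600 is Consul’s default DNS port):

```
# dnsmasq: send *.consul queries to a Consul agent
server=/consul/192.168.1.11#8600
```

With that, a registered service resolves as grafana.service.consul and points at whichever node is currently running the job. It’s doing this without one box (the dnsmasq host, the NAS) becoming the single point of failure, and integrating it with the MikroTik, that I’m stuck on.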

That just feels way more complicated than it should be for what’s basically a lab setup, compared to just assigning IPs to the various services I’d want to run, but I’ve been tripping over myself trying to figure it out.

Any ideas?