Nomad service discovery registering public IPv4 instead of private

I have a cluster set up with a few different nodes, and want to deploy a postgres job to one of them. In my jobspec I have specified that it should use Nomad’s service discovery:

service {
    name = "postgres"
    provider = "nomad"
    port = "pg"

    check {
        type     = "tcp"
        interval = "10s"
        timeout  = "2s"
    }
}
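
For completeness, the "pg" port label refers to a port defined in the group's network block, roughly like this (a sketch; the static Postgres port is an assumption on my part, it could just as well be a dynamic port):

network {
    port "pg" {
        static = 5432  # assumption: default Postgres port
    }
}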

When the job is allocated, I can see in the Nomad UI that it is registered as a service; however, it has been registered with the node's public IPv4 address rather than its private IP. The node's address on the private LAN is 10.0.0.8, and I'd like Nomad to register that IP for the postgres service, since I want my application to communicate over this fast local private network.

I can’t find a way to tell Nomad whether it should register the public or the private IP address for service discovery. Is this possible?

Edit: not sure if it’s relevant, but while investigating this I found another issue: my client node’s address in the Nomad UI was incorrect (172.17.0.1:4646). I think this was introduced by an earlier change in the bash script we use to provision clients. That address belongs to the docker0 network adapter, according to ip address list:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 96:00:02:ea:3a:5a brd ff:ff:ff:ff:ff:ff
    inet redacted metric 100 scope global dynamic eth0
       valid_lft 85903sec preferred_lft 85903sec
    inet6 fe80::9400:2ff:feea:3a5a/64 scope link
       valid_lft forever preferred_lft forever
3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel state UP group default qlen 1000
    link/ether 86:00:00:72:8a:eb brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.8/32 brd 10.0.0.8 scope global dynamic enp7s0
       valid_lft 85905sec preferred_lft 85905sec
    inet6 fe80::8400:ff:fe72:8aeb/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:d9:6f:16:55 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

However, updating my client’s configuration to:

network_interface = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"name\" }}"

and then restarting Nomad does not fix it. Even hard-coding network_interface to enp7s0 does not change the address shown in the UI to one in the 10.0.0.0/8 range. I’m not sure whether this is a related issue, but I have a feeling that if network_interface were set correctly, Nomad would bind my postgres service to an address in 10.0.0.0/8.
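
For reference, network_interface sits inside the client stanza of the agent configuration; the snippet below is a sketch of just that part:

client {
  enabled = true

  # fingerprint and allocate task ports on the private LAN interface only
  network_interface = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"name\" }}"
}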

Edit 2: after messing around a bit and updating my nomad.hcl configuration to have the following:

addresses {
  http = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
  rpc = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
  serf = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
}

it looks like all my nodes now show the correct IPs (in the 10.0.0.0/8 range) in the Nomad UI, and my postgres service now has the address 10.0.0.8, which is what I wanted. I’m still unsure whether I’ve solved this in the best way, so any further input is appreciated. My bind_addr is 0.0.0.0, and it looks like Nomad was picking the docker0 address to advertise for http, rpc and serf, which is not what I would have expected.
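
For anyone finding this later, the relevant parts of my client’s nomad.hcl now look roughly like this (a sketch combining the snippets above; the comments are my understanding of what each piece does, so treat them with a grain of salt):

bind_addr = "0.0.0.0"

# bind the agent's http/rpc/serf listeners to the private LAN address
addresses {
  http = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
  rpc = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
  serf = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}"
}

client {
  enabled = true

  # fingerprint and allocate task ports on the private interface
  network_interface = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"name\" }}"
}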

Hi,

I have a sandbox stack … with both a public and a private IP. It took a while … but I ended up with the config below. It makes sure that services are only registered on the 10.4.x.x addresses.

{
  "data_dir": "/opt/nomad",
  "disable_update_check": "true",
  "region": "ffm-sandbox",
  "datacenter": "foobar",
  "log_level": "INFO",
  "bind_addr": "0.0.0.0",
  "server": {
    "enabled": true,
    "bootstrap_expect": 3,
    "server_join": {
      "retry_join": [
        "hashi-server-01.sandbox.example.work:4648",
        "hashi-server-02.sandbox.example.work:4648",
        "hashi-server-03.sandbox.example.work:4648"
      ]
    }
  },
  "consul": {
    "address": "hashi-server-01.sandbox.example.work:8501",
    "server_service_name": "nomad",
    "client_service_name": "nomad-client",
    "auto_advertise": true,
    "checks_use_advertise": true,
    "server_auto_join": true,
    "client_auto_join": true,
    "ssl": true,
    "ca_file": "/etc/ssl/acme/ca.pem",
    "cert_file": "/etc/ssl/acme/_.sandbox.example.work.crt",
    "key_file": "/etc/ssl/acme/_.sandbox.example.work.key",
    "token": ""
  },
  "tls": {
    "http": true,
    "rpc": true,
    "ca_file": "/etc/ssl/acme/_.sandbox.example.work.crt",
    "cert_file": "/etc/ssl/acme/_.sandbox.example.work.crt",
    "key_file": "/etc/ssl/acme/_.sandbox.example.work.key",
    "verify_server_hostname": false,
    "verify_https_client": false
  },
  "acl": {
    "enabled": true,
    "token_ttl": "30s",
    "policy_ttl": "60s",
    "role_ttl": "60s"
  },
  "vault": {
    "enabled": true,
    "ca_file": "/etc/ssl/acme/_.sandbox.example.work.crt",
    "cert_file": "/etc/ssl/acme/_.sandbox.example.work.crt",
    "key_file": "/etc/ssl/acme/_.sandbox.example.work.key",
    "address": "https://vault.example.payabl.work",
    "create_from_role": "nomad-cluster",
    "token": "",
    "tls_skip_verify": true
  },
  "advertise": {
    "http": "10.4.1.11",
    "rpc": "10.4.1.11",
    "serf": "10.4.1.11"
  },
  "plugin": [
    {
      "docker": {
        "config": {
          "allow_privileged": false,
          "volumes": {
            "enabled": false
          }
        }
      }
    }
  ]
}
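
The part that actually answers the original question is the advertise block near the end. We hard-code each node’s private IP there, but as far as I know the advertise values also accept go-sockaddr templates, so something like the sketch below (shown in HCL form) should work too if you don’t want per-node values; the 10.4.0.0/16 range is an assumption for our 10.4.x.x network:

advertise {
  http = "{{ GetPrivateInterfaces | include \"network\" \"10.4.0.0/16\" | attr \"address\" }}"
  rpc = "{{ GetPrivateInterfaces | include \"network\" \"10.4.0.0/16\" | attr \"address\" }}"
  serf = "{{ GetPrivateInterfaces | include \"network\" \"10.4.0.0/16\" | attr \"address\" }}"
}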