I’m experimenting with a single-node instance of Nomad (v1.2.0). This client is using the podman driver. It already had a bridge (br0) before I started using Nomad, and I’m using the CNI option for networking. The conflist looks like this:
{
  "cniVersion": "0.4.0",
  "name": "testing",
  "plugins": [
    {
      "type": "loopback"
    },
    {
      "type": "bridge",
      "bridge": "br0",
      "isGateway": false,
      "ipMasq": false,
      "hairpinMode": false,
      "ipam": {
        "type": "host-local",
        "routes": [{ "dst": "0.0.0.0/0" }],
        "ranges": [
          [
            {
              "subnet": "192.168.1.0/24",
              "rangeStart": "192.168.1.160",
              "rangeEnd": "192.168.1.191",
              "gateway": "192.168.1.1"
            }
          ]
        ]
      }
    }
  ]
}
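For reference, Nomad discovers this conflist via the client agent configuration; the paths below are Nomad's defaults, so adjust if yours differ:

```hcl
client {
  # Where the CNI plugin binaries (bridge, loopback, host-local, ...) live.
  cni_path = "/opt/cni/bin"

  # Directory Nomad scans for *.conflist files; the "testing" network
  # above must be defined by a conflist in this directory.
  cni_config_dir = "/opt/cni/config"
}
```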
The intent is to give each job's group an IP address directly on my network rather than doing any port mapping.
In an example job’s configuration, my group section includes
network {
  mode = "cni/testing"

  port "http" {
    static = 80
  }
}
and
service {
  tags         = ["http"]
  name         = "${JOB}"
  port         = "http"
  address_mode = "alloc"
}
I can run Nomad jobs and reach the containers at their allocated IP addresses, and those addresses are being published correctly in Consul. In general it seems to be working.
However, the Nomad web UI shows allocations with a host address of e.g. 192.168.1.10:80 and a mapped port of 0. That 192.168.1.10 is my client's address; I would expect to see the IP that is being published in Consul. Also (perhaps due to the same root issue), if I plan another job with a static port of 80, I get the error:
"network: reserved port collision http=80" exhausted on 1 nodes
This is unexpected: the second job's port would be bound on a different per-allocation IP address, so there should be no conflict with the host's network or with the first job.
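For concreteness, the second job's group uses the same shape of network stanza (the port label here is just an example), and it is this stanza that the plan rejects:

```hcl
network {
  mode = "cni/testing"

  # Same static port as the first job, but it should land on a
  # different host-local IP from the 192.168.1.160-191 range.
  port "http" {
    static = 80
  }
}
```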
Any clues as to what I might be doing wrong?
Thanks!