Nomad job unreachable in bridge mode networking even though container runs fine

Hi folks, an interesting problem cropped up the other day and it’s got me absolutely stumped. We’re running a Traefik Docker container in a Nomad job. The job itself has a network stanza that includes:

network {
    mode = "host"
}

The ports are set up as follows:

port "http" {
    to = 8080
    static = 80
}
port "https" {
    to = 8443
    static = 443
}

Traefik is configured to listen on ports 8080 and 8443 respectively, and this works fine. The host itself is configured with network_interface = "ens10" (which has IP 10.8.1.2), and the allocation view in the UI shows that Nomad has indeed allocated the ports on the right IP address.
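For completeness, the relevant part of the Nomad client config on that node is more or less the following (everything else in the client stanza omitted here):

client {
    # other client settings trimmed; the relevant bit is the interface binding
    enabled           = true
    network_interface = "ens10"
}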

Today I wanted to enable Influx metrics recording from Traefik, and to do that I need Consul Connect to reach our Influx instance. I figured just changing the network mode to bridge would suffice because, well, that exact setup works for every other service we’ve got running.
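In rough terms, the change I had in mind looks like this (the upstream name "influxdb" and local bind port 8086 below are placeholders rather than our exact values):

network {
    mode = "bridge"

    port "http" {
        static = 80
        to     = 8080
    }

    port "https" {
        static = 443
        to     = 8443
    }
}

service {
    name = "traefik"
    port = "http"

    connect {
        sidecar_service {
            proxy {
                upstreams {
                    # placeholder upstream; the real job points at our Influx instance
                    destination_name = "influxdb"
                    local_bind_port  = 8086
                }
            }
        }
    }
}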

After adding the sidecar/proxy/upstream bits, changing the network mode to bridge, and restarting the job, the allocation view still shows the proper port mapping on the proper IP. However, when you try to connect to it, you get a flat-out connection refused. The Connect proxy, on the other hand, functions just fine and can be connected to.

The node itself has no firewall: all iptables policies are set to ACCEPT with an empty ruleset, and the only rules shown by iptables -L and iptables -L -t nat are the ones Nomad puts there.

Anyone have any suggestions for what on earth could be causing this?

Hi @benvanstaveren,

I’ve been having the same issue in the same scenario.

What I found additionally was that on the host node I could query the service on the loopback interface without getting connection refused, but not from any other interface.

Have you made any progress on the issue?

The problem sort of fixed itself: a full reboot of the physical nodes in question just made things work. I’m not sure why, but I guess turning it off and on again does work :smiley: