I’m looking for advice on the following:
My goals:
- I have Nomad clients that are connected to multiple networks (either via VLAN interfaces or multiple physical interfaces).
- I would like to run jobs on these clients, but restrict the outbound communication of certain jobs to only one of the connected networks.
- Use Consul Connect for inbound routing to these jobs.
My understanding so far: for inbound port mapping, I can specify a `host_network` to restrict which host interface the external port of a service is exposed on; this applies to inbound traffic only. I could also bind the application to 127.0.0.1 in the alloc and use Consul Connect with a sidecar proxy to control access to the service.
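For reference, a minimal sketch combining those two inbound mechanisms. The `host_network` name (`vlan10`) and the interface it maps to are assumptions for illustration; the first block goes in the Nomad client config, the second in the job spec:

```hcl
# Nomad client config: define a named host network bound to one interface.
client {
  host_network "vlan10" {
    interface = "eth0.10" # example VLAN interface; adjust to your setup
  }
}
```

```hcl
# Job spec: expose the service's port only on that host network, and let
# Consul Connect handle inbound routing via the sidecar proxy.
group "app" {
  network {
    mode = "bridge"
    port "http" {
      to           = 8080     # app binds to 8080 inside the alloc
      host_network = "vlan10" # external port exposed only on this network
    }
  }

  service {
    name = "app"
    port = "8080"
    connect {
      sidecar_service {}
    }
  }
}
```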
However, I also want to control outbound access from the application in the task.
The way it currently works: the veth devices in the alloc get connected to the default Nomad bridge device:
```
pi@clusterpi-03:/srv/docker $ bridge link show
38: veth09743fb6@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master nomad state forwarding priority 32 cost 2
39: veth0fe2613d@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master nomad state forwarding priority 32 cost 2
40: vethcc8eb4af@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master nomad state forwarding priority 32 cost 2
41: veth1e688971@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master nomad state forwarding priority 32 cost 2
```
Inside the alloc, the default route points to the IP of the Nomad bridge, and on the host, traffic leaving the bridge is masqueraded to a host IP. This means the task can make outbound connections to any network that the host can reach.
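This is easy to confirm on a client; `<alloc-id>` is a placeholder, and the exact NAT chains vary since they are installed by the CNI plugins Nomad invokes:

```sh
# Nomad creates a named network namespace per bridge-mode alloc:
ip netns list

# The default route inside an alloc points at the nomad bridge IP:
ip netns exec <alloc-id> ip route

# On the host, look for the masquerade rules covering the alloc subnet
# (172.26.64.0/20 by default, unless bridge_network_subnet was changed):
iptables -t nat -S | grep -i masq
```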
I would rather restrict the alloc to a single network, as well as ensure that outbound connections from the task route via a specific gateway.
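One host-level workaround I can see is policy routing plus a firewall rule. This is only a sketch, it applies to every alloc on the default bridge rather than per-job, and the subnet, interface, and gateway below are assumptions:

```sh
# Assumptions: the alloc subnet is Nomad's default 172.26.64.0/20, the
# desired egress network hangs off eth0.10, and its gateway is 10.0.10.1.

# Send traffic sourced from the alloc subnet through a dedicated routing
# table whose only default route uses the chosen gateway:
ip route add default via 10.0.10.1 dev eth0.10 table 100
ip rule add from 172.26.64.0/20 lookup 100

# Drop anything from the alloc subnet that would leave via another
# interface. Note this is host-wide and would also drop Connect traffic
# that needs to reach other clients over a different network:
iptables -I FORWARD -s 172.26.64.0/20 ! -o eth0.10 -j DROP
```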
The topic "Network isolation between groups/jobs on the same host" talks about alloc-to-alloc communication, not communication from an alloc to the outside network.
CNI plugins are a possible option: I could use macvlan to provide a network interface inside the alloc that is isolated to a single network. There are some limitations here though:
- The CNI macvlan plugin requires an IPAM plugin to assign an IP address, so each Nomad client would need its own managed pool of IPs on each connected VLAN, leading to management complexity (see the sketch below).
- Connecting the alloc to a macvlan interface for external routing while also connecting it to the Nomad bridge for Consul Connect does not appear to be possible.
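To illustrate the IP management problem, here is roughly what such a conflist would look like with the `host-local` IPAM plugin. The subnet, range, and master interface are examples; the `rangeStart`/`rangeEnd` carve-out is what would have to be coordinated by hand for each client:

```json
{
  "cniVersion": "0.4.0",
  "name": "vlan10",
  "plugins": [
    {
      "type": "macvlan",
      "master": "eth0.10",
      "ipam": {
        "type": "host-local",
        "ranges": [
          [
            {
              "subnet": "10.0.10.0/24",
              "rangeStart": "10.0.10.32",
              "rangeEnd": "10.0.10.63",
              "gateway": "10.0.10.1"
            }
          ]
        ],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
```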
Another option would be to define a CNI bridge where I can specify the VLAN when attaching the veth to the bridge (e.g. `bridge vlan add dev veth2 vid 2 pvid untagged`), but this is not supported by the CNI bridge plugin.
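For clarity, the manual equivalent of what I would want the plugin to do looks like this (the veth name is taken from the output above, and VLAN filtering has to be enabled on the bridge first):

```sh
# Enable VLAN filtering on the nomad bridge, then pin one alloc's veth
# to VLAN 2 as an untagged access port:
ip link set dev nomad type bridge vlan_filtering 1
bridge vlan add dev veth09743fb6 vid 2 pvid untagged
```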
So, a few specific questions:
- Is Consul Connect only supported if the alloc is connected to the default Nomad bridge device?
- Is there a way of having cluster-level IP allocation management (so that Nomad can manage the addresses assigned to macvlan-based interfaces)?
- Are there any worked examples of managing alloc → network connectivity as described here?