After experimenting a bit with a Nomad cluster, I would like to ask for recommendations on the host network layout.
I am wondering whether it makes sense to physically (or virtually) separate the network traffic between Nomad servers and Nomad clients from the actual service traffic, e.g. by having two interfaces: the first acting as an administration/cluster interface, while the second netif mostly carries the traffic hitting the services.
I am also wondering whether most people run in a host network mode, with host ports from the dynamic range 20000-32000 mapped to the actual services, or whether the services are handled by a dedicated DHCP/DNS server (Consul).
In the end I would like to be able to SSH into a container/VM without having to find out and, especially, to provide the host port to ssh (e.g. `ssh -p 21234 ...`). But I would still like to have the benefits of e.g. a service mesh, and the kind of isolation where being SSH'd into one machine does not mean I can use it as a jump host into the whole network. So here I would assume there is no way around the bridged network with the Envoy proxy in front, but how does this play together with SSHing via the default SSH port?
Any help would be appreciated
Hello, since there seem to be no recommendations so far, maybe users of medium-to-large clusters can elaborate a bit on their network setups and the motivations behind them.
Hi @schlumpfit! Nomad doesn't really prescribe a network topology - it's designed to be flexible enough to work with whatever fits your needs. I think the two-interface approach (one private, one public) is very common - in fact many cloud providers offer this kind of setup out of the box. In the Nomad client config you'd map both of them as `host_network`s and then make use of the appropriate one in the `network` config of each task.
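A minimal sketch of what that could look like (the network names and CIDRs here are assumptions, adjust them to your environment):

```hcl
# Nomad client config: register both interfaces as named host networks
client {
  host_network "cluster" {
    cidr = "10.0.0.0/24"      # netif1: administration/cluster traffic (assumed range)
  }
  host_network "workload" {
    cidr = "192.168.1.0/24"   # netif2: service/workload traffic (assumed range)
  }
}
```

and then, in a job's group-level network block, pick the interface per port:

```hcl
network {
  port "http" {
    to           = 8080
    host_network = "workload"  # expose this port only on the workload interface
  }
}
```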
Service discovery is another topic on top of that - as of v1.3.x Nomad has support for built-in service discovery via the API and the `template` stanza, but not DNS. DNS is something we are considering but it is not yet planned. Consul is of course Hashi's premier networking solution, and Nomad integrates with it natively. Consul can get you DNS-based service discovery, with or without the Connect/Envoy service mesh bits.
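For reference, a minimal sketch of consuming the built-in service discovery from the `template` stanza (the service name `web-app` is made up):

```hcl
template {
  data = <<EOF
{{ range nomadService "web-app" }}
upstream: {{ .Address }}:{{ .Port }}
{{ end }}
EOF
  destination = "local/upstreams.conf"
}
```

The `nomadService` template function queries Nomad's own service catalog, so no Consul is needed for this path.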
how does this play together with sshing via the default ssh port?
To expose a specific port for a service you can simply use a static port in the task's `network` config. Of course, a given static port can only be used by one allocation per network on a host.
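A sketch of that, assuming you want the allocation reachable on port 22 of the host:

```hcl
network {
  port "ssh" {
    static = 22   # fixed host port; only one allocation per host can claim it
    to     = 22   # port inside the allocation's network namespace
  }
}
```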
Thanks @seth.hoenig for the reply.
I am aware that Nomad leaves the exact setup to the end user, but this doesn't make it any easier.
The main issue I am facing right now is how to provide SSH access to containers and VMs in a nice way. Let’s assume I have a host with 2 interfaces:
- netif1: administration/cluster-traffic
- netif2: workload
The one of interest for my question would be netif2.
No matter whether we use Nomad's native service discovery with Traefik or Consul's service mesh, the following flow/isolation could be achieved:
client [network A]
  | firewall: port 443
proxy [network B]
  | firewall: ports 20000-32000
nomad clients/compute instances [network C]
So the only exposed port would be 443 in this case. Taking Consul's service mesh into consideration, even the traffic inside network C could be fully isolated/controlled via the mTLS certificates.
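If I understand correctly, that intra-network isolation would be expressed as Consul intentions; a sketch with made-up service names:

```hcl
# Consul config entry, applied with: consul config write intentions.hcl
Kind = "service-intentions"
Name = "backend"
Sources = [
  {
    Name   = "frontend"
    Action = "allow"   # only the frontend may talk to the backend
  },
  {
    Name   = "*"
    Action = "deny"    # everything else in the mesh is denied
  }
]
```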
So the question that comes to my mind is: How do people deal with SSH? Options that I can see are:
- Open the full port range from the clients to the Nomad client hosts. But I don't consider this a real option.
- Make use of static SSH ports other than 22. This defeats the beauty of the whole setup, since I suddenly get some static components and therefore dependencies.
- Don't use port forwarding from the Nomad clients to the allocations, but make use of the second network interface and a dedicated DHCP server. Here I don't see how this integrates with all the benefits of Consul like DNS forwarding, service discovery, and the service mesh (how to prevent communication between instances inside this network?). Maybe I am lacking some knowledge here.
In case the second approach would somehow work, I could imagine that all the HTTPS traffic would need to go through a proxy as described above, and only the SSH port would be exposed from client [network A] to the nomad clients/compute instances [network C]. But I have no clue what this would look like in the real world with Consul, Nomad, Envoy, ...