Hi all,
I’m trying to set up a DMZ with Consul Connect and Nomad, and I’m running into some issues.
My setup is the following:
- Nomad: three server nodes in datacenter “home” plus two client nodes in datacenter “dmz”
- Consul: three server nodes in datacenter “home” plus two client nodes, also in datacenter “home”, because client agents refuse to join the servers if the datacenter property does not match (see the sketch below)
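For reference, the relevant agent settings on one of the DMZ machines look roughly like this (hostnames are placeholders):

```hcl
# nomad-client.hcl on a DMZ machine
datacenter = "dmz"

# consul-client.hcl on the same machine: the datacenter has to match
# the Consul servers, otherwise the client refuses to join
datacenter = "home"
retry_join = ["consul-server-1.home.internal"]  # placeholder hostname
```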
My goal is to route service traffic from the “dmz” datacenter through a Consul Connect mesh gateway to the “home” datacenter. Right now I have ports 20000-31999 open in my firewall, but I would prefer to tighten that down to a single port, 8443, for the gateway.
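Roughly, this is the kind of gateway job I had in mind, as an untested sketch (names and values are mine, not a working config):

```hcl
job "mesh-gateway" {
  datacenters = ["dmz"]

  group "gateway" {
    network {
      mode = "bridge"
      # pin the gateway to a single well-known port
      port "mesh_wan" {
        static = 8443
        to     = 8443
      }
    }

    service {
      name = "mesh-gateway"
      port = "mesh_wan"

      connect {
        gateway {
          mesh {}
        }
      }
    }
  }
}
```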
Has anyone implemented something similar and could give me some pointers?
The documentation for more advanced topologies with Nomad + Consul still has room for improvement …
Hi @matthias,
Are you planning to run service mesh services in both DCs and have them communicate with each other via a mesh gateway (to minimize port exposure between DCs)? Or is it that your “dmz” DC has non-service-mesh services, and you want them to connect to the service-mesh services in the “home” DC via an ingress/API gateway deployed inside the “home” DC? It would be great if you could share a few more details.
If it is the former: since your Consul nodes are all in the same datacenter (“home”), Consul assumes that all services can talk to each other directly. Consul mesh gateways can only be used to connect services across datacenters (WAN federation or cluster peering) or across multiple admin partitions (Enterprise).
In that case, the only option is to run Consul in a multi-DC setup, but at your scale that would be overkill.
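Just to make the multi-DC option concrete, here is a minimal sketch of a Consul server config in a hypothetical second “dmz” DC, WAN-federated with “home” (hostname is a placeholder):

```hcl
# consul-server.hcl in a secondary "dmz" DC (hypothetical)
datacenter         = "dmz"
primary_datacenter = "home"
server             = true
bootstrap_expect   = 1                                  # tiny DC, for illustration
retry_join_wan     = ["consul-server-1.home.internal"]  # placeholder hostname

connect {
  enabled = true
}
```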
I would also like to highlight that this is where the Enterprise feature, admin partitions, would be useful. With admin partitions, you can segregate a single Consul DC into a home/default partition and a dmz partition, and then peer between the partitions using mesh gateways to achieve what you are looking for, as in the sketch below.
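As a rough sketch of the Enterprise route (partition and service names are examples):

```hcl
# Consul client agent config on a DMZ node (Enterprise only)
partition = "dmz"

# exported-services config entry in the default partition, making a
# service visible to the "dmz" partition (applied with `consul config write`)
Kind      = "exported-services"
Partition = "default"
Name      = "default"
Services = [
  {
    Name = "api"   # placeholder service
    Consumers = [
      { Partition = "dmz" }
    ]
  }
]
```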
If it is the second option, this is possible.
Hi @Ranjandas ,
thank you for your reply. This clarifies things a bit.
I’m running a very small setup consisting of three machines on the internal network and two machines in the DMZ. This is reflected in my Nomad setup, where I’m using two datacenters: three servers in the “home” DC and two clients in the “dmz” DC.
Unfortunately, it looks like the DC semantics of Nomad and Consul are quite different. Nomad is fine with having more than one DC defined in a single cluster, while Consul assumes that a single cluster always runs within one DC.
As I don’t really want to set up a full second Consul datacenter in my “dmz” zone, and I don’t want to upgrade to Enterprise features such as admin partitions, I think I will have to keep my current setup.
Hi @matthias,
Yes, you are right; the closest equivalent of a Consul DC in Nomad is a region.
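To illustrate the mapping with agent config (values are just examples):

```hcl
# Nomad agent: region is the outer boundary, datacenters nest inside it
region     = "global"
datacenter = "dmz"

# Consul agent: the datacenter itself is the cluster-wide boundary
datacenter = "home"
```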
My understanding is that your “dmz”-to-“home” connections will flow mostly in one direction, right? In that case, one option I can think of (to limit port exposure) is to have a Consul API Gateway (or something similar, such as a Connect ingress gateway) expose all the relevant services in the “home” DC; jobs deployed into “dmz” would then consume the “home” services via this gateway, exposing only a single port. Have you thought about this option?
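As a sketch of what I mean, using Nomad’s built-in Connect ingress gateway support (service name and port are examples, not tested against your setup):

```hcl
job "home-ingress" {
  datacenters = ["home"]

  group "ingress" {
    network {
      mode = "bridge"
      # the single port you would open in the firewall
      port "inbound" {
        static = 8080
        to     = 8080
      }
    }

    service {
      name = "home-ingress-gateway"
      port = "inbound"

      connect {
        gateway {
          ingress {
            listener {
              port     = 8080
              protocol = "tcp"
              # a TCP listener forwards to exactly one upstream service
              service {
                name = "api"  # placeholder: any Connect-enabled service in "home"
              }
            }
          }
        }
      }
    }
  }
}
```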
Thanks for the reply!
I’m currently doing something similar to your approach for Loki log management: I expose port 3100 for Loki and have added a corresponding exception to the firewall.
I will try this and see whether it works for my other use cases.