Using Nomad without Consul?

Hi everyone,

I’m new to Nomad and Consul.

I have a simple app (a small REST API + Postgres DB, in Docker containers) that I’d like to host on Oracle Cloud Infrastructure, where the free tier gives me 4 ARM cores with 24 GB RAM altogether, plus 2 AMD VMs, so I don’t need to pay a cent. I also get OCI Vault for free, so I probably don’t need to host HashiCorp Vault.

My main objective is to host a tiny Nomad deployment in this environment, if possible. If my app later becomes profitable and it’s needed, I would obviously pay for additional (or just bigger) VMs to improve my infra, but in the beginning I don’t want to pay anything.

I read the hardware requirements of Nomad and Consul servers and they are big:

  • For Nomad servers: “4-8+ cores, 16-32 GB+ RAM, 40-80 GB+ of fast disk and significant network bandwidth”
  • For Consul servers: “designed to run reliably (but relatively slowly) on a server cluster of three AWS t2.micro instances”. So 1 core, 1 GB RAM.

About Nomad, I read here: “it’s going to scale with (1) the number of client nodes in your cluster, and (2) the number of jobs and allocations in that cluster. The hardware requirements we’ve provided are for typical enterprise use cases. If you’re running tiny clusters, you can probably get away with a lot less.”

I won’t have many client nodes (only two) and I won’t have many Nomad jobs (probably 3-4: one is the web app, another is the DB, maybe some Redis later). But even with this tiny workload, I guess running just the Nomad servers will put me close to the edge of what these VMs can handle, so resource-wise I probably can’t afford to run Consul as well.

So in the end my architecture would be something like this:

Without spending a lot of time learning Nomad, and experimenting, does this look feasible?

I guess I’m okay to join the nodes into the cluster manually as described here.
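For the manual-join part, a minimal sketch of a Nomad client config pointing at a fixed server address (the IP here is a placeholder, not from the original post):

```hcl
# client.hcl — hypothetical Nomad client config for manual cluster join
client {
  enabled = true

  # Point the client at the Nomad server's RPC address (default RPC port 4647).
  # 10.0.0.1 is a placeholder; use your server VM's private IP.
  servers = ["10.0.0.1:4647"]
}
```

With a static server IP like this, no Consul-based auto-discovery is needed; the trade-off is that this file must be updated if the server ever changes address.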

But I’m not sure what other features I will miss if I don’t use Consul.

What I expect from Nomad:

  1. Monitor the web app and DB processes. If they die, restart them on the same node (web app on client-1 and DB on client-2)
  2. Show me some CPU and RAM usage charts
  3. Nice-to-have: if one node goes down, Nomad should try to run the job on the other node.
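Points 1 and 3 above are standard Nomad job-spec features. A minimal sketch of a DB job pinned to one node with automatic restarts (job name, image, and node name are assumptions for illustration):

```hcl
job "db" {
  datacenters = ["dc1"]
  type        = "service"

  # Pin the DB to a specific client node ("client-2" is a placeholder name).
  constraint {
    attribute = "${node.unique.name}"
    value     = "client-2"
  }

  group "postgres" {
    # Restart the task on the same node if the process dies.
    restart {
      attempts = 3
      interval = "5m"
      delay    = "30s"
      mode     = "delay"
    }

    task "postgres" {
      driver = "docker"
      config {
        image = "postgres:16"
      }
    }
  }
}
```

Note that a hard node constraint like this works against the nice-to-have in point 3: with the constraint in place Nomad cannot reschedule the job onto the other node, so you would drop it for any job you want to fail over.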

I’m aware this is not the world’s most resilient architecture, but my goal is to make it work for free, while keeping the option to switch to more powerful VMs later and make it more stable under a bigger load. I think Kubernetes would be overkill for this; the other thing I’m considering is k3s.

Please let me know if I’m on the right track or not.

Hi @zero ,

Considering your starting scale, this architecture should work. You may even be able to include Consul as well; at smaller scale, it is pretty lightweight.

Thanks, @Ranjandas !

Do you know what exactly I will miss if I don’t use Consul?

Only the automatic joining of the cluster?

Consul offers more advanced service-discovery capabilities, including a DNS interface for locating services. It also provides a built-in service mesh, should you decide to use one.

That said, given your current simple architecture, you can start with Nomad alone and add Consul later if and when you feel you need the additional features.

Hi, I run Nomad without Consul on my private server. Roughly, Consul allows you to:

  • Map DNS names to services
    • What will you do when your service changes IP?
    • How will you proxy requests from users to services? How will the proxy know where to forward them?
    • With Nomad services, you can regenerate the configuration of dependent services with the new IP. Effectively everything gets restarted with the new config and there is downtime.
    • What happens if you move the Nomad server? You then have to update every Nomad client’s configuration with the server’s new IP address. Effectively all Nomad clients and servers get restarted.
  • Build private networks between containers (see Integrate Consul service mesh | Nomad | HashiCorp Developer)
  • Run service health checks, like “is port 80 open, speaking HTTP, and responding with 200 OK?”
    • I just use Zabbix with some predefined checks instead.
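The “regenerate the configuration of dependent services with the new IP” point can be sketched with Nomad’s native service discovery (available since Nomad 1.3): a template block that re-renders whenever the service’s address changes. The service name “postgres” is an assumption here:

```hcl
task "webapp" {
  driver = "docker"

  # Re-rendered by Nomad whenever the "postgres" service
  # (registered with provider = "nomad") changes address.
  template {
    data        = <<EOF
{{ range nomadService "postgres" }}
DB_ADDR={{ .Address }}:{{ .Port }}
{{ end }}
EOF
    destination = "local/db.env"
    env         = true
  }
}
```

By default a template change restarts the task, which is exactly the “everything gets restarted with new config and there is downtime” behavior described above.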

In my experience: a few machines without Consul, all fine. With 100 services and no Consul, there might be issues with knowing which machine each service is running on.

Overall I think your setup is OK; with only two services it is not at any real “scale”. You can manage everything manually. Do you really need DNS names like postgres.service.consul.your.domain.com? The issues will come when you want redundancy for the web app across multiple machines, running HAProxy as the proxy with blue-green deployments… I think you can worry about these problems when it comes to that.

Overall, the main issue comes down to the nice-to-have item:

Nice-to-have: if one node goes down, Nomad should try to run the job on the other node.

How will the user be informed to connect to the new node? Typically, the new IP address is registered in DNS by Consul. You can, for example, update the service’s IP address at your DNS provider via its API.

I just wanted to add that CoreDNS now has a Nomad plugin ( coredns/plugin/nomad/README.md at master · coredns/coredns · GitHub ), which allows you to “export” the services registered with Nomad via DNS, too.
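A minimal Corefile sketch for that plugin, assuming the Nomad API is reachable on localhost and using the plugin’s default zone conventions (check the linked README for the exact options; the address here is a placeholder):

```
# Corefile — hypothetical CoreDNS config using the nomad plugin
nomad {
    # Serve Nomad-registered services over DNS; address is an assumption.
    nomad {
        address http://127.0.0.1:4646
    }
}
```

Services registered with Nomad then become resolvable via CoreDNS without running Consul at all.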
