What are the minimum requirements for a cloud instance, an EC2 Micro for example, and do you need a Load Balancer?

What are the minimum requirements for a small, low-power cluster?

For example, could you run it with 3-5 AWS EC2 Nano or Micro instances?
And if so, what would be the estimated RAM and CPU consumption of Nomad under moderate use?
The idea is for each EC2 instance to run a maximum of 8 containers, such as PHP, Redis, Nginx, and other services.

The hardware requirements stated in the documentation seem too high for certain uses; sometimes we just need a small EC2 cluster for greater redundancy.

I would also like to know if you need a Load Balancer in front of Nomad.

I’m weighing Nomad, Kubernetes, or simply using VMs + containers in a more manual way, maybe with Packer just to create the images and more manual control, in case the other options are too complex to maintain and consume a lot of resources.

Today we use 2 cloud instances with 1 CPU and 1GB each, and they have handled the load, but we expect higher consumption and new services.

Hi @FelipoAntonoff :wave:

We don’t provide any minimum requirements because we don’t actively test Nomad in low-powered clusters, so it’s hard to say what will or will not work for you.

There have been some previous discussions here in the forum that may help you get some more insights:

In general, I would say give it a try and see how it works for you, just make sure you have some resource monitoring set up :slightly_smiling_face:
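For a lightweight start, Nomad’s built-in telemetry can feed whatever monitoring you already run; a minimal agent stanza might look like this (the collection interval and the Prometheus format are just example choices):

```hcl
# Drop-in agent config, e.g. /etc/nomad.d/telemetry.hcl
telemetry {
  collection_interval        = "10s"
  publish_allocation_metrics = true
  publish_node_metrics       = true
  prometheus_metrics         = true  # metrics served at /v1/metrics?format=prometheus
}
```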


My venture into clusters was not fruitful, but I did manage to solve my issue referenced here. Since I just wanted a way to stand up a second instance if I needed to and route traffic during an upgrade, I ended up using Caddy. It will manage HTTPS for you via Let’s Encrypt and provide basic routing rules and proxying.


Oh no, sorry to hear it didn’t work out for you. Do you have any more details on what happened? Anything we can improve?


You had mentioned running Nomad on a Pi in the post you referenced; maybe you could share the configuration for that, which could help the poster here configure something light for EC2.

Regarding my own issue in that post: I think Nomad could be something I return to in the future; my need at the time was met another way. If Nomad can run on a Pi, then I guess the only suggestion I would offer is an easy way to start a cluster with that configuration on a local machine. I didn’t even know what I was really looking for at the time!


There are a few ways to do it. I created some janky Ansible playbooks that I’m not sure would add much value in sharing, because they are very specific to my environment.

But there are other options. hashi-up is probably the quickest way to start.

For a more production-like environment, building immutable images with Packer is also a good way, though it requires quite a bit more work. You can even use it to build Raspberry Pi images.
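As a rough sketch, a Packer HCL template for a Nomad client AMI could start like this (region, instance type, base AMI, and the install commands are placeholders to adapt):

```hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "nomad" {
  ami_name      = "nomad-client-{{timestamp}}"
  instance_type = "t3.micro"     # placeholder size
  region        = "us-east-1"    # placeholder region
  source_ami    = "ami-xxxxxxxx" # pick your base image, e.g. Debian 11
  ssh_username  = "admin"
}

build {
  sources = ["source.amazon-ebs.nomad"]

  provisioner "shell" {
    inline = [
      # placeholder: install Nomad from the official repository here
      "echo 'install nomad'",
    ]
  }
}
```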


Hi, thanks for the answers guys, I’ll take a look at the linked posts.
From what I understand, running Nomad on machines like EC2 Nano or Micro is not a very common use case; unfortunately, most cluster setups focus on machines with multiple cores and plenty of RAM. It would be interesting to also cover smaller use cases, since sometimes we just need to keep usage light while ensuring redundancy and easy management of a container cluster.

But it may all come down to testing: seeing how much RAM and CPU Nomad consumes and what is left over for other services on a smaller instance. Do you know whether, in terms of complexity, Nomad is simpler to deal with than Kubernetes, including managed Kubernetes?

For example, for someone using 1 or 2 instances with 1 CPU and 1GB, it would be great to use 3-6 instances with 1 CPU and 512MB instead, perhaps leaving the database on a 1GB instance or in some service such as RDS and the like.

That way, the sum of the resources of the mini instances is greater than a single instance, while still guaranteeing redundancy and performance without a very high increase in the budget.

Then again, maybe these use cases don’t really call for a cluster at all; perhaps I’m trying to solve a problem whose time hasn’t come yet, or there just isn’t a technology focused on smaller clusters because it’s a less common use case.

I’m not aware of anyone using these instance sizes, but I know that Nomad at least starts, because I used them for smoke-test clusters :sweat_smile:

I think that, in the long run, they may be too small for the Nomad servers at least, especially if your cluster starts to grow (both in terms of workload and clients).

So maybe start with a larger server and these small clients and see how it goes? If your metrics look good after some time you can try downscaling the servers as well.
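On the client side, reserving a slice of each small node can help avoid overcommitting it; something like this in the client config (the server address and the numbers are just illustrative):

```hcl
# Client-only agent on a small instance
client {
  enabled = true
  servers = ["10.0.0.10:4647"] # hypothetical address of the larger server

  # Keep headroom so the OS and Nomad itself are not starved by allocations
  reserved {
    cpu    = 200 # MHz
    memory = 128 # MB
  }
}

server {
  enabled = false
}
```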

Complexity is a hard thing to define. In terms of operational complexity, I would say a managed service is definitely the easiest path. Nomad doesn’t have any managed service offering available, so if you’re looking for the smallest possible operational overhead, a managed Kubernetes service is probably the way to go.

But with managed services you trade complexity for control, so you will be restricted to the features, maintenance schedule, pricing, etc. offered by the service. It may suit your needs, but maybe you need something more customized to your case.

In this scenario you would need to self-host your infrastructure, in which case I personally find Nomad to be easier to manage.

Yeah, things start to get hard with these low-budget constraints so you may need to start thinking about trade-offs.

For example, maybe you can live with a single, more robust server? You may lose availability and need to be careful with backups so as not to lose cluster state, but in some scenarios this could be fine (though I must add that we don’t officially support single-server setups and recommend at least 3 :sweat_smile: )
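For completeness, a single-server agent config is just this (again, not an officially supported topology):

```hcl
server {
  enabled          = true
  bootstrap_expect = 1 # elects itself immediately; no quorum with other servers
}
```

With a lone server, periodic `nomad operator snapshot save backup.snap` runs become essential, since losing that node would otherwise mean losing all cluster state.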


Yes, it’s a good idea to run some tests with larger and smaller servers and see how each handles the load.

I like the control, but a possible option is managed Kubernetes with Linode, for example, because they only charge for the infrastructure.

Yes, today we have no redundancy across instances; just making container management easier would already be an advantage, and nothing would prevent scaling out with more machines later.

Thanks for the tips and opinions; you resolved most of my doubts. If I have time I’ll test Nomad a little and report back here.


Hi, just to leave feedback.

I played a little more with containers, using Docker Swarm, Nomad, and a very little Kubernetes.

I used Vagrant with a Debian 11 image + PyInfra (Ansible in Python :slight_smile: lol) to set up my initial environment, gave it 1GB, and based it on the box from https://learn.hashicorp.com/tutorials/nomad/get-started-install?in=nomad/get-started, mainly the Nomad install script.

Consumption was below 300MB (maybe it could be less without any job); I was testing it with Redis and CPU usage was very low. Some quick notes about Nomad:
I found the idea of nomad plan and seeing the changes very cool, rollback is very easy, the UI is very interesting for monitoring and seeing resource consumption as a whole, and the CLI was easy and very simple to play with, with very interesting tips right after nomad run.
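For context, the little Redis job I was playing with was along these lines (the datacenter name and resource numbers are just what I happened to use):

```hcl
job "redis" {
  datacenters = ["dc1"]

  group "cache" {
    task "redis" {
      driver = "docker"

      config {
        image = "redis:7-alpine"
      }

      resources {
        cpu    = 100 # MHz
        memory = 64  # MB
      }
    }
  }
}
```

Then `nomad plan redis.nomad` shows the diff, and `nomad job run redis.nomad` applies it.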

Docker Swarm is also very light, since it is native to Docker; I’m not sure how many MB it consumes, but it must be less than 100MB.

I really liked Portainer.io: excellent for seeing all Swarm resources, removing containers, volumes, etc. I would even recommend the Nomad team check it out for features to add to the Nomad UI. For monitoring and seeing consumption, Nomad’s UI is much superior; in terms of management, I found Portainer more complete, because that’s its main focus.

The Portainer agent consumed about 17-25MB, and around the same for the GUI; running it again it used a bit more, but overall we can say Portainer stays between 50-100MB, which is very good for what it offers. It would be nice to have something like that for Nomad; it would make it much easier to use. It’s a pity that Portainer only supports Docker Swarm and Kubernetes. Nomad sometimes seems kind of forgotten, because unfortunately many focus too much on Kubernetes, when in fact it’s a heavy beast that should mostly be used for the large clusters of big companies; Nomad should be the most used and adopted :slight_smile: .

Finally, I wanted to check out Kubernetes. I get lost just reading its documentation, but it is the most used and has a rich ecosystem around it, even with attempts to reduce its complexity.

I installed Kubernetes on Digital Ocean with the minimum machine, 10 dollars with 2GB and 1 CPU. Without adding any extra Pod or container, it consumed an absurd 660MB just minutes after provisioning, with the instance showing 6GB of storage in use, CPU consumption at 3-4%, and 7 Pods running natively.

I thought the consumption was very high, about 33% of 2GB. On a 1GB machine there would barely be any RAM left; maybe that’s why Digital Ocean won’t let you choose a 1GB node.

I thought the Dashboard was very bad for monitoring; I couldn’t even see resource consumption. It seems more focused on configuration and some details, which already showed its complexity. Digital Ocean’s management layer has nothing special either: it basically only shows consumption and nothing else. I expected more from a managed service; apparently they mainly just make upgrades and other details easier.

One detail I noticed: managed Kubernetes offerings usually don’t run the latest stable versions; they seem to be a few months behind. I imagine it’s to validate and test well, but it’s annoying for those who want the latest features.

In the end, I found Docker Swarm the simplest and most straightforward, then Nomad; Kubernetes, from what I tried, seems the most complex and the one that most readily consumes RAM and CPU.

Something I loved about Docker Swarm is docker stack: it was very easy to install several stacks via docker-compose files. I also liked the idea of stack → services → tasks (containers), very simple to understand.

For mass adoption, I think Nomad would need something like Portainer.io, or more management features in its native UI, and it would also be great if it had something like docker stack that uses docker-compose files. Compose could even be considered an industry standard: it is super easy to pick an image and declare the services, volumes, and other settings, and since many people already use Docker, it makes the next step into orchestration much easier.
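That said, a Nomad job group already gets you part of the way to a compose-style stack, since tasks in the same group are scheduled together; a rough analogue of a small nginx + Redis compose file might be (all names and values hypothetical):

```hcl
job "web" {
  datacenters = ["dc1"]

  group "stack" {
    network {
      port "http" {
        static = 8080
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:alpine"
        ports = ["http"]
      }

      resources {
        cpu    = 100
        memory = 64
      }
    }

    task "redis" {
      driver = "docker"

      config {
        image = "redis:7-alpine"
      }

      resources {
        cpu    = 100
        memory = 64
      }
    }
  }
}
```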

Something like Kompose exists for Kubernetes, but I don’t think Kubernetes is worth it for small companies; even the managed option still involves a lot of work from what I’ve seen. Nomad or Docker Swarm seem easier, and Nomad has a huge advantage over all of them: it is not just for containers, and it also has an excellent task scheduler from what I saw.

I wrote a lot; I hope it helps other colleagues who have the same doubts, and gives the HashiCorp team some tips after a short testing period. Keep in mind I only ran things quickly, for a few minutes, playing with the Nomad getting-started guide, without a deep analysis involving multiple containers and instances; I only tested a single virtual machine using Vagrant with VirtualBox.