Nomad 1.1.0 released

The Nomad team is excited to announce Nomad 1.1.0!

Major new features in Nomad 1.1.0 include:

  • Memory oversubscription: Improve cluster efficiency by allowing applications, whether containerized or non-containerized, to use memory in excess of their scheduled amount.
  • Reserved CPU cores: Improve the performance of your applications by ensuring tasks have exclusive use of client CPUs.
  • UI improvements: Enjoy a streamlined operator experience with fuzzy search, resource monitoring, and authentication improvements.
  • CSI enhancements: Run stateful applications with improved volume management and support for Container Storage Interface (CSI) plugins such as Ceph.
  • Readiness checks: Differentiate between application liveness and readiness with new options for task health checks.
  • Remote task drivers (technical preview): Use Nomad to manage your workloads on more platforms, such as AWS Lambda or Amazon ECS.
  • Consul namespace support (Enterprise): Run Nomad-defined services in their HashiCorp Consul namespaces more easily using Nomad Enterprise.
  • License autoloading (Enterprise): Automatically load Nomad licenses when a Nomad server agent starts using Nomad Enterprise.
  • Autoscaling improvements: Scale your applications more precisely with new strategies.
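
For readers curious what some of these features look like in a jobspec, here is a hedged sketch of the memory oversubscription, reserved cores, and readiness check options. Parameter names follow the release notes, but please verify the exact syntax against the 1.1.0 documentation; the task and service names below are made up for illustration:

```hcl
task "web" {
  driver = "docker"

  resources {
    cores      = 2     # reserved CPU cores: exclusive use of 2 client cores
    memory     = 256   # scheduled (reserved) memory, in MB
    memory_max = 512   # oversubscription ceiling; requires memory
                       # oversubscription to be enabled in the server's
                       # scheduler configuration
  }
}

service {
  name = "web"
  port = "http"

  check {
    type      = "http"
    path      = "/health"
    interval  = "10s"
    timeout   = "2s"
    on_update = "ignore"  # treat this as a readiness-style check rather
                          # than a liveness check during deployments
  }
}
```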

Changes since 1.1.0-rc1:

  • drivers/exec+java: Reduce the set of Linux capabilities enabled by default [GH-10600]

There are a lot of other improvements and fixes in Nomad 1.1.0. Please see the announcement blog or changelog for details.

Along with Nomad 1.1.0, we’re also releasing Nomad 1.0.6 with a long list of backported bug fixes. Please see the 1.0.6 changelog for more details.


The Nomad Team

1.1.0 Binaries -

1.1.0 Changelog -

1.0.6 Binaries -

1.0.6 Changelog -


The only appropriate reaction to a new Nomad release …
“Booyah!!!” :celebration:


Thank you for this release; loving Nomad!

I’m trying to figure out whether there is any difference between the new 1.1.0 parameter “memory_max” under the resources stanza (with the required server-side scheduler option enabled) and the existing Docker-specific option “memory_hard_limit” under the task’s config stanza. Both seem to do the same thing for me.
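
For reference, a hedged sketch of the two configurations being compared (task names and image are placeholders; check the Nomad docs for the authoritative behavior of each option):

```hcl
# Option 1: memory_max in the resources stanza (new in 1.1.0).
# Driver-agnostic; the scheduler is aware of the oversubscription and
# requires it to be enabled in the server's scheduler configuration.
task "app" {
  driver = "docker"

  resources {
    memory     = 256   # amount the scheduler reserves for the task
    memory_max = 512   # hard cap the task may burst up to
  }
}

# Option 2: memory_hard_limit in the Docker driver's config stanza.
# Docker-only; the scheduler places the task based on resources.memory
# and does not account for the higher hard limit.
task "app" {
  driver = "docker"

  config {
    image             = "example/app:latest"
    memory_hard_limit = 512  # hard limit; resources.memory becomes
                             # the soft reservation
  }

  resources {
    memory = 256
  }
}
```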