Job with 1 task, how to run no more than 2 instances per client node?

It’s a similar concept to distinct_hosts, but instead of constraining Nomad to a single task instance per client node, it would allow up to N task instances per client node.

For example, our performance testing cluster has 3 clients. I want to run 6 instances total and have each client node run 2 instances. Our performance cluster includes Consul and Fabio.

Here’s a test job file using http-echo:

job "http-echo-hostnametest" {
  datacenters = ["dc1"]

  constraint {
    attribute = "${attr.kernel.name}"
    value     = "linux"
  }

  group "echo" {
    count = 6
    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/http-echo:latest"
        args  = [
          "-listen", ":${NOMAD_PORT_http}",
          "-text", "Welcome to ${NOMAD_ADDR_http}"
        ]
      }

      resources {
        memory = 300
        network {
          mbits = 10
          port "http" {}
        }
      }

      service {
        name = "http-echo-hostnametest"
        port = "http"

        tags = [
          "hostnametest",
          "urlprefix-/test"
        ]

        check {
          type     = "http"
          path     = "/health"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}

Hi @wclark.accela,

I wonder if the spread job specification configuration block would resolve your situation? While it doesn’t allow you to specify exactly how many allocations are placed on each host, it would allow you to control how they are spread, which should end up with 2 allocations per client in a stable cluster.
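For example, a spread block over each node’s unique ID would push the scheduler to balance allocations evenly across clients. This is just a sketch; the attribute and weight are assumptions you’d tune to your cluster:

job "http-echo-hostnametest" {
  datacenters = ["dc1"]

  # Prefer an even spread of allocations across client nodes.
  spread {
    attribute = "${node.unique.id}"
    weight    = 100
  }

  # ... rest of the job as before ...
}

With 3 eligible clients and count = 6, a stable cluster should end up with 2 allocations per node, though spread is a scoring preference rather than a hard limit.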

Thanks,
jrasell and the Nomad team

Thanks for the response @jrasell .

The actual application is memory constrained. The other aspect I’d like to control is memory allocation and that’s where the primary problem lies.

Using the example above, specifying a count of 6 without distinct_hosts seems to cause Nomad to limit the total memory of all six instances to no more than the maximum allowed for a single node. Will spread eliminate this constraint?

If you have memory oversubscription enabled, you can use the memory_max parameter alongside spread. Maybe this blurb can help: Memory Oversubscription
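As a sketch (assuming memory oversubscription has already been enabled in the scheduler configuration, e.g. via nomad operator scheduler set-config -memory-oversubscription=true), the resources block could reserve a baseline amount while allowing bursts up to a higher limit; the 500 MB value here is a hypothetical example:

      resources {
        # Amount the scheduler reserves when placing the allocation.
        memory = 300

        # Hypothetical burst ceiling: the task may use up to this much
        # if the client node has spare memory available.
        memory_max = 500
      }

The scheduler only counts the memory value against node capacity, so six instances reserving 300 MB each can land two per node even though they may burst above that at runtime.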