Device IDs constraint not working properly


I have a problem running a job that uses a `constraint` block for a GPU device.
Nomad server version: 1.5.6
Nomad client version: 1.4.3
This is the job I am running:

```hcl
job "test-2070-2" {
  datacenters = ["dc1"]

  group "test-2070-2" {

    restart {
      # ...
    }

    task "test-2070-2" {
      driver = "podman"

      config {
        image = "image_with_gpu"
      }

      resources {
        cpu    = 2650
        memory = 8192

        device "nvidia/gpu" {
          count = 2

          constraint {
            attribute = "${device.model}"
            value     = "NVIDIA GeForce RTX 2070 SUPER"
          }

          constraint {
            attribute = "${device.ids}"
            operator  = "set_contains"
            value     = "GPU-9b5df054-6f08-f35c-9c4c-5709b19efea5,GPU-1846fc5f-8c71-bfab-00e1-9c190dd88ed7"
          }
        }
      }
    }
  }
}
```

When I run `nvidia-smi -L` inside the container, I get different UUIDs:

```
[root@481a2da8e0a9 /]# nvidia-smi -L
GPU 0: NVIDIA GeForce RTX 2070 SUPER (UUID: GPU-d7574813-0b3f-ee8f-39fc-2b48f9dff169)
GPU 1: NVIDIA GeForce RTX 2070 SUPER (UUID: GPU-9b5df054-6f08-f35c-9c4c-5709b19efea5)
```

Do I need to upgrade the client in order to have this job work?

Hi @ruspaul013 :wave:

Yes, you need to upgrade your Nomad client. Fingerprinting host information (detecting its configuration, installed devices, etc.) is done by the Nomad clients, and GPU ID fingerprinting was added in Nomad v1.4.5.

Running your servers on 1.5.6 allows the use of the `${device.ids}` attribute in your job, but the client doesn't report this value because it's running a version prior to that change.
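Once the client is upgraded, you can confirm what IDs it actually fingerprinted (e.g. via `nomad node status -verbose <node-id>` or the node API) and compare them against the IDs in your constraint. A minimal sketch of that comparison, using sample JSON whose field names are an assumption here (check the node API output on your Nomad version):

```python
import json

# Assumed shape of the device section from the node API / `nomad node status -json`;
# the field names below are illustrative, not guaranteed for every Nomad version.
node = json.loads("""
{
  "NodeResources": {
    "Devices": [
      {
        "Type": "gpu",
        "Vendor": "nvidia",
        "Name": "NVIDIA GeForce RTX 2070 SUPER",
        "Instances": [
          {"ID": "GPU-d7574813-0b3f-ee8f-39fc-2b48f9dff169"},
          {"ID": "GPU-9b5df054-6f08-f35c-9c4c-5709b19efea5"}
        ]
      }
    ]
  }
}
""")

# The IDs requested in the job's set_contains constraint.
wanted = {
    "GPU-9b5df054-6f08-f35c-9c4c-5709b19efea5",
    "GPU-1846fc5f-8c71-bfab-00e1-9c190dd88ed7",
}

# Collect every device instance ID the client fingerprinted.
fingerprinted = {
    inst["ID"]
    for dev in node["NodeResources"]["Devices"]
    for inst in dev["Instances"]
}

# Any requested ID not fingerprinted by the client can never be placed.
missing = wanted - fingerprinted
print(sorted(missing))  # -> ['GPU-1846fc5f-8c71-bfab-00e1-9c190dd88ed7']
```

If `missing` is non-empty, the scheduler has no node advertising those IDs, which would explain placements on other GPUs.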

Hello @lgfa29

Thank you so much for the suggestion. I did what you suggested, but with no results. Both server and client are now on the same version (1.5.6), and every time I run a job with UUIDs specified as a constraint, I still get random GPUs.