How to share qcow artifact between allocs?

Hello,
I am looking for some help to share a .qcow2 image downloaded as artifact between multiple allocs.
My use case:

  • Build the image with Packer and upload it to Artifactory
  • Download the image to a shared directory on some Nomad clients
  • Start multiple jobs, each running a single QEMU image as a snapshot

What worked so far:

  • Add a plugin stanza to the client configuration:
plugin "qemu" {
  config {
    image_paths = ["/srv/nomad/images"]
  }
}
  • Download the image manually to this folder
  • Use the image
...
image_path = "/srv/nomad/images/packer-focal2004"
...

But if I download the image via the artifact stanza from Artifactory, it is stored in the data_dir/alloc/alloc-id/srv/nomad/images folder:

      artifact {
        source = "https://artifactory/path/to/image"
        destination = "/srv/nomad/images/packer-focal2004"
      }

What options do I have to download and share between allocs?

Hello, any suggestions here?

Hi @schlumpfit :wave:

I don’t have a lot of experience with QEMU, so I may be misunderstanding what you are trying to do.

In general, Nomad allocations have their own isolated file system, since different allocations are not guaranteed to run on the same host.

The artifact block is also restricted to placing files only within the allocation file system, to avoid security issues where a job could place arbitrary (and potentially malicious) files somewhere on the host file system.

If you want to share data between allocations on the same host, you can use host volumes. If the allocations are on different hosts, you will need to set up some mechanism to share these files, such as NFS. If you are running in a cloud environment, you may also be able to use CSI volumes.

Does this help?

Hi @lgfa29,

I think you understood everything correctly.

I am using a shared drive which is mounted on each node to /srv/nomad/images.
The issue so far was that each client created its own allocation file system. This resulted in the image being downloaded to /srv/nomad/images/clientX/allocs/alloc/image instead of (what I expected) /srv/nomad/images, since I had added

plugin "qemu" {
  config {
    image_paths = ["/srv/nomad/images"]
  }
}

(But this was a false and silly assumption on my side.)

I will follow your advice and dig more into host volumes.

The ideal case for me would be:

  • Use a shared drive for qemu base images.
  • Download the base image in case it is not present.
  • Start qemu from that base, but store the snapshot/overlay image in the alloc file system.
  • Delete the snapshot once the job is done.
  • Keep the base image. (This means the shared folder needs to be cleaned up manually from time to time, which should not be too hard, since in the ideal case any needed images would simply be re-downloaded.)
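The steps above could be sketched as a single job with a prestart task. This is untested and makes several assumptions: the raw_exec driver is enabled on the clients, the QEMU driver accepts an image under the allocation directory, and the base image name is a placeholder:

```hcl
job "vm" {
  group "vm" {
    task "prepare" {
      driver = "raw_exec"

      lifecycle {
        hook = "prestart"
      }

      config {
        command = "/bin/sh"
        args = ["-c", <<-EOF
          # Download the base image only if it is not already on the shared drive.
          BASE=/srv/nomad/images/base.qcow2
          [ -f "$BASE" ] || curl -fsSL -o "$BASE" https://artifactory/path/to/image

          # Create a copy-on-write overlay inside the allocation directory,
          # so it is removed together with the allocation.
          qemu-img create -f qcow2 -b "$BASE" -F qcow2 "${NOMAD_ALLOC_DIR}/overlay.qcow2"
          EOF
        ]
      }
    }

    task "vm" {
      driver = "qemu"

      config {
        image_path = "${NOMAD_ALLOC_DIR}/overlay.qcow2"
        # ...
      }
    }
  }
}
```

The overlay only records writes on top of the read-only base image, so the shared base is never modified and each allocation gets its own disposable disk.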

I think that this is what you need then (again, I’m not very familiar with QEMU :sweat_smile:):

Create a folder on your clients to serve as the shared drive, like /srv/nomad/images as you’ve been using (make sure it exists on all clients).

Then add a host_volume block to your client configuration file:

client {
  # ...
  host_volume "images" {
    path = "/srv/nomad/images"
  }
}

In your job, add the volume and volume_mount:

job "example" {
  # ...
  group "example" {
    # ...
    volume "images" {
      type   = "host"
      source = "images"  # This value must match the volume name in your client config.
    }
    # ...
    task "example" {
      driver = "qemu"

      config {
        image_path = "/srv/nomad/images/..."
        # ...
      }

      volume_mount {
        volume      = "images"  # This value must match the name of your volume in this job.
        destination = "/srv/nomad/images"
      }
    }
  }
}

This is probably the trickiest bit. In theory you could use an artifact block to download the images, but due to the order in which artifacts are downloaded and volumes are mounted, the volume would actually be mounted over the downloaded artifact.

You could maybe write a custom script that runs as a prestart task in your job that checks for the image and downloads it if not available. Not a great solution though :confused:
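A minimal sketch of such a check-and-download script (the directory, file name, and URL arguments are placeholders for your setup):

```shell
#!/usr/bin/env sh
# ensure_image: download a base image only if it is not already present.
# Arguments: <image directory> <image file name> <download URL>
ensure_image() {
  image_path="$1/$2"
  if [ -f "$image_path" ]; then
    echo "image already present: $image_path"
  else
    # -f: fail on server errors, -sS: quiet but show errors, -L: follow redirects
    curl -fsSL -o "$image_path" "$3"
    echo "downloaded: $image_path"
  fi
}

# Example (using the paths from earlier in this thread):
# ensure_image /srv/nomad/images packer-focal2004 https://artifactory/path/to/image
```

Note that two allocations starting at the same time could race on the download; a lock file (e.g. via `flock`) would harden this.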

This is where my lack of knowledge of QEMU trips me. You would need to check where the snapshot/overlays are being stored.

If the snapshot is being stored in the allocation directory, then this happens automatically (not exactly when the job is done, but when the Nomad garbage collector runs).

Maybe this could be another custom script running as a system job? The tricky part would be making sure that no active allocation is using the image before removing it.
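Since the drive is shared across clients, a single periodic batch job might be enough rather than a system job. A rough sketch (untested; the schedule and age threshold are placeholders, and it does not check whether an image is still in use):

```hcl
job "image-cleanup" {
  type = "batch"

  periodic {
    cron = "0 3 * * *"  # every day at 03:00
  }

  group "cleanup" {
    task "prune" {
      driver = "raw_exec"

      config {
        command = "/usr/bin/find"
        # Delete base images that have not been modified in 30 days.
        args = ["/srv/nomad/images", "-type", "f", "-mtime", "+30", "-delete"]
      }
    }
  }
}
```

Deleting a base image that a running VM still has open is generally survivable on Linux (the file stays alive until the last file handle closes), but the next restart of that VM would trigger a re-download.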


I hope these help, if not, feel free to ask more questions :grinning_face_with_smiling_eyes: