Artifact Stanza with Directory Semantics

Hello,

I’m currently trying to figure out how to handle application config files for my Nomad jobs. I’ve been looking at the artifact stanza, but I’m missing one capability: Handling entire directories.
One example would be the Traefik file provider, with one configuration file per site, or my Fluentd configs, with one configuration file per service.

Having one artifact stanza per file seems like overkill when I could instead just download the entire config directory.

Volumes (be they host volumes or CSI) are not fit for purpose, as some of my config files are templates containing e.g. NOMAD_PORT_ variables. Volumes also aren’t available before the task is started and can’t host template files.
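
To make that concrete, here is a minimal sketch of the kind of templated config I mean (the file name and port label are made up): a template stanza that interpolates a port Nomad assigned at runtime, which a pre-populated volume simply cannot do.

  template {
    # NOMAD_PORT_http is the port Nomad assigned to the (hypothetical) "http" port label
    data        = <<EOF
listen_port: {{ env "NOMAD_PORT_http" }}
EOF
    destination = "local/app.yml"
  }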

What I would like to achieve: modify an (app configuration) file on my dev machine, then run the Nomad job, which should somehow pick up the changed config file.
The important point: I’d like to be able to do this without jumping through any hoops, like making a commit and pushing to Git while I’m experimenting with a new service.

I’d also like to avoid having to put my application configs into my orchestrator configuration files. That seems like bad separation of concerns, and a 200-line config file makes a mess of my otherwise short Nomad job config.

What I’ve figured out up to now is that there are two artifact getters which can handle directories, namely the Git and S3 ones.

First, why the Git one isn’t a good fit:

  • I don’t want to have to make a commit for each of my config changes and push them to my Git server
  • My Git server is also a Nomad job

What I’ve currently got is an S3 setup as follows:

  • I’ve got an S3 bucket on my Ceph cluster
  • In that S3 bucket, I’ve got the git repo with my application configs (and Nomad job files)
  • On my Dev machine, I mount that S3 bucket as a file system with S3FS
  • In my Nomad job files, I’m using the artifact stanza with the S3 getter to download the application configs (a rough sketch follows below)
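
A rough sketch of that last artifact stanza; the endpoint, bucket and paths are placeholders, and credentials are omitted:

  artifact {
    # go-getter's S3 getter also works against S3-compatible servers such as
    # Ceph RGW; pointing the source at a key prefix downloads everything under it.
    source      = "s3::https://rgw.example.com/app-configs/fluentd/conf.d"
    destination = "local/conf.d"
  }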

That setup works and does what it’s supposed to - I can change the app config locally and just restart the Nomad job, and it picks up the new config from the S3 bucket without any additional action on my part.

The only problem with that setup: it’s fast enough to be workable, but it’s also slow enough to be annoying. Whenever I run a “git status” on the S3-mounted git clone, it takes several seconds and a hefty number of HTTP requests to finish.

The only other idea I could come up with, which I read in other threads about getting files to Nomad jobs, was the advice to mount e.g. an NFS share on the Nomad host and serve the files with a simple HTTP server.
But that doesn’t work. When trying to download the files with the HTTP getter, the only thing I ever see is the error message “Error downloading: no source URL was returned”. Looking at the go-getter source code, that makes (somewhat?) sense.
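
For the record, this is roughly what I was trying (host and path made up), and it is what produces the error above:

  artifact {
    # The plain HTTP getter only fetches single files; for a directory it expects
    # the server to answer with an X-Terraform-Get header pointing at a real
    # source, and without one it fails with "no source URL was returned".
    source      = "http://nomad-host.example.com:8000/fluentd/conf.d/"
    destination = "local/conf.d"
  }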

Now finally to my two questions:

  1. Is the artifact stanza supposed to be able to download entire directories with the HTTP getter?
  2. How are other people feeding multiple application config files to their Nomad jobs?

PS: For those with déjà vu: I had already posted something similar here, but it was during the Christmas holidays, so I’m hoping more people might see it now. 🙂

I will also repost in this thread just in case.

Hi @mmeier86

There is a fileset() function to supplement the file() function that will allow you to create directories.

.
├── config.d
│   ├── config.yml
│   ├── logger.yml
│   ├── macros.yml
│   ├── replicated.yml
│   └── zookeeper.yml
├── job.nomad.hcl
└── users.d
    ├── clickhouse-persister.yml
    └── default.yml

2 directories, 8 files

And in job.nomad.hcl:
      dynamic "template" {
        for_each = fileset(".", "{config,users}.d/*.yml")

        content {
          data        = file(template.value)
          destination = "local/${template.value}"
          change_mode = "noop"
        }
      }
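
For illustration, the loop above effectively expands to one template stanza per matched YAML file, roughly as if you had written, for each of the seven files:

      template {
        data        = file("config.d/config.yml")
        destination = "local/config.d/config.yml"
        change_mode = "noop"
      }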

I hope that helps!

Thank you, @nahsi! This looks like exactly what I need. No idea how I managed to overlook the fileset function while looking at file.

If I see it correctly, this would also restart the job each time one of the config files changes, because file() is still used, and that copies the contents into the Nomad job file?
Because that’s another problem I’ve had with my current artifact stanza/S3 setup: when I change a config file, Nomad doesn’t see any reason to restart the task, because it doesn’t re-download the artifacts from S3. I’ve been working around that with nomad alloc stop.

Yes, it will. Depending on the update stanza, it can also roll out one alloc at a time, waiting until each is healthy, and revert when it is not.
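
For reference, a sketch of an update stanza that behaves that way (the values are only examples):

  update {
    max_parallel = 1        # roll out one allocation at a time
    health_check = "checks"
    auto_revert  = true     # go back to the last good version if the new one never becomes healthy
  }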

One downside of this is ugly nomad plan output.
I’ve added just one new line to the config file, and the resulting plan diff is far larger than that single change.

The best experience I had with config files was with Consul K/V and kv function.

For example, the input, as a Terraform consul_keys resource (the resource name in the snippet is just an example):

  dynamic "key" {
    for_each = fileset("${path.module}", "configs/**.yml")
    content {
      path   = "configs/home-assistant/${trimsuffix(key.value, ".yml")}"
      value  = file("${key.value}")
      delete = true
    }
  }
}

And for the output in the job:

[[ range safeLs "configs/home-assistant/configs" ]]
[[ .Key ]]:
[[ .Value | indent 2 ]]
[[ end -]]
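
A rough sketch of how that snippet could sit inside a Nomad template stanza (destination and change_mode are just examples); note that [[ and ]] are not the default delimiters, so they have to be declared:

  template {
    left_delimiter  = "[["
    right_delimiter = "]]"
    destination     = "local/configuration.yml"
    change_mode     = "restart"
    data            = <<EOF
[[ range safeLs "configs/home-assistant/configs" ]]
[[ .Key ]]:
[[ .Value | indent 2 ]]
[[ end -]]
EOF
  }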

This way you can manage the config separately with Terraform, and when the config is updated in Consul, the applications can be restarted or reloaded. There is no need to change the job definition.

Unfortunately right now I see no way of dealing with directories.

I’ve created an issue with a proposal to implement a writeToFile function that should help with directories.