Passing local config file to task

I’m trying to create a simple redis deployment where I pass a local redis.conf file to the task, but the file never makes its way over to Nomad.

job "example" {
  region = "global"
  datacenters = ["dc1"]
  type = "service"
  group "services" {
    count = 1
    network {
      mode = "host"
      port "redis" {
        static = 6379
      }
    }
    task "redis" {
      driver = "docker"

      artifact {
        source      = "./redis/redis.conf"
      }

      volume_mount {
        volume      = "redis-data"
        destination = "/data"
        read_only   = false
      }

      config {
        image      = "redis:6.2.1-alpine3.13"
        ports      = ["redis"]
        force_pull = false
        args = [
          "redis-server",
          "/redis.conf",
        ]
        mounts = [
          {
            type   = "bind"
            source = "local/redis.conf"
            target = "/redis.conf"
          },
        ]
      }
    }
  }
}

I get an error message of failed to download artifact "./redis/redis.conf": relative paths require a module with a pwd, but I don’t know what that actually means; the docs are pretty vague.

Any help would be awesome.

  • Mike D.

I’m open to other ways of getting the redis.conf over to the task, but this seemed the most straightforward.

Hey Mike,

The artifact stanza is not parsed until the job starts on the client, so paths in the source attribute are resolved relative to the client (specifically, the task’s working directory in the allocation’s filesystem), not relative to the machine of the user submitting the job.
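In other words, artifact is meant for pulling files from a location the client can reach. If the file were hosted somewhere, an artifact stanza along these lines would work (the URL here is only a placeholder):

artifact {
  source      = "https://example.com/config/redis.conf"
  destination = "local/"
}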

If you do not want to use a hosted location, another option is to carry the configuration in the job file itself. For small files and simple configurations, this is fine. Keep in mind that the entire job specification is stored in server memory until the job is eligible for garbage collection, so this matters for things like batch jobs, or for jobs that run in large numbers, where many copies can accumulate while waiting to become eligible for garbage collection.

You can use the template stanza to write one or more files into the allocation as it starts. For a very static job, you might even insert the configuration directly into the job using the data attribute and heredoc syntax, as sketched just below. However, if you want to keep the content separate from the job while still using a local file, you can combine the template stanza, an HCL2 input variable, and the HCL2 file function.
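For the inline approach, a heredoc version might look something like this (the settings are purely illustrative, not a recommended Redis configuration):

template {
  destination = "local/redis.conf"
  data        = <<EOF
# illustrative settings only
maxmemory 256mb
appendonly yes
EOF
}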

HCL2 functions run before the job is submitted to the cluster, so the file function reads the named file on your machine and embeds its contents as the value of the template’s data attribute. You can supply the path as a constant or use an HCL2 input variable so that it can be set at submission time.

To use the file function, your job would look something like this:

variable "redis_config_file" {
  type = string
  description = "local path to the redis configuration to inject into the job."
}

job "example" {
  region = "global"
  datacenters = ["dc1"]
  type = "service"
  group "services" {
    count = 1
    network {
      mode = "host"
      port "redis" {
        static = 6379
      }
    }
    task "redis" {
      driver = "docker"

      template {
        destination = "local/redis.conf"
        data = file(var.redis_config_file)
      }

      volume_mount {
        volume      = "redis-data"
        destination = "/data"
        read_only   = false
      }

      config {
        image      = "redis:6.2.1-alpine3.13"
        ports      = ["redis"]
        force_pull = false
        args = [
          "redis-server",
          "/redis.conf",
        ]
        mounts = [
          {
            type   = "bind"
            source = "local/redis.conf"
            target = "/redis.conf"
          },
        ]
      }
    }
  }
}

Embedding file content in the job specification should be done with care, because the job specification is stored in server memory for the lifetime of the job, until it is garbage collected.

You can put the variable stanza anywhere in the job file. I tend to put them at the top of the file, but that is a stylistic preference.

To run the above job, you would need to provide the path to the local redis.conf file either with a flag (-var redis_config_file=./redis/redis.conf) or with an environment variable (NOMAD_VAR_redis_config_file=./redis/redis.conf).

Here is a more complete example of the command using flags.

$ nomad job run -var redis_config_file=./redis/redis.conf example.nomad
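The equivalent command using the environment variable would be:

$ NOMAD_VAR_redis_config_file=./redis/redis.conf nomad job run example.nomad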

Hope this helps get you going!

Best,
-cv


Charlie Voiselle
Product Education Engineer, Nomad

@angrycub Thanks for the reply; it was very helpful.

I thought I understood where I went wrong, so I tried to implement your suggestion using a variable, but when the job is processed, I get the following error:

 failed to create container: API error (400): invalid mount config for type "bind": bind source path does not exist: /tmp/nomad/alloc/a16f8ee8-d4bf-9248-9626-c3ec5bcfa327/redis/local/redis.conf 

The full version of the file redis.nomad is as follows:

variable "redis_config_file" {
  type = string
  description = "local path to the redis configuration to inject into the job."
  default = "./redis/redis.conf"
}

job "example" {
  region = "global"
  datacenters = ["dc1"]
  type = "service"
  group "services" {
    count = 1

    network {
      mode = "host"
      port "redis" {
        static = 6379
      }
    }

    volume "redis-data" {
      type      = "host"
      source    = "rcs-redis-data"
      read_only = false
    }

    task "redis" {
      driver = "docker"

      template {
        destination = "local/redis.conf"
        data = file(var.redis_config_file)
      }

      volume_mount {
        volume      = "redis-data"
        destination = "/data"
        read_only   = false
      }

      config {
        image      = "redis:6.2.1-alpine3.13"
        ports      = ["redis"]
        force_pull = false
        args = [
          "redis-server",
          "/redis.conf",
        ]
        mounts = [
          {
            type   = "bind"
            source = "local/redis.conf"
            target = "/redis.conf"
          },
        ]
      }
    }
  }
}

Could I be missing something with how I started the Nomad agent on my laptop (macOS Big Sur)?

I appreciate your continued help with this.

Sorry it’s taken me so long to get back to you; I lost the email in my Drafts folder. Verify your Docker settings to make sure that you are sharing your /tmp folder, and any folder that backs your host volumes, with the Docker VM. If those paths are not shared, the Docker driver tries to mount paths that do not exist inside the VM and fails, which is what the "bind source path does not exist" error indicates.

One other thing I had to do was make sure that the directory backing the host volume on my macOS machine was owned by me, not root, so that the redis chown worked properly.
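For reference, a host volume like rcs-redis-data also has to be declared in the client stanza of your agent configuration. A minimal sketch, assuming a data directory under your home folder (the path here is only an example), would be:

client {
  enabled = true

  host_volume "rcs-redis-data" {
    # example path only; it must exist, be owned by your user,
    # and be shared with the Docker VM
    path      = "/Users/mike/nomad-volumes/redis-data"
    read_only = false
  }
}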

Let me know how this goes!

Best,
Charlie
