How do I pass an input variable to a Nomad template?

Hi all,

I am working on a Nomad job created via the Nomad Terraform provider, because I want to reference other Terraformed resources.

My problem seems similar to

In this case, the suggestion was to store the data in Consul and read it later. If this is the only way to do what I want, so be it, although it seems somewhat inelegant.

Job details

For context, this is a Nomad job to deploy Grafana Loki, using a DigitalOcean Space as S3-compatible storage.

I have three files:

  • main.tf has all of the terraform resources
  • loki.nomad is a Nomad job definition
  • loki.yml.tpl is a Nomad template which produces the Loki configuration file, and is passed as input to a template block in the Job’s task stanza

The problem I’m having is passing data from Terraform, through the Nomad job definition, into the Nomad template:

Template failed: (dynamic): parse: template: :23: function "var" not defined
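As far as I can tell there are two template layers at work here: HCL2 var.* references are resolved when the jobspec is parsed (by the Nomad Terraform provider, in my case), while the {{ ... }} directives are rendered later by consul-template on the client, which only knows functions like key and env. Annotating the template block from the task stanza further down (just a sketch of my understanding):

template {
  # file() reads loki.yml.tpl verbatim; HCL2 does not interpolate anything
  # inside the file contents.
  data        = file("loki.yml.tpl")
  destination = "local/loki.yml"
  change_mode = "restart"
  # The {{ ... }} directives in loki.yml.tpl are only rendered at runtime by
  # consul-template, which has no "var" function -- hence the error above.
}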

The yml template is as follows:

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
memberlist:
  join_members:
    - loki-http-server
schema_config:
  configs:
    - from: 2022-01-01
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h
common:
  path_prefix: local/
  replication_factor: 1
  storage:
    s3:
      endpoint:  {{ var.s3_endpoint }}
      bucketnames: {{ var.logs_bucket }}
      access_key_id: {{ var.access_key }}
      secret_access_key: {{ var.secret_key }}
      s3forcepathstyle: true
  ring:
    kvstore:
      store: consul
ruler:
  storage:
    s3:
      bucketnames: {{ var.logs_bucket }}

The Nomad job which uses it is as follows:

task "server" {
      driver = "exec"
      config {
        command = "loki"
        args = [
          "-config.file=local/loki.yml"
        ]
      }
      resources {
        cpu = 128
        memory = 200
      }
      template {
        data = file("loki.yml.tpl")
        destination = "local/loki.yml"
        change_mode = "restart"
      }
      artifact {
        source = "https://github.com/grafana/loki/releases/download/v2.6.0/loki-linux-arm64.zip"
        options { # checksum depends on the cpu arch
        }
        destination = "local/loki"
        mode = "file"
      }
    }

while the Nomad job resource in Terraform is:

resource "nomad_job" "loki" {
  jobspec    = file("${path.module}/loki.nomad")
  depends_on = [digitalocean_spaces_bucket.logs]
  hcl2 {
    enabled  = true
    allow_fs = true
    vars = {
      "logs_bucket" = digitalocean_spaces_bucket.logs.name,
      "s3_endpoint" = "https://${digitalocean_spaces_bucket.logs.region}.digitaloceanspaces.com",
      "access_key"  = jsondecode(data.vault_kv_secret_v2.digitalocean.data_json)["spaces_key"]
      "secret_key"  = jsondecode(data.vault_kv_secret_v2.digitalocean.data_json)["spaces_secret"]
    }
  }
  purge_on_destroy = true
  detach           = false
}
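
For completeness, the vars map above corresponds to input variables declared at the top of loki.nomad, roughly like this (the declarations aren't shown above, so treat this as a sketch):

variable "logs_bucket" {
  type = string
}

variable "s3_endpoint" {
  type = string
}

variable "access_key" {
  type = string
}

variable "secret_key" {
  type = string
}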

While I can read the Vault secrets in the Nomad template, I don’t see how I can get the logs_bucket and s3_endpoint from anything else but the Terraform state.

This is somewhat beside the point though, since even without Terraform, I want to be able to pass Nomad-declared variables to Nomad templates.

So - how do I pass Nomad variables declared in the Nomad job definition to templates in that job definition?

From what the error is telling me, there is no var function in Consul Template – so is the only way to get Nomad variables into a template some mechanism that doesn’t actually use Nomad variables?

Perhaps I’m going about this all wrong, so I would appreciate any advice on the matter.
Cheers!
Bruce


Did you find any solution to this? I am also looking for ways to inject input variables into a Nomad template.


Yes, I ended up doing this via Consul templates. The working example is at

A few points made this more comprehensible to me. The main thing was to have one Terraform statement for the job. At first I thought I could do everything with a .nomad jobspec alone, but it soon became clear that, in my case, the job depended on other resources outside of Nomad’s scope, so writing a Terraform statement was the right approach.

This job (to run Loki log aggregation) needed some dynamic configuration, including:

  • Storage endpoint
  • Storage bucket name
  • Secrets for access to the cloud provider’s API for the log storage

This dynamic information had to be available both in the jobspec and in the template for the job’s configuration. My “error” was thinking I could pass values for these variables through the jobspec, when they should actually be read from a single source, in this case Consul.

First, I created KV entries for the application along with all of the other resources, in the same Terraform statement:

resource "consul_keys" "bucket" {
  datacenter = "dc1"

  key {
    path  = "jobs/loki/logs_bucket"
    value = digitalocean_spaces_bucket.logs.name
  }
}

resource "consul_keys" "endpoint" {
  datacenter = "dc1"

  key {
    path  = "jobs/loki/s3_endpoint"
    value = "https://${digitalocean_spaces_bucket.logs.region}.digitaloceanspaces.com"
  }
}
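
As an aside, the same two keys could also be managed by a single consul_keys resource with two key blocks, which is mostly a matter of taste:

resource "consul_keys" "loki" {
  datacenter = "dc1"

  # Equivalent alternative (sketch): both keys in one resource.
  key {
    path  = "jobs/loki/logs_bucket"
    value = digitalocean_spaces_bucket.logs.name
  }

  key {
    path  = "jobs/loki/s3_endpoint"
    value = "https://${digitalocean_spaces_bucket.logs.region}.digitaloceanspaces.com"
  }
}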

Then I could create the job:

resource "nomad_job" "loki" {
  jobspec    = file("${path.module}/loki.nomad")
  depends_on = [digitalocean_spaces_bucket.logs]
  hcl2 {
    enabled  = true
    allow_fs = true
    vars = {
      "access_key" = jsondecode(data.vault_kv_secret_v2.digitalocean.data_json)["spaces_key"]
      "secret_key" = jsondecode(data.vault_kv_secret_v2.digitalocean.data_json)["spaces_secret"]
    }
  }
  purge_on_destroy = true
  detach           = false
}

The configuration file for the actual service contained references to both Nomad environment variables and Consul keys:

auth_enabled: false

server:
  http_listen_port: {{ env "NOMAD_PORT_http" }}
  grpc_listen_port: {{ env "NOMAD_PORT_grpc" }}
memberlist:
  join_members:
    - loki-http-server
schema_config:
  configs:
    - from: 2022-01-01
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h
common:
  path_prefix: local/
  replication_factor: 1
  storage:
    s3:
      endpoint:  {{ key "jobs/loki/s3_endpoint" }}
      bucketnames: {{ key "jobs/loki/logs_bucket" }}
      access_key_id: {{ env "access_key" }}
      secret_access_key: {{ env "secret_key" }}
      s3forcepathstyle: true
  ring:
    kvstore:
      store: consul
ruler:
  storage:
    s3:
      bucketnames: {{ key "jobs/loki/logs_bucket" }}

Clearly some of these (env "access_key") should be Vault lookups, but I haven’t configured the nodes with Vault access yet.
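
For the lookups above to resolve, the jobspec also needs named ports (which populate NOMAD_PORT_http and NOMAD_PORT_grpc) and an env block that exposes the HCL2 variables to the task, along these lines (a sketch; the group name is a placeholder and this fragment isn’t shown above):

group "loki" {
  network {
    # Named ports back the NOMAD_PORT_http / NOMAD_PORT_grpc lookups.
    port "http" {}
    port "grpc" {}
  }

  task "server" {
    # Make the HCL2 input variables visible to consul-template's env function.
    env {
      access_key = var.access_key
      secret_key = var.secret_key
    }
    # ... driver, config, template and artifact blocks as shown earlier ...
  }
}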

So, I’m not sure if this is a universal approach that will work for everyone, and you might benefit more from using Nomad Pack than I would, but I still hope this helps you solve your problem 🙂
