Forcing a template rerender when the source file changes

I’m working on migrating our production setup from Docker Compose to Nomad. For context: we currently generate our configuration with a custom script that uses Jinja to inject secrets and other parameters into it. We then bind-mount the config directory into the service’s Docker container. If we need to change a service’s configuration at runtime, we rerun this configurator script and send a SIGHUP to the service, which picks up the new config without downtime.

I’ve been trying to adapt this workflow to Nomad with one caveat: at least for now, we want to keep using this configurator script. However, we also want to use consul-template (through Nomad’s template stanza) for some last-mile work on the config file (like injecting ports for Consul Connect).

As a more concrete example, we have a HAProxy instance where:

  • a frontend is protected with HTTP Basic Auth, so we need to use our configuration script to inject the actual password into it
  • a backend routes to the Envoy sidecar (for Consul Connect), so we need to use consul-template to inject the sidecar’s address into the config

This is what the haproxy.cfg.tmpl file looks like when stored in our source control:

userlist admin-interface-list
  group is-admin
  user admin password {{ password.haproxy.admin|sha512_crypt }} groups is-admin
...
backend bk_some_web
  mode http
  server www-1 {{ '{{ env "NOMAD_UPSTREAM_ADDR_nginx" }}' }} check maxconn 256

After we run our configurator (which uses Jinja syntax), the result is a consul-template template:

userlist admin-interface-list
  group is-admin
  user admin password [ACTUAL HASHED PASSWORD] groups is-admin
...
backend bk_some_web
  mode http
  server www-1 {{ env "NOMAD_UPSTREAM_ADDR_nginx" }} check maxconn 256

I’m aware we could be using Vault + Consul to store the password and config options and template them in the same way we’ll inject the Envoy sidecar’s address, but we currently want to keep using our config generation script for multiple reasons (e.g. CI/dev/prod parity).

We’ll distribute this and other generated configuration files to a special directory, $CONFIG_HOME, on all nodes. The task I wrote for HAProxy looks like this:

task "haproxy" {
  driver = "docker"

  config {
    image = "haproxy@sha256:e6f9faf0c2a0cf2d2d5a53307351fa896d90ca9ccd62817c24026460d97dde92"

    mounts = [
      {
        type   = "bind"
        target = "/usr/local/etc/haproxy"
        source = "config/haproxy"
      }
    ]
  }

  template {
    source        = "$CONFIG_HOME/config/haproxy/haproxy.cfg"
    destination   = "config/haproxy/haproxy.cfg"
    change_mode   = "signal"
    change_signal = "SIGHUP"
  }
}

We’ll use envsubst to replace $CONFIG_HOME with an actual path before the job is submitted.

So far, this lets me run HAProxy, but if I change the config file referenced in template.source, the change doesn’t propagate. I dug around and found out that this is intentional. Here are some things I tried to get Nomad to reread the template and rerender the config:

  • consul-template supports receiving SIGHUP to reload and rerender all templates, but I don’t think it’s possible to send a SIGHUP to the embedded consul-template running inside the Nomad agent. I tried sending SIGHUP to the agent itself and that didn’t do anything.
  • resubmitting the job after changing the source file doesn’t work either.
  • I found https://gitter.im/hashicorp-nomad/Lobby?at=5aad4e2bf3f6d24c6886af06 and wrote this: data = "{{ define \"t\" }}{{ file \"$CONFIG_HOME/config/haproxy/haproxy.cfg\" }}{{ end }}{{ executeTemplate \"t\" }}". This makes Nomad pick up changes to the original file, but it doesn’t recursively render the template, i.e. the raw consul-template template gets copied into the allocation dir. I think this is the behavior described in https://github.com/hashicorp/consul-template/issues/1402.

Here are some workarounds I haven’t tried yet, since they might be overkill:

  • Including something like # {{ key "config/sentinel" }} at the top of every configuration file. When we need to reload the configs, we’ll write a random value to Consul at config/sentinel, which should force consul-template to rerender the file.
  • Writing the actual configuration files (after our configurator has run) into Consul, then referencing them as a template. I don’t think this will work (https://github.com/hashicorp/nomad/issues/4470).
  • Inlining the generated configuration files into the job spec (as data = ...). The issues with that are that the files contain secrets, and that I’m not sure whether a change to the data key in a task spec will force the task to be recreated.
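For reference, the sentinel-key idea from the first bullet is only a couple of lines. The key name config/sentinel comes from above; the value scheme (timestamp plus PID) is arbitrary, anything that changes on each run will do:

```shell
# Every templated config file starts with a comment referencing the key:
#   # {{ key "config/sentinel" }}
# After rerunning the configurator, write a fresh value to the key so
# consul-template rerenders (and re-signals) every file that reads it.
SENTINEL="$(date +%s)-$$"
echo "$SENTINEL"
# consul kv put config/sentinel "$SENTINEL"
```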

Basically, is there a way to force Nomad to reread all files referenced by template stanzas and propagate the changes to its integrated consul-template?

Hi @mildbyte! As you’ve noted, there’s no way to get consul-template to detect a change to the source file without reloading it entirely (see https://github.com/hashicorp/consul-template#running-and-process-lifecycle). And we don’t reload the embedded consul-template unless we restart Nomad or modify the job specification.

When I’ve run into this situation in production environments, I’ve done one of the following:

  • Include a sentinel key, as you’ve suggested above.
  • Run consul-template as PID 1 inside my container, then use Nomad’s embedded consul-template to render the source file for the “inner” consul-template running in the container and trigger a SIGHUP. I found this works OK, but it feels kind of janky.
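A sketch of that second approach, assuming an image that bundles both consul-template and HAProxy; the paths and HAProxy's master-worker reload via SIGUSR2 are illustrative of the pattern, not a tested setup:

```shell
# Write a container entrypoint where consul-template runs as PID 1:
# Nomad's embedded consul-template renders the *source* template into
# the alloc dir, and this inner consul-template renders the final
# haproxy.cfg from it and signals HAProxy when it changes.
cat > entrypoint.sh <<'EOF'
#!/bin/sh
exec consul-template \
  -template "/alloc/haproxy.cfg.tmpl:/usr/local/etc/haproxy/haproxy.cfg" \
  -exec "haproxy -W -f /usr/local/etc/haproxy/haproxy.cfg" \
  -exec-reload-signal SIGUSR2
EOF
chmod +x entrypoint.sh
sh -n entrypoint.sh && echo "entrypoint syntax OK"
```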

I think what’s making this challenging is that you have two sources of changes: the configurator that injects the secrets and the Consul service catalog. And it sounds like the configurator wants to run before we run the job, but it creates secrets we have to handle.

“Writing the actual configuration files (after our configurator has run) into Consul, then referencing them as a template.”

The other problem with this is that it would be using Consul as a store for the hashed passwords.

“I’m not sure if a change in the data key in a task spec will force the task to get recreated.”

It will: that changes the jobspec, so the task will get recreated.

What’s not clear in this situation is where the configurator is getting the secrets from in the first place. You might be able to solve this problem by giving the pre-rendered templates to a prestart task that runs the configurator and then drops the results into the allocation directory as a template source for the main task. Something like:

job "example" {
  task "init" {
    driver = "docker" # assumed here; use whatever driver runs your configurator
    lifecycle {
      hook = "prestart"
    }
    config {
      command = "/bin/sh"
      args    = ["-c", "/bin/configurator.py; cp local/haproxy.cfg.tmpl /alloc/haproxy.cfg.tmpl"]
    }
    template {
      # no need for a change_mode here because it's a prestart
      # task and it'll only run once on job start
      destination = "local/haproxy.cfg.tmpl"
      data        = <<-EOT
userlist admin-interface-list
  group is-admin
  user admin password {{ password.haproxy.admin|sha512_crypt }} groups is-admin
...
backend bk_some_web
  mode http
  server www-1 {{ '{{ env "NOMAD_UPSTREAM_ADDR_nginx" }}' }} check maxconn 256
      EOT
    }
  }

  task "haproxy" {
    ...
  }
}

Alternately, given that you’ve got hashed passwords in text files anyways… assuming you have ACLs configured for Consul, you could have your configurator write the hashed password to a Consul key and then the template itself could be inlined in the jobspec:

template {
  destination   = "config/haproxy/haproxy.cfg"
  change_mode   = "signal"
  change_signal = "SIGHUP"

  data = <<-EOT
userlist admin-interface-list
  group is-admin
  user admin password {{ key "secrets/admin" }} groups is-admin
...
backend bk_some_web
  mode http
{{ range service "nginx" }}
  server {{ .Name }} {{ .Address }}:{{ .Port }} check maxconn 256{{ end }}
EOT
}
That would also make it easy for you to replace Consul with Vault as the secrets store down the road. It’s hard for me to recommend this approach outright, because it leaves the hashed secret floating around in Consul, but depending on how the configurator works today, that might not be any worse than what you’re doing right now.
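If you do go that route, the configurator’s side of it is small. A sketch, assuming openssl 1.1.1+ for the SHA-512 crypt hash (the password is a placeholder, and the key name secrets/admin matches the template above):

```shell
# Produce the SHA-512 crypt hash the HAProxy userlist expects, then
# push it to the key the inlined template reads.
HASH=$(openssl passwd -6 'example-admin-password')
echo "$HASH"
# consul kv put secrets/admin "$HASH"
```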