Template Rendering Issues/Questions, Duplicate Data

Hello, I have some questions, and possibly issues, about how Nomad handles rendering templates into the alloc directory. I have a Nomad job that runs a Caddy container with the Docker driver, using Nomad's native service discovery as well as Vault integration. Everything was working for a while, but recently, after editing the data section in the template stanza and re-running the same job, I started running into errors. I will list the relevant files first.

Here is my Caddy job file:

job "caddy" {

  datacenters = ["home"]

  type = "service"

  update {
    max_parallel     = 1
    min_healthy_time = "30s"
    healthy_deadline = "3m"
  }

  group "caddy" {

    count = 1
  
    network {
      mode = "host"

      port "http" {
        to     = 80
        static = 80
      }

      port "https" {
        to     = 443
        static = 443
      }
    }
  
    restart {
      attempts = 3
      delay    = "1m"
    }

    volume "data" {
      type      = "host"
      read_only = false
      source    = "caddy-data"
    }

    volume "config" {
      type      = "host"
      read_only = false
      source    = "caddy-config"
    }

    task "caddy" {

      vault {
        policies = ["nomad-caddy-task"]
      }
    
      template {
        data = <<EOF
{
        ocsp_stapling off
}
*.domain.com {
        header {
                # enable HSTS
                Strict-Transport-Security max-age=31536000

                # clickjacking protection
                X-Frame-Options DENY
        }

        tls email@domain.com {
                        {{ range nomadService "smallstep-ca" }}
                ca https://192.168.1.2:{{ .Port }}/acme/acme/directory
                ca_root /srv/roots.pem
                                {{ end }}
                {{ with secret "secret/data/nomad/tasks/caddy/env" }}
                dns cloudflare {{ .Data.data.cloudflare_dns_token }}
                {{ end }}
                resolvers 1.1.1.1
        }

                {{ range nomadService "baikal" }}
                @baikal host domain.com
                handle @baikal {
                        reverse_proxy {{ .Address }}:{{ .Port }}
                }
                {{ end }}

                {{ range nomadService "gitea" }}
                @gitea host domain.com
                handle @gitea {
                        reverse_proxy {{ .Address }}:{{ .Port }}
                }
            {{ end }}

                {{ range nomadService "firefly" }}
        @firefly host domain.com
        handle @firefly {
                reverse_proxy {{ .Address }}:{{ .Port }}
        }
        {{ end }}


                {{ range nomadService "vaultwarden" }}
        @vaultwarden host domain.com
        handle @vaultwarden {
                @insecureadmin {
                        not remote_ip 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8
                        path /admin*
                }

                redir @insecureadmin /

                reverse_proxy /notifications/hub/negotiate {{ .Address }}:{{ .Port }}
                reverse_proxy /notifications/hub {{ .Address }}:3012
                reverse_proxy {{ .Address }}:{{ .Port }} {
                        header_up X-Real-IP {remote_host}
                }
        }
        {{ end }}

                {{ range nomadService "qbittorrent" }}
        @qbittorrent host domain.com
        handle @qbittorrent {
                reverse_proxy {{ .Address }}:8080
        }
        {{ end }}

                {{ range nomadService "jellyfin" }}
        @jellyfin host domain.com
        handle @jellyfin {
                reverse_proxy {{ .Address }}:{{ .Port }}
        }
        {{ end }}
}
EOF

        destination = "local/Caddyfile"
      }

      service {
        name     = "caddy-https"
        provider = "nomad"
        port     = "https"
      }

      driver = "docker"

      config {
        image = "customregistry/azoller/caddy:2.5.1v4"
        ports = ["http", "https"]
        auth {
          username = "azoller"
          password = "randompassword"
        }
        mount {
          type   = "bind"
          source = "local/Caddyfile"
          target = "/etc/caddy/Caddyfile"
        }
      }

      volume_mount {
        volume      = "data"
        destination = "/data"
        read_only   = false
      }

      volume_mount {
        volume      = "config"
        destination = "/config"
        read_only   = false
      }
                  
      resources {
        cpu    = 300
        memory = 150
      }
    }        
  }
}

The job never reaches healthy status because the container doesn't start properly, due to an incorrect Caddyfile being rendered from the template into the alloc directory. The Caddyfile in the local alloc directory looks like this:

{
        ocsp_stapling off
}
*.domain.com {
        header {
                # enable HSTS
                Strict-Transport-Security max-age=31536000

                # clickjacking protection
                X-Frame-Options DENY
        }

        tls email@domain.com {
        
                ca https://192.168.1.2:9000/acme/acme/directory
                ca_root /srv/roots.pem
 
                ca https://192.168.1.2:9000/acme/acme/directory
                ca_root /srv/roots.pem
 
                ca https://192.168.1.2:9000/acme/acme/directory
                ca_root /srv/roots.pem
 
                
                dns cloudflare TOKEN
                
                resolvers 1.1.1.1
        }


                @baikal host domain.com
                handle @baikal {
                        reverse_proxy 192.168.1.2:22306
                }

                @baikal host domain.com
                handle @baikal {
                        reverse_proxy 192.168.1.2:27418
                }

                @baikal host domain.com
                handle @baikal {
                        reverse_proxy 192.168.1.2:27728
                }

                @baikal host domain.com
                handle @baikal {
                        reverse_proxy 192.168.1.2:27557
                }



                @gitea host domain.com
                handle @gitea {
                        reverse_proxy 192.168.1.2:26446
                }
            
                @gitea host domain.com
                handle @gitea {
                        reverse_proxy 192.168.1.2:29814
                }
            
                @gitea host domain.com
                handle @gitea {
                        reverse_proxy 192.168.1.2:29519
                }
            


        @firefly host domain.com
        handle @firefly {
                reverse_proxy 192.168.1.2:25299
        }
        
        @firefly host domain.com
        handle @firefly {
                reverse_proxy 192.168.1.2:29822
        }
        
        @firefly host domain.com
        handle @firefly {
                reverse_proxy 192.168.1.2:30501
        }
        



        @vaultwarden host domain.com
        handle @vaultwarden {
                @insecureadmin {
                        not remote_ip 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8
                        path /admin*
                }

                redir @insecureadmin /

                reverse_proxy /notifications/hub/negotiate 192.168.1.2:31798
                reverse_proxy /notifications/hub 192.168.1.2:3012
                reverse_proxy 192.168.1.2:31798 {
                        header_up X-Real-IP {remote_host}
                }
        }
        
        @vaultwarden host domain.com
        handle @vaultwarden {
                @insecureadmin {
                        not remote_ip 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8
                        path /admin*
                }

                redir @insecureadmin /

                reverse_proxy /notifications/hub/negotiate 192.168.1.2:25080
                reverse_proxy /notifications/hub 192.168.1.2:3012
                reverse_proxy 192.168.1.2:25080 {
                        header_up X-Real-IP {remote_host}
                }
        }
        
        @vaultwarden host domain.com
        handle @vaultwarden {
                @insecureadmin {
                        not remote_ip 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8
                        path /admin*
                }

                redir @insecureadmin /

                reverse_proxy /notifications/hub/negotiate 192.168.1.2:29828
                reverse_proxy /notifications/hub 192.168.1.2:3012
                reverse_proxy 192.168.1.2:29828 {
                        header_up X-Real-IP {remote_host}
                }
        }
        


        @qbittorrent host domain.com
        handle @qbittorrent {
                reverse_proxy 192.168.1.2:8080
        }
        
        @qbittorrent host domain.com
        handle @qbittorrent {
                reverse_proxy 192.168.1.2:8080
        }
        
        @qbittorrent host domain.com
        handle @qbittorrent {
                reverse_proxy 192.168.1.2:8080
        }
        
        @qbittorrent host domain.com
        handle @qbittorrent {
                reverse_proxy 192.168.1.2:8080
        }
        


        @jellyfin host domain.com
        handle @jellyfin {
                reverse_proxy 192.168.1.2:20709
        }
        
        @jellyfin host domain.com
        handle @jellyfin {
                reverse_proxy 192.168.1.2:31106
        }
        
        @jellyfin host domain.com
        handle @jellyfin {
                reverse_proxy 192.168.1.2:25914
        }
        
        @jellyfin host domain.com
        handle @jellyfin {
                reverse_proxy 192.168.1.2:31272
        }
        

}

Now, the container won't start because I have multiple instances of matchers like @baikal and @gitea, and Caddy won't run if the same handle is defined twice. This appears to be the actual Caddyfile being rendered into the container. So, I guess I'm wondering: how exactly does template rendering work?

If I edit the data section in the template stanza in the job file, does it overwrite the previously rendered template/file, or does it just append to the old one? I am not sure I understand the exact behavior. I would assume it just creates a new file at the destination local/Caddyfile in the new alloc dir, but it doesn't appear to work that way? Perhaps I need to just restart the current alloc instead of running the job again? Also, I realize the Caddyfile will gain a new block for each service when that service's port changes after I update the services, but it doesn't delete the old block for that service/proxy.

Some of my setup:
Nomad Version: 1.3.1
Debian 11
Latest Docker Engine 20.10.17

I'm aware this is an old post, and it's probably more Caddy-related, but I've just been working on something similar and it cropped up while I was googling.

Your range block is going to loop through the full handle definition for each gitea address/port combo you have.

Based on the example and the expected Caddyfile syntax, it should probably be something more like this:

@gitea host domain.com
handle @gitea {
    reverse_proxy {{ range nomadService "gitea" }}{{ .Address }}:{{ .Port }} {{ end }}
}

…so that it only loops through the address/port pairs inside the single reverse_proxy directive, rather than repeating the whole handle block for each service instance.
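
The same idea carries over to the blocks that have more than one reverse_proxy directive. As a rough sketch (assuming you're fine with Caddy listing every registered instance as upstreams on one line and load-balancing across them), the vaultwarden block from your template could become:

@vaultwarden host domain.com
handle @vaultwarden {
    @insecureadmin {
        not remote_ip 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8
        path /admin*
    }

    redir @insecureadmin /

    # each directive lists all registered instances instead of repeating the whole block
    reverse_proxy /notifications/hub/negotiate {{ range nomadService "vaultwarden" }}{{ .Address }}:{{ .Port }} {{ end }}
    reverse_proxy /notifications/hub {{ range nomadService "vaultwarden" }}{{ .Address }}:3012 {{ end }}
    reverse_proxy {{ range nomadService "vaultwarden" }}{{ .Address }}:{{ .Port }} {{ end }}{
        header_up X-Real-IP {remote_host}
    }
}

That way a single allocation renders exactly one handle block, and additional allocations only add more upstream addresses to each directive.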

Hi! Thanks for replying, that makes sense. I have moved on to Traefik, however.

I do have one problem, though, and maybe you can help: I can't seem to get bridge networking mode to work properly at the group level.

I have verified that the nomad virtual interface is being created. I have the CNI plugins available at /opt/cni/bin, with the correct path in the Nomad config file. But whenever I create a task, the port always seems to be allocated/bound to the host IP interface and not the nomad interface…

However, if I force the network interface to be nomad in the client config, then the task binds to the nomad network namespace/interface. So I'm not sure what is going on.
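
For reference, the relevant part of my client config is roughly this sketch (cni_path is just the default plugin location, and network_interface = "nomad" is the workaround I described above):

client {
  enabled = true

  # CNI plugins live here (this is also Nomad's default cni_path)
  cni_path = "/opt/cni/bin"

  # Only when I force the fingerprinted interface like this do tasks
  # bind to the nomad bridge instead of the host interface
  network_interface = "nomad"
}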