Scale group / task → wrong vars

Hello,

I have a job (based on the WordPress example) which contains an Nginx task and an app task. If I use the scale command, like:

nomad job scale backoffice_gunicorn 2

then I have the issue that the upstream server ENV of the newly scaled Nginx does not point to the 2nd app, but to the 1st.

In other words: the 2nd Nginx has the upstream IP and port of the 1st app. I assume the problem is in the template stanza:

template {
  data = <<EOH
{{- if service "backoffice-django" -}}
{{- with index (service "backoffice-django") 0 -}}
API={{ .Address }}:{{ .Port }}
{{- end -}}
{{- end }}
EOH
  destination = "local/envvars.txt"
  env         = true
}

In both Docker containers the content of local/envvars.txt is the same, but I expected the 2nd one to also contain the IP and port of the 2nd app.

That means: Nginx2 → App1

So I want each Nginx to always have the IP:PORT of the app in the same group instance:

  • Nginx1 → App1
  • Nginx2 → App2
  • …

Any suggestions would be great :slight_smile:

Hi @linuxmail,

Could you share the jobspec you are using?

If the NGINX and WordPress applications are running in the same group, you could use interpolation to populate the backend links.
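For example, something like this minimal sketch (assuming the app's port label is "api"; NOMAD_HOST_ADDR_<label> is a Nomad runtime variable that is resolved per allocation):

task "nginx" {
  # Each allocation interpolates the address of its own "api" port,
  # so every Nginx instance points at the app in its own group copy.
  env {
    API = "${NOMAD_HOST_ADDR_api}"
  }
}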

Thanks,
jrasell and the Nomad team

Hi @jrasell

thanks for the reply … I saw the interpolation docs, but I'm unsure how to replace the IP/port with it. I was thinking of exporting the values to a file and re-reading them, but I read that you can't export variables in task1 and read them in task2.

The point is: the app (Django / Gunicorn) and the Nginx always form a pair because of the static files they share; that's why they are in the same group (learned that last week :slight_smile: …).

It works perfectly if you just have one pair running. But when I scale, the value that tells Nginx where to find its “upstream” is not updated. I have an ENV called API that holds the IP and port of the Gunicorn app, like this:

root@fra-test-nomad-02:[~]: docker exec  -it nginx-75596968-bb42-8ebc-2044-13f88db00748 env | grep API
API=172.16.0.103:26867

It gets this info from Consul via the template stanza, which I copied from the WordPress example. So it's clear why this happens: the template asks Consul for all healthy instances of the service, and index … 0 always picks the first one. The WordPress example gets away with that because it only uses a single DB :slight_smile:

But if I scale it n times, every Nginx will always send its requests to the first Gunicorn, which is wrong. I need to set the API variable to the IP/port of the Gunicorn (no idea how to name it in English) with which the Nginx shares the files on ${NOMAD_ALLOC_DIR}, so that I have a pair again.

I hope I was able to explain it :slight_smile:

The complete job:

job "backoffice_gunicorn" {

# For our secrets
   vault {
      policies = ["access-tables"]
   }

# We want it on fra-test
   datacenters = ["fra-test"]

      group "backoffice" {
         network {
            port "api" { to = 8000 }
            port "https" { to = 443 }
         }

    ephemeral_disk {
      migrate = false
      size    = 300
      sticky  = false
    }

         service {
            name = "django"
               tags = ["backoffice","django"]
               port = "api"
               check {
                  type = "tcp"
                     port = "api"
                     interval = "10s"
                     timeout = "2s"
               }
         }

         service {
            name = "nginx"
               tags = ["backoffice","nginx"]
               port = "api"

               check {
                  type = "tcp"
                     port = "https"
                     interval = "10s"
                     timeout = "2s"
               }
         }

         task "django-collectstatic" {
            lifecycle {
               hook = "prestart"
               sidecar = false
            }
            driver = "docker"
               template {
                  data = <<EOH
                  {{key "nomad/backoffice/environment"}}
                  EOH
                     destination = "local/env"
                     env         = true
               }

            template {
               data = <<EOH
               {{with secret "kv/docker/nomad/backoffice/secrets"}}
               {{range $key, $value := .Data.data}}
               {{$key}}={{$value}}{{end}}
               {{end}}
               EOH
                  destination = "secrets/file.env"
                  env = true
            }

            config {
               image = "fra-test-harbor.example.local/testing/backoffice/test:latest"
                  ports = ["api"]
                  entrypoint = [ "./docker-entrypoint.sh", "collectstatic" ]
                  force_pull = true
                  volumes = [ "${NOMAD_ALLOC_DIR}/data/backoffice/static:/app/static"]
                  auth {
                     username = "robot$devops"
                        password = "secret"
                  }
            }

            resources {
               cpu = 1000
                  memory = 256
            }
         }

         task "django" {
            driver = "docker"
               template {
                  data = <<EOH
                  {{key "nomad/backoffice/environment"}}
                  EOH
                     destination = "local/env"
                     env         = true
               }

            template {
               data = <<EOH
               {{with secret "kv/docker/nomad/backoffice/secrets"}}
               {{range $key, $value := .Data.data}}
               {{$key}}={{$value}}{{end}}
               {{end}}
               EOH
                  destination = "secrets/file.env"
                  env = true
            }

            config {
               image = "fra-test-harbor.example.local/testing/backoffice/test:latest"
                  ports = ["api"]
                  volumes = [ "${NOMAD_ALLOC_DIR}/data/backoffice/static:/app/static"]
                  auth {
                     username = "robot$devops"
                        password = "secret"
                  }
            }

            resources {
               cpu = 1000
                  memory = 256
            }
         }

         task "nginx" {
            driver = "docker"

               template {
                  data = <<EOH
                  {{- if service "django" -}}
                  {{- with index (service "django") 0 -}}
                  API={{ .Address }}:{{ .Port }}
                  {{- end -}}
                  {{- end }}
                  EOH

                     destination = "local/envvars.txt"
                     env = true
               }

            template {
               data = <<EOH
               {{key "nomad/backoffice/environment"}}
               EOH
                  destination = "local/env"
                  env         = true
            }

            template {
               data        = "{{ with secret \"kv/docker/nomad/certs\"  }}{{.Data.data.key}}{{end}}"
                  destination = "certs/server.key"
                  change_mode = "restart"
                  perms        = "0644" 
                  splay       = "1m"
            }

            template {
               data        = "{{ with secret \"kv/docker/nomad/certs\"  }}{{.Data.data.crt}}{{end}}"
                  destination = "certs/server.crt"
                  change_mode = "restart"
                  perms        = "0644" 
                  splay       = "1m"
            }
            template {
               data        = "{{ with secret \"kv/docker/nomad/certs\"  }}{{.Data.data.dh}}{{end}}"
                  destination = "certs/dh.pem"
                  change_mode = "restart"
                  perms        = "0400" 
                  splay       = "1m"
            }
            config {
               image = "fra-test-harbor.example.local/testing/backoffice/nginx:latest"
                  ports = ["https"]
                  volumes = [
                     "${NOMAD_ALLOC_DIR}/data/backoffice/static:/home/app/static/",
                     "certs/:/certs"
                  ]
                  auth {
                     username = "robot$devops"
                        password = "secret"
                  }
            }

            resources {
               cpu = 1000
                  memory = 256
            }
         }

      }
}

Hi,

I've found an existing issue about this: Add interpolation for driver IP address of another task within the group · Issue #4913 · hashicorp/nomad · GitHub

Hi there,

I've got it working in a different way … it wasn't that complicated:

network {
  port "api" { to = 8000 }
  port "https" { to = 443 }
}

...
task "nginx" {
  # NOMAD_HOST_ADDR_api resolves, per allocation, to the host address and
  # dynamically mapped port of this group's "api" port.
  env = {
    API = "${NOMAD_HOST_ADDR_api}"
  }
...

So the API variable (which Nginx uses for its upstream host) is now populated with the host address and port of the app in the same group. :slight_smile:
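As far as I understand it, NOMAD_HOST_ADDR_<label> is interpolated separately for every allocation when the task starts, so each scaled copy of the group gets the address of its own "api" port instead of whatever instance happens to be listed first in Consul. That also means the Consul-based template for API in the nginx task can simply be dropped.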


Glad you found a solution!
