Nomad 1.4 and HAProxy server-template without Consul and its DNS feature?

I’m building a new project with Nomad and want to use HAProxy as my public-facing reverse proxy for a public API (performance, reliability, and I have some experience with it).

For this specific API, I want the best performance I can get (hence preferring HAProxy over Caddy or Traefik, for example).

I have a first working version with Nomad, Consul and HAProxy, but I would like to remove Consul. After reading some documentation on the steps to deploy and maintain it in production, I would like to avoid it if I can.

In my current configuration, HAProxy knows where to route requests thanks to the HAProxy server-template directive and DNS resolution provided by Consul.

I was inspired by this official HashiCorp tutorial and this blog post.

As far as I can tell, Nomad 1.4 does not (and will not) provide such a feature.

Can someone confirm my understanding?
I’m also open to other ideas / suggestions.

Thanks

Hi @brahima,

This is totally possible; however, Nomad does not currently provide a native DNS lookup feature with its service discovery functionality. The way to achieve this would be to utilise Nomad’s template block from within the HAProxy job and use the consul-template Nomad API lookup functions to look up the services that you wish to route to from the HAProxy instance. This would then write a config file that HAProxy loads; changes to the config could then use the template signal configuration options to send a SIGHUP signal to HAProxy to inform it to reload its configuration.
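A minimal sketch of what such a template block could look like, assuming a Nomad-registered service named `my-api` (that name, and the single backend, are placeholders for illustration only):

```hcl
template {
  destination   = "local/haproxy.cfg"
  change_mode   = "signal"
  change_signal = "SIGHUP"

  data = <<EOF
backend my-api
   balance roundrobin
{{- /* nomadService is the consul-template lookup backed by Nomad's service API */}}
{{- range nomadService "my-api" }}
   server {{ .Name }}-{{ .Address }}-{{ .Port }} {{ .Address }}:{{ .Port }} check
{{- end }}
EOF
}
```

On every change to the `my-api` service registrations, Nomad re-renders `local/haproxy.cfg` and sends HAProxy a SIGHUP so it reloads the new backend list.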

Thanks,
jrasell and the Nomad team

Hi @jrasell, and thanks for the time you took answering my question.

I did my tests using the 1.4 beta-1 to play with the latest features.

I solved my problem following your advice and now have a working deployment without Consul.

However, there are a couple of points (see below) that I would like to improve/understand.

I went from this (with Consul):

global
   tune.ssl.default-dh-param 2048

defaults
   mode http
   timeout connect 5000
   timeout check 5000
   timeout client 30000
   timeout server 30000

frontend stats
   bind *:1936
   stats uri /
   stats show-legends
   no log

frontend https_front
   # TODO: Update bind addresses here (do not use *)
   bind *:{{ env "NOMAD_PORT_https" }} ssl crt /local/tls/private/example.com.pem
   http-request set-header X-Forwarded-Proto https

{{ range $tag, $services := services | byTag }}{{ if eq $tag "proxy" }}{{ range $service := $services }}{{ if ne .Name "haproxy" }}
   acl host_{{ .Name }} hdr(host) -i {{ .Name }}.example.com
   use_backend {{ .Name }} if host_{{ .Name }}
{{ end }}{{ end }}{{ end }}{{ end }}

{{ range $tag, $services := services | byTag }}{{ if eq $tag "proxy" }}{{ range $service := $services }}{{ if ne .Name "haproxy" }}
backend {{ .Name }}
    balance roundrobin
    server-template {{ .Name }} 3 _{{ .Name }}._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check
{{ end }}{{ end }}{{ end }}{{ end }}

resolvers consul
   nameserver consul {{ env "CONSUL_HTTP_ADDR" }}:8600
   accepted_payload_size 8192
   hold valid 5s

To this (with Nomad). Here I copy/paste the full task stanza:

    task "haproxy" {
      driver = "docker"

      config {
        image        = "haproxy:2.6"
        # Not in host mode
        #network_mode = "host"

        ports = ["haproxy_ui", "https"]

        # /!\ This is important to mount the haproxy directory and not a file
        # See : https://github.com/docker-library/haproxy/issues/31#issuecomment-247680432
        volumes = [
          "local:/usr/local/etc/haproxy",
        ]
      }

      template {
        destination   = "local/haproxy.cfg"
        change_mode   = "signal"
        change_signal = "SIGHUP"
        data          = <<EOF

global
   tune.ssl.default-dh-param 2048

defaults
   mode http
   timeout connect 5000
   timeout check 5000
   timeout client 30000
   timeout server 30000

frontend stats
   bind *:1936
   stats uri /
   stats show-legends
   no log

frontend https_front
   # TODO: Update bind address (remove *)
   bind *:{{ env "NOMAD_PORT_https" }} ssl crt /local/tls/private/example.com.pem
   http-request set-header X-Forwarded-Proto https
{{ range $tag, $services := nomadServices | byTag }}{{ if eq $tag "proxy" }}{{ range $service := $services }}{{ if ne .Name "haproxy" }}
   acl host_{{ .Name }} hdr(host) -i {{ .Name }}.example.com
   use_backend {{ .Name }} if host_{{ .Name }}

backend {{ .Name }}
   balance roundrobin
{{- end }}{{- end }}{{- end }}{{- end }}
{{$allocID := env "NOMAD_ALLOC_ID" -}}
{{range nomadService 3 $allocID "preview"}}
   server {{.Name}}-{{.Address}} {{.Address}}:{{.Port}} check
{{- end }}


EOF
      }

      resources {
        cpu    = 200
        memory = 128
      }

      volume_mount {
        volume      = "certs"
        destination = "/local/tls"
        read_only   = true
      }
    }

Note that for the SIGHUP signal to work, I had to mount the haproxy directory (containing the haproxy.cfg) and not the file only.

Hope this helps anyone passing by.

Two points that I’d like to improve/understand:

  1. Is there a unique service ID, or a way to generate a unique service name to use in the template? I had to concatenate the word ‘server’ with the IP address ({{.Name}}-{{.Address}}) to generate something unique to give to HAProxy (I played with range, loop … with no success).
  2. When I increase/decrease my target service count, I can see (via the logs) that HAProxy reloads its config and the HAProxy allocation is not killed, but the template is sometimes rendered twice. Is that normal (rendering 2 times)?

Thanks again for your time; I hope my explanations were clear.

Best regards,
Brahim


Sorry for being dense, but I am not seeing your concatenation of the word “server” in your Nomad-specific config, other than its obvious use while declaring a timeout and a backend. That said, if you are referring to the declaration of an HAProxy backend server, I have used the config below with success to ensure a unique server name for every allocation instance of a specific Nomad service:

server {{.Name}}-{{.Address}}-{{.Port}} {{.Address}}:{{.Port}} check

This is normal for me, especially when scaling a target service count up or down in our Nomad sandbox with limited resources. The best way I can explain it (based on what I have observed) is that the template is re-rendered for every change in allocation count for a specific service. For example, scaling from 1 to 3 allocations, I have consistently seen the template re-render 2 times. I haven’t had time to test this theory, but extending splay may reduce the number of times the template is re-rendered, if that is a goal of yours.
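For reference, splay is set on the Nomad template block itself. A sketch (the "30s" value is only an illustration, not a recommendation):

```hcl
template {
  destination   = "local/haproxy.cfg"
  change_mode   = "signal"
  change_signal = "SIGHUP"

  # Wait a random interval between 0s and 30s before taking the change
  # action (the SIGHUP here), so bursts of allocation churn are smoothed
  # out rather than triggering a reload per event.
  splay = "30s"

  data = <<EOF
# ... haproxy.cfg template as shown earlier in the thread ...
EOF
}
```

Note splay delays the signal/restart action, not the rendering itself, so you may still see multiple renders in the logs even when HAProxy is only reloaded once.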