Serving Public API through Nomad + Consul Connect + Load Balancer

I’m currently trying to develop a PoC for a microservice-oriented web service.

I’m imagining the whole architecture like this:


                                        /auth     +----+    +-----------------------+
                                     +----------> | LB +--> | Auth Service (1 to N) |
                                     |            +----+    +-----------------------+
                                     |                        ^
                                     |                        | (via Consul Connect)
                                     |                        v
+------------+     +---------------+ |  /user     +----+    +-----------------------+
|  Internet  +---->| Load Balancer | +----------> | LB +--> | User Service (1 to N) |
+------------+     +---------------+ |            +----+    +-----------------------+
                                     |                        ^
                                     |                        | (via Consul Connect)
                                     |                        v
                                     |  /buy      +----+    +-----------------------+
                                     +----------> | LB +--> | Buy Service (1 to N)  |
                                                  +----+    +-----------------------+

The user accesses the API via the web frontend from the public internet; the request goes to a load balancer, which routes it to the correct service based on the API path being called.

For example, when the user calls /auth/signup to sign up, the request hits the load balancer, which routes it to the Envoy proxy of the auth service, which in turn forwards it to one of the auth service instances (several can exist). The auth service connects to the database, creates a new record, and calls the user service via Consul Connect to create a new user record.
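
As an aside on that auth → user call: for it to go over Connect, the auth group’s sidecar needs an upstream on the user service. A sketch of what that would look like in the auth group’s service block (the local bind port 9002 is an arbitrary choice of mine):

```hcl
# Declare the user service as an upstream of the auth sidecar, so the auth
# task can reach it at 127.0.0.1:9002 through Envoy.
# "web-service-user-service" is the default Connect service name Nomad
# derives from job "web-service" / group "user-service".
connect {
  sidecar_service {
    proxy {
      upstreams {
        destination_name = "web-service-user-service"
        local_bind_port  = 9002
      }
    }
  }
}
```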

Hopefully, everything is clear so far.

I’ve created a Nomad job for this:

job "web-service" {
  datacenters = ["dc1"]
  type = "service"

  # ===========================================================================
  # POSTGRES
  # ===========================================================================
  group "postgres" {
    network {
      mode = "bridge"
      port "postgres" {
        to = 5432
      }
    }

    service {
      port = "5432"
      connect {
        sidecar_service {}
      }
    }

    task "postgres" {
      driver = "docker"
      env {
        POSTGRES_PASSWORD = "secret"
        POSTGRES_DB = "user-service"
      }
      config {
        image = "postgres:14.1-alpine"
        ports = ["postgres"]
      }
      resources {
        cpu    = 1000 # MHz
        memory = 1024 # MB
      }
    }
  }

  # ===========================================================================
  # AUTH SERVICE
  # ===========================================================================
  group "auth" {
    count = 3
    network {
      mode = "bridge"
      port "auth-service" {
        to = 9001
      }
    }

    service {
      port = "9001"
      connect {
        sidecar_service {}
      }

      check {
        expose = true
        type = "http"
        port = "auth-service"
        path = "/auth/health"
        interval = "10s"
        timeout = "3s"
      }
    }

    task "auth-service" {
      driver = "docker"
      env {
        HTTP_PORT = "${NOMAD_PORT_auth-service}"
      }
      config {
        image = "hub.docker.io/auth-service:0.1.0"
        ports = ["auth-service"]
        auth {
          username = "user"
          password = "pass"
          server_address = "hub.docker.io"
        }
      }
      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }

  # ===========================================================================
  # USER SERVICE
  # ===========================================================================
  group "user-service" {
    count = 3
    network {
      mode = "bridge"
      port "user-service" {
        to = 9001
      }
    }

    service {
      port = "9001"
      connect {
        sidecar_service {
          proxy {
            config {
              protocol = "http"
            }
            upstreams {
              destination_name = "web-service-postgres"
              local_bind_port = 5432
            }
          }
        }
      }

      check {
        expose = true
        type = "http"
        port = "user-service"
        path = "/user/health"
        interval = "10s"
        timeout = "3s"
      }
    }

    task "user-service" {
      driver = "docker"
      env {
        HTTP_PORT = "${NOMAD_PORT_user-service}"
        DB_CONNECT_STRING = "postgres://postgres:secret@${NOMAD_UPSTREAM_ADDR_web_service_postgres}/user-service"
      }
      config {
        image = "hub.docker.io/user-service:0.1.0-1"
        ports = ["user-service"]
        auth {
          username = "user"
          password = "pass"
          server_address = "hub.docker.io"
        }
      }
      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}

The services themselves bind to 127.0.0.1 on port $HTTP_PORT.
They spin up fine and the health checks work, too.
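
For reference, `expose = true` on the check is what makes the loopback-only binding workable: Nomad generates an Envoy expose path for the health endpoint. Spelled out by hand, it is roughly equivalent to this (using the auth group’s values):

```hcl
# Roughly what `expose = true` expands to for the auth health check:
# Envoy listens on the "auth-service" port and forwards only the health
# path to the task on 127.0.0.1:9001 inside the network namespace.
sidecar_service {
  proxy {
    expose {
      path {
        path            = "/auth/health"
        protocol        = "http"
        local_path_port = 9001
        listener_port   = "auth-service"
      }
    }
  }
}
```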

I’ve got two main issues:

  1. How do I access my 3 auth services through the Envoy proxy from the load balancer?
    I’ve tried Fabio, which didn’t work: it routed /auth to 127.0.0.1:9001, which is the auth service’s port, but the service isn’t reachable there from outside the allocation (due to bridge networking, which Connect requires).
    I’ve also tried HAProxy, without success: service discovery worked, but I assume it likewise tried to connect on port 9001, which didn’t work either.

  2. Communication between the user service and Postgres doesn’t work. Despite multiple attempts, and seemingly having tried everything, I can’t get the two to talk via Consul Connect.
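
On issue 2, one thing I’d double-check (an assumption on my part, not a confirmed fix): the user-service sidecar sets `protocol = "http"` in its proxy config, and Postgres speaks a binary TCP protocol. If an HTTP protocol ever gets applied to the postgres service (e.g. via a proxy-defaults entry), Envoy will mangle the connection. Pinning the postgres service to TCP with a Consul config entry looks like this:

```hcl
# Consul config entry, applied with `consul config write <file>`.
# Forces plain TCP for the postgres Connect service so Envoy passes the
# Postgres wire protocol through untouched.
Kind     = "service-defaults"
Name     = "web-service-postgres"
Protocol = "tcp"
```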

I’ve tried making HAProxy part of the service mesh, but that then meant HAProxy itself was no longer reachable.
Would an ingress gateway solve my issue, i.e. the load balancer accesses the service mesh through the gateway?

Any help or hints are much appreciated. Thank you!

Hi @Leandros, I believe what you need is a Consul ingress gateway. We have a tutorial for ingress gateways. The load balancer would point to the ingress gateway, and from there the ingress gateway would route the request to the auth service.


Alright, I got it set up with an ingress gateway.

  group "ingress" {
    count = 3
    network {
      mode = "bridge"
      port "inbound" {
        to = 8081
      }
    }

    service {
      port = "inbound"
      connect {
        gateway {
          ingress {
            listener {
              port = 8081
              protocol = "http"
              service {
                name = "web-service-auth"
                hosts = [
                  "auth.web-service.local"
                ]
              }
              service {
                name = "web-service-user"
                hosts = [
                  "user.web-service.local"
                ]
              }
            }
          }
        }
      }
    }
  }

It required some service-defaults entries in Consul, but that’s manageable. I can now reach my services via the ingress gateway on the inbound port.
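
For anyone finding this later: the service-defaults entries mark each backing service as HTTP, which the http ingress listener requires before it will route to them. One entry per service, applied with `consul config write`, along these lines:

```hcl
# Consul config entry for the auth service (repeat for web-service-user),
# so the http listener on the ingress gateway can route to it:
Kind     = "service-defaults"
Name     = "web-service-auth"
Protocol = "http"
```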

On top of that, I set up an HAProxy instance.

job "haproxy" {
  datacenters = ["dc1"]
  # run exactly one per node.
  type = "system"

  group "haproxy" {
    count = 1

    network {
      port "http" {
        static = 8080
      }

      port "haproxy_ui" {
        static = 1936
      }
    }

    service {
      name = "haproxy"
      check {
        name     = "alive"
        type     = "tcp"
        port     = "http"
        interval = "10s"
        timeout  = "2s"
      }
    }

    task "haproxy" {
      driver = "docker"

      config {
        image        = "haproxy:2.5"
        network_mode = "host"

        volumes = [
          "local/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg",
        ]
      }

      template {
        data = <<EOF
defaults
  timeout connect 10s
  timeout client 30s
  timeout server 30s
  mode http

frontend stats
  bind *:1936
  stats uri /
  stats show-legends
  no log

frontend http
  bind *:8080
  acl auth-service path_beg -i /auth
  acl user-service path_beg -i /user
  use_backend svc_auth-service if auth-service
  use_backend svc_user-service if user-service

backend svc_auth-service
  balance roundrobin
  default-server maxconn 200
  http-request set-header Host auth.web-service.local
  server-template auth-service 1-10 _web-service-ingress._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check

backend svc_user-service
  balance roundrobin
  default-server maxconn 200
  http-request set-header Host user.web-service.local
  server-template user-service 1-10 _web-service-ingress._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check

resolvers consul
  nameserver consul 127.0.0.1:8600
  accepted_payload_size 8192
  hold valid 5s
EOF

        destination = "local/haproxy.cfg"
      }

      resources {
        cpu    = 2000 # MHz
        memory = 1024 # MB
      }
    }
  }
}