Consul Networking

Hey,

I am facing the following issue. I want to set up a basic Nomad/Consul cluster with a Flask app that has to connect to a Redis cache. For testing purposes I have a “master” node with Nomad and Consul in server mode and a “slave” node with Nomad and Consul in client mode.
The servers only have public IP addresses (no private ones), so I want to connect via the public addresses.

Here are my configs:
Consul Server:

data_dir = "/opt/consul"

server           = true
bootstrap_expect = 1 
client_addr      = "111.222.333.444"
bind_addr        = "111.222.333.444"
advertise_addr   = "111.222.333.444"
log_level = "INFO"
enable_syslog = true

connect {
  enabled = true
}
ui_config {
  enabled = true
}
ports {
  grpc = 8502
}

datacenter       = "dc1"

telemetry {
  prometheus_retention_time = "30s"
}

Consul Client:

data_dir = "/opt/consul"

server           = false
client_addr      = "222.333.444.555"
bind_addr        = "222.333.444.555"
advertise_addr   = "222.333.444.555"
log_level = "INFO"
enable_syslog = true
leave_on_terminate = true
start_join = [
  "111.222.333.444"
]

connect {
  enabled = true
}
ui_config {
  enabled = false
}
ports {
  grpc = 8502
}

datacenter       = "dc1"

telemetry {
  prometheus_retention_time = "30s"
}

Nomad Server:

bind_addr = "111.222.333.444"
data_dir  = "/opt/nomad"

datacenter = "dc1"
region = "dc1"

advertise {
  http = "111.222.333.444"
  rpc  = "111.222.333.444"
  serf = "111.222.333.444"
}

server {
  enabled          = true
  bootstrap_expect = 1
  server_join {
    retry_join = ["localhost:4648"]
  }
}

plugin "raw_exec" {
  config {
    enabled = true
  }
}

consul {
  address = "localhost:8500"
}

log_level = "INFO"

And the Nomad client:

bind_addr = "222.333.444.555"
data_dir  = "/opt/nomad"

datacenter = "dc1"
region = "dc1"

advertise {
  http = "222.333.444.555"
  rpc  = "222.333.444.555"
  serf = "222.333.444.555"
}

client {
  enabled       = true
  network_interface = "eth0"
  server_join {
    retry_join = ["111.222.333.444"]
    retry_max = 3
    retry_interval = "15s"
  }
}

plugin "raw_exec" {
  config {
    enabled = true
  }
}

consul {
  address = "111.222.333.444:8500"
}

log_level = "INFO"

And here is the app I am trying to deploy:

job "hit-counter" {
  datacenters = ["dc1"]
  type        = "service"

  group "flask" {
    count = 1

    network {
      mode = "bridge"
    }

    service {
      name     = "flask"
      provider = "consul"
      port     = "8000"
      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "redis"
              local_bind_port  = 6379
            }
          }
        }
      }
    }

    task "flask" {
      driver = "docker"
      
      template {
         data = <<EOH
{{ range service "redis" }}
REDIS_HOST={{ .Address }}
REDIS_PORT={{ .Port }}
{{ end }}
EOH
         destination = "secrets/config.env"
         env         = true
      }

      config {
        image        = "flaskapp:1.1"
      }
    }
  }

  group "redis" {
    count = 1

    network {
      mode = "bridge"
      port "redis" {
        to = 6379
      }
    }

    service {
      name     = "redis"
      provider = "consul"
      port     = "6379"
      connect {
        sidecar_service {}
      }
    }

    task "redis" {
      driver = "docker"

      config {
        image = "redislabs/redismod"
        ports = ["redis"]
      }
    }
  }
}

I also installed the CNI plugins on both machines.

What am I missing here?
With the current config, Flask cannot access Redis.

I am grateful for any help.

@seth.hoenig can you please have a look at this issue?

I think

env {
  REDIS_HOST = "${NOMAD_UPSTREAM_IP_redis}"
  REDIS_PORT = "${NOMAD_UPSTREAM_PORT_redis}"
}

does the trick here, but I am still confused whether it is Consul or Nomad doing the routing now.

I then also wanted to set up an ingress gateway:

job "hit-counter-ingress" {

  datacenters = ["dc1"]

  group "hit-counter-ingress" {

    count = 1
    network {
      mode = "bridge"
      port "web_inbound" {
        to     = 8000
        static = 80
      }
    }

    service {
      name = "hit-counter-ingress"
      port = 8000

      tags = [
        "urlprefix-/"
      ]

      connect {
        gateway {
          proxy {
          }
          ingress {
            listener {
              port     = 8000
              protocol = "tcp"
              service {
                name = "flask"
              }
            }            
          }
        }
      }

      check {
        type = "http"
        port = 8000
        path = "/"
        interval = "4s"
        timeout = "2s"
      }
    }
  }
}

The connection works, but the ingress Envoy cannot pass the health check.
I still have some question marks regarding the networking here.

Hello,

The env stanza is required because, when you define an upstream in Consul, Nomad retrieves the IP address for routing. With the upstreams stanza, Consul’s sidecar proxy routes everything through localhost, so your app has to connect via the local sidecar rather than the catalog address that your template renders (the address from the Consul catalog is not reachable through the Connect mesh).
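For reference, here is a minimal sketch of your flask task with the template stanza replaced by the env stanza; NOMAD_UPSTREAM_IP_redis and NOMAD_UPSTREAM_PORT_redis are interpolated by Nomad from the upstream definition, and everything else is unchanged from your job:

task "flask" {
  driver = "docker"

  # The Connect sidecar listens on local_bind_port (6379) inside the
  # group's bridge network namespace, so the app only ever talks to
  # loopback; the sidecar handles routing to the redis service.
  env {
    REDIS_HOST = "${NOMAD_UPSTREAM_IP_redis}"
    REDIS_PORT = "${NOMAD_UPSTREAM_PORT_redis}"
  }

  config {
    image = "flaskapp:1.1"
  }
}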

As for ingress gateways, I think you may need to remove the check stanza. Ingress gateways are using Envoy and have their own built-in health check for the service’s port. As a result, the health check you define as part of the ingress gateway job would fail, as the ingress gateway itself (and thus Envoy) is not running on port 8000.