I’m having a hard time figuring out what’s wrong with this minimal example of running Nomad and Consul Connect.
I’m following along with the Consul Connect guide in the Nomad documentation, but with slight modifications: using netcat instead of socat, and running Nomad not in dev mode but as a 3-client/3-server cluster in Vagrant.
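In case it matters: since this isn’t dev mode, Connect has to be enabled explicitly on the Consul side. My Vagrant provisioning does roughly the following to the Consul agent config (a sketch of the relevant settings, not my exact files):

# Connect must be enabled on the servers, and Envoy talks to its local
# agent over gRPC, so the gRPC port is opened as well.
connect {
  enabled = true
}

ports {
  grpc = 8502
}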
My downstream service cannot talk to my upstream service. In the logs of the downstream’s Envoy sidecar task, I see a lot of no healthy host for TCP connection pool errors:
[2021-02-02 04:14:06.645][14][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:389] [C134] Creating connection to cluster exec-upstream-service.default.dc1.internal.1be84599-6568-253c-820b-7b161b4193f3.consul
[2021-02-02 04:14:06.645][14][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1417] no healthy host for TCP connection pool
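To double-check on the Consul side, the health API can list the Connect-capable endpoints it considers passing for the upstream (this assumes a local agent on the default HTTP port):

curl 'http://127.0.0.1:8500/v1/health/connect/exec-upstream-service?passing'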
Here’s the job that I’m running:
job "exec-services" {
datacenters = ["dc1"]
type = "service"
group "group" {
network {
mode = "bridge"
port "upstream" { to = "8181" }
}
service {
name = "exec-upstream-service"
port = "upstream"
connect {
sidecar_service {}
}
}
task "exec-upstream-service" {
driver = "exec"
config {
command = "/bin/sh"
args = [
"-c",
"while true; do printf 'HTTP/1.1 200 OK\nContent-Type: text/plain; charset=UTF-8\nServer: netcat\n\nHello, world.\n' | nc -w 10 -p 8181 -l; sleep 1; done"
]
}
}
service {
name = "exec-downstream-service"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "exec-upstream-service"
local_bind_port = 9191
}
}
}
}
}
task "exec-downstream-service" {
driver = "exec"
config {
command = "/bin/sh"
args = [
"-c",
"echo \"starting\"; while true; do nc -w 1 localhost 9191; sleep 1; done"
]
}
}
}
}
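For reference, this is how I’ve been poking at it by hand; the alloc IDs below are placeholders:

# Hitting the netcat responder directly from inside the upstream task
# should work, independent of Connect:
nomad alloc exec -task exec-upstream-service <upstream-alloc-id> nc -w 1 localhost 8181

# Going through the Connect upstream bind from inside the downstream task
# is what fails for me:
nomad alloc exec -task exec-downstream-service <downstream-alloc-id> nc -w 1 localhost 9191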