I've got almost everything I want working…
The last thing I need is communication between Consul services.
So I defined a job with network mode "bridge" and a service → connect → sidecar_service {} block to test it out. The service is called "node-exporter-nginx".
I deployed a curl job to test upstream communication… which fails with a 503 error.
I did allow the curl → node-exporter-nginx connection using intentions.
The service itself is healthy and I can access it just fine from the host, using Consul DNS for example.
Where should I be looking for the root cause?
How can I test that Consul Connect is working correctly?
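E.g., is something like this the right way to probe it? (Hedged sketch; the alloc ID is a placeholder.)

```shell
# Confirm the intention actually allows curl → node-exporter-nginx
consul intention check curl node-exporter-nginx

# Tail the Envoy sidecar logs on the destination side
nomad alloc logs -f -task connect-proxy-node-exporter-nginx <alloc-id>
```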
job "curl" {
  datacenters = ["dc1"]
  type        = "service"

  group "demo" {
    count          = ${replica_count}
    shutdown_delay = "30s"

    network {
      mode = "bridge"
    }

    service {
      name     = "curl"
      provider = "consul"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "node-exporter-nginx"
              local_bind_port  = 9113

              config {
                protocol = "http"
              }
            }
          }
        }
      }

      tags = [
        "primary",
      ]
    }

    task "curl" {
      driver = "docker"

      config {
        image   = "curlimages/curl:latest"
        command = "sh"
        args    = ["-c", "sleep infinity"]
      }

      env {
        NETWORK_IP2 = "$${attr.unique.network.ip-address}"
        DRIVER_IP2  = "$${attr.driver.docker.bridge_ip}"
      }
    }
  }
}
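For reference, the destination job needs a matching Connect stanza, and its service port must point at the application's port inside the bridge network namespace so the destination sidecar can reach the app locally. A hedged sketch of what the node-exporter-nginx side might look like (name and port 9113 are taken from the output below; the rest is assumption):

```hcl
service {
  name     = "node-exporter-nginx"
  port     = "9113"   # must resolve to the app's local port, not a host-mapped label
  provider = "consul"

  connect {
    sidecar_service {}   # an empty stanza is enough on the destination side
  }
}
```

A service port that resolves to a host-mapped address instead of the local app port is a common cause of "remote connection failure" from the destination sidecar.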
~ $ netstat -tunpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.2:19001 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:9113 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:27402 0.0.0.0:* LISTEN -
~ $ curl -v 127.0.0.1:9113
* Trying 127.0.0.1:9113...
* Connected to 127.0.0.1 (127.0.0.1) port 9113
* using HTTP/1.x
> GET / HTTP/1.1
> Host: 127.0.0.1:9113
> User-Agent: curl/8.13.0
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 503 Service Unavailable
< content-length: 98
< content-type: text/plain
< date: Sat, 14 Jun 2025 13:46:34 GMT
< server: envoy
<
* Connection #0 to host 127.0.0.1 left intact
upstream connect error or disconnect/reset before headers. reset reason: remote connection failure
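The 503 body above is generated by Envoy itself, which suggests the request reaches the local sidecar but the hop onward fails. One hedged way to narrow it down is the Envoy admin endpoint seen listening on 127.0.0.2:19001 in the netstat output (the alloc ID is a placeholder):

```shell
# Dump Envoy's view of the upstream cluster from inside the curl allocation
nomad alloc exec -task curl <curl-alloc-id> \
  curl -s 127.0.0.2:19001/clusters | grep node-exporter-nginx
```

Non-zero cx_connect_fail counters here would point at the destination sidecar or service rather than the curl side.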
Should curl from host → sidecar work?
$ nomad job allocs node-exporter-nginx
ID Node ID Task Group Version Desired Status Created Modified
6f60abc3 b84fd224 monitoring 1 run running 59s ago 21s ago
3ae59c44 b84fd224 monitoring 0 stop complete 1h2m ago 28s ago
$ nomad alloc status 6f60abc3
ID = 6f60abc3-8926-3214-963c-f1af935680e6
Eval ID = 3394e3d5
Name = node-exporter-nginx.monitoring[0]
Node ID = b84fd224
Node Name = dell-mini
Job ID = node-exporter-nginx
Job Version = 1
Client Status = running
Client Description = Tasks are running
Desired Status = run
Desired Description = <none>
Created = 1m8s ago
Modified = 30s ago
Allocation Addresses (mode = "bridge"):
Label Dynamic Address
*http yes 192.168.1.48:24287 -> 9113
*connect-proxy-node-exporter-nginx yes 192.168.1.48:29187 -> 29187
Task "connect-proxy-node-exporter-nginx" (prestart sidecar) is "running"
Task Resources:
CPU Memory Disk Addresses
4/100 MHz 12 MiB/300 MiB 300 MiB
Task Events:
Started At = 2025-06-14T14:25:15Z
Finished At = N/A
Total Restarts = 0
Last Restart = N/A
Recent Events:
Time Type Description
2025-06-14T11:25:15-03:00 Started Task started by client
2025-06-14T11:25:13-03:00 Task Setup Building Task Directory
2025-06-14T11:24:40-03:00 Received Task received by client
Task "node-exporter-nginx" is "running"
Task Resources:
CPU Memory Disk Addresses
0/100 MHz 1.7 MiB/128 MiB 300 MiB
Task Events:
Started At = 2025-06-14T14:25:18Z
Finished At = N/A
Total Restarts = 0
Last Restart = N/A
Recent Events:
Time Type Description
2025-06-14T11:25:18-03:00 Started Task started by client
2025-06-14T11:25:15-03:00 Driver Downloading image
2025-06-14T11:25:15-03:00 Task Setup Building Task Directory
2025-06-14T11:24:40-03:00 Received Task received by client
==> View allocation details in the Web UI: http://127.0.0.1:4646/ui/allocations/6f60abc3-8926-3214-963c-f1af935680e6
$ curl -I -v 192.168.1.48:24287
* Trying 192.168.1.48:24287...
* Connected to 192.168.1.48 (192.168.1.48) port 24287
* using HTTP/1.x
> HEAD / HTTP/1.1
> Host: 192.168.1.48:24287
> User-Agent: curl/8.14.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Content-Type: text/html; charset=UTF-8
Content-Type: text/html; charset=UTF-8
< Date: Sat, 14 Jun 2025 14:26:00 GMT
Date: Sat, 14 Jun 2025 14:26:00 GMT
< Content-Length: 1519
Content-Length: 1519
<
* Connection #0 to host 192.168.1.48 left intact
$ curl -I -v 192.168.1.48:29187
* Trying 192.168.1.48:29187...
* Connected to 192.168.1.48 (192.168.1.48) port 29187
* using HTTP/1.x
> HEAD / HTTP/1.1
> Host: 192.168.1.48:29187
> User-Agent: curl/8.14.1
> Accept: */*
>
* Request completely sent off
* Empty reply from server
* shutting down connection #0
curl: (52) Empty reply from server
Curl from host → sidecar won't work, since your curl can't present the Consul mTLS client certificate the Connect proxy requires.
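If you do want to exercise the Connect path from the host, one hedged option is a throwaway local proxy that authenticates as the curl service (the local port 9191 is an arbitrary choice):

```shell
# Start a local proxy registered as "curl"; it handles the mTLS handshake
# and forwards 127.0.0.1:9191 through Connect to node-exporter-nginx
consul connect proxy -service curl -upstream node-exporter-nginx:9191

# then, in another shell:
curl -v 127.0.0.1:9191
```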
Could you share the service definition of your node-exporter-nginx service? Something’s odd here.
BTW: if you want to access your node-exporter-nginx from Prometheus, it won't really work nicely with Connect. Prometheus wants to scrape a plain HTTP endpoint, especially if you're using consul_sd_configs.
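One hedged way around that is Nomad's expose block, which has the sidecar serve a specific path over plain HTTP on a separate listener, bypassing mTLS just for metrics (the "metrics" port label and the /metrics path are assumptions):

```hcl
network {
  mode = "bridge"

  port "metrics" {
    to = -1   # let Nomad pick the container-side port for the expose listener
  }
}

service {
  name = "node-exporter-nginx"
  port = "9113"

  connect {
    sidecar_service {
      proxy {
        expose {
          path {
            path            = "/metrics"
            protocol        = "http"
            local_path_port = 9113
            listener_port   = "metrics"
          }
        }
      }
    }
  }
}
```

Prometheus can then scrape the exposed port directly, while service-to-service traffic still goes through Connect.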