Hi,
I am trying to set up a Boundary worker on a VM that is integrated into the service mesh through transparent proxy.
On Nomad I run the http-echo example service, which is part of the service mesh as well. The Boundary target is set to echo-api.virtual.consul, which resolves to the correct virtual IP:
PING echo-api.virtual.consul (240.0.0.1) 56(84) bytes of data.
I followed all the steps in this tutorial.
The service definition on the Boundary worker:
service {
  name = "boundary-worker-0"

  connect {
    sidecar_service {
      proxy {
        mode = "transparent"
      }
    }
  }
}
This is my full traffic-redirect configuration:
/usr/bin/consul connect redirect-traffic \
  -proxy-id=boundary-worker-0-sidecar-proxy \
  -proxy-uid="$(id --user envoy)" \
  -exclude-uid="$(id --user consul)" \
  -exclude-uid="$(id --user boundary)" \
  -exclude-inbound-port=22 \
  -exclude-inbound-port=9202 \
  -consul-dns-port=8600 \
  -http-addr=https://127.0.0.1:8501 \
  -ca-file=/opt/consul/agent-certs/ca.pem \
  -client-cert=/opt/consul/agent-certs/agent.pem \
  -client-key=/opt/consul/agent-certs/agent.key \
  -token-file=/etc/consul.d/tokens/gateway
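For reference, the NAT state below was captured with something like the following (root assumed; the CONSUL_* chains are the ones created by redirect-traffic):

```shell
# List the NAT table with packet counters; the CONSUL_PROXY_* and
# CONSUL_DNS_REDIRECT chains are installed by
# `consul connect redirect-traffic`.
sudo iptables -t nat -L -n -v
```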
The Consul Envoy sidecar systemd unit:
ExecStart=/usr/bin/consul connect envoy \
  -sidecar-for=boundary-worker-0 \
  -http-addr=https://127.0.0.1:8501 \
  -grpc-addr=https://127.0.0.1:8503 \
  -ca-file=/opt/consul/agent-certs/ca.pem \
  -client-cert=/opt/consul/agent-certs/agent.pem \
  -client-key=/opt/consul/agent-certs/agent.key \
  -token-file=/etc/consul.d/tokens/gateway
The iptables NAT rules:
Chain PREROUTING (policy ACCEPT 21 packets, 1384 bytes)
pkts bytes target prot opt in out source destination
388 21144 CONSUL_PROXY_INBOUND tcp -- any any anywhere anywhere
Chain INPUT (policy ACCEPT 393 packets, 21628 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 532 packets, 33100 bytes)
pkts bytes target prot opt in out source destination
0 0 CONSUL_DNS_REDIRECT udp -- any any anywhere localhost udp dpt:domain
0 0 CONSUL_DNS_REDIRECT tcp -- any any anywhere localhost tcp dpt:domain
582 35373 CONSUL_PROXY_OUTPUT tcp -- any any anywhere anywhere
Chain POSTROUTING (policy ACCEPT 636 packets, 39736 bytes)
pkts bytes target prot opt in out source destination
Chain CONSUL_DNS_REDIRECT (2 references)
pkts bytes target prot opt in out source destination
0 0 DNAT udp -- any any anywhere localhost udp dpt:domain to:127.0.0.1:8600
0 0 DNAT tcp -- any any anywhere localhost tcp dpt:domain to:127.0.0.1:8600
Chain CONSUL_PROXY_INBOUND (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN tcp -- any any anywhere anywhere tcp dpt:9202
16 900 RETURN tcp -- any any anywhere anywhere tcp dpt:ssh
372 20244 CONSUL_PROXY_IN_REDIRECT tcp -- any any anywhere anywhere
Chain CONSUL_PROXY_IN_REDIRECT (1 references)
pkts bytes target prot opt in out source destination
372 20244 REDIRECT tcp -- any any anywhere anywhere redir ports 21000
Chain CONSUL_PROXY_OUTPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- any any anywhere anywhere owner UID match boundary
127 7677 RETURN all -- any any anywhere anywhere owner UID match consul
0 0 RETURN all -- any any anywhere anywhere owner UID match envoy
351 21060 RETURN all -- any any anywhere localhost
104 6636 CONSUL_PROXY_REDIRECT all -- any any anywhere anywhere
Chain CONSUL_PROXY_REDIRECT (1 references)
pkts bytes target prot opt in out source destination
104 6636 REDIRECT tcp -- any any anywhere anywhere redir ports 15001
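One thing worth noting about the CONSUL_PROXY_OUTPUT chain above: outbound packets owned by the boundary, consul and envoy UIDs RETURN early and never reach Envoy's outbound listener on 15001, so a curl run as one of those excluded users bypasses the mesh entirely. A quick, Consul-agnostic check of which UID a test command runs under:

```shell
# Print the effective UID of the current shell; if it matches one of the
# UIDs excluded in CONSUL_PROXY_OUTPUT (boundary, consul, envoy), a curl
# from this shell is NOT redirected to Envoy on port 15001.
id --user
```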
The service is correctly registered in Consul, but when I run curl on the Boundary VM, or through the Boundary tunnel from my machine, I don’t get any answer from the Nomad job.
The Consul Envoy debug logs:
[2025-01-16 09:55:06.390][27538][debug][conn_handler] [source/common/listener_manager/active_tcp_listener.cc:160] [Tags: "ConnectionId":"163"] new connection from **********:38262
[2025-01-16 09:55:06.390][27538][trace][connection] [source/common/network/connection_impl.cc:614] [Tags: "ConnectionId":"163"] socket event: 2
[2025-01-16 09:55:06.390][27538][trace][connection] [source/common/network/connection_impl.cc:737] [Tags: "ConnectionId":"163"] write ready
[2025-01-16 09:55:06.390][27538][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:459] [Tags: "ConnectionId":"163"] Creating connection to cluster echo-api.default.fra1.internal.78846937-b490-5264-ea08-b7e4ec16f1b8.consul
[2025-01-16 09:55:06.390][27538][debug][connection] [source/common/network/connection_impl.cc:150] [Tags: "ConnectionId":"163"] closing data_to_write=0 type=1
[2025-01-16 09:55:06.390][27538][debug][connection] [source/common/network/connection_impl.cc:276] [Tags: "ConnectionId":"163"] closing socket: 1
[2025-01-16 09:55:06.390][27538][trace][connection] [source/common/network/connection_impl.cc:469] [Tags: "ConnectionId":"163"] raising connection event 1
[2025-01-16 09:55:06.390][27538][trace][filter] [source/common/tcp_proxy/tcp_proxy.cc:806] [Tags: "ConnectionId":"163"] on downstream event 1, has upstream = false
[2025-01-16 09:55:06.390][27538][trace][conn_handler] [source/common/listener_manager/active_stream_listener_base.cc:126] [Tags: "ConnectionId":"163"] tcp connection on event 1
[2025-01-16 09:55:06.390][27538][debug][conn_handler] [source/common/listener_manager/active_stream_listener_base.cc:136] [Tags: "ConnectionId":"163"] adding to cleanup list
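Filtered down to the tcp_proxy events, the sequence shows Envoy accepting the downstream connection, selecting the echo-api cluster, and then closing before any upstream connection exists ("has upstream = false") — the snippet below just re-greps abbreviated copies of the lines above, no live Envoy assumed:

```shell
# Reduce the pasted log to the relevant filter events; a close with
# "has upstream = false" means the connection was torn down before an
# upstream connection to the cluster was ever established.
log='[debug][filter] Creating connection to cluster echo-api.default.fra1.internal...consul
[debug][connection] closing data_to_write=0 type=1
[trace][filter] on downstream event 1, has upstream = false'
echo "$log" | grep -c 'has upstream = false'   # → 1
```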
Is there any additional step that I need to do?
Thanks in advance