I’ve peered two clusters, exported the services, and created the intentions.
Cluster A has a service (“service-a”) and cluster B has a service (“service-b”).
From cluster A I can see the service in cluster B, and it shows as healthy.
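For reference, this is roughly what I have on the cluster B side (peer names like “cluster-a” are just how the peering is named in my setup, so treat them as assumptions). The exported-services config entry:

Kind = "exported-services"
Name = "default"

Services = [
  {
    Name = "service-b"
    Consumers = [
      { Peer = "cluster-a" }
    ]
  }
]

and the intention allowing service-a to call service-b over the peering:

Kind = "service-intentions"
Name = "service-b"

Sources = [
  {
    Name   = "service-a"
    Peer   = "cluster-a"
    Action = "allow"
  }
]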
I’ve defined this service config:
service {
  name  = "service-a"
  id    = "service-a-i-<instanceid>"
  port  = 8080
  token = "<service-token>"

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "service-b"
            destination_peer = "cluster-b"
            local_bind_port  = 8000
          }
        ]
      }
    }
  }
}
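For completeness, this is roughly how I register the service and start the sidecar (the file name and flags are from my setup, so treat them as assumptions):

consul services register service-a.hcl
consul connect envoy -sidecar-for service-a-i-<instanceid> -token "<service-token>" -- --log-level debug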
Running the proxy in debug mode, I see:
[2022-12-10 19:11:30.712][38][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:197] [C53] new tcp proxy session
[2022-12-10 19:11:30.712][38][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:369] [C53] Creating connection to cluster service-b.default.cluster-b.external.<some-uuid>.consul
[2022-12-10 19:11:30.712][38][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1743] no healthy host for TCP connection pool
[2022-12-10 19:11:30.712][38][debug][connection] [source/common/network/connection_impl.cc:139] [C53] closing data_to_write=0 type=1
[2022-12-10 19:11:30.712][38][debug][connection] [source/common/network/connection_impl.cc:250] [C53] closing socket: 1
What could be the problem?
Update:
Calling the Envoy admin /clusters endpoint, I see that this cluster doesn’t have “host_statuses” like the others:
{
  "name": "service-b.default.cluster-b.external.<some-uuid>.consul",
  "added_via_api": true,
  "circuit_breakers": {
    "thresholds": [
      {
        "max_connections": 1024,
        "max_pending_requests": 1024,
        "max_requests": 1024,
        "max_retries": 3
      },
      {
        "priority": "HIGH",
        "max_connections": 1024,
        "max_pending_requests": 1024,
        "max_requests": 1024,
        "max_retries": 3
      }
    ]
  },
  "observability_name": "service-b.default.cluster-b.external.<some-uuid>.consul"
}
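In case it’s relevant, these are the checks I’ve been poking at it with (assuming the default ports in my setup: Consul HTTP API on 8500 and the sidecar’s Envoy admin API on 19000):

# imported service health as seen from cluster A
curl 'http://localhost:8500/v1/health/service/service-b?peer=cluster-b'

# full cluster status from the sidecar's Envoy admin API
curl 'http://localhost:19000/clusters?format=json'

# the upstream itself, through the local bind port
curl 'http://localhost:8000'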