How to test connectivity from my service to my sidecar-proxy

I have set up some containers with docker-compose. Consul is up and running, and my services are registered through a consul-envoy container that acts as a sidecar proxy for them. Consul Connect does not work, so I would like to do some debugging: how can I test that my service container is actually forwarding requests to the sidecar proxy? In fact, making curl requests to the upstreams from inside my sidecar-proxy container shows connection refused.

There are a few ways you could figure this out. Since you are using Envoy, if the admin interface is bound to localhost:19000 (the default), the stats endpoint should give some information about where the connection is going wrong.

The Envoy docs are pretty decent at documenting the stats. A problem with the mTLS connection to the proxy would show up in the listener stats. A problem between Envoy and the application would show up in the general statistics; in particular, the upstream_cx_* counters should give insight into the connection.
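For example, from inside the sidecar container you can pull the stats with curl and filter for the local_app cluster (the cluster name Consul's Envoy bootstrap uses for the proxied application). The counter values below are hypothetical, a canned sample standing in for the live output so the filtering is visible:

```shell
#!/bin/sh
# Live command (run inside the sidecar container):
#   curl -s localhost:19000/stats | grep -E 'local_app.*upstream_cx'
# Hypothetical sample output used here in place of a running Envoy:
sample='cluster.local_app.upstream_cx_total: 12
cluster.local_app.upstream_cx_connect_fail: 12
cluster.local_app.upstream_cx_active: 0
cluster.local_app.membership_healthy: 1'

# connect_fail equal to total means every connection Envoy attempted
# toward the local application failed.
echo "$sample" | grep -E 'upstream_cx_(total|connect_fail|active)'
```

If upstream_cx_connect_fail tracks upstream_cx_total like this, Envoy itself cannot reach the application, which points at the app's bind address rather than at the mesh.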

Another thing I personally like to do is inject a container running tcpdump or tshark into the network namespace of the Envoy container.

The Docker image I use for this is built from this Dockerfile:

FROM alpine:latest
RUN apk add --no-cache tshark

ENTRYPOINT ["/usr/bin/tshark"]
CMD [] 

Then you can run it like:

docker run -ti --rm --network container:<envoy container name or id> tshark <tshark arguments>

Some helpful tshark arguments I use are:

  • -V - Outputs much more in-depth packet decoding and will show TLS information as well as any TCP/UDP/HTTP(S) information for the unencrypted side of the proxy.
  • port <application port> and host <IP of the proxied application> - To debug the connection between the proxy and one endpoint.
  • port <public listener port> - To debug the main proxy listener. This will be an mTLS connection.
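Putting those pieces together, a small helper like this composes the capture command for the public listener. The container name and port here are placeholders (20000 is Consul's default public listener port); nothing is captured until you run the printed command next to a real sidecar:

```shell
#!/bin/sh
# Compose (but do not run) the docker/tshark invocation for a capture
# of the sidecar's public listener.
SIDECAR=booking-sidecar   # hypothetical Envoy container name
PORT=20000                # public listener port (Consul's default)

# -V decodes fully, so the mTLS handshake on the public listener is visible.
echo docker run -ti --rm --network "container:${SIDECAR}" tshark -V port "${PORT}"
# → docker run -ti --rm --network container:booking-sidecar tshark -V port 20000
```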

Thanks for the help

This is my repo: https://github.com/Crizstian/cinema-microservice-in-GO/blob/step6-consul-connect/deploy/docker-compose/consul-connect-example

These are the logs from the consul-envoy proxy:

$ docker logs edd4bdff2802
Node          Address         Status  Type    Build  Protocol  DC   Segment
57db6805ee1f  10.10.0.2:8301  alive   server  1.6.0  2         dc1  <all>
Registering service with consul /config/booking-service.hcl
Registered service: booking
Register central config /central_config/booking-defaults.hcl
Register central config /central_config/notification-defaults.hcl
Register central config /central_config/notification-resolver.hcl
Register central config /central_config/payment-defaults.hcl
Register central config /central_config/payment-resolver.hcl
Command: consul connect envoy -sidecar-for booking-dc1
[2019-09-28 21:48:16.518][81][info][main] [source/server/server.cc:205] initializing epoch 0 (hot restart version=disabled)
[2019-09-28 21:48:16.518][81][info][main] [source/server/server.cc:207] statically linked extensions:
[2019-09-28 21:48:16.518][81][info][main] [source/server/server.cc:209]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2019-09-28 21:48:16.518][81][info][main] [source/server/server.cc:212]   filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.grpc_http1_reverse_bridge,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.filters.http.tap,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2019-09-28 21:48:16.518][81][info][main] [source/server/server.cc:215]   filters.listener: envoy.listener.original_dst,envoy.listener.original_src,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2019-09-28 21:48:16.518][81][info][main] [source/server/server.cc:218]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.mysql_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.filters.network.zookeeper_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2019-09-28 21:48:16.518][81][info][main] [source/server/server.cc:220]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2019-09-28 21:48:16.518][81][info][main] [source/server/server.cc:222]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.tracers.datadog,envoy.zipkin
[2019-09-28 21:48:16.518][81][info][main] [source/server/server.cc:225]   transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2019-09-28 21:48:16.518][81][info][main] [source/server/server.cc:228]   transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2019-09-28 21:48:16.519][81][info][main] [source/server/server.cc:234] buffer implementation: old (libevent)
[2019-09-28 21:48:16.555][81][warning][misc] [source/common/protobuf/utility.cc:173] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://github.com/envoyproxy/envoy/blob/master/DEPRECATED.md for details.
[2019-09-28 21:48:16.559][81][info][main] [source/server/server.cc:281] admin address: 127.0.0.1:19000
[2019-09-28 21:48:16.560][81][info][config] [source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2019-09-28 21:48:16.560][81][info][config] [source/server/configuration_impl.cc:56] loading 1 cluster(s)
[2019-09-28 21:48:16.565][81][info][upstream] [source/common/upstream/cluster_manager_impl.cc:133] cm init: initializing cds
[2019-09-28 21:48:16.567][81][info][config] [source/server/configuration_impl.cc:60] loading 0 listener(s)
[2019-09-28 21:48:16.567][81][info][config] [source/server/configuration_impl.cc:85] loading tracing configuration
[2019-09-28 21:48:16.567][81][info][config] [source/server/configuration_impl.cc:105] loading stats sink configuration
[2019-09-28 21:48:16.567][81][info][main] [source/server/server.cc:478] starting main dispatch loop
[2019-09-28 21:48:16.573][81][info][upstream] [source/common/upstream/cluster_manager_impl.cc:477] add/update cluster local_app during init
[2019-09-28 21:48:16.580][81][info][upstream] [source/common/upstream/cluster_manager_impl.cc:477] add/update cluster payment.default.dc1.internal.dfa6f152-16c8-ebec-ea74-30fce332aea2.consul during init
[2019-09-28 21:48:16.586][81][info][upstream] [source/common/upstream/cluster_manager_impl.cc:477] add/update cluster notification.default.dc1.internal.dfa6f152-16c8-ebec-ea74-30fce332aea2.consul during init
[2019-09-28 21:48:16.586][81][info][upstream] [source/common/upstream/cluster_manager_impl.cc:113] cm init: initializing secondary clusters
[2019-09-28 21:48:16.587][81][info][upstream] [source/common/upstream/cluster_manager_impl.cc:137] cm init: all clusters initialized
[2019-09-28 21:48:16.587][81][info][main] [source/server/server.cc:462] all clusters initialized. initializing init manager
[2019-09-28 21:48:16.593][81][info][upstream] [source/server/lds_api.cc:74] lds: add/update listener 'public_listener:10.10.0.9:20000'
[2019-09-28 21:48:16.594][81][info][upstream] [source/server/lds_api.cc:74] lds: add/update listener 'payment:127.0.0.1:9091'
[2019-09-28 21:48:16.595][81][info][upstream] [source/server/lds_api.cc:74] lds: add/update listener 'notification:127.0.0.1:9092'
[2019-09-28 21:48:16.595][81][info][config] [source/server/listener_manager_impl.cc:1006] all dependencies initialized. starting workers
[2019-09-28 22:03:27.408][81][info][main] [source/server/drain_manager_impl.cc:63] shutting down parent after drain

If I do a netcat command from inside the sidecar container, this is what I get:

/ # nc -zv localhost 9091
localhost (127.0.0.1:9091) open
/ # nc -zv localhost 9092
localhost (127.0.0.1:9092) open
/ # nc -zv localhost 19000
localhost (127.0.0.1:19000) open

and from my service container, I get connection refused:

$ docker exec -it 9f2c6c55c0b0 sh
/ # curl localhost:9091/ping
curl: (7) Failed to connect to localhost port 9091: Connection refused
/ # curl localhost:9092/ping
curl: (7) Failed to connect to localhost port 9092: Connection refused
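The lds lines in the logs above show the upstream listeners bound to 127.0.0.1 inside the sidecar, so I suspect localhost from my service container can only reach them if both containers share a network namespace. A docker-compose sketch of what I think that would look like (service and container names are placeholders, and the network_mode approach is my assumption, not something from my current setup):

```yaml
# Hypothetical fragment: the booking service joins the sidecar's network
# namespace so localhost:9091/9092 resolve to the Envoy upstream listeners.
booking:
  image: booking-service
  network_mode: "service:booking-sidecar"  # share the sidecar's netns
  depends_on:
    - booking-sidecar
```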

I will try to use the tshark tool to see what is happening