Hello!
I was following the "WAN Federation Through Mesh Gateways - VMs and Kubernetes" guide on HashiCorp Developer, but the federation failed.
Log from the Consul server in Kubernetes:
2020-07-23T13:39:00.306Z [INFO] agent.server.serf.wan: serf: EventMemberJoin: yao-dc1-server-0.dc1 10.1.168.149
2020-07-23T13:39:00.306Z [INFO] agent.server: Handled event for server in area: event=member-join server=yao-dc1-server-0.dc1 area=wan
2020-07-23T13:39:00.401Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 10.1.168.149:8302: EOF
2020-07-23T13:39:00.799Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=10.1.168.149:8300 datacenter=dc1 method=Internal.ServiceDump error="rpc error getting client: failed to get conn: EOF"
2020-07-23T13:39:00.902Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 10.1.168.149:8302: EOF
2020-07-23T13:39:01.743Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=10.1.168.149:8300 datacenter=dc1 method=Internal.ServiceDump error="rpc error getting client: failed to get conn: EOF"
2020-07-23T13:39:01.901Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 10.1.168.149:8302: EOF
2020-07-23T13:39:02.691Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=10.1.168.149:8300 datacenter=dc1 method=Internal.ServiceDump error="rpc error getting client: failed to get conn: EOF"
2020-07-23T13:39:04.900Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send ack: EOF from=10.42.1.112:55954
2020-07-23T13:39:05.133Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=10.1.168.149:8300 datacenter=dc1 method=Internal.ServiceDump error="rpc error getting client: failed to get conn: EOF"
2020-07-23T13:39:09.369Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=10.1.168.149:8300 datacenter=dc1 method=Internal.ServiceDump error="rpc error getting client: failed to get conn: EOF"
2020-07-23T13:39:11.669Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=10.1.168.149:8300 datacenter=dc1 method=Internal.ServiceDump error="rpc error getting client: failed to get conn: EOF"
2020-07-23T13:39:12.998Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=10.1.168.149:8300 datacenter=dc1 method=Internal.ServiceDump error="rpc error getting client: failed to get conn: EOF"
2020-07-23T13:39:13.399Z [INFO] agent.server.memberlist.wan: memberlist: Suspect yao-dc1-server-0.dc1 has failed, no acks received
2020-07-23T13:39:13.501Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send ping: EOF
2020-07-23T13:39:44.703Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=10.1.168.149:8300 datacenter=dc1 method=Internal.ServiceDump error="rpc error getting client: failed to get conn: EOF"
2020-07-23T13:39:44.808Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send ack: EOF from=10.42.1.112:55954
2020-07-23T13:39:44.808Z [WARN] agent.server.memberlist.wan: memberlist: Refuting a suspect message (from: yao-dc1-server-0.dc1)
2020-07-23T13:39:44.902Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 10.1.168.149:8302: EOF
2020-07-23T13:39:58.401Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send ping: EOF
2020-07-23T13:39:59.013Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=10.1.168.149:8300 datacenter=dc1 method=Internal.ServiceDump error="rpc error getting client: failed to get conn: EOF"
2020-07-23T13:40:03.401Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send ping: EOF
2020-07-23T13:40:34.808Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send ack: EOF from=10.42.1.112:55954
2020-07-23T13:40:35.638Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=10.1.168.149:8300 datacenter=dc1 method=Internal.ServiceDump error="rpc error getting client: failed to get conn: EOF"
2020-07-23T13:40:35.999Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=10.1.168.149:8300 datacenter=dc1 method=Internal.ServiceDump error="rpc error getting client: failed to get conn: EOF"
2020-07-23T13:40:39.614Z [ERROR] agent.server.memberlist.wan: memberlist: Push/Pull with yao-dc1-server-0.dc1 failed: EOF
2020-07-23T13:40:40.630Z [WARN] agent.server.rpc: RPC request to DC is currently failing as no server can be reached: datacenter=dc1
2020-07-23T13:40:43.401Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send ping: EOF
2020-07-23T13:40:47.425Z [WARN] agent.server.rpc: RPC request to DC is currently failing as no server can be reached: datacenter=dc1
2020-07-23T13:40:47.750Z [WARN] agent.server.rpc: RPC request to DC is currently failing as no server can be reached: datacenter=dc1
2020-07-23T13:40:48.499Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send ping: EOF
2020-07-23T13:40:49.726Z [WARN] agent.server.rpc: RPC request to DC is currently failing as no server can be reached: datacenter=dc1
2020-07-23T13:40:50.297Z [WARN] agent.server.rpc: RPC request to DC is currently failing as no server can be reached: datacenter=dc1
2020-07-23T13:40:51.602Z [WARN] agent.server.rpc: RPC request to DC is currently failing as no server can be reached: datacenter=dc1
Netstat of the VM Consul server:
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.1:8600 0.0.0.0:* LISTEN 999 37945 3462/consul
tcp 0 0 10.1.168.149:8443 0.0.0.0:* LISTEN 1000 37288 3497/envoy
tcp 0 0 127.0.0.1:8443 0.0.0.0:* LISTEN 1000 37281 3497/envoy
tcp 0 0 127.0.0.1:19005 0.0.0.0:* LISTEN 1000 37269 3497/envoy
tcp 0 0 127.0.0.1:8500 0.0.0.0:* LISTEN 999 37947 3462/consul
tcp 0 0 127.0.0.1:8501 0.0.0.0:* LISTEN 999 37948 3462/consul
tcp 0 0 127.0.0.1:8502 0.0.0.0:* LISTEN 999 37951 3462/consul
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 16412 1236/sshd
tcp6 0 0 :::8300 :::* LISTEN 999 37186 3462/consul
tcp6 0 0 :::8301 :::* LISTEN 999 37189 3462/consul
tcp6 0 0 :::8302 :::* LISTEN 999 37187 3462/consul
tcp6 0 0 :::22 :::* LISTEN 0 16421 1236/sshd
udp 0 0 0.0.0.0:68 0.0.0.0:* 0 12064 850/dhclient
udp 0 0 127.0.0.1:8600 0.0.0.0:* 999 37944 3462/consul
udp6 0 0 :::8301 :::* 999 37190 3462/consul
udp6 0 0 :::8302 :::* 999 37188 3462/consul
I can nc to ports 8300 and 8302 on the VM Consul server from the Kubernetes Consul server.
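For reference, the connectivity check was roughly this (run from inside the Kubernetes Consul server pod; the -vz flags assume an OpenBSD-style netcat, so adjust if only busybox nc is available):

nc -vz 10.1.168.149 8300   # server RPC port on the VM
nc -vz 10.1.168.149 8302   # WAN serf port on the VM

Both connections open, which is why the EOF errors above are confusing to me.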
Log from the VM Consul server:
2020-07-23T13:39:01.381Z [INFO] agent: Synced service: service=gateway-secondary
2020-07-23T13:39:01.394Z [INFO] agent: Synced service: service=gateway-secondary
2020-07-23T13:39:01.464Z [INFO] agent.server: federation state anti-entropy synced
2020-07-23T13:39:03.864Z [INFO] agent.server.memberlist.wan: memberlist: Suspect consul-server-0.dc2 has failed, no acks received
2020-07-23T13:39:04.161Z [INFO] agent.server.serf.wan: serf: attempting reconnect to consul-server-0.dc2 10.42.1.114:8302
2020-07-23T13:39:04.262Z [WARN] agent.server.memberlist.wan: memberlist: Refuting a suspect message (from: yao-dc1-server-0.dc1)
2020-07-23T13:39:04.262Z [INFO] agent.server.serf.wan: serf: EventMemberJoin: consul-server-0.dc2 10.42.1.114
2020-07-23T13:39:04.262Z [INFO] agent.server: Handled event for server in area: event=member-join server=consul-server-0.dc2 area=wan
2020-07-23T13:39:06.008Z [INFO] agent: Synced check: check=service:gateway-secondary
2020-07-23T13:39:06.051Z [INFO] agent.server: federation state anti-entropy synced
2020-07-23T13:39:48.864Z [INFO] agent.server.memberlist.wan: memberlist: Suspect consul-server-0.dc2 has failed, no acks received
2020-07-23T13:40:18.865Z [INFO] agent.server.memberlist.wan: memberlist: Marking consul-server-0.dc2 as failed, suspect timeout reached (0 peer confirmations)
2020-07-23T13:40:18.865Z [INFO] agent.server.serf.wan: serf: EventMemberFailed: consul-server-0.dc2 10.42.1.114
2020-07-23T13:40:18.865Z [INFO] agent.server: Handled event for server in area: event=member-failed server=consul-server-0.dc2 area=wan
2020-07-23T13:40:28.865Z [INFO] agent.server.memberlist.wan: memberlist: Suspect consul-server-0.dc2 has failed, no acks received
2020-07-23T13:40:34.263Z [INFO] agent.server.serf.wan: serf: attempting reconnect to consul-server-0.dc2 10.42.1.114:8302
2020-07-23T13:40:34.359Z [INFO] agent.server.serf.wan: serf: EventMemberJoin: consul-server-0.dc2 10.42.1.114
2020-07-23T13:40:34.359Z [INFO] agent.server: Handled event for server in area: event=member-join server=consul-server-0.dc2 area=wan
2020-07-23T13:41:18.865Z [INFO] agent.server.memberlist.wan: memberlist: Suspect consul-server-0.dc2 has failed, no acks received
2020-07-23T13:41:48.865Z [INFO] agent.server.memberlist.wan: memberlist: Marking consul-server-0.dc2 as failed, suspect timeout reached (0 peer confirmations)
2020-07-23T13:41:48.865Z [INFO] agent.server.serf.wan: serf: EventMemberFailed: consul-server-0.dc2 10.42.1.114
2020-07-23T13:41:48.865Z [INFO] agent.server: Handled event for server in area: event=member-failed server=consul-server-0.dc2 area=wan
2020-07-23T13:42:03.864Z [INFO] agent.server.memberlist.wan: memberlist: Suspect consul-server-0.dc2 has failed, no acks received
2020-07-23T13:42:04.359Z [INFO] agent.server.serf.wan: serf: attempting reconnect to consul-server-0.dc2 10.42.1.114:8302
2020-07-23T13:42:04.459Z [INFO] agent.server.serf.wan: serf: EventMemberJoin: consul-server-0.dc2 10.42.1.114
2020-07-23T13:42:04.460Z [INFO] agent.server: Handled event for server in area: event=member-join server=consul-server-0.dc2 area=wan
2020-07-23T13:42:53.864Z [INFO] agent.server.memberlist.wan: memberlist: Suspect consul-server-0.dc2 has failed, no acks received
2020-07-23T13:43:23.864Z [INFO] agent.server.memberlist.wan: memberlist: Marking consul-server-0.dc2 as failed, suspect timeout reached (0 peer confirmations)
Log from the VM Envoy mesh gateway:
==> Registered service: gateway-secondary
[2020-07-23 13:39:01.423][3497][info][main] [external/envoy/source/server/server.cc:251] initializing epoch 0 (hot restart version=disabled)
[2020-07-23 13:39:01.423][3497][info][main] [external/envoy/source/server/server.cc:253] statically linked extensions:
[2020-07-23 13:39:01.423][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.thrift_proxy.transports: auto, framed, header, unframed
[2020-07-23 13:39:01.423][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.stats_sinks: envoy.dog_statsd, envoy.metrics_service, envoy.stat_sinks.hystrix, envoy.statsd
[2020-07-23 13:39:01.423][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.opencensus, envoy.tracers.xray, envoy.zipkin
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.udp_listeners: raw_udp_listener
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.access_loggers: envoy.file_access_log, envoy.http_grpc_access_log, envoy.tcp_grpc_access_log
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.previous_hosts
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.dubbo_proxy.serializers: dubbo.hessian2
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.resolvers: envoy.ip
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.filters.udp_listener: envoy.filters.udp_listener.udp_proxy
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.dubbo_proxy, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mysql_proxy, envoy.filters.network.rbac, envoy.filters.network.sni_cluster, envoy.filters.network.thrift_proxy, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.filters.http: envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_stats, envoy.filters.http.header_to_metadata, envoy.filters.http.jwt_authn, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.rbac, envoy.filters.http.tap, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.gzip, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.filters.listener: envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.health_checkers: envoy.health_checkers.redis
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.dubbo_proxy.protocols: dubbo
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2020-07-23 13:39:01.424][3497][info][main] [external/envoy/source/server/server.cc:255] envoy.dubbo_proxy.route_matchers: default
[2020-07-23 13:39:01.482][3497][warning][misc] [external/envoy/source/common/protobuf/utility.cc:441] Using deprecated option "envoy.api.v2.Cluster.hosts" from file cluster.proto. This configuration will be removed from Envoy soon. Please see the Envoy deprecated features documentation for details.
[2020-07-23 13:39:01.482][3497][warning][misc] [external/envoy/source/common/protobuf/utility.cc:441] Using deprecated option "envoy.api.v2.Cluster.tls_context" from file cluster.proto. This configuration will be removed from Envoy soon. Please see the Envoy deprecated features documentation for details.
[2020-07-23 13:39:01.482][3497][info][main] [external/envoy/source/server/server.cc:336] admin address: 127.0.0.1:19005
[2020-07-23 13:39:01.485][3497][info][main] [external/envoy/source/server/server.cc:455] runtime: layers: - name: static_layer
static_layer:
envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1: true
envoy.deprecated_features:envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager.Tracing.operation_name: true
envoy.deprecated_features:envoy.api.v2.Cluster.tls_context: true
[2020-07-23 13:39:01.485][3497][info][config] [external/envoy/source/server/configuration_impl.cc:62] loading 0 static secret(s)
[2020-07-23 13:39:01.485][3497][info][config] [external/envoy/source/server/configuration_impl.cc:68] loading 1 cluster(s)
[2020-07-23 13:39:01.498][3497][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:167] cm init: initializing cds
[2020-07-23 13:39:01.501][3497][info][config] [external/envoy/source/server/configuration_impl.cc:72] loading 0 listener(s)
[2020-07-23 13:39:01.501][3497][info][config] [external/envoy/source/server/configuration_impl.cc:97] loading tracing configuration
[2020-07-23 13:39:01.502][3497][info][config] [external/envoy/source/server/configuration_impl.cc:116] loading stats sink configuration
[2020-07-23 13:39:01.502][3497][info][main] [external/envoy/source/server/server.cc:550] starting main dispatch loop
[2020-07-23 13:39:01.506][3497][info][upstream] [external/envoy/source/common/upstream/cds_api_impl.cc:74] cds: add 1 cluster(s), remove 1 cluster(s)
[2020-07-23 13:39:01.532][3497][info][upstream] [external/envoy/source/common/upstream/cds_api_impl.cc:90] cds: add/update cluster "dc2.internal.9560fc88-3471-6a9c-c64c-9174e676b207.consul"
[2020-07-23 13:39:01.532][3497][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:145] cm init: initializing secondary clusters
[2020-07-23 13:39:01.533][3497][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:171] cm init: all clusters initialized
[2020-07-23 13:39:01.533][3497][info][main] [external/envoy/source/server/server.cc:529] all clusters initialized. initializing init manager
[2020-07-23 13:39:01.536][3497][info][upstream] [external/envoy/source/server/lds_api.cc:73] lds: add/update listener "lan:127.0.0.1:8443"
[2020-07-23 13:39:01.536][3497][warning][misc] [external/envoy/source/common/protobuf/utility.cc:441] Using deprecated option "envoy.api.v2.listener.Filter.config" from file listener_components.proto. This configuration will be removed from Envoy soon. Please see the Envoy deprecated features documentation for details.
[2020-07-23 13:39:01.537][3497][info][upstream] [external/envoy/source/server/lds_api.cc:73] lds: add/update listener "wan:10.1.168.149:8443"
[2020-07-23 13:39:01.537][3497][info][config] [external/envoy/source/server/listener_manager_impl.cc:707] all dependencies initialized. starting workers
[2020-07-23 13:39:01.561][3497][info][upstream] [external/envoy/source/common/upstream/cds_api_impl.cc:74] cds: add 1 cluster(s), remove 1 cluster(s)
[2020-07-23 13:39:01.562][3497][warning][misc] [external/envoy/source/common/protobuf/utility.cc:441] Using deprecated option "envoy.api.v2.listener.Filter.config" from file listener_components.proto. This configuration will be removed from Envoy soon. Please see the Envoy deprecated features documentation for details.
[external/envoy/source/server/drain_manager_impl.cc:68] shutting down parent after drain
Mesh gateway log in Kubernetes:
[2020-07-23 04:05:55.200][1][info][upstream] [source/common/upstream/cds_api_impl.cc:93] cds: add/update cluster "dc1.internal.9560fc88-3471-6a9c-c64c-9174e676b207.consul"
[2020-07-23 04:05:55.307][1][warning][misc] [bazel-out/k8-opt/bin/source/extensions/common/_virtual_includes/utility_lib/extensions/common/utility.h:65] Using deprecated extension name "envoy.listener.tls_inspector" for "envoy.filters.listener.tls_inspector". This name will be removed from Envoy soon. Please see the Envoy deprecated features documentation for details.
[2020-07-23 04:05:55.400][1][info][upstream] [source/server/lds_api.cc:76] lds: add/update listener "default:10.42.1.112:8443"
[2020-07-23 04:09:17.760][1][info][upstream] [source/common/upstream/cds_api_impl.cc:77] cds: add 4 cluster(s), remove 1 cluster(s)
[2020-07-23 04:09:26.111][1][info][upstream] [source/common/upstream/cds_api_impl.cc:77] cds: add 4 cluster(s), remove 1 cluster(s)
The Helm config for the Kubernetes cluster: I only bootstrapped one server, and I disabled ACLs and gossip encryption. Instead of using a LoadBalancer, I used a NodePort service to expose the mesh gateway. A sketch of the values is below.
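Roughly this, from memory rather than the exact file (key names as in the hashicorp/consul Helm chart I was using; the 32001 nodePort matches primary_gateways in the VM config further down):

global:
  name: consul
  datacenter: dc2
  tls:
    enabled: true
  federation:
    enabled: true
    createFederationSecret: true
  acls:
    manageSystemACLs: false   # ACLs disabled
  # gossipEncryption left unset, i.e. disabled
server:
  replicas: 1
  bootstrapExpect: 1
connectInject:
  enabled: true
meshGateway:
  enabled: true
  service:
    type: NodePort
    nodePort: 32001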
My VM server config:
cert_file = "/home/ubuntu/dc1-server-consul-0.pem"
key_file = "/home/ubuntu/dc1-server-consul-0-key.pem"
ca_file = "/home/ubuntu/consul-agent-ca.pem"
primary_gateways = ["10.1.168.145:32001"]
server = true
bootstrap_expect = 1
datacenter = "dc1"
data_dir = "/opt/consul"
enable_central_service_config = true
primary_datacenter = "dc2"
connect {
enabled = true
enable_mesh_gateway_wan_federation = true
}
verify_incoming_rpc = true
verify_outgoing = true
verify_server_hostname = true
ports {
https = 8501
http = 8500
grpc = 8502
}
The command I used to launch the mesh gateway on the VM server:
consul connect envoy -mesh-gateway -register \
  -service "gateway-secondary" \
  -address "127.0.0.1:8443" \
  -wan-address "10.1.168.149:8443" \
  -admin-bind 127.0.0.1:19005 \
  -grpc-addr=https://127.0.0.1:8502 \
  -ca-file=/home/ubuntu/consul-agent-ca.pem
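To confirm the gateway came up with both listeners, I can hit the standard Envoy admin endpoints on the -admin-bind address from the command above:

curl -s http://127.0.0.1:19005/listeners   # shows lan:127.0.0.1:8443 and wan:10.1.168.149:8443, matching the lds log lines
curl -s http://127.0.0.1:19005/clusters | grep dc2   # the remote-DC cluster Consul pushes via xDS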
From the VM server I was able to list the services in the Kubernetes cluster; however, when I query the services in the VM cluster from the Kubernetes Consul server I get:
Error listing services: Unexpected response code: 500 (Remote DC has no server currently reachable)
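For context, these are roughly the commands behind the statements above (the first run on the VM, the second inside the Kubernetes Consul server pod; TLS flags match the files and ports in the server config):

consul members -wan -http-addr=https://127.0.0.1:8501 -ca-file=/home/ubuntu/consul-agent-ca.pem
consul catalog services -datacenter=dc1   # this is the query that returns the 500 above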
I followed this guide to set up Consul on the VM.
Can someone please help me see what's wrong here?