Error: WAN federation between VMs and EKS

  • I deployed Consul 1.8 on EKS with the Helm chart (datacenter eks-dev) and created a mesh gateway behind an AWS load balancer.

meshGateway:
  enabled: true
  globalMode: local
  replicas: 3
  wanAddress:
    source: "Service"
    port: 443
    static: ""
  service:
    enabled: true
    type: LoadBalancer
    port: 443
    nodePort: null
    annotations: |
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    additionalSpec: null
  imageEnvoy: envoyproxy/envoy-alpine:v1.14.2
  hostNetwork: false
  dnsPolicy: null
  consulServiceName: "mesh-gateway"
  containerPort: 8443
  hostPort: null
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"
    limits:
      memory: "100Mi"
      cpu: "100m"
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ template "consul.name" . }}
              release: "{{ .Release.Name }}"
              component: mesh-gateway
          topologyKey: kubernetes.io/hostname
  tolerations: null
  nodeSelector: null
  priorityClassName: ""
  annotations: null
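The load balancer DNS name used in the checks below is the external address of the mesh gateway service. A sketch of how to read it (the release name consul is an assumption, adjust to your install):

    # External DNS name of the mesh gateway NLB (release name "consul" is assumed).
    kubectl get svc consul-mesh-gateway \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'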

When I check:
nc -v <load balancer dns> 443
Connection to <load balancer dns> 443 port [tcp/https] succeeded!
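As an extra check (just a sketch, using the same placeholder), the TLS handshake itself can be verified rather than only the TCP connect:

    # Confirm the gateway behind the NLB actually presents a certificate,
    # not just that TCP port 443 is open.
    openssl s_client -connect <load balancer dns>:443 </dev/null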

I created another datacenter following the guide "WAN Federation Through Mesh Gateways - VMs and Kubernetes" on HashiCorp Developer.

My config on the VMs (do-singapore):

{
  "addresses": {
    "dns": "0.0.0.0",
    "grpc": "0.0.0.0",
    "http": "0.0.0.0",
    "https": "0.0.0.0"
  },
  "advertise_addr": "10.130.118.203",
  "advertise_addr_wan": "",
  "bind_addr": "0.0.0.0",
  "bootstrap": false,
  "bootstrap_expect": 3,
  "ca_file": "/etc/consul/ssl/ca.crt",
  "cert_file": "/etc/consul/ssl/server.crt",
  "client_addr": "0.0.0.0",
  "data_dir": "/var/consul",
  "datacenter": "do-singapore",
  "disable_update_check": false,
  "domain": "consul1.do-singapore.consul.uizadev.io",
  "enable_local_script_checks": true,
  "enable_script_checks": false,
  "encrypt": "",
  "key_file": "/etc/consul/ssl/server.key",
  "log_file": "/var/log/consul/consul.log",
  "log_level": "INFO",
  "log_rotate_bytes": 0,
  "log_rotate_duration": "24h",
  "log_rotate_max_files": 0,
  "node_name": "consul1",
  "performance": {
    "leave_drain_time": "5s",
    "raft_multiplier": 1,
    "rpc_hold_timeout": "7s"
  },
  "ports": {
    "dns": 8600,
    "grpc": 8502,
    "http": 8500,
    "https": 8501,
    "serf_lan": 8301,
    "serf_wan": 8302,
    "server": 8300
  },
  "raft_protocol": 3,
  "retry_interval": "30s",
  "retry_join": [
    "10.130.119.102",
    "10.130.118.203",
    "10.130.118.183"
  ],
  "retry_max": 0,
  "server": true,
  "tls_min_version": "tls12",
  "tls_prefer_server_cipher_suites": false,
  "translate_wan_addrs": true,
  "enable_central_service_config": true,
  "ui": true,
  "verify_incoming": false,
  "verify_incoming_https": false,
  "verify_incoming_rpc": true,
  "verify_outgoing": true,
  "verify_server_hostname": false,
  "primary_datacenter": "eks-dev",
  "primary_gateways": ["<load balancer dns>:443"],
  "connect": {
    "enabled": true,
    "enable_mesh_gateway_wan_federation": true
  }
}

We got this error:

2020-07-01T04:37:41.788Z [WARN] agent: (WAN) couldn't join: number_of_nodes=0 error="1 error occurred:
* Failed to join 192.0.2.2: read tcp 128.199.208.31:33014->54.151.170.130:443: read: connection reset by peer

Can someone help me with this problem?

Hi,
It looks like you haven't enabled federation or TLS in your Kubernetes Helm config. You need to use the configuration from the docs: https://www.consul.io/docs/k8s/installation/multi-cluster/kubernetes#primary-datacenter

This is likely why the gateway is giving you a connection reset by peer.
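For example, a minimal sketch of enabling those values on the primary (EKS) datacenter, assuming the hashicorp/consul chart and a release named consul:

    # Sketch only: enable federation and TLS on the primary datacenter,
    # keeping your existing values. Chart and release names are assumptions.
    helm upgrade consul hashicorp/consul --reuse-values \
      --set global.federation.enabled=true \
      --set global.federation.createFederationSecret=true \
      --set global.tls.enabled=true \
      --set meshGateway.enabled=true \
      --set connectInject.enabled=true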

I have solved this problem. Thank you @lkysow, but federation was already enabled on EKS. On the VMs, we needed to set:

"verify_incoming_rpc": true
"verify_outgoing": true
"verify_server_hostname": true

Sorry, so is everything working now?

Note that this config is what's described in the docs: https://www.consul.io/docs/k8s/installation/multi-cluster/vms-and-kubernetes
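Once those settings are in place, a quick sanity check from any server should show both datacenters, e.g.:

    # Both eks-dev and do-singapore should appear once federation is healthy.
    consul members -wan
    consul catalog datacenters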

I'm having a similar problem on GCP, between a VM and a GKE cluster. I have got it to work between GKE clusters.
The primary cluster config:

global:
  name: consul
  image: consul:1.8.0
  imageK8S: hashicorp/consul-k8s:0.16.0
  datacenter: dc1
  federation:
    enabled: true
    createFederationSecret: true
  tls:
    enabled: true
meshGateway:
  enabled: true
connectInject:
  enabled: true
The server on the VM has the following config:

cert_file = "/<location>/consul/config/vm-dc-server-consul-0.pem"
key_file = "/<location>/consul/config/vm-dc-server-consul-0-key.pem"
ca_file = "/<location>/consul/config/consul-agent-ca.pem"

primary_gateways = ["<IP of mesh service>:443"]

# Other server settings
server = true
datacenter = "vm"
data_dir = "/<location>/consul/data"
enable_central_service_config = true
primary_datacenter = "dc1"

connect {
  enabled = true
  enable_mesh_gateway_wan_federation = true
}

verify_incoming_rpc = true
verify_outgoing = true
verify_server_hostname = true

ports {
  https = 8501
  http = 8500
  grpc = 8502
}
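The vm-gateway service that shows up in the logs below is a mesh gateway started with Consul's built-in Envoy integration, presumably with something along these lines (a sketch; the service name, addresses, and ports are assumptions taken from the log output):

    # Sketch of registering and starting a mesh gateway on the VM.
    # -expose-servers publishes the local Consul servers through this
    # gateway for WAN federation.
    consul connect envoy -mesh-gateway -register \
      -service "vm-gateway" \
      -address "10.154.0.17:7051" \
      -wan-address "<public IP of VM>:7051" \
      -admin-bind 127.0.0.1:19005 \
      -expose-servers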

The logs from the VM are:

    2020-07-09T07:25:58.304Z [ERROR] agent.server: failed to establish leadership: error="Failed to set the intermediate certificate with the CA provider: could not verify intermediate cert against root: x509: certificate has expired or is not yet valid: current time 2020-07-09T07:25:58Z is before 2020-07-09T07:27:05Z"
    2020-07-09T07:25:58.304Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=0 retry_limit=3 error="cannot find peer"
    2020-07-09T07:25:58.304Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=1 retry_limit=3 error="cannot find peer"
    2020-07-09T07:25:58.304Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=2 retry_limit=3 error="cannot find peer"
    2020-07-09T07:25:58.304Z [ERROR] agent.server: failed to transfer leadership: error="failed to transfer leadership in 3 attempts"
    2020-07-09T07:25:58.449Z [WARN]  agent: Check socket connection failed: check=service:vm-gateway error="dial tcp 10.154.0.17:7051: connect: connection refused"
    2020-07-09T07:25:58.449Z [WARN]  agent: Check is now critical: check=service:vm-gateway
    2020-07-09T07:26:01.820Z [WARN]  agent.server.rpc: RPC request for DC is currently failing as no path was found: datacenter=dc2 method=Internal.ServiceDump
    2020-07-09T07:26:03.343Z [ERROR] agent.server: failed to establish leadership: error="Failed to set the intermediate certificate with the CA provider: could not verify intermediate cert against root: x509: certificate has expired or is not yet valid: current time 2020-07-09T07:26:03Z is before 2020-07-09T07:27:10Z"
    2020-07-09T07:26:03.343Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=0 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:03.343Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=1 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:03.343Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=2 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:03.343Z [ERROR] agent.server: failed to transfer leadership: error="failed to transfer leadership in 3 attempts"
    2020-07-09T07:26:08.370Z [ERROR] agent.server: failed to establish leadership: error="Failed to set the intermediate certificate with the CA provider: could not verify intermediate cert against root: x509: certificate has expired or is not yet valid: current time 2020-07-09T07:26:08Z is before 2020-07-09T07:27:15Z"
    2020-07-09T07:26:08.370Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=0 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:08.370Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=1 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:08.370Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=2 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:08.370Z [ERROR] agent.server: failed to transfer leadership: error="failed to transfer leadership in 3 attempts"
    2020-07-09T07:26:08.450Z [WARN]  agent: Check socket connection failed: check=service:vm-gateway error="dial tcp 10.154.0.17:7051: connect: connection refused"
    2020-07-09T07:26:08.450Z [WARN]  agent: Check is now critical: check=service:vm-gateway
    2020-07-09T07:26:13.397Z [ERROR] agent.server: failed to establish leadership: error="Failed to set the intermediate certificate with the CA provider: could not verify intermediate cert against root: x509: certificate has expired or is not yet valid: current time 2020-07-09T07:26:13Z is before 2020-07-09T07:27:20Z"
    2020-07-09T07:26:13.397Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=0 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:13.397Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=1 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:13.397Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=2 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:13.397Z [ERROR] agent.server: failed to transfer leadership: error="failed to transfer leadership in 3 attempts"
    2020-07-09T07:26:16.740Z [INFO]  agent.server.memberlist.wan: memberlist: Suspect consul-server-1.dc1 has failed, no acks received
    2020-07-09T07:26:17.731Z [WARN]  agent: grpc: Server.Serve failed to complete security handshake from "127.0.0.1:51388": tls: first record does not look like a TLS handshake
    2020-07-09T07:26:18.428Z [ERROR] agent.server: failed to establish leadership: error="Failed to set the intermediate certificate with the CA provider: could not verify intermediate cert against root: x509: certificate has expired or is not yet valid: current time 2020-07-09T07:26:18Z is before 2020-07-09T07:27:25Z"
    2020-07-09T07:26:18.428Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=0 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:18.428Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=1 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:18.428Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=2 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:18.428Z [ERROR] agent.server: failed to transfer leadership: error="failed to transfer leadership in 3 attempts"
    2020-07-09T07:26:18.450Z [WARN]  agent: Check socket connection failed: check=service:vm-gateway error="dial tcp 10.154.0.17:7051: connect: connection refused"
    2020-07-09T07:26:18.450Z [WARN]  agent: Check is now critical: check=service:vm-gateway
    2020-07-09T07:26:18.976Z [WARN]  agent.server.rpc: RPC request for DC is currently failing as no path was found: datacenter=dc2 method=Internal.ServiceDump
    2020-07-09T07:26:19.395Z [WARN]  agent.server.rpc: RPC request for DC is currently failing as no path was found: datacenter=dc2 method=Internal.ServiceDump
    2020-07-09T07:26:21.925Z [WARN]  agent: grpc: Server.Serve failed to complete security handshake from "127.0.0.1:51392": tls: first record does not look like a TLS handshake
    2020-07-09T07:26:22.052Z [WARN]  agent.server.rpc: RPC request for DC is currently failing as no path was found: datacenter=dc2 method=Internal.ServiceDump
    2020-07-09T07:26:23.455Z [ERROR] agent.server: failed to establish leadership: error="Failed to set the intermediate certificate with the CA provider: could not verify intermediate cert against root: x509: certificate has expired or is not yet valid: current time 2020-07-09T07:26:23Z is before 2020-07-09T07:27:30Z"
    2020-07-09T07:26:23.455Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=0 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:23.455Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=1 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:23.455Z [ERROR] agent.server: failed to transfer leadership attempt, will retry: attempt=2 retry_limit=3 error="cannot find peer"
    2020-07-09T07:26:23.455Z [ERROR] agent.server: failed to transfer leadership: error="failed to transfer leadership in 3 attempts"

And the Envoy proxy shows the following logs:

==> Registered service: vm-gateway
[2020-07-09 07:28:25.174][4737][info][main] [external/envoy/source/server/server.cc:255] initializing epoch 0 (hot restart version=disabled)
[2020-07-09 07:28:25.174][4737][info][main] [external/envoy/source/server/server.cc:257] statically linked extensions:
[2020-07-09 07:28:25.174][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.filters.http: envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.gzip, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.lua, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.squash, envoy.filters.http.tap, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.gzip, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash
[2020-07-09 07:28:25.174][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2020-07-09 07:28:25.174][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.dubbo_proxy.protocols: dubbo
[2020-07-09 07:28:25.174][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2020-07-09 07:28:25.174][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.thrift_proxy.transports: auto, framed, header, unframed
[2020-07-09 07:28:25.174][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2020-07-09 07:28:25.174][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.dubbo_proxy.route_matchers: default
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.udp_listeners: raw_udp_listener
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.dubbo_proxy.serializers: dubbo.hessian2
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.health_checkers: envoy.health_checkers.redis
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.stats_sinks: envoy.dog_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.statsd
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.resolvers: envoy.ip
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   http_cache_factory: envoy.extensions.http.cache.simple
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.tcp_grpc, envoy.file_access_log, envoy.http_grpc_access_log, envoy.tcp_grpc_access_log
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2020-07-09 07:28:25.175][4737][info][main] [external/envoy/source/server/server.cc:259]   envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2020-07-09 07:28:25.196][4737][warning][misc] [external/envoy/source/common/protobuf/utility.cc:198] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cluster.proto. This configuration will be removed from Envoy soon. Please see https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated for details.
[2020-07-09 07:28:25.197][4737][info][main] [external/envoy/source/server/server.cc:340] admin address: 127.0.0.1:19005
[2020-07-09 07:28:25.198][4737][info][main] [external/envoy/source/server/server.cc:459] runtime: layers:
  - name: static_layer
    static_layer:
      envoy.deprecated_features:envoy.config.trace.v2.ZipkinConfig.HTTP_JSON_V1: true
      envoy.deprecated_features:envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager.Tracing.operation_name: true
      envoy.deprecated_features:envoy.api.v2.Cluster.tls_context: true
[2020-07-09 07:28:25.198][4737][info][config] [external/envoy/source/server/configuration_impl.cc:103] loading tracing configuration
[2020-07-09 07:28:25.198][4737][info][config] [external/envoy/source/server/configuration_impl.cc:69] loading 0 static secret(s)
[2020-07-09 07:28:25.198][4737][info][config] [external/envoy/source/server/configuration_impl.cc:75] loading 1 cluster(s)
[2020-07-09 07:28:25.206][4737][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:167] cm init: initializing cds
[2020-07-09 07:28:25.208][4737][info][config] [external/envoy/source/server/configuration_impl.cc:79] loading 0 listener(s)
[2020-07-09 07:28:25.208][4737][info][config] [external/envoy/source/server/configuration_impl.cc:129] loading stats sink configuration
[2020-07-09 07:28:25.209][4737][info][main] [external/envoy/source/server/server.cc:554] starting main dispatch loop
[2020-07-09 07:28:25.210][4737][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-07-09 07:28:25.514][4737][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
[2020-07-09 07:28:26.474][4737][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:92] StreamAggregatedResources gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination

The logs from the Kubernetes Consul server:

    2020-07-09T07:31:59.081Z [WARN]  agent.server.rpc: RPC request to DC is currently failing as no server can be reached: datacenter=vm
    2020-07-09T07:32:01.050Z [WARN]  agent.server.rpc: RPC request to DC is currently failing as no server can be reached: datacenter=vm
    2020-07-09T07:32:03.018Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 34.105.243.87:8302: read tcp 10.32.1.8:38300->10.32.2.14:8443: read: connection reset by peer
    2020-07-09T07:32:03.514Z [INFO]  agent.server.memberlist.wan: memberlist: Suspect simba.vm has failed, no acks received
    2020-07-09T07:32:03.515Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send compound ping and suspect message to 34.105.243.87:8302: read tcp 10.32.1.8:46742->10.32.1.7:8443: read: connection reset by peer

Can someone tell me what mistake I have made?

The ports specified for the Envoy mesh gateway, other than the admin port, are not listed by netstat or other tools, and probing the admin URL's /ready endpoint returns LIVE.
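For reference, the listeners Envoy actually configured can be inspected through the admin endpoint from the logs above (127.0.0.1:19005 here); if the xDS stream to Consul keeps resetting, the listener list stays empty even though /ready reports LIVE:

    # Inspect what the gateway's Envoy has actually been configured with.
    curl -s http://127.0.0.1:19005/listeners
    curl -s http://127.0.0.1:19005/config_dump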

It's working now, thank you for your help.

Hi, I see you've created a new discuss post so I'll respond there: Error WAN federation between GKE cluster and VM's on GCP