Read: connection reset by peer

I am currently setting up WAN federation between multiple datacenters in Kubernetes, and will be adding a VM datacenter as well. Running "consul members -wan" shows all of the servers from the multiple datacenters, as it should. However, I still get a "500 backend error" when I try to change the datacenter in the web UI,

or I get

" Failed to send ack: read tcp 10.234.68.180:40024->10.234.70.44:8443: read: connection reset by peer from=10.234.73.94:50788"
or
“error=“rpc error getting client: failed to get conn: read tcp 10.234.68.180:56636->10.234.73.94:8443: read: connection reset by peer””

The servers see each other, and I am using a LoadBalancer service as my mesh-gateway service, with a static IP assigned by MetalLB via an annotation. There aren't any closed ports or firewall issues, but I constantly get the RPC errors.

I tried exposeGossipAndRPCPorts: true under server, and exposeGossipPorts: true as well, with no success. I have included the Helm values I am currently using, and would really appreciate any insight into resolving the issue.

primary dc values

global:
  enabled: true
  name: consul
  datacenter: it-k8s-at
  domain: consul
  image: "hashicorp/consul:1.9.2"
  imageK8S: "hashicorp/consul-k8s:0.23.0"
  gossipEncryption:
    secretName: consul-gossip-encryption-key
    secretKey: key
  tls:
    enabled: true
    enableAutoEncrypt: true
    verify: true
    caCert:
      secretName: consul-federation
      secretKey: caCert
    caKey:
      secretName: consul-federation
      secretKey: caKey
  acls:
    enabled: false
  federation:
    enabled: true

ui:
  enabled: true
  service:
    enabled: true
    type: NodePort

connectInject:
  enabled: true

server:
  enabled: true
  connect: true
  storageClass: rbd-fast
  exposeGossipAndRPCPorts: false

client:
  enabled: true
  grpc: true
  hostNetwork: false
  exposeGossipPorts: false

controller:
  enabled: true

meshGateway:
  enabled: true
  globalMode: local
  wanAddress:
    source: "Service"
    port: 8443
  service:
    enabled: true
    type: LoadBalancer
    port: 8443
    annotations: |
      metallb.universe.tf/address-pool: consul
  hostNetwork: false
  consulServiceName: "mesh-gateway"
  containerPort: 8443
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"
    limits:
      memory: "100Mi"
      cpu: "100m"
  initCopyConsulContainer:
    resources:
      requests:
        memory: "25Mi"
        cpu: "50m"
      limits:
        memory: "150Mi"
        cpu: "50m"
  affinity: |
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: {{ template "consul.name" . }}
              release: "{{ .Release.Name }}"
              component: mesh-gateway
          topologyKey: kubernetes.io/hostname
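For completeness, the secondary Kubernetes datacenters use values along these lines. This is a sketch, not my exact file: the datacenter name is a placeholder, and it assumes the consul-federation secret (caCert plus the serverConfigJSON generated by the primary) has been copied into the secondary cluster, which is what the chart's federation flow expects:

```
global:
  enabled: true
  name: consul
  datacenter: it-k8s-secondary   # placeholder name
  gossipEncryption:
    secretName: consul-gossip-encryption-key
    secretKey: key
  tls:
    enabled: true
    enableAutoEncrypt: true
    caCert:
      secretName: consul-federation
      secretKey: caCert
  federation:
    enabled: true

connectInject:
  enabled: true

server:
  enabled: true
  # Load the primary's federation config (primary_gateways, primary_datacenter)
  # from the copied consul-federation secret:
  extraVolumes:
    - type: secret
      name: consul-federation
      items:
        - key: serverConfigJSON
          path: config.json
      load: true

meshGateway:
  enabled: true
```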

@bratate Did you manage to solve this issue? It’s exactly what I’m seeing.

I can get the connection working between different consul clusters in the same cloud provider, but when I try to connect from a separate VM on a different network I get exactly the errors you are seeing.
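On the VM side, my understanding is that the servers have to be told explicitly to route WAN traffic through the mesh gateways instead of dialing the remote servers directly. Roughly what I have in the server agent config (the gateway address is a placeholder, and this assumes Consul 1.8+ mesh-gateway WAN federation with TLS and gossip encryption matching the Kubernetes clusters):

```
# VM datacenter server config (sketch)
primary_datacenter = "it-k8s-at"

# Reach the primary DC through its mesh gateway LoadBalancer IP:
primary_gateways = ["<mesh-gateway-lb-ip>:8443"]

connect {
  enabled = true
  enable_mesh_gateway_wan_federation = true
}
```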