Consul Helm with very similar values does not behave the same

Hello,

My setup is pretty simple: a primary DC hosted on VMs and two secondary datacenters hosted on K8s. I use the Helm chart to deploy Consul on my clusters.

I’m experiencing a weird behaviour: both the secrets and the configuration are very similar, yet k8s-1 can access any service and works fine while k8s-2 does not. It can reach local services via its upstreams, but not services in other datacenters through the mesh. Other DCs can access k8s-2 just fine, which is why I don’t really understand it. The UI says that everything is fine and all checks are passing, but when I try to reach a service in k8s-1 from k8s-2 I get the following:

/ # curl localhost:9090
curl: (56) Recv failure: Connection reset by peer

The annotations are as follows:

consul.hashicorp.com/connect-service-upstreams: 'alertmanager:9093,payments:9090:dc2'

Alertmanager is reachable since it’s located in the same DC, but payments is not.
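
In case it helps, this is roughly how I have been checking things around the failing pod (pod, namespace and container names below are placeholders; 19000 is the local Envoy admin port):

# check whether the local Envoy knows about the cross-DC upstream cluster and whether it is healthy
kubectl exec -n <namespace> <failing-pod> -c <app-container> -- curl -s localhost:19000/clusters | grep payments

# confirm the remote catalog is reachable from this DC
consul catalog services -datacenter=dc2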

Here are my configs for reference (each key of the secrets is exactly the same across clusters and has been checked).

This is the faulty one:

global:
  image: "consul:1.8.5"
  name: consul
  datacenter: dc-sandbox
  tls:
    enabled: true
    enableAutoEncrypt: true
    caCert:
      secretName: consul-ca-cert
      secretKey: tls.crt
    caKey:
      secretName: consul-ca-key
      secretKey: tls.key
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul-federation
      secretKey: replicationToken
    replicationToken:
      secretName: consul-federation
      secretKey: replicationToken
  federation:
    enabled: true
  gossipEncryption:
    secretName: consul-federation
    secretKey: gossipEncryptionKey
connectInject:
  enabled: true
  default: true
  healthChecks:
    enabled: false
meshGateway:
  enabled: true
  globalMode: local
server:
  replicas: 2
  connect: true
  service:
    enabled: true
  bootstrapExpect: 2
  extraVolumes:
  - type: secret
    name: vault-config
    load: true
    items:
      - key: config
        path: vault-config.json
  - type: secret
    name: vault-ca
    load: false
  extraConfig: |
    {
      "primary_datacenter": "aws",
      "primary_gateways": ["mesh-gateway.primary.datacenter:9443"]
    }
syncCatalog:
  enabled: true
  image: null
  default: true
  toConsul: true
  toK8S: true
  centralConfig:
    enabled: true
  aclSyncToken:
    secretName: consul-federation
    secretKey: replicationToken

This is the one that works:

global:
  image: "consul:1.8.5"
  name: consul
  datacenter: dc3
  tls:
    enabled: true
    enableAutoEncrypt: true
    caCert:
      secretName: consul-ca-cert
      secretKey: tls.crt
    caKey:
      secretName: consul-ca-key
      secretKey: tls.key
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul-federation
      secretKey: replicationToken
    replicationToken:
      secretName: consul-federation
      secretKey: replicationToken
  federation:
    enabled: true
  gossipEncryption:
    secretName: consul-federation
    secretKey: gossipEncryptionKey
connectInject:
  enabled: true
  default: true
  healthChecks:
    enabled: false
meshGateway:
  enabled: true
server:
  replicas: 2
  connect: true
  service:
    enabled: true
  bootstrapExpect: 2
  extraVolumes:
  - type: secret
    name: vault-config
    load: true
    items:
      - key: config
        path: vault-config.json
  - type: secret
    name: vault-ca
    load: false
  extraConfig: |
    {
      "primary_datacenter": "aws",
      "primary_gateways": ["mesh-gateway.primary.datacenter:9443"]
    }
syncCatalog:
  enabled: true
  image: null
  default: true
  toConsul: true
  toK8S: true
  centralConfig:
    enabled: true
  aclSyncToken:
    secretName: consul-federation
    secretKey: replicationToken
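
For what it’s worth, apart from the datacenter name, the only difference I can spot between the two files is meshGateway.globalMode: local on the faulty cluster. If it matters, this is how I would compare the effective proxy-defaults entry on each cluster (just a sketch, assuming the chart writes one):

consul config read -kind proxy-defaults -name global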

I can’t see many logs anywhere, neither in Envoy nor on the Consul agents/servers.
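
In case more output helps, this is what I have been looking at (pod names are placeholders, and the container and deployment names below are my best guess at the chart defaults):

# sidecar logs of the failing pod
kubectl logs <failing-pod> -c consul-connect-envoy-sidecar

# mesh gateway logs on the faulty cluster
kubectl logs deploy/consul-mesh-gateway

# raise the sidecar log level through the Envoy admin endpoint, from inside the pod
kubectl exec <failing-pod> -c <app-container> -- curl -s -X POST 'localhost:19000/logging?level=debug'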

Any help is appreciated.

Thanks.
Marius