How to prevent two Consul clusters in AKS on the same network from sharing member lists

Hi,

We are seeing member-list pollution between Consul clusters running in two separate Kubernetes clusters.

We deploy with the same Helm chart in both AKS clusters, so each Consul cluster gets servers named consul-discovery-consul-server-0, consul-discovery-consul-server-1, and consul-discovery-consul-server-2, plus its own member services (DBs and VMs).
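
For reference, a minimal sketch of the install (chart repo and flags are from memory, not our exact pipeline); the shared release name `consul-discovery` is what produces identical pod names in both clusters:

```
helm repo add hashicorp https://helm.releases.hashicorp.com
# Same release name in both AKS clusters, so the server StatefulSet
# pods come out identically named in each cluster:
helm install consul-discovery hashicorp/consul \
  --namespace consul --create-namespace \
  --set server.replicas=3
# => consul-discovery-consul-server-0..2 in BOTH clusters
```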

Both AKS clusters are on the same network, in different subnets. When a Consul cluster starts, it only has the members from its own Kubernetes cluster (plus its VMs and DBs), but after a few hours each cluster's member list contains all members from both.
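
This is how we observe it (namespace is an assumption, pod name as in our deployment):

```
# Right after startup this lists only the local cluster's servers, VMs
# and DBs; a few hours later it shows members from BOTH clusters:
kubectl exec -n consul consul-discovery-consul-server-0 -- consul members
```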

We have NSGs blocking all Consul ports between the two clusters, which shows up in the logs as errors saying a member is down because it can't be reached (see the sketch below). All the VM and DB members in each cluster have unique names, so those aren't breaking anything, but since we use the same Helm chart with the same release name in both clusters, the server names (consul-discovery-consul-server-X) clash.
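
The NSG rules look roughly like this (resource group, NSG name, and address prefixes are illustrative); 8300-8302 covers Consul's server RPC and LAN/WAN gossip ports:

```
# Deny Consul traffic coming from the other cluster's subnet:
az network nsg rule create \
  --resource-group our-rg \
  --nsg-name aks1-nsg \
  --name deny-consul-from-other-cluster \
  --priority 100 \
  --direction Inbound \
  --access Deny \
  --protocol '*' \
  --source-address-prefixes 10.2.0.0/16 \
  --destination-port-ranges 8300-8302
```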

We're not sure whether this is what corrupts our Consul cluster during AKS upgrades, but how can we prevent this pollution?

Is it possible that, during an AKS upgrade, when one of the follower servers (e.g. consul-discovery-consul-server-1) is stopped, the identically named member from the other cluster joins the vote, and when the original one is brought back up, the leader election breaks?