Client pods on a different node cannot reach the Consul cluster

Kubernetes 1.18
Helm v3


As shown in the attached picture, pods on a node other than the server's cannot join the cluster.

The logs/describe on those pods say: Failed to resolve consul-server-0.consul-server.default.svc:8301: lookup consul-server-0.consul-server.default.svc on 10.100.5.5:53: read udp 10.200.1.157:38142->10.100.5.5:53: i/o timeout
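For reference, the failing lookup can be reproduced by hand from a throwaway pod pinned to one of the affected nodes (the node name below is a placeholder, and busybox:1.28 is just an image whose nslookup works):

# run a one-off pod on the affected node and try the same lookup
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"<affected-node>"}}' \
  -- nslookup consul-server-0.consul-server.default.svc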

Helm chart:

global:
  name: consul
  enabled: true
  datacenter: dc1

server:
  replicas: 1
  bootstrapExpect: 1

connectInject:
  enabled: true
controller:
  enabled: true
Hi @Abdel1979,

This log indicates that the DNS query from this pod is timing out when communicating with the DNS server. Have you confirmed DNS is working properly in your environment?
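A quick way to check, assuming the usual CoreDNS/kube-dns setup in kube-system (the service name and pod labels can differ between distributions):

# is the cluster DNS service up, and does its ClusterIP match the 10.100.5.5 in the error?
kubectl get svc -n kube-system kube-dns
# are the DNS pods healthy?
kubectl get pods -n kube-system -l k8s-app=kube-dns
# does a lookup of a built-in name succeed from a test pod?
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 \
  -- nslookup kubernetes.default.svc.cluster.local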

Thanks Blake, you were right: some firewall rules were blocking communication.
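In case it helps anyone else hitting this: in a setup like the one above, the flows worth checking first are DNS to the cluster DNS service (port 53, 10.100.5.5 in the error) and Consul's LAN gossip port 8301 over both TCP and UDP between nodes. A rough TCP-level check from a pod that has nc available (the server pod IP is a placeholder; note the error itself shows UDP timing out, which nc -z does not exercise):

# cluster DNS service, TCP 53
nc -vz 10.100.5.5 53
# Consul LAN gossip port on the server pod
nc -vz <consul-server-pod-ip> 8301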