Running Consul in multiple namespaces within the same cluster

I'm having problems running Consul in multiple namespaces. I have a Kubernetes cluster with 1 master and 3 worker nodes, and I'm running Consul with a server replica count of 3, with the pod affinity definition removed from the Helm chart. The first Consul deployment, in its own namespace, comes up fine; the problem occurs when I try to replicate the same setup in a second namespace in the same cluster. In the second namespace the Consul servers reach the Running state, but the three Consul clients remain Pending. Is it the case that only one client can run per node in a Kubernetes cluster? What is the recommended way to make this work across namespaces?
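For context, this is roughly what I'm installing in each namespace. The key names follow the hashicorp/consul Helm chart; treat this as a sketch rather than my exact file, and the affinity override is just how I removed the affinity definition:

```yaml
# values.yaml (sketch; keys assume the hashicorp/consul Helm chart)
global:
  name: consul
server:
  replicas: 3
  # Affinity removed so all 3 servers can schedule on the 3 workers:
  affinity: null
client:
  enabled: true
```

Installed once per namespace, e.g. `helm install consul hashicorp/consul -n consul1 -f values.yaml`, then again with `-n consul2`.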

kubectl get pods -n consul1
NAME                     READY   STATUS    RESTARTS   AGE
consul-consul-nlxhd      1/1     Running   0          70s
consul-consul-pkwn7      1/1     Running   0          70s
consul-consul-server-0   1/1     Running   0          69s
consul-consul-server-1   1/1     Running   0          69s
consul-consul-server-2   1/1     Running   0          69s
consul-consul-zn8c6      1/1     Running   0          70s

kubectl get pods -n consul2
NAME                     READY   STATUS    RESTARTS   AGE
consul-consul-7wbjk      0/1     Pending   0          13m
consul-consul-kpb92      0/1     Pending   0          13m
consul-consul-l8v5r      0/1     Pending   0          13m
consul-consul-server-0   1/1     Running   0          13m
consul-consul-server-1   1/1     Running   0          13m
consul-consul-server-2   1/1     Running   0          13m

In the second namespace I see scheduling warnings like the ones below:
98s Warning FailedScheduling pod/consul-consul-kpb92 0/4 nodes are available: 3 node(s) didn't have free ports for the requested pod ports, 3 node(s) didn't match node selector.
98s Warning FailedScheduling pod/consul-consul-7wbjk 0/4 nodes are available: 3 node(s) didn't have free ports for the requested pod ports, 3 node(s) didn't match node selector.
25s Warning FailedScheduling pod/consul-consul-l8v5r 0/4 nodes are available: 3 node(s) didn't have free ports for the requested pod ports, 3 node(s) didn't match node selector.
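My unverified suspicion, based on the "didn't have free ports" message, is that the client pods request host ports, so once the first namespace's clients claim those ports on each worker, a second set of clients cannot schedule anywhere. The kind of declaration I mean looks like this in a pod spec (port number illustrative):

```yaml
# Illustrative pod-spec fragment: a container that binds a hostPort can
# be scheduled at most once per node for that port.
containers:
  - name: consul-client
    ports:
      - containerPort: 8301
        hostPort: 8301   # pinned to the node, so one such pod per node
```

If that is what's happening, is there a supported way to run a second set of clients on the same nodes?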