Vault - multiple active nodes in the same cluster

I have a non-enterprise Vault cluster with 5 nodes and a Postgres backend, on which I am running configuration tests. The initial configuration, which worked, pointed the API address at the VIP. I then reconfigured the nodes to point the API address at localhost:8200, and now every node elects itself as active and the nodes don't see each other: `vault operator members` returns only localhost as the active node, and the configured load balancer rotates me between the cluster machines without errors. What is causing this? It seems there is no communication between the cluster nodes, yet reconfiguring them should not lead to such an inconsistent state. Does all the cluster state live only in the backend, or am I wrong?
When I reconfigured, I shut down all the nodes and restarted them one by one.

It would help if you posted the HCL config you are using, but from what you're describing it sounds like you set the api_addr parameter to localhost:8200, which is wrong. That parameter is how each node advertises its API address to the other nodes of the cluster, so it must be an address the other nodes (and clients redirected by a standby) can actually reach.
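As a rough sketch of what a per-node config could look like (the IPs, credentials, and TLS settings here are placeholders, not your actual setup), each node would advertise its own routable address rather than localhost:

```hcl
# Hypothetical config for one node; 10.0.0.11 stands in for this node's
# own routable address -- each node uses its own, never localhost.
storage "postgresql" {
  connection_url = "postgres://vault:vault@db.example.com:5432/vault"
  ha_enabled     = "true"   # required for HA with the Postgres backend
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = "true"      # for testing only
}

# Address other nodes and redirected clients use to reach this node's API.
api_addr     = "http://10.0.0.11:8200"

# Address used for server-to-server cluster traffic (port 8201 by default);
# the firewall must allow this port between all cluster nodes.
cluster_addr = "https://10.0.0.11:8201"
```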

The issue turned out to be the cluster address and a firewall that was blocking communication between the nodes.

Each Vault node can use its own configuration; in any case, as long as network connectivity between the nodes is fine, each node should advertise a different api_addr, as far as I understood.