Can't connect masters between AWS VPCs

I’ve been stuck trying to get two sets of Nomad servers to communicate between VPCs.

Ultimately I’d like to decommission one of them entirely, so if there is a simple way to just lift and shift the data I’d take that too, but every Google search for “migrate” just points to the migration strategy for jobs.

At present I’ve presumably done everything needed to connect the VPCs: peering connection, route tables, and security groups. I can freely ping the machines from one to the other, but running nomad server join x.x.x.x from the new VPC to the old one returns a “connection reset by peer” error, and I cannot for the life of me determine why. As this is a personal setup, I could ultimately just destroy all the state and start over, but that’s a hassle and I’d really rather avoid it if possible. Any help getting this moving again would be incredible. Thanks in advance
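For reference, a rough sketch of what I’m running; x.x.x.x stands in for the private IP of one of the old-VPC servers, and the other commands are just how I’ve been checking membership on each side (no exact output included):

```
# From a server in the new VPC, attempting to join the old cluster
nomad server join x.x.x.x

# On each side: current members and the addresses they advertise for Serf
nomad server members

# On each side: which servers the Raft cluster actually knows about
nomad operator raft list-peers
```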

You need Nomad’s TCP and UDP ports open between the VPCs: by default 4646 and 4647 over TCP, plus 4648 over both TCP and UDP for the Serf gossip.

Having ping (ICMP) allowed will not help.
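As a rough sketch, something like this on the old-VPC servers’ security group should cover it; the group ID and the peered VPC’s CIDR below are placeholders for your setup:

```
# Nomad server ports from the peered VPC (placeholder group ID and CIDR)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 4646-4648 --cidr 10.1.0.0/16   # HTTP, RPC, Serf (TCP)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol udp --port 4648 --cidr 10.1.0.0/16        # Serf gossip (UDP)
```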

HTH,
Shantanu Gadgil

If they couldn’t reach the port it wouldn’t be “connection reset by peer”, would it? tcpdump shows constant traffic coming and going on both ends. Both sets of server nodes are still on the default bind of 0.0.0.0.

Client (agent) nodes in the new VPC can connect to and receive jobs from the servers in the old VPC as well, so presumably there isn’t a port blockage here.
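For what it’s worth, here is roughly how I’ve been checking what each server advertises and listens on; /etc/nomad.d is just the usual packaged config location and might differ on other setups:

```
# What each server is configured to advertise for RPC/Serf
grep -RA4 'advertise' /etc/nomad.d/

# Confirm what is actually listening locally (bind is 0.0.0.0 here)
sudo ss -lntup | grep 464
```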

Before reaching for tcpdump, it might help to try to debug using nc (netcat), which can test connectivity over both TCP and UDP.
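For example, something along these lines from a new-VPC server against an old-VPC server’s private IP (x.x.x.x is a placeholder; the UDP check is best effort, since a lack of response doesn’t prove the packet was dropped):

```
# TCP checks against the RPC and Serf ports
nc -vz x.x.x.x 4647
nc -vz x.x.x.x 4648

# UDP check against the Serf gossip port
nc -vzu x.x.x.x 4648
```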