I have installed a Consul server (leader) on my localhost, with the IP address 192.168.48.1; it is running fine.
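For reference, the server was started with something along these lines (the flags are illustrative, not my exact command):

    consul agent -server -bootstrap-expect=1 \
        -bind=192.168.48.1 -client=0.0.0.0 \
        -data-dir=/opt/consul -ui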
Then I installed a Vagrant box (Ubuntu 20.04) as a Consul agent, with the IP address 10.0.2.15, and I declared the bridged network in the Vagrantfile.
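The agent on the Vagrant box is started roughly like this (the data directory and node name are placeholders, not my exact values):

    consul agent -bind=10.0.2.15 \
        -retry-join=192.168.48.1 \
        -data-dir=/opt/consul -node=vagrant-agent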
The issue is:
The Consul leader sees the agent node, but the agent health status keeps failing and recovering with the following message:
Failing serf check
This node has a failing serf node check.
A few seconds after that, it goes back to green status, and so on and so forth.
Since the leader can see the node, the configuration on the agent side should be OK, yet the health status fails at regular intervals (every few seconds).
I updated the iptables rules to open the required Consul ports, but the check still fails.
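The rules I added were along these lines (illustrative; the actual chains and policies may differ):

    # Serf LAN gossip, the port that times out in the logs below
    iptables -A INPUT -p tcp --dport 8301 -j ACCEPT
    iptables -A INPUT -p udp --dport 8301 -j ACCEPT
    # Server RPC and HTTP API
    iptables -A INPUT -p tcp --dport 8300 -j ACCEPT
    iptables -A INPUT -p tcp --dport 8500 -j ACCEPT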
I checked the logs with the command "consul monitor" on the localhost (leader host), and it reports an issue with missing acks:
2022-07-19T12:12:46.945+0200 [INFO] agent.server.memberlist.lan: memberlist: Suspect 10.0.2.15 has failed, no acks received
2022-07-19T12:12:50.179+0200 [ERROR] agent.server.memberlist.lan: memberlist: Push/Pull with 10.0.2.15 failed: dial tcp 10.0.2.15:8301: i/o timeout
2022-07-19T12:12:50.945+0200 [INFO] agent.server.memberlist.lan: memberlist: Marking 10.0.2.15 as failed, suspect timeout reached (0 peer confirmations)
2022-07-19T12:12:50.945+0200 [INFO] agent.server.serf.lan: serf: EventMemberFailed: 10.0.2.15 10.0.2.15
2022-07-19T12:12:50.945+0200 [INFO] agent.server: member failed, marking health critical: member=10.0.2.15 partition=default