Consul Serf Health Status

I have installed a Consul server (leader) on my localhost, with an IP address of => running OK.
Then I installed a Vagrant box (Ubuntu 20.04) as a Consul agent, with an IP address of , and I configured the bridged network in the Vagrantfile.
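For context, a minimal sketch of how such an agent can be started on the Vagrant box (the node name and all IP addresses below are placeholders, not my actual values):

```shell
# Sketch: run a Consul agent on the Vagrant box, binding and advertising
# the bridged interface address so gossip does not go out over the
# VirtualBox NAT interface (192.168.1.50 / 192.168.1.10 are placeholders).
consul agent \
  -node=agent-1 \
  -bind=192.168.1.50 \
  -advertise=192.168.1.50 \
  -retry-join=192.168.1.10 \
  -data-dir=/opt/consul
```

If `-bind`/`-advertise` are left unset on a multi-homed Vagrant box, the agent may pick the NAT interface, which the leader cannot reach back to.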

The issue is:
The Consul leader sees the agent node, but the agent's health status keeps failing and recovering, with the following message:

   Failing serf check
    This node has a failing serf node check.

A few seconds after that, it goes back to green status, and so on.

If the leader can see the node, the configuration on the agent side should be OK, but the health status fails at regular intervals (every few seconds).

I updated iptables to open the ports required by Consul, but it still fails.
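Concretely, the rules I added look roughly like this (a simplified sketch without interface or source restrictions). Note that serf LAN gossip uses port 8301 over both TCP and UDP, so both protocols must be open:

```shell
# Allow Consul server RPC (8300/tcp) and serf LAN gossip (8301 tcp + udp).
# A blocked UDP path on 8301 is a classic cause of flapping health status.
iptables -A INPUT -p tcp --dport 8300 -j ACCEPT
iptables -A INPUT -p tcp --dport 8301 -j ACCEPT
iptables -A INPUT -p udp --dport 8301 -j ACCEPT
```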

I checked the logs with the command `consul monitor` on the localhost (leader host), and it reports an issue with acks:

    2022-07-19T12:12:46.945+0200 [INFO]  agent.server.memberlist.lan: memberlist: Suspect has failed, no acks received
    2022-07-19T12:12:50.179+0200 [ERROR] agent.server.memberlist.lan: memberlist: Push/Pull with failed: dial tcp i/o timeout
    2022-07-19T12:12:50.945+0200 [INFO]  agent.server.memberlist.lan: memberlist: Marking as failed, suspect timeout reached (0 peer confirmations)
    2022-07-19T12:12:50.945+0200 [INFO]  agent.server.serf.lan: serf: EventMemberFailed:
    2022-07-19T12:12:50.945+0200 [INFO]  agent.server: member failed, marking health critical: member= partition=default

The error gives you the protocol, IP address, and port that your networking setup blocked from communicating - so that's what you need to solve.

Unfortunately that doesn't solve the serf issue; the node keeps failing and rejoining.
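For anyone debugging the same thing, reachability of the serf LAN port can be probed from each machine toward the other (the IP below is a placeholder for the other host's address):

```shell
# Check TCP reachability of the serf LAN port 8301 (placeholder IP).
nc -vz 192.168.1.50 8301
# Check UDP as well - gossip probes use UDP, and a blocked UDP path
# produces exactly this failing/recovering flapping. Note that a UDP
# probe with nc is only indicative: "success" just means no ICMP
# port-unreachable reply came back.
nc -vzu 192.168.1.50 8301
```

Running the probes in both directions matters, because gossip is bidirectional: the leader must be able to reach the agent, and vice versa.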