Consul KV synchronisation/replication monitoring

Hello everyone,

I am using Consul as a K/V store in a single datacenter. While testing how the cluster recovers from failures, I want to check whether my 3 nodes are in sync and hold the same K/V data on each node.

How can I check, at any given time, that I have the same data on each node? Is there anything that allows me to check data replication between nodes? Can it be done with the Raft index? I want to verify that if the leader crashes, the data is still available on the other nodes.

As another example: I made some inserts in a cluster, crashed 2 of the 3 nodes, removed their Raft data, and re-joined those 2 nodes to the cluster. How can I ensure that the data on my 2 ‘new’ members is now the same as the data on the node that did not crash?

Thank you

Hi @geifoley,

I’ll try my best to answer this.

Consul uses a consensus protocol called Raft to ensure that data is reliably replicated to the servers in a cluster. Consul’s architecture docs contain an overview of this consensus protocol. I also recommend checking out one of the links on that page, The Secret Lives of Data, for a visual explanation of Raft.

Based on how Raft works, any write that the cluster acknowledges as successful is guaranteed to have been replicated to a quorum of the servers and written to disk on those servers before the leader responds.
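If you want to spot-check this yourself, here is a minimal sketch, assuming the HTTP API is reachable on each server on port 8500; the server addresses and the key are placeholders for your own setup. It reads a key from each server’s local copy using the `stale` consistency mode and compares the results:

```python
import json
import urllib.request

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # placeholders for your server addresses
KEY = "my/app/config"                            # placeholder key to compare

def read_local(server, key):
    # "?stale" asks the contacted server to answer from its own local copy
    # instead of forwarding the read to the leader.
    url = f"http://{server}:8500/v1/kv/{key}?stale"
    with urllib.request.urlopen(url) as resp:
        entry = json.load(resp)[0]
        return entry["Value"], entry["ModifyIndex"]

results = {server: read_local(server, KEY) for server in SERVERS}
if len(set(results.values())) == 1:
    print("all servers return the same value for", KEY)
else:
    print("mismatch:", results)
```

The `?stale` mode is what makes a per-node comparison meaningful here: without it, every read would be served by the leader and you would only ever see the leader’s copy.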

In the event that you need to replace a server, you can look at the Healthy field from the /v1/operator/autopilot/health API to determine whether the replacement server has successfully re-joined the cluster and has received an up-to-date copy of the data from the leader.
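As a rough sketch of what that could look like, assuming a local agent listening on localhost:8500, you could poll that endpoint and flag any server that isn’t healthy:

```python
import json
import urllib.error
import urllib.request

URL = "http://localhost:8500/v1/operator/autopilot/health"  # assumes a local agent

try:
    with urllib.request.urlopen(URL) as resp:
        health = json.load(resp)
except urllib.error.HTTPError as err:
    # Consul returns HTTP 429 (with the same JSON body) when the cluster is unhealthy.
    health = json.load(err)

print("cluster healthy:", health["Healthy"])
print("failure tolerance:", health["FailureTolerance"])
for server in health["Servers"]:
    print(f'{server["Name"]}: healthy={server["Healthy"]}, leader={server["Leader"]}')
```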

You can specifically look at the LastIndex field for each server returned by that endpoint to see the index of each server’s last committed Raft log entry. If all servers report the same index, they have applied the same data (keep in mind that the index constantly increments as new writes arrive, so compare values taken at roughly the same moment).
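Continuing the sketch above (reusing the `health` response from the previous snippet), comparing LastIndex across servers might look like this:

```python
# "health" is the autopilot health response fetched in the previous snippet.
indexes = {server["Name"]: server["LastIndex"] for server in health["Servers"]}
print(indexes)
if len(set(indexes.values())) == 1:
    print("all servers are at the same Raft index")
else:
    print("some servers are still catching up:", indexes)
```

On a cluster that is taking writes, a small and shrinking gap between servers is normal; a gap that keeps growing is the signal worth alerting on.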

I hope this helps. Let me know if you have any other questions.