Nomad Clients miss heartbeats when a single Server in the cluster fails

Hi!

I am trying to better understand Nomad Client - Server communication and am looking for documentation/clarification. Specifically, for the Client - Server heartbeat messages: what is the behaviour on the Client and the Server when a heartbeat is missed, and when is a missed heartbeat to be expected?
We had an issue with a single Nomad Server failing in a cluster of 3. This seems to have led to missed heartbeats from multiple Nomad Clients, and their allocations were restarted on other Nomad Clients.
I have found this GitHub issue, which I think describes what we saw. The answer to that issue mentions that this is expected behaviour.

My open questions are:

  1. From reading the issue, I understand that a missed heartbeat between a Nomad Client and Server always leads to the rescheduling of all allocations running on that Nomad Client. Is this correct?
  2. It also mentions that a Nomad Client only talks to one specific Nomad Server for heartbeat messages. So even when the Nomad Server cluster as a whole is still healthy, if one Nomad Server instance fails, the Nomad Clients that were connected to that instance will miss a heartbeat and their allocations will be restarted?
  3. Are there configuration options to prevent missed heartbeats, or to prevent allocations being transferred, on Nomad Server failure? Assuming the Nomad Server cluster is still functioning when one Nomad Server instance fails, can we configure a Nomad Client node to fail over to a different Nomad Server instance for heartbeats if the one it is connected to fails? (The server-side options I have found so far are sketched after this list.)
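
For reference, these are the server-side heartbeat options I have found so far in the server stanza docs. I am not sure whether any of them actually cover the single-Server failure case described above, so corrections are welcome; the values below are placeholders, not tuned recommendations:

```hcl
# Agent configuration on each Nomad Server (server stanza).
# Option names are from the server configuration docs; the values
# are placeholders, not recommendations.
server {
  enabled          = true
  bootstrap_expect = 3

  # Extra grace beyond the heartbeat TTL before a Client is marked down.
  heartbeat_grace = "30s"

  # Lower bound for the heartbeat TTLs the Servers hand out to Clients.
  min_heartbeat_ttl = "10s"

  # Longer window for all Clients to heartbeat again after a leader
  # election before they are considered down.
  failover_heartbeat_ttl = "5m"
}
```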

We were thinking about scaling our Nomad Server cluster from 3 to 5 instances, but if I understand this process correctly, the increased resiliency would not prevent missed heartbeats. It could reduce the impact on Nomad Clients (assuming the Clients are spread evenly across the Server instances), but a failing Nomad Server would still lead to missed heartbeats and healthy allocations being restarted. The only job-level mitigation I have found so far is sketched below.
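
For completeness, the only job-level knob I have found that looks related is max_client_disconnect in the group stanza, which (if I read the docs correctly) keeps allocations on a disconnected Client in an "unknown" state instead of replacing them immediately. A minimal sketch; the job, group, and task names and the image are made up:

```hcl
job "example" {
  group "app" {
    # If the Client misses heartbeats, keep its allocations in the
    # "unknown" state for up to this long and reconnect them if the
    # Client heartbeats again in time, instead of rescheduling at once.
    max_client_disconnect = "10m"

    task "web" {
      driver = "docker"

      config {
        image = "nginx:1.25"
      }
    }
  }
}
```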