Task gets killed

I have a cluster of 5 hosts, each running a Nomad server, a Nomad client, Vault, and Consul.

Nomad runs a job of type "system" which looks like the following:

job "nomad_monitor" {

  type="system"
  group "nomad_monitor" {
    
    

    task "nomad_monitor" {
      
      driver = "docker"
			
      config {
        image="ubuntu:24.04"
      	network_mode = "host"
        command = "bash"
        #args = [ "-c", "echo asdf"]
        args = ["-c", "/local/checkHealth.sh" ]
    	}
      template {
     data        = <<-EOF

echo "INFO: Installing required packages..."
apt-get update && apt-get install -y --no-install-recommends \
    curl \
    jq \
    kafkacat \
    && rm -rf /var/lib/apt/lists/*
echo "INFO: Installation complete."

while (true); do
    numOfPeers=$(curl --insecure --silent  https://localhost:4646/v1/status/peers | jq 'length')
    nomadHealthy=0
    if [[ "$numOfPeers" -ge 3 ]]; then
        nomadHealthy=1
    fi 
    JSON=$(getJson "NOMAD" $nomadHealthy)
    
    echo "$nomadHealty" | tr -d '\r\n' | kcat -q -P -b 192.168.123.10:9092 -t monitor

    sleep 30
done
EOF
        destination = "/local/checkHealth.sh"
        perms       = "777"
      }

      # Specify the maximum resources required to run the task
      resources {
        cpu    = 300
        memory = 256
      }
      env{
        UNIQUE_HOSTNAME="${attr.unique.hostname}"
      }
    }
  }
}

This job is killed by Nomad for no apparent reason (about 1-2 kills per day, on different servers).

Some of the logs:

2025-10-06T09:22:07.089+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:24:19.762+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:30:56.856+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:30:57.668+0200 [INFO]  nomad: memberlist: Suspect rihm.global has failed, no acks received
2025-10-06T09:32:03.638+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:36:27.669+0200 [INFO]  nomad: memberlist: Suspect rihm.global has failed, no acks received
2025-10-06T09:40:10.768+0200 [WARN]  nomad.raft: heartbeat timeout reached, starting election: last-leader-addr=192.168.123.13:4647 last-leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:40:10.768+0200 [INFO]  nomad.raft: entering candidate state: node="Node at 192.168.123.15:4647 [Candidate]" term=566
2025-10-06T09:40:10.791+0200 [INFO]  nomad.raft: entering follower state: follower="Node at 192.168.123.15:4647 [Follower]" leader-address=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:45:20.156+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:46:26.920+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:47:13.789+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.14:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:48:21.093+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.14:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:49:28.542+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.14:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:50:52.667+0200 [INFO]  nomad: memberlist: Suspect rihm.global has failed, no acks received
2025-10-06T09:51:16.679+0200 [WARN]  nomad.raft: heartbeat timeout reached, starting election: last-leader-addr=192.168.123.13:4647 last-leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:51:16.679+0200 [INFO]  nomad.raft: entering candidate state: node="Node at 192.168.123.15:4647 [Candidate]" term=566
2025-10-06T09:51:16.696+0200 [INFO]  nomad.raft: entering follower state: follower="Node at 192.168.123.15:4647 [Follower]" leader-address=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:51:58.946+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T09:58:37.668+0200 [INFO]  nomad: memberlist: Suspect rihm.global has failed, no acks received
2025-10-06T09:58:55.328+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=cd0b824e-64a0-23dc-76d7-7f51e458066e task=nomad_monitor type=Killing msg="Sent interrupt. Waiting 5s before force killing" failed=false
2025-10-06T09:58:55.330+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=bc35be75-4628-e257-da80-970b174c1b36 task=nomad_monitor type=Received msg="Task received by client" failed=false
2025-10-06T09:58:55.332+0200 [INFO]  client.alloc_migrator: waiting for previous alloc to terminate: alloc_id=bc35be75-4628-e257-da80-970b174c1b36 previous_alloc=cd0b824e-64a0-23dc-76d7-7f51e458066e
2025-10-06T09:59:00.574+0200 [INFO]  client.driver_mgr.docker: stopped container: container_id=ddba3487afc419e3383e8dd13715dc6fd381a6ac8e1c69c4fb07efabacb24cae driver=docker
2025-10-06T09:59:00.580+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=cd0b824e-64a0-23dc-76d7-7f51e458066e task=nomad_monitor type=Terminated msg="Exit Code: 137, Exit Message: \"Docker container exited with non-zero exit code: 137\"" failed=false
2025-10-06T09:59:00.590+0200 [INFO]  client.driver_mgr.docker.docker_logger: plugin process exited: driver=docker plugin=/usr/bin/nomad id=4151971
2025-10-06T09:59:00.655+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=cd0b824e-64a0-23dc-76d7-7f51e458066e task=nomad_monitor type=Killed msg="Task successfully killed" failed=false
2025-10-06T09:59:00.666+0200 [INFO]  client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=cd0b824e-64a0-23dc-76d7-7f51e458066e task=nomad_monitor plugin=/usr/bin/nomad id=4151889
2025-10-06T09:59:00.667+0200 [INFO]  agent: (runner) stopping
2025-10-06T09:59:00.667+0200 [INFO]  client.gc: marking allocation for GC: alloc_id=cd0b824e-64a0-23dc-76d7-7f51e458066e
2025-10-06T09:59:00.667+0200 [INFO]  agent: (runner) received finish
2025-10-06T09:59:00.667+0200 [INFO]  client.alloc_migrator: waiting for previous alloc to terminate: alloc_id=bc35be75-4628-e257-da80-970b174c1b36 previous_alloc=cd0b824e-64a0-23dc-76d7-7f51e458066e
2025-10-06T09:59:00.669+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=bc35be75-4628-e257-da80-970b174c1b36 task=nomad_monitor type="Task Setup" msg="Building Task Directory" failed=false
2025-10-06T09:59:00.786+0200 [INFO]  agent: (runner) creating new runner (dry: false, once: false)
2025-10-06T09:59:00.787+0200 [INFO]  agent: (runner) creating watcher
2025-10-06T09:59:00.787+0200 [INFO]  agent: (runner) starting
2025-10-06T09:59:00.820+0200 [INFO]  agent: (runner) rendered "(dynamic)" => "/data2/NOMAD/nomad/data/alloc/bc35be75-4628-e257-da80-970b174c1b36/nomad_monitor/local/checkHealth.sh"
2025-10-06T09:59:01.004+0200 [INFO]  client.driver_mgr.docker: created container: driver=docker container_id=d77275e9b2603be6751660cb5254a387b00b599b7f492d89a8083df7c7fd2942
2025-10-06T09:59:01.396+0200 [INFO]  client.driver_mgr.docker: started container: driver=docker container_id=d77275e9b2603be6751660cb5254a387b00b599b7f492d89a8083df7c7fd2942
2025-10-06T09:59:01.463+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=bc35be75-4628-e257-da80-970b174c1b36 task=nomad_monitor type=Started msg="Task started by client" failed=false
2025-10-06T10:00:49.684+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d

Another log:

2025-10-06T23:17:51.532+0200 [INFO]  nomad.raft: entering follower state: follower="Node at 192.168.123.15:4647 [Follower]" leader-address=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:17:52.983+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.14:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:17:55.700+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:17:57.058+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:17:58.204+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:18:57.599+0200 [WARN]  nomad.raft: heartbeat timeout reached, starting election: last-leader-addr=192.168.123.13:4647 last-leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:18:57.599+0200 [INFO]  nomad.raft: entering candidate state: node="Node at 192.168.123.15:4647 [Candidate]" term=570
2025-10-06T23:18:57.609+0200 [INFO]  nomad.raft: entering follower state: follower="Node at 192.168.123.15:4647 [Follower]" leader-address=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:18:58.881+0200 [WARN]  nomad.raft: heartbeat timeout reached, starting election: last-leader-addr=192.168.123.13:4647 last-leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:18:58.881+0200 [INFO]  nomad.raft: entering candidate state: node="Node at 192.168.123.15:4647 [Candidate]" term=570
2025-10-06T23:18:59.150+0200 [INFO]  nomad.raft: entering follower state: follower="Node at 192.168.123.15:4647 [Follower]" leader-address= leader-id=
2025-10-06T23:20:07.686+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:21:10.814+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.20:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:21:13.768+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.14:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:22:20.285+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:22:21.070+0200 [WARN]  nomad: memberlist: Refuting a suspect message (from: young.global)
2025-10-06T23:23:27.468+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:23:28.010+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.14:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:25:39.891+0200 [WARN]  nomad: memberlist: Refuting a suspect message (from: rihm.global)
2025-10-06T23:25:41.976+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.14:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-06T23:26:48.847+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=bc35be75-4628-e257-da80-970b174c1b36 task=nomad_monitor type=Killing msg="Sent interrupt. Waiting 5s before force killing" failed=false
2025-10-06T23:26:48.850+0200 [WARN]  client: missed heartbeat: req_latency=6.903803353s heartbeat_ttl=19.976432725s since_last_heartbeat=30.400486953s
2025-10-06T23:26:48.949+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=ea488ede-78fa-613c-cf1a-254c36f67bbb task=nomad_monitor type=Received msg="Task received by client" failed=false
2025-10-06T23:26:48.953+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=ea488ede-78fa-613c-cf1a-254c36f67bbb task=nomad_monitor type="Task Setup" msg="Building Task Directory" failed=false
2025-10-06T23:26:49.072+0200 [INFO]  agent: (runner) creating new runner (dry: false, once: false)
2025-10-06T23:26:49.072+0200 [INFO]  agent: (runner) creating watcher
2025-10-06T23:26:49.073+0200 [INFO]  agent: (runner) starting
2025-10-06T23:26:49.098+0200 [INFO]  agent: (runner) rendered "(dynamic)" => "/data2/NOMAD/nomad/data/alloc/ea488ede-78fa-613c-cf1a-254c36f67bbb/nomad_monitor/local/checkHealth.sh"
2025-10-06T23:26:49.291+0200 [INFO]  client.driver_mgr.docker: created container: driver=docker container_id=f0c0eedb3ecd63efe690d3bc8141637299f48dd5dc924300b6d5dd3c084fc2dd
2025-10-06T23:26:49.666+0200 [INFO]  client.driver_mgr.docker: started container: driver=docker container_id=f0c0eedb3ecd63efe690d3bc8141637299f48dd5dc924300b6d5dd3c084fc2dd
2025-10-06T23:26:49.736+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=ea488ede-78fa-613c-cf1a-254c36f67bbb task=nomad_monitor type=Started msg="Task started by client" failed=false
2025-10-06T23:26:54.089+0200 [INFO]  client.driver_mgr.docker: stopped container: container_id=d77275e9b2603be6751660cb5254a387b00b599b7f492d89a8083df7c7fd2942 driver=docker
2025-10-06T23:26:54.095+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=bc35be75-4628-e257-da80-970b174c1b36 task=nomad_monitor type=Terminated msg="Exit Code: 137, Exit Message: \"Docker container exited with non-zero exit code: 137\"" failed=false
2025-10-06T23:26:54.106+0200 [INFO]  client.driver_mgr.docker.docker_logger: plugin process exited: driver=docker plugin=/usr/bin/nomad id=3309938
2025-10-06T23:26:54.179+0200 [INFO]  client.alloc_runner.task_runner: Task event: alloc_id=bc35be75-4628-e257-da80-970b174c1b36 task=nomad_monitor type=Killed msg="Task successfully killed" failed=false
2025-10-06T23:26:54.190+0200 [INFO]  client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=bc35be75-4628-e257-da80-970b174c1b36 task=nomad_monitor plugin=/usr/bin/nomad id=3309808
2025-10-06T23:26:54.191+0200 [INFO]  agent: (runner) stopping
2025-10-06T23:26:54.192+0200 [INFO]  client.gc: marking allocation for GC: alloc_id=bc35be75-4628-e257-da80-970b174c1b36
2025-10-06T23:26:54.192+0200 [INFO]  agent: (runner) received finish
2025-10-06T23:28:54.173+0200 [WARN]  nomad.raft: heartbeat timeout reached, starting election: last-leader-addr=192.168.123.13:4647 last-leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d


Log of server (192.168.123.13) at 2025-10-06 ~23:25

2025-10-07T10:24:48.830+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=young.global error="context deadline exceeded"
2025-10-07T10:24:48.848+0200 [WARN]  nomad.raft: failed to contact: server-id=fdb7a613-8016-2108-a4d4-127710204598 time=1.382896962s
2025-10-07T10:24:50.829+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=young.global error="context deadline exceeded"
2025-10-07T10:24:52.829+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=young.global error="context deadline exceeded"
2025-10-07T10:24:57.126+0200 [WARN]  nomad.raft: failed to contact: server-id=b01bce76-a860-6447-10a1-cc2609bcd0bb time=500.283781ms
2025-10-07T10:24:57.547+0200 [WARN]  nomad.raft: failed to contact: server-id=b01bce76-a860-6447-10a1-cc2609bcd0bb time=921.280604ms
2025-10-07T10:24:57.988+0200 [WARN]  nomad.raft: failed to contact: server-id=b01bce76-a860-6447-10a1-cc2609bcd0bb time=1.361915077s
2025-10-07T10:24:58.829+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=rihm.global error="context deadline exceeded"
2025-10-07T10:25:00.829+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=rihm.global error="context deadline exceeded"
2025-10-07T10:25:02.828+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=rihm.global error="context deadline exceeded"
2025-10-07T10:25:55.640+0200 [WARN]  nomad.raft: failed to contact: server-id=fdb7a613-8016-2108-a4d4-127710204598 time=500.120341ms
2025-10-07T10:25:56.101+0200 [WARN]  nomad.raft: failed to contact: server-id=fdb7a613-8016-2108-a4d4-127710204598 time=960.744984ms
2025-10-07T10:25:56.558+0200 [WARN]  nomad.raft: failed to contact: server-id=fdb7a613-8016-2108-a4d4-127710204598 time=1.418072877s
2025-10-07T10:25:56.830+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=young.global error="context deadline exceeded"
2025-10-07T10:25:58.009+0200 [WARN]  nomad.raft: failed to contact: server-id=633ba3fd-61ea-1dbb-7e38-0fe8bfdaedc8 time=500.240349ms
2025-10-07T10:25:58.828+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=young.global error="context deadline exceeded"
2025-10-07T10:26:00.829+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=young.global error="context deadline exceeded"
2025-10-07T10:26:00.950+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.14:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-07T10:26:03.133+0200 [WARN]  nomad.raft: failed to contact: server-id=b01bce76-a860-6447-10a1-cc2609bcd0bb time=500.134687ms
2025-10-07T10:26:03.591+0200 [WARN]  nomad.raft: failed to contact: server-id=b01bce76-a860-6447-10a1-cc2609bcd0bb time=957.50392ms
2025-10-07T10:26:04.020+0200 [WARN]  nomad.raft: failed to contact: server-id=b01bce76-a860-6447-10a1-cc2609bcd0bb time=1.386469607s
2025-10-07T10:26:04.829+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=rihm.global error="context deadline exceeded"
2025-10-07T10:26:06.829+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=rihm.global error="context deadline exceeded"
2025-10-07T10:26:08.828+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=rihm.global error="context deadline exceeded"
2025-10-07T10:27:02.662+0200 [WARN]  nomad.raft: failed to contact: server-id=fdb7a613-8016-2108-a4d4-127710204598 time=500.205222ms
2025-10-07T10:27:03.139+0200 [WARN]  nomad.raft: failed to contact: server-id=fdb7a613-8016-2108-a4d4-127710204598 time=977.422459ms
2025-10-07T10:27:03.581+0200 [WARN]  nomad.raft: failed to contact: server-id=fdb7a613-8016-2108-a4d4-127710204598 time=1.419176308s
2025-10-07T10:27:04.828+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=young.global error="context deadline exceeded"
2025-10-07T10:27:06.829+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=young.global error="context deadline exceeded"
2025-10-07T10:27:07.747+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.14:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d
2025-10-07T10:27:09.727+0200 [WARN]  nomad.raft: failed to contact: server-id=b01bce76-a860-6447-10a1-cc2609bcd0bb time=500.088444ms
2025-10-07T10:27:10.179+0200 [WARN]  nomad.raft: failed to contact: server-id=b01bce76-a860-6447-10a1-cc2609bcd0bb time=951.74078ms
2025-10-07T10:27:10.612+0200 [WARN]  nomad.raft: failed to contact: server-id=b01bce76-a860-6447-10a1-cc2609bcd0bb time=1.384399419s
2025-10-07T10:27:10.829+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=rihm.global error="context deadline exceeded"
2025-10-07T10:27:12.829+0200 [WARN]  nomad.stats_fetcher: failed retrieving server health: server=rihm.global error="context deadline exceeded"
2025-10-07T10:27:14.003+0200 [WARN]  nomad.raft: rejecting pre-vote request since we have a leader: from=192.168.123.19:4647 leader=192.168.123.13:4647 leader-id=5d504fbc-08e9-9839-4804-7acf365c5a1d

The client and the server run within the same Nomad agent on every host; this is the configuration:

bind_addr = "0.0.0.0"

data_dir = "/data2/NOMAD/nomad/data"

log_file             = "/data2/NOMAD/nomad/log/"
log_rotate_max_files = 3

client {
  enabled = true
}

server {
  enabled          = true
  job_gc_threshold = "8760h"
  deployment_gc_threshold = "168h"
  eval_gc_threshold = "168h"
}
acl {
  enabled = true
}

plugin "docker" {
  config {
    volumes {
      enabled      = true
      selinuxlabel = "z"
    }
  }
}

plugin "fs" {
  config {
    enabled = true
  }
}
vault {
  enabled = true
  address = "https://vault.service.consul:8200"
  ca_file = "/data2/NOMAD/tls/cert.pem"
  jwt_auth_backend_path = "jwt-nomad/"
  create_from_role = "nomad-workloads"
  default_identity {
    aud = ["vault.io"]
    ttl = "1h"
  }
}

consul {
  address = "localhost:8500"
  ssl     = true
  scheme  = "https"
  token   = "*******"
  ca_file = "/data2/NOMAD/tls/cert.pem"
}

Does anybody have an idea why the containers are being killed or how to prevent this?

Hello

>why the containers are being killed

Looking at all the "heartbeat timeout reached" messages, I would suspect the node is getting lost.

>how to prevent this?

Make sure clients connect to servers reliably. You can increase heartbeat_grace (https://developer.hashicorp.com/nomad/docs/configuration/server#heartbeat_grace). I once set it to a few hours to make sure everything worked smoothly.
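
For reference, a minimal sketch of that option in the server stanza; the 5m value below is only an illustration, pick whatever fits your environment:

server {
  enabled         = true
  heartbeat_grace = "5m"   # default is 10s; gives clients more slack before they are considered down
}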

Thank you very much for the answer. It seems that a client disconnect is indeed the cause.

For everybody who has the same issue in the future:

In the UI under Clients → overview, you can see events like the following:

Cluster	Node heartbeat missed
Cluster	Node reregistered by heartbeat

The restarts in my case match the “reregistered by heartbeat” entries.

In addition to heartbeat_grace, there is a setting in the job spec that controls the behavior after a disconnect:

disconnect {
  lost_after = "8760h"
  replace    = false
  reconcile  = "keep_original"
}
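
For context, disconnect is a group-level block, so in the job above it would sit inside the group next to the task, roughly like this (a sketch, not the full job):

group "nomad_monitor" {

  disconnect {
    lost_after = "8760h"
    replace    = false
    reconcile  = "keep_original"
  }

  task "nomad_monitor" {
    # ... unchanged from the job spec above
  }
}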

My main issue now is that I do not understand why the heartbeats are being missed. I have 5 hosts, and on each of them the client runs on the same machine as a server. So in my opinion, the heartbeat could be sent to the local server and should never be missed.

If anybody sees an issue with my configuration input would be appreciated. Otherwise, we try to dig into checks if our network has any issues.

>My main issue now is that I do not understand why the heartbeats are being missed. I have 5 hosts, and on each of them the client runs on the same machine as a server. So in my opinion, the heartbeat could be sent to the local server and should never be missed.

I don't have a direct answer to this. In general, I have observed missed heartbeats (in any distributed system, not just Nomad) for many reasons, among them temporary overload of the nodes involved.

If I understand correctly, you have a total of 5 hosts, and on each of them you run both a Nomad server and a Nomad client?

That is not the recommended architecture. Nomad servers should be separated from Nomad clients, see https://developer.hashicorp.com/nomad/docs/deploy/production/reference-architecture.

If you want HA, you need at least 3 hosts for the servers. If you have a strict budget, this will leave you with 2 hosts for the clients. Depending on the workloads, if you are running in a cloud, you might have different instance types for the servers and the clients.
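
As a rough illustration of that split (a sketch only; the hostnames are placeholders): the three server hosts would run an agent with only the server role enabled, and the remaining hosts would run client-only agents.

# Server-only agent (on the 3 server hosts)
server {
  enabled          = true
  bootstrap_expect = 3
}
client {
  enabled = false
}

# Client-only agent (on the remaining hosts)
client {
  enabled = true
  servers = ["nomad-server-1:4647", "nomad-server-2:4647", "nomad-server-3:4647"]
}
server {
  enabled = false
}

With Consul available on the clients (as in your setup), the servers list can usually be omitted and the clients will discover the servers through Consul instead.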

Thank you very much for the answer.
I know that this is not a recommended setup, but I don’t have any choice.
We have 5 main servers where the systemd units that currently start the containers are supposed to be replaced by Nomad, and I won't get any other servers.

There is a ticket about "server and client on the same host" (Question: What could go wrong if a nomad server and client ran on the same host? · Issue #5053 · hashicorp/nomad · GitHub), and if schmichael's answer in that issue is right, then we should not have any huge issues, since the servers have quite a lot of spare resources.

Maybe one day I'll get the servers, but in the meantime, if there are no obvious configuration errors, I just have to increase the timeouts and add the disconnect stanza.