Nomad - communication between Docker containers within one task seems not to be working

I am using Apache Kafka 3.1 in Docker and trying to orchestrate it with Nomad, but I am running into a problem creating a distributed cluster.

The goal is to have 3 combined broker/controller nodes (KRaft mode) on 3 EC2 instances. Consul DNS resolves the broker service to all three hosts:

:~$ nslookup broker.service.brain.consul
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   broker.service.brain.consul
Address: 30.10.12.52
Name:   broker.service.brain.consul
Address: 30.10.11.8
Name:   broker.service.brain.consul
Address: 30.10.13.172

From inside one of the Nomad client instances, the relevant network interfaces look like this (the docker0 address is also what I hand to the tasks as their DNS server):

IPv4 address for docker0: 172.17.0.1
IPv4 address for ens5:    30.10.13.172
IPv4 address for nomad:   172.26.64.1

Here is the relevant part of the Nomad job configuration:

job "kafka" {
  datacenters = ["stream"]
  type = "service"
  group "broker" {
    count = 3
    service {
      name = "broker"
      port = "9092"
      tags = ["kafka","broker"]
      connect {
        sidecar_service {}
      }
    }
    network {
      mode = "bridge"
      hostname = "${attr.unique.hostname}"
      dns {
        servers = ["172.17.0.1"]
      }
      port "broker" {
        static = 9092
        to     = 9092
      }
      port "controler" {
        static = 9093
        to     = 9093
      }
    }
...
    task "broker" {

      driver = "docker"
      config {
        image   = "registry.gitlab.com/.../kafka"
        volumes = ["files/server.properties:/kafka/config/kraft/server.properties"]

        ports = [
          "broker",
          "controller"
        ]
...
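
The server.properties itself is written by a template stanza along these lines (a simplified sketch, not the exact stanza; deriving node.id from the allocation index is the idea, and the voters line is elided):

template {
  # Rendered into the task dir; the Docker volume mount above picks the
  # file up from files/server.properties.
  destination = "files/server.properties"
  data        = <<EOH
process.roles=broker,controller
# node.id must differ per broker, so derive it from the allocation index
node.id={{ env "NOMAD_ALLOC_INDEX" | parseInt | add 1 }}
# the voters line is rendered from the three known host IPs (elided here)
controller.quorum.voters=...
EOH
}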

The server.properties, after rendering from the template, looks as follows (node.id changes across the 3 brokers):

process.roles=broker,controller
node.id=2
controller.quorum.voters=1@30.10.11.8:9093,2@30.10.12.52:9093,3@30.10.13.172:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
advertised.listeners=PLAINTEXT://:9092
inter.broker.listener.name=PLAINTEXT
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
num.network.threads=3
num.io.threads=8
request.timeout.ms=60000
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/logs/kraft-combined-logs

However, the cluster is unable to start, and it looks like a connectivity problem: the brokers never manage to reach each other on the controller port 9093.

[2022-01-24 01:31:15,405] ERROR [BrokerLifecycleManager id=2] Shutting down because we were unable to register with the controller quorum. (kafka.server.BrokerLifecycleManager)
[2022-01-24 01:31:15,407] INFO [BrokerLifecycleManager id=2] registrationTimeout: shutting down event queue. (org.apache.kafka.queue.KafkaEventQueue)
[2022-01-24 01:31:15,407] INFO [BrokerLifecycleManager id=2] Transitioning from STARTING to SHUTTING_DOWN. (kafka.server.BrokerLifecycleManager)
[2022-01-24 01:31:15,408] INFO [BrokerServer id=2] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2022-01-24 01:31:15,408] INFO [BrokerToControllerChannelManager broker=2 name=heartbeat]: Shutting down (kafka.server.BrokerToControllerRequestThread)
[2022-01-24 01:31:15,409] INFO [BrokerToControllerChannelManager broker=2 name=heartbeat]: Stopped (kafka.server.BrokerToControllerRequestThread)
[2022-01-24 01:31:15,410] INFO [BrokerToControllerChannelManager broker=2 name=heartbeat]: Shutdown completed (kafka.server.BrokerToControllerRequestThread)
[2022-01-24 01:31:15,412] ERROR [BrokerServer id=2] Fatal error during broker startup. Prepare to shutdown (kafka.server.BrokerServer)
java.util.concurrent.CancellationException
	at java.base/java.util.concurrent.CompletableFuture.cancel(CompletableFuture.java:2396)
	at kafka.server.BrokerLifecycleManager$ShutdownEvent.run(BrokerLifecycleManager.scala:478)
	at org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:174)
	at java.base/java.lang.Thread.run(Thread.java:829)
[2022-01-24 01:31:15,417] INFO [BrokerServer id=2] Transition from STARTED to SHUTTING_DOWN (kafka.server.BrokerServer)

...

and later in the log, also this:

...

[2022-01-24 02:02:19,304] INFO [RaftManager nodeId=2] Disconnecting from node 1 due to socket connection setup timeout. The timeout value is 10341 ms. (org.apache.kafka.clients.NetworkClient)
[2022-01-24 02:02:19,306] INFO [RaftManager nodeId=2] Disconnecting from node 3 due to socket connection setup timeout. The timeout value is 11036 ms. (org.apache.kafka.clients.NetworkClient)
[2022-01-24 02:02:20,100] INFO [RaftManager nodeId=2] Re-elect as candidate after election backoff has completed (org.apache.kafka.raft.KafkaRaftClient)

I did try setting the listeners to match the new Docker hostname (hostname = "${attr.unique.hostname}") and to the EC2 host IP, but neither helped.
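
Concretely, those attempts ended up as lines like these in the rendered file (the hostname here is a placeholder, not the literal value):

# variant 1: advertise the container hostname
advertised.listeners=PLAINTEXT://<container-hostname>:9092

# variant 2: advertise the EC2 host IP
advertised.listeners=PLAINTEXT://30.10.13.172:9092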

I’ve spent a few days on this puzzle but am currently out of ideas. I would appreciate any help with this issue.
