Nomad using Consul Service Mesh

I’m learning the service mesh capabilities of Nomad and followed the tutorial Consul Service Mesh | Nomad | HashiCorp Developer.

I adapted my job description slightly so that two dashboard tasks connect to the counter API. I did this so I can try out the intentions feature of Consul later on.
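For later, the intention I have in mind is something along these lines, using the default service names Nomad derives from the job and group names in the jobspec below:

consul intention create -deny counter7-frontend2 counter7-backend

i.e. only one of the dashboards would be allowed to reach the API.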

However, I am surprised that only one of the dashboard tasks can make a connection to the backend. The other one says that the “counting service” is unreachable. What am I doing wrong? Here’s my job description; I am using Nomad 1.3.5.

job "counter7" {

  group "backend" {
    network {
      mode = "bridge"
    }
    
    service {
      port = "9001"
      connect {
        sidecar_service {}
      }
    }

    task "api" {
      driver = "docker"
      config {
        image = "hashicorpdev/counter-api:v3"
      }
    }  
  }

  group "frontend" {
    network {
      mode = "bridge"

      port "http" {
        static = 9002
        to     = 9002
      }
    }
    service {
      port = "http"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "counter7-backend"
              local_bind_port  = 8080
            }
          }
        }
      }
    }

    task "dashboard" {
      driver = "docker"

      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_counter7-backend}"
      }

      config {
        image = "hashicorpdev/counter-dashboard:v3"
      }
    }
  }
  
  group "frontend2" {
    network {
      mode = "bridge"

      port "http" {
        static = 9002
        to     = 9002
      }
    }
    service {
      port = "http"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "counter7-backend"
              local_bind_port  = 8080
            }
          }
        }
      }
    }

    task "dashboard2" {
      driver = "docker"

      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_counter7-backend}"
      }

      config {
        image = "hashicorpdev/counter-dashboard:v3"
      }
    }
  }
}

Hi @frank.wettstein, nothing stands out as being incorrect in your jobspec, though because you use the same static port for both frontend groups you do of course need at least two client nodes.

Can you post the Client configuration for each of the clients? How is each client being started? The thing I’d be looking for now is that the client’s network configurations actually make sense for Connect.
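For reference, a Connect-capable client usually needs something along these lines on each node. This is only a sketch, the paths and addresses are assumptions, and the CNI plugins also have to be installed on every client for bridge networking:

# /etc/nomad.d/client.hcl (sketch)
client {
  enabled = true
}

consul {
  address = "127.0.0.1:8500"
}

# /etc/consul.d/consul.hcl (sketch) - Connect and gRPC must be enabled
connect {
  enabled = true
}

ports {
  grpc = 8502
}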

Thanks for the quick feedback! I have a cluster with one server and three clients, so the static binding should not be an issue. To be sure I removed the “static” part anyway, but the problem remains the same: one of the frontends can connect to the backend, the other cannot.
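Concretely, the frontend network blocks now look roughly like this (same for frontend2):

network {
  mode = "bridge"

  port "http" {
    to = 9002
  }
}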

What I’ve seen in the meantime: it must be a firewall issue. When I stopped the firewall, it worked. So I added the nomad interface to the zone.
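That was something like the following (public is simply the active zone on my clients):

firewall-cmd --zone=public --add-interface=nomad --permanent
firewall-cmd --reload

After that, the zone looks like this: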

[root@poc-nomad-client2 ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens192 nomad
  sources:
  services: cockpit dhcpv6-client ssh
  ports: 53/tcp 53/udp 4646/tcp 4647/tcp 4648/tcp 20000-32000/tcp 15000-15999/tcp 8300/tcp 8301/tcp 8302/tcp 8600/tcp 8500/tcp 8502/tcp 80/tcp 8080/tcp 9998/tcp 9999/tcp 1883/tcp 5683/udp
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

A journalctl -x -e -f gives me
Apr 27 05:09:41 poc-nomad-client2.intersys.internal consul[1452]: 2023-04-27T05:09:41.374-0400 [WARN] agent: Check socket connection failed: check=service:_nomad-task-5e4c5fab-a6a5-e5fd-4de7-4192ac46a16e-group-frontend-counter7-frontend-http-sidecar-proxy:2 error="dial tcp 192.168.3.42:27129: connect: connection refused"

journalctl -u firewalld gives

COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).

I am not yet very experienced with Linux firewall configuration. Any ideas?

I found the solution :slight_smile: Adding the interface ‘nomad’ is not enough; forwarding between interfaces within the zone has to be enabled as well:
firewall-cmd --add-forward --permanent
Maybe this should be added somewhere in the documentation of the tutorials?
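In full that is something like the following; since --permanent only changes the stored configuration, a reload is needed for it to take effect:

firewall-cmd --add-forward --permanent
firewall-cmd --reload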