I cannot use my newly created subnet

Hi,

I would like to do something similar to this, but I would like to use Nomad to create both the subnet and the job.
As far as I understand the Nomad docs, I have to use CNI for this. I was able to come up with a config file:

{
        "name": "mynet",
        "cniVersion": "0.4.0",
        "plugins": [
                {
                        "type": "bridge",
                        "isDefaultGateway": true,
                        "ipMasq": true,
                        "ipam": {
                                "type": "host-local",
                                "ranges": [
                                        [
                                                {
                                                        "subnet": "172.22.0.0/16"
                                                }
                                        ]
                                ]
                        }
                }
        ]
}

I restarted the Nomad agent and it parsed the config file, so far so good.
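For reference, the agent only picks the file up from its CNI config directory, so here is a minimal sketch of the relevant client settings on my node (the paths below are just the documented defaults, included for context):

# Nomad client settings pointing at the CNI plugins and config directory.
# These are the default locations; adjust if your setup differs.
client {
  cni_path       = "/opt/cni/bin"
  cni_config_dir = "/opt/cni/config"
}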
The job file for pihole is the following:

job "pihole" {
  datacenters = ["dc1"]

  group "pihole" {

    network {
      mode = "cni/mynet"

      port "http" {
        static = 80
        to = 80
      }
      port "DNS" {
        static = 53
        to = 53
      }
    }

    task "pihole" {
      driver = "docker"

      env {
        TZ = "Europe/Budapest"
        VIRTUAL_HOST = "pi.hole"
        PROXY_LOCATION = "pi.hole"
        FTLCONF_LOCAL_IPV4 = "127.0.0.1"
        WEBPASSWORD = "piholepass"
      }

      config {
        image = "pihole/pihole:latest"
        ports = ["http", "DNS"]

        ipv4_address = "172.22.0.2"

        volumes = [
            "/home/voroskoi/.docker/pihole/etc:/etc/pihole",
            "/home/voroskoi/.docker/pihole/dnsmasq.d:/etc/dnsmasq.d",
        ]
      }
    }
  }
}

If I set network.mode to bridge (and comment out ipv4_address) it works fine. However, with this job file the job starts up fine, but the container does not have access to the internet.

I found this in the Pi-hole log:

[✗] DNS resolution is currently unavailable

I have checked the IP addresses and firewall rules; CNI seems to take care of everything:

9: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 72:e3:c6:d0:fd:06 brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.1/16 brd 172.22.255.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::fc3a:eaff:fe94:19df/64 scope link
       valid_lft forever preferred_lft forever
13: veth8b554574@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master cni0 state UP
    link/ether 72:e3:c6:d0:fd:06 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::70e3:c6ff:fed0:fd06/64 scope link
       valid_lft forever preferred_lft forever
voroskoi ~ ❯ doas iptables -t nat --list-rules |grep -e 172.22 -e CNI-8cbcba57a0fe6959dd7aee6d
-N CNI-8cbcba57a0fe6959dd7aee6d
-A POSTROUTING -s 172.22.0.12/32 -m comment --comment "name: \"mynet\" id: \"34f2526f-5cc1-4714-8c6a-1efbfc606981\"" -j CNI-8cbcba57a0fe6959dd7aee6d
-A CNI-8cbcba57a0fe6959dd7aee6d -d 172.22.0.0/16 -m comment --comment "name: \"mynet\" id: \"34f2526f-5cc1-4714-8c6a-1efbfc606981\"" -j ACCEPT
-A CNI-8cbcba57a0fe6959dd7aee6d ! -d 224.0.0.0/4 -m comment --comment "name: \"mynet\" id: \"34f2526f-5cc1-4714-8c6a-1efbfc606981\"" -j MASQUERADE

The ipv4_address line does not seem to work, as the Pi-hole IP is 172.22.0.12, but that is another problem.

What am I doing wrong here?


Hi @voroskoi,

Thanks for the detail in this post, which makes it easier to understand and help solve the problem you are encountering.

Your CNI configuration file is missing a firewall plugin entry which “creates firewall rules to allow traffic to/from container IP address via the host network”. Your CNI config file would therefore look like the following with this plugin added:

{
  "name": "mynet",
  "cniVersion": "0.4.0",
  "plugins": [
    {
      "type": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [
            {
              "subnet": "172.30.0.0/20"
            }
          ]
        ]
      }
    },
    {
      "type": "firewall",
      "backend": "iptables",
      "iptablesAdminChainName": "NOMAD-ADMIN"
    }
  ]
}

Depending on what exactly your CNI requirements are, it might be possible to use the built-in Nomad bridge network mode which is CNI based. You can customize the subnet used per client via the client bridge_network_subnet configuration option and this would mean you do not need to maintain your own CNI setup.
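As a sketch only (the subnet value is simply borrowed from your config as an illustration), that client configuration change would look something like this:

# Per-client override of the subnet used by Nomad's built-in bridge mode.
client {
  bridge_network_subnet = "172.22.0.0/16"
}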

Thanks,
jrasell and the Nomad team

Hi @jrasell,

Thank you very much for the answer, it indeed fixes my outbound network issue.
However, while adding these lines makes the networking work under the hood, the ports are still not exposed on the host.

I think I need the portmap plugin for that, so here is the modified CNI config file:

{
    "name": "mynet",
    "cniVersion": "0.4.0",
    "plugins": [
        {
            "type": "bridge",
            "isDefaultGateway": true,
            "ipMasq": true,
            "ipam": {
                "type": "host-local",
                "ranges": [
                    [
                        {
                            "subnet": "172.22.0.0/16"
                        }
                    ]
                ]
            }
        },
        {
            "type": "firewall",
            "backend": "iptables",
            "iptablesAdminChainName": "NOMAD-ADMIN"
        },
        {
            "type": "portmap",
            "capabilities": {"portMappings": true}
        }
    ]
}

However, I still cannot reach Pi-hole at 192.168.0.200:80, which is the desired state.

The CNI docs say: “The plugin expects to receive the actual list of port mappings via the portMappings capability argument”, but according to the Nomad docs, “The Nomad client will build the correct capabilities arguments for the portmap plugin based on the defined port stanzas.”

I have tried adding a host_network with cidr = "192.168.0.0/24", but it did not help.
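For reference, this is a sketch of what I mean; the host network name "lan" is just something I made up for illustration:

# Client configuration: a host network covering the LAN address range.
client {
  host_network "lan" {
    cidr = "192.168.0.0/24"
  }
}

# In the job's group network block, the port would then be bound to it.
port "http" {
  static       = 80
  to           = 80
  host_network = "lan"
}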

Thanks,