NFS on Nomad, the modern way

Hi,
I'm investigating Nomad and I'm unsure about the best way to mount an existing remote NFS folder, one that already contains data, into a container.

I keep seeing democratic-csi or Docker mounts mentioned. Can someone point me in the right direction for how this is done the modern Nomad way?

Thanks

I tried to dive into democratic-csi and configure it, but I was crushed by the weight of the documentation. I couldn't get it to work, and it would also require running a service on every one of our 300 machines.

We are just using Docker mounts via the mount option, which are simple and well understood. I think it would be nice to add an NFS example to the docs, as it's a common case.

        mount {
          target = "/dir/inside/container"
          volume_options {
            # https://docs.docker.com/compose/compose-file/compose-file-v3/#long-syntax-3
            no_copy = true
            driver_config {
              options {
                type   = "nfs"
                # server name is not needed here, but is nice for debugging
                device = "nfs.server.com:/dir/on/nfs"
                # mark the mount read-only with the "ro" flag here,
                # or with readonly = true on the mount block itself
                o      = "addr=nfs.server.com,ro"
              }
            }
          }
        }
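
To show where that block lives, here is a minimal sketch of the surrounding task, assuming the Docker task driver; the job name, image, and volume source are placeholders:

job "app" {
  datacenters = ["dc1"]

  group "app" {
    task "app" {
      driver = "docker"

      config {
        image = "alpine:3.19"

        # the mount block shown above goes here, inside config;
        # "nfs-data" is a placeholder name for the docker volume
        # backing the mount
        mount {
          type   = "volume"
          source = "nfs-data"
          target = "/dir/inside/container"
          # volume_options { ... } as above
        }
      }
    }
  }
}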

I'm using the stock NFS CSI driver (nfs.csi.k8s.io) and it works like a charm.
Just for context, my NAS exporting NFS has the DNS name 'storage.home'... pretty boring, I know.
In the NFS settings, make sure that either security is disabled (not recommended) or, at the very least, that the IPs of your Nomad nodes are whitelisted.
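
On a plain Linux NFS server, for example, that allowlist lives in /etc/exports (a NAS exposes the same setting in its UI); the subnet below is a placeholder for wherever your Nomad nodes live:

# /etc/exports -- restrict the export to the Nomad nodes' subnet
/volume2/homelab  192.168.1.0/24(rw,sync,no_subtree_check)

Reload the export table afterwards with exportfs -ra.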

Controller:

job "csi-nfs-controller" {
  datacenters = ["home"]
  type        = "system"

  constraint {
    attribute = "${node.class}"
    value     = "compute"
  }

  group "nfs" {
    task "controller" {
      driver = "docker"

      config {
        image = "registry.k8s.io/sig-storage/nfsplugin:v4.3.0"
        args = [
          "--v=5",
          "--nodeid=${attr.unique.hostname}",
          "--endpoint=unix:///csi/csi.sock",
          "--drivername=nfs.csi.k8s.io",
        ]
      }

      csi_plugin {
        id        = "nfs"
        type      = "controller"
        mount_dir = "/csi"
      }

      resources {
        memory = 64
        cpu    = 100
      }
    }
  }
}

Node plugin:

job "csi-nfs-plugin" {
  datacenters = ["home"]
  type        = "system" # ensures that all nodes in the DC have a copy

  group "nfs" {
    restart {
      interval = "30m"
      attempts = 10
      delay    = "15s"
      mode     = "fail"
    }

    task "plugin" {
      driver = "docker"

      config {
        image = "registry.k8s.io/sig-storage/nfsplugin:v4.3.0"
        args = [
          "--v=5",
          "--nodeid=${attr.unique.hostname}",
          "--endpoint=unix:///csi/csi.sock",
          "--drivername=nfs.csi.k8s.io",
        ]
        # node plugins must run as privileged jobs because they
        # mount disks to the host
        privileged = true
      }

      csi_plugin {
        id        = "nfs"
        type      = "node"
        mount_dir = "/csi"
      }

      resources {
        memory = 100
        cpu    = 200
      }
    }
  }
}
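
Once both jobs are running, the plugin should show up as healthy; the output below is abbreviated, and the node count is just an example from a three-node cluster:

$ nomad plugin status nfs
ID                   = nfs
Provider             = nfs.csi.k8s.io
Controllers Healthy  = 1
Controllers Expected = 1
Nodes Healthy        = 3
Nodes Expected       = 3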

Example volume:

plugin_id = "nfs"
type      = "csi"
id        = "nginx"
name      = "NGINX"

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}

context {
  server           = "storage.home"
  share            = "/volume2/homelab"
  subDir           = "nginx/content"
  mountPermissions = "0"
}

mount_options {
  fs_type     = "nfs"
  mount_flags = ["timeo=30", "vers=4.1", "nolock", "sync"]
}

Example job using the volume:

job "nginx" {
  datacenters = ["home"]
  type        = "service"

  group "nginx" {
    constraint {
      attribute = "${node.class}"
      value     = "compute"
    }

    restart {
      attempts = 3
      delay    = "1m"
      mode     = "fail"
    }

    network {
      mode = "bridge"

      port "envoy_metrics" { to = 9102 }
    }

    service {
      name = "nginx"
      port = 80

      check {
        type     = "http"
        path     = "/alive"
        interval = "10s"
        timeout  = "2s"
        expose   = true # required for Connect
      }

      tags = [
        "traefik.enable=true",
        "traefik.consulcatalog.connect=true",
        "traefik.http.routers.nginx.rule=Host(`www.example.domain`)",
        "traefik.http.routers.nginx.entrypoints=inet-websecure",
      ]

      meta {
        envoy_metrics_port = "${NOMAD_HOST_PORT_envoy_metrics}" # make the Envoy metrics port available in Consul
      }

      connect {
        sidecar_service {
          proxy {
            config {
              protocol                   = "http"
              envoy_prometheus_bind_addr = "0.0.0.0:9102"
            }
          }
        }

        sidecar_task {
          resources {
            cpu    = 50
            memory = 48
          }
        }
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "nginx:latest"

        volumes = ["local:/etc/nginx/conf.d"]
      }

      template {
        data          = file("default.conf")
        destination   = "local/default.conf"
        change_mode   = "signal"
        change_signal = "SIGHUP"
      }

      resources {
        memory = 50
        cpu    = 50
      }

      volume_mount {
        volume      = "nginx"
        destination = "/usr/share/nginx/content"
      }
    }

    volume "nginx" {
      type            = "csi"
      source          = "nginx"
      access_mode     = "single-node-writer"
      attachment_mode = "file-system"
    }
  }
}

There is the RocketDuck CSI NFS plugin, which is a very good option since it targets Nomad specifically.

One big advantage over the Kubernetes NFS CSI driver is that the RocketDuck plugin creates a subdirectory in the NFS share for each volume, so you only need to create the volume with Nomad; see the sketch below.
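
As a sketch, creating such a volume would look roughly like this; the id, name, and capability are illustrative, and plugin_id assumes the plugin was registered as "nfs":

# app-volume.hcl -- illustrative spec for `nomad volume create`
id        = "app-data"
name      = "app-data"
type      = "csi"
plugin_id = "nfs"

capability {
  access_mode     = "multi-node-multi-writer"
  attachment_mode = "file-system"
}

Running nomad volume create app-volume.hcl should then create the backing subdirectory on the share for you.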

RocketDuck is indeed very lightweight and works well. It also has a nice feature (which I asked for :)): it supports subvolumes.
Another option is democratic-csi with a TrueNAS server. I have a sample job (for the controller and nodes) here

Hi livioribeiro,

I'm encountering an issue when trying to create snapshots using the rocketduck/csi-plugin-nfs:

    Error snapshotting volume: Unexpected response code: 500 (1 error occurred: * plugin "nfs-data" does not support snapshot)