I am in the process of learning Nomad, and I have a question about managing storage volumes with an NFS server.

Scenario:

  • I have a single NFS server configured with only one shared directory.
  • I want to encapsulate the storage volumes of each Nomad job into separate folders within this single directory on the NFS server.

Problem:

  • I am not sure how to configure Nomad so that each job has its own subdirectory within the shared directory of the NFS server.

Current Configuration:

  • I have a single mount point on the NFS server and need a solution that allows me to organize the volumes of each job into individual subfolders within this mount point.

Question:

  • Is it possible to configure Nomad so that each job uses a specific subdirectory within the shared NFS directory?
  • What would be the best way to manage these volumes such that each job has its own isolated space within the NFS directory?

Configuration

serverCSI.hcl

datacenter = "dc1"
data_dir = "/opt/nomad/data"

server {
  enabled = true
  bootstrap_expect = 1
}

client {
  enabled = true
  options {
    "docker.privileged.enabled" = "true"
  }
}

 plugin "csi" {
   config {
    enabled = true
 }
}

plugin "docker" {
  config {
    allow_privileged = true
  }
}

nfs-volume.hcl

type = "csi"
id = "nfs"
name = "nfs"
plugin_id = "nfs"

capability {
  access_mode = "multi-node-multi-writer"
  attachment_mode = "file-system"
}

capability {
  access_mode = "single-node-writer"
  attachment_mode = "file-system"
}

context {
  server = "192.168.10.206"
  share = "/mnt/shared"
}

mount_options {
  fs_type = "nfs"
}

node plugin

job "plugin-nfs-nodes" {
  datacenters = ["dc1"]

  type = "system"

  group "nodes" {
    task "plugin" {
    driver = "docker"
    config {
      image = "mcr.microsoft.com/k8s/csi/nfs-csi:latest"
      args = [
        "--endpoint=unix://csi/csi.sock",
        "--nodeid=${attr.unique.hostname}",
        "--logtostderr",
        "--v=5",
      ]
      privileged = true
    }

    csi_plugin {
      id = "nfs"
      type = "node"
      mount_dir = "/csi"
    }

    resources {
      cpu = 250
      memory = 128
    }
  }
 }
}

controller plugin

job "plugin-nfs-controller" {
  datacenters = ["dc1"]

  group "controller" {
    task "plugin" {
    driver = "docker"

    config {
      image = "mcr.microsoft.com/k8s/csi/nfs-csi:latest"
      args = [
        "--endpoint=unix://csi/csi.sock",
        "--nodeid=${attr.unique.hostname}",
        "--logtostderr",
        "-v=5",
      ]
    }

    csi_plugin {
      id = "nfs"
      type = "controller"
      mount_dir = "/csi"
    }

    resources {
      cpu = 250
      memory = 128
    }
  }
 }
}

test jobs
RabbitMQ

job "rabbit" {
  datacenters = ["dc1"]
  type = "service"
  group "server" {
    count = 1
    volume "test" {
      type = "csi"
      source = "nfs"
      access_mode = "multi-node-multi-writer"
      attachment_mode = "file-system"
    }
    task "rabbitmq" {
      user = "nobody"
      driver = "docker"
      config {
        image = "rabbitmq:latest"  
      }
      volume_mount {
        volume = "test"
        destination = "/var/lib/rabbitmq"
      }
    }
  }
}

PostgreSQL

job "postgres8" {
  datacenters = ["dc1"]
  group "postgres8" {
    count = 1
		
    volume "postgres8" {
      type = "csi"
      read_only = false
      source = "nfs"
      attachment_mode = "file-system"
      access_mode = "multi-node-multi-writer"
    }
    
    network {
      port "db" {
        static = 5790
        to = 5432
      }
    }

  

    task "postgres8" {
       user   = "nobody"
       volume_mount {
        volume = "postgres8"
        destination = "/var/lib/postgresql/data"
        read_only = false
      }
      env {
        POSTGRES_DB       = "testDB"
        POSTGRES_USER     = "testUser"
        POSTGRES_PASSWORD = "testPassword"
      }

      driver = "docker"

      config {
        
        image = "postgres:15-alpine"
        ports = ["db"]
      }
    }
  }
}

On the NFS server:

root@w3v59-transversales-p1-test-2:/mnt/shared# ls
PG_VERSION  global  pg_commit_ts  pg_hba.conf    pg_logical    pg_notify    pg_serial     pg_stat      pg_subtrans  pg_twophase  pg_xact               postgresql.conf  postmaster.pid
base        mnesia  pg_dynshmem   pg_ident.conf  pg_multixact  pg_replslot  pg_snapshots  pg_stat_tmp  pg_tblspc    pg_wal       postgresql.auto.conf  postmaster.opts

Conclusion:

  • As the listing above shows, both jobs are writing into the same directory (the PostgreSQL data files sit alongside RabbitMQ's mnesia directory), so this configuration does not seem to be a good way to manage volumes on an NFS server.
  • I am looking for a configuration that encapsulates each job's data in its own folder while still using a single registered NFS server.

I would appreciate any guidance or configuration examples that could help me achieve this. Thank you!


Hi, you might be interested in Facilitate the creation of CSI Volumes within a Jobspec · Issue #11195 · hashicorp/nomad · GitHub.

Bottom line: for now, you need to create the volumes yourself. Instead of registering a single name = "nfs" volume, register one volume per job, each pointing at its own subdirectory of the share, and then have each job reference its own volume source, as sketched below.
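For example, a per-job volume spec might look like the following. This is only a sketch derived from your nfs-volume.hcl: nfs-rabbit is an illustrative ID, and the /mnt/shared/rabbit subdirectory must already exist on the NFS server, since nomad volume register does not create directories.

# nfs-rabbit.hcl -- register with: nomad volume register nfs-rabbit.hcl
type      = "csi"
id        = "nfs-rabbit"
name      = "nfs-rabbit"
plugin_id = "nfs"

capability {
  access_mode     = "multi-node-multi-writer"
  attachment_mode = "file-system"
}

context {
  server = "192.168.10.206"
  share  = "/mnt/shared/rabbit"  # per-job subdirectory instead of the export root
}

mount_options {
  fs_type = "nfs"
}

The rabbit job would then set source = "nfs-rabbit" in its volume block, and a second spec (say, nfs-postgres8 with share = "/mnt/shared/postgres8") would back the postgres8 job.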

I understand, but isn't there a built-in way to do this? I find it tedious to create the folder and then register the volume for every job. Is there a simpler way?

Create a directory and just mount that NFS directory. Here I shared how to mount NFS; the whole thread there might be of interest to you. A sketch of this approach follows.
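Assuming /mnt/shared is already NFS-mounted on each client host (for example via /etc/fstab) and the per-job subdirectories exist, Nomad host volumes can expose them to jobs; the names and paths below are illustrative.

# client agent config -- one host_volume block per job
client {
  enabled = true

  host_volume "rabbit" {
    path      = "/mnt/shared/rabbit"
    read_only = false
  }

  host_volume "postgres8" {
    path      = "/mnt/shared/postgres8"
    read_only = false
  }
}

# in the job file, the group then requests the host volume instead of the CSI one:
volume "test" {
  type      = "host"
  source    = "rabbit"
  read_only = false
}

This sidesteps the CSI plugin entirely: each job gets its own subdirectory, at the cost of managing the NFS mount and the directories outside Nomad.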

