GCP Persistent Disk CSI on demand

Hi,

I have read through the following issue posts:

From what I understand, when running a job that requires a GCP persistent disk, we need to register the CSI volume first. My question is: can we provision the disk dynamically, on demand, in our jobspec instead of pre-creating the volume? From the two posts above, I do see we need to create the volume first with the external_id. I want to know if there is a way we can just define our volume in our job spec without pre-creating the disk using Terraform or gcloud, assuming the Docker driver and CSI plugin have been set up correctly on the agent node.
Can we omit the external_id and let Nomad auto-provision based on where the task is running? For example, if the task is scheduled on a node in us-west1-b under GCP project A, the volume would then be created in the same region, zone, and project as the deployed task.

I'm not sure if I'm also missing something: how do I specify the disk size on demand when using the GCE PD CSI plugin? Say I want to provision a 100GB volume for MySQL, and then Kafka needs 200GB. Instead of pre-creating the disks, I would like to know if dynamic provisioning with different fs_type, disk_type, and size is possible.

disk_type as in pd-balanced, pd-ssd or pd-standard

Job Spec

group "mysql" {
  volume {
    type = "csi"
    id = "mysql"
    name = "mysql"
    access_mode = "single-node-writer"
    attachment_mode = "file-system"
    per_alloc = true
    plugin_id = "gcepd"
    mount_options {
       fs_type = "xfs"
       mount_flags = ["noatime"]
    }
  }

  task "mysql-server" {
    driver = "docker"
    volume_mount {
      volume = "mysql"
      destination = "/var/lib/mysql"
      read_only = false
    }
  }
}

Hi @josephlim75 :wave:

Dynamic volume provisioning from a jobspec is not supported.

Starting in Nomad 1.1.0, however, you can use Nomad to provision CSI volumes via the CLI or API. One reason for this two-step process is that CSI volume provisioning can be unreliable under some conditions.
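As a sketch, you would write a volume specification file and register it with `nomad volume create` before running the job. The field names below follow Nomad's volume specification; the `parameters` keys are defined by the GCE PD CSI plugin itself, so the `type` key for the disk type (pd-standard, pd-balanced, pd-ssd) is an assumption here and should be checked against the plugin's documentation:

```hcl
# volume-mysql.hcl -- example volume specification for `nomad volume create`
id        = "mysql"
name      = "mysql"
type      = "csi"
plugin_id = "gcepd"

# requested disk size; use 200GiB in a separate spec for the Kafka volume
capacity_min = "100GiB"
capacity_max = "100GiB"

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}

mount_options {
  fs_type     = "xfs"
  mount_flags = ["noatime"]
}

# plugin-specific parameters passed through to the CSI driver;
# "type" is assumed to match the GCE PD driver's disk-type parameter
parameters {
  type = "pd-ssd"
}
```

Then run `nomad volume create volume-mysql.hcl`, and the jobspec's `volume` block can reference it by `id` without an `external_id`, since the plugin provisions the disk for you.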

Creating volumes on demand during job scheduling could also slow down the scheduler, impacting other jobs.

Hi @lgfa29, thanks for your prompt response. That makes sense, and at least now I know what my next step is. Thank you!


Hi @lgfa29, it's kind of a pain to not be able to dynamically create the required storage. Are there any improvements coming in the next releases? I'm currently using Nomad 1.3.2.