Azure Files CSI volume on Nomad

Hi, I’m trying to use an Azure storage account container as the storage backend for some of my containers, but it doesn’t seem to be working: the controller reports “resource not found” for some reason. I also couldn’t find any example of mounting an Azure Files container on Nomad.

Controller job:
job "azurefiles_csi" {

  datacenters = ["ct-stg"]
  region = "global"
  type = "system"

  group "controller" {
    task "controller" {
      driver = "docker"

      template {
        change_mode = "restart"
        destination = "local/azurefiles_csi_config.json"
        data = <<EOH
{
    "cloud":"AzurePublicCloud",
    "tenantId": "",
    "aadClientId": "",
    "aadClientSecret": "",
    "subscriptionId": ""
}
EOH
      }

      config {
        image = "mcr.microsoft.com/k8s/csi/azurefile-csi:latest"
        args = [
          "--nodeid=${attr.unique.hostname}-vm",
          "--endpoint=unix://csi/csi.sock",
          "--logtostderr",
          "--v=5",
        ]

        volumes = [
          "local/azurefiles_csi_config.json:/etc/kubernetes/azure.json"
        ]

        privileged = true
      }

      csi_plugin {
        id        = "azurefiles_csi"
        type      = "monolith"
        mount_dir = "/csi"
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}

Volume config file:

id              = "devcontainercsi"
name            = "devcontainercsi"
type            = "csi"
external_id     = "storage account resource ID"
plugin_id       = "azurefiles"
access_mode     = "single-node-writer"
attachment_mode = "block-device"

Hi @therealhanlin :wave:

Does this happen when you try to register the volume (nomad volume register)? Does the volume already exist? Maybe check if the Nomad client has the proper Azure permissions as well.

Once you have the volume registered, you can use the volume block in your group and volume_mount in your tasks.
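As a minimal sketch, assuming a volume registered with the id "devcontainercsi" from your spec above (the group/task names here are placeholders):

```hcl
group "app" {
  # Claim the registered CSI volume for this group.
  volume "data" {
    type            = "csi"
    source          = "devcontainercsi" # must match the registered volume id
    access_mode     = "single-node-writer"
    attachment_mode = "file-system" # must match what the volume was registered with
  }

  task "server" {
    driver = "docker"

    # Mount the claimed volume into the task's filesystem.
    volume_mount {
      volume      = "data"
      destination = "/srv/data"
    }
  }
}
```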

Check out our E2E test suite for an example. It’s not for Azure, but once a volume is registered, they all work the same way.

Hi,

Thanks for your help.

I’m not registering an Azure managed disk, though. I’m trying to mount a storage account container, and I don’t know if that’s the same procedure. Does Nomad support mounting shared file storage like Azure Files or AWS EFS?

Thanks.

Yes, if the volume is mounted on the client, you can use a host volume to make it available to your task.
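As a rough sketch of the host volume route (all names here are placeholders): mount the SMB share on the client host first (e.g. via /etc/fstab), expose that path in the client config, and then claim it from the job:

```hcl
# client.hcl — Nomad client configuration.
# Expose a directory where the host OS has already mounted the share.
client {
  host_volume "azurefiles" {
    path      = "/mnt/azurefiles"
    read_only = false
  }
}

# job file — claim the host volume and mount it into the task.
group "app" {
  volume "azurefiles" {
    type      = "host"
    source    = "azurefiles" # must match the host_volume name above
    read_only = false
  }

  task "server" {
    driver = "docker"

    volume_mount {
      volume      = "azurefiles"
      destination = "/mnt/share"
    }
  }
}
```

With this approach Nomad never talks to Azure at all; the mount is entirely the host’s responsibility.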

You can also use a CSI plugin for these storage services, but you would need to write a Nomad job for it based on the Kubernetes spec. Here’s an example for Azure Disks: Error registering csi volume - Azure Disk · Issue #7812 · hashicorp/nomad · GitHub, and this is the Kubernetes plugin for Azure Files: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/v1.2.0/csi-azurefile-controller.yaml#L128-L173

I’ve spun up the controller and node, and they seem to be working fine. I registered a volume with the config below.

id              = "Dev_Container_Storage#devcontainerstorage604#raspicsi"
name            = "devcontainercsi"
type            = "csi"
plugin_id       = "azurefiles_csi"
access_mode     = "single-node-writer"
attachment_mode = "block-device"

parameters {
  storageAccount = "devcontainerstorage604"
  protocol = "smb"
  resourceGroup = "Dev_Container_Storage"
  shareName = "raspicsi"
}
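One thing that may be relevant: on Kubernetes, the azurefile driver reads the storage account credentials from a secret with the keys azurestorageaccountname and azurestorageaccountkey. The Nomad equivalent is the secrets block in the volume spec, which is passed through to the plugin. A hedged sketch (the key names are taken from the Kubernetes driver, so treat them as assumptions):

```hcl
# Added to the volume spec alongside the parameters block above.
secrets {
  azurestorageaccountname = "devcontainerstorage604"
  azurestorageaccountkey  = "<storage account key>"
}
```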

I tried to mount it into a container, but it failed with the error:

failed to setup alloc: pre-run hook "csi_hook" failed: node plugin returned an internal error, check the plugin allocation logs for more information: rpc error: code = Internal desc = volume(Dev_Container_Storage#devcontainerstorage604#raspicsi) mount "//devcontainerstorage604.file.core.windows.net/raspicsi" on "/csi/staging/Dev_Container_Storage#devcontainerstorage604#raspicsi/rw-block-device-single-node-writer" failed with mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,actimeo=30,mfsymlinks,file_mode=0777, //devcontainerstorage604.file.core.windows.net/raspicsi /csi/staging/Dev_Container_Storage#devcontainerstorage604#raspicsi/rw-block-device-single-node-writer
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

Container job:

job "owncloud" {
  datacenters = ["raspi-dc01"]
  type = "service"

  group "owncloud" {

    network {
      port "http" {}
    }

    volume "http_dir" {
      type            = "csi"
      source          = "Dev_Container_Storage#devcontainerstorage604#raspicsi"
      access_mode     = "single-node-writer"
      attachment_mode = "block-device"
    }

    task "server" {
      driver = "docker"

      config {
        image = "owncloud/server"
        // args  = ["-text", "hello world"]
        ports = ["http"]
      }
      service {
        check {
          type     = "http"
          port     = "http"
          path     = "/"
          interval = "5s"
          timeout  = "2s"
        }
      }

      volume_mount {
        volume      = "http_dir"
        destination = "/mnt"
      }
    }
  }
}

This seems more like a CSI plugin issue, so it’s outside the scope of my knowledge :sweat_smile:

A quick Google search led me to this: 1833437 – Red Hat CoreOS unable to mount Azure File share

The root cause seems to be the Nomad client not being properly set up to reach the Azure private network:

This may be an issue with the OpenShift nodes not being able to reach the Azure Files shares.

Let me set up private endpoints in the subnet that these worker nodes are in and test.

I don’t know enough about Azure, but does this help?

I spun up a test container in Azure and mounted the Azure File Share manually, and it worked fine. It seems my ISP is blocking SMB traffic.

Man, this generic error message generated by mount.cifs kinda sucks…

Anyway, thanks a lot for helping out, man! I’ll figure out something else to make it work.
