Nomad NFS issue

Hi,
I am using Nomad in a cluster with one server and one client, with this job:

job "promtest" {
  datacenters = ["dc1"]

  group "promtest" {
    count = 1

    task "grafana" {
      driver = "docker"

      config {
        image = "prom/prometheus:v2.16.0"
        volumes = [
          "local/prometheus.yml:/etc/prometheus/prometheus.yml",
          "/mnt/clientshare/docker/prometheus:/prometheus"
        ]
      }
    }
  }
}

The NFS share is mounted on the client. When deploying the Nomad job I am getting the error "volumes are not enabled; cannot mount host paths."
What could be the reason for this? I appreciate your help.

See docs here: Drivers: Docker | Nomad | HashiCorp Developer

You can allow mounting host paths outside of the allocation working directory on individual clients by setting the docker.volumes.enabled option to true in the client’s configuration.
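With the task driver plugin syntax, that would look something like the following in the client's configuration file. This is a sketch; on Nomad versions before 0.9 the same setting is made through the client `options` map instead:

```hcl
# Client configuration: allow the docker driver to bind-mount
# arbitrary host paths into containers (plugin syntax, Nomad 0.9+)
plugin "docker" {
  config {
    volumes {
      enabled = true
    }
  }
}
```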

@KK123 :wave:

First, @pphysch is correct: you can enable volumes for the docker driver and mount arbitrary paths outside of the allocation sandbox into Docker tasks. However, if you are using existing NFS mounts on a host running a Nomad client, you could instead define them as Nomad host volumes. Host volumes can be mounted into running workloads without having to resort to enabling Docker volumes wholesale.

So for your example, I would configure a host volume for Prometheus like so. (You can either add this to your main configuration file, or, if you are pointing Nomad at a configuration directory, create a separate configuration file just for this host volume.)

client {
  host_volume "prometheus" {
    path = "/mnt/clientshare/docker/prometheus"
    read_only = false
  }
}

Restart the Nomad client once you have it configured. You should then see the host volume listed in the output of nomad node status -self and nomad node status -self -verbose on the client hosting the volume.
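For example (output trimmed; the exact columns may differ between Nomad versions):

```shell
$ nomad node status -self -verbose
...
Host Volumes
Name        ReadOnly  Source
prometheus  false     /mnt/clientshare/docker/prometheus
```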

And then use it in the job like this:

job "promtest" {
  datacenters = ["dc1"]

  group "promtest" {
    count = 1

    volume "prometheus" {
      type   = "host"
      source = "prometheus"
    }

    task "grafana" {
      driver = "docker"

      volume_mount {
        volume      = "prometheus"
        destination = "/prometheus"
      }

      config {
        image = "prom/prometheus:v2.16.0"

        mount {
          type     = "bind"
          source   = "local/prometheus.yml"
          target   = "/etc/prometheus/prometheus.yml"
          readonly = true
        }
      }
    }
  }
}

Host volumes work the same way across the exec, java, and docker task drivers. For Docker bind mounts, switching from the volumes attribute to the more expressive mount syntax is also preferred.
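For instance, the same group-level volume could be mounted into an exec task. This is a hypothetical sketch; the task name, command, and args are placeholders, not part of your job:

```hcl
# Hypothetical exec task reusing the same group-level "prometheus" volume
task "backup" {
  driver = "exec"

  volume_mount {
    volume      = "prometheus"
    destination = "/prometheus"
  }

  config {
    command = "/usr/bin/tar"
    args    = ["-czf", "/tmp/prometheus-data.tgz", "/prometheus"]
  }
}
```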

Hopefully this helps!

Regards,
Charlie


Thanks @angrycub.
I can now see the host volume (the NFS share) in nomad node status -self -verbose. However, when I run the job I get an error: failed to create container: API error (400): invalid mount config for type "bind": invalid mount path: 'prometheus' mount path must be absolute.

Darn, I forgot about this: you will also run into an interpolation issue if you try to interpolate the alloc_dir path into the mount's source attribute. For now, the volumes syntax for binding in your configuration should do the trick. Scratch the mount stanza and put back in:

volumes = [ "local/prometheus.yml:/etc/prometheus/prometheus.yml" ]

I will follow up on the interpolation problem to see if there is an existing GitHub issue, and file one if there isn't.

Hopefully this gets you going!

Best,
Charlie

Hi @angrycub,
I am getting a similar error: failed to create container: API error (400): invalid mount config for type "bind": invalid mount path: 'prometheus' mount path must be absolute.

My job file is:

job "promtest" {
  datacenters = ["dc1"]

  group "promtest" {
    count = 1

    volume "prometheus" {
      type   = "host"
      source = "prometheus"
    }

    task "grafana" {
      driver = "docker"

      template {
        change_mode = "noop"
        destination = "local/prometheus.yml"

        data = <<EOH
---
global:
  scrape_interval:     5s
  evaluation_interval: 5s

scrape_configs:
  - job_name: 'nomad_metrics'
    static_configs:
      - targets:
          - ipaddress:4646
    scrape_interval: 5s
    metrics_path: /v1/metrics
    params:
      format: ['prometheus']
EOH
      }

      volume_mount {
        volume      = "prometheus"
        destination = "prometheus"
      }

      config {
        image = "prom/prometheus:v2.16.0"
        volumes = [
          "local/prometheus.yml:/etc/prometheus/prometheus.yml",
        ]
      }
    }
  }
}

Think I spotted it!

The volume mount's destination needs to be an absolute path inside the container, so perhaps you meant /prometheus here.
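That is, something like:

```hcl
volume_mount {
  volume      = "prometheus"
  # destination must be an absolute path inside the container
  destination = "/prometheus"
}
```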

Fingers crossed!
Charlie

Working, thank you very much @angrycub!

Is it possible to mount a sub-directory of the host volume into the container? I can’t seem to figure out the syntax for this.
