Hello,
I am trying to mount a folder from the host (/tmp/volume) into an exec driver job.
The folder is created in the job's allocation, but the files from the host are not inside it. The host folder is owned by the same user the task runs as, and "other" has rwx permissions too.
The exact same configuration works with the docker driver.
nomad.hcl:
host_volume "kafka" {
  path      = "/tmp/volume"
  read_only = true
}
nomad job:
volume "kafka" {
  type      = "host"
  read_only = true
  source    = "kafka"
}

volume_mount {
  volume      = "kafka"
  destination = "local/volume/"
  read_only   = true
}
Let me know if there is any other way I can get access to the host's rootfs so the job can read and modify data there, for job persistence. I tried with rootfs_env, but that only creates a copy of the rootfs inside the allocation; I cannot modify the host's files with it. After that I tried volumes, and I ran into this problem.
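My assumption is that for persistence the host volume itself would have to be writable, so on the client something like this instead of read_only = true (please correct me if I am wrong):

client {
  enabled = true

  host_volume "kafka" {
    path      = "/tmp/volume"
    read_only = false  # writable, so the task can modify the host files
  }
}

I assume the volume and volume_mount stanzas in the job would then need read_only = false as well.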
I checked the docs and didn't find an answer there either.
Nomad v1.3.5
Hi @dtzampanakis, it's difficult to diagnose what the issue may be without a complete picture of your job specification. I was able to get this simple example working; perhaps you can use it as a reference for your own job.
job file:
job "example" {
datacenters = ["dc1"]
type = "batch"
group "group" {
volume "test" {
type = "host"
source = "test"
read_only = true
}
task "task" {
driver = "exec"
config {
command = "/usr/bin/bash"
args = ["-c", "tree ${NOMAD_TASK_DIR}"]
}
volume_mount {
volume = "test"
destination = "${NOMAD_TASK_DIR}/volume"
read_only = true
}
}
}
}
client config:
client {
  enabled = true

  host_volume "test" {
    path      = "/tmp/volume"
    read_only = true
  }
}
result:
➜ nomad alloc logs 77
/local
└── volume
└── hi.txt
Thanks seth hoenig for your time. There is still something I don't understand correctly. Yes, the tree example above works. But when I navigate to the alloc dir on the host, at /opt/nomad/data/alloc/12312/local/volume, I cannot see the file there. This is what confused me.
I run a Kafka setup with Nomad, and the installation lives inside the alloc dir. The only persistent data I want is the Kafka logs, which I volume_mount from the host's /data/kafka-storage dir with
host_volume "kafka" {
  path      = "/data/kafka-storage"
  read_only = false
}
Now, with this volume_mount in the job,
volume_mount {
  volume      = "kafka"
  destination = "${NOMAD_TASK_DIR}/volume/kafka-storage"
  read_only   = false
}
the volume/kafka-storage directory is there inside the allocation, but on the host /data/kafka-storage remains empty. If I purge the job and run it again, the data is still there, but I cannot find it anywhere in the host's filesystem. I am confused because it doesn't work like this with the docker driver, where I can see the volumes outside of Nomad.
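For completeness, here is roughly how I believe the writable wiring has to fit together (the group and task names below are just placeholders, and the real task config is trimmed down):

group "kafka" {
  # Group-level claim on the "kafka" host volume defined in the client config above.
  volume "kafka" {
    type      = "host"
    source    = "kafka"
    read_only = false
  }

  task "kafka" {
    driver = "exec"

    config {
      # Placeholder command; the real task runs the Kafka start script here.
      command = "/usr/bin/env"
    }

    # Task-level mount; my expectation is that writes here land in /data/kafka-storage on the host.
    volume_mount {
      volume      = "kafka"
      destination = "${NOMAD_TASK_DIR}/volume/kafka-storage"
      read_only   = false
    }
  }
}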
I just want to be sure that the persistence will be fine.
Is there something I am missing?