Hi,

I'm hoping to use AutoFS to manage the mounts in `/smb`. On the host it works fine, but I'm having an issue from inside a running container.
In my client config, I have:

```json
"host_volume": {
  "autofs_smb": {
    "path": "/smb",
    "read_only": false
  }
}
```
In my group config I have:

```hcl
volume "autofs_smb" {
  type      = "host"
  read_only = false
  source    = "autofs_smb"
}
```
and in the task I have:

```hcl
volume_mount {
  volume      = "autofs_smb"
  destination = "/smb"
}
```
I’m using the docker driver.
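Put together, the relevant part of the job spec looks roughly like this (a sketch only; the group and task names and the image are placeholders):

```hcl
group "smb" {
  volume "autofs_smb" {
    type      = "host"
    read_only = false
    source    = "autofs_smb"
  }

  task "app" {
    driver = "docker"

    config {
      image = "alpine:3" # placeholder image
    }

    volume_mount {
      volume      = "autofs_smb"
      destination = "/smb"
    }
  }
}
```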
The symptoms are:

- `ls /smb/servername` in the container mounts the server and I can see the share folders.
- `ls /smb/servername/sharename` in the container returns "No such file or directory".
- `ls /smb/servername/sharename` on the host returns files and folders.
- Restarting the Nomad task and then running `ls /smb/servername/sharename` in the container returns files and folders.
It looks like AutoFS lazy-mounts the shares, and the container isn't picking up that change. That is, `ls /smb/servername` returns the list of local directories that autofs just created, `ls /smb/servername/sharename` then mounts that share at that location, and that new mount isn't "seen" by the container.
I've seen references to switching to Docker bind mounts with `rshared` propagation, but if I can, I'd like to stick to Nomad volumes for the increased security they give me.
Has anyone else come across this? Any hints?
Many thanks,
Geoff
EDIT: I gave up (this post was at the end of several hours of trying) and switched to bind mounts in the job -> group -> task -> config section:
```hcl
mount {
  type     = "bind"
  target   = "/net"
  source   = "/net"
  readonly = false

  bind_options {
    propagation = "rshared"
  }
}
```
and set `driver.docker.volumes.enabled` to `true` on the clients.
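For reference, on current Nomad versions that setting lives in the Docker plugin block of the client agent config. A minimal sketch, assuming an HCL agent config:

```hcl
plugin "docker" {
  config {
    volumes {
      # allow the docker driver to bind-mount host paths into tasks
      enabled = true
    }
  }
}
```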
I would have preferred to keep the Nomad volumes, as they provide better control. But this works, and I’m able to browse the autofs mounts with no issues.