Difference between volume_mount and volumes for a docker container

Hi folks, a question about the behaviour described in the subject. In the past we used a glusterfs mount to give apps some static storage: the glusterfs brick was mounted on all nodes, and we used the “volumes” block in the Docker driver config to mount it into the container (no host volumes). This worked fine; file permissions and ownership were as expected (Docker re-mapped the uid/gid).
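For context, a minimal sketch of what that old setup looked like (the image name and paths are hypothetical), assuming the glusterfs brick is already mounted at /mnt/gluster on every client:

job "app" {
  group "app" {
    task "app" {
      driver = "docker"

      config {
        image = "example/app:latest"

        # Docker-driver bind mount of the host path where the glusterfs
        # brick lives; binding absolute host paths like this requires
        # docker.volumes.enabled in the client's plugin config.
        volumes = [
          "/mnt/gluster/app-data:/data"
        ]
      }
    }
  }
}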

Recently we changed this to sets of NFS mounts defined as host volumes, and replaced the original ‘volumes’ configuration with a volume_mount block. This also works great; however, Docker’s userns remapping doesn’t seem to be applied, for some reason. Files created on the mounted host volume are owned by the uid/gid that the app in the container runs as (which often turns out to be root:root).
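The new setup looks roughly like this (a sketch; volume names and paths are hypothetical). On the client, the NFS mount is registered as a host volume:

client {
  host_volume "app-data" {
    path      = "/mnt/nfs/app-data"
    read_only = false
  }
}

And in the job, the group claims the volume and the task mounts it:

job "app" {
  group "app" {
    volume "data" {
      type      = "host"
      source    = "app-data"
      read_only = false
    }

    task "app" {
      driver = "docker"

      config {
        image = "example/app:latest"
      }

      # Task-driver-agnostic mount of the claimed host volume.
      volume_mount {
        volume      = "data"
        destination = "/data"
        read_only   = false
      }
    }
  }
}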

I couldn’t find anything in the documentation, so I’m wondering if this is the expected behaviour?


Hi @benvanstaveren :wave:

I haven’t used Docker’s userns_mode before, but have you tried using volume_mount and volumes together?

volume_mount is a task-driver-agnostic way to mount host volumes into tasks. volumes is a Docker-specific configuration to mount Docker volumes into your containers.

I think you can use both: use volume_mount to mount the host volume somewhere in the task directory (like local/host), and then mount this path in your container using

volumes = ["local/host:/path/inside/container"]
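Putting the two together, something like this (an untested sketch; the volume name and paths are hypothetical):

job "app" {
  group "app" {
    volume "data" {
      type   = "host"
      source = "app-data"
    }

    task "app" {
      driver = "docker"

      # First mount the host volume into the task directory...
      volume_mount {
        volume      = "data"
        destination = "local/host"
      }

      config {
        image = "example/app:latest"

        # ...then bind that task-relative path into the container,
        # so the Docker driver treats it like any other volume.
        volumes = [
          "local/host:/path/inside/container"
        ]
      }
    }
  }
}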

As I mentioned, I’m not sure if it will work, but give it a try and let me know :grinning_face_with_smiling_eyes:

Late to the party, but we solved it by just anon-squashing everything on the NFS side of things; maybe not the most ideal solution, but it seems to work :slight_smile:
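For anyone finding this later, the squash is just a regular NFS export option; something like this on the server (the export path, client network, and anon ids are hypothetical):

# all_squash maps every client uid/gid to the anonymous user, so file
# ownership on the export no longer depends on the container's uid/gid
/export/app-data  10.0.0.0/24(rw,all_squash,anonuid=1000,anongid=1000)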