Linux shared memory (/dev/shm) for tasks under the exec driver

We are trying to run a non-containerized workload that uses Linux shared memory in /dev/shm via Nomad, using the exec driver.

By default, the exec driver allocates only 64M of shared memory, but the app needs much more. For the docker driver this is controlled by the task's shm_size parameter. How do we do the same for exec jobs, given that containerization is not an option for our use case?
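For contrast, this is roughly what the docker driver equivalent looks like (the image name and size are illustrative, not from our setup):

```hcl
task "app" {
  driver = "docker"

  config {
    image    = "example/app:latest"  # placeholder image
    shm_size = 1073741824            # size of /dev/shm in bytes (1 GiB here)
  }
}
```

There does not appear to be a matching knob in the exec driver's task config, which is what prompted this question.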

We would obviously prefer not to do this via the raw_exec driver. Solutions seem to suggest modifying /etc/fstab on the client host. Is that the case?
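For reference, the /etc/fstab approach people suggest looks like this (the size value is illustrative):

```
# /etc/fstab on the client host - size value is illustrative
tmpfs  /dev/shm  tmpfs  defaults,size=8G  0  0
```

followed by `mount -o remount /dev/shm`. Note this changes the host's own /dev/shm, which would matter for raw_exec; whether it affects the separate tmpfs the exec driver mounts inside the task's isolation is exactly what is unclear here.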

Attached below are the mount and df outputs. Can someone kindly suggest a path?

Output of the “mount” command from inside the task's exec environment:

/dev/mapper/appvg-appvol on / type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
..
tmpfs on /secrets type tmpfs (rw,noexec,relatime,size=1024k)

shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
..

The shm mount was created with a default size of 65536K.

df output - /dev/shm is 64M:

Filesystem                Size  Used  Avail  Use%  Mounted on
/dev/mapper/appvg-appvol  600G  270G   330G   45%  /
tmpfs                     1.0M  4.0K  1020K    1%  /secrets
tmpfs                     1.0M     0   1.0M    0%  /private
tmpfs                      20G     0    20G    0%  /dev
shm                        64M     0    64M    0%  /dev/shm
tmpfs                      20G     0    20G    0%  /sys/firmware

Looking into it further:

The 64M size seems to be hard-coded in the configureIsolation method. Is that correct?

Are there alternative approaches?

  • using the exec2 driver
  • can we specify it as a host volume and configure an environment var?
  • also going to look at raw_exec
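For the host-volume idea above, a sketch of what we had in mind (names are placeholders; whether a volume_mount can actually override the exec driver's own /dev/shm tmpfs is something we would still have to verify):

```hcl
# Client config (assumption: exposing the host's /dev/shm as a host volume)
client {
  host_volume "host-shm" {
    path      = "/dev/shm"
    read_only = false
  }
}
```

```hcl
# Job side: request the volume and mount it over the task's /dev/shm
group "app" {
  volume "shm" {
    type      = "host"
    source    = "host-shm"
    read_only = false
  }

  task "app" {
    driver = "exec"

    volume_mount {
      volume      = "shm"
      destination = "/dev/shm"
    }

    config {
      command = "/usr/local/bin/app"  # placeholder for the real binary
    }
  }
}
```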

Thanks.

We actually used the raw_exec driver and ran these processes without isolation, so they access the host's /dev/shm as is.
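Roughly, the task stanza looked like this (the command is a placeholder, and raw_exec has to be enabled on the client first):

```hcl
# Client config must enable the driver:
# plugin "raw_exec" { config { enabled = true } }

task "app" {
  driver = "raw_exec"  # no mount isolation: the task sees the host's /dev/shm

  config {
    command = "/usr/local/bin/app"  # placeholder for the real binary
  }
}
```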

This post/answer was immensely helpful. We used Nomad 1.10.5, which addressed the issue.

We also needed a poststop task to reap leftover child processes, since raw_exec leaves some child procs behind when the task stops.
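A sketch of that poststop reaper (the pkill pattern is a placeholder for the real process names):

```hcl
task "reaper" {
  driver = "raw_exec"

  # Run only after the main task has stopped
  lifecycle {
    hook = "poststop"
  }

  config {
    command = "/bin/sh"
    # Placeholder pattern - match the actual leftover child processes here
    args = ["-c", "pkill -f 'app-worker' || true"]
  }
}
```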

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.