We are trying to run a non-containerized workload that uses Linux shared memory (/dev/shm) via Nomad's exec driver.
By default the exec driver allocates only 64M for shared memory, but the app needs much more. For the docker driver this is controlled by the shm_size task parameter. How do we do the same for exec jobs, given that containerization is not an option for our use case?
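For comparison, this is roughly what the docker driver equivalent would look like (job, group, task, and image names here are placeholders, not our actual spec):

```hcl
job "app" {
  group "app" {
    task "app" {
      driver = "docker"

      config {
        image    = "app:latest"
        # shm_size is specified in bytes; 268435456 = 256 MiB
        shm_size = 268435456
      }
    }
  }
}
```

We are looking for the exec-driver equivalent of that shm_size knob.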
Obviously we would prefer not to fall back to the raw_exec driver. The solutions I have found so far suggest modifying /etc/fstab on the client host. Is that really the case?
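For reference, the /etc/fstab-based suggestion I came across looks like the following (1G is just an example size, not a recommendation); it is not clear to me whether such a host-level change would even propagate into the exec task's own mount namespace, which is part of my question:

```shell
# On the Nomad client host: remount /dev/shm with a larger size (immediate effect)
sudo mount -o remount,size=1G /dev/shm

# Or persist across reboots via an /etc/fstab entry:
# tmpfs  /dev/shm  tmpfs  defaults,size=1G  0  0
```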
Attached below are the mount and df outputs. Can someone kindly suggest a path forward?

Output of the "mount" command from inside the task (via alloc exec):
/dev/mapper/appvg-appvol on / type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
..
tmpfs on /secrets type tmpfs (rw,noexec,relatime,size=1024k)
…
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
..
The shm mount was created with the default size of 65536k.

Output of "df -h", showing /dev/shm at 64M:
Filesystem                Size  Used  Avail  Use%  Mounted on
/dev/mapper/appvg-appvol  600G  270G  330G   45%   /
tmpfs                     1.0M  4.0K  1020K  1%    /secrets
tmpfs                     1.0M  0     1.0M   0%    /private
tmpfs                     20G   0     20G    0%    /dev
shm                       64M   0     64M    0%    /dev/shm
tmpfs                     20G   0     20G    0%    /sys/firmware