Production hardening - leveraging memory locking in Kubernetes

I am busy with Vault production hardening and am trying to get memory locking enabled and working in a k3d cluster running on an Ubuntu VM. I have run into various issues and would like some input on this. I have referenced the HashiCorp documentation (Vault on Kubernetes security considerations | Vault | HashiCorp Developer). I am using the Vault Helm chart "0.13.0" to manage Vault in the cluster.

I observed the following behavior while attempting this change:

  • I updated the Vault Helm chart values to specify this configuration:

securityContext:
  capabilities:
    add: [ "IPC_LOCK" ]

  • I added "disable_mlock = false" to the extra-config values property list.
  • I had to manually edit the Helm chart and comment out the line with "disable_mlock = true" in "/vault-helm-0.13.0/templates/server-config-configmap.yaml", since this value always took precedence over the extra-config values supplied.
  • Changed ulimits on my VM to set the maximum lockable memory to unlimited.
  • Restarted and redeployed Vault, but Vault fails to start and logs this error in the container logs:
    Error initializing core: Failed to lock memory: cannot allocate memory
    This usually means that the mlock syscall is not available.
    Vault uses mlock to prevent memory from being swapped to
    disk. This requires root privileges as well as a machine
    that supports mlock. Please enable mlock on your system or
    disable Vault from using it. To disable Vault from using it,
    set the disable_mlock configuration option in your configuration
    file.
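
For reference, the combined values override I ended up with looks roughly like the sketch below. The exact key names (in particular `server.extraConfig` and where the securityContext lands) are assumptions on my part and should be verified against the 0.13.0 chart's values.yaml:

```yaml
# values.yaml override (key layout is an assumption; check against the
# vault-helm 0.13.0 chart's values.yaml before applying)
server:
  # Extra HCL appended to the server config stanza
  extraConfig: |
    disable_mlock = false
  # Grant the capability mlock needs instead of running as root
  securityContext:
    capabilities:
      add: [ "IPC_LOCK" ]
```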

I am not sure what I am missing to get memory locking enabled correctly. The message above implies that Vault needs to run as root in order to use memory locking, but that goes against the other recommendation not to run as root.
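
One thing I have tried to rule out is a low RLIMIT_MEMLOCK inside the container itself, since the limit seen by the Vault process (not the host shell) is what matters, and the container runtime under k3d applies its own default. A quick check, assuming the pod is named vault-0 (adjust to your release name):

```shell
# Effective max-locked-memory limit for the current shell,
# in kbytes, or "unlimited" if no cap is set
ulimit -l

# The same check inside the running pod is the one that counts:
# kubectl exec vault-0 -- sh -c 'ulimit -l'
```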

Any ideas on what I am missing here or comments on similar behavior encountered would be very much appreciated.