Incorrect memory stats on Raspberry Pi 4 (Ubuntu)

A couple of days ago I reported that my Nomad cluster was not showing memory stats. Enabling cgroups seemed to fix the issue, but after a day or so I started sizing the memory of my jobs according to the readings I was getting from Nomad, and I noticed some tasks failing. The error varied depending on the task, but there was nothing obvious on the Nomad side. After digging a bit more I started tailing dmesg on the nodes and found messages like this:

[99929.267165] Memory cgroup out of memory: Killed process 295522 (java) total-vm:3493160kB, anon-rss:291648kB, file-rss:15020kB, shmem-rss:0kB, UID:911 pgtables:920kB oom_score_adj:0
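For reference, this is roughly how I've been pulling the RSS figure out of those dmesg lines to compare against what Nomad reports (just a quick one-liner of my own, shown here against the line above):

```shell
# Sample cgroup OOM kill line from dmesg (copied from the node)
line='[99929.267165] Memory cgroup out of memory: Killed process 295522 (java) total-vm:3493160kB, anon-rss:291648kB, file-rss:15020kB, shmem-rss:0kB, UID:911 pgtables:920kB oom_score_adj:0'

# Extract the anonymous RSS (in kB) the process held at kill time
rss_kb=$(echo "$line" | grep -o 'anon-rss:[0-9]*' | cut -d: -f2)
echo "$rss_kb"
```

On a live node I pipe `dmesg` through the same `grep`/`cut` instead of using a saved line.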

It appears to me that the tasks are being killed because of high memory usage, but this OOM event is never reported in the Nomad UI, and the memory stats are, coincidentally, quite a bit lower than what I normally see on the amd64 counterparts.

Does anyone have any idea what's going on here?

Some details of my cluster:
Raspberry Pi 4 8GB (arm64) x 3 (running on SSD)
Ubuntu 20.04.2
Nomad 1.2.0