Difference between docker stats and Nomad memory

Hi,

On one of my jobs I see a big difference between the memory information Nomad reports and what docker stats shows:

CONTAINER ID   NAME                                           CPU %     MEM USAGE / LIMIT    MEM %     NET I/O          BLOCK I/O         PIDS
8ac1b49b118c   server-76a3a6f0-577b-9c2e-0764-7b514fd8e00f   0.55%     104.9MiB / 300MiB   34.98%    5.21MB / 242MB   2.88MB / 1.59MB   13

And here is the nomad alloc status output for the same allocation:
ID                  = 76a3a6f0-577b-9c2e-0764-7b514fd8e00f
Eval ID             = c4b48c24
Name                = supysonic.supysonic[0]
Node ID             = a7e0fc8c
Node Name           = oscar
Job ID              = supysonic
Job Version         = 14
Client Status       = running
Client Description  = Tasks are running
Desired Status      = run
Desired Description = <none>
Created             = 5h31m ago
Modified            = 5h31m ago

Allocation Addresses (mode = "host")
Label  Dynamic  Address
*http  yes      192.168.1.40:24082 -> 5000

Task "server" is "running"
Task Resources
CPU        Memory           Disk     Addresses
8/100 MHz  296 MiB/300 MiB  300 MiB

Task Events:
Started At     = 2022-05-10T11:27:52Z
Finished At    = N/A
Total Restarts = 0
Last Restart   = N/A

Here is the top output from inside the container:

 PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
  12    10 supysoni S    56492   1%   0   0% /usr/local/bin/python /usr/local/b
  10     1 supysoni S    51868   1%   3   0% {flask} /usr/local/bin/python /usr
   9     1 supysoni S    47408   1%   1   0% /usr/local/bin/python3 -m supysoni
 163     0 supysoni S     1668   0%   2   0% sh
   1     0 supysoni S     1604   0%   3   0% {entrypoint.sh} /bin/sh /entrypoin
 170   163 supysoni S     1600   0%   1   0% top
 216     0 supysoni R     1596   0%   0   0% top

Does anybody know why there is such a huge difference?

I also queried the Docker API and got this information:

  "memory_stats": {
    "usage": 258109440,
    "stats": {
      "active_anon": 671744,
      "active_file": 4812800,
      "anon": 66469888,
      "anon_thp": 0,
      "file": 188764160,
      "file_dirty": 0,
      "file_mapped": 782336,
      "file_writeback": 0,
      "inactive_anon": 65421312,
      "inactive_file": 184328192,
      "kernel_stack": 196608,
      "pgactivate": 757,
      "pgdeactivate": 18,
      "pgfault": 157152,
      "pglazyfree": 37983,
      "pglazyfreed": 56,
      "pgmajfault": 8,
      "pgrefill": 23,
      "pgscan": 11624,
      "pgsteal": 11274,
      "shmem": 0,
      "slab": 2338288,
      "slab_reclaimable": 1679168,
      "slab_unreclaimable": 659120,
      "sock": 0,
      "thp_collapse_alloc": 0,
      "thp_fault_alloc": 0,
      "unevictable": 0,
      "workingset_activate": 0,
      "workingset_nodereclaim": 0,
      "workingset_refault": 0
    },

Could you help me understand?

Hi @vincentDcmps, you can see how the Nomad docker driver gets task resources in the driver's source code.

I’m not sure how that compares with what Docker is reporting.
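
For what it's worth, one thing that can produce exactly this kind of gap: on Linux, docker stats subtracts the page cache (inactive_file under cgroup v2) from the raw cgroup usage before displaying it, while the figure Nomad shows looks like the raw, cache-inclusive usage. I haven't checked the driver code to confirm that, and the two readings above were surely sampled at different moments, but most of the usage in the posted memory_stats JSON is inactive_file, which is the kind of split that would explain the difference. A rough back-of-the-envelope sketch using the posted numbers:

// Back-of-the-envelope only, using the memory_stats values posted above and
// assuming a cgroup v2 host where docker stats displays usage minus
// inactive_file, while the raw usage counts the file cache too.
package main

import "fmt"

func main() {
	const (
		usage        = 258109440 // memory_stats.usage from the JSON above
		inactiveFile = 184328192 // memory_stats.stats.inactive_file
		limit        = 300 << 20 // the task's 300 MiB memory limit
	)

	mib := func(v int) float64 { return float64(v) / (1 << 20) }

	fmt.Printf("raw cgroup usage (cache included):            %.1f MiB\n", mib(usage))
	fmt.Printf("usage - inactive_file (docker stats formula): %.1f MiB\n", mib(usage-inactiveFile))
	fmt.Printf("memory limit:                                 %.1f MiB\n", mib(limit))
}

If that is what is happening, both numbers are "correct"; they just disagree on whether reclaimable file cache counts as the task's memory.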

I'm coming back to this thread after some time.
I use a container with lftp to fetch data from a storage box.

The volume mounted in the container is an NFS mount on the host.

I notice that the memory reported by Nomad increases while using the NFS share (but not in docker stats), and when the memory reaches the limit in Nomad, the transfer rate on NFS drops.

I see the same behaviour when I launch the container directly with Docker.

I have tested with a volume on the local host and I don't have this limitation.
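
One way to check whether the growth Nomad reports during the transfer is application memory or just page cache from the NFS reads is to look at the container's cgroup directly. A minimal sketch, assuming a cgroup v2 host and taking the container's cgroup directory (somewhere under /sys/fs/cgroup) as an argument:

// Rough sketch for a cgroup v2 host: compares the raw charged usage
// (memory.current, what the limit applies to) with the anon and
// inactive_file breakdown from memory.stat.
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: cgmem <container cgroup dir under /sys/fs/cgroup>")
		os.Exit(1)
	}
	cg := os.Args[1]

	// memory.current is the raw usage the cgroup is charged for (cache included).
	raw, err := os.ReadFile(filepath.Join(cg, "memory.current"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	current, _ := strconv.ParseUint(strings.TrimSpace(string(raw)), 10, 64)

	// memory.stat breaks that usage down; "anon" is the processes' own memory,
	// "inactive_file" is reclaimable page cache (e.g. from reading the share).
	stats := map[string]uint64{}
	f, err := os.Open(filepath.Join(cg, "memory.stat"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 {
			v, _ := strconv.ParseUint(fields[1], 10, 64)
			stats[fields[0]] = v
		}
	}

	mib := func(v uint64) float64 { return float64(v) / (1 << 20) }
	fmt.Printf("memory.current (what the limit applies to): %.1f MiB\n", mib(current))
	fmt.Printf("anon (application memory):                   %.1f MiB\n", mib(stats["anon"]))
	fmt.Printf("inactive_file (reclaimable file cache):      %.1f MiB\n", mib(stats["inactive_file"]))
}

If inactive_file accounts for most of memory.current during the transfer, the limit is effectively being filled by cache, which would fit the transfer rate dropping once the limit is reached.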

I will test with a Samba share to see if the same thing occurs.

Have you already encountered this?

I just checked with a Samba share and I didn't hit the issue, and the rate is about 3 times faster than over NFS.