How to stall garbage collection for Nomad jobs

Very often the GC runs and the allocation directories disappear. One of the quickest ways to check a job's status is to go to its allocation and inspect the details. What's the best way to stall the GC for a week? Is it a server setting or a client setting?
I saw this in the server documentation:

job_gc_interval (string: "5m") - Specifies the interval between the job garbage collections. Only jobs who have been terminal for at least job_gc_threshold will be collected. Lowering the interval will perform more frequent but smaller collections. Raising the interval will perform collections less frequently but collect more jobs at a time. Reducing this interval is useful if there is a large throughput of tasks, leading to a large set of dead jobs. This is specified using a label suffix like "30s" or "3m". job_gc_interval was introduced in Nomad 0.10.0.

And also this in the client documentation:

gc_interval (string: "1m") - Specifies the interval at which Nomad attempts to garbage collect terminal allocation directories.

Which one should I use?

I tried the following settings in nomad.hcl on both the client and the server. However, to test it I ran “nomad system gc”, and it ignored all the settings and removed the dead job and the allocation directory from the node.

Server:
node_gc_threshold = "43200h"

Client:
gc_interval = "43200h"
gc_disk_usage_threshold = 90
gc_max_allocs = 1000

I was hoping this would preserve 30 days of allocations. I have a very powerful bare-metal machine with plenty of memory, CPU, and disk, so resources are not an issue.

@sammy676776 if you run nomad system gc you’re asking Nomad to force a garbage collection, ignoring the schedule / thresholds.

job_gc_interval is what garbage collects terminal jobs that the server remembers.
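As a sketch, the job GC settings belong in the server stanza of the agent config. The values below are illustrative only, not a recommendation; job_gc_threshold is how long a job must be terminal before it becomes eligible for collection, and job_gc_interval is how often the collection pass runs.

```hcl
server {
  enabled = true

  # Illustrative values: run the GC pass every 5 minutes,
  # but only collect jobs that have been terminal for ~30 days.
  job_gc_interval  = "5m"
  job_gc_threshold = "720h"
}
```

Note that raising job_gc_threshold, rather than job_gc_interval, is what actually keeps terminal jobs around longer; the interval only controls how often eligible jobs are swept.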

gc_interval is what garbage collects allocations on a client.
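The client-side setting lives in the client stanza. A minimal sketch with illustrative values; keep in mind that gc_interval only controls how often the client scans for terminal allocations, and allocation directories can still be collected sooner if gc_max_allocs or gc_disk_usage_threshold is exceeded, or once the server GCs the job itself:

```hcl
client {
  enabled = true

  # Illustrative values: scan for terminal allocations every 5 minutes,
  # keep up to 1000 allocations, and start collecting eagerly
  # once disk usage passes 90%.
  gc_interval             = "5m"
  gc_max_allocs           = 1000
  gc_disk_usage_threshold = 90
}
```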

@seth.hoenig Thanks. So if I set “gc_interval=43200h” then I should see allocations for 30 days on all my clients?