Stall Nomad garbage collection

Hi, I have the below configuration on the server and client side. Whenever we do a DB restore we bring down the allocations, but by the time the restore is finished the Nomad job has been garbage collected. This is the configuration I am working with:

server {
  default_scheduler_config {
    scheduler_algorithm = "spread"
  }
  job_gc_threshold = "12h"
}

client {
  host_volume "docker-sock-ro" {
    path      = "/var/run/docker.sock"
    read_only = true
  }
  options {
    "docker.volumes.enabled" = true
  }
}

Hi @ajitesh007,

Could you share the logs from the leader that show the job being garbage collected, or the steps to reproduce it, please?

Thanks,
jrasell and the Nomad team

Sure, will try to get the logs.

A roundabout way to prevent garbage collection of the job is to enable scaling on the job.

During the DB restore, set the count to 0, and when you are ready to continue, set the count back to 1.

Also limit the scaling counts to a minimum of 0 and a maximum of 1, as in the sketch below.

This is a workaround we use in a different scenario.
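
Here is a minimal sketch of what that could look like in the job spec; the job name, group name, and Docker image are placeholders, so adapt them to your own job:

job "db-restore-example" {
  datacenters = ["dc1"]

  group "db" {
    count = 1

    # Allow the group to be scaled between 0 and 1 so it can be
    # parked at 0 during the restore and brought back to 1 afterwards.
    scaling {
      enabled = true
      min     = 0
      max     = 1
    }

    task "db" {
      driver = "docker"

      config {
        # Placeholder image, substitute your own.
        image = "postgres:15"
      }
    }
  }
}

With that in place the count can be changed without stopping the job, for example with nomad job scale db-restore-example db 0 before the restore and nomad job scale db-restore-example db 1 once it has finished.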