Batch job with Docker image consuming too many resources?

I have an app that I deploy with Nomad, and this works well. Since I need backups, I added a second job, specified below:

job "wallabag-backup" {
    datacenters = ["dc1"]
    type = "batch"

    periodic {
        cron = "0 22 * * * *"
        time_zone = "Europe/Paris"
        prohibit_overlap = true
    }

    group "wbackup" {
        network {
            mode = "bridge"
        }
        task "walbackup" {
            driver = "docker"
            config {
                image = "BACKUP_IMAGE"
                auth {
                    username = "REGISTRY_USER"
                    password = "REGISTRY_PASSWORD"
                    server_address = "REGISTRY_SERVER"
                }
                command = "/usr/local/bin/pg_dump"
            }
            template {
                data = <<EOH
                    {{ with secret "kv/steinmetz/wallabag/symfony" }}
                    POSTGRESQL_DB_NAME  = {{ .Data.data.db_name }}
                    POSTGRESQL_DB_USER = {{ .Data.data.db_user }}
                    POSTGRESQL_DB_PASSWORD = {{ .Data.data.db_password }}
                    {{ end }}

                    {{ with secret "kv/steinmetz/wallabag/ovh" }}
                    OS_AUTH_URL  = {{ .Data.data.OS_AUTH_URL }}
                    OS_IDENTITY_API_VERSION = {{ .Data.data.OS_IDENTITY_API_VERSION }}
                    OS_USER_DOMAIN_NAME = {{ .Data.data.OS_USER_DOMAIN_NAME }}
                    OS_PROJECT_DOMAIN_NAME = {{ .Data.data.OS_PROJECT_DOMAIN_NAME }}
                    OS_TENANT_ID = {{ .Data.data.OS_TENANT_ID }}
                    OS_TENANT_NAME = {{ .Data.data.OS_TENANT_NAME }}
                    OS_REGION_NAME = {{ .Data.data.OS_REGION_NAME }}
                    OS_USERNAME = {{ .Data.data.OS_USERNAME }}
                    OS_PASSWORD = {{ .Data.data.OS_PASSWORD }}
                    SWIFT_UPLOAD = {{ .Data.data.SWIFT_UPLOAD }}
                    SWIFT_CONTAINER = {{ .Data.data.SWIFT_CONTAINER }}
                    {{ end }}

                    {{ with service "wallabag-postgres" }}
                    {{ with index . 0 }}
                    POSTGRESQL_DB_HOST = "{{ .Address }}"
                    POSTGRESQL_DB_PORT = {{ .Port }}
                    {{ end }}{{ end }}
                    EOH
                destination = "secrets/file.env"
                env         = true
            }
            vault {
                policies = ["read-all-secrets"]
            }
            resources {
                cpu = 1000
                memory = 1000
            }
        }
    }
    affinity {
        attribute = "${meta.usage}"
        value = "web"
        weight = "-100"
    }
}

but when I run it:

  • it takes too many resources, leading the server to become overloaded - I see a lot of pg_dump and swift commands in htop - it seems the script runs in an infinite loop
  • the script never succeeds

When I run the container manually on one of the Nomad clients:

  • I can use the psql command and connect to the database:
psql -U wallabag -d wallabag -h wallabag-postgres.service.consul -p 30835
Password for user wallabag: 
psql (12.5 (Debian 12.5-1.pgdg100+1))
Type "help" for help.
  • pg_dump never works:
pg_dump -U wallabag -d wallabag -h wallabag-postgres.service.consul -p 30835
bash: warning: shell level (1000) too high, resetting to 1
bash: warning: shell level (1000) too high, resetting to 1
bash: warning: shell level (1000) too high, resetting to 1

But this does not create the same overload at the server level.
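As an aside, the repeated "shell level (1000) too high" warning usually indicates a command that keeps re-invoking itself. A quick check from inside the container (a diagnostic sketch, not part of the original run) shows which pg_dump bash actually resolves:

# List every pg_dump found on PATH, in resolution order; a wrapper script
# that shadows the real binary (e.g. one installed in /usr/local/bin, as in
# the job's command) will show up before the PostgreSQL client binary.
type -a pg_dump
command -v pg_dump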

So I can’t get the backup working right now.

Any hint on how I should implement this backup job?

I am using the latest versions of Nomad / Consul / Vault.

OK, found my issue: my script is named “pg_dump” and calls the “pg_dump” binary from PostgreSQL. So, in fact, I created an infinite loop: the script keeps re-invoking itself instead of the real binary.
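For reference, a minimal sketch of the kind of fix, assuming the wrapper is the script the job's command points at (/usr/local/bin/pg_dump): rename it so it no longer collides with the binary, and call the real pg_dump by absolute path inside it. The script name backup-wallabag.sh, the /usr/bin/pg_dump path, and the /tmp output file are assumptions here; the environment variables come from the job's template.

#!/usr/bin/env bash
# backup-wallabag.sh - hypothetical wrapper name, chosen so it no longer shadows pg_dump on PATH
set -euo pipefail

# Call the real PostgreSQL client binary by absolute path; the exact path is an
# assumption - verify it inside the image with: command -v pg_dump
PGPASSWORD="$POSTGRESQL_DB_PASSWORD" /usr/bin/pg_dump \
    -U "$POSTGRESQL_DB_USER" \
    -d "$POSTGRESQL_DB_NAME" \
    -h "$POSTGRESQL_DB_HOST" \
    -p "$POSTGRESQL_DB_PORT" \
    -f /tmp/wallabag.sql

# ... then upload /tmp/wallabag.sql to Swift as before

The job's config.command would then point at the renamed script instead of /usr/local/bin/pg_dump.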