Nomad - cron (periodic jobs) in a dockerized huge legacy app

Hi,

I have many legacy applications and I am currently moving them to Nomad, but I’m still not sure how to run the cron jobs in this environment. Each application requires at least one cron job running with all the application libraries (a huge monolithic Rails app).

I can’t run the same huge container just for the jobs. Is there any nice way to run a command every 5 minutes in an already running container from Nomad?

Thanks for your recommendations…


Hi @msirovy,

This is a kind of tricky situation, because the app container and the batch job workload would not necessarily run on the same client.

You could use a periodic job that uses the exec or raw_exec driver to run a docker exec command, together with a job affinity to make sure it runs on the same client as the app container, but it feels too hacky to call it a recommendation :sweat_smile:
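A rough sketch of what that could look like (the node class, container name filter, and rake task are all placeholders for whatever fits your setup):

job "app-cron" {
  datacenters = ["dc1"]
  type        = "batch"

  # Run every 5 minutes and skip a run if the previous one is still going.
  periodic {
    cron             = "*/5 * * * *"
    prohibit_overlap = true
  }

  # Hypothetical affinity: prefer the client(s) that run the app container.
  affinity {
    attribute = "${node.class}"
    value     = "legacy-app"
    weight    = 100
  }

  group "cron" {
    task "docker-exec" {
      driver = "raw_exec"

      config {
        command = "/bin/bash"
        # Nomad names containers "<task>-<alloc-id>", so match by prefix.
        args = ["-c", "docker exec $(docker ps -q --filter name=legacy_app | head -n 1) bundle exec rake cron:run"]
      }
    }
  }
}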

You could also use the nomad alloc exec command inside the job, which wouldn’t require the container to be on the same client, but then you would need to find the allocation ID first.
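For example, something like this rough sketch could look up a running allocation through the HTTP API (the job name legacy-app and the task name legacy_app are placeholders, and it assumes curl and jq are available):

# Ask the Nomad HTTP API for the job's allocations and pick a running one.
ALLOC_ID=$(curl -s "${NOMAD_ADDR:-http://localhost:4646}/v1/job/legacy-app/allocations" \
  | jq -r '[.[] | select(.ClientStatus == "running")][0].ID')

# Run the cron command inside the app task of that allocation.
nomad alloc exec -task legacy_app "$ALLOC_ID" bundle exec rake cron:run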

If the container is huge in terms of filesystem size, you could configure the Docker task driver to not garbage collect images, so that starting a new container wouldn’t download the image again.
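That would go in the client configuration, something like:

plugin "docker" {
  config {
    gc {
      # Keep pulled images on disk so new allocations
      # don't have to re-download the image.
      image = false
    }
  }
}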

If the container itself is huge (uses a lot of memory or CPU), perhaps you could change its entrypoint so it doesn’t actually start the application?
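For example, a cron task could reuse the app image but override the entrypoint so only the cron command runs and exits (the image name and command are placeholders):

task "cron-runner" {
  driver = "docker"

  config {
    # Same huge image as the app (placeholder name).
    image = "registry.example.com/legacy-app:latest"

    # Override the entrypoint so the Rails server never boots;
    # only the cron command runs.
    entrypoint = ["/bin/bash", "-c"]
    args       = ["bundle exec rake cron:run"]
  }
}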

But overall I am not sure there would be a nice approach to this.

I am curious to see if anyone else has other ideas.

Just starting this application requires 2 GB of RAM and 2500 CPU units, and the Docker image is 3 GB. It is a crazy app, and I need to run more than 1000 similar apps in the cloud :smiley:

My only attempt so far was a crazy script that lists all the containers running on the machine that match my filter, and then runs the cron jobs inside all of those containers using docker exec. But it is not a Nomad solution, and I am afraid it is not sustainable for such a big cluster…

Ah I see. That’s a tricky situation :thinking:

What Nomad could give you is the exec API, so you wouldn’t have to worry about finding individual containers. You could build an external service that uses it.
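A rough sketch of such a service as a shell loop (assuming all the app jobs share a placeholder prefix legacy-, that each app task is called legacy_app, and that curl and jq are available):

NOMAD="${NOMAD_ADDR:-http://localhost:4646}"

# List every job whose ID starts with the (placeholder) prefix "legacy-".
for job in $(curl -s "$NOMAD/v1/jobs?prefix=legacy-" | jq -r '.[].ID'); do
  # Run the cron command in each running allocation of that job.
  for alloc in $(curl -s "$NOMAD/v1/job/$job/allocations" \
      | jq -r '.[] | select(.ClientStatus == "running") | .ID'); do
    nomad alloc exec -task legacy_app "$alloc" bundle exec rake cron:run
  done
done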

If you need to run the same cron job in every container, you could also have a sidecar task that runs the script and then sleeps for 5 minutes, or something like this. The advantage in this case is that you would be able to access the allocation ID from the NOMAD_ALLOC_ID environment variable:

job "example" {
  datacenters = ["dc1"]

  group "cache" {
    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
      }
    }

    task "cron" {
      driver = "raw_exec"

      config {
        command = "/bin/bash"
        args    = ["local/script.sh"]
      }

      template {
        data        = <<EOF
while true; do
  nomad alloc exec -task redis $NOMAD_ALLOC_ID redis-cli ping
  sleep 3
done
EOF
        destination = "local/script.sh"
      }
    }
  }
}

But it’s still a bit hacky :sweat_smile:

This is the missing part! After a few iterations, I’ve found the following code to be a sufficient solution:

task "cron" {
  driver = "raw_exec"
  config {
    command = "/bin/bash"
    args = ["/usr/local/bin/nomad alloc exec -task legacy_app $NOMAD_ALLOC_ID cron -f"]
  }
 ...

Thanks for your help!
