Database backup best practices

Hi all,

I’m looking for best practices for backing up the databases run by my jobs.

The data directories of the databases sit on NFS shares on my NAS, which snapshots the shares every hour and runs a cloud backup every night.

To do a proper online backup, I would like to use each database’s supplied tools to write a backup into a directory, which will then be picked up by the cloud backup.
Ideally, this would be part of the job specification, to keep everything in one place.

Digging through the documentation, I found the following approaches:

  1. Create a new scheduled job, which connects to the database and triggers the backup. Unfortunately, this requires a second job.
  2. Run a script as a second task of the main job, which does some cron-magic to trigger the online-backup.
  3. Any other ideas?

Ideally, it would be possible to create a scheduled task as part of a job, but according to the documentation this is not possible (yet).

Hi @matthias,

The two approaches you detail are the best available. As you mention, a single job specification cannot mix scheduler types, meaning you cannot have a service and a batch job together.

In my previous operator experience running Nomad, I preferred running DB backup jobs via a separate job. While this has the overhead of maintaining another job specification, it allows you to change the DB and backup jobs independently. That separation of concerns is, I believe, an important aspect of job deployment.
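As a rough illustration of the separate-job approach, a nightly dump wrapped in a periodic batch job might look something like the sketch below. Everything here (the image, connection details, paths, and the `pg_dump` command itself) is a placeholder; substitute your own database's backup tool:

```hcl
job "postgres-backup" {
  datacenters = ["dc1"]
  type        = "batch"

  # Run the dump nightly so the cloud backup picks it up afterwards.
  periodic {
    cron             = "0 2 * * *"
    prohibit_overlap = true
  }

  group "backup" {
    task "dump" {
      driver = "docker"

      config {
        image   = "postgres:16"
        command = "/bin/sh"
        args    = ["-c", "pg_dump -h db.service.consul -U backup mydb > /backups/mydb-$(date +%F).sql"]

        # Bind mount requires docker.volumes.enabled on the client;
        # a host_volume block is an alternative.
        volumes = ["/mnt/nas/backups:/backups"]
      }

      env {
        # Placeholder only; prefer Vault or another secret store.
        PGPASSWORD = "changeme"
      }
    }
  }
}
```

Because the job is periodic, Nomad handles the scheduling for you, and `prohibit_overlap` stops a slow dump from stacking up against the next run.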

Thanks,
jrasell and the Nomad team

Hi @jrasell,

Thanks for the clarification.

Do you have any examples for variants one and two? I'm especially curious about a template for running a cron job inside a task.

Best regards
Matthias

Hi @matthias,

I don’t have exact examples, as those would depend on the DB backup tool, but I would suggest looking at the periodic job specification block as a starting point.
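That said, for variant two, the usual trick is a second task in the same group as the database that stands in for cron with a plain sleep loop, since tasks have no built-in scheduler. A very rough sketch, again with `pg_dump` and all paths as placeholders, and assuming a Nomad version with task lifecycle hooks:

```hcl
# Second task inside the database job's group, standing in
# for cron with a sleep loop.
task "backup-cron" {
  driver = "docker"

  # Run alongside the main DB task for the life of the allocation.
  lifecycle {
    hook    = "poststart"
    sidecar = true
  }

  config {
    image   = "postgres:16"
    command = "/bin/sh"
    args = [
      "-c",
      "while true; do pg_dump -h localhost -U backup mydb > /backups/mydb-$(date +%F).sql; sleep 3600; done",
    ]
    volumes = ["/mnt/nas/backups:/backups"]
  }
}
```

Running a real cron daemon inside the task would work as well; the loop just keeps the sketch self-contained.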

Thanks,
jrasell and the Nomad team