In the Waypoint container's args in the job definition deployed to Nomad, I observed that the DB path is “/alloc/data/data.db”:
"args": [
"server",
"run",
"-accept-tos",
"-vv",
"-db=/alloc/data/data.db",
However, in the same job definition, the volume_mount stanza is configured to mount the designated host volume at “/data”:
"VolumeMounts": [
{
"Volume": "waypoint-server",
"Destination": "/data",
"ReadOnly": false,
"PropagationMode": "private"
}
],
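For context, my working assumption was that -db should point somewhere under the mount destination. Roughly what I would have expected the task to look like in job-spec HCL (a sketch only, not my actual job file; the group/task names and image tag are placeholders, and the host volume is assumed to be registered in the Nomad client config):

group "waypoint-server" {
  # host volume "waypoint-server" must also be registered on the Nomad client
  volume "waypoint-server" {
    type      = "host"
    source    = "waypoint-server"
    read_only = false
  }

  task "server" {
    driver = "docker"

    # mount the host volume into the container at /data
    volume_mount {
      volume      = "waypoint-server"
      destination = "/data"
      read_only   = false
    }

    config {
      image = "hashicorp/waypoint:latest"   # placeholder tag
      # point -db inside the mounted volume so the DB file lands on the host volume
      args  = ["server", "run", "-accept-tos", "-vv", "-db=/data/data.db"]
    }
  }
}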
On the host where I’ve deployed Waypoint, the directory backing the host volume is empty, and /data inside the container is also empty. However, /alloc/data/data.db inside the container does contain the database files. This leads me to believe that, because the -db path does not fall under the volume_mount destination, the DB is not actually being persisted to the host at all.

When the container (Nomad alloc) restarts, the DB comes back online fine, which makes sense to me since /alloc is the allocation’s shared data directory and survives task restarts within the same alloc. However, if the alloc stops and is garbage-collected by Nomad, and Waypoint is later started back up on the same host with the same configuration, the previous database is not loaded and there are no projects/apps in my Waypoint environment.

Is this expected behavior? Should I not expect the database to survive once the Nomad alloc is wiped from the host? (In the meantime, I’m still working on automating Waypoint’s snapshots so this doesn’t affect my users as much.)
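As for the snapshot automation I mentioned, the rough shape I’m aiming for is a periodic Nomad batch job that runs waypoint server snapshot on a schedule. A minimal sketch, assuming the Waypoint CLI is installed on the host and its context (server address and auth token) is already configured for the user running the task; the datacenter, schedule, driver, and backup path are all placeholders:

job "waypoint-snapshot" {
  datacenters = ["dc1"]   # placeholder
  type        = "batch"

  periodic {
    cron             = "0 */6 * * *"   # every 6 hours (placeholder schedule)
    prohibit_overlap = true
  }

  group "snapshot" {
    task "snapshot" {
      driver = "raw_exec"   # placeholder; assumes the waypoint CLI and its context exist on the host

      config {
        command = "/bin/sh"
        args    = ["-c", "waypoint server snapshot /opt/waypoint-backups/waypoint-$(date +%Y%m%d-%H%M).snap"]
      }
    }
  }
}

Even with that in place, I’d still like to understand whether the mismatch between the -db path and the volume_mount destination above is the actual reason the database doesn’t survive alloc GC.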