I’m implementing a batch job that converts data from one format to another. Logically, this task needs an input directory and an output directory, which I am specifying through parameterized metadata:
job "foo" {
  parameterized {
    meta_required = [
      "DIR_SOURCE", # Somewhere on the host machine - will be on the NFS mount every node in the cluster has mounted
      "DIR_DEST"    # Somewhere else on the same NFS mount
    ]
    meta_optional = [
      "SOME_OTHER_OPTION"
    ]
  }
  # ......
}
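For reference, I dispatch the job with the metadata roughly like this (the paths here are just examples, not my real ones):

```shell
nomad job dispatch \
  -meta DIR_SOURCE=/mnt/nfs_shared/somedata \
  -meta DIR_DEST=/mnt/nfs_shared/converted \
  foo
```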
To import the input directory, I understand that I can use the artifact stanza, and that this is necessary because an exec task’s access to the host filesystem is restricted. I currently have this in my config file:
# .......
task "bar" {
  driver = "exec"

  config {
    command = "...."
    args    = [ ".....", "...." ]
  }

  artifact {
    source      = "/mnt/nfs_shared/somedata"
    destination = "local/"
  }

  artifact {
    source      = "/mnt/nfs_shared/moredata"
    destination = "local/"
  }

  resources {
    cpu    = 1500 # MHz
    memory = 256  # MiB
  }
}
# ......
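Since DIR_SOURCE is already dispatch metadata, I’m assuming I can interpolate it into the artifact stanza instead of hard-coding the path (this is my reading of Nomad’s variable interpolation, so it may be off):

```
artifact {
  # DIR_SOURCE is the required dispatch metadata declared above
  source      = "${NOMAD_META_DIR_SOURCE}"
  destination = "local/"
}
```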
This leaves me with the output directory. My conversion program reads the input directory and generates a new output directory. Once the batch job completes, I want that output directory moved to the host location given by the parameterized metadata value DIR_DEST. This will probably be set to a path on the shared NFS mount that all the nodes in my Raspberry Pi cluster have mounted.
How can I tell Nomad about the output directory, and have it moved to that location once the batch job exits?
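If it helps, the last step I’m picturing is essentially a copy at the end of the task. Here is a minimal stand-in, simulated in a temporary directory, with mkdir/echo standing in for my actual converter (NOMAD_META_DIR_DEST is how Nomad exposes dispatch metadata to the task environment):

```shell
#!/bin/sh
set -e
# Simulate the task's working directory in a temp sandbox.
tmp=$(mktemp -d)
# Nomad exposes dispatch metadata to the task as NOMAD_META_<key>;
# here we set it by hand for the simulation.
export NOMAD_META_DIR_DEST="$tmp/dest"
mkdir -p "$tmp/local/output" "$NOMAD_META_DIR_DEST"
# Stand-in for the real conversion program writing its output.
echo "converted" > "$tmp/local/output/result.txt"
# The step I want performed when the job exits: move the output
# directory to the destination given by the metadata.
cp -r "$tmp/local/output" "$NOMAD_META_DIR_DEST/"
cat "$NOMAD_META_DIR_DEST/output/result.txt"
```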