I have a process that dispatches a parameterized job and then wants to read the dispatched job's result (which is written as a file in the alloc directory). The dispatch API call returns the job ID, so I can use the API to read allocation IDs and then use the client API to check whether the result file was produced.
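For reference, the polling version looks roughly like this (a minimal sketch using only the Python standard library; the agent address, job names, and file paths are placeholders, and I'm only relying on the documented dispatch, allocations, and client `fs/cat` endpoints):

```python
import json
import time
import urllib.error
import urllib.request

NOMAD = "http://127.0.0.1:4646"  # placeholder agent address


def dispatch(job_id: str, meta: dict) -> str:
    """POST /v1/job/:id/dispatch; the response carries DispatchedJobID."""
    req = urllib.request.Request(
        f"{NOMAD}/v1/job/{job_id}/dispatch",
        data=json.dumps({"Meta": meta}).encode(),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["DispatchedJobID"]


def first_alloc_id(dispatched_job_id: str) -> str:
    """GET /v1/job/:id/allocations and take the first allocation's ID."""
    with urllib.request.urlopen(
        f"{NOMAD}/v1/job/{dispatched_job_id}/allocations"
    ) as resp:
        return json.load(resp)[0]["ID"]


def cat_url(alloc_id: str, path: str) -> str:
    """URL for reading a file out of the alloc dir via the client API."""
    return f"{NOMAD}/v1/client/fs/cat/{alloc_id}?path={path}"


def poll_for_result(alloc_id: str, path: str, interval: float = 1.0) -> bytes:
    """Retry until the result file exists, then return its contents."""
    while True:
        try:
            with urllib.request.urlopen(cat_url(alloc_id, path)) as resp:
                return resp.read()
        except urllib.error.HTTPError:
            time.sleep(interval)  # not written yet; wait and retry
```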
I'm trying to optimize this so that I don't have to poll the API (check in a loop whether the file has been created). It looks possible, but "hacky":
- at the dispatched job's start, it creates an empty `waiting_for_result` file
- the dispatching job calls "stream file" (Client - HTTP API | Nomad by HashiCorp) on `waiting_for_result`
- when the dispatched job produces `result_file`, it deletes `waiting_for_result`
- the dispatching job receives the delete event on the stream
- the dispatching job can now read `result_file`
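The "stream file" wait could be sketched like this. I'm assuming the documented `follow=true` query parameter and that stream frames carry a `FileEvent` field whose value for removal is `"file deleted"` (the exact string is my assumption, not verified against the docs):

```python
import json
import urllib.request

NOMAD = "http://127.0.0.1:4646"  # placeholder agent address


def stream_url(alloc_id: str, path: str) -> str:
    # follow=true keeps the stream open so the delete event is delivered
    return f"{NOMAD}/v1/client/fs/stream/{alloc_id}?path={path}&follow=true"


def frame_signals_delete(frame: dict) -> bool:
    """Stream frames carry a FileEvent field; "file deleted" is my
    assumption for the value emitted when the file is removed."""
    return frame.get("FileEvent") == "file deleted"


def wait_until_deleted(alloc_id: str, path: str) -> None:
    """Block until the streamed sentinel file is deleted."""
    with urllib.request.urlopen(stream_url(alloc_id, path)) as resp:
        buf = b""
        while True:
            chunk = resp.read(1)
            if not chunk:
                return  # stream closed (e.g. the alloc went away)
            buf += chunk
            try:
                frame = json.loads(buf)
            except json.JSONDecodeError:
                continue  # JSON frame not complete yet
            buf = b""
            if frame_signals_delete(frame):
                return
```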
I also thought about using the /event/stream API to wait for the job/alloc to finish, but then I can't be sure the result file will still be available (Ensure alloc directories stay for some time - #3 by aartur).
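For completeness, this is roughly what I had in mind for the event stream variant (assuming Allocation-topic events carry the allocation under `Payload.Allocation` and that heartbeats arrive as empty `{}` lines; both are assumptions about the stream's shape):

```python
import json
import urllib.request

NOMAD = "http://127.0.0.1:4646"  # placeholder agent address


def event_stream_url(job_id: str) -> str:
    # filter to Allocation-topic events keyed by the dispatched job ID
    return f"{NOMAD}/v1/event/stream?topic=Allocation:{job_id}"


def completed_alloc(event: dict):
    """Return the alloc ID if this event shows a finished allocation."""
    alloc = event.get("Payload", {}).get("Allocation", {})
    if alloc.get("ClientStatus") == "complete":
        return alloc.get("ID")
    return None


def wait_for_completion(job_id: str) -> str:
    """Consume the event stream until some allocation completes."""
    with urllib.request.urlopen(event_stream_url(job_id)) as resp:
        for line in resp:
            line = line.strip()
            if not line or line == b"{}":  # assumed heartbeat shape
                continue
            for ev in json.loads(line).get("Events", []):
                alloc_id = completed_alloc(ev)
                if alloc_id:
                    return alloc_id
```

Even if this works, the alloc directory may already be gone by the time the completion event arrives, which is exactly the problem linked above.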
So the question is whether there are better ways to pass data between the jobs. The solution above might work, but it doesn't look easy to implement correctly. For example, there is a race between the dispatching job initializing the stream and the dispatched job finishing its calculation: if `waiting_for_result` is deleted before the stream is open, the delete event is never delivered.