Perhaps we’re doing this wrong.
When disk usage exceeds the gc_disk_usage_threshold, our experience is that allocations get deleted as soon as the workload terminates. The effect is that if you try to run a batch job, the output from that job, including its log files, is deleted before anything can retrieve it from Nomad.
Is this expected? Is there anything that can be done to help this, other than to up the disk usage threshold?
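For context, these are the client-stanza GC settings we've been looking at. The values shown are the documented defaults as I understand them, not our actual config:

```hcl
client {
  # How often the client's allocation garbage collector runs.
  gc_interval = "1m"

  # GC kicks in once disk usage in the alloc dir passes this percentage.
  gc_disk_usage_threshold = 80

  # Same idea, but for inode usage.
  gc_inode_usage_threshold = 70

  # Cap on terminal allocations kept around regardless of disk usage.
  gc_max_allocs = 50
}
```

Raising gc_disk_usage_threshold is the obvious knob, but on a shared developer disk that just delays the same behaviour.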
It’s possible that this is primarily a problem in a development environment, where a developer’s machine has the kind of (well-used) disk a developer would normally have. In a production environment my guess is that Nomad would be expected to have its own volume, so this becomes less of a problem?
Any insight here would be welcome, thanks.