How to deal with CSI node driver restarts/crashes/updates

The CSI docs clearly state:

If you are stopping jobs on a node, you must stop tasks that claim volumes before stopping the node or monolith plugin for those volumes.

This seems like odd behavior to me: it means that updating a CSI driver requires "manually" (it can be scripted, but still) stopping all jobs that use it, updating the driver, then starting them again. A driver crash also renders all the mounts stale, and if the application running in those jobs doesn't itself crash on a stale mount, the job keeps running against the stale mount, erroring out everywhere.
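For reference, the manual sequence the docs imply could be scripted roughly like this. The job names, file paths, and plugin ID are hypothetical placeholders, and a DRY_RUN guard is included so the commands are only printed unless you opt in:

```shell
#!/bin/sh
# Sketch of the ordering the CSI docs require: stop volume consumers,
# update the plugin, check its health, then restart the consumers.
# Job names, file paths, and the plugin ID below are placeholders.
set -eu

CONSUMERS="app-a app-b"                 # jobs that claim CSI volumes
PLUGIN_SPEC="csi-node-plugin.nomad"     # updated plugin job spec
PLUGIN_ID="my-csi-plugin"

# Print each command; only execute it when DRY_RUN=0.
run() {
  echo "+ $*"
  [ "${DRY_RUN:-1}" = "0" ] && "$@" || true
}

for job in $CONSUMERS; do
  run nomad job stop "$job"
done

run nomad job run "$PLUGIN_SPEC"
run nomad plugin status "$PLUGIN_ID"    # verify health before proceeding

for job in $CONSUMERS; do
  run nomad job run "jobs/$job.nomad"
done
```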

Nomad knows the job is a CSI plugin, it knows the plugin's health, and it knows when the plugin alloc is stopped, so why doesn't it take care of evicting the allocations that claim those volumes when something happens to the driver? I could work around this with a pre-start task on the driver job that does the eviction, but that's pretty hacky; I'm just wondering if there's a cleaner way.
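The hacky workaround I have in mind would look something like this: a prestart task inside the plugin job that stops the consumer jobs before the plugin task comes up. The job names are placeholders, and this is a sketch rather than a recommendation:

```hcl
# Hypothetical sketch: a prestart task in the CSI plugin job that
# stops the volume-consuming jobs before the plugin task starts.
task "stop-consumers" {
  driver = "exec"

  lifecycle {
    hook    = "prestart"
    sidecar = false
  }

  config {
    command = "/bin/sh"
    # "app-a" and "app-b" are placeholder consumer job names.
    args = ["-c", "nomad job stop app-a && nomad job stop app-b"]
  }
}
```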