This happened to me after rebooting the whole cluster at once: after the containers are killed, Nomad schedules them again, but no corresponding job or status can be found.
My workaround/fix is to deep-clean the Nomad nodes where the containers appear and reattach them to the cluster (a rough command sketch follows the steps):
drain the node
(nothing should be running on it, except for the rogue job)
stop nomad and docker
empty the nomad and docker working dirs (often /var/lib/nomad and /var/lib/docker)
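On the affected node this might look roughly like the following. It's only a sketch: it assumes systemd units named nomad and docker and the default data directories; adjust to your setup before running anything destructive.

```sh
# Drain the local node so only the rogue containers remain on it
nomad node drain -enable -self -yes

# Stop the Nomad agent and the Docker daemon (assumes systemd unit names)
sudo systemctl stop nomad docker

# Wipe the working directories (paths may differ if you changed data_dir / data-root)
sudo rm -rf /var/lib/nomad/* /var/lib/docker/*
```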