Nomad - remove down or left nodes from cluster

We set up a Nomad cluster alongside a Consul cluster on Google Cloud infrastructure, following the best practices in the documentation. So far so good.

We ran some tests, but now we see a lot of empty clients in the Topology view of the Nomad UI.

We also have a lot of down clients.

The same goes for left servers.

Is there something we can do to remove them from the UI? Thank you :pray:

EDIT: We enabled leave_on_interrupt and leave_on_terminate, but we still have this problem.
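For reference, the relevant part of our agent configuration looks roughly like this (the file path is ours; the two options are the documented agent settings):

# /etc/nomad.d/nomad.hcl (path is ours)
# Ask the agent to gracefully leave the cluster on SIGINT/SIGTERM.
leave_on_interrupt = true
leave_on_terminate = true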

EDIT 2: We see the same problem when we run nomad node status:

❯ nomad node status
ID        DC           Name                     Class   Drain  Eligibility  Status
d71b3262  us-central1  nomad-nomad-client-4l6l  <none>  false  eligible     ready
cb4d299a  us-central1  nomad-nomad-client-1q54  <none>  false  eligible     ready
a06f390c  us-central1  nomad-nomad-client-xxlv  <none>  false  eligible     ready
196f2053  us-central1  nomad-nomad-client-lg7r  <none>  false  eligible     down
5908adc2  us-central1  nomad-nomad-client-dslk  <none>  false  eligible     down
93c3cf1e  us-central1  nomad-nomad-client-25r8  <none>  false  eligible     down
c5ec4fd3  us-central1  nomad-nomad-client-bwb3  <none>  false  eligible     down
ae6ae3a6  us-central1  nomad-nomad-client-jtvb  <none>  false  ineligible   down
a63d21eb  us-central1  nomad-nomad-client-45rf  <none>  false  ineligible   down
bacb8c9e  us-central1  nomad-nomad-client-qxb0  <none>  false  ineligible   down
f3f29acf  us-central1  nomad-nomad-client-3gzp  <none>  false  ineligible   down
e679e47e  us-central1  nomad-nomad-client-sm0k  <none>  false  ineligible   down
7c3a13eb  us-central1  nomad-nomad-client-vznm  <none>  false  ineligible   down
a10d0c66  us-central1  nomad-nomad-client-89j7  <none>  false  ineligible   down
f45f3eb7  us-central1  nomad-nomad-client-2117  <none>  false  ineligible   down
3444c520  us-central1  nomad-nomad-client-bht1  <none>  false  ineligible   down
6c00709e  us-central1  nomad-nomad-client-4mlq  <none>  false  ineligible   down
bdccc266  us-central1  nomad-nomad-client-wrqc  <none>  false  ineligible   down
89193843  us-central1  nomad-nomad-client-2d28  <none>  false  ineligible   down
2acca668  us-central1  nomad-nomad-client-8j8n  <none>  false  ineligible   down
f5593a7b  us-central1  nomad-nomad-client-mrm0  <none>  false  ineligible   down
b9e95158  us-central1  nomad-nomad-client-v5dz  <none>  false  ineligible   down
b42c0d2f  us-central1  nomad-nomad-client-7nnx  <none>  false  ineligible   down
5ff00f7a  us-central1  nomad-nomad-client-8gfs  <none>  false  ineligible   down
2cd75c27  us-central1  nomad-nomad-client-g77h  <none>  false  ineligible   down
bd6d77fb  us-central1  nomad-nomad-client-ps8l  <none>  false  ineligible   down
88606107  us-central1  nomad-nomad-client-xrb6  <none>  false  ineligible   down
2b7ac743  us-central1  nomad-nomad-client-4dkx  <none>  false  ineligible   down
67b723cf  us-central1  nomad-nomad-client-mjz0  <none>  false  eligible     down
2c31618a  us-central1  nomad-nomad-client-6sz1  <none>  false  eligible     down

Hi @voxsim. Terminal resources such as nodes are cleaned from Nomad state by an internal garbage collector that runs on a periodic schedule. Garbage collection can also be triggered manually via the nomad system gc command or the /v1/system/gc API endpoint. If you want Nomad's periodic garbage collection to be more aggressive, you can lower the node_gc_threshold server configuration option.
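For example (the curl target assumes a local agent on the default port, and the threshold value below is only illustrative; the default is 24h):

# Trigger a garbage collection run manually:
nomad system gc

# Equivalent call against the HTTP API:
curl -X PUT http://127.0.0.1:4646/v1/system/gc

# Server configuration sketch for more aggressive node GC
# (10m is illustrative; the default node_gc_threshold is 24h):
server {
  enabled           = true
  node_gc_threshold = "10m"
}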

How long have the servers been in the left state, and what triggered this?

Thanks,
jrasell and the Nomad team