Ghost nodes showing in http://localhost:8500/v1/health/service/whatever (but not in consul members)

Hi, I'm a newbie on the forum; thank you in advance for your help.

After a series of unfortunate network events, two Consul clusters merged (they shared the same name but were, in principle, segregated on different network segments).

I have managed to clean up the mess, and both clusters are functional again (now with different names), and consul members only shows members belonging to each cluster (all alive, no left nodes, all OK). So far, so good. But when querying http://localhost:8500/v1/health/service/whatever I still get (4 days later) some ghost nodes from the other cluster in the results… for instance:

  {
    "Node": {
      "ID": "",
      "Node": "fleischmann",
      "Address": "10.0.0.70",
      "Datacenter": "",
      "TaggedAddresses": {
        "lan": "10.0.0.70",
        "wan": "10.0.0.70"
      },
      "Meta": null,
      "CreateIndex": 203802620,
      "ModifyIndex": 203802620
    },
    "Service": {
      "ID": "consul",
      "Service": "consul",
      "Tags": [
        "cicely",
        "doctors"
      ],
      "Address": "",
      "Meta": null,
      "Port": 8300,
      "Weights": {
        "Passing": 1,
        "Warning": 1
      },
      "EnableTagOverride": false,
      "Proxy": {
        "MeshGateway": {},
        "Expose": {}
      },
      "Connect": {},
      "CreateIndex": 203802593,
      "ModifyIndex": 203802593
    },
    "Checks": []
  },
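
For reference, the query is essentially the one below; jq is only used here to pull out the node names for easier comparison, and whatever is a placeholder service name:

```sh
# List the node names returned by the health endpoint for a given service
curl -s http://localhost:8500/v1/health/service/whatever | jq -r '.[].Node.Node'
```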

The environment was a mix of 0.8.5 servers and some 1.8.4 clients… now all servers are on 1.8.4 (after finally seizing the opportunity to retire the 0.8.5 servers).

How can I sanitise or prune/purge those ghost entries?

(Update: I forgot to mention that, although they do not show in the consul members output, they were showing in curl http://localhost:8500/v1/agent/members.)
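
To illustrate the discrepancy, this is the kind of comparison I mean (jq is only used to list the names):

```sh
# Serf view as reported by the CLI (ghost nodes absent)
consul members

# Same membership information over the HTTP API (ghost nodes still listed)
curl -s http://localhost:8500/v1/agent/members | jq -r '.[].Name'
```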

By the way, apart from forcing me to alter some scripts so that they validate results against the list of valid members, I have just realised the issue is nastier than I expected, because it is also interfering with prepared queries.
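
The sort of validation I had to bolt onto those scripts looks roughly like the sketch below; the service name whatever and the use of awk/jq are illustrative assumptions, not my exact script:

```sh
#!/usr/bin/env bash
# Keep only health results whose node name also appears in `consul members`.

# Node names of members the cluster actually knows about (skip the header line)
valid_nodes=$(consul members | awk 'NR>1 {print $1}')

# Flag every node returned by the health endpoint as valid or ghost
curl -s http://localhost:8500/v1/health/service/whatever |
  jq -r '.[].Node.Node' |
  while read -r node; do
    if echo "$valid_nodes" | grep -qx "$node"; then
      echo "valid: $node"
    else
      echo "ghost: $node"
    fi
  done
```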

Update: As I needed an urgent remediation, I opted for the quickest (and most tedious) one: using a spare server, I went through every rogue node name that was impacting me, reconfiguring the server's hostname to that name, joining and then leaving the cluster, and finally running consul force-leave -prune from one of the Consul servers. (consul force-leave was not working before, even though the node was showing in curl http://localhost:8500/v1/agent/members … I guess because the ID was empty.)
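
In case it helps someone else, the per-node loop looked roughly like this; hostnamectl and a systemd unit called consul are just how my spare server happens to be set up, so adapt to your own config:

```sh
# On the spare server, for each rogue node name (e.g. "fleischmann"):
sudo hostnamectl set-hostname fleischmann   # or set node_name in the agent config
sudo systemctl restart consul               # agent joins the cluster under that name
consul leave                                # then leave gracefully

# Finally, from one of the real Consul servers:
consul force-leave -prune fleischmann
```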