Kubernetes Consul Cluster Outage Recovery

Hi there, I was hoping to get some assistance with the following.

We run Consul 1.9.4 on AKS (Azure Kubernetes Service) as a 3-pod StatefulSet. We did a certificate refresh for the AKS instance, and as a result all Consul nodes were most likely down at the same time. Now all of the nodes are failing to start because they cannot find a leader.

We are stuck in a loop: the Endpoints are not created in Kubernetes because the pods never pass their readiness check, but the servers need those Endpoints to find each other and elect a leader. Since the readiness check itself is looking for a leader, it will never pass.
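For reference, the readiness check on our server pods is roughly the following (reproduced from memory of the chart we deployed, so the exact command may differ slightly); it only succeeds once the local agent can report a leader:

```sh
# Approximate readiness probe: ask the local agent who the raft leader is.
# /v1/status/leader returns a quoted "ip:port" string when a leader exists
# and an empty string ("") when there is none, so during this outage the
# grep never matches and the pod is never marked Ready.
curl -sf http://127.0.0.1:8500/v1/status/leader | grep -E '".+"'
```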

How do we recover from this outage in a Kubernetes context? We have looked at the outage recovery docs on the site, but they do not cover Kubernetes.
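For context, the approach from the outage docs that we would be trying to adapt is the peers.json recovery. Our best guess at what that would look like inside a pod is something like this (the data-dir path, node IDs and addresses below are placeholders, not our real values):

```sh
# Rough idea of a peers.json recovery adapted to a pod (values illustrative).
# With all servers stopped, each server's raft directory gets a peers.json
# listing every server's node ID (from <data-dir>/node-id) and raft address,
# then the servers are started again and elect a leader from that list.
cat > /consul/data/raft/peers.json <<'EOF'
[
  { "id": "11111111-1111-1111-1111-111111111111", "address": "10.0.0.10:8300", "non_voter": false },
  { "id": "22222222-2222-2222-2222-222222222222", "address": "10.0.0.11:8300", "non_voter": false },
  { "id": "33333333-3333-3333-3333-333333333333", "address": "10.0.0.12:8300", "non_voter": false }
]
EOF
```

The part we cannot see how to do in Kubernetes is the "with all servers stopped" step, since the StatefulSet restarts the containers automatically and the pod IPs can change on restart.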
If we can get hold of a raft.db from one of the pods, can we recreate a single-node cluster in a VM and take a snapshot from it?
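To make that concrete, what we have in mind is roughly the following (pod name, paths and flags are just our assumptions):

```sh
# Rough sketch of the single-node-in-a-VM idea (names and paths illustrative).
# 1. Copy the raft data out of one of the server pods.
kubectl cp consul-server-0:/consul/data ./consul-data

# 2. Start a one-node server (same version, 1.9.4) on a VM against that data
#    dir. We suspect it may also need a peers.json listing only itself before
#    it will elect itself leader - this is essentially what we are asking.
consul agent -server -bootstrap-expect=1 -data-dir=./consul-data

# 3. Once it reports a leader, take a snapshot to restore into a fresh cluster.
consul snapshot save backup.snap
```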
Is there anything else we can do to recover from this?

In the future, if we need to refresh certificates on AKS again, how should we prepare Consul for all of the nodes being down at the same time?
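For example, would taking a snapshot from a healthy server pod before the maintenance window, along the lines below, be enough to restore into a fresh cluster if recovery fails, or is there more we should do? (Pod name is illustrative, and presumably an ACL token would be needed if ACLs are enabled.)

```sh
# Pre-maintenance backup idea: snapshot a healthy server and copy it off-cluster.
kubectl exec consul-server-0 -- consul snapshot save /tmp/pre-maintenance.snap
kubectl cp consul-server-0:/tmp/pre-maintenance.snap ./pre-maintenance.snap

# If the cluster cannot recover afterwards, restore into a freshly bootstrapped cluster:
# consul snapshot restore pre-maintenance.snap
```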

Any assistance would be greatly appreciated.

Thanks in advance.