Consul with PVCs

Hi,

We are using HashiCorp Consul as the storage backend for HashiCorp Vault on Azure Kubernetes Service (AKS). Consul uses PVCs for persistent storage, which works fine (as specified here https://www.consul.io/docs/k8s/installation/install#server-agents and here https://www.consul.io/docs/k8s/installation/platforms/self-hosted-kubernetes).

However, during an AKS upgrade to a newer Kubernetes version, AKS performs a rolling update: it adds new nodes and reschedules the existing pods, node by node, onto the new nodes until every node runs the new Kubernetes version. Since Consul relies on PVCs, it takes a long time before each Consul server comes up on its new node (the PVC must be detached from the old node before it can be attached to another one). As a result, a Consul server may not be up and running on its new node yet while AKS already starts upgrading the next node. With bad luck the Consul leader is down and 2 out of 3 (or 3 out of 5) Consul servers are unavailable, so Consul loses its majority (quorum requires more than half of the servers).

I know this is a PVC issue; I already tested with local storage, which works fine. However, I was wondering if there is a solution with persistent storage in AKS. I was thinking about Azure Files, but I could not find a way to set the hostname of a pod as an environment variable in the StatefulSet, so that each Consul server can pick its own directory in the shared Azure Files share. Maybe it is possible and I am just doing it wrong; if someone knows how to do this, please let me know.
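To illustrate what I mean, here is a minimal sketch (trimmed to the relevant parts; the names `consul-server`, `consul-azurefile`, the image tag and mount path are just placeholders, not our real config) of the pattern I was hoping for: expose the pod name via the Downward API (in a StatefulSet the pod name equals the pod's hostname) and use it to pick a per-pod directory on one shared RWX Azure Files volume:

```yaml
# Sketch only, not the full Consul chart. A single RWX PVC backed by Azure Files
# is shared by all servers; each pod writes to its own subdirectory.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul-server
spec:
  serviceName: consul-server
  replicas: 3
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
        - name: consul
          image: consul:1.9.4   # placeholder tag
          env:
            # Downward API: in a StatefulSet, metadata.name equals the hostname,
            # e.g. consul-server-0, consul-server-1, ...
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: data
              mountPath: /consul/data
              # subPathExpr expands the env var, so each server mounts its own
              # directory on the shared Azure Files share.
              subPathExpr: $(POD_NAME)
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: consul-azurefile   # shared RWX PVC (azurefile storage class)
```

If I read the Kubernetes docs correctly, `subPathExpr` went GA in Kubernetes 1.17, so it should be available on current AKS versions; I have not verified this end to end, though, so corrections are welcome.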

Specifying a longer wait time per node in the AKS upgrade does not seem to be possible, and in my opinion it would not be a desirable solution anyway.

If anyone has ideas, they are more than welcome!