Installing Consul on a multi-node minikube cluster

Good day,

I am attempting to install Consul onto a three-node minikube cluster using consul-k8s. Unfortunately, it always fails while deploying the 45 resources. I’ve checked the pods and see:

kubectl get po -n consul
NAME                                           READY   STATUS             RESTARTS         AGE
consul-connect-injector-76f9c4b7db-zrw7b       0/1     CrashLoopBackOff   33 (4m9s ago)    109m
consul-server-0                                0/1     CrashLoopBackOff   25 (4m31s ago)   109m
consul-server-1                                0/1     CrashLoopBackOff   25 (4m6s ago)    109m
consul-server-2                                0/1     Running            0                109m
consul-server-acl-init-5rtx5                   0/1     Error              0                92m
consul-server-acl-init-d94jd                   0/1     Error              0                61m
consul-server-acl-init-kljwc                   0/1     Error              0                109m
consul-server-acl-init-rfsh6                   0/1     Error              0                77m
consul-server-acl-init-s9xjh                   0/1     Error              0                16m
consul-server-acl-init-tjfb5                   0/1     Error              0                31m
consul-server-acl-init-vc4np                   0/1     Error              0                46m
consul-webhook-cert-manager-6b787686d5-4wdxx   1/1     Running            0                109m

and if I inspect one of the servers I get the following error:

kubectl logs consul-server-0 -n consul
==> failed to setup node ID: failed to write NodeID to disk: open /consul/data/node-id: permission denied
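To dig into where that data volume actually lives, I traced the pod’s claim back to its host path. Roughly like this (the PVC and PV names below are illustrative; yours will be the auto-generated ones):

kubectl get pvc -n consul
# Each server pod has a data claim, e.g. data-consul-consul-server-0

kubectl get pv
# Note the auto-generated pvc-... volume bound to that claim

kubectl describe pv pvc-1234abcd   # illustrative name
# The Source section shows the HostPath on the minikube node, e.g.
# /tmp/hostpath-provisioner/consul/data-consul-consul-server-0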

If I create the cluster with a single node, everything works as expected and Consul deploys successfully. Any thoughts please?
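For reference, this is roughly how I’m creating the cluster and installing Consul (values.yaml is just my local name for the config shown further down):

minikube start --nodes 3        # three-node cluster: install fails
# minikube start                # single-node cluster: install succeeds

consul-k8s install -config-file=values.yaml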

Hopefully I’ve found the reason, which appears to be an issue with the host storage provisioner not setting the correct permissions on PVs in a multi-node cluster. This PR pretty much explains the problem (Add `local-path-provisioner` addon by presztak · Pull Request #15062 · kubernetes/minikube · GitHub), and the fix will hopefully be rolled out in minikube v1.29.0. As a workaround I SSHed into each minikube node, ran chmod 777 on the PV host path, and then deleted the pods; they started up successfully, so :hand_with_index_finger_and_thumb_crossed: (rough commands below).
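In case it helps anyone else, the workaround looked roughly like this. The node names are the minikube defaults for a three-node profile, and /tmp/hostpath-provisioner is the default provisioner root on my setup; adjust the path to whatever kubectl describe pv reports for you:

# Open up permissions on the provisioned PV directories on each node
minikube ssh -n minikube -- sudo chmod -R 777 /tmp/hostpath-provisioner/consul
minikube ssh -n minikube-m02 -- sudo chmod -R 777 /tmp/hostpath-provisioner/consul
minikube ssh -n minikube-m03 -- sudo chmod -R 777 /tmp/hostpath-provisioner/consul

# Delete the crash-looping server pods so the StatefulSet recreates them
kubectl delete pod -n consul consul-server-0 consul-server-1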

Whoop :grin: The status check now reports everything healthy:

==> Consul Status Summary
Name    Namespace  Status    Chart Version  AppVersion  Revision  Last Updated
consul  consul     deployed  1.0.2          1.14.2      1         2022/12/15 17:51:12 UTC

==> Config:
    connectInject:
      enabled: true
    global:
      acls:
        manageSystemACLs: true
      datacenter: dc1
      enabled: true
      image: hashicorp/consul:1.14.0
      name: consul
      tls:
        enabled: true
    server:
      enabled: true
      replicas: 3
    ui:
      enabled: true
      service:
        type: NodePort
    

==> Status Of Helm Hooks:
consul-tls-init ServiceAccount: Succeeded
consul-tls-init Role: Succeeded
consul-tls-init RoleBinding: Succeeded
consul-tls-init Job: Succeeded

Consul servers healthy 3/3
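And to double-check from outside the cluster, since the config exposes the UI as a NodePort service (the chart should name it consul-ui for a release named consul, if I recall its naming correctly), something like this confirms it end to end:

kubectl get pods -n consul                    # all pods should now be Running
minikube service consul-ui -n consul --url    # prints a reachable URL for the UI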