Wait for condition when running with kubernetes_manifest resource

Hello,

I’m running a bunch of kubernetes_manifest resources in Terraform to add some CRDs for MetalLB. The problem is that it seems I’m applying them too soon: the MetalLB pods start only after the workers have been bootstrapped.

The kubernetes resources depend on vsphere_virtual_machine.kube_workers (the resource that deploys the workers), but it still seems to be too soon. MetalLB only needs one worker up and running, so I thought that by the end of the provisioning of the 3rd node it should be fine, but apparently that’s not happening.

The error I’m getting is a timeout when connecting to the MetalLB service. Is there any elegant way of solving this? Either waiting for a condition before applying the manifests, or retrying the apply (I don’t think the latter is possible in Terraform). From my experience so far, I guess the answer is “no”, but I want to make sure I’m not missing anything.

Eventually I added a null_resource that connects to the cluster via kubectl and checks whether the MetalLB controller container is ready:

resource "null_resource" "metallb_controller_wait" {
  triggers = {
    # Re-run the check on every apply
    build_number = "${timestamp()}"
  }

  provisioner "local-exec" {
    # <<-EOF allows the closing marker to be indented
    command = <<-EOF
      export KUBECONFIG="/root/.kube/config"
      # Poll until the controller container reports ready
      while [ "$(kubectl get pods -n metallb-system -o json -l component=controller | jq '.items[] | .status.containerStatuses[].ready')" != "true" ]; do
        echo "Waiting for MetalLB Controller to start..."
        sleep 5
      done
      echo "MetalLB Controller has started."
    EOF
  }

  depends_on = [vsphere_virtual_machine.kube_workers]
}

This seems to work decently enough.
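For reference, the same idea can be sketched more compactly with `kubectl wait`, which blocks until a condition holds (or a timeout expires) instead of hand-rolling a polling loop. This is a sketch, not the setup above verbatim: it assumes the same kubeconfig path and the `component=controller` label from MetalLB’s standard manifests, and the 300s timeout is an arbitrary choice.

```hcl
resource "null_resource" "metallb_controller_wait" {
  triggers = {
    # Re-run the check on every apply
    build_number = timestamp()
  }

  provisioner "local-exec" {
    command = <<-EOF
      export KUBECONFIG="/root/.kube/config"
      # Block until the controller pod is Ready, or fail after 5 minutes
      kubectl wait pod \
        --namespace metallb-system \
        --selector component=controller \
        --for condition=Ready \
        --timeout 300s
    EOF
  }

  depends_on = [vsphere_virtual_machine.kube_workers]
}
```

A nice side effect is that a non-zero exit from `kubectl wait` fails the provisioner, so the apply stops instead of looping forever if the controller never comes up.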