When running terraform plan, refreshing state makes the Kubernetes cluster unreachable

Every subsequent time I run terraform plan in my CI/CD pipeline (GitHub Actions), my state seems to refresh. If I understand what's happening, that means my kubeconfig is being regenerated, and the resources that depend on it are then unable to access the Kubernetes cluster. This results in an error like the following:

Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory

Ideally the kubeconfig would not be regenerated every time (I'm not sure why it is), but if it has to be, then at least the things that depend on it should wait for it. That doesn't seem to be the case. What am I missing?
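One idea I've been toying with (untested, and I'm not sure the attribute paths below are even right for LKE's kubeconfig) is to cut the file out of the loop entirely and decode the cluster's kubeconfig straight into the helm provider:

provider "helm" {
  kubernetes {
    # assumes the standard kubeconfig layout; I haven't verified LKE's exact structure
    host  = yamldecode(base64decode(linode_lke_cluster.lke_cluster.kubeconfig)).clusters[0].cluster.server
    token = yamldecode(base64decode(linode_lke_cluster.lke_cluster.kubeconfig)).users[0].user.token
    cluster_ca_certificate = base64decode(
      yamldecode(base64decode(linode_lke_cluster.lke_cluster.kubeconfig)).clusters[0].cluster["certificate-authority-data"]
    )
  }
}

But I don't know whether that addresses the underlying problem, so for reference my actual config is below.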

My main.tf is this:

terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname      = "app.terraform.io"
    organization  = "MY-ORG-IN-HERE"
    workspaces {
      name = "MY-WORKSPACE"
    }
  }
}

provider "linode" {
}

provider "helm" {
  debug   = true
  kubernetes {
    config_path = local_file.kubeconfig.filename
  }
}

resource "linode_lke_cluster" "lke_cluster" {
    label       = "MY-CLUSTER-LABEL"
    k8s_version = "1.21"
    region      = "us-central"

    pool {
        type  = "g6-standard-2"
        count = 3
    }
}

and my other file is outputs.tf:

resource "local_file" "kubeconfig" {
  depends_on   = [linode_lke_cluster.lke_cluster]
  filename     = "kube-config"
  # filename     = "${path.cwd}/kubeconfig"
  content      = base64decode(linode_lke_cluster.lke_cluster.kubeconfig)
}

resource "helm_release" "ingress-nginx" {
  # depends_on   = [local_file.kubeconfig]
  depends_on = [linode_lke_cluster.lke_cluster, local_file.kubeconfig]
  name       = "ingress"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
}

resource "null_resource" "custom" {
  depends_on = [helm_release.ingress-nginx]
  # change trigger to run every time
  triggers = {
    build_number = timestamp()
  }

  # download kubectl
  provisioner "local-exec" {
    command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
  }

  # apply changes
  provisioner "local-exec" {
    command = "./kubectl apply -f ./k8s/ --kubeconfig ${local_file.kubeconfig.filename}"
  }
}
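
For what it's worth, the commented-out ${path.cwd} line in local_file.kubeconfig is from me wondering whether the relative filename is part of the problem. Spelled out, that variant would look something like this (untested in isolation):

resource "local_file" "kubeconfig" {
  depends_on = [linode_lke_cluster.lke_cluster]
  # hypothetical variant: absolute path instead of the relative "kube-config"
  filename   = "${path.cwd}/kubeconfig"
  content    = base64decode(linode_lke_cluster.lke_cluster.kubeconfig)
}

My hunch is it wouldn't help, since the helm provider presumably still needs the file to exist on disk when it refreshes, and a fresh CI runner starts without it.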

In Terraform Cloud I can see the resources created by the first run:

Name            Provider          Type              Module   Created
custom          hashicorp/null    null_resourc...   root     Jul 19 2021
ingress-nginx   hashicorp/helm    helm_release      root     Jul 19 2021
kubeconfig      hashicorp/local   local_file        root     Jul 19 2021
lke_cluster     linode/linode     linode_lke_c...   root     Jul 19 2021

But when I run terraform apply, this is the output I get:

Terraform v1.0.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
linode_lke_cluster.lke_cluster: Refreshing state... [id=31946]
local_file.kubeconfig: Refreshing state... [id=fbb5520298c7c824a8069397ef179e1bc971adde]
helm_release.ingress-nginx: Refreshing state... [id=ingress]
╷
│ Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory
│ 
│   with helm_release.ingress-nginx,
│   on outputs.tf line 16, in resource "helm_release" "ingress-nginx":
│   16: resource "helm_release" "ingress-nginx" {
│ 
╵
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.