Errors while upgrading. Getting the right replace-provider command

So bear with me; I'm very new to Terraform. I was given an old code base that was on 0.12. I upgraded it to Terraform 1.0.8 and got that working in a CI pipeline that builds a new EKS cluster and tears it down. However, I now need to upgrade a live public EKS cluster, and that has me scratching my head a bit, because you have to go through the version upgrades one at a time.

Here are the steps I’ve run and where I’m confused on what needs to change.

terraform init -upgrade -backend-config=../deploy/public/backend.cfg
terraform 0.13upgrade
git commit -am "0.13 upgrade changes"
terraform plan -out outputs/public.plan -var-file ../deploy/public/public.json -var=cloudflare_api_token=*****

This leads to several errors like this (let's focus on incubator to untangle this knot):

Error: Provider configuration not present

To work with data.helm_repository.incubator its original provider
configuration at provider[""] is required, but it
has been removed. This occurs when a provider configuration is removed while
objects created by that provider still exist in the state. Re-add the provider
configuration to destroy data.helm_repository.incubator, after which you can
remove the provider configuration again.

This is because I removed the data source as part of the changes the upgrade demands.
Code in question:

-data "helm_repository" "incubator" {
-    name = "incubator"
-    url  = ""
-    depends_on = [
-        module.eks_cluster,
-        module.eks_operations_nodes,
-        module.eks_vpc,
-        kubernetes_service_account.tiller_service_account,
-        kubernetes_cluster_role_binding.tiller_role_binding
-    ]
-}

resource "helm_release" "aws-alb-ingress-controller" {
     name       = "aws-alb-ingress-controller"
     namespace  = "kube-system"
-    repository = "incubator"
+    repository = ""
     chart      = "aws-alb-ingress-controller"
     version    = "0.1.7"

resource "helm_release" "aws-alb-ingress-controller" {
-        data.helm_repository.incubator,
-        aws_security_group.eks_alb_instances_sg
+        aws_security_group.eks_alb_instances_sg,
+        kubernetes_service_account.tiller_service_account,
+        kubernetes_cluster_role_binding.tiller_role_binding
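
As an aside for anyone in a similar spot: because data sources don't represent real infrastructure, another option some people use is to drop the stale entry from state directly rather than re-adding the provider block. A sketch, using the address from the error above (back up your state file first):

```
terraform state rm 'data.helm_repository.incubator'
```

Note this only clears the orphaned data source; any managed resources tied to the old provider address still need the replace-provider treatment.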

Reading through Stack Overflow, I believe I need to run some magic spell along the lines of

terraform state replace-provider FROM TO

But I'm a little concerned, because all of my errors relate to provider[""], each with a different target to destroy.
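
For readers landing here later: after moving to 0.13, every provider recorded in state has to be re-addressed from the legacy registry.terraform.io/-/NAME form to its namespaced form. Assuming the orphaned entries here belong to the helm provider, the command would look something like this (run terraform providers first to confirm the exact addresses your state actually references):

```
terraform state replace-provider 'registry.terraform.io/-/helm' 'registry.terraform.io/hashicorp/helm'
```

The same pattern applies once per provider in state, not once per resource, so a handful of commands usually covers all of the errors.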

Well, that was a short-lived cry for help. I believe I've figured it out. For those who come after:

terraform state replace-provider "" '

Hmmm… maybe not, since that led to:

Error: rendered manifests contain a resource that already exists.
Unable to continue with install: ServiceAccount "aws-alb-ingress-controller" in namespace "kube-system" exists and cannot be imported into the current release:
invalid ownership metadata; label validation error:
key "" must equal "Helm":
current value is "Tiller"; annotation validation error:
missing key "":
must be set to "aws-alb-ingress-controller";
annotation validation error: missing key "":
must be set to "kube-system"

on line 328, in resource "helm_release" "aws-alb-ingress-controller":
328: resource "helm_release" "aws-alb-ingress-controller" {

Hi @CodeForCoffee,

I see that you managed to figure out the provider naming problem. For folks who might find this in future: hashicorp isn’t always the right namespace to use, so it’s worth checking Terraform Registry to see which namespace(s) a provider is in. In this case, hashicorp is the correct namespace because this is an official provider, but not all providers that existed prior to v0.13 are official and so some might now live in other namespaces managed by their authors.
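
For reference, those source addresses end up in a required_providers block, which the 0.13upgrade command generates (typically in a versions.tf file). A minimal sketch for a configuration like this one, with the provider set and constraints assumed from the thread:

```
terraform {
  required_version = ">= 0.13"
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    helm = {
      source = "hashicorp/helm"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
    # Example of a provider living outside the hashicorp namespace:
    cloudflare = {
      source = "cloudflare/cloudflare"
    }
  }
}
```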

The error you’re seeing now seems to be coming from the provider itself, rather than from Terraform Core, and unfortunately I’m not familiar enough with Helm or Kubernetes to suggest what’s up here, but it seems like it’s a conflict with something that already exists in your remote system.
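
For readers who hit that second error: it comes from Helm 3's ownership check. Helm 3 refuses to adopt resources it didn't create (here, ones created by Tiller under Helm 2) unless they carry Helm's ownership metadata. One common workaround is to label and annotate the existing objects so Helm will adopt them; a sketch using the resource names from the error above (verify the object kinds and names against your own cluster before running anything):

```
kubectl -n kube-system label serviceaccount aws-alb-ingress-controller \
  app.kubernetes.io/managed-by=Helm --overwrite
kubectl -n kube-system annotate serviceaccount aws-alb-ingress-controller \
  meta.helm.sh/release-name=aws-alb-ingress-controller \
  meta.helm.sh/release-namespace=kube-system --overwrite
```

The same labeling is needed on each conflicting resource the error reports, and the official helm-2to3 plugin is the more thorough route for migrating whole Tiller-managed releases.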