So bear with me, I’m very new to Terraform. I was given an old code base that was on 0.12. I upgraded it to Terraform 1.0.8 and got that working in a CI pipeline that builds a new EKS cluster and tears it down. However, I now need to upgrade a live public EKS cluster, and that has me scratching my head a bit, because a live cluster’s state has to be walked through the upgrades one version at a time.
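For context, the path I believe I have to follow (my reading of the upgrade guides, so an assumption on my part; please correct me if I have it wrong) is roughly:

# with Terraform 0.13.x installed
terraform init
terraform 0.13upgrade
terraform apply

# then with Terraform 0.14.x
terraform init
terraform apply

# then with Terraform 1.0.8
terraform init
terraform apply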
Here are the steps I’ve run, and where I’m confused about what needs to change.
terraform init --upgrade -backend-config=../deploy/public/backend.cfg
terraform 0.13upgrade
git commit -am "0.13 upgrade changes"
terraform plan -out outputs/public.plan -var-file ../deploy/public/public.json -var=cloudflare_api_token=*****
This leads to several errors like the following (let’s focus on incubator to untangle this knot):
Error: Provider configuration not present
To work with data.helm_repository.incubator its original provider
configuration at provider["registry.terraform.io/-/helm"] is required, but it
has been removed. This occurs when a provider configuration is removed while
objects created by that provider still exist in the state. Re-add the provider
configuration to destroy data.helm_repository.incubator, after which you can
remove the provider configuration again.
This is because I removed the data element as part of the upgrade: the helm provider no longer supports the helm_repository data source, so the chart’s repository has to be referenced by URL instead.
Code in question:
-data "helm_repository" "incubator" {
- name = "incubator"
- url = "http://storage.googleapis.com/incubator"
-
- depends_on = [
- module.eks_cluster,
- module.eks_operations_nodes,
- module.eks_vpc,
- kubernetes_service_account.tiller_service_account,
- kubernetes_cluster_role_binding.tiller_role_binding
- ]
-}
resource "helm_release" "aws-alb-ingress-controller" {
name = "aws-alb-ingress-controller"
namespace = "kube-system"
- repository = "incubator"
+ repository = "https://charts.helm.sh/incubator"
chart = "aws-alb-ingress-controller"
version = "0.1.7"
resource "helm_release" "aws-alb-ingress-controller" {
module.eks_cluster,
module.eks_operations_nodes,
module.eks_vpc,
- data.helm_repository.incubator,
aws_security_group.eks_alb_sg,
- aws_security_group.eks_alb_instances_sg
+ aws_security_group.eks_alb_instances_sg,
+ kubernetes_service_account.tiller_service_account,
+ kubernetes_cluster_role_binding.tiller_role_binding
]
}
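From what I’ve read, data sources don’t manage real infrastructure, so one option might be to drop the orphaned entries from state rather than re-adding the provider configuration. A sketch of what I’m considering (the grep is just to enumerate the affected addresses):

terraform state list | grep helm_repository
terraform state rm data.helm_repository.incubator

Is that safe against a live cluster, or am I misunderstanding what state rm does here?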
Reading through Stack Overflow, I believe I need to run some magic spell along the lines of
terraform state replace-provider FROM TO
But I’m a little concerned, because all of my errors reference the same provider["registry.terraform.io/-/helm"] address, each with a different target to destroy.
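If replace-provider is indeed the fix, my best guess at the actual incantation (an assumption based on the 0.13 upgrade guide, using the legacy address from the error message) would be:

terraform state replace-provider registry.terraform.io/-/helm registry.terraform.io/hashicorp/helm

Would that one command resolve all of the errors at once, since they all point at the same legacy provider address, or does something need to happen per target?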