Resolve interpolation issue in Terraform code as part of upgrade to version 0.13.2

Dear,

I have recently upgraded from Terraform v0.12.29 to v0.13.2. As part of the upgrade, the destroy step of the module below is not working and throws the error below:

Error: Invalid function argument: Invalid value for “vars” parameter: invalid vars value: must be a map.

The configuration I am using is as follows:

locals {
  template_input = {
    url = var.url
    manifest = var.data
    validate = var.validate
    namespace = var.namespace
    kubeconfig = var.kubeconfig
  }
}

resource "null_resource" "kubernetes_manifest" {
  triggers = {
    manifest_sha1 = sha1(jsonencode(var.trigger == null ? local.template_input : var.trigger))
  }

Hi @Snehil03,

As far as I can remember, only the templatefile function has an argument called vars and so I have to assume that this error message relates to something you didn’t include in your configuration example.
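For reference, a minimal sketch of how templatefile is normally called (the file name and variable here are only examples): its second argument, vars, must be a map or object whose keys become variables inside the template.

output "rendered" {
  # The second argument ("vars") must be a map/object; its keys become
  # template variables such as ${name} inside example.tpl.
  value = templatefile("${path.module}/example.tpl", {
    name = "world"
  })
}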

It’d be helpful if you could share the entire error message, rather than just the description part, and also make sure that the code example you share includes the portion of code that the error message indicates as the subject of the problem.

Hello @apparentlymart,

There were issues with the two modules below while deleting and regenerating the manifest:

module.ingress.module.wildcard-domain-certificate.module.k8s_manifest_domain_wildcard.null_resource.kubernetes_manifest
module.ingress.module.wildcard-subdomain-certificate.module.k8s_manifest_domain_wildcard.null_resource.kubernetes_manifest

# module.ingress.module.wildcard-domain-certificate.module.k8s_manifest_domain_wildcard.null_resource.kubernetes_manifest must be replaced
-/+ resource "null_resource" "kubernetes_manifest" {
      ~ id       = "3204950641571041292" -> (known after apply)
      ~ triggers = { # forces replacement
          ~ "manifest_sha1" = "b2c8f53d4f5e8011437d81c29179ba8dec449e5f" -> "3f11b9f15398bfe191cb27d520330482e893232e"
        }
    }

  # module.ingress.module.wildcard-subdomain-certificate.module.k8s_manifest_domain_wildcard.null_resource.kubernetes_manifest must be replaced
-/+ resource "null_resource" "kubernetes_manifest" {
      ~ id       = "4611614669191685422" -> (known after apply)
      ~ triggers = { # forces replacement
          ~ "manifest_sha1" = "efba57495d14ee87ca40fa27c2d274b401fb64a9" -> "93e821c1264aa89d440cd0ff95ecccfbbe39c9e0"
        }
    }

I hope this helps to debug the issue further. I have added the complete output that I received after the terraform apply execution.

My original code, which used to work in v0.12.29, was as below:

locals {
  template_input = {
    url = var.url
    manifest = var.data
    validate = var.validate
    namespace = var.namespace
    kubeconfig = var.kubeconfig
  }
}

resource "null_resource" "kubernetes_manifest" {
  triggers = {
    manifest_sha1 = sha1(jsonencode(var.trigger == null ? local.template_input : var.trigger))
  }

      provisioner "local-exec" {
        environment = {
          KUBECONFIG = "/tmp/kubeconfig_${uuid()}"
        }
        command = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)
      }

      provisioner "local-exec" {
        when    = destroy
        environment = {
          KUBECONFIG = "/tmp/kubeconfig_${uuid()}"
          DESTROY = true
        }
        command = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)
      }
    }

Hello @apparentlymart,

Could you please help me fix this issue? I am stuck and not able to find a way out. I tried a few combinations, like exporting the variable in a different resource and referencing it from there, but nothing worked out.

Thanks,
Snehil Belekar

Hi @Snehil03,

I’d also like to see the full error messages you saw, exactly as Terraform printed them without any parts removed, so I can understand which parts of your configuration are raising the errors.

Thanks!

Hello @apparentlymart,

Kindly find below the error output from terraform plan:

zampc816:test zabelesn$ terraform plan
Releasing state lock. This may take a few moments...

Error: Invalid reference from destroy provisioner

  on ../../modules/k8s_manifest/main.tf line 29, in resource "null_resource" "kubernetes_manifest":
  29:     command = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)

Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.

References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.


Error: Invalid reference from destroy provisioner

  on ../../modules/k8s_manifest/main.tf line 29, in resource "null_resource" "kubernetes_manifest":
  29:     command = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)

Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.

References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.


Error: Invalid reference from destroy provisioner

  on ../../modules/k8s_manifest/main.tf line 29, in resource "null_resource" "kubernetes_manifest":
  29:     command = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)

Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.

References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.


Error: Invalid reference from destroy provisioner

  on ../../modules/k8s_manifest/main.tf line 29, in resource "null_resource" "kubernetes_manifest":
  29:     command = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)

Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.

References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.


Error: Invalid reference from destroy provisioner

  on ../../modules/k8s_manifest/main.tf line 29, in resource "null_resource" "kubernetes_manifest":
  29:     command = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)

Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.

References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.    

terraform version
Terraform v0.13.2
+ provider registry.terraform.io/carlpett/sops v0.6.0
+ provider registry.terraform.io/hashicorp/aws v3.33.0
+ provider registry.terraform.io/hashicorp/external v2.1.0
+ provider registry.terraform.io/hashicorp/helm v1.3.0
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.1
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/random v3.1.0

I have recently upgraded from version 0.12.29 to 0.13.2, and this piece of code started failing during terraform plan.

The code related to it:

locals {
  template_input = {
    url = var.url
    manifest = var.data
    validate = var.validate
    namespace = var.namespace
    kubeconfig = var.kubeconfig
  }
}

resource "null_resource" "kubernetes_manifest" {
  triggers = {
    manifest_sha1 = sha1(jsonencode(var.trigger == null ? local.template_input : var.trigger))
  }

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = "/tmp/kubeconfig_${uuid()}"
    }
    command = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)
  }

  provisioner "local-exec" {
    when    = destroy
    environment = {
      KUBECONFIG = "/tmp/kubeconfig_${uuid()}"
      DESTROY = true
    }
    command = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)
  }
}

It invokes the shell script below:

#!/usr/bin/env sh

if [ -z "$DESTROY" ]
then
  OPERATOR="apply --validate=${validate}"
else
  OPERATOR="delete"
fi

cat <<EOF > $KUBECONFIG
${kubeconfig}
EOF

%{if url == null}
cat <<EOF |
${manifest}
EOF
%{else}
curl ${url} |
%{~ endif ~}
kubectl $OPERATOR \
%{if namespace != null}--namespace ${namespace} %{endif}\
-f -

ret_code=$?
rm $KUBECONFIG
exit $ret_code

As mentioned earlier, I tried to play around with multiple combinations, like referring to self., fetching the values into a map, or exporting values in the null_resource, but nothing worked well, so for now I have rolled back to the previous state where it was working before.

I hope I have included all the information needed to look into the issue. In case any information is missing, let me know and I will add it.

Your help is appreciated.

Thanks,
Snehil Belekar

Hi @Snehil03,

This seems like quite a different problem than where you started so I guess you were able to work through your original problem and now have a new problem.

As this error message says, destroy-time provisioners (provisioners with when = destroy) can only refer to the resource instance they are connected to (using self), because during destruction the dependency ordering is reversed and so other objects are typically already destroyed by the time the destroy provisioner is running.

One way to get a working result with your configuration would be to generate the template as part of the triggers for the null_resource instance, instead of rendering it inline in the provisioner block:

resource "null_resource" "kubernetes_manifest" {
  triggers = {
    manifest_sha1      = sha1(jsonencode(var.trigger == null ? local.template_input : var.trigger))
    provisioner_script = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)
  }

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = "/tmp/kubeconfig_${uuid()}"
    }
    command = self.triggers.provisioner_script
  }

  provisioner "local-exec" {
    when    = destroy
    environment = {
      KUBECONFIG = "/tmp/kubeconfig_${uuid()}"
      DESTROY = true
    }
    command = self.triggers.provisioner_script
  }
}

This works within the rules of only referring to self, and can work because Terraform will have saved the rendered script as part of the state of this object and so the earlier-rendered version will still be available when destroying the object. However, this does mean that if you change the template then Terraform will understand it as a request to destroy this object and recreate it; if that isn’t acceptable then a variant on this would be to move the data from local.template_input itself into triggers and refer to it from there when rendering the template for the destroy-time provisioner.
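For illustration, that variant could look roughly like the sketch below (untested against your configuration; it assumes the same variables as above and uses jsonencode/jsondecode so the whole input map fits into a single string-typed triggers element):

resource "null_resource" "kubernetes_manifest" {
  triggers = {
    manifest_sha1 = sha1(jsonencode(var.trigger == null ? local.template_input : var.trigger))

    # Keep the template inputs (rather than the rendered script) in state,
    # so editing the template file alone does not force replacement.
    template_input = jsonencode(local.template_input)
  }

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = "/tmp/kubeconfig_${uuid()}"
    }
    command = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)
  }

  provisioner "local-exec" {
    when    = destroy
    environment = {
      KUBECONFIG = "/tmp/kubeconfig_${uuid()}"
      DESTROY    = true
    }
    # Only self.triggers (plus path.module) is referenced here, which stays
    # within the destroy-time provisioner rules.
    command = templatefile("${path.module}/kubectl_apply.tpl.sh", jsondecode(self.triggers.template_input))
  }
}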

Thanks @apparentlymart for giving some time to this issue.

I made the changes you suggested, adding provisioner_script as part of the triggers, and executed terraform apply.

I ended up with the error output below about the missing map elements during terraform apply.

module.k8s_cluster_config.module.k8s_pod_security_policy.null_resource.kubernetes_manifest: Destroying... [id=2405609833578210128]
module.ingress.module.wildcard-domain-certificate.module.k8s_manifest_domain_wildcard.null_resource.kubernetes_manifest: Destroying... [id=5436481900513299472]
module.ingress.module.letsencrypt_issuers.null_resource.kubernetes_manifest: Destroying... [id=4850161818305540654]
module.ingress.module.wildcard-subdomain-certificate.module.k8s_manifest_domain_wildcard.null_resource.kubernetes_manifest: Destroying... [id=5380427471652883408]
module.k8s_cluster_config.module.k8s_pod_network_policies.null_resource.kubernetes_manifest: Destroying... [id=4324758910590133644]
module.cluster.module.eks.aws_launch_configuration.workers[0]: Creating...
module.cluster.module.eks.aws_launch_template.workers_launch_template[0]: Modifying... [id=lt-0b69f750ce74b89c2]
module.cluster.module.eks.aws_launch_template.workers_launch_template[0]: Modifications complete after 1s [id=lt-0b69f750ce74b89c2]
module.cluster.module.eks.aws_launch_configuration.workers[0]: Creation complete after 1s [id=digitalstudio-test-digitalstudio-test-on-demand20210324091355420800000002]
module.cluster.module.eks.random_pet.workers[0]: Creating...
module.cluster.module.eks.random_pet.workers[0]: Creation complete after 0s [id=glorious-guinea]
module.cluster.module.eks.aws_autoscaling_group.workers[0]: Modifying... [id=test-on-demand20200417144914313900000014]
module.cluster.module.eks.aws_autoscaling_group.workers[0]: Modifications complete after 1s [id=test-on-demand20200417144914313900000014]
module.cluster.module.eks.random_pet.workers[0]: Destroying... [id=immense-ladybird]
module.cluster.module.eks.random_pet.workers[0]: Destruction complete after 0s
module.cluster.module.eks.aws_launch_configuration.workers[0]: Destroying... [id=test-on-demand20210310154458863900000003]
module.cluster.module.eks.aws_launch_configuration.workers[0]: Destruction complete after 0s

Error: Missing map element: This map does not have an element with the key "provisioner_script".



Error: Missing map element: This map does not have an element with the key "provisioner_script".



Error: Missing map element: This map does not have an element with the key "provisioner_script".



Error: Missing map element: This map does not have an element with the key "provisioner_script".



Error: Missing map element: This map does not have an element with the key "provisioner_script".



Error: Provider produced inconsistent final plan

When expanding the plan for
module.cluster.module.eks.random_pet.workers_launch_template[0] to include new
values learned so far during apply, provider
"registry.terraform.io/hashicorp/random" changed the planned action from
CreateThenDelete to DeleteThenCreate.

Provider dependencies as part of the state:

terraform providers

Providers required by configuration:
.
├── provider[registry.terraform.io/carlpett/sops] ~> 0.5
├── provider[registry.terraform.io/hashicorp/aws]
├── provider[registry.terraform.io/hashicorp/helm] 1.3.0
├── provider[registry.terraform.io/hashicorp/kubernetes] 1.13.1
├── module.cluster
│   ├── provider[registry.terraform.io/hashicorp/aws]
│   ├── provider[registry.terraform.io/hashicorp/external]
│   ├── provider[registry.terraform.io/hashicorp/kubernetes]
│   ├── module.eks
│   │   ├── provider[registry.terraform.io/hashicorp/random] >= 2.1.*
│   │   ├── provider[registry.terraform.io/hashicorp/kubernetes] >= 1.11.1
│   │   ├── provider[registry.terraform.io/hashicorp/aws] >= 2.52.0
│   │   ├── provider[registry.terraform.io/hashicorp/local] >= 1.4.*
│   │   ├── provider[registry.terraform.io/hashicorp/null] >= 2.1.*
│   │   └── module.node_groups
│   │       ├── provider[registry.terraform.io/hashicorp/aws]
│   │       └── provider[registry.terraform.io/hashicorp/random]
│   └── module.vpc
│       └── provider[registry.terraform.io/hashicorp/aws] >= 2.70.*
├── module.ingress
│   ├── provider[registry.terraform.io/hashicorp/aws]
│   ├── provider[registry.terraform.io/hashicorp/helm]
│   ├── provider[registry.terraform.io/hashicorp/kubernetes]
│   ├── provider[registry.terraform.io/hashicorp/random]
│   ├── module.wildcard-domain-certificate
│       └── module.k8s_manifest_domain_wildcard
│           └── provider[registry.terraform.io/hashicorp/null]
│   ├── module.wildcard-subdomain-certificate
│       └── module.k8s_manifest_domain_wildcard
│           └── provider[registry.terraform.io/hashicorp/null]
│   └── module.letsencrypt_issuers
│       └── provider[registry.terraform.io/hashicorp/null]
├── module.k8s_cluster_config
│   ├── module.k8s_pod_network_policies
│       └── provider[registry.terraform.io/hashicorp/null]
│   └── module.k8s_pod_security_policy
│       └── provider[registry.terraform.io/hashicorp/null]
├── module.kubed
│   ├── provider[registry.terraform.io/hashicorp/helm]
│   └── provider[registry.terraform.io/hashicorp/kubernetes]
├── module.monitoring
│   ├── provider[registry.terraform.io/hashicorp/aws]
│   ├── provider[registry.terraform.io/hashicorp/helm]
│   └── provider[registry.terraform.io/hashicorp/kubernetes]
└── module.aws-services
    ├── provider[registry.terraform.io/hashicorp/kubernetes]
    ├── provider[registry.terraform.io/hashicorp/helm]
    └── provider[registry.terraform.io/hashicorp/aws]

Providers required by state:

    provider[registry.terraform.io/hashicorp/aws]

    provider[registry.terraform.io/hashicorp/external]

    provider[registry.terraform.io/hashicorp/helm]

    provider[registry.terraform.io/hashicorp/kubernetes]

    provider[registry.terraform.io/hashicorp/null]

    provider[registry.terraform.io/hashicorp/random]

Being a rookie, I am not sure how to fix this issue.

Thanks for your help!

Hi @Snehil03,

I think since you already have an existing object in state for null_resource.kubernetes_manifest this is getting stuck, because Terraform needs to run that destroy-time provisioner once in order to replace the object as part of adding provisioner_script, but it fails there because your existing object doesn’t have that map element set yet.

I think the easiest way to move forward from this blocked situation would be to tell Terraform to “forget” the existing object, which will then cause Terraform to plan to just create it on the next plan, rather than planning to destroy it first and then recreate it.

terraform state rm module.ingress.module.wildcard-domain-certificate.module.k8s_manifest_domain_wildcard.null_resource.kubernetes_manifest
terraform state rm module.ingress.module.wildcard-subdomain-certificate.module.k8s_manifest_domain_wildcard.null_resource.kubernetes_manifest

An important thing to note about this solution is that Terraform won’t run the destroy time provisioner for this one particular run, because once Terraform has “forgotten” the object it will no longer see it as existing and needing to be destroyed. Therefore if this kubectl_apply.sh script normally does something crucial you may need to take those steps manually just this one time in order to compensate for Terraform not running the script.


Another possible alternative answer, which will work only if you’ve not already run a successful terraform apply or other state-modifying command with Terraform v0.13, would be to apply the change of adding that provisioner_script element to triggers only first on v0.12, where Terraform v0.12 should still be able to run your destroy-time provisioner as before, and then change the two provisioner command arguments as you switch to Terraform v0.13.
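For example, the intermediate step on Terraform v0.12 could look roughly like the sketch below (untested): the provisioner_script trigger is added while both commands still render the template inline, which v0.12 treats only as a deprecation warning; then, as you switch to v0.13, you change both command arguments to self.triggers.provisioner_script as shown earlier.

# Step 1, still on Terraform v0.12: add the trigger, keep the old commands.
resource "null_resource" "kubernetes_manifest" {
  triggers = {
    manifest_sha1      = sha1(jsonencode(var.trigger == null ? local.template_input : var.trigger))
    provisioner_script = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)
  }

  # (create-time provisioner unchanged, omitted here)

  provisioner "local-exec" {
    when    = destroy
    environment = {
      KUBECONFIG = "/tmp/kubeconfig_${uuid()}"
      DESTROY    = true
    }
    # Still rendering from local.template_input; on v0.12 this is only a
    # deprecation warning. Step 2, on v0.13, replaces this with
    # self.triggers.provisioner_script.
    command = templatefile("${path.module}/kubectl_apply.tpl.sh", local.template_input)
  }
}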

Hello @apparentlymart, finally I am able to see green from terraform apply. Thanks for your guidance in fixing this issue.

Cheers!!