I’m faced with a strange situation where terraform plan raises an error when trying to parse the following code:
template_id = flatten([ for xx in module.tmpl: [ for v in xx: v.id if v.label == each.value.template_label]])[0]
error:
Error: Invalid index
on cloud_configure.tf line 304, in module "virtual_machine":
304: template_id = flatten([ for xx in module.tmpl: [ for x in xx: x.id if x.label == each.value.template_label ]])[0]
|----------------
| each.value.template_label is "CentOS 7.7 x64"
| module.tmpl is object with 1 attribute "centos7.7x64"
The given key does not identify an element in this collection value.
But when I run this code from terraform console, everything looks fine.
I can’t directly answer your question but I can say that a reason why terraform console might differ in behavior from terraform plan is that console evaluates expressions against the result of your previous terraform apply, not against the planned results caused by any current configuration changes you might’ve made. So if you’ve changed your configuration since you last ran terraform apply, it’s possible that your changes have caused this error and that’s why terraform console doesn’t reproduce it.
In order to have some ideas about what might address that error though, I’d need to see the configuration change that you made which caused it to appear. I suspect the general problem here is two different objects disagreeing about their for_each values, but it’s hard to say without seeing the configuration you’re trying to plan.
Taking another guess, are you using Terraform 0.13 or older?
This looks similar to what happens in older versions when the state and the configuration do not match during refresh. If that is the case, using -refresh=false is another workaround before upgrading to a more recent version.
It looks like the problem is not in Terraform (I use v0.13.5) but in our custom provider and the type of one of its resources.
It seems I found where the problem is. I have a resource which works as a “one-time” resource:
its task is to install a template into the system from a template repository,
and once the template has been installed into the system, it is removed from the repository. If the template is later removed from the system, it becomes available from the repository again.
So when something fails during the first run of terraform apply, the second run returns an empty result for data.remote_template.availabletmp, and of course all of the dependent resources fail too.
I could rewrite the provider, but I don’t know how to handle this kind of resource in Terraform, or how to check whether the resource is empty and avoid evaluating the dependent resources.
PS. I rewrote some of the code without loops, but the idea is the same.
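To show what I mean, something like this is the shape I’m trying to get to (the data source name is ours, but the templates/id/label attributes and the resource type are made up for the example):

locals {
  # Empty list when the data source came back with nothing usable
  available_template_ids = try(
    [for t in data.remote_template.availabletmp.templates : t.id if t.label == "CentOS 7.7 x64"],
    []
  )
}

resource "example_virtual_machine" "vm" {
  # Skip creating the dependent resource entirely when no template was found
  count       = length(local.available_template_ids) > 0 ? 1 : 0
  template_id = local.available_template_ids[0]
}

That would at least avoid the failing [0] index on the second run, but I’m not sure it is the right approach.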
The typical design for a data source in a provider is for it to fail with an error if the requested object doesn’t exist, rather than indicating success and returning an “empty” answer. That way the problem can be reported at the location where it occurred, rather than in the context of some other resource whose expectations were not met.
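To illustrate from the configuration side (the argument name here is hypothetical): with that design, a plan against a missing template would stop at the data block itself,

data "remote_template" "availabletmp" {
  label = "CentOS 7.7 x64"  # hypothetical argument; the provider would return an error here if no such template exists
}

rather than succeeding with an empty object and letting some later expression like flatten(...)[0] fail with an unrelated “Invalid index” message.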
I understand, but what can I do if this is the “normal” behavior of our product…
A template which has already been added to the system disappears from the repository, and then the data source fails during refresh, even though the template is already in the system.
Is it possible to skip this resource during refresh/update?