Different results between terraform plan & console

Hello

I’m facing a strange situation where terraform plan raises an error when trying to evaluate the following code:

template_id = flatten([ for xx in module.tmpl: [ for v in xx: v.id if v.label == each.value.template_label]])[0]

error:

Error: Invalid index

  on cloud_configure.tf line 304, in module "virtual_machine":
 304:   template_id                      = flatten([ for xx in module.tmpl: [ for x in xx: x.id if x.label == each.value.template_label ]])[0]
    |----------------
    | each.value.template_label is "CentOS 7.7 x64"
    | module.tmpl is object with 1 attribute "centos7.7x64"

The given key does not identify an element in this collection value.

But when I run the same expression in terraform console, everything looks fine:

> module.tmpl
{
  "centos7.7x64" = {
    "imgtmpl" = {
      "allow_resize_without_reboot" = true
      "allowed_hot_migrate" = true
      "allowed_swap" = true
      "application_server" = false
      "backup_server_id" = ""
      "baremetal_server" = true
      "cdn" = false
      "checksum" = "3270f1bcc39069f5151e2b87effa301d"
      "created_at" = "2021-02-04T10:24:14.000Z"
      "datacenter_id" = 0
      "disk_target_device" = "---\nxen: xvda\nkvm: hd\n"
      "draas" = false
      "ext4" = true
      "file_name" = "centos-7.7-x64-1.0-xen.kvm.kvm_virtio.tar.gz"
      "id" = "4"
      "identifier" = "uytqypuuoojzty"
      "initial_password" = ""
      "initial_username" = "root"
      "label" = "CentOS 7.7 x64"
      "locked" = false
      "manager_id" = "centos7.7x64"
      "min_disk_size" = 5
      "min_memory_size" = 384
      "openstack_id" = 0
      "operating_system" = "linux"
      "operating_system_arch" = "x64"
      "operating_system_distro" = "rhel"
      "operating_system_edition" = ""
      "operating_system_tail" = ""
      "parent_template_id" = 0
      "properties" = {
        "real_distro" = "centos"
      }
      "remote_id" = ""
      "resize_without_reboot_policy" = {}
      "smart_server" = true
      "state" = "active"
      "template_size" = 474333
      "type" = "ImageTemplate"
      "updated_at" = "2021-02-04T10:25:12.000Z"
      "user_id" = 0
      "version" = "1.0"
      "virtualization" = [
        "xen",
        "kvm",
        "kvm_virtio",
      ]
    }
  }
}

> flatten([ for xx in module.tmpl: [ for v in xx: v.id if v.label == "CentOS 7.7 x64"]])[0]
4

Where could the problem be?
Thanks

PS. The same problem occurs on a clean configuration, so the problem is not caused by in-place updates…

Hi @skydion,

I can’t directly answer your question, but I can say that one reason terraform console might differ in behavior from terraform plan is that console evaluates expressions against the result of your previous terraform apply, not against the planned results of any configuration changes you might’ve made since. So if you’ve changed your configuration since you last ran terraform apply, it’s possible that your changes caused this error, and that’s why terraform console doesn’t reproduce it.
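That is, roughly:

terraform apply      # console evaluates against the state this produced
# ...you edit the configuration...
terraform console    # still shows the previously applied values
terraform plan       # evaluates the new configuration, and may now fail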

In order to have some ideas about what might address that error though, I’d need to see the configuration change that you made which caused it to appear. I suspect the general problem here is two different objects disagreeing about their for_each values, but it’s hard to say without seeing the configuration you’re trying to plan.
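To illustrate the kind of disagreement I mean, here’s a contrived sketch (a hypothetical example using null_resource, not your provider) where one object is keyed by a sanitized id while another looks it up by the human-readable label:

variable "labels" {
  default = ["CentOS 7.7 x64"]
}

resource "null_resource" "a" {
  # Keyed by a sanitized id, e.g. "centos7.7x64"
  for_each = { for l in var.labels : replace(lower(l), " ", "") => l }
}

resource "null_resource" "b" {
  for_each = toset(var.labels)

  triggers = {
    # Fails with "Invalid index": the key here is the human-readable
    # label ("CentOS 7.7 x64"), but null_resource.a is keyed by the
    # sanitized id ("centos7.7x64").
    broken = null_resource.a[each.value].id
  }
}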

Hi @skydion

Taking another guess, are you using Terraform 0.13 or older?
This looks similar to what happens in older versions when the state and the configuration do not match during refresh. If that is the case, using -refresh=false is another workaround before upgrading to a more recent version.
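For example:

terraform plan -refresh=false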

Hi, guys

Looks like the problem is not in Terraform itself (I use v0.13.5) but in our custom provider and the type of one of its resources.

It seems I found where the problem is. I have a resource which works as a “one time” resource: its task is to install a template into the system from a template repository. After the template is installed into the system, it is removed from the repository. If the template is later removed from the system, it becomes accessible from the repository again.

So when something fails during the first run of terraform apply, the second run returns an empty result for data.remote_template.availabletmp, and of course all of the dependent resources fail too.

I can rewrite the provider, but I don’t know how to handle this kind of resource in Terraform, or how to check whether a resource is empty so that the dependent resources aren’t evaluated.

PS. I rewrote some of the code without loops, but the idea is the same:

locals {
  cloud_config = {
    "image_template" = {
      "CentOS 7.7 x64" = {
        backup_server_label        = ""
        image_template_group_label = "tf_image_template_group1"
      },
    }
  }
}

data "remote_template" "availabletmp" {
  for_each = local.cloud_config["image_template"]
  label    = each.key
}

module "tmpl" {
  source   = "./modules/tmpl" # placeholder, real source omitted
  for_each = local.cloud_config["image_template"]
  # PROBLEM: empty on the second run, after the template left the repository
  manager_id = data.remote_template.availabletmp[each.key].manager_id
}

module "virtual_machine" {
  source   = "./modules/virtual_machine" # placeholder, real source omitted
  for_each = local.cloud_config["virtual_machine"]
  ...
  # PROBLEM: fails because module.tmpl has no matching element
  template_id = module.tmpl[each.value.template_label].imgtmpl.id
  ...
}

Hi @skydion,

The typical design for a data source in a provider is for it to fail with an error if the requested object doesn’t exist, rather than indicating success and returning an “empty” answer. That way the problem can be reported at the location where it occurred, rather than in the context of some other resource whose expectations were not met.
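In the meantime, one way to avoid evaluating the dependent resources against an empty result is to filter it out in a local first. A minimal sketch, assuming an empty manager_id means the template was not found (adjust the condition to your provider’s actual schema; found_templates and the module source are just illustrative names):

locals {
  # Keep only the templates the data source actually found.
  found_templates = {
    for k, t in data.remote_template.availabletmp :
    k => t if t.manager_id != ""
  }
}

module "tmpl" {
  source     = "./modules/tmpl" # placeholder
  for_each   = local.found_templates
  manager_id = each.value.manager_id
}

Note that the filtered map can change between runs, so any module instances whose keys disappear from it will be planned for destruction.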

Hi, @apparentlymart

I understand, but what can I do if this is “normal” behavior of our product…
A template which has already been added to the system is gone from the repository, and then the data source fails during refresh, even though the template is already in the system.

Is there a possibility to skip this resource during refresh?