For_each loop over objects in a map: "Error: Cycle"

Hi,

I am trying to create a bunch of containers with unique names, but I keep getting the error: Error: Cycle: lxd_container.create_container["grafana"], lxd_container.create_container["unifi"]
Does anyone know what I’m doing wrong? Thank you in advance!

terraform {
  required_providers {
    lxd = {
      source = "sl1pm4t/lxd"
    }
  }
}

locals {
  images = {
    "ubuntu_focal"  = { alias = [ "ubuntu_focal" ],  source_remote = "ubuntu", source_image = "ubuntu/focal/amd64" }
    "debian_buster" = { alias = [ "debian_buster" ], source_remote = "images", source_image = "debian/10/amd64"}
  }
  containers = {
    "unifi"    = { image = local.images.ubuntu_focal.alias },
    "grafana"  = { image = local.images.debian_buster.alias }
  }
}

resource "lxd_cached_image" "image" {
  for_each      = local.images

  aliases       = each.value.alias
  source_remote = each.value.source_remote
  source_image  = each.value.source_image
}

resource "lxd_container" "create_container" {
  for_each  = local.containers

  name      = each.key
  image     = each.value.image
  ephemeral = false

  config = {
    "boot.autostart" = true
  }

  provisioner "local-exec" {
    command = <<EOT
      sudo lxc exec ${lxd_container.create_container[each.key]} -- apt update
      sudo lxc exec ${lxd_container.create_container[each.key]} -- apt install -y curl
      sudo lxc exec ${lxd_container.create_container[each.key]} -- curl -o /tmp/bootstrap-salt.sh -L https://bootstrap.saltstack.com
      sudo lxc exec ${lxd_container.create_container[each.key]} -- sh /tmp/bootstrap-salt.sh -x python3 stable
      sudo lxc exec ${lxd_container.create_container[each.key]} -- systemctl enable --now salt-minion
      EOT
  }
}

resource "null_resource" "delete_minion_key" {
  provisioner "local-exec" {
    when       = destroy
    on_failure = continue
    command    = <<EOT
      sudo salt-key -y -d ${self.triggers.host}
      EOT
  }
}

Hi @Ramshield,

Inside provisioner blocks you must use self to refer to the object that the provisioner is running for. You already have a correct use of self in the null_resource.delete_minion_key provisioner; the principle is the same for lxd_container.create_container.

However, in your existing example you seem to be referring to the entire resource instance object in your string interpolation, which isn’t correct because it’s not possible to interpolate an object directly into a string template. If you just change lxd_container.create_container[each.key] to self, you’ll see a new error about that type mismatch.

If there is a particular attribute of that resource type that you intend to use, say example, then you can access it via self like this:

      sudo lxc exec ${self.example} -- apt update
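For example, since you set name on each container, self.name is the natural value to pass to lxc exec. A sketch of the provisioner rewritten that way (assuming, as in your configuration, that name is set on the resource):

  provisioner "local-exec" {
    command = <<EOT
      sudo lxc exec ${self.name} -- apt update
      sudo lxc exec ${self.name} -- apt install -y curl
      EOT
  }

Using self instead of the resource’s own address is also what makes the “Error: Cycle” message go away, because Terraform no longer sees lxd_container.create_container as depending on itself.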

Hi,

Thanks, that helps.

This is what I got now:

terraform {
  required_providers {
    lxd = {
      source = "sl1pm4t/lxd"
    }
  }
}

locals {
  images = {
    "ubuntu_focal"  = { alias = [ "ubuntu_focal" ],  source_remote = "ubuntu", source_image = "ubuntu/focal/amd64" }
    "debian_buster" = { alias = [ "debian_buster" ], source_remote = "images", source_image = "debian/10/amd64"}
  }
  containers = {
    "unifi"    = { image = "ubuntu_focal", alias = [ "ubuntu_focal" ],  source_remote = "ubuntu", source_image = "ubuntu/focal/amd64" },
    "grafana"  = { image = "debian_buster", alias = [ "debian_buster" ], source_remote = "images", source_image = "debian/10/amd64" }
  }
}

resource "lxd_cached_image" "image" {
  for_each      = local.containers

  aliases       = each.value.alias
  source_remote = each.value.source_remote
  source_image  = each.value.source_image
}

resource "lxd_container" "create_container" {
  for_each  = local.containers

  name      = each.key
  image     = lxd_cached_image.image.fingerprint
  ephemeral = false

  config = {
    "boot.autostart" = true
  }

  provisioner "local-exec" {
    command = <<EOT
      echo ${self}
      EOT
  }
}

So lxd_cached_image.image.fingerprint isn’t going to work. But can I use self here?
I need it to download the image first, before setting up the container.
I’m starting to wonder whether what I’m trying to achieve is even possible.

Hi @Ramshield,

I think you’re asking about how to connect together the container objects and the image objects where each.key matches. If so, the important thing to know here is that a resource with for_each set is represented as a map when you refer to it elsewhere, using the same keys as in the for_each source value. So you can refer to a particular corresponding object like this:

  image = lxd_cached_image.image[each.key].fingerprint

In this case it seems like your lxd_container resource doesn’t use any of the element values from local.containers (only its keys), so it might be clearer to write it like the following example, so that future readers can understand your intent as “create one container for each cached image”:

resource "lxd_container" "create_container" {
  for_each  = lxd_cached_image.image

  name      = each.key
  image     = each.value.fingerprint
  ephemeral = false

  config = {
    "boot.autostart" = true
  }
}
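As a side note, this chaining also answers your ordering concern: because lxd_container.create_container refers to lxd_cached_image.image (via for_each here, or via the image argument in your version), Terraform infers the dependency automatically and will always finish downloading a cached image before creating the container that uses it.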

I’m sorry if I’ve misunderstood what you are asking about. If so, it would be helpful if you could share Terraform’s output from running the configuration you tried, including any error messages you saw, and then explain what result you wanted to see.


So what I’m trying to achieve is building multiple containers from the same main.tf in Terraform.
It should loop over all the containers and download the images in lxd_cached_image, which gives me a fingerprint attribute that I can then use in lxd_container to set up the container.

My code:

terraform {
  required_providers {
    lxd = {
      source = "sl1pm4t/lxd"
    }
  }
}

locals {
  containers = {
    "unifi"    = { name = "unifi", image = "ubuntu_focal", alias = [ "ubuntu_focal" ],  source_remote = "ubuntu", source_image = "focal/amd64" },
    "grafana"  = { name = "grafana", image = "debian_buster", alias = [ "debian_buster" ], source_remote = "images", source_image = "debian/10/amd64" }
  }
}

resource "lxd_cached_image" "image" {
  for_each      = local.containers

  aliases       = each.value.alias
  source_remote = each.value.source_remote
  source_image  = each.value.source_image
}

resource "lxd_container" "create_container" {
  for_each  = local.containers

  name      = each.key
  image     = lxd_cached_image.image[each.key].fingerprint
  ephemeral = false

  config = {
    "boot.autostart" = true
  }

  provisioner "local-exec" {
    command = <<EOT
      echo ${self}
      EOT
  }
}

And I get the following error:

# terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

lxd_cached_image.image["grafana"]: Refreshing state... [id=local/54156cd6a2243b9b059bc693837cfd2750fe6e720c4faa1999b43daa42a0ab81]

Warning: provider set empty string as default value for bool accept_remote_certificate

Warning: provider set empty string as default value for bool generate_client_certificates

Error: Invalid index

  on main.tf line 28, in resource "lxd_container" "create_container":
  28:   image     = lxd_cached_image.image[each.key].fingerprint
    |----------------
    | each.key is "grafana"
    | lxd_cached_image.image is object with 1 attribute "unifi"

The given key does not identify an element in this collection value.

I think I’m doing it correctly, and I really don’t understand the error at all.
I’d appreciate your explanation, as I have been stuck on this for four days now.

Hi @Ramshield,

Your configuration looks workable to me. I think there may be something operationally strange going on here.

The warnings about the provider setting an empty string as a default value for a bool are weird and suggest that something strange is happening in the provider implementation. I’m not sure if that’s contributing to the problem or if it’s just an unrelated oddity.

It seems like your set of lxd_cached_image.image instances has somehow got out of sync with the elements of local.containers. One way that can happen is if you delete an existing object outside of Terraform, which leaves Terraform in a confusing situation: there is no longer a “unifi” object in the real remote infrastructure, so Terraform can’t correlate the existing objects with the “unifi” element in your local.containers map.

If the cached image for “unifi” has been deleted outside of Terraform, you could help Terraform understand your intent by asking it explicitly to “forget” the existing object by removing it from the state:

terraform state rm 'lxd_cached_image.image["unifi"]'
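If the exact instance address isn’t obvious, terraform state list will print every address Terraform is currently tracking, and you can copy the one to remove from that output:

terraform state list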

This is actually a good example of a situation where my second example, for_each = lxd_cached_image.image, would potentially produce a better result, because it gives Terraform a better understanding of your intent: create one container for each image. If you write it the way I did in my final example in my last comment, it would bypass this error, because Terraform would derive the container instances from whichever lxd_cached_image.image instances actually exist, rather than failing on a map lookup for a missing key.


Hi,

Thanks a lot, you were very close, and definitely pointed me in the right direction:

root@server-01-dev:/lxc-test# terraform state rm 'lxd_cached_image.image["unifi"]'

Error: Invalid target address

No matching objects found. To view the available instances, use "terraform
state list". Please modify the address to reference a specific instance.


root@server-01-dev:/lxc-test# terraform state list
lxd_cached_image.image["grafana"]
root@server-01-dev:/lxc-test# terraform state rm 'lxd_cached_image.image["grafana"]'
Removed lxd_cached_image.image["grafana"]
Successfully removed 1 resource instance(s)

But can I re-use the image somehow?

# terraform apply -auto-approve
lxd_cached_image.image["unifi"]: Creating...
lxd_cached_image.image["grafana"]: Creating...
lxd_cached_image.image["test1"]: Creating...
lxd_cached_image.image["unifi"]: Still creating... [10s elapsed]
lxd_cached_image.image["grafana"]: Still creating... [10s elapsed]
lxd_cached_image.image["test1"]: Still creating... [10s elapsed]
lxd_cached_image.image["unifi"]: Still creating... [20s elapsed]
lxd_cached_image.image["grafana"]: Still creating... [20s elapsed]
lxd_cached_image.image["test1"]: Still creating... [20s elapsed]
lxd_cached_image.image["grafana"]: Still creating... [30s elapsed]
lxd_cached_image.image["unifi"]: Still creating... [30s elapsed]
lxd_cached_image.image["test1"]: Still creating... [30s elapsed]
lxd_cached_image.image["test1"]: Creation complete after 31s [id=local/d320aa0a0cf2f6b21c10709865d28ec4ffd7e930bf15464914c495d29a138a35]
lxd_cached_image.image["unifi"]: Still creating... [40s elapsed]
lxd_cached_image.image["unifi"]: Still creating... [50s elapsed]
lxd_cached_image.image["unifi"]: Still creating... [1m0s elapsed]
lxd_cached_image.image["unifi"]: Still creating... [1m10s elapsed]
lxd_cached_image.image["unifi"]: Still creating... [1m20s elapsed]
lxd_cached_image.image["unifi"]: Still creating... [1m30s elapsed]
lxd_cached_image.image["unifi"]: Still creating... [1m40s elapsed]
lxd_cached_image.image["unifi"]: Creation complete after 1m45s [id=local/c141ba91f766aab428b64f0e2f64b11e583093c2f4e52b4f6c8baa32021d413d]

Warning: provider set empty string as default value for bool accept_remote_certificate

Warning: provider set empty string as default value for bool generate_client_certificates

Error: Failed remote image download: Add new image alias to the database: UNIQUE constraint failed: images_aliases.project_id, images_aliases.name

  on main.tf line 18, in resource "lxd_cached_image" "image":
  18: resource "lxd_cached_image" "image" {

Which makes sense, of course. But I will have about 10 containers soon, and it doesn’t make sense to cache the same image 10 times, just under a different alias.

Thanks again for your help! At least it works now 🙂

Hi @apparentlymart,

Would you mind helping me out one last time?

Please see my previous post. Thank you kindly!

Hi @Ramshield,

Unfortunately I think this part of your question calls for some LXD-specific knowledge I don’t have. I’m not familiar with lxd_cached_image and what its constraints are, so I’m not sure how to interpret the error message you shared.

If you can explain what you want to achieve in a sort of generic Terraform way, separately from LXD concepts, I may be able to give some pointers for how to do it, but with what you’ve shared so far I’m not sure what to suggest.