Depends_on for a resource with for_each loop

Hello everyone, I'm looking for your suggestions. I am struggling to implement depends_on for a resource with for_each, and it looks like they don't work well together; maybe I am missing something here.

Sharing my situation here: the only thing I am looking for is to delay the creation of the github_team_membership resource until the github_team resource is finished.

resource "github_team" "all" {
  for_each = {
    for team in csvdecode(file("teams.csv")) :
    team.name => team
  }

  name                      = each.value.name
  description               = each.value.description
  privacy                   = each.value.privacy
  create_default_maintainer = true
}

resource "null_resource" "wait" {
  provisioner "local-exec" {
    interpreter = ["bash", "-c"]
    command     = "sleep 400"
  }
}

resource "github_team_membership" "members" {
  depends_on = [null_resource.wait]
  for_each = { for tm in local.team_members : tm.name => tm }

  team_id  = each.value.team_id
  username = each.value.username
  role     = each.value.role
}

Error during apply:

Error: Invalid for_each argument
│ 
│   on teams.tf line 22, in resource "github_team_membership" "members":
│   22:   for_each = { for tm in local.team_members : tm.name => tm }
│     ├────────────────
│     │ local.team_members will be known only after apply
│ 
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many
│ instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each
│ depends on.

To fully understand this, would you be able to share the definition of local.team_members, please?

This is what I have in my locals.tf.
It is mostly taken from GitHub - hashicorp/learn-terraform-github-user-teams.

# Create local values to retrieve items from CSVs
locals {
  # Parse team member files
  team_members_path = "team-members"
  team_members_files = {
    for file in fileset(local.team_members_path, "*.csv") :
    trimsuffix(file, ".csv") => csvdecode(file("${local.team_members_path}/${file}"))
  }
  # Create temp object that has team ID and CSV contents
  team_members_temp = flatten([
    for team, members in local.team_members_files : [
      for tn, t in github_team.all : {
        name    = t.name
        id      = t.id
        slug    = t.slug
        members = members
      } if t.slug == team
    ]
  ])

  # Create object for each team-user relationship
  team_members = flatten([
    for team in local.team_members_temp : [
      for member in team.members : {
        name     = "${team.slug}-${member.username}"
        team_id  = team.id
        username = member.username
        role     = member.role
      }
    ]
  ])
}

Hi @PrinceAgrawal89,

The error you’ve shown here is not one that I would expect to be directly caused by a depends_on argument.

Have you tried planning this configuration without the depends_on argument to see if you still see the same error? I wonder if there’s a different problem here that is causing this error, separately from the depends_on.

The error message says that local.team_members won’t be known until the apply step, and so I think the crucial question here is why exactly that is true.

Values that can’t be determined until the apply step typically arise when you are deriving data from an attribute of a managed resource (a resource block) that hasn’t been created yet and so its values are not all known yet.
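
For example (a contrived illustration, not taken from your configuration; the resource names here are made up), the difference between a for_each key that is known during planning and one that is only assigned during apply looks like this:

resource "github_team" "example" {
  name = "example-team"
}

# Plans fine: the map key comes from a value written directly in the
# configuration ("example-team"), so Terraform knows it during planning.
# Unknown values such as the team id are fine in the resource body.
resource "github_team_membership" "known_key" {
  for_each = { (github_team.example.name) = "someuser" }

  team_id  = github_team.example.id
  username = each.value
  role     = "member"
}

# Fails with "Invalid for_each argument" on the first plan: the map key is
# the team's id, which the GitHub API only assigns during apply.
resource "github_team_membership" "unknown_key" {
  for_each = { (github_team.example.id) = "someuser" }

  team_id  = each.key
  username = each.value
  role     = "member"
}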

In your case I see that local.team_members is derived from local.team_members_temp and that local.team_members_temp is derived from github_team.all, so I think the most likely cause is that one or more of your github_team.all instances have a name attribute that won’t be known until the apply step.

But that alone isn’t sufficient explanation, because I can see in your first comment that the name for all of your github_team.all instances comes from data in the teams.csv file, and that data must be known during planning both because it’s coming from a static file on disk and because otherwise github_team.all’s for_each expression would itself be invalid with the same error you saw here. :thinking:

I don’t have a good sense of what exactly is going wrong with this configuration, but I do have an idea that might change the outcome. In your team_members_temp local value there is an inner for expression like this:

[
  for tn, t in github_team.all : {
    name    = t.name
    id      = t.id
    slug    = t.slug
    members = members
  } if t.slug == team
]

This is constructing a list of objects where each object has a name attribute whose value matches one instance of the github_team.all resource. The for_each expression of that resource tells me that the instance keys should always match the name attribute, and so in this context tn and t.name should always be equal, but Terraform seems to be unable to prove that for some reason. It might help to use tn instead of t.name here, because the instance keys of a resource with for_each are always known during planning (otherwise it causes the error you saw) and so therefore tn must be known during planning:

[
  for tn, t in github_team.all : {
    name    = tn
    id      = t.id
    slug    = t.slug
    members = members
  } if t.slug == team
]

This was a long comment so I’m going to recap two specific things I’d like you to try and let me know what happens. Please try each of these separately and let me know if either of them changes the outcome – if one helps and the other doesn’t then it’ll be important to know which one helped to try to figure out what to try next.

  1. Remove depends_on = [null_resource.wait] from resource "github_team_membership" "members" and try running terraform plan.

    I understand that you do want this resource to wait until null_resource.wait is complete and so this would not be a suitable final answer, but trying this will help to determine whether the depends_on argument is what’s actually causing this problem, or if the problem is actually elsewhere.

    (Please put the depends_on argument back in your configuration before you try the second item here, because I’d like to see the independent result of each of these trials.)

  2. Change the definition of local value team_members_temp so that it says name = tn instead of name = t.name, and try running terraform plan.

    If you are able to plan without seeing the “Invalid for_each argument” error once you make this change, that would be confusing because tn and t.name ought to be equal here, but at least once we know this we can start thinking about reasons why that might not actually be true, which might help get closer to the root cause.

Thanks!

Hello,

I'm facing the same issue with the same code (Manage GitHub Users, Teams, and Repository Permissions | Terraform | HashiCorp Developer).
I made the change you suggested (step 2), but it didn't solve the problem.

Everything works if I first create the teams (terraform apply -target github_team.all) and then apply again, but from what I know this is not recommended.
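
For reference, that is the two-step sequence the error output itself suggests:

terraform apply -target github_team.all
terraform apply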

With depends_on removed from “github_team_membership” “members”, the plan gives:

 test$ terraform plan
data.github_user.self: Reading...
github_membership.all["NeaJohny"]: Refreshing state... [id=terraform-proj:NeaJohny]
data.github_user.self: Read complete after 0s [id=84972004]
╷
│ Error: Invalid for_each argument
│
│   on teams.tf line 14, in resource "github_team_membership" "members":
│   14:   for_each = { for tm in local.team_members : tm.name => tm }
│     ├────────────────
│     │ local.team_members will be known only after apply
│
│ The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this
│ resource.
│
│ When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.
│
│ Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.
╵
 test$
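
For reference, the restructuring that the error message hints at (static map keys, with apply-time results only in the values) might look roughly like the sketch below. This is untested, and it assumes the member CSV filenames match the team names used as the keys of github_team.all; in the original locals they are matched against the team slug instead, so the github_team.all lookup may need adjusting.

locals {
  team_members_path = "team-members"
  team_members_files = {
    for file in fileset(local.team_members_path, "*.csv") :
    trimsuffix(file, ".csv") => csvdecode(file("${local.team_members_path}/${file}"))
  }

  # Keys are built only from the CSV contents, which are known during
  # planning; the apply-time team id is looked up in the resource below.
  team_members = flatten([
    for team, members in local.team_members_files : [
      for member in members : {
        name     = "${team}-${member.username}"
        team     = team
        username = member.username
        role     = member.role
      }
    ]
  ])
}

resource "github_team_membership" "members" {
  for_each = { for tm in local.team_members : tm.name => tm }

  # Assumes the CSV filename (each.value.team) matches a key of
  # github_team.all; adjust if the filenames are slugs rather than names.
  team_id  = github_team.all[each.value.team].id
  username = each.value.username
  role     = each.value.role
}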