Suggestions to implement modules that avoid "keys derived from resource attributes" error

Hi there,

I’m trying to build my Terraform modules in such a way that they can be applied as part of the same Terraform execution. I have run up against a blocker and am seeking suggestions for meeting my requirements without introducing risk to the modules.

The error that occurs clearly explains the problem, and I understand the issue, namely:

The “for_each” map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.

When working with unknown values in for_each, it’s better to define the map keys statically in your configuration and place apply-time results only in the map values.

Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.
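
If I understand the “static keys” suggestion correctly, it means something like the following, where the map keys are names chosen in configuration and only the values are apply-time results (the resource names here are made up for illustration):

```hcl
# Hypothetical sketch of the "static keys" pattern: the keys
# ("private-a", "private-b") are literal strings in configuration,
# so Terraform knows the full set of instances at plan time, even
# though the route table IDs are unknown until apply.
resource "aws_route" "example" {
  for_each = {
    private-a = aws_route_table.private_a.id
    private-b = aws_route_table.private_b.id
  }

  route_table_id         = each.value
  destination_cidr_block = "10.0.0.0/8" # illustrative cidr
  transit_gateway_id     = var.tgw_id
}
```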

For context:

  • I am maintaining multiple AWS accounts.
  • I have built one module (“vpc”) that is responsible for creating a VPC, subnets, and route tables.
  • I have another module (“tgw”) that creates routes between the private subnets and another account’s Transit Gateway.
  • The “tgw” module relies upon the output of the “vpc” module.

As these modules can be used to maintain multiple accounts:

  • the list of cidrs to direct to the “tgw” are variable
  • the route table ids to apply to are variable

The route table IDs cannot be known until the “vpc” module’s apply has completed.
The cidrs to direct to the tgw are known at execution time, but they are passed as a variable because they change based on each account’s requirements.
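
For illustration, the two variables look something like this (the types and example values here are assumptions on my part):

```hcl
# cidrs to route to the transit gateway; known at plan time,
# but they differ per account, so they arrive as a variable.
variable "transit_cidrs" {
  type    = list(string)
  default = ["10.1.0.0/16", "10.2.0.0/16"] # example values only
}

# route table ids; populated from the "vpc" module's outputs,
# so unknown until that module has applied.
variable "route_table_ids" {
  type = list(string)
}
```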

The below is an example of what I am currently doing to create “aws_route” resources based on these 2 variable values.

locals {
  # every combination of cidr and route table id
  pairs = [for pair in setproduct(var.transit_cidrs, var.route_table_ids) : {
    cidr  = pair[0]
    rt_id = pair[1]
  }]

  # keyed on the values themselves, e.g. "10.1.0.0/16-rtb-abc123"
  map = { for pair in local.pairs : "${pair.cidr}-${pair.rt_id}" => pair }
}

resource "aws_route" "route_tg" {
  for_each               = local.map
  route_table_id         = each.value.rt_id
  destination_cidr_block = each.value.cidr
  transit_gateway_id     = var.tgw_id
}

What this eventually gives me is multiple resources aws_route.route_tg[***], where *** would (for example) look like this: 0.0.0.0/0-rtb-123b1234fc12f1234.

Given that both the cidr and the rt id can be variable, and given that the VPC creation is part of the same execution as the route creation, this obviously causes the error message.

The reason the keys are not statically defined is that if a new cidr or route table ID is added later and passed to the “tgw” module, the order of values could change. That could cause one of the mappings to “move” to a different key, resulting in a destroy action and a create action. The create could even be attempted before the destroy, which would obviously cause an issue. I don’t want to be in a situation where a route has been removed but failed to be recreated; ideally it wouldn’t be removed at all.
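
To make the ordering concern concrete, this is the hypothetical index-based keying I am trying to avoid:

```hcl
# Hypothetical alternative keyed on list position rather than value.
# Inserting a new cidr at the front of var.transit_cidrs shifts every
# index, so key "0" suddenly refers to a different cidr/route-table
# pair and Terraform plans a destroy and a create for existing routes.
locals {
  indexed = { for i, pair in local.pairs : tostring(i) => pair }
}
```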

I’m sure this “issue” isn’t insurmountable, and an adjustment to how my modules are defined could make this workable, but at the moment I’m at a loss as to how:

  • to create both vpc and tgw routes in the same apply execution
  • to support variability in input without adding risk to the apply

Part of the reason I want this is that I’m building a test environment where my pipeline creates an entire environment (vpc, tgw routes, ec2 instances) every morning and destroys it again in the evening. Splitting into multiple executions is not practical for that kind of automation.

I hope the examples above clearly describe the challenge, and I welcome any criticism of the implementation that would help me to meet the goals laid out above.

Cheers

Steve