How to set a dependency within iterations of a for_each

I have a bit of a corner-case issue here. I’ve created a module that manages AWS IAM access key rotation. The module creates two keys in a for_each loop; two is the number chosen because AWS enforces a hard limit of two access keys per user. The for_each key is a timestamp, which drives the rotation. When rotation time comes around, the for_each map changes so that a new key is created, the newer of the two existing keys is kept (for seamless blue/green cutover in the consuming app), and the oldest key is destroyed. The resource code is dead simple:

resource "aws_iam_access_key" "iam_key" {
  # Use a for_each so that new keys rotate in and old-keys expire out
  for_each = {
    (local.current_key_stamp) = null,
    (local.expired_key_stamp) = null
  }

  user = var.iam_user
}
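
For context, here is a minimal sketch of how the two stamp locals referenced above could be derived. This is an assumption, not the module's actual code: the time_rotating resource (from the hashicorp/time provider) and the 60-day interval are hypothetical choices.

```hcl
# Hypothetical sketch only; the real module's timestamp math may differ.
# Assumes the hashicorp/time provider and a fixed 60-day rotation interval.
resource "time_rotating" "rotate" {
  rotation_days = 60
}

locals {
  rotation_seconds = 60 * 60 * 24 * 60

  # time_rotating.rotate is re-created on schedule, so its timestamp only
  # changes when a rotation is due -- and the for_each map keys change with it.
  current_key_stamp = tostring(time_rotating.rotate.unix)
  expired_key_stamp = tostring(time_rotating.rotate.unix - local.rotation_seconds)
}
```

Because the for_each keys are derived from a resource that only changes on the rotation schedule, a plan run between rotations sees no changes at all.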

This all works fantastically in theory. In practice I’m running into issues with the hard AWS two-key limit I mentioned. There is a race condition: if the old key is not fully destroyed before the create API call for the new key is issued, I get a 409 from AWS. Example:

module.rotating_iam_key.aws_iam_access_key.iam_key["1666656000"]: Destroying... [id=<REDACTED>]
module.rotating_iam_key.aws_iam_access_key.iam_key["1671840000"]: Creating...
module.rotating_iam_key.aws_iam_access_key.iam_key["1666656000"]: Destruction complete after 1s

Error: Error creating access key for user <REDACTED>: LimitExceeded: Cannot exceed quota for AccessKeysPerUser: 2
        status code: 409, request id: <REDACTED>

Note that the create is logged before the destruction complete message. I’ve seen this with and without -parallelism=1.

Is there a way to set a dependency within the for_each so that Terraform doesn’t try to create the new key until destruction of the old key completes?

Hi @ag-TJNII,

Dependencies in Terraform exist only between resource blocks, not between instances of a resource block. Instances of the same resource are all equal in the dependency graph, so in principle they are handled concurrently; if constrained by the concurrency limit, their order is undefined.

If you need to fix a particular order then I think you will need to find some way to represent this as two separate resource blocks where one depends on the other. I’m afraid I don’t have an immediate idea about how to achieve that while still keeping this dynamic.

Another option would be to have only a single access key resource instance and tell Terraform to replace it when you need to rotate the access key.

One way to do that is to add an extra option when you run Terraform:

  • terraform apply -replace=aws_iam_access_key.iam_key
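
Since in this thread the resource lives inside a module (per the log output above), and assuming the single-instance model just described, the address passed to -replace would include the module path:

```
terraform apply -replace='module.rotating_iam_key.aws_iam_access_key.iam_key'
```

Terraform will then plan to destroy and re-create that one key as part of the run.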

If you want to do it entirely within configuration then I think it should be possible with the replace_triggered_by lifecycle argument. That mechanism is resource-based, however, so you may need to introduce an extra “no-op” resource that adapts what would normally be just a local value change into a resource action. null_resource in the hashicorp/null provider is a common way to do that:

resource "null_resource" "iam_key_rotate" {
  triggers = {
    key_stamp = local.current_key_stamp
  }
}

resource "aws_iam_access_key" "iam_key" {
  user = var.iam_user

  lifecycle {
    replace_triggered_by = [null_resource.iam_key_rotate]
  }
}

The meaning of the above is that Terraform should plan to replace aws_iam_access_key.iam_key any time there’s a change planned for null_resource.iam_key_rotate. Because the null_resource configuration refers to local.current_key_stamp, there will always be a change pending for that resource whenever local.current_key_stamp has changed compared to the previous run.

With this model you’d represent the need to rotate the access key by changing the value of local.current_key_stamp.

By default Terraform will handle a “replace” by expanding it to a destroy followed by a create. There is also a “create before destroy” mode which swaps that ordering, but I think if you used the create-before-destroy order then you’d recreate the same problem you started with, so the default “destroy and then create” ordering is the one you’ll need here. You can see in Terraform’s plan which of the two orderings it is intending to use, so you can check before applying whether it’s using the required ordering.
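
To make that explicit in configuration: create_before_destroy defaults to false, which is the destroy-then-create ordering needed here. Spelled out, the lifecycle block from the sketch above would look like this (note that replace_triggered_by expects a list):

```hcl
resource "aws_iam_access_key" "iam_key" {
  user = var.iam_user

  lifecycle {
    # false is the default: destroy the old key first so the
    # two-keys-per-user quota is never exceeded mid-replacement.
    create_before_destroy = false
    replace_triggered_by  = [null_resource.iam_key_rotate]
  }
}
```

Setting create_before_destroy = true here would reproduce the original 409, since both keys would briefly exist at once.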

Okay, thanks for the info. I’ll look into the replace_triggered_by lifecycle rule. I think with that and some slightly more complex timestamp math I can get the behavior I want.

Closing the loop: this pattern worked successfully. Using it, we were able to resolve the key-limit issue when rotating keys.

This topic was automatically closed 62 days after the last reply. New replies are no longer allowed.