Getting the "The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform ..." error

I’ve got the idea for a scaling rule, defined as follows:

#############################################################################################################
# Additional schedules
#############################################################################################################
locals {
  // NOTE : This is a UTC timestamp and so will need to change when outside of DST to T09:45:00Z if still required.
  client_x_scale_date     = "2020-08-03T08:45:00Z"
  client_x_scale_required = formatdate("YYYYMMDDhhmmss", timestamp()) <= formatdate("YYYYMMDDhhmmss", local.client_x_scale_date)
  client_x_scaling        = var.environment == "production" && local.client_x_scale_required ? {
    app = {
      name    = module.app_asg.autoscaling_group_name
      asg_min = var.app_asg_min
      scale   = 4
    }
    wildcard = {
      name    = module.wildcard_asg.autoscaling_group_name
      asg_min = var.wildcard_asg_min
      scale   = 6
    }
  } : {}
}

resource "aws_autoscaling_schedule" "client_x_scaling_up" {
  for_each               = local.client_x_scaling
  scheduled_action_name  = "${each.key}-scheduled-scaling-up-client_x"
  min_size               = ceil(each.value.asg_min * each.value.scale)
  max_size               = -1
  desired_capacity       = -1
  start_time             = local.client_x_scale_date
  autoscaling_group_name = each.value.name
}

The goal is to allow me to set the date/time for an autoscaling rule without needing to come back to this code, once the rule has been activated, to remove it. Essentially, build an expiry date into the rule.

If the time during which this rule is applicable has passed, then the resource should not exist, thus deleting the rule from the state file.

At the moment I’m getting:

Error: Invalid for_each argument

  on client-scaling.tf line 23, in resource "aws_autoscaling_schedule" "client_x_scaling_up":
  23:   for_each               = local.client_x_scaling

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.

If I remove the timestamp() check and put a hard-coded value there, things work appropriately. If my fake timestamp is before the due date, I get the resources in the plan. If it is after the due date, I do not get the resources in the plan.

So, with …

  client_x_scale_required = (formatdate("YYYYMMDDhhmmss", "2020-07-29T14:35:32Z") <= formatdate("YYYYMMDDhhmmss", local.client_x_scale_date)) ? true : false 

I get …

  # aws_autoscaling_schedule.client_x_scaling_up["app"] will be created
  + resource "aws_autoscaling_schedule" "client_x_scaling_up" {
      + arn                    = (known after apply)
      + autoscaling_group_name = "asg-app - lc-app-20200729115620634100000002"
      + desired_capacity       = -1
      + end_time               = (known after apply)
      + id                     = (known after apply)
      + max_size               = -1
      + min_size               = 8
      + recurrence             = (known after apply)
      + scheduled_action_name  = "app-scheduled-scaling-up-client_x"
      + start_time             = "2020-08-03T08:45:00Z"
    }

  # aws_autoscaling_schedule.client_x_scaling_up["wildcard"] will be created
  + resource "aws_autoscaling_schedule" "client_x_scaling_up" {
      + arn                    = (known after apply)
      + autoscaling_group_name = "asg-wildcard - lc-wildcard-20200729115620727100000004"
      + desired_capacity       = -1
      + end_time               = (known after apply)
      + id                     = (known after apply)
      + max_size               = -1
      + min_size               = 12
      + recurrence             = (known after apply)
      + scheduled_action_name  = "wildcard-scheduled-scaling-up-client_x"
      + start_time             = "2020-08-03T08:45:00Z"
    }

If I use …

client_x_scale_required = (formatdate("YYYYMMDDhhmmss", "2020-08-03T08:45:01Z") <= formatdate("YYYYMMDDhhmmss", local.client_x_scale_date)) ? true : false

One second after the required datetime, I don’t get any resources in the plan. And if they had already existed, I would expect them to be deleted.

And so I achieve zero maintenance once the required date has been set.

Is there a way to achieve this sensibly?

One horrible option is to put the current datetime into a variable as part of pre-processing. I’d rather not do that, as the pipeline only runs Terraform; there is no make or other scripting, which keeps the work down.

The main reason for this is that after the due date the rule is invalid from AWS’s perspective, so the deployment fails. That means I have to stay on top of all of these schedules every time one changes.

I’ve not properly mentioned the ongoing frustration of Daylight Savings Time in all of this. Solving that too would be immensely beneficial to my sanity!

Hi @rquadling,

I think the root problem here is that the timestamp() function returns the time that function was called during the apply step, and so during planning its value is not predictable.

Unfortunately that is a general problem with time: it has an annoying habit of continuing to pass between plan and apply. :smiley:

What you’re describing here is a problem I’ve seen before over in this Terraform issue:

That ultimately caused me to make a proposal to the AWS provider team to make this resource type be more “Terraform-friendly”:

Unfortunately it seems that not enough people have been trying to manage autoscaling schedules with Terraform for this to prompt further work, so as far as I know the situation is still the same as it was then.

In the immediate term I think, sadly, the best answer is for you not to manage autoscaling schedules with Terraform at all: their design in the remote API is not really compatible with Terraform’s declarative model, and the provider currently just reflects that same API design into Terraform, making it impossible to use effectively.

As you said, there is a possible compromise here of calculating a suitable timestamp outside of Terraform and passing it in, which is definitely not ideal either but you might consider it preferable to creating an entirely separate mechanism to create autoscaling schedules.


I don’t have any experience using it, but this seems like it might be a good use case for the time provider. Maybe the time_rotating resource could work for this use case?
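For anyone trying this later, here is a rough, untested sketch of how time_rotating might be wired in (the resource name is made up, and as the replies below note, this may still hit the same plan-time limitation):

```hcl
# Untested sketch: store a reference timestamp in state via the time
# provider, instead of re-evaluating timestamp() on every run. The
# resource is flagged for replacement once the rotation time passes.
resource "time_rotating" "client_x_expiry" {
  rotation_rfc3339 = "2020-08-03T08:45:00Z" # same as local.client_x_scale_date
}

locals {
  # rfc3339 is the stored creation timestamp; lexicographic comparison
  # works because both strings are in the same RFC 3339 UTC format.
  client_x_scale_required = time_rotating.client_x_expiry.rfc3339 <= "2020-08-03T08:45:00Z"
}
```

Note that on the very first plan the stored timestamp is not yet known, so this would only sidestep the for_each error after the time_rotating resource has been applied once.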

Thank you for your input on this.

I did have a little play around with the time provider, but was unable to get it to work for me.

Having gone back to the idea of writing to the .tfvars file before planning, I realised there is a better solution: I can simply set the variable I need as an environment variable.

Adding the following line to the pipeline code has the same effect as editing the .tfvars file, with the advantage that the .tfvars file never needs editing, so no additional code or scripts.

export TF_VAR_current_datetime=$(TZ=UTC date "+%Y-%m-%dT%H:%M:%SZ")

For me this is a “good enough” solution: capturing a datetime outside of Terraform to be used within Terraform for conditional evaluation.
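For completeness, the Terraform side of this is just an ordinary input variable: Terraform automatically maps environment variables prefixed with TF_VAR_ onto variables of the same name. A minimal sketch (the variable name matches the export above; the local wiring is an assumption):

```hcl
# Declared to receive TF_VAR_current_datetime from the pipeline.
variable "current_datetime" {
  type        = string
  description = "UTC timestamp (RFC 3339) captured by the pipeline at plan time."
}

locals {
  # Known at plan time, so safe to use when gating a for_each.
  client_x_scale_required = var.current_datetime <= local.client_x_scale_date
}
```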

Thank you for your input!

RQ.


Oh yes, the time provider is a good call here and is something that has changed since the previous discussions I linked. Thanks for the reminder!

Unfortunately I think the problem here is that the time provider aims to address a different problem with timestamp(): that it gets re-evaluated for every new plan. The various time provider resources instead generate a timestamp once, when the resource is created, and keep using that same timestamp indefinitely afterwards. But I believe the timestamp it retains is still captured at apply time, so its result is still (known after apply) during planning and therefore incompatible with for_each. :frowning:

In principle the time provider could offer a mode where it remembers the timestamp when the plan was created rather than when the plan was applied, if it did some trickery with the CustomizeDiff callback, but that might just replace one problem with another: if too much time passes between plan and apply, the plan may become invalid (from the remote API’s perspective) by the time it’s applied. :confounded:
