Terraform seems to recreate all resources when for_each is used on a list/map and the list is modified

I have 4-6 (possibly n) projects in an AWS environment that share the same base domain. I am trying to create a separate CloudFront distribution, API Gateway, and Cognito user pool for each project.

For API Gateway I already had a module that created the gateway for a single project in the environment. I have now put a loop on the call to modules/api to create an API Gateway for each project in this list:

var.projects = [
  { "project" : "project1", "version" : "v1" },
  { "project" : "project2", "version" : "v1" },
  { "project" : "project3", "version" : "v2" },
  { "project" : "project4", "version" : "v3" }
]

The first terraform apply works fine, but when I add new projects (for example project5 and project6) to the var.projects list above, it fails with this error for all of projects 1 to 4:

Error: creating Route 53 Record: InvalidChangeBatch: [Tried to create resource record set [name='xyz-api.project4.domain.com', type='A'] but it already exists] status code: 400, request id: b59a889a-c920-4fec-a709-e8b489b57613

with module.api["project4"].aws_route53_record.example
on modules/api/route53.tf line 16, in resource "aws_route53_record" "example":

resource "aws_route53_record" "example" {

The issue is: why does Terraform try to apply the api module for the existing project1 to project4 when they are already there, and then fail?
Is it the for_each statement that tries to recreate each resource instance on every apply when the list var.projects is updated, even if the instance was already created earlier by Terraform?
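For context on the question above: for_each over a map or set keys each instance by its key, so adding new keys should not, by itself, replace existing instances. A minimal sketch (the zone data source and record values here are hypothetical, not taken from the configuration in this thread):

```hcl
variable "projects" {
  type    = set(string)
  default = ["project1", "project2"]
}

# Each instance gets a stable address such as
# aws_route53_record.example["project1"]. Adding "project3" to the
# set plans exactly one new instance and leaves the others untouched.
resource "aws_route53_record" "example" {
  for_each = var.projects

  zone_id = data.aws_route53_zone.public.id
  name    = "xyz-api-${each.key}.example.com"
  type    = "A"
  ttl     = 300
  records = ["192.0.2.1"] # placeholder address
}
```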
The code for the aws_route53_record resource under modules/api/route53.tf:

resource "aws_api_gateway_domain_name" "this" {
  domain_name              = "${var.dns_name}.${var.domain}"
  regional_certificate_arn = var.certificate_validation

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}

resource "aws_route53_record" "example" {
  name    = aws_api_gateway_domain_name.this.domain_name
  type    = "A"
  zone_id = data.aws_route53_zone.public.id

  alias {
    evaluate_target_health = true
    name                   = aws_api_gateway_domain_name.this.regional_domain_name
    zone_id                = aws_api_gateway_domain_name.this.regional_zone_id
  }
}

Calling module:

locals {
  projectList = {
    for pv in var.project_version : pv.project => {
      project_name = pv.project
    }
  }
}

module "api" {
  source   = "./modules/api"
  for_each = local.projectList

  env                        = var.env
  balancer_uri               = "${var.dns_name}.${var.domain}"
  domain                     = var.domain
  dns_name                   = "xyz-api-${each.value.project_name}"
  sm_api_host                = "sm-portal-api.${var.domain}"
  app                        = each.value.project_name
  subnets                    = [module.vpc.private_subnets[0], module.vpc.private_subnets[1], module.vpc.private_subnets[2]]
  vpc_id                     = module.vpc.vpc_id
  internal_alb_arn           = aws_lb.alb_internal.id
  certificate_validation     = aws_acm_certificate_validation.example.certificate_arn
  user_identifier_lambda_arn = aws_lambda_function.User_Identifier.invoke_arn

  lambdas = {
    post_confirm = module.lambda.identity_lambda
  }

  cognito = {
    create_auth_challenge          = aws_lambda_function.Cognito_Create_Auth.arn
    define_auth_challenge          = aws_lambda_function.Cognito_Define_Auth.arn
    pre_sign_up                    = aws_lambda_function.Cognito_Pre_Sign_Up.arn
    verify_auth_challenge_response = aws_lambda_function.Cognito_Verify_Auth.arn
    pre_authentication             = aws_lambda_function.Cognito_Pre_Auth.arn
    post_authentication            = aws_lambda_function.Cognito_Post_Auth.arn
  }

  depends_on = [
    # ...
  ]
}

The issue you’re facing occurs because Terraform detects changes in your configuration that lead it to believe existing resources need to be recreated, even when you’re only adding new projects. This can happen because of how Terraform tracks resource identities and dependencies, especially when dynamic values are used in your configuration. To avoid this: ensure your resource configurations are idempotent, review the Terraform plan carefully before applying, use explicit dependencies with depends_on only where needed, and consider using a lifecycle block to ignore changes to certain attributes or to prevent destruction.

Do lifecycle features directly address the issue of resources being recreated when new items are added to the for_each list, and is ignore_changes the right tool here?

I believe, to this point specifically, @rtwolfe was referring to the possibilities opened up by the use of the lifecycle argument ignore_changes. (Reference)
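As a sketch of what that would look like: ignore_changes goes inside the resource whose attribute churn you want later plans to disregard (the attribute named here is illustrative, not a recommendation for this specific case):

```hcl
resource "aws_route53_record" "example" {
  # ... existing arguments as in the module ...

  lifecycle {
    # Tell Terraform not to propose updates when only these
    # attributes of this resource have drifted.
    ignore_changes = [records]
  }
}
```

Note that ignore_changes can only name attributes of the containing resource; it cannot reference other resources or suppress planning of other instances.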

I tried implementing a lifecycle block with ignore_changes, and it works for adding new projects to the var.projects list. However, I started getting a cycle error when removing one or more projects from the list var.projects.

Hmm. Could you share a bit more about the procedure you’re using to remove projects, and the subsequent errors you’re getting, please?

@devopsEngr, the primary problem I see is your use of depends_on in the module call. There should be no reason to put these depends_on entries in there, and they are probably the cause of your unexpected changes. You are declaring that every single thing within your api module depends on all changes from everything within the depends_on (including every individual resource within the vpc module), which is definitely going to prevent data sources like data.aws_route53_zone.public from being read when there are changes pending.
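Concretely, the suggestion is to delete the depends_on list from the module call and let the argument references establish ordering implicitly. A hedged sketch of the trimmed call, reusing names from the post above:

```hcl
module "api" {
  source   = "./modules/api"
  for_each = local.projectList

  # These argument references already give Terraform the dependency
  # edges it needs, so no explicit depends_on is required here.
  vpc_id                 = module.vpc.vpc_id
  internal_alb_arn       = aws_lb.alb_internal.id
  certificate_validation = aws_acm_certificate_validation.example.certificate_arn
  # ... remaining arguments as in the original call ...
}
```

With module-level depends_on removed, data sources inside the module (such as data.aws_route53_zone.public) are no longer forced to wait for every object in the listed dependencies, so they can be read during plan.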

Something I can’t tell from your configuration is if you have data sources and managed resources representing the same logical resources. If that’s the case, Terraform will not be able to correctly resolve the data sources during plan, because the resources they represent won’t be created or updated until apply.

Finally, cycles could not be related to adding ignore_changes, because ignore_changes can only refer to the containing resources. I would wager fixing the use of depends_on would fix your issue there, but if not you will need to closely inspect the cycle error output to see how you may have created this problem in the configuration.


After I cleaned up a bunch of depends_on entries from modules.tf, where all the module calls happen, project deletion also worked fine.
However, it looks like there are more depends_on blocks that are still troublesome, because my code works fine when I:

  1. Add a project to the list: var.projects = [{"project": "project1", "version": "v1"}, {"project": "project2", "version": "v1"}, {"project": "project3", "version": "v2"}, {"project": "project4", "version": "v3"}]
  2. Remove a project from the list: var.projects = [{"project": "project1", "version": "v1"}, {"project": "project2", "version": "v1"}]

But it fails with a cycle error when I try to replace an item in the list:
var.projects = [{"project": "project6", "version": "v6"}]

Error: Cycle: module.squidex-deploy.data.aws_route53_zone.internal (expand), module.squidex-deploy.data.aws_route53_zone.public (expand)

I’m not sure how to check for this: “if data sources and managed resources represent the same logical resources”.

That requires manual inspection of your configuration. For example, if you have both

resource "aws_vpc_endpoint" ...

and

data "aws_vpc_endpoint" ...

in the same configuration for the exact same endpoint, you are likely to have dependency ordering issues between those objects.
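If that pattern exists, one common fix (a sketch under assumed names, not taken from this thread’s configuration) is to remove the data source and reference the managed resource’s attributes directly, so Terraform can order the operations:

```hcl
resource "aws_vpc_endpoint" "s3" {
  vpc_id       = var.vpc_id
  service_name = "com.amazonaws.us-east-1.s3"
}

# Refer to aws_vpc_endpoint.s3.id here rather than reading the same
# endpoint back through a data "aws_vpc_endpoint" block, which could
# not be resolved until the resource is applied.
output "endpoint_id" {
  value = aws_vpc_endpoint.s3.id
}
```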
