Terraform Ignore Changes Of Dynamic Block

Hello, I have a route table in my main module; it creates the default gateway and so on. I also have a second module which creates a peering connection, along with its routes.

The problem is: when I run terraform plan again in the main module, for example, it is going to delete all of the routes that were created by the peering module. The only thing that comes to mind is ignore_changes. Can anyone help me solve this issue, or suggest how to write the ignore_changes block? Here is the code:

resource "aws_route_table" "private" {
   count      = length(var.subnet)
   vpc_id     = var.network
   depends_on = [  ]


   dynamic "route" {
       for_each   = var.ipv4
       content {
         # Other content
         vpc_peering_connection_id   = lookup(route.value, "peering",    "" )
       }
   }

   lifecycle { 
       create_before_destroy = true
       ignore_changes        = [ tags, route ]
   }


   tags = merge(
     {
         "Name"  = var.name
     },
     lookup(var.tags[0], "resource", null),
     lookup(var.tags[0], "optional", null)
   )
}

How can I ignore changes to vpc_peering_connection_id and to lookup(var.tags[0], "optional", null)?

P.S. I'm also trying to ignore this expression, lookup(var.tags[0], "optional", null); can you show an example of how to ignore it as well?
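A minimal sketch of what I believe is the closest you can get: ignore_changes refers to attributes of the resource itself, so as far as I know you can only ignore the whole route attribute (as the lifecycle block above already does), not a single nested argument such as vpc_peering_connection_id, and likewise you can ignore the tags attribute rather than the lookup(var.tags[0], "optional", null) expression that feeds it. The dynamic "route" block is elided here for brevity:

resource "aws_route_table" "private" {
  count  = length(var.subnet)
  vpc_id = var.network

  tags = merge(
    {
      "Name" = var.name
    },
    lookup(var.tags[0], "resource", null),
    lookup(var.tags[0], "optional", null)
  )

  lifecycle {
    create_before_destroy = true

    # ignore_changes can only name attributes of this resource, so the whole
    # tags map is the reliable target for anything merged in via lookup().
    # The route attribute is ignored as a whole; individual arguments inside
    # the dynamic block can't be targeted separately (as far as I know).
    ignore_changes = [tags, route]

    # Assumption: on recent Terraform versions you may be able to narrow this
    # to a single key, e.g. tags["optional"]; verify on your version first.
  }
}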

Hi @unity-unity :wave:
I understand there are two modules, and maybe two distinct terraform apply cycles, which maintain the same resource. I don't think that's recommended.
Would there be an option to shift the route table management into a third module which can apply both modules' requirements together?
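For example, a rough sketch of what that combined module call might look like; the source path and variable names here are only placeholders:

module "route_table" {
  # Hypothetical shared module that owns the route table and all of its routes.
  source = "./modules/route-table"

  vpc_id = var.network
  name   = var.name

  # Both the main routes and the peering routes are merged into one list, so a
  # single plan/apply cycle manages the complete route set.
  routes = concat(var.main_routes, var.peering_routes)
}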


Hi @tbugfinder :wink: Yes, I can move the creation of the peering route to a different module, even in a different directory, but this means that if anyone runs terraform plan on the main module, it will again delete the existing peering route from the route table.

When you talk about modules, does this also mean that each module has its own lifecycle (plan, apply, destroy)?


Hi @tbugfinder, sorry for the late response. Yes, that is correct: each module is independent, and each runs its own plan, apply, and destroy.

As I understand it, that's a design issue, since a single resource shouldn't be managed by two different deployments.


Hi @tbugfinder. I can't say that it's a design issue. In total I have 3 plans which run separately:

1 - Peering
2 - Peering Accepter + Route (only after the peer is accepted)
3 - Peering Requester + Route Table (only after the peer is accepted, otherwise the route will become a "blackhole")

Hi @unity-unity,

I think the situation you’ve described is the one that the “NOTE on Route Tables and Routes” callout in the aws_route_table documentation is talking about. At the time I write this, it says the following:

Terraform currently provides both a standalone Route resource and a Route Table resource with routes defined in-line. At this time you cannot use a Route Table with in-line routes in conjunction with any Route resources. Doing so will cause a conflict of rule settings and will overwrite rules.

This callout doesn’t make a specific suggestion for what to do instead, but I believe what it’s implying is that because you have a separate module using aws_route to add routes, you should remove all of the route blocks from the main aws_route_table and declare those as separate resources too, thus allowing the provider to treat the individual routes as separate resources from the table they belong to.

I think that would look something like this, adapting what you shared in your question here:

resource "aws_route_table" "private" {
  vpc_id = var.network
  tags = merge(
    {
        "Name"  = var.name
    },
    lookup(var.tags[0], "resource", null),
    lookup(var.tags[0], "optional", null)
  )

  lifecycle { 
    create_before_destroy = true
  }
}

resource "aws_route" "private" {
  for_each = var.ipv4

  route_table_id            = aws_route_table.private.id
  vpc_peering_connection_id = try(each.value.peering, null)
  # (other arguments)
}

I believe the trick here is that there's a special case in the aws_route_table logic for when you don't declare any route blocks. In that case, the provider assumes that you don't want aws_route_table to manage the routes at all, because they're going to be managed by possibly many separate aws_route resources elsewhere.

Conversely, as soon as you include at least one route block inside aws_route_table the provider interprets that as you wanting the route table resource to specify the full set of routes, and thus any routes that aren’t written out are understood as needing to be deleted.
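To make that concrete: once the aws_route_table has no inline route blocks, the peering-side configuration can own its route as a standalone resource. A minimal sketch, assuming the route table ID and peering connection ID are passed in as variables (placeholder names):

resource "aws_route" "peering" {
  route_table_id            = var.route_table_id         # placeholder: ID of the table created in the main module
  destination_cidr_block    = var.peer_cidr_block        # placeholder: CIDR block of the peer VPC
  vpc_peering_connection_id = var.peering_connection_id  # placeholder: ID of the accepted peering connection
}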


Hello @apparentlymart! The problem for me is not the declaration of the route tables themselves; as long as all of the routes live under one single module, that is, as long as every route is in one terraform plan, there is no problem.

However as I mentioned above, I have 3 separate directories:

1 - Peering
2 - Peering Accepter + Route (only after the peer is accepted)
3 - Peering Requester + Route Table (only after the peer is accepted, otherwise the route will become a "blackhole")

So within the 1st module, where the peering itself is created, I don't use any routes related to the peering.
After the 1st module is applied, terragrunt goes on to the 2nd module and applies the accepter with its route (as they are in a different region), and lastly, once the peering connection is accepted and established, the 3rd module is applied with its route.

My exact problem is: whenever I come back to the 1st module and change something in it, every terraform plan wants to destroy the route, and only the route.

Here is a small timeline of how my peering is created:

Module 1: Peering -----> Module 2: Peering Accepter -----> Module 3: Peering Requester
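For reference, a possible sketch of how the later directories could find the route table created by the main module, given that each directory keeps its own state; the tag-based lookup and the variable names are assumptions, not something from the original modules:

# Hypothetical lookup in module 2 or 3: find the existing route table by its
# Name tag, then attach this module's own peering route to it as a standalone
# resource, so the main module's plan never sees or removes it.
data "aws_route_table" "private" {
  vpc_id = var.network

  tags = {
    Name = var.name
  }
}

resource "aws_route" "peering" {
  route_table_id            = data.aws_route_table.private.id
  destination_cidr_block    = var.peer_cidr_block
  vpc_peering_connection_id = var.peering_connection_id
}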