Trigger random_id resource recreation on RDS instance destroy and recreate

Folks, I'm trying to find a way, with the Terraform random_id resource, to recreate and provide a new random value whenever the RDS instance is destroyed and recreated due to a change, say when the username on the instance changes.

I'm attaching this random value to the final_snapshot_identifier of the aws_db_instance resource so that the final snapshot has a unique identifier every time it is created upon the RDS instance being destroyed.

Current code:

resource "random_id" "snap_id" {
  byte_length = 8
}

locals {
  inst_id = "test-rds-inst"  
  inst_snap_id = "${local.inst_id}-snap-${format("%.4s", random_id.snap_id.dec)}"
}

resource "aws_db_instance" "rds" {
  .....
  identifier                = local.inst_id  
  final_snapshot_identifier = local.inst_snap_id
  skip_final_snapshot       = false
  username                  = "foo"
  apply_immediately         = true
  .....
}

output "snap_id" {
  value = aws_db_instance.rds.final_snapshot_identifier
}

Output after terraform apply:

snap_id = "test-rds-inst-snap-5553"

Use case I'm trying out:

#1:
Modify a value on the RDS instance to simulate a destroy & recreate:

  1. Modify username to "foo-tmp"
  2. terraform apply -auto-approve

Output:

snap_id = "test-rds-inst-snap-5553"

I was expecting the random_id to kick in and output a unique id, but it didn’t.

Observation:

  • RDS instance in deleting state
  • snapshot "test-rds-inst-snap-5553" in creating state
  • RDS instance recreated and in available state
  • snapshot "test-rds-inst-snap-5553" in available state

#2:
Modify a value on the RDS instance again to simulate a destroy & recreate:

  1. Modify username to "foo-new"
  2. terraform apply -auto-approve

I kind of expected the error below, because the snapshot id didn't get a new value in the prior attempt, but tried anyway…

Observation:

  • Error: error deleting DB Instance (test-rds-inst): DBSnapshotAlreadyExists: Cannot create the snapshot because a snapshot with the identifier test-rds-inst-snap-5553 already exists.

I'm aware of the keepers{} map on the random_id resource, but I'm not sure what from the aws_db_instance I need to put in that map so that the random_id resource gets recreated and provides a new unique value for the snap_id suffix.

I also suspect that referencing any attribute of the RDS instance in the random_id keepers might cause a circular dependency issue. I may be wrong, but I haven't tried it.

Any suggestions will be helpful. Thanks.

Hi @codecorrect,

You are correct that referencing anything in aws_db_instance from random_id is going to cause a cycle. The only way to avoid this is to trigger the replacement of both resources from the same set of inputs. This means you would put the same values that could trigger replacement of the aws_db_instance into the keepers map within random_id.

For example, if username and identifier are the attributes which could force replacement of the aws_db_instance, then the random_id config might look like:

resource "random_id" "snap_id" {
  byte_length = 8
  keepers = {
    identifier = local.inst_id
    username = local.rds_username
  }
}

And of course you would reference the local.rds_username value in aws_db_instance as well to ensure they always match.
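
For completeness, the instance side might look something like this (a minimal sketch; local.rds_username is simply a local you define so both resources read the username from the same place):

locals {
  rds_username = "foo"
}

resource "aws_db_instance" "rds" {
  .....
  identifier                = local.inst_id
  final_snapshot_identifier = local.inst_snap_id
  username                  = local.rds_username  # same value tracked in the keepers map
  .....
}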

Thanks @jbardin. The local.inst_id is static, and I can't predict what will change on the RDS instance. I understand putting username in the keepers map, but technically it could be anything that triggers a replacement of the instance, so I'm wondering how to resolve this chicken-and-egg problem.

The inst_id here was only intended as an example. Outside of a manual -replace, only changes to specific aws_db_instance argument values will cause replacement, and you can track all of those in some way within the random_id resource as well.

In order for the aws_db_instance to use an output from the random_id resource, the random_id must be evaluated before the aws_db_instance. This means it is not possible to directly trigger replacement of random_id based on the plan for the aws_db_instance, because by the time the aws_db_instance is planned, planning of the random_id has already completed. The only solution is to trigger both from the same upstream source, which in this case is the complete set of aws_db_instance inputs that can cause replacement.
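
Putting that together, one way to structure it is a single locals map that drives both resources (a sketch; the local name rds_replace_keys is hypothetical, and the map should list whichever replacement-forcing arguments apply to your instance):

locals {
  # Every aws_db_instance argument that can force replacement, collected
  # in one place so both resources are driven by the same values.
  rds_replace_keys = {
    identifier = "test-rds-inst"
    username   = "foo"
  }
}

resource "random_id" "snap_id" {
  byte_length = 8

  # Replaced whenever any of the values below change, which yields a new
  # suffix for the final snapshot identifier.
  keepers = local.rds_replace_keys
}

resource "aws_db_instance" "rds" {
  .....
  identifier                = local.rds_replace_keys.identifier
  username                  = local.rds_replace_keys.username
  final_snapshot_identifier = "${local.rds_replace_keys.identifier}-snap-${format("%.4s", random_id.snap_id.dec)}"
  skip_final_snapshot       = false
  apply_immediately         = true
  .....
}

With this arrangement, changing username in the locals map replaces random_id first, so the replacement snapshot gets a fresh suffix and the DBSnapshotAlreadyExists error from #2 should not recur.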