RDS restored from snapshot is destroyed on next terraform apply

How can we restore a snapshot to an RDS instance that is managed by terraform? There is a way to restore the snapshot to a new RDS instance, but when we do a terraform apply later, this instance is destroyed.

This is how we do it now:
We’ve written a TF code to restore the RDS from a snapshot using the parameter snapshot_identifier.

  snapshot_identifier         = "${var.db_snapshot_identifier}"

The variable defaults to null, so by default this parameter is ignored and a fresh RDS instance is created. When we pass a snapshot identifier through this variable, the RDS instance is created from that snapshot. But when we run terraform apply again with the default null snapshot identifier, the RDS instance gets recreated and we lose all the data that was restored earlier. From terraform plan:

      - snapshot_identifier                   = "rds-database-backup" -> null # forces replacement
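
For reference, the variable feeding that argument is declared roughly like this (a minimal sketch; the exact declaration may differ):

variable "db_snapshot_identifier" {
  type    = string
  default = null # fresh instance by default; pass a snapshot ID to restore
}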

Is there a way to fix this? Like, can we ignore the snapshot_identifier parameter in the state file? Or is there any other way to achieve this?

@Peter14294 did you find a solution for this? We’re trying to achieve something similar and are bumping into the same issue/behavior.

Yes, this is quite a hot topic. I'm also searching for a solution right now.
I think you need to create the RDS instance from the snapshot manually and then import it into your resource. In my case it is a module, so I first need to verify how that would work.
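
As a rough sketch of that import approach (the resource address and restored instance identifier below are only examples):

# restore the snapshot to a new instance outside Terraform, then
# import it into the resource address that Terraform manages
terraform import aws_db_instance.mydb restored-db-instance-id

# when the resource lives inside a module, the address looks more like:
terraform import module.db.aws_db_instance.this restored-db-instance-id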

Simply do not set it back to null. The value can stay there forever; once you need to restore again, you just change it again.
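
In other words, once you have restored, keep the value set in your tfvars (using the identifier from the plan output above as an example):

db_snapshot_identifier = "rds-database-backup"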

Another option is to change the resource's lifecycle to ignore changes to snapshot_identifier, as suggested in the aws_db_snapshot documentation.

resource "aws_db_instance" "mydb" {
  ....
  snapshot_identifier         = "${var.db_snapshot_identifier}"
  lifecycle {
    ignore_changes = ["snapshot_identifier"]
  }
}

@voiski

Have you tried either of these options?
I tried both and neither worked… the database was destroyed on each apply.

Wish there was a solution.
Looking to avoid manual setups.

In our case at the company, we had the same issue. After creating the RDS resource from a snapshot and testing the plan, Terraform kept trying to destroy/recreate the resource because of a drift on the resource name, so we just added it to the lifecycle like this:

lifecycle {
  ignore_changes = [publicly_accessible, snapshot_identifier, name]
}

It took us a long time to figure out which field was forcing the replacement on apply, even with lifecycle ignore_changes set on snapshot_identifier.

In our case, when running terraform plan, the command ended like this:

  + timezone                              = (known after apply)
  ~ username                              = "HRAEGOHTwdkjdBb" -> "mYjhgjhjhlilydceCo" # forces replacement
    # (28 unchanged attributes hidden)
}

so we added username to the ignore_changes and it stopped trying to replace the instance. Now we're looking into why the username is changing in the first place, and how to change the user without replacing the RDS instance.
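
With that change, the lifecycle block ends up looking roughly like this:

lifecycle {
  ignore_changes = [publicly_accessible, snapshot_identifier, name, username]
}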

Hope this helps.