Hello,
I have a Terraform configuration that brings up a simple AWS instance from an AMI, as shown below. The AMI includes an EBS snapshot:
resource "aws_instance" "instance_sql" {
count = var.num_sql
ami = data.aws_ami.ami_sql.id
instance_type = var.instancetype_sql
subnet_id = data.aws_subnet.subnet_sql[count.index % length(data.aws_subnet.subnet_sql)].id
key_name = var.key_name
iam_instance_profile = var.profile_sql
vpc_security_group_ids = var.sg_sql
# root disk
root_block_device {
volume_size = "200"
volume_type = "gp3"
encrypted = true
delete_on_termination = true
iops = 16000
throughput = 1000
}
ebs_block_device {
device_name = "xvdd"
volume_size = "200"
volume_type = "gp3"
encrypted = true
delete_on_termination = true
iops = 16000
throughput = 1000
}
lifecycle {
prevent_destroy = true
ignore_changes = [
ami
]
}
}
On the initial bring-up it works fine: the EBS volume is attached and usable. But when I then run a plain terraform apply, with no changes at all, the plan wants to recreate the ebs_block_device, which forces replacement of the instance.
Am I doing something wrong, or is this a bug? My guess is that snapshot_id and kms_key_id come from the AMI rather than from my config, so Terraform shows them as (known after apply) and plans a replacement. I'd like to keep managing the EBS volume via Terraform (e.g. extending it later), so dynamic blocks won't be useful here, will they?
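One workaround I can think of is also ignoring the whole ebs_block_device block, roughly as sketched below, but then Terraform would stop tracking the volume entirely, which defeats the point of managing it here:

  lifecycle {
    prevent_destroy = true
    ignore_changes = [
      ami,
      ebs_block_device,
    ]
  }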
Below is the relevant part of the plan output showing the forced replacement:
  - ebs_block_device { # forces replacement
      - delete_on_termination = true -> null
      - device_name           = "xvdd" -> null
      - encrypted             = true -> null
      - iops                  = 16000 -> null
      - kms_key_id            = "arn:aws:kms:ap-southeast-2:747892816585:key/cc9c6540-c784-4c1b-afa1-28ef40876897" -> null
      - snapshot_id           = "snap-0332b308b56cb921e" -> null
      - tags                  = {} -> null
      - throughput            = 1000 -> null
      - volume_id             = "vol-03d5613f1d96aab5c" -> null
      - volume_size           = 200 -> null
      - volume_type           = "gp3" -> null
    }
  + ebs_block_device { # forces replacement
      + delete_on_termination = true
      + device_name           = "xvdd"
      + encrypted             = true
      + iops                  = 16000
      + kms_key_id            = (known after apply)
      + snapshot_id           = (known after apply)
      + throughput            = 1000
      + volume_id             = (known after apply)
      + volume_size           = 200
      + volume_type           = "gp3"
    }
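For reference, the alternative I was considering is splitting the data disk out into its own resources, roughly like this (untested sketch; the resource names are placeholders, the ebs_block_device block would be removed from the instance, and the snapshot lookup is elided):

resource "aws_ebs_volume" "data_sql" {
  count             = var.num_sql
  availability_zone = data.aws_subnet.subnet_sql[count.index % length(data.aws_subnet.subnet_sql)].availability_zone
  size              = 200
  type              = "gp3"
  iops              = 16000
  throughput        = 1000
  encrypted         = true
  # snapshot_id would need to come from the AMI's block device mapping
}

resource "aws_volume_attachment" "data_sql" {
  count       = var.num_sql
  device_name = "/dev/xvdd"
  volume_id   = aws_ebs_volume.data_sql[count.index].id
  instance_id = aws_instance.instance_sql[count.index].id
}

That would at least let me grow the volume in place without replacing the instance, though I'm not sure how it interacts with the snapshot baked into the AMI.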