We build our servers without "ebs_block_device" blocks so that increasing the size of an EBS volume does not force a rebuild of the EC2 instance. Instead we use "aws_ebs_volume" and "aws_volume_attachment" resources, which are independent of the EC2 machine.
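For context, this is the pattern described above as a minimal sketch (AMI, instance type, size, and device name are placeholders; the security group, subnet, and KMS key data lookups are abridged):

```hcl
# Instance defined without any ebs_block_device blocks,
# so the data volume is not part of the instance resource.
resource "aws_instance" "AWETONYCL99D" {
  ami           = var.ami_id    # placeholder
  instance_type = "t3.large"    # placeholder
  subnet_id     = data.aws_subnet.subnet.id
}

# Independent EBS volume; its size can change without
# touching the aws_instance resource.
resource "aws_ebs_volume" "d_drive" {
  availability_zone = aws_instance.AWETONYCL99D.availability_zone
  size              = 100       # placeholder
}

# The attachment links the two without embedding the
# volume inside the instance definition.
resource "aws_volume_attachment" "d_drive_attachment" {
  device_name = "xvdb"          # placeholder
  volume_id   = aws_ebs_volume.d_drive.id
  instance_id = aws_instance.AWETONYCL99D.id
}
```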
When an EC2 instance fails over to another Availability Zone, I need to update the Terraform state file to reflect the change.
Therefore I tried the following:
- List the states:
terraform state list
data.aws_kms_key.kmskey
data.aws_security_groups.sgids
data.aws_subnet.subnet
aws_ebs_volume.d_drive
aws_instance.AWETONYCL99D
aws_volume_attachment.d_drive_attachment
Currently, if I run terraform show, there is no "ebs_block_device" in the aws_instance.AWETONYCL99D state.
- Remove the aws_instance:
terraform state rm aws_instance.AWETONYCL99D
- Import the new instance:
terraform import -var-file="xxxxxx" aws_instance.AWETONYCL99D i-xxxxxxxxxxxxxxx
And here is the problem: the command imports not only the EC2 instance but also aws_ebs_volume.d_drive as an "ebs_block_device", which is exactly what we do not want, because changing that volume would then force a rebuild of the EC2 instance.
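For clarity, this is roughly what the unwanted result looks like after the import (abridged output; the device name and volume ID are placeholders):

```
$ terraform state show aws_instance.AWETONYCL99D
# aws_instance.AWETONYCL99D:
resource "aws_instance" "AWETONYCL99D" {
    ...
    ebs_block_device {
        device_name = "xvdb"                  # placeholder
        volume_id   = "vol-xxxxxxxxxxxxxxx"   # placeholder
        ...
    }
}
```

Meanwhile aws_ebs_volume.d_drive still exists as its own entry in the state, so the same volume is effectively tracked in two places.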
Questions:
a) It seems that the "data.xxx" entries in state cannot be updated directly. Will they be refreshed automatically during an "apply -refresh-only" run after the new instance is imported?
b) Can we prevent "aws_ebs_volume" from being imported as an "ebs_block_device" during the instance import?
c) How do we update the "aws_volume_attachment" state?
If you need an example, please ask.