I'm trying to attach new EBS volumes in a specific order. In this example I create 3 disks and then attach them. The disks are created in the correct order and also get the correct device names (e.g. xvdf, xvdg, etc.), but when Terraform attaches them to the EC2 instance, xvdf can be the second one and xvdg the first one. It happens totally randomly.
No, not really, but there's probably a logical reason for it. How is local.lnx_ebs_device_names defined?
Generally speaking, I try to avoid lists and count in favour of maps and for_each. My main reason for doing it is that I can add/remove items without all the others being recreated, so it’s a lot safer for day-two operations.
Making a map of maps is overkill right now when there is only one parameter (size), but this way I can add more parameters later on, when I find out that I need them, without things breaking.
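To make the suggestion concrete, here is a minimal sketch of the map-of-maps plus for_each approach described above. The map keys, sizes, and the aws_instance.this reference are assumptions for illustration, not taken from the original post:

```hcl
# Hypothetical volume map; the device-name keys double as for_each keys,
# so adding or removing a volume does not shift the others.
locals {
  ebs_volumes = {
    "/dev/sdf" = { size = 50 }
    "/dev/sdg" = { size = 40 }
    "/dev/sdh" = { size = 30 }
  }
}

resource "aws_ebs_volume" "this" {
  for_each          = local.ebs_volumes
  availability_zone = aws_instance.this.availability_zone
  size              = each.value.size
}

resource "aws_volume_attachment" "this" {
  for_each    = local.ebs_volumes
  device_name = each.key
  volume_id   = aws_ebs_volume.this[each.key].id
  instance_id = aws_instance.this.id
}
```

Because each volume is addressed by its map key (e.g. aws_ebs_volume.this["/dev/sdf"]) rather than a list index, removing one entry only destroys that one volume.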
This is a very good comment and I thought about exactly the same thing. I redesigned the approach before creating the post but, unfortunately, it has the same behavior. I thought somebody would have hit the same issue (and people should, because attaching multiple EBS volumes to the same instance is not a rare thing). It looks like most people specify volumes directly in the aws_instance resource.
Maybe I misunderstood you. When you write “first”, and “second”, do you mean the result or the order in which the operations are performed as seen in the log output?
Terraform creates the disks, and that seems to be correct: the first disk is 50 GB, the second is 40 GB, and the third one is 30 GB.
Then it tries to attach the disks to the EC2 instance.
Expected result (in this case, the order is the order of the disks as displayed in the AWS EC2 console):
/dev/sdf is attached as the first disk, with 50 GB
/dev/sdg is attached as the second disk, with 40 GB
/dev/sdh is attached as the third disk, with 30 GB
What's happening:
/dev/sdh is attached as the first disk, with 30 GB
/dev/sdg is attached as the second disk, with 40 GB
/dev/sdf is attached as the third disk, with 50 GB
It might be that I misunderstood the concept and aws_volume_attachment performs the operations in random order, so the disk order is different every time.
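That is indeed what happens by default: independent aws_volume_attachment resources have no dependencies on each other, so Terraform applies them concurrently and the completion order is not guaranteed. If the operation order itself matters, one way to serialize the attachments is an explicit depends_on chain. This is only a hypothetical sketch; the resource names and the aws_ebs_volume/aws_instance references are assumptions:

```hcl
# First attachment runs on its own.
resource "aws_volume_attachment" "first" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.disk["/dev/sdf"].id  # assumed volume resource
  instance_id = aws_instance.this.id                # assumed instance resource
}

# Second attachment waits until the first one has completed.
resource "aws_volume_attachment" "second" {
  device_name = "/dev/sdg"
  volume_id   = aws_ebs_volume.disk["/dev/sdg"].id
  instance_id = aws_instance.this.id
  depends_on  = [aws_volume_attachment.first]
}
```

Note the trade-off: chaining attachments makes the apply sequential, and it only controls when Terraform issues the API calls; the device_name on each attachment, not the attachment order, is what determines how the disk appears to the instance.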
I will try out for_each once more and see if it fixes it.