Question about AWS ec2_instance conflict with k8s aws-ebs-csi


Hi all, I have a general question about a conflict between the AWS EBS CSI driver for Kubernetes and Terraform's EC2 instance resources.

The scenario: I created an AWS EC2 instance through Terraform and run it as a Kubernetes worker node. Later I added the aws-ebs-csi integration and dynamically created an EBS PV/PVC for a pod scheduled on that node. The tricky part is that the dynamically attached EBS volume now shows up as an additional block device on the EC2 instance, which the Terraform template does not know about. As a result, every time I run Terraform to update the node's attributes, the plan wants to force a replacement because of that extra EBS attachment, which is not what I want. Are there any best practices for handling this? Thanks.
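One common approach (a sketch, not a definitive fix) is to tell Terraform to ignore block-device changes made outside its control, using a `lifecycle` block with `ignore_changes` on the instance resource. The resource name, AMI, and instance type below are placeholders; adapt them to your template:

```hcl
# Hypothetical example: an EC2 instance used as a Kubernetes worker,
# where EBS volumes attached dynamically by the aws-ebs-csi driver
# should not trigger a diff or forced replacement in Terraform.
resource "aws_instance" "k8s_worker" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.large"              # placeholder

  lifecycle {
    # Ignore EBS attachments made outside Terraform (e.g. by the
    # CSI driver) so `terraform plan` stops proposing a replacement.
    ignore_changes = [ebs_block_device]
  }
}
```

The trade-off is that Terraform will also ignore EBS block-device changes you make intentionally in the template for that resource, so this is best scoped to nodes whose extra volumes are genuinely managed by Kubernetes.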


I’m having a similar issue with AWS: because I’ve created a volume and attached it to a job, the `aws_instance` block is freaking out and not letting me go forward.