I am currently using AWS ECS for my service deployment. For the shared volumes, I am binding some EFS volumes.
Here is my task definition:
resource "aws_ecs_task_definition" "ecs-fargate" {
  family                   = var.ecs_task_definition_name
  container_definitions    = var.container_definitions
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = var.ecs_task_cpu
  memory                   = var.ecs_task_memory
  execution_role_arn       = var.ecs_task_execution_role_arn
  task_role_arn            = var.ecs_task_role_arn

  dynamic "volume" {
    for_each = var.volumes
    content {
      name = volume.value["name"]
      efs_volume_configuration {
        file_system_id = volume.value["file_system_id"]
      }
    }
  }
}
variable "volumes" {
  default = [
    {
      name           = "vol1"
      file_system_id = "fs-xxxxxxxx"
    },
    {
      name           = "vol2"
      file_system_id = "fs-xxxxxxxx"
    }
  ]
}
</variable>
The Terraform code above otherwise works fine. But on every terraform apply, the plan shows the task definition detaching the EFS volumes and then re-attaching the very same ones. Here is the relevant portion of the plan output:
- volume {
- name = "vol1" -> null
- efs_volume_configuration {
- file_system_id = "fs-xxxxxxx" -> null
- root_directory = "/" -> null
}
}
+ volume {
+ name = "vol1"
+ efs_volume_configuration {
+ file_system_id = "fs-xxxxxx"
+ root_directory = "/"
}
}
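
From the plan output it looks like root_directory = "/" is being added back each time, which I suspect is a provider-side default that my configuration never sets. One variant I'm considering (an untested sketch, assuming the implicit default is what triggers the diff) is declaring root_directory explicitly in the dynamic block so the configuration matches what ends up in state:

```hcl
dynamic "volume" {
  for_each = var.volumes
  content {
    name = volume.value["name"]
    efs_volume_configuration {
      file_system_id = volume.value["file_system_id"]
      root_directory = "/" # make the default shown in the plan output explicit
    }
  }
}
```

I'm not sure whether this addresses the root cause or just masks one symptom of the diff.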
Am I missing some additional Terraform configuration that would prevent this detach/re-attach churn on every apply?