Hi,
I am hoping to get some suggestions on how I can "cleanly" resolve a cycle error that occurred during `terraform plan`.
The Terraform setup I currently have looks similar to the following:
```hcl
locals {
  service-a-flyway = {
    service-a-flyway = {
      ecr_repository_arn = module.service_a_flyway_migration_ecr_repo.ecr_repository_arn
      image_url          = module.service_a_flyway_migration_ecr_repo.repository_url
      // other configurations left out
    }
  }

  service-b-flyway = {
    service-b-flyway = {
      ecr_repository_arn = module.service_b_flyway_migration_ecr_repo.ecr_repository_arn
      image_url          = module.service_b_flyway_migration_ecr_repo.repository_url
      // other configurations left out
    }
  }

  standalone_tasks_config = merge(local.service-a-flyway, local.service-b-flyway)
}
```
module "service_a_flyway_migration_ecr_repo" {
source = "../../modules/ecr/repository"
repository_name = "ecr-service-a-flyway-migration"
alias_name = "kms-service-a-flyway-ecr"
}
module "service_b_flyway_migration_ecr_repo" {
source = "../../modules/ecr/repository"
repository_name = "ecr-service-b-flyway-migration"
alias_name = "kms-service-b-flyway-ecr"
}
module "ecs" {
source = "../../modules/ecs"
standalone_tasks_config = local.standalone_tasks_config
// other configurations left out
}
In the module `../../modules/ecs`, the variable `standalone_tasks_config` is used like this:
// in "../../modules/ecs"
module "ecs_task" {
source = "./task"
for_each = var.standalone_tasks_config
// other configurations left out
}
The module `./task` contains the resource `aws_ecs_task_definition`:
resource "aws_ecs_task_definition" "this" {
// other configurations left out. I don't think the attributes matters
}
// other resources intentionally left out
The module `service_b_flyway_migration_ecr_repo` was deployed to production by mistake and is just an empty AWS ECR repository that serves no purpose. There is no longer any need for it, so I am trying to destroy it by removing the code that provisions the module and updating the variable `standalone_tasks_config` to just:
```hcl
standalone_tasks_config = local.service-a-flyway
```
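Concretely, after the removal the locals would look something like this (a sketch, assuming nothing else references `service-b-flyway`):

```hcl
locals {
  service-a-flyway = {
    service-a-flyway = {
      ecr_repository_arn = module.service_a_flyway_migration_ecr_repo.ecr_repository_arn
      image_url          = module.service_a_flyway_migration_ecr_repo.repository_url
      // other configurations left out
    }
  }

  // the service-b-flyway map and its merge() entry are removed
  standalone_tasks_config = local.service-a-flyway
}
```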
During the plan stage, the resource `aws_ecs_task_definition` undergoes a replacement, and this produced a cycle error. (Even though there are no changes to `container_definitions`, the plan output highlights `container_definitions` as the attribute forcing the replacement; I have, however, isolated the actual trigger to a change in the provider's `default_tags`.)
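By `default_tags` I mean the provider-level tag defaults, roughly like the following (a sketch; the region and tag values are made-up placeholders, not my real configuration):

```hcl
provider "aws" {
  region = "us-east-1" // placeholder

  default_tags {
    tags = {
      // Placeholder tag: a change to any default tag propagates to
      // tags_all on every resource managed by this provider.
      Environment = "production"
    }
  }
}
```

The cycle error reported by the plan was: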
```
Error: Cycle:
module.ecs.module.ecs_task.local.service_task_definition (expand)
module.ecs.module.ecs_task.aws_ecs_task_definition.this (expand)
module.ecs.module.ecs_task.local.otel_collector_task_definition (expand)
module.ecs.module.ecs_task["service-a-flyway"].aws_iam_role.ecs_task_role
module.ecs.module.ecs_task["service-a-flyway"].aws_ecs_task_definition.this
module.ecs.module.ecs_task["service-a-flyway"].aws_iam_role.ecs_tasks_execution_role
module.ecs.module.ecs_task.module.kms_key.output.key_arn (expand)
module.ecs.module.ecs_task.aws_cloudwatch_log_group.task (expand)
module.ecs.module.ecs_task["service-a-flyway"].aws_cloudwatch_log_group.task
module.ecs.module.ecs_task["service-a-flyway"].aws_ecs_task_definition.this (destroy deposed ad6ace7b)
module.service_b_flyway_migration_ecr_repo.aws_ecr_repository.this (destroy)
module.ecs.module.ecs_task["service-a-flyway"].module.kms_key.aws_kms_key.this
```
When I look at the Terraform state file, the task definition in the module instance `service-a-flyway` has dependencies on the module `service_b_flyway_migration_ecr_repo`:
```
{
  "module": "module.ecs.module.ecs_task[\"service-a-flyway\"]",
  "mode": "managed",
  "type": "aws_ecs_task_definition",
  "name": "this",
  "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
  "instances": [
    {
      // other stuff
      "dependencies": [
        // other dependencies
        "module.service_b_flyway_migration_ecr_repo.aws_ecr_repository.this",
        "module.service_b_flyway_migration_ecr_repo.data.aws_ecr_repository.this",
        "module.service_b_flyway_migration_ecr_repo.module.kms_key.aws_kms_key.this",
        "module.service_b_flyway_migration_ecr_repo.module.kms_key.data.aws_caller_identity.current"
      ]
    }
  ]
}
```
I would think this dependency is caused by the fact that a `for_each` loop over the variable `standalone_tasks_config` is used to provision the module `ecs_task`: each instance's `for_each` value is the whole merged map, which references both ECR modules, so both end up recorded as dependencies of every instance.
If I understand the cycle correctly: since the task definition undergoes a replacement and depends on the module `service_b_flyway_migration_ecr_repo`, the replacement needs to complete first. Once it completes, Terraform attempts to destroy the module `service_b_flyway_migration_ecr_repo`; however, the newly created task definition still carries the dependency on that module, so the module cannot be destroyed, and the ordering requirements form a cycle.
Here are the solutions I have come across:

- update the state file manually by removing the dependency from the `service-a-flyway` module
- using `create_before_destroy` (did not work)

And an alternative that I think might work (see the sketch after this list):

- set `ignore_changes` on the `container_definitions` attribute temporarily to prevent the replacement, allow the module `service_b_flyway_migration_ecr_repo` to be removed completely first, then remove `ignore_changes` and trigger another `terraform apply`
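That temporary `ignore_changes` idea would look roughly like this inside the `./task` module (a sketch of my own idea, not a confirmed fix):

```hcl
resource "aws_ecs_task_definition" "this" {
  // other configurations left out

  lifecycle {
    // Temporary: suppress the diff that forces the replacement so the
    // unused ECR module can be destroyed on the first apply. Remove
    // this block afterwards and apply again.
    ignore_changes = [container_definitions]

    // create_before_destroy = true // the variant that did not work
  }
}
```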
I am hoping to avoid updating the state file directly and to achieve this in a single `terraform apply`. How can I break this cycle "cleanly"?