I’ve been trying to figure out this issue for months now with no success, and I can’t help but feel it’s something minor or dumb. Possibly something to do with upgrading Terraform to a later version? Our pipeline did this automatically and I didn’t realize it. I’ve since tried to roll back the Terraform version, but with no success.
I’m thinking it’s one of these issues:
- A version was bumped (either Terraform itself or one of the modules) and caused unwanted behavior? (See the pinning sketch after this list.)
- The state somehow got unhealthy. (I’ve manually removed all resources added since the last successful run using the terraform state rm command, then commented out the code so it doesn’t try to add them back in.)
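To rule out the first theory, this is roughly how I’ve been pinning Terraform while rolling back (the version number here is illustrative, not my exact one):

terraform {
  # Refuse to run under any Terraform outside this range,
  # so a silent pipeline upgrade fails loudly instead
  required_version = "~> 0.14.0"
}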
I can successfully run terraform init; however, running terraform plan or even terraform apply results in the following error:
│ Error: Unsupported attribute
│
│ on ../modules/ecs/output.tf line 2, in output "this_ecs_cluster_id":
│ 2: value = module.ecs.this_ecs_cluster_id
│ ├────────────────
│ │ module.ecs is a object, known only after apply
│
│ This object does not have an attribute named "this_ecs_cluster_id".
╵
(The same error is repeated for each location where the output is referenced.)
This is the output.tf file in my ecs module:
output "this_ecs_cluster_id" {
value = module.ecs.this_ecs_cluster_id
}
output "ec2_instance_iam_role_id" {
value = aws_iam_role.this.id
}
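For context, module.ecs inside that file is a nested call to what I believe is the public registry ECS module, roughly like this (the source and version constraint are from memory, so treat them as approximate):

# modules/ecs/main.tf (approximate)
module "ecs" {
  source  = "terraform-aws-modules/ecs/aws"
  version = "~> 2.0" # as far as I can tell, the this_-prefixed outputs only exist in older releases
  name    = var.cluster_name
}

If a loose constraint like this let the module float to a newer major release, its output names may have changed, which would fit the “does not have an attribute named this_ecs_cluster_id” error.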
Then in the main config, I’m setting the cluster ID with cluster_id = module.ecs.this_ecs_cluster_id in multiple locations.
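A typical call site looks something like this (the consuming module’s name and path are hypothetical, just to show the shape):

module "service" {
  source     = "../modules/service" # hypothetical consuming module
  cluster_id = module.ecs.this_ecs_cluster_id
}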
I’m wondering if I have a chicken-and-egg situation and need to declare a depends_on somewhere after the version change?
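If it is an ordering problem, this is the kind of depends_on I imagine adding; module-level depends_on requires Terraform 0.13 or newer (again, the service module here is hypothetical):

module "service" {
  source     = "../modules/service"
  cluster_id = module.ecs.this_ecs_cluster_id

  # Make Terraform create everything in module.ecs before this module (TF 0.13+)
  depends_on = [module.ecs]
}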