When the cluster is destroyed, the log group resource is deleted, but lingering log streams cause the log group to be recreated.
If a new cluster is then planned/applied with the same name, the apply fails because the log group already exists.
The “fix” was to apply a Deny policy to the cluster role so it can’t recreate the log group, but that doesn’t appear to be working as expected.
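For context, the deny is along these lines (an illustrative sketch only, not my exact config; `aws_iam_role.cluster` and `var.cluster_name` are placeholder names):

```hcl
# Illustrative only: deny the cluster role from recreating the log group.
# (As noted at the end of this post, the recreation is actually done by the
# service-linked role AWSServiceRoleForAmazonEKS, so this never seems to fire.)
resource "aws_iam_role_policy" "deny_log_group_create" {
  name = "deny-eks-log-group-recreate"
  role = aws_iam_role.cluster.id # placeholder: the EKS cluster role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Deny"
      Action   = "logs:CreateLogGroup"
      Resource = "arn:aws:logs:*:*:log-group:/aws/eks/${var.cluster_name}/cluster"
    }]
  })
}
```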
I need a way to ensure that when the cluster is deleted, the log group is actually deleted and stays deleted. Maybe the cluster shouldn’t depend on the log group, and instead the log group should depend on the cluster? (One ordering idea is sketched below.)
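Here is the ordering trick I’ve been sketching, untested. It assumes the cluster keeps its existing dependency on a Terraform-managed `aws_cloudwatch_log_group`, that the AWS CLI and a Unix shell are available wherever Terraform runs, and hypothetical `var.cluster_name` / `var.region` inputs. The idea: hang a destroy-time sweep off a `null_resource` that the log group depends on, so on destroy it runs last (cluster, then log group, then the sweep) and can re-delete whatever the service-linked role recreates. The sleep is a guess; streams that show up even later would still bring the group back.

```hcl
resource "null_resource" "log_group_sweeper" {
  triggers = {
    log_group_name = "/aws/eks/${var.cluster_name}/cluster"
    region         = var.region
  }

  # Destroy-time provisioners can only reference self, hence the triggers.
  # Because the log group below depends on this resource, Terraform destroys
  # this one last: cluster -> log group -> sweep. The sleep gives stray
  # streams time to recreate the group before the CLI deletes it again;
  # '|| true' keeps the destroy from failing if the group is already gone.
  provisioner "local-exec" {
    when    = destroy
    command = "sleep 180; aws logs delete-log-group --log-group-name '${self.triggers.log_group_name}' --region '${self.triggers.region}' || true"
  }
}

resource "aws_cloudwatch_log_group" "eks" {
  name              = "/aws/eks/${var.cluster_name}/cluster"
  retention_in_days = 7
  depends_on        = [null_resource.log_group_sweeper]
}
```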
I’m open to workaround suggestions, but this is definitely frustrating.
Alternatively, if there were a way, on cluster creation (the first plan/apply), to delete the log group if one is found, that would keep the apply from failing and circumvent this problem (rough sketch below). But a true destroy of the stack (one not followed by a recreate) could/would still leave rogue log groups around, because the streams recreate the log group.
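A rough sketch of that alternative, again untested and with the same assumptions (AWS CLI plus a Unix shell, placeholder `var.cluster_name`):

```hcl
# Best-effort delete of any leftover log group right before Terraform
# creates its own, so the first apply doesn't die on
# ResourceAlreadyExistsException.
resource "null_resource" "log_group_preclean" {
  triggers = {
    log_group_name = "/aws/eks/${var.cluster_name}/cluster"
  }

  provisioner "local-exec" {
    command = "aws logs delete-log-group --log-group-name '${self.triggers.log_group_name}' || true"
  }
}

# The aws_cloudwatch_log_group resource would then need
#   depends_on = [null_resource.log_group_preclean]
# so the preclean runs before Terraform tries to create it.
```

Importing the existing log group with `terraform import` before the apply would also unblock it, but that’s a manual step every time.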
The issue is that AWSServiceRoleForAmazonEKS re-creates the log group after it has been deleted, so the next time TF runs, the log group already exists even though Terraform thought it had deleted it: