I have the following issue at hand:
My team and I are creating an AWS EKS cluster using Terraform, and we need to be able to start new pods (services) inside the cluster without replacing existing ones.
The idea is to have a separate git repository holding a single .tfvars file; each time someone clones it and pushes a new branch with an edited version, Terraform should create and start a new Kubernetes pod with the updated properties and create an associated SQS queue.
The thing is, whether I pass the variables directly to a resource or through a module, I end up with a block that has different properties but the same resource name, since the name can’t be changed dynamically, and so the original resource is updated in place rather than a new one being created.
resource "kubernetes_deployment" "service" {…
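To make the collision concrete, here is a minimal sketch of roughly what I mean (variable and attribute names are just illustrative, not our actual config):

```hcl
# Illustrative only: because the resource address is always
# kubernetes_deployment.service, running a plan with new variable
# values modifies the one existing deployment instead of adding
# a second one alongside it.
resource "kubernetes_deployment" "service" {
  metadata {
    name = var.service_name # new value -> replacement, not a new resource
  }

  spec {
    replicas = var.replicas

    selector {
      match_labels = { app = var.service_name }
    }

    template {
      metadata {
        labels = { app = var.service_name }
      }
      spec {
        container {
          name  = var.service_name
          image = var.image
        }
      }
    }
  }
}
```

No matter what values the .tfvars file supplies, the state only ever tracks the single address `kubernetes_deployment.service`.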
One possible solution I thought of is using Terraform Cloud and separating the main config repo and the .tfvars repo into different workspaces, where pushing a new branch to either one would trigger a run.
But separate workspaces would mean separate state files, correct? So maybe using a shared state between workspaces could solve it.
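For reference, this is roughly how I imagine one workspace reading another workspace’s state via the `terraform_remote_state` data source (the organization and workspace names below are placeholders, and this assumes the other workspace exposes a `cluster_endpoint` output):

```hcl
# Sketch only: read outputs from the main EKS config's workspace
# from inside the .tfvars-driven workspace.
data "terraform_remote_state" "cluster" {
  backend = "remote"

  config = {
    organization = "my-org" # placeholder

    workspaces = {
      name = "eks-main-config" # placeholder workspace name
    }
  }
}

locals {
  # Assumed output name; only explicitly declared outputs are readable.
  cluster_endpoint = data.terraform_remote_state.cluster.outputs.cluster_endpoint
}
```

As far as I understand, this only shares declared outputs, not the full state, so I’m unsure it solves the underlying naming problem.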
I’m quite new to Terraform and have been scratching my head over this problem for a few days now, so any suggestions would be greatly appreciated.