How to manage multiple stacks/apps with the same configuration

I want to deploy many stacks with the same configuration. What is the best way to achieve this with Terraform?

I’m using AWS.

Thanks

Modules are a very useful feature for reusing chunks of code
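
For example, a shared module can be defined once and then called from each deployment with whatever inputs differ (the module path and input name below are just an illustration):

module "customer_stack" {
  source        = "./modules/customer-stack"
  customer_name = "customer-a"
}

Each deployment then only repeats the small module call, not the underlying resources.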

Would you have a configuration file per deployment then?

There are various ways you can set things up, depending on how you will be doing deployments (e.g. manually or via a CI/CD system).

You could have a separate repo for the module plus one repo for each environment; you could have a single repo with a directory containing a root module for each environment (each referencing the shared stack module); or you could have a single repo that deploys everything together (so a single state file covering everything). For each option you could have everything defined fully within the Terraform code, or you could pass variables to the root module(s) via one or more tfvars files.
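
For example, with the per-environment root module option each environment directory can stay thin and push the differences into a tfvars file (the paths, module name and variables here are only illustrative):

# environments/prod/main.tf
variable "environment" {
  type = string
}

variable "instance_count" {
  type = number
}

module "stack" {
  source         = "../../modules/stack"
  environment    = var.environment
  instance_count = var.instance_count
}

# environments/prod/prod.tfvars
environment    = "prod"
instance_count = 3

You would then run something like terraform plan -var-file=prod.tfvars from within that environment directory.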

There are advantages and disadvantages of each approach, so it would be good to understand what you are trying to achieve overall.

Thanks for the reply, Stuart.

so it would be good to understand what you are trying to achieve overall

I’m trying to move our current manual deployment setup into Terraform/infrastructure as code. We have multiple deployments/stacks for multiple customers (a deployment/stack/app per customer), so it’d be good to reuse configuration and keep things DRY.

What I have so far is a setup like:

.
├── README.md
├── deployments
│   ├── customer-a
│   │   └── main.tf
│   ├── customer-b
│   │   └── main.tf
│   └── ...
└── modules
    ├── aws-server-layer
    │   ├── README.md
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    ├── aws-db-layer
    │   ├── README.md
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    └── ...

customer-a and customer-b use the same infrastructure; they both use the same modules from the modules folder.
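
Each customer directory is basically just module calls with customer-specific values, roughly like this (simplified, with placeholder inputs rather than our real variable names):

# deployments/customer-a/main.tf (simplified)
module "server_layer" {
  source        = "../../modules/aws-server-layer"
  customer_name = "customer-a"
  instance_type = "t3.medium"
}

module "db_layer" {
  source        = "../../modules/aws-db-layer"
  customer_name = "customer-a"
  # placeholder: assumes the server layer exposes subnet IDs as an output
  subnet_ids    = module.server_layer.private_subnet_ids
}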

So far seems okay. Do you have any advice?

Thanks

I’m assuming you’ll be manually running Terraform from within the customer-a or customer-b directories as required.

Would there ever be situations where customer-a and customer-b need different versions of the modules from each other (e.g. we update things in customer-a today but have to leave the update for customer-b for a month, during which time other changes for customer-b might be needed)?

If so, moving the modules into separate repos would allow you to version them (e.g. via git tags) and therefore reference different versions for each customer. That adds a bit of extra complexity, so it is only worth doing if you actually need that additional flexibility.
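
If you did go down that path, each customer's root module would just pin a different tag via the module source (the repo URL and tags here are made up for the sake of example):

# deployments/customer-a/main.tf: still pinned to the previous release
module "server_layer" {
  source = "git::https://github.com/your-org/terraform-aws-server-layer.git?ref=v1.2.0"
  # ...
}

# deployments/customer-b/main.tf: already on the newer release
module "server_layer" {
  source = "git::https://github.com/your-org/terraform-aws-server-layer.git?ref=v1.3.0"
  # ...
}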

Otherwise what you have makes sense. There is no “right” way, just different opinions. For example, due to the way we use our CI/CD tool and split code between different GitHub organisations, we actually put customer-a and customer-b in totally separate repos, as well as putting each module in its own repo. But that would be overkill in other situations.

Terragrunt

… runs and hides

Could you use workspaces? We did that quite successfully on a project I worked on. They let you use the same configuration while each workspace keeps its own isolated state in the same backend. If required, you can also use the terraform.workspace value to drive differences in behaviour (e.g. terraform.workspace == "x"). We did that sparingly, mostly for implementing slightly different behaviour in our non-production environments, e.g. setting the recovery window to 0 on AWS secrets and so on.
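
As a rough sketch of the kind of thing we did (the resource and workspace names here are just examples):

locals {
  # Treat anything other than the production workspace as non-production.
  is_production = terraform.workspace == "production"
}

resource "aws_secretsmanager_secret" "app" {
  name = "app-${terraform.workspace}"

  # Allow immediate deletion in non-production workspaces; keep the default
  # 30-day recovery window in production.
  recovery_window_in_days = local.is_production ? 30 : 0
}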

Remote modules are also a great idea for version control of stacks and cohesive sets of related resources.