Hi all,
I’m looking for best practices or patterns for testing shared Terraform code in a multi-repo setup, where our `CDE` (Central Development Repository) holds reusable static modules and each `customer-name-repository` provides the dynamic configurations and variable sets.
Our constraints:
- The `CDE` repository contains modules reused across many customer environments.
- Each `customer-name-repository` contains its own `.tfvars` files and logic tailored to the customer.
- We are not allowed to re-apply infrastructure just to test changes in shared code.
- Most customer contracts treat both `test` and `production` environments as production-grade and stable.
- We currently do not have a dedicated `development` environment for applying and destroying infrastructure, due to painful customer contract constraints, application licences, politics, and associated costs, although we recognize that having one would be highly valuable.
- Therefore, we can rely only on linting, validation, and `terraform plan`, but not `terraform apply` or `terraform destroy`.
Our current idea:
- Run static checks (`terraform fmt`) and possibly `tflint` in the `CDE` repository.
- When changes are made to `CDE`, trigger CI pipelines in each customer repository (via GitLab multi-project pipelines).
- In each customer repository, run `terraform init`, `tflint`, `terraform validate`, and `terraform plan` (using the real customer-specific `.tfvars`).
- Never run `apply` automatically; it must be done within a managed downtime window agreed with the customer.
- Modules are version-pinned in customer repos using `CDE` release tags (see the sketches after this list).
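
For context, this is roughly what the pinning looks like in a customer repository. A minimal sketch, assuming the `CDE` repo is consumed as a Git source; the host, group/repo path, module subdirectory, and tag are made up:

```hcl
# main.tf in a customer repository (illustrative paths and tag)
module "network" {
  # Pin to a CDE release tag via Terraform's generic Git source syntax
  source = "git::https://gitlab.example.com/my-group/cde.git//modules/network?ref=v1.4.2"

  # Customer-specific inputs come from this repo's own .tfvars
  vpc_cidr = var.vpc_cidr
}
```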
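And to make the orchestration part concrete, here is the GitLab wiring I have in mind. This is a sketch only: the project path, image tag, stage names, and the `customer.tfvars` file name are assumptions. In the `CDE` repository, a release would fan out to downstream customer pipelines:

```yaml
# .gitlab-ci.yml in the CDE repository (sketch)
stages:
  - downstream

trigger-customer-a:
  stage: downstream
  trigger:
    project: my-group/customer-a-repository  # hypothetical project path
    branch: main
  rules:
    - if: $CI_COMMIT_TAG  # fan out only when a CDE release tag is pushed
```

Each customer repository would then run the plan-only checks:

```yaml
# .gitlab-ci.yml in a customer repository (sketch)
stages:
  - validate

plan-only:
  stage: validate
  image:
    name: hashicorp/terraform:1.7
    entrypoint: [""]  # the image's default entrypoint is the terraform binary
  script:
    - terraform init -input=false  # real backend, so plan sees live state
    - terraform validate
    # tflint needs its own install step or image; omitted here for brevity
    - terraform plan -input=false -lock=false -var-file=customer.tfvars
  rules:
    - if: $CI_PIPELINE_SOURCE == "pipeline"  # triggered from the CDE repo
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```

In practice this would mean one `trigger-*` job per customer (or a generated parent-child pipeline), which is exactly the part I’d like feedback on.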
Questions:
- Is this a sound and reliable testing strategy given that `apply` is not allowed?
- Are there recommended tools or patterns to orchestrate testing across customer repositories?
- Has anyone successfully used `terraform test` in a similar cross-repo CI/CD setup (e.g., GitLab multi-project pipelines)? A plan-only sketch of what I have in mind follows after this list.
- Given our constraints, what would be the most effective Terraform testing model?
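
For the `terraform test` question, what I imagine is a plan-only test living next to each module in the `CDE` repo. A minimal sketch, where the module path, variable, resource, and assertion are invented for illustration:

```hcl
# modules/network/tests/plan_only.tftest.hcl (hypothetical module and inputs)
variables {
  vpc_cidr = "10.0.0.0/16"
}

run "plan_succeeds_and_cidr_propagates" {
  # command = plan keeps this inside our no-apply constraint:
  # Terraform builds a plan but never touches real infrastructure
  command = plan

  assert {
    condition     = aws_vpc.main.cidr_block == var.vpc_cidr
    error_message = "VPC CIDR does not match the input variable"
  }
}
```

My understanding is that plan-mode runs still need provider credentials unless providers are mocked (`mock_provider`, Terraform 1.7+), which is part of what I’m unsure about.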
Any insights, experiences, or ideas would be greatly appreciated!
Thanks.
K.