Dynamic test environments

Hi everyone

I’m curious to know if anyone has gone as far as creating dynamic test environments using Terraform for a non-trivial system?
i.e. one where the Terraform code is split over several repositories, and the deployments need to be done in a specific order

I’m not asking how to set this up; I’m just trying to gauge whether it’s worth the time investment to build the tooling, or if the complexity just makes this too difficult to maintain.

Has anyone ever made this work in the real world?

Hi @Jmen,

I tried this in a previous job, before I was working at HashiCorp. We built some scripts to set it up, and technically it worked. However, we eventually abandoned that approach for a few reasons:

  • Duplicating all of the necessary supporting infrastructure for each test environment was expensive (in terms of usage fees) and time-consuming (in terms of time waiting for all of the terraform apply and later terraform destroy actions to complete).
  • Because the overall deployment ran in an “unattended” fashion, doing many operations all at once, if anything went wrong it tended to be hard to figure out what exactly had happened. In more typical Terraform usage we’re only applying one thing at a time, so when something fails we have more context about what we were doing.
  • Sometimes the destroy step would fail, for whatever reason, and leave something running and chewing up dollars even though it was no longer needed. We therefore had to build some additional components to monitor for that and draw attention to it. (The sketch after this list illustrates the basic cycle.)
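
To make that concrete, here’s a minimal sketch of the sort of unattended cycle I mean, written in Go driving the terraform CLI. The component directories and their ordering are made up for illustration; our real scripts were more involved than this:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a terraform subcommand in the given working directory,
// streaming output so an unattended run leaves a trail when something fails.
func run(dir string, args ...string) error {
	cmd := exec.Command("terraform", args...)
	cmd.Dir = dir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	log.Printf("running terraform %v in %s", args, dir)
	return cmd.Run()
}

func main() {
	// Hypothetical component directories, applied in dependency order.
	components := []string{"network", "database", "app"}

	for _, dir := range components {
		if err := run(dir, "init", "-input=false"); err != nil {
			log.Fatalf("init failed in %s: %v", dir, err)
		}
		if err := run(dir, "apply", "-auto-approve", "-input=false"); err != nil {
			log.Fatalf("apply failed in %s: %v", dir, err)
		}
	}

	// ... run the test suite against the temporary environment here ...

	// Destroy in reverse order. If any destroy fails, something is left
	// running (and costing money), which is why we needed separate
	// monitoring for leftovers.
	for i := len(components) - 1; i >= 0; i-- {
		if err := run(components[i], "destroy", "-auto-approve", "-input=false"); err != nil {
			log.Printf("destroy failed in %s (needs manual cleanup): %v", components[i], err)
		}
	}
}
```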

After trying this for a few months, we ultimately abandoned it and resorted to just running an additional long-lived “copy” of the infrastructure. Although that additional copy was always running, in terms of total cost (including human costs) it worked out cheaper to just send the cloud provider a little more money.


I do also have an interesting counter-example, although the situation is a little different than what I think you are imagining…

The provider development teams at HashiCorp use automated “acceptance tests” to verify provider behavior against remote APIs. Along with running these tests manually during development, they also run them nightly so that they can get an early heads-up if, for example, a remote API changes in a way that affects the provider’s behavior even though the provider code itself hasn’t changed.

Taking the AWS provider as an example: there’s a pair of AWS accounts (needed because some AWS provider features, like VPC peering, work between two accounts), and each acceptance test includes a configuration snippet to apply plus some code that makes assertions against what was created. The test harness runs terraform apply on the configuration snippet, runs the assertion code, and then runs terraform destroy.
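
In code, a test in that harness looks roughly like the sketch below, using the terraform-plugin-sdk acceptance-test helpers. The example_thing resource, the stub provider, and the checkDestroyed helper are placeholders of mine, not taken from the real AWS provider suite. (resource.Test only runs when the TF_ACC environment variable is set, since these tests create real infrastructure.)

```go
package provider_test

import (
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
)

// Stub provider so the sketch is self-contained; a real test suite
// returns the actual provider under test here.
func exampleProvider() (*schema.Provider, error) {
	return &schema.Provider{}, nil
}

// checkDestroyed runs after terraform destroy; a real implementation
// queries the remote API to confirm nothing was left behind.
func checkDestroyed(s *terraform.State) error {
	return nil
}

func TestAccExampleThing_basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		ProviderFactories: map[string]func() (*schema.Provider, error){
			"example": exampleProvider,
		},
		Steps: []resource.TestStep{
			{
				// The self-contained configuration snippet the harness
				// will terraform apply.
				Config: `
resource "example_thing" "test" {
  name = "acc-test"
}
`,
				// Assertions checked against the state after apply.
				Check: resource.ComposeTestCheckFunc(
					resource.TestCheckResourceAttr(
						"example_thing.test", "name", "acc-test"),
				),
			},
		},
		// After the steps complete (or fail), the harness runs
		// terraform destroy and then calls CheckDestroy.
		CheckDestroy: checkDestroyed,
	})
}
```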

This means the harness ends up applying hundreds of distinct Terraform configurations and then destroying them shortly afterwards. However, it diverges from what you’re describing in that these test configurations are all self-contained, rather than interdependent.

The automation around those tests, including supporting functionality like detecting and destroying leftover objects when tests fail, is certainly not trivial and has grown gradually over many years. For a team whose primary goal is developing a Terraform provider, that investment is worth it, but I think it’d be questionable for most other situations.

Thanks for those examples, @apparentlymart.

They’re definitely valuable, and it’s very useful to hear the experience of someone who has actually done this.