Context for testing

We really need some way to pass variables at run time. The testing framework will be used almost exclusively for CI, which means that multiple runs of the exact same tests could be running at the same time on different branches, etc.

So, at a very minimum, we need to be able to pass some sort of ‘test-run-id’ which could be used to anonymize the resource names that the test is creating. Defaults just won’t cut it.
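To illustrate what I have in mind (all names here are hypothetical), roughly:

```hcl
# tests/integration.tftest.hcl (hypothetical sketch)
variables {
  # would ideally be overridable per CI job, e.g. with the build number
  test_run_id = "local-dev"
}

run "create_bucket" {
  command = apply

  assert {
    condition     = output.bucket_name == "my-module-${var.test_run_id}"
    error_message = "bucket name should include the test run id"
  }
}
```

with CI supplying a different test_run_id for each run so that concurrent runs don’t collide.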

Thanks for this feedback, @wsmckenz!

I think what’s implied by your statements here is that you intend to have your CI system automatically run terraform test, which would mean giving the CI system some access to credentials for a remote system. Is that right?

I’ve typically heard before that folks don’t want to give their CI system access to any real credentials and instead want to either run the integration tests manually (so that they can use per-developer credentials) or want some way to mock away the real providers to test the module without needing any credentials at all. Since the implication of your feedback is something different, I’d just like to confirm that I’ve understood your use-case correctly.

Thanks!

Had a similar question (and agree we don’t want sensitive stuff in CI). Trying to figure out how to actually take advantage of integration testing with terraform test. We do have a private module registry (TFE) but don’t use the branch-based publishing mode (we upload from a monorepo via API).

What environment are we supposed to be doing integration testing with? I was hoping to take advantage of the automated testing env in TFE, but it seems to be very limited in when it can be used.

I’m sure there are a lot of approaches to this, but when I’ve done integration testing in the past (before the current framework existed, so usually using test-kitchen and kitchen-terraform), I’ve normally used a separate project (GCP) or account for those tests.

There are certain types of resources that are hard to integration test well, especially resources that are not deletable on the provider side, but I think you can either
a) use a random ID for most / all resources, with a different ID for every test (making sure to clean up any resources that might get left around from a failed run; see the sketch below), or
b) use the same resource name every time for tests and accept that you may sometimes have to do some cleanup.
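For option a), a minimal sketch using the hashicorp/random provider (the bucket and naming here are just illustrative):

```hcl
terraform {
  required_providers {
    random = {
      source = "hashicorp/random"
    }
  }
}

# A fresh suffix for every test run, so concurrent runs don't collide
resource "random_id" "test_suffix" {
  byte_length = 4
}

resource "google_storage_bucket" "example" {
  name     = "my-module-test-${random_id.test_suffix.hex}"
  location = "US"
}
```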

If you really need to have the tests on multiple branches, you could build in a value based on a git commit hash or a hash of the branch name, and use that as part of the identifier. And, if you have a CI provider that has concurrency control (like GH Actions), you could also use that. State locks may also help somewhat. You can further limit things by adding some filters (say, only test the module(s) that have changed) or manual approval so that the tests are less likely to step on each other; in particular, you probably don’t want terraform integration tests to run while another job is still running on the same branch.
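As a rough sketch of the commit-hash idea (the variable and resource names are just illustrative, and I believe terraform test accepts a -var flag in recent versions, but double-check the docs for your version):

```hcl
# variables.tf in the module under test (illustrative)
variable "name_suffix" {
  type        = string
  description = "Unique per-run suffix, e.g. a short commit SHA or a hash of the branch name"
  default     = "dev"
}

locals {
  # e.g. "my-module-test-abc1234" when CI passes -var="name_suffix=abc1234"
  test_name = "my-module-test-${var.name_suffix}"
}
```

CI would then compute the short SHA (or branch-name hash) and pass it in on the terraform test command line.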

When I’ve done integration tests in the past, I’ve typically done something like having a daily run of all integration tests and otherwise only running the tests on PRs if a) the branch name matches a certain pattern and b) some kind of manual approval has been given, but of course, your mileage may vary depending on your environment.

Even so, there’s always a risk that something doesn’t get cleaned up properly; you may want to build in some extra tooling to clean up resources that were left behind by a previous run.

I would also say that there are a lot of benefits to doing unit testing (just against the plan, or with a mock provider), especially when you’re testing modules that have a lot of complicated logic in them… really, that’s a lot of what can go wrong with Terraform code anyway. So you can do more tests, more frequently, using unit / plan testing, and then use some of the strategies above to make more limited use of integration tests for the areas where they really shine.
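To give a flavour of plan / mock testing with the built-in framework (the resource and variable names are made up, and mock_provider needs a reasonably recent Terraform version, 1.7+ if I remember right):

```hcl
# tests/unit.tftest.hcl (illustrative)

# Replace the real provider with a mock so no credentials are needed
mock_provider "google" {}

variables {
  name_suffix = "unit"
}

run "bucket_name_includes_suffix" {
  # plan only: exercises the module's naming logic without creating anything
  command = plan

  assert {
    condition     = google_storage_bucket.example.name == "my-module-test-unit"
    error_message = "bucket name should include the name_suffix variable"
  }
}
```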

One other note: integration tests will of course need some ongoing maintenance to reflect changes that happen at the cloud provider / terraform level. That’s a good thing, even though it creates some extra work; for example, they can catch when a default Kubernetes version baked into a module is no longer available. That kind of drift is what I’ve found integration tests most useful for.