I posted this question in the PR adding the prototype for the new test command, but was directed here.
One thing I’ve run into using terratest is testing modules that use count/for_each, where the test config generates the resources passed to those expressions in a way where the index/label cannot be determined until apply. My workaround has been to support a “prereq” config for each test that is applied first, and to read its outputs via the terraform_remote_state data source.
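Here’s a minimal sketch of the pattern (the local backend, paths, and module inputs are illustrative, not our actual code):

```hcl
# prereq/main.tf: applied on its own, before the test config
resource "random_string" "test_id" {
  length  = 8
  special = false
  upper   = false
}

output "test_id" {
  value = random_string.test_id.result
}

# tests/foo/main.tf: the actual test config, applied second
data "terraform_remote_state" "prereq" {
  backend = "local"

  config = {
    path = "../prereq/terraform.tfstate"
  }
}

module "under_test" {
  source = "../.."

  # Because this value comes from already-applied state, it is known at
  # plan time, so for_each keys derived from it inside the module work.
  name = data.terraform_remote_state.prereq.outputs.test_id
}
```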
Has any similar pattern/issue been discussed or explored yet, and how to address it within the context of a builtin test command?
Maybe some kind of “setup” and “teardown” features, common to many test frameworks? I would envision the “setup” running the init/apply on a different config, so the test config(s) could reference its outputs.
Or perhaps the ability to order and create dependencies between configs? I.e., run test config “A” first, and in test config “B” use terraform_remote_state to read outputs from test config A’s tfstate?
In the examples I’ve seen so far the inputs to the test have typically been static, hard-coded values rather than generated dynamically, so I think the direct answer to your question is that there is not currently a way to achieve what you’ve described here.
However, I’d like to understand a little more the underlying use-case here. If you’re able to share the test configurations you were already using, or at least to show an illustrative example of how they are structured, I’d love to incorporate that into a collection of examples I’m building so we can think about next steps with this research.
Usually we inject a random_string value as the test identifier into a name attribute. The module takes a list of objects and builds its for_each expression from the name attribute, something like the sketch below. (Though with terratest, I should probably use one of its methods to inject that into the test config; I’m not sure how I would do that internal to terraform.)
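A sketch of the shape of the problem (the module and its queues input are hypothetical):

```hcl
resource "random_string" "test_id" {
  length  = 8
  special = false
  upper   = false
}

locals {
  # Hypothetical list-of-objects input for the module under test
  queues = [
    { name = "alpha-${random_string.test_id.result}" },
    { name = "beta-${random_string.test_id.result}" },
  ]
}

module "under_test" {
  source = "../.."
  queues = local.queues
}

# Inside the module, something like this fails on the initial plan/apply
# with “Invalid for_each argument”, because the map keys depend on a
# value that isn’t known until random_string is created:
#
#   resource "aws_sqs_queue" "this" {
#     for_each = { for q in var.queues : q.name => q }
#     name     = each.key
#   }
```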
Or, sometimes I just want to isolate the test config so it only creates the module under test, and put everything else in a “prereq” config regardless of whether it is necessary. That reduces the complexity of the test config, and makes it easier to understand variations of inputs to the module.
Most every terraform module on our GitHub org ought to have tests that follow this setup. You’re welcome to browse them for more examples. We use the same terratest script everywhere: it invokes the “prereq” config if present, then the test config, looping through all tests/ directories that contain “.tf” files. The script is in the “tests” directory in each repo. (Plz don’t judge it, we aren’t Golang devs!)
Thanks for sharing those! I’ve added them to my notes for deeper study later.
From just an initial look, it seems like with the current prototype you could potentially follow that pattern with an additional manual step: apply the “prereq” config directly first, and then have terraform test run the “real” test suite against the prereq state. But I do of course see that it’s inconvenient to do that, particularly with each test having its own prereq and thus lots of separate configurations to manually apply.
Yes, we could use a wrapper around the workflow; that’s more or less what the terratest script does for us today. I was just thinking that if we were going to use a native terraform command/workflow, then we wouldn’t need the wrapper.
I suppose the “assert” functionality of the test provider would still be beneficial, at least. So I guess we would continue to use terratest to manage the init/apply/test/destroy workflow, but we could move any “assert” logic from the terratest script into the terraform config, along the lines of the sketch below.
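For example, with the experimental built-in test provider’s assertions (the module outputs here are hypothetical):

```hcl
terraform {
  required_providers {
    test = {
      source = "terraform.io/builtin/test"
    }
  }
}

module "main" {
  source = "../.."
  # ...inputs for the module under test...
}

resource "test_assertions" "outputs" {
  component = "outputs"

  equal "scheme" {
    description = "API URL scheme is https"
    got         = split(":", module.main.api_url)[0] # hypothetical output
    want        = "https"
  }

  check "tags_not_empty" {
    description = "module applies at least one tag"
    condition   = length(module.main.tags) > 0 # hypothetical output
  }
}
```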
Hmm yes, I suppose with the current API you could potentially run terraform test with the RunTerraformCommandE function! If I understand your workflow correctly, though, I think there might be a workflow mismatch there, because terraform test wants to run all of the tests at once rather than one at a time, and so you’d need to terraform apply all the prereq configurations first, rather than doing them one at a time.
The terraform.io/builtin/test provider is built to integrate directly with the terraform test command, so that provider can’t really help if you are just running terraform apply directly. If you wanted to try this out, though, I might suggest using my previous iteration of this research, the apparentlymart/testing provider, which should allow writing a similar sort of test to what the new prototype allows, but running it with a normal terraform apply that Terratest already knows how to orchestrate.
If you try that then I’d love to hear how it works out, since it would be a nice example of what sorts of tests we can write in the Terraform language even if not an example of using the prototype workflow.
The use case we’re investigating right now is testing a lambda package… We know how to package and deploy the lambda; that’s easy. But confirming that the package is actually good requires invoking it with a test event. So I figure that will use the aws_lambda_invocation data source, and then one of the test/testing provider resources to assert something about the response…
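Roughly what I have in mind (a sketch; the function output, test event, and response shape are all hypothetical):

```hcl
# Deploy the lambda package under test (module details elided)
module "lambda" {
  source = "../.."
  # ...packaging/deployment inputs...
}

# Invoke the function with a test event; `result` is the response as JSON
data "aws_lambda_invocation" "smoke_test" {
  function_name = module.lambda.function_name # hypothetical output

  input = jsonencode({
    action = "ping" # hypothetical test event
  })
}

# Assumes the test provider required_providers block shown earlier
resource "test_assertions" "lambda_response" {
  component = "lambda_response"

  equal "status" {
    description = "handler reports success for the test event"
    got         = jsondecode(data.aws_lambda_invocation.smoke_test.result).status
    want        = "ok"
  }
}
```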
“I think there might be a workflow mismatch there because terraform test wants to run all of the tests at once, rather than one at a time”
One problem with that workflow is that there are many resources that can only be created one time in an account, enabling Security Hub for example. A module that tests several variations of managing/configuring Security Hub would not be able to use terraform test unless that workflow can be adjusted somehow.
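For instance, Security Hub enablement boils down to a single per-account resource, so two test configs that both contain this would conflict if applied in the same account:

```hcl
# Only one of these can exist per account/region; a second test config
# applying the same resource in the same account would fail.
resource "aws_securityhub_account" "this" {}
```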
I realize that I was imprecise in what I was saying earlier, although I think your point still stands either way. To be clearer about what I originally meant: terraform test doesn’t run all of the tests concurrently, but rather it runs them all sequentially in a single user action.
That means that anything created within an individual test doesn’t need to worry about conflicting with other tests, at least in the current implementation. But it does mean that if you are using something outside of terraform test to set up all of the prereqs, then those would all need to be applied at once in order for terraform test to succeed, which I think is a blocker for the setup you’ve been describing here.
I think it’s also an interesting tradeoff whether terraform test can/should run tests concurrently. The sequential test runs right now are honestly more of an implementation simplification for the prototype than an intentional design decision, and with lots of test cases it might be nice for them to all run concurrently to save time. But that then creates the problem you’ve described, where the test cases must all be carefully written to not conflict with one another, which would be totally impossible for objects that can only exist once per account, as you say.
I don’t have a ready answer here but I will note down this feedback/observation so we can consider it while thinking about next steps for the experiment. Thanks!
One more thing I’m finding in regard to testing terraform modules… It seems very useful, after running terraform apply, to also run terraform plan -detailed-exitcode (or otherwise detect errors and persistent diffs; with -detailed-exitcode, exit status 0 means no changes, 1 means an error, and 2 means a non-empty diff). So far, this has helped me catch latent issues in for_each logic, and in tag outputs where the API and the Terraform AWS provider seem to report/represent null and {} differently.