Terraform tests with multiple 'command = apply' blocks

Hi there,

I’m using the newly shipped testing framework to write some tests for a module that I wrote. The module wraps some functionality around Azure Virtual Networks, Gateways and Peering Links. Depending on the parameters passed to the module, it can provision one of the following:

  • A standalone network
  • A network with a virtual network gateway (for my purposes, this is termed a ‘Backplane’ network)
  • A network that is peered to another, existing network

I would like to test all of these scenarios, but I’m running into some strange behaviour when trying to test the third one. For this to be tested end-to-end, I need to first provision a network using the module, and then provision another network that will be peered to the first. Since they’re both using the same module, I wrote two run blocks with different variables specified:

run "backplane_network" {
  command = apply

  variables {
    network_type  = "Backplane"
    networks      = var.backplane_networks
    subnet_layout = var.backplane_subnet_layout
  }
}

run "peered_network" {
  command = apply

  variables {
    network_type       = "Peered"
    networks           = var.peered_networks
    subnet_layout      = var.peered_subnet_layout
    backplane_vnet_env = "backplane"
  }
}

However, on executing the second run block, the peered network is not able to find the backplane network to create its peering links with, even though the resource address is completely correct for the network. Does the test framework start tearing down the resources of a run as soon as it has completed (i.e. before all run blocks have completed)?

> Does the test framework start tearing down the resources of a run as soon as it has completed (i.e. before all run blocks have completed)?

OK, so the answer to this appears to be “yes”.

I think the only way around this issue would be for me to use a setup module which creates the first network, so that the state is separate and it doesn’t get torn down until the other tests complete.
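For reference, a setup run like that could look roughly like this - a minimal sketch, assuming a hypothetical helper module at ./tests/setup exposing a hypothetical backplane_vnet_id output; the run.&lt;name&gt;.&lt;output&gt; reference lets a later run block consume it:

```hcl
run "setup_backplane" {
  command = apply

  # Point this run at a dedicated helper module instead of the module under
  # test, so its resources live in their own state.
  module {
    source = "./tests/setup"
  }
}

run "peered_network" {
  command = apply

  variables {
    network_type = "Peered"
    # Reference an output exposed by the setup run (hypothetical name).
    backplane_vnet_id = run.setup_backplane.backplane_vnet_id
  }
}
```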

My next question is whether we could have a flag that indicates that a given run block requires the resources created by a previous one. It looks like this would require changes to the way that the framework handles the in-memory state, though, since the resource addresses for each run block are not unique.

As an improvement, it would be good to get quicker feedback when a destroy takes place. In the test described above, there is no indication in the console output that any resources are being torn down until the very end of the test run, even though resource destruction actually occurs much earlier.

Hi @DevOpsFu, apologies for the delay in my response.

> Does the test framework start tearing down the resources of a run as soon as it has completed (i.e. before all run blocks have completed)?

The answer to this should be no, destroy operations should only happen after all the run blocks have executed. If you are finding something different here then it is a bug which I can investigate.

However, since you are executing the same module twice, the second run block will overwrite the state from the first one - does that explain the behaviour you are seeing? It could be that it can’t find the backplane network because it is being explicitly deleted in favour of the new network being created.

You can also run terraform test with the -verbose flag, which prints the state after each run block so you can inspect exactly what was created. You could also try executing the second run block with command = plan; with the -verbose flag you’d then see the exact changes being proposed.
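To illustrate, the second scenario from your test could be re-run as a plan-only step - a sketch reusing the variable names from your post:

```hcl
run "peered_network_plan" {
  # command = plan proposes changes without applying them; running
  # terraform test -verbose then prints the proposed plan for this block.
  command = plan

  variables {
    network_type       = "Peered"
    networks           = var.peered_networks
    subnet_layout      = var.peered_subnet_layout
    backplane_vnet_env = "backplane"
  }
}
```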

Again, sorry for the delay!

Hi @liamcervante , thanks for the reply!

When I was testing with this setup, I suspected that it would not work because tests share a single in-memory state, and the resource addresses would be the same for both instances of the module.

However, when I was testing, I did observe that the infrastructure was being torn down between the test blocks. The errors that I was receiving were not related to state, but related to the Azure API not being able to find the network that I was trying to peer with (because it had already gone). The backplane network also includes a VPN Gateway which takes a long time to provision and tear down, so I was able to observe this behaviour as it happened - before the second run block was executed I could see the backplane resources being deleted.

I have since got around this issue by using a setup module to provision the backplane network, and a verification module to run tests against those resources using data sources.
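In case it’s useful to others, the pattern looks roughly like this - a sketch in which the module paths, the output name and the assertion are illustrative rather than copied from my actual code:

```hcl
run "setup_backplane" {
  command = apply

  # A dedicated helper module keeps this state separate from the module
  # under test, so it is only destroyed once the whole test file finishes.
  module {
    source = "./tests/setup"
  }
}

run "verify_peering" {
  command = apply

  # The verification module reads the peered resources back via data sources
  # and exposes what it finds as outputs.
  module {
    source = "./tests/verify"
  }

  assert {
    condition     = output.peering_state == "Connected"
    error_message = "Peering link is not in the Connected state."
  }
}
```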

Hope this helps, and please let me know if you would like more info on my tests.


Thanks for the extra context - I’m glad you managed to work around the issue.

Are you in a position to share your configuration and test files, so I can see whether I can replicate the issue?

@liamcervante I’ll try to get a copy of the module code into a public repo for you.