Terraform plan doesn't obey count values

I’m not sure whether this is by design or not, but we have all seen the error when using the count property when it depends on a value that cannot be determined until the apply phase. This implies that during the plan phase Terraform has to know the count to be able to determine how many resources etc. to create, and I fully understand this.
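For context, a minimal sketch of the kind of configuration that triggers that error (the resource names here are illustrative, not from the actual setup):

```hcl
# A value that is only known after apply...
resource "random_id" "suffix" {
  byte_length = 4
}

# ...cannot drive count, because plan must know the instance count up front:
resource "aws_s3_bucket" "example" {
  count  = length(random_id.suffix.hex) > 0 ? 1 : 0
  bucket = "demo-${random_id.suffix.hex}"
}

# Plan fails with "Invalid count argument": the count value depends on
# resource attributes that cannot be determined until apply.
```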

However, whilst running a plan with debug turned on, I noticed that during the plan phase it seems to ignore count = 0, at least on module calls. The reason I can see it ignores this is that the output goes on to traverse inside a module and list all its resources etc., even though the count of the module is set to 0.

Like I say, I’m not sure if this is by design or a bug, but it does mean the plan phase is spending a lot of time and effort scanning/processing items that don’t need to be. Can anyone confirm whether this is correct or not?

You seem to have oddly strong feelings about this one particular degenerate case not being specifically optimised for…

Do you have some specific timings you can share to back up that it really is “spending a lot of time and effort” rather than just doing a quick basic parse of all of the input files?

I’ve never looked into this detail specifically myself, but given count/for_each are processed after the dependency graph between blocks has been constructed, I’m not really surprised to hear of it.

It’s not about having strong feelings, it’s about trying to understand what is going on. I accept and understand when it throws the error about values not being known until the apply phase, and try to work with this. However, from looking at the debug output, it plainly ignores the value of a count when it’s 0 and still traverses the tree.

The reason it can cause problems is that I have an AWS multi-region module. This takes as input all the providers for the different regions, along with a variable called opt_in for the opted-in regions. The module then calls a sub-module for every single region, passing the appropriate provider. Each of these sub-module calls has a count that checks whether the region is in the opt_in value: if it is, count is set to 1, otherwise 0.

Now, running a plan with the opt_in variable set to 2 regions, Terraform will still process/scan all 24 sub-module calls and graph all their resources etc., when there is in fact no need to.
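A sketch of the pattern being described (module, variable, and provider alias names are illustrative, not the actual code):

```hcl
variable "opt_in" {
  description = "Regions that should actually be deployed"
  type        = set(string)
}

module "eu_west_1" {
  source    = "./region"
  count     = contains(var.opt_in, "eu-west-1") ? 1 : 0
  providers = { aws = aws.eu_west_1 }
}

module "ap_southeast_3" {
  source    = "./region"
  count     = contains(var.opt_in, "ap-southeast-3") ? 1 : 0
  providers = { aws = aws.ap_southeast_3 }
}

# ...one module block per region, ~24 in total. Even with only two
# regions present in var.opt_in, plan still walks every module block.
```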

Thanks, that does help clarify your aim.

Unfortunately for your use-case, the Terraform graph is a graph between resource blocks, not resource instances. Multi-instance (for_each/count) resources only get expanded as the graph is being walked in a later phase of processing.

Would it be possible to elide subgraphs based on count = 0? I don’t know the intricacies of the code well enough to speculate on whether that’s a plausible change in the current architecture.

Could you do a concrete timing comparison between having all your extra regions disabled via count, vs. creating an ad-hoc version of your multi-region module with the unnecessary region modules fully deleted? It would be interesting to quantify the actual wallclock impact.

As @maxb noted, the dependency graph is built before Terraform evaluates count or for_each arguments, because those arguments can have their own dependencies and so it would not be possible to evaluate them before building the dependency graph.

When Terraform visits the first graph node representing a module block, it will evaluate count and remember that the result was zero, but modules themselves don’t really exist except as a namespacing construct and so by declaring zero instances of a module you are really declaring zero instances of everything inside the module. Terraform will visit each object in the order described by the dependency graph, and calculate that there are zero instances of that object declared and so skip taking any other action.

It might help to imagine that count = 0 on a module is essentially just a shorthand for writing count = 0 on all of the objects declared inside, though of course that analogy isn’t perfect because not all nested objects support count directly themselves and the resulting tracking addresses end up a little different (with the instance keys tracked on the module rather than on the leaf object).
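As an illustrative example of that shorthand (names are hypothetical), these two shapes behave similarly:

```hcl
# Zero instances of a module...
module "network" {
  source = "./network"
  count  = 0
}

# ...is roughly equivalent to writing count = 0 on everything
# declared inside that module:
resource "aws_vpc" "main" {
  count      = 0
  cidr_block = "10.0.0.0/16"
}
```

The difference is in the tracking addresses: in the first case an instance would live at `module.network[0].aws_vpc.main`, with the instance key on the module, while in the second it would be `aws_vpc.main[0]`, with the key on the resource itself.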

This is not a bug, and is instead an implementation detail of how dynamic module instances work in Terraform. This implementation detail can also be seen in the trace log in other ways. For example, if you declare two levels of nested module that both have count = 2 and that module contains a resource block then you have effectively declared 2*2=4 instances of that resource, and if you carefully watch the trace log you’ll see that Terraform still only visits that resource block once and at that point calculates all four of those instances at once, with similar effect to having set count = 4 directly on the resource (aside from the addressing and namespacing differences).
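The nested-module expansion described above can be sketched like this (module and resource names are illustrative):

```hcl
# Root module: the outer module is declared twice...
module "outer" {
  source = "./outer"
  count  = 2
}

# ./outer: the inner module is also declared twice...
module "inner" {
  source = "./inner"
  count  = 2
}

# ./inner: a single resource block.
resource "null_resource" "example" {}

# Result: 2*2 = 4 instances, addressed as
#   module.outer[0].module.inner[0].null_resource.example
#   module.outer[0].module.inner[1].null_resource.example
#   module.outer[1].module.inner[0].null_resource.example
#   module.outer[1].module.inner[1].null_resource.example
# yet the resource block itself appears in the graph only once.
```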

In Terraform, during the plan phase, the count attribute is evaluated to determine the number of resources to create. This means that Terraform needs to know the count value in advance to plan and allocate the necessary resources accurately. However, when using debug mode, it may appear that Terraform is traversing inside modules and processing resources even when the count is set to 0. This behavior could be misleading and might make it seem like unnecessary processing is taking place.

Regards