I understand that Terraform needs to re-read data sources whenever they depend on a resource or a module with changes pending.
But is there a way to let Terraform know, at a granular key-value level, whether changes are pending when data sources are iterated with a `for_each` argument?
Here’s a rough description of my use case:
1. I need to read a few Microsoft Entra ID groups in my configuration to use them in a few permission assignments.
2. For this, I use a module that reads the groups with an `azuread_group` data source, iterated over a map of the form `groups = { group_name_1 = { foo = "bar" ... } ... }`, where each key is a group's name and the value is an object describing the permissions that should be assigned to it.
3. The module's output is essentially the `groups` map enriched with each group's read attributes (e.g. its Azure object ID) and is used downstream for the actual permission assignments.
4. To assign permissions in one downstream module, I need to read the groups again with a different provider (with `databricks_group`, FWIW), referencing a few of the enriched attributes. This is iterated with a `for_each` in an entirely analogous way to 2. above.
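Concretely, the two reads look roughly like this (a minimal sketch; the module label, output shape, and `display_name` lookups are illustrative assumptions, not my actual code):

```hcl
# Upstream module: read each Entra ID group, one instance per map key.
data "azuread_group" "this" {
  for_each     = var.groups
  display_name = each.key
}

# Output: the input map, enriched with the attributes read above.
output "groups" {
  value = {
    for name, perms in var.groups : name => merge(perms, {
      object_id = data.azuread_group.this[name].object_id
    })
  }
}

# Downstream module: read the same groups again through the
# Databricks provider, iterating over the enriched output.
data "databricks_group" "this" {
  for_each     = module.entra_groups.groups
  display_name = each.key
}
```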
My problem: whenever I add a key-value pair to my `groups` map, Terraform wants to re-read all iterated data sources in 4., not just the instances for the new keys I've added. This has the drawback that a few resources depending on attributes read from the groups need to be redeployed, even when they are completely independent of the new key-value pair.
In a sense, the `for_each` meta-argument couples all iterated instances, although they should be independent of each other.
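For example, extending the map by one entry (illustrative keys and values):

```hcl
groups = {
  group_name_1 = { foo = "bar" }
  group_name_2 = { foo = "baz" } # newly added key
}
```

After this change, the plan proposes to read every `databricks_group` instance again, not only the new `["group_name_2"]` instance.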
My questions:
- Is this perhaps due to the fact that the first data read at 2. is encapsulated in the module's output (basically the same input, enriched with the read group attributes) in a monolithic fashion, in the sense that Terraform doesn't know about the fine-grained iterated nature of this output when it's iterated over again in 4., and therefore interprets a change in the module output as a potentially full change, as opposed to just the addition of a key-value pair? If so, would it be possible at some point in the future for Terraform to propagate the map nature of the output and peek into which key-value pairs will be subject to modification and which won't, so that unchanged pairs don't trigger modifications downstream?
- Is this maybe an issue of the Databricks provider exclusively? I ask because only those data sources are re-read, whereas other references to the module's output from 2. using the `azurerm` provider never induce a full re-read of all iterated data sources.