In our project, the root modules are split by component, and some root modules depend on others. The order in which CI runs terraform apply in the pipeline follows these dependencies.
For example, if root module A depends on root module B via terraform_remote_state, then terraform apply runs on root module B first, and then on root module A.
It is worth noting that if the outputs of root module B are modified, root module A, which references those outputs, may be affected as well.
However, when CI runs terraform plan, each root module is planned independently, so root module A is reported as unchanged.
We want to check all possible changes when we run the terraform plan.
Is there a good workaround for this?
That very much depends on your CI system and what dependency capabilities it has.
With some CI systems you can have jobB trigger jobA once it completes.
I think what you are asking for here is essentially a way to ask Terraform to plan one configuration against a not-yet-applied plan for another configuration, to understand how the changes to one will affect the other.
That isn’t something Terraform can support today, because a data block declares a direct dependency on an external object, and Terraform itself (as opposed to the provider fetching the data) doesn’t understand anything about what causes that data to exist, so it can’t know that it would need to look in a different place in this one situation.
One alternative strategy you could consider is to connect your configurations together using input variables instead of remote state, although this will require some more complex glue code around Terraform to make it work because Terraform itself is only aware of one configuration at a time.
The general idea here would be to write a custom automation wrapper that knows how to ask one configuration for its output values (e.g. using terraform output -json) and then derive another configuration’s input variables from them. This moves the “wiring together” out into your automation instead of keeping it within Terraform, but in return for that complexity you gain direct control over that data flow.
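As a sketch of that glue code (assuming Python, and the documented `terraform output -json` format where each output is an object with `value`, `type`, and `sensitive` keys), the wrapper might translate upstream output values into a tfvars-style JSON document for the downstream configuration. The output names here are hypothetical:

```python
import json

def outputs_to_tfvars(output_json: str) -> str:
    """Convert `terraform output -json` (name -> {value, type, sensitive})
    into a tfvars-style JSON document (name -> value)."""
    outputs = json.loads(output_json)
    return json.dumps({name: o["value"] for name, o in outputs.items()}, indent=2)

# Example: outputs captured from the upstream configuration (hypothetical names).
upstream = json.dumps({
    "vpc_id": {"sensitive": False, "type": "string", "value": "vpc-123"},
    "subnet_ids": {"sensitive": False, "type": ["list", "string"],
                   "value": ["subnet-a", "subnet-b"]},
})

# Write the result as e.g. upstream.auto.tfvars.json in the downstream
# configuration's directory before planning it.
print(outputs_to_tfvars(upstream))
```

This assumes the downstream configuration declares matching input variables (`vpc_id`, `subnet_ids`) in place of its terraform_remote_state data source.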
When you want to create this sort of “meta plan” across multiple configurations, the automation would treat things a bit differently: instead of using the current output values directly, you’d run terraform plan -out=tfplan followed by terraform show -json tfplan to get a JSON representation of the whole plan. Part of that large response is the set of planned output values, which you can then feed into the input variables of your downstream plan.
The main catch here is that during planning some output values might have unknown values or might have known values that refer to something that hasn’t been created yet. Therefore successful downstream planning might be impossible in some cases because the upstream changes need to be applied first before there’s enough information to build the downstream plan.
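The unknown-value catch can be detected mechanically. Assuming the documented plan representation from `terraform show -json tfplan`, where `output_changes` maps each output name to an object with `after` and `after_unknown` fields, a sketch of that check might look like this (the sample plan content is hypothetical):

```python
import json

def planned_outputs(plan_json: str) -> dict:
    """Extract planned output values from `terraform show -json tfplan`,
    raising if any output value is not yet known (upstream must be applied
    first before the downstream plan can be built)."""
    plan = json.loads(plan_json)
    values, unknown = {}, []
    for name, change in plan.get("output_changes", {}).items():
        if change.get("after_unknown"):
            # Value depends on objects that haven't been created yet.
            unknown.append(name)
        else:
            values[name] = change["after"]
    if unknown:
        raise ValueError(f"outputs not known until apply: {unknown}")
    return values

# Trimmed plan representation with one fully known output (hypothetical).
plan = json.dumps({"output_changes": {
    "vpc_id": {"actions": ["create"], "after": "vpc-123", "after_unknown": False},
}})
print(planned_outputs(plan))
```

Note that `after_unknown` may also be a nested structure marking a value as only partially unknown; a production wrapper would need to decide whether partially-known values are usable downstream or should also block the meta plan.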
Thank you for the detailed reply. I will explore some simple ways.