Way to work with directory structure

Hello everyone.

I’ve been using Terraform for around 6 months now, and it’s been great.
I have a directory structure in which a shared directory is referenced from each environment using terraform_remote_state.
For example:

├── provider.tf
├── shared
│   ├── main.tf
│   ├── outputs.tf
│   └── provider.tf -> ../provider.tf
├── prod
│   ├── a
│   │   ├── data.tf
│   │   ├── main.tf
│   │   ├── provider.tf -> ../../provider.tf
│   │   └── variables.tf
│   └── b
│       ├── data.tf
│       ├── main.tf
│       ├── provider.tf -> ../../provider.tf
│       └── variables.tf
└── dev
    ├── a
    │   ├── data.tf
    │   ├── main.tf
    │   ├── provider.tf -> ../../provider.tf
    │   └── variables.tf
    └── b
        ├── data.tf
        ├── main.tf
        ├── provider.tf -> ../../provider.tf
        └── variables.tf
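
Inside dev/a/data.tf, the reference to shared looks roughly like this (a trimmed-down sketch; the backend, organization, and output names are placeholders):

data "terraform_remote_state" "shared" {
  backend = "remote"              # placeholder: whichever backend shared uses

  config = {
    organization = "example-org"  # placeholder
    workspaces = {
      name = "shared"
    }
  }
}

Outputs from shared/outputs.tf are then consumed as, e.g., data.terraform_remote_state.shared.outputs.vpc_id.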

How do I set this up so that when shared/ is updated, everything that depends on it is also updated?

Thanks.

@bentinata,

If you are running Terraform locally on a Unix-based machine, it’s possible to symbolically link each directory’s provider.tf to the main one:
ln -s file_path link_path
# e.g. from inside prod/a: ln -s ../../provider.tf provider.tf

I would also suggest looking into Terraform workspaces, since you have two logical environments with similar resources.
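
With CLI workspaces, a single configuration can vary names per environment using the terraform.workspace value (a minimal sketch; the resource and naming scheme are just illustrations):

resource "aws_s3_bucket" "logs" {
  # terraform.workspace evaluates to the currently selected workspace,
  # e.g. "dev" or "prod", so one configuration can serve both environments
  bucket = "example-logs-${terraform.workspace}"
}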

This provider.tf is actually symlinked to the root one, since all it does is define the provider with non-sensitive data.

My question would be:
How do I tell dev/a to update when shared is updated? Inside dev/a/main.tf there’s a terraform_remote_state data reference to shared.

Hi @bentinata!

Terraform does not currently have any built-in way to cascade runs from one configuration to another. Teams that have architectures like this will tend to run Terraform in automation and then use the automation system to orchestrate the cascading runs.

For example, if you are running Terraform in Jenkins (not a recommendation, just an example) you can configure it such that if one job succeeds it will automatically start a build for one or more downstream jobs.

Some organizations prefer a more ad-hoc model, where updates to the shared configuration are designed to be backward-compatible and downstream configurations gradually adopt those updates as part of plans made for other reasons. Whether this is appropriate for your case will depend on what exactly you’re managing with the shared configuration and thus whether it’s possible to apply a change without tight coordination with consumers of those objects.

I’m using Terraform Cloud/Terraform Enterprise, with a single TFE workspace for each environment subdirectory. I’m currently experimenting with automatic run triggering based on manually defined paths: shared is only triggered when the shared/ directory changes, but dev/a runs when either dev/a/ or shared/ changes.
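
If it helps, the workspace setup I’m experimenting with looks roughly like this when expressed with the tfe provider (a sketch; the organization, repo, and token values are placeholders):

resource "tfe_workspace" "dev_a" {
  name              = "dev-a"
  organization      = "example-org"       # placeholder
  working_directory = "dev/a"

  # trigger a run when either dev/a/ or shared/ changes
  trigger_prefixes = ["dev/a", "shared"]

  vcs_repo {
    identifier     = "example-org/infra"  # placeholder repository
    oauth_token_id = "ot-XXXXXXXX"        # placeholder OAuth token ID
  }
}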

Do you have any other suggestion? Thanks.

Terraform Cloud does not currently have a built-in mechanism for one run to trigger another. To achieve that, I think you’d need to interact with the Terraform Cloud API and implement automation in terms of that rather than in terms of the terraform CLI program specifically.

The Terraform Cloud API team prepared some docs on API-driven runs that might be helpful. Unfortunately I don’t have direct experience with that myself, so I can’t offer any more specific ideas than that, but perhaps someone else with more Terraform Cloud API experience can chime in with some more thoughts.
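
From those docs, I believe kicking off a downstream run boils down to a request along these lines (I haven’t run this myself; the token and workspace ID are placeholders):

curl \
  --header "Authorization: Bearer $TFC_TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data '{
    "data": {
      "type": "runs",
      "attributes": { "message": "Cascading run: shared was applied" },
      "relationships": {
        "workspace": { "data": { "type": "workspaces", "id": "ws-XXXXXXXX" } }
      }
    }
  }' \
  https://app.terraform.io/api/v2/runs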