I am upgrading from 0.12 to 0.13. I ran the suggested batch command:
$ find . -name '*.tf' | xargs -n1 dirname | uniq | xargs -n1 terraform 0.13upgrade -yes
which created a versions.tf in each of my modules.
The migration doc says:
Each module must declare its own set of provider requirements, so if you have a configuration which calls other modules then you’ll need to run this upgrade command for each module separately.
However, this looks quite redundant (it violates DRY): the same provider ends up declared in every module, even though I want to use the same provider for all modules.
Here’s the simplified structure of the repo I am managing:
infrastructure
├── main.tf
├── modules
│   ├── alerting
│   │   ├── main.tf
│   │   └── versions.tf
│   ├── gcs-buckets
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── versions.tf
│   └── kubernetes-prod
│       ├── main.tf
│       └── versions.tf
├── outputs.tf
├── variables.tf
└── versions.tf
Almost all versions.tf files are identical; at the moment I have three variations using three different providers. Is there any way to specify the required providers at the level of
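For reference, a typical generated versions.tf looks something like this (the provider name and version constraint here are just illustrative, not my exact setup):

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 3.0"
    }
  }
  required_version = ">= 0.13"
}
```

This same block is repeated nearly verbatim in every module directory.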
infrastructure (the root) or infrastructure/modules, so that all modules share it? Or, failing that, at least share the provider versions, so I don't have to repeat the version in each module?