Hello Everyone,
I am trying to remove the `providers.tf` and `variables.tf` files from my Terraform directory. The reason is that I don’t want to expose these files to the user.
I just want the `main.tf` file to be placed in the directory, and then I will run `terraform init`, `terraform plan`, and `terraform apply` in an automated fashion through a GitLab CI pipeline.
My current directory structure looks like this:
```
├── elasticsearch
│   ├── elastic.tf
│   ├── providers.tf
│   └── variables.tf
└── modules
    └── elastic_search
        ├── main.tf
        └── variables.tf
```
Here, `elastic.tf` contains module blocks like the one below:

```hcl
module "index_1" {
  source     = "../modules/elastic_search"
  index_name = "index_1"
}
```
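For context, the child module declares a matching input under `modules/elastic_search`; simplified, its `variables.tf` looks along these lines:

```hcl
# modules/elastic_search/variables.tf (simplified illustration)
variable "index_name" {
  type        = string
  description = "Name of the Elasticsearch index to create."
}
```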
So, the root module (`elasticsearch`) will call child modules under the `modules` directory.
I tried searching for a relevant environment variable in Terraform, but couldn’t find anything.
I am quite new to Terraform, so I don’t have much of an idea here.
So, how can I remove `providers.tf` and `variables.tf` from the `elasticsearch` directory and have Terraform automatically detect these files when they are placed in some other directory?
I would prefer to keep these files inside my `modules` directory.
Can anyone explain with an example how I can achieve this?
Thank you in advance!!
Hi @lakshayarora476,
A Terraform module consists of all `.tf` files within a directory, but the filenames do not matter. Because the directory is the container for a module, everything you want in the module must be contained within that directory.
Whatever is in your `providers.tf` and `variables.tf` files only relates to the module they are located in, so if your root module requires the configuration in those files, then they must remain within the same directory.
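For example, the split across files is purely organizational: the contents of `providers.tf` and `variables.tf` could just as well live inside `elastic.tf` in the same directory, and Terraform would treat the configuration identically. A rough sketch, with placeholder provider and variable names rather than your real configuration:

```hcl
# elasticsearch/elastic.tf: one file holding everything; only the directory matters.
# The provider and variable shown here are placeholders, not your actual settings.
provider "elasticstack" {
  # ...settings formerly kept in providers.tf...
}

variable "index_name" {
  # ...declaration formerly kept in variables.tf...
  type = string
}

module "index_1" {
  source     = "../modules/elastic_search"
  index_name = var.index_name
}
```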
Hello @jbardin
Thanks for your prompt response.
Here, `providers.tf` and `variables.tf` contain the provider configuration and the variable declarations respectively.
I don’t want to expose these files to the end user. Only `main.tf` needs to be exposed, which contains only the module blocks.
Isn’t there any way in Terraform to exclude these two files and place them in a different directory, so that Terraform can reference that other directory during the init, plan, and apply phases?
In most cases providers should be configured within the root module and explicitly passed into child modules as required.
If you need external values passed in via input variables, they must reside in the root module; placing their definitions within a child module makes them inputs to that child module and no longer part of the root module namespace.
Moving a provider configuration to a child module does not actually hide anything from the user (though it would obfuscate the source slightly), and it works contrary to how Terraform was designed, because you would no longer be able to use that provider in any other module.
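For example, a conventional arrangement looks roughly like the sketch below; the `elasticstack` provider and the `index_name` variable are just placeholders, since your actual configuration isn’t shown here:

```hcl
# Root module (elasticsearch/); provider and variable names are assumptions.
terraform {
  required_providers {
    elasticstack = {
      source = "elastic/elasticstack"
    }
  }
}

# Provider configured once, in the root module.
provider "elasticstack" {
  # ...cluster connection settings...
}

# Root-level input variable.
variable "index_name" {
  type = string
}

module "index_1" {
  source     = "../modules/elastic_search"
  index_name = var.index_name

  # Explicit provider passing; optional for the default configuration
  # (which child modules inherit automatically) but required for aliases.
  providers = {
    elasticstack = elasticstack
  }
}
```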
Okay, so there is no solution to this problem?
I’m not quite sure I understand what the problem is. If this is purely cosmetic, to move the files into another directory but still retain the same functionality, then no, there is no solution, because that is not within the design of Terraform modules.
Okay, thanks for your help.
Hi @lakshayarora476,
From what you’ve described it seems that you are intending to create a new abstraction on top of Terraform, which has some additional requirements that are not part of Terraform’s own scope.
I can think of a couple different ways you could implement parts of this as wrapping software around Terraform, therefore using Terraform only as part of your solution:
1. Design and implement a wrapper around Terraform which takes as input a Terraform module that is written as if it is designed to be called from other modules, rather than as the root module. Your wrapper would then generate the root module itself, consisting of:

   - A `module` block calling the module that the user provided, with a fixed set of input variable values that the called module is required to accept.
   - One or more `provider` blocks declaring provider configurations that will be available for the child module to use.

   The module provided by your user would then be a typical Terraform module as one might publish in a module registry: it would still need to include `variable` blocks declaring each of the inputs that your automation provides, but it would not need to define any values for those variables or any `provider` blocks, since both would be provided by your generated root module. (See the sketch after this list for what such a generated root module might look like.)
2. Similar to the above, but instead of retaining the shared module abstraction that Terraform provides, you instead define your own variant of a Terraform module which, as you say, should not include any `variable` or `provider` blocks at all. Your wrapping automation would then generate a file containing all of the necessary `variable` and `provider` blocks so that the directory becomes a valid and complete Terraform module “just in time” before you run Terraform.

   You will need to generate the additional file in the module directory itself, because that is where Terraform expects to find the entire definition of the module. However, you can choose any arbitrary name for that file to ensure that it won’t overwrite any files provided by the user’s module.

   This is the closest to what you described in your initial question, but it does mean that the module provided by your user will be incomplete from Terraform’s perspective.
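As a rough sketch of option 1, the root module generated by your wrapper might look something like this (the provider name, module path, and input name are placeholders for illustration):

```hcl
# Root module generated by the wrapper; not written by the user.
provider "elasticstack" {
  # Connection settings injected by your automation.
}

module "user_module" {
  # Directory containing the module supplied by the user.
  source = "./user-module"

  # Fixed set of inputs that every user-supplied module is required to accept.
  index_name = "index_1"
}
```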
Of these two options I would recommend the first because it means that the module provided by your user will still be a valid and complete Terraform module and so the authors of that module will be able to use normal Terraform tools to develop and test it, such as using the Visual Studio Code extension to help with editing the module.
If you adopt the second option then the module written by your users will be incomplete and invalid from Terraform’s perspective, and so e.g. if they use the Visual Studio Code extension it will report any `var.example` reference as an error, because there is no corresponding `variable "example"` block for it to refer to in the configuration that the editor can see.
The first option is a compromise which means that the variable values and the provider configurations get factored out, but the variable declarations still remain in the module as provided by your user and so the module is valid as a standalone shared module, with the `variable` declarations effectively acting as placeholders for the values that your system will provide dynamically at runtime.
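Concretely, in option 1 the user’s module would still carry a declaration like the one below (reusing the placeholder names from the earlier sketch), even though the value itself comes from your generated root module:

```hcl
# Inside the user-supplied module, e.g. user-module/variables.tf (placeholder path).
# Declares the input but provides no value and no provider configuration;
# both are supplied by the generated root module at plan/apply time.
variable "index_name" {
  type = string
}
```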
You might find it interesting that HashiCorp Consul’s network infrastructure automation system “Consul-Terraform-Sync” is effectively doing what I described as option one above.
The Consul team documents the API that a user module is expected to provide, and then Consul-Terraform-Sync internally generates its own root module that calls the user’s provided module and runs `terraform apply`.
In that system then the Terraform module is effectively acting as a mapping from a standardized Consul data structure to an arbitrary set of infrastructure. The details may be slightly different than yours, but I think the principle is similar enough that studying how that system works might be a good way to think through the details of my option 1.
Thanks for your help.
Since I was using GitLab to automate the whole process, I was able to work around my problem.
I kept the providers and variables files in a separate repository, and I clone that repository into the Terraform directory (root module) during the GitLab pipeline run, so the Terraform directory ends up with all the required files.