Hi everyone, my name is Simone and this is my first post. I started using Terraform a few months ago, so I’m quite a newbie. Please forgive me if I ask stupid or common questions; I’ll try my best not to waste space and your time.
After some training with Terraform, I was able to deploy my infrastructure on AWS into a VPC for test environments.
My Terraform project is made of several .tf files (one for each AWS component I need, e.g. ECS, CloudWatch, CloudFront, etc.) and a variables.tf file where I store all the necessary variables.
So, let’s say that my test infrastructure now is fully managed in AWS with Terraform.
Now, if I would like to provision another instance of this infrastructure, for example in the production VPC, or even in the test one, what do I have to do? Do I need to copy all the .tf files into another folder on my PC, then manually change all the values in the variables.tf file?
Is this the proper way of working?
Any answer, resource or even “read the manual” hint will be appreciated!
Thank you very much
There are a few ways to do this that might qualify as best practice:
- Use Terraform Workspaces. Workspaces allow you to make multiple instances of a root module.
- Create a meta module that encapsulates your infrastructure, then deploy it multiple times in a root module.
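To give a rough idea of the first option: the same root module is deployed once per workspace, and `terraform.workspace` lets the configuration vary per instance. This is only a sketch; the resource, variable, and workspace names are placeholders, not something from your project:

```hcl
# Create and switch between instances of the same root module:
#   terraform workspace new test
#   terraform workspace new prod
#   terraform workspace select test
#   terraform apply

# Inside the configuration, terraform.workspace reports which
# workspace is active, so names and sizes can differ per instance:
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro"

  tags = {
    Name = "app-${terraform.workspace}"
  }
}
```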
I personally gravitate toward the second option because the first can become unclear and confusing at scale.
Note: in order to understand my answer, you’ll have to learn about Terraform Modules.
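To make the second option concrete, here is a sketch (the module path, variable names, and VPC variables are made up for illustration): the .tf files you already have would move into a child module, parameterised by variables, and the root module would instantiate it once per environment:

```hcl
# modules/my-stack/ holds the .tf files you already wrote
# (ECS, CloudWatch, CloudFront, ...), driven by input variables.

# Root module: deploy the same stack twice with different settings.
module "test" {
  source      = "./modules/my-stack"
  environment = "test"
  vpc_id      = var.test_vpc_id
}

module "prod" {
  source      = "./modules/my-stack"
  environment = "prod"
  vpc_id      = var.prod_vpc_id
}
```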
Hi rojopolis, thank you for your suggestions.
The scenario I would like to achieve is an infrastructure that I can use as a solid basis for my production environment.
Then, I would like to replicate this infrastructure to a test environment, if necessary extend it with new services, etc.
For that purpose, I thought I would put all the code in a git repo, then clone it into one folder on my PC to work on the production environment, and into a different folder to work on the test environment.
Using directories, I will be able to manage the prod and test environments separately, keep each in sync with the corresponding git branch, and so on.
Another need I might have is to replicate, for example, the production env into a preprod env; in that case, maybe it would be a good idea to use workspaces.
Do you think that the combination of directories (to separate environments) and workspaces (to separate instances of the same environment, like test1, test2, etc.) could be a good idea?
In this scenario, I don’t know if I still need to use modules. I’m a one-person DevOps team, and the infrastructures I manage with Terraform have fewer than 100 resources.
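To give an idea, the layout I have in mind would look something like this (the paths and workspace names are just an example):

```
~/terraform/prod/    # clone of the repo on the "prod" branch
~/terraform/test/    # clone of the repo on the "test" branch
    ├── main.tf
    ├── variables.tf
    └── ...
# then, inside the test clone, one workspace per instance:
#   terraform workspace new test1
#   terraform workspace new test2
```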
I personally agree with @rojopolis’s recommendation to try to use modules over workspaces. Trying to extract reusable modules can make your configuration more maintainable, even if they’re only used once.
The following resources might also be helpful reading:
If I were starting a new infrastructure project, I would probably use Terraform Cloud with the VCS workflow, and follow the HashiCorp recommended structure: one Terraform Cloud workspace per environment per configuration, using VCS branches to differentiate.
Terraform Cloud with a VCS workflow can have advantages even if you’re the only practitioner using it at the moment, and is clearly a good direction to go in as your team grows.
(Note that the Terraform CLI concept of “workspaces” is separate from Terraform Cloud “workspaces”, and the clash of names is unfortunate.)