Hey everyone! I’m developing a set of Terraform modules that I can use to deploy web applications to different environments (ad-hoc per developer, dev, qa, rc, stage, prod, demo, etc.)
I originally learned a lot about Infrastructure as Code by using CloudFormation and then CDK, and I have published a CDK construct library that I can use for the same purpose of deploying web applications, here’s the link: https://www.npmjs.com/package/django-cdk. I’m basically trying to create something similar with Terraform.
I started with a Terraform configuration that lives in the same repo as my Django + Vue.js application monorepo as one big `main.tf` file, then broke that into modules and refactored to find the right abstraction and organization, and that has been working really well. For the next step in abstraction, I created a new repo called `terraform-aws-django` and published it to the Terraform Registry (link).
I’m now having trouble using this module from a live repo that will be a 1:1 mapping of what I have deployed in my AWS accounts.
In the Terraform Up and Running book (first edition, which I’m realizing is quite old at this point), I read that the most DRY way to do this is to have only `*.tfvars` files in the live repo and use a special `source` parameter in the `*.tfvars` files that points to a versioned git repo where my modules live. Was this an old feature of Terraform that has since been removed? I asked about this in more detail in this StackOverflow question and was told that it is not possible.
This seemed like a great way to organize my live Terraform repo when I read the book, but after asking that question on SO and looking at the updated second edition, I see that this is no longer supported. The new edition instead recommends using Terragrunt and `*.hcl` files, but I don’t want to add another wrapper/tool since I’m still trying to figure out a way to do everything with Terraform alone.
What I’m trying to do now seems like it might work, but it also feels like it will involve lots of duplicated code where I was hoping to keep things simpler. In my live repo I have a folder per environment with five files:
- `main.tf` → this just calls my module (I’ll call it `app`) from the Terraform Registry, pinned to a `version`, with parameters passed as arguments on the module block (`param = var.my_param`)
- `variables.tf` → this defines all of the parameters that I pass to the single module called from the root module (it’s a copy of the variable definitions at the root level of the child module)
- `outputs.tf` → the module has some outputs that I need in a CI/CD pipeline, so I have to define a new set of outputs on the parent module that just re-export the outputs from the root level of the child module (again, this seems to me like the wrong way to do it)
- `providers.tf` → I originally defined a `providers.tf` at the root level of my `app` module, but this won’t work when I’m calling the module as a child module, since the `terraform` block needs to be defined in the root module.
- `env.tfvars` → this is the file I use to define the inputs for the live environment. Again, I was originally hoping to use just this one file and define a `source` parameter in it, as described in the last chapter of v1 of the Terraform Up and Running book, but I don’t think that’s going to be possible. I could then define any number of other environments with `other-env.tfvars` files and point to the right folder when I run `terraform apply` in my pipelines.
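To make the duplication concrete, each environment folder ends up looking roughly like this (the registry path and `my_param` are placeholders here, not my real module’s interface):

```hcl
# dev/main.tf — the root module just wraps the registry module
terraform {
  required_version = ">= 1.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

module "app" {
  # hypothetical registry path; substitute the real namespace
  source  = "my-namespace/django/aws"
  version = "0.1.0"

  my_param = var.my_param
}

# dev/variables.tf — duplicated from the child module's variables.tf
variable "my_param" {
  type = string
}

# dev/outputs.tf — re-exported from the child module for CI/CD
output "service_url" {
  value = module.app.service_url
}
```

Every new variable or output on the module means touching each environment folder again, which is the duplication I’d like to avoid.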
My main goal here is to use a remote module as the root module for a “live” Terraform configuration that I can define minimally with a single `*.tfvars` file (if that’s even possible).
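As far as I can tell, the closest plain Terraform gets to that is a variables file consumed with `-var-file`; a sketch, assuming a single `my_param` input:

```hcl
# dev/env.tfvars — plain variable values only; as far as I know,
# Terraform does not allow a module "source" to be set from a .tfvars file
my_param = "dev-value"
```

which a pipeline would then consume with something like `terraform -chdir=dev apply -var-file=env.tfvars`.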
Thanks for having a look at my question! I’m eager to get the most DRY patterns in place for my practice Terraform repo so I can move on to the other parts of my CI/CD pipeline that will automate `terraform init`/`plan`/`apply`. If there are any other details I can share, I’d be more than happy to do so!