I was working on a project that would create a VPC and a set of resources.
Duplicating the project and changing the contents of tfvars would create another VPC with the same resources but different CIDRs and the like.
All created VPCs would also have a transit gateway attachment allowing our devs to access the resources via a single VPN.
The problem is that the cost of a transit gateway attachment per VPC is prohibitive.
I now plan to use the same model as our manually created resources: instead of a VPC per set of resources, I will create a single VPC with a /16 CIDR split into /24 CIDRs. Each /24 will contain one of the resource sets that was going to be in its own VPC.
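For illustration, computing those /24 ranges from the parent /16 with Terraform's cidrsubnet function might look something like this (the CIDR and component names are just placeholders):

```hcl
variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16" # placeholder parent range
}

locals {
  components = ["frontend", "backend", "data"] # placeholder resource sets

  # Allocate one /24 per component out of the /16 (8 additional prefix bits).
  component_cidrs = {
    for i, name in local.components :
    name => cidrsubnet(var.vpc_cidr, 8, i)
  }
  # => { backend = "10.0.1.0/24", data = "10.0.2.0/24", frontend = "10.0.0.0/24" }
}
```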
My question is: is it OK to have "sub" projects, separate from each other, working within the same space?
For example, a project with its own state file that creates a basic VPC.
Another project with its own state file that creates subnets, routes, and resources within the VPC created by the "vpc project".
Another project that creates its own set of subnets, routes and such.
The end result being multiple Terraform projects maintaining their own sets of resources, adding and deleting them within a single shared VPC.
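For illustration, each project would keep its own state, e.g. under a different key in the same backend (the bucket and key names here are made up):

```hcl
# The "vpc project" backend:
terraform {
  backend "s3" {
    bucket = "example-terraform-state" # placeholder
    key    = "shared-vpc/vpc.tfstate"
    region = "us-east-1"
  }
}

# One of the resource-set projects, with its own separate state:
terraform {
  backend "s3" {
    bucket = "example-terraform-state" # placeholder
    key    = "shared-vpc/component-a.tfstate"
    region = "us-east-1"
  }
}
```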
Terraform itself has no awareness of VPCs and subnets, so it just tracks individual objects typically by their ID as assigned in the remote system. For example, the hashicorp/aws provider tracks instances of aws_vpc using the EC2-assigned ID shaped like vpc-NNNNNN, and likewise aws_subnet using subnet-NNNNN.
The important thing to keep in mind is that Terraform expects that any resource block has exclusive control over the object(s) that are bound to it. That means that you should not have two resource "aws_vpc" blocks that represent the same VPC, but it's fine to declare other separate objects that live inside that VPC because they will be tracked separately.
Exactly what it means for a remote object to be "bound to" a resource instance depends on the provider and resource type, so it's hard to give a general rule more specific than the above. For aws_vpc and aws_subnet in particular, though, it's safe and common to declare a single network structure in one configuration and then have many other configurations declare other objects that live inside those networks. If you do this, you should typically rely exclusively on dynamic IP address assignment within the subnets so that EC2 can be responsible for assigning each object its own unique IP addresses, and use security groups to limit the interactions between objects that belong to different subsystems.
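As a sketch of that pattern (all names here are illustrative, and it assumes the network configuration publishes its IDs as output values):

```hcl
# In the "vpc project":
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

output "vpc_id" {
  value = aws_vpc.main.id
}

# In a consuming configuration, read the vpc project's outputs
# and declare separate objects that live inside that VPC:
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "example-terraform-state" # placeholder
    key    = "shared-vpc/vpc.tfstate"
    region = "us-east-1"
  }
}

resource "aws_subnet" "component_a" {
  vpc_id     = data.terraform_remote_state.vpc.outputs.vpc_id
  cidr_block = "10.0.1.0/24"
}
```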
You also need to be careful around things like naming. Since you will have multiple root modules running within the same VPC, you won't be able to get a full picture of what is going on from a single code repository, so I'd suggest thinking carefully about how you name things to avoid accidental name clashes.
Ahh yes, this is a good point which I just glossed over.
In a previous role, when I was using an architecture similar to what this topic is asking about, we devised a systematic way to use AWS tags to keep things relatively orderly. That included designating a particular tag key (in our case "Component") to record which subsystem's Terraform configuration each object belongs to, which then allowed us to filter those objects in data blocks and in the AWS console when needed.
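For example, a minimal sketch of that scheme ("billing" stands in for whichever subsystem name you'd use):

```hcl
# Each object managed by the "billing" configuration carries the tag:
resource "aws_subnet" "this" {
  vpc_id     = var.vpc_id
  cidr_block = var.cidr_block

  tags = {
    Component = "billing" # which subsystem's configuration owns this object
  }
}

# Other configurations (or ad-hoc queries) can then filter on it:
data "aws_subnets" "billing" {
  filter {
    name   = "tag:Component"
    values = ["billing"]
  }
}
```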
In EC2 land most object types don't require unique names and instead use server-generated opaque IDs like the vpc-NNNN examples I was using above, but there are a few that do, so it's important to check the documentation carefully and understand what each object type requires. If you do need to declare objects of types that require unique names then you'll need to choose an unambiguous naming scheme for those, such as adding the "component" name (whatever you might've put in your equivalent of my "Component" tag) as a prefix to the names.
Outside of EC2/VPC, unfortunately, AWS services do tend to require unique names, so the prefix scheme becomes more important there. One annoyance is that different AWS services have different naming conventions and different requirements about which characters are allowed: in some services it's conventional to use TitleCase names, in others dash-separated-words, and in others underscore_separated_words. This does unfortunately tend to get a bit messy, but if you're willing to deviate a bit from the documented naming conventions then I think you can typically make do by making sure your prefixes consist entirely of lowercase ASCII letters and are relatively short, which should then make them valid across most AWS services, even if perhaps not idiomatic (for services that typically use title case).
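For example, a short lowercase prefix like this should be accepted (if not always idiomatic) across most services; the "billing" prefix and resource names are illustrative:

```hcl
locals {
  prefix = "billing" # short, lowercase ASCII: valid in most AWS naming schemes
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "${local.prefix}-artifacts" # S3 requires lowercase, globally unique names
}

resource "aws_iam_role" "worker" {
  name = "${local.prefix}-worker"

  # Minimal trust policy so the example is self-contained; adjust as needed.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}
```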