I was working on a project that would create a VPC and a set of resources.
Duplicating the project and changing the contents of tfvars would create another VPC with the same resources but different cidrs and the like.
All created VPCs would also have a transit gateway attachment allowing our devs to access the resources via a single VPN.
The problem is the cost of the transit gateway attachment is prohibitive.
I now plan to use the same model as our manually created resources: instead of a VPC per set of resources, I will create a single VPC with a /16 CIDR split into /24 CIDRs. Each /24 CIDR will contain one of the resource sets that was going to be in an individual VPC.
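To make the split concrete, here's a rough sketch of how carving /24s out of a single /16 could look using Terraform's `cidrsubnet()` function (the local names here are just placeholders):

```hcl
locals {
  vpc_cidr = "10.0.0.0/16" # placeholder CIDR

  # cidrsubnet(prefix, newbits, netnum): adding 8 bits to a /16 yields a /24.
  set_a_cidr = cidrsubnet(local.vpc_cidr, 8, 0) # "10.0.0.0/24"
  set_b_cidr = cidrsubnet(local.vpc_cidr, 8, 1) # "10.0.1.0/24"
}
```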
My question is - Is it ok to have ‘sub’ projects separate from each other working within the same space?
For example, a project with its own state file that creates a basic VPC.
Another project with its own state file that creates subnets, routes and resources within the VPC created by the ‘vpc project’.
Another project that creates its own set of subnets, routes and such.
The end result being multiple terraform projects maintaining their own sets of resources, adding and deleting them within a single shared VPC.
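To illustrate, the layout I have in mind looks roughly like this (directory names are just placeholders):

```
infra/
├── vpc/       # creates the shared VPC; its own state file
├── set-a/     # subnets, routes and resources for one resource set; separate state
└── set-b/     # another independent resource set; separate state
```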
Terraform itself has no awareness of VPCs and subnets, so it just tracks individual objects typically by their ID as assigned in the remote system. For example, the hashicorp/aws provider tracks instances of aws_vpc using the EC2-assigned ID shaped like vpc-NNNNNN, and likewise aws_subnet using subnet-NNNNN.
The important thing to keep in mind is that Terraform expects that any resource block has exclusive control over the object(s) that are bound to it. That means that you should not have two resource "aws_vpc" blocks that represent the same VPC, but it’s fine to declare other separate objects that live inside that VPC because they will be tracked separately.
Exactly what it means for a remote object to be “bound to” a resource instance depends on the provider and resource type so it’s hard to give a general rule that’s more specific than the above, but for aws_vpc and aws_subnet in particular it’s safe and common to declare a single network structure in one configuration and then have many other configurations declare other objects that live inside those networks. If you do this, you should typically rely exclusively on dynamic IP address assignment within the subnets so that EC2 can be responsible for assigning each object its own unique IP addresses, and use security groups to limit the interactions between objects that belong to different subsystems.
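One common way to wire these configurations together is to publish the VPC's ID as an output in the network configuration and read it from the other configurations via `terraform_remote_state` (a sketch only; the bucket, key, and resource names below are placeholders):

```hcl
# In the "vpc project": publish the VPC ID for other configurations to consume.
output "vpc_id" {
  value = aws_vpc.main.id
}

# In a consuming project: look up the shared VPC's state and create
# subnets inside the VPC it declared.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-terraform-state" # placeholder
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_subnet" "app" {
  vpc_id     = data.terraform_remote_state.network.outputs.vpc_id
  cidr_block = "10.0.1.0/24"
}
```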
You also need to be careful around things like naming. As you will have multiple root modules running within the same VPC, you won’t be able to get a full understanding of what is going on from looking at a single code repository — so I’d suggest thinking about how you name things so you don’t accidentally cause name clashes.
Ahh yes, this is a good point which I just glossed over.
When I was using an architecture similar to what this topic is asking about in a previous role, we devised a systematic way to use AWS tags to keep things relatively orderly. That included designating a particular tag key (in our case “Component”) to represent which subsystem’s Terraform configuration each object belongs to, which then allowed us to filter those objects in data blocks and in the AWS console when needed.
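A minimal sketch of that tagging convention, assuming a made-up subsystem name of “billing”: apply the tag to everything a configuration manages via `default_tags`, then filter on it elsewhere with a data block:

```hcl
# Tag every object this configuration manages with its subsystem name.
provider "aws" {
  region = "us-east-1"
  default_tags {
    tags = {
      Component = "billing" # placeholder subsystem name
    }
  }
}

# In another configuration: filter the shared VPC's subnets down to
# only the objects belonging to that subsystem.
data "aws_subnets" "billing" {
  filter {
    name   = "tag:Component"
    values = ["billing"]
  }
}
```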
In EC2 land most object types don’t require unique names and instead use server-generated opaque IDs like the vpc-NNNN examples I was using above, but there are a few that do, so it’s important to check the documentation carefully and understand what each object type requires. If you do need to declare objects of types that require unique names, then you’ll need to choose an unambiguous naming scheme for those, such as adding the “component” name (whatever you might’ve put in your equivalent of my “Component” tag) as a prefix to the names.
Outside of EC2/VPC, unfortunately, AWS services do tend to require unique names, so the prefix scheme becomes more important there. One annoyance is that different AWS services have different naming conventions and requirements about what characters are allowed: in some services it’s conventional to use TitleCase names, in others dash-separated-words, and in others underscore_separated_words. This does unfortunately tend to get a bit messy, but if you’re willing to deviate a bit from the documented naming conventions then I think you can typically make do by making sure your prefixes consist entirely of lowercase ASCII letters and are relatively short, which should then make them valid across most AWS services, even if perhaps not idiomatic (for services that typically use title case).
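For example, the prefix scheme might look something like this (the component name and queue are placeholders, not a prescribed convention):

```hcl
# A short, lowercase-only prefix tends to be valid across most AWS
# services, even where it isn't the idiomatic naming style.
locals {
  component = "billing" # placeholder subsystem name

  # e.g. "billing-events" for dash-style services; the same prefix could
  # be reused in services that prefer underscores or TitleCase.
  queue_name = "${local.component}-events"
}

resource "aws_sqs_queue" "events" {
  name = local.queue_name
}
```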