Hello everybody,
I am currently porting infrastructure creation and management from a custom system built on the AWS SDK to Terraform. Mostly things are going well, but I have some doubts. When creating an EC2 instance and an RDS instance, it is important that, apart from being in the same region, they also use the same subnet and availability zone. The appropriate availability zone is determined by the following conditions:
- The subnet is part of the default VPC for the region
- The subnet is available
- The subnet has more than 20 available IP addresses
Fetch all such subnets and choose the first available one. As long as both the EC2 and RDS instances are running, this selection should remain unchanged.
Based on these criteria I wrote a simple Terraform module:
data "aws_vpc" "main" {
default = true
}
data "aws_subnets" "vpcsubnets" {
filter {
name = "vpc-id"
values = [data.aws_vpc.main.id]
}
filter {
name = "default-for-az"
values = [true]
}
filter {
name = "state"
values = ["available"]
}
}
data "aws_subnet" "vpcsubnet" {
for_each = toset(data.aws_subnets.vpcsubnets.ids)
id = each.value
}
locals {
filtered_subnets = [
for v in data.aws_subnet.vpcsubnet : v if v.available_ip_address_count > 20
]
}
output "default-subnet" {
description = "Filtered default subnet"
value = element(local.filtered_subnets, 0)
}
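For reference, the instances consume the module output roughly like this; the module path, AMI, instance class, credentials, and other arguments below are placeholders standing in for my real configuration:

# Illustrative only – names and argument values are placeholders
module "default_subnet" {
  source = "./modules/default-subnet"   # placeholder path to the module above
}

resource "aws_instance" "app" {
  ami           = "ami-00000000000000000"   # placeholder
  instance_type = "t3.micro"                # placeholder
  subnet_id     = module.default_subnet.default-subnet.id
}

resource "aws_db_instance" "db" {
  engine              = "postgres"       # placeholder
  instance_class      = "db.t3.micro"    # placeholder
  allocated_storage   = 20               # placeholder
  username            = "admin"          # placeholder
  password            = "change-me"      # placeholder
  skip_final_snapshot = true
  availability_zone   = module.default_subnet.default-subnet.availability_zone
}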
It works as expected. I do have one doubt that I would like to get clarified. As instances are added to the same region, the number of available IPs will drop, and that may bring a used subnet's IP count below the threshold. My question is: if that happens and one changes something on an existing EC2 or RDS instance and runs apply, would that force recalculation of the custom subnet module and force a subnet change as well? I would like to prevent this if it can happen.
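One idea I had, though I am not sure it is the right tool, is to pin those arguments with a lifecycle block so the instances keep their original placement even if the module starts returning a different subnet, roughly like this sketch on the placeholder resources above:

resource "aws_instance" "app" {
  # ... same arguments as above ...
  lifecycle {
    # Keep the instance in the subnet it was created in, even if the
    # data sources later resolve to a different subnet
    ignore_changes = [subnet_id, availability_zone]
  }
}

Is that a reasonable way to handle it, or is there a cleaner approach?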
Thanks