I have a subnet resource declared as follows. It creates four subnets, one in each of the four AZs in us-west-2 on AWS:
```typescript
const largePublicSubnets = new aws.vpc.Subnet(this, "large_pub_subnet", {
  availabilityZone,
  cidrBlock:
    "${cidrsubnet(aws_vpc.vpc.cidr_block, 4, count.index + 2 * length(data.aws_availability_zones.all.names))}",
  mapPublicIpOnLaunch: true,
  tags: {
    name: environment + "-large-public-subnet-${(count.index + 1)}",
    ["kubernetes.io/cluster/" + eksClusterName]: "shared",
    "kubernetes.io/role/elb": "1",
    environment: environment,
    public: "true",
  },
  vpcId: "${aws_vpc.vpc.id}",
});
largePublicSubnets.addOverride("count", Fn.lengthOf(zoneNames));
```
In addition to this, I have a node group declared as follows, which uses all four of the above subnets:
```typescript
const firehoseNodeGroup = new aws.eks.EksNodeGroup(this, "firehoseNodeGroup", {
  clusterName: eksCluster.name,
  nodeGroupName: `eksNodeGroup-${environment}-firehose`,
  nodeRoleArn: nodeGroupRole.arn,
  subnetIds: [],
  instanceTypes: ["m5.4xlarge"],
  labels: {
    app: "firehose",
  },
  scalingConfig: {
    minSize: 0,
    maxSize: 4,
    desiredSize: 1,
  },
  dependsOn: eksNodeGroupRoleAttachments,
  tags: {
    Name: "firehose_eks_node",
  },
  lifecycle: {
    ignoreChanges: ["scaling_config[0].desired_size"],
  },
  taint: [
    {
      key: "app",
      value: "firehose",
      effect: "NO_SCHEDULE",
    },
  ],
});
```
```typescript
firehoseNodeGroup.addOverride(
  "subnet_ids",
  "${[aws_subnet.large_pub_subnet[0].id, aws_subnet.large_pub_subnet[1].id, aws_subnet.large_pub_subnet[2].id, aws_subnet.large_pub_subnet[3].id]}"
);
```
I'd like to restrict my node group to run in only one specific AZ, us-west-2c, but I'm not sure how to correctly determine which of the four subnets corresponds to that AZ. How can I do that? I'm open to redesigning the subnet declaration if needed, since the current structure is tech debt we've accumulated.
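For what it's worth, my current thinking is that since the subnets are created with `count` over `data.aws_availability_zones.all.names`, and Terraform returns AZ names in sorted order, us-west-2c should be index 2 in that list. So I could compute the index at synth time and build the reference from it. Here's a rough sketch of the helper I have in mind (plain TypeScript, untested against CDKTF; `subnetRefForAz` is just a name I made up):

```typescript
// Hypothetical helper: map an AZ name to a Terraform reference for the
// subnet created at the matching count index, given the same ordered AZ
// list that the count-based subnet resource iterates over.
function subnetRefForAz(zoneNames: string[], az: string): string {
  const idx = zoneNames.indexOf(az);
  if (idx === -1) {
    throw new Error(`AZ ${az} not found in: ${zoneNames.join(", ")}`);
  }
  // Build the escaped-interpolation string for the count-indexed subnet.
  return `\${[aws_subnet.large_pub_subnet[${idx}].id]}`;
}

// Example with the four us-west-2 AZs in sorted order:
const zoneNames = ["us-west-2a", "us-west-2b", "us-west-2c", "us-west-2d"];
console.log(subnetRefForAz(zoneNames, "us-west-2c"));
// → ${[aws_subnet.large_pub_subnet[2].id]}
```

I'd then call `firehoseNodeGroup.addOverride("subnet_ids", subnetRefForAz(zoneNames, "us-west-2c"))` instead of hard-coding the four-element list. But I'm not confident this is robust, especially given the `count.index` arithmetic already baked into the `cidrsubnet` expression, hence the question.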