I built this code for example purposes, but no matter how I configure it, the EC2 instance is not reachable over SSH, even though the EC2 console suggests it should be connectable. I am using Terraform v0.13.1.
This is my TF code:
resource "aws_instance" "live" {
  ami                         = "ami-060e472760062f83f"
  associate_public_ip_address = false
  instance_type               = "t2.nano"
  key_name                    = "xxx"
  subnet_id                   = aws_subnet.default.id

  vpc_security_group_ids = [
    aws_security_group.http-group.id,
    aws_security_group.https-group.id,
    aws_security_group.ssh-group.id,
    aws_security_group.all-outbound-traffic.id,
  ]

  depends_on = [
    aws_internet_gateway.gw
  ]

  lifecycle {
    ignore_changes = [
      user_data,
      associate_public_ip_address
    ]
  }
}
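For reference, the address I am trying to SSH to is the Elastic IP defined further down. To rule out connecting to the wrong address, I surface it with an output along these lines (my own addition, not part of the config above):

```hcl
# Surface the Elastic IP so I can confirm I am SSHing to the right address.
output "instance_public_ip" {
  value = aws_eip.lb.public_ip
}
```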
Networking setup:
resource "aws_vpc" "default" {
  cidr_block         = "10.0.0.0/16"
  enable_dns_support = true
}

resource "aws_subnet" "default" {
  vpc_id     = aws_vpc.default.id
  cidr_block = "10.0.10.0/24"
}

resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.default.id
}
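One thing I am unsure about: I attach an internet gateway, but I never explicitly associate the subnet with a route table that routes through it, so the subnet presumably falls back to the VPC's main route table. If that is the problem, I believe the fix would look roughly like this (resource names are my own guesses, not from my existing config):

```hcl
# Route table sending all non-local traffic to the internet gateway.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.default.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }
}

# Associate the subnet with that route table so instances in it
# can actually reach (and be reached from) the internet.
resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.default.id
  route_table_id = aws_route_table.public.id
}
```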
resource "aws_security_group" "https-group" {
  name        = "https-access-group"
  description = "Allow traffic on port 443 (HTTPS)"

  tags = {
    Name = "HTTPS Inbound Traffic Security Group"
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "http-group" {
  name        = "http-access-group"
  description = "Allow traffic on port 80 (HTTP)"

  tags = {
    Name = "HTTP Inbound Traffic Security Group"
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "all-outbound-traffic" {
  name        = "all-outbound-traffic-group"
  description = "Allow traffic to leave the AWS instance"

  tags = {
    Name = "Outbound Traffic Security Group"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "ssh-group" {
  name        = "ssh-access-group"
  description = "Allow traffic to port 22 (SSH)"

  tags = {
    Name = "SSH Access Security Group"
  }

  ingress {
    description = "SSH to VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
resource "aws_eip_association" "eip_assoc" {
  depends_on = [
    aws_instance.live,
    aws_eip.lb
  ]

  instance_id   = aws_instance.live.id
  allocation_id = aws_eip.lb.id
  #network_interface_id = aws_network_interface.multi-ip.id
}

resource "aws_eip" "lb" {
  vpc      = true
  instance = aws_instance.live.id

  depends_on = [
    aws_instance.live
  ]
}
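As an aside, I am not sure whether setting `instance` on the `aws_eip` and also creating a separate `aws_eip_association` is redundant or actively harmful; my understanding is that one of the two mechanisms should suffice, e.g. (a sketch keeping my existing resource names):

```hcl
# EIP with no inline instance attachment; the association
# resource below is the single place the attachment is made.
resource "aws_eip" "lb" {
  vpc = true
}

resource "aws_eip_association" "eip_assoc" {
  instance_id   = aws_instance.live.id
  allocation_id = aws_eip.lb.id
}
```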
And the Route53 setup:
resource "aws_route53_zone" "default" {
  name = "xxx.io."

  vpc {
    vpc_id = aws_vpc.default.id
  }
}

resource "aws_route53_record" "x-www" {
  zone_id = aws_route53_zone.default.zone_id
  name    = "www.xxx.io"
  type    = "A"
  ttl     = "300"
  records = [aws_eip.lb.public_ip]
}

resource "aws_route53_record" "x-sub-www" {
  zone_id = aws_route53_zone.default.zone_id
  name    = "*.xxx.io"
  type    = "A"
  ttl     = "300"
  records = [aws_eip.lb.public_ip]
}
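I also wonder whether the `vpc` block on the zone matters here: as far as I understand, declaring `vpc` makes it a private hosted zone, which is only resolvable from inside the VPC, so for public DNS the zone would be declared without it (sketch):

```hcl
resource "aws_route53_zone" "default" {
  name = "xxx.io."
  # No vpc block: without it this is a public hosted zone,
  # resolvable from the public internet.
}
```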
In the AWS console everything appears to be wired together correctly, but the local-exec provisioner that is also in the Terraform config cannot SSH into the EC2 instance, and neither can I. What am I doing wrong?