We recently went through all the steps to bring our own IPv6 to AWS. We have IPv6 IPAM pools set up in our account, so the next step was to create VPCs using our IPv6 space. We updated the Terraform to use `ipv6_ipam_pool_id` when creating the VPC, and the initial plan and apply work great. HOWEVER, when I do a second plan/apply (without changing any of the Terraform code) I get an error. Here’s some minimal Terraform code to reproduce the issue:
```
resource "aws_vpc" "brittest" {
  cidr_block        = "10.155.224.0/22"
  ipv6_ipam_pool_id = "ipam-pool-<REDACTED>"
  tags              = { "Name" = "Terrabritt" }
}
```
When I do the second plan I get:
```
  # aws_vpc.brittest will be updated in-place
  ~ resource "aws_vpc" "brittest" {
        id                = "vpc-<REDACTED>"
      ~ ipv6_ipam_pool_id = "IPAM Managed" -> "ipam-pool-<REDACTED>"
        tags              = {
            "Name" = "Terrabritt"
        }
        # (19 unchanged attributes hidden)
    }
```
If I check my terraform state, I see the pool_id is indeed “IPAM Managed” and not the pool I provided:
```shell
% grep ipv6 terraform.tfstate
        "assign_generated_ipv6_cidr_block": false,
        "ipv6_association_id": "vpc-cidr-assoc-<REDACTED>",
        "ipv6_cidr_block": "<REDACTED>",
        "ipv6_cidr_block_network_border_group": "us-east-1",
        "ipv6_ipam_pool_id": "IPAM Managed",
        "ipv6_netmask_length": 0,
```
If I check the WebUI, it also displays “IPAM Managed”. And if I do an apply, it errors out. So I think there is some kind of bug here, either with `aws_vpc` or on the AWS side; a second apply should not be changing the pool id. I realize that `ipv6_ipam_pool_id` is hard to test b/c it does require bringing your own IPv6 pool to AWS. Is anyone using this successfully?
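In the meantime, the only stopgap I can think of is telling Terraform to ignore the attribute entirely. This is just a sketch of a workaround, not a fix: it suppresses the spurious diff, but the state still holds the bogus “IPAM Managed” value.

```
resource "aws_vpc" "brittest" {
  cidr_block        = "10.155.224.0/22"
  ipv6_ipam_pool_id = "ipam-pool-<REDACTED>"
  tags              = { "Name" = "Terrabritt" }

  lifecycle {
    # Workaround sketch: ignore the "IPAM Managed" value the provider
    # reads back, so the second plan/apply becomes a no-op.
    ignore_changes = [ipv6_ipam_pool_id]
  }
}
```

The obvious downside is that a deliberate change to the pool id would also be ignored, so you would have to remove the lifecycle block before making one.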
I filed a GitHub issue as well.
opened 08:01PM - 01 Feb 23 UTC · labels: bug, needs-triage, service/vpc
### Terraform Core Version
1.3.6
### AWS Provider Version
4.52.0
### Affected Resource(s)
aws_ec2_managed_prefix_list with address_family set to IPv6
### Expected Behavior
When terraform apply is run initially, the prefix list is created just as expected. A subsequent terraform apply of the same resource should result in a no-op.
### Actual Behavior
The terraform plan sees each IPv6 entry as different from what is already configured. It tries to delete all the existing entries and then create identical ones. The apply fails b/c the entry actually already exists.
This appears to only affect IPv6 prefix lists.
### Relevant Error/Panic Output Snippet
```shell
aws_ec2_managed_prefix_list.IPv6_prefix_list: Modifying... [id=pl-07fa1eed8528c3f75]
╷
│ Error: updating EC2 Managed Prefix List (pl-07fa1eed8528c3f75): InvalidPrefixListModification: An entry with the CIDR (fd00:FEED:BEEF::/48) and description (Test IPv6 entry) already exists.
│ status code: 400, request id: fd45b551-fb49-4983-807b-0c75ed0bffbb
│
│ with aws_ec2_managed_prefix_list.IPv6_prefix_list,
│ on main.tf line 26, in resource "aws_ec2_managed_prefix_list" "IPv6_prefix_list":
│ 26: resource "aws_ec2_managed_prefix_list" "IPv6_prefix_list" {
│
╵
```
### Terraform Configuration Files
```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

resource "aws_ec2_managed_prefix_list" "IPv4_prefix_list" {
  name           = "Prefix List IPv4"
  address_family = "IPv4"
  max_entries    = 20

  entry {
    description = "Test IPv4 entry"
    cidr        = "10.148.0.0/18"
  }
}

resource "aws_ec2_managed_prefix_list" "IPv6_prefix_list" {
  name           = "Prefix List IPv6"
  address_family = "IPv6"
  max_entries    = 20

  entry {
    description = "Test IPv6 entry"
    cidr        = "fd00:FEED:BEEF::/48"
  }
}
```
### Steps to Reproduce
Run terraform apply twice in a row without changing the resource.
### Debug Output
[ipv6_prefix_lists_debug.txt](https://github.com/hashicorp/terraform-provider-aws/files/10561179/ipv6_prefix_lists_debug.txt)
### Panic Output
_No response_
### Important Factoids
_No response_
### References
_No response_
### Would you like to implement a fix?
None
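For what it’s worth, my hunch (unconfirmed) on that prefix-list issue is that AWS normalizes IPv6 CIDRs to lowercase on read, so the uppercase `fd00:FEED:BEEF::/48` in the config never matches what comes back, producing a perpetual diff. If that’s right, writing the entry in lowercase should sidestep it; a sketch under that assumption:

```
resource "aws_ec2_managed_prefix_list" "IPv6_prefix_list" {
  name           = "Prefix List IPv6"
  address_family = "IPv6"
  max_entries    = 20

  entry {
    description = "Test IPv6 entry"
    # Assumption: lowercase matches the normalized form AWS stores,
    # so the refreshed value compares equal to the config.
    cidr = "fd00:feed:beef::/48"
  }
}
```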
Oops… wrong issue. This is the right one:
opened 11:38AM - 01 Nov 22 UTC · labels: bug, service/vpc
### Terraform Core Version
1.3.3
### AWS Provider Version
4.37.0
### Affected Resource(s)
aws_vpc created with ipv6_ipam_pool_id argument
### Expected Behavior
The initial VPC creation happens successfully. VPC is created, and an IPv6 CIDR is allocated from the specified IPv6 pool. A second apply of the terraform with no changes should result in a no-op.
### Actual Behavior
When the terraform plan is run a second time, Terraform thinks it needs to change the `ipv6_ipam_pool_id` argument from `IPAM Managed` to the pool id in the Terraform file. A check of the tfstate reveals that `ipv6_ipam_pool_id` has been set to `IPAM Managed`. This value is also seen when the VPC is viewed in the WebUI. A terraform apply of this plan fails.
### Relevant Error/Panic Output Snippet
```shell
terraform plan
data.aws_region.current: Reading...
data.aws_region.current: Read complete after 0s [id=us-east-1]
aws_vpc.brittest: Refreshing state... [id=vpc-<REDACTED>]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated
with the following symbols:
~ update in-place
Terraform will perform the following actions:
  # aws_vpc.brittest will be updated in-place
  ~ resource "aws_vpc" "brittest" {
        id                = "vpc-<REDACTED>"
      + ipv6_ipam_pool_id = "ipam-pool-<REDACTED>"
        tags              = {
            "Name" = "Terrabritt"
        }
        # (16 unchanged attributes hidden)
    }
Plan: 0 to add, 1 to change, 0 to destroy.
```
### Terraform Configuration Files
With an IPAM pool that has an allocated IPv6 CIDR block...
```
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "brittest" {
  cidr_block        = "10.155.224.0/22"
  ipv6_ipam_pool_id = "ipam-pool-<REDACTED>"
  tags              = { "Name" = "Terrabritt" }
}
```
### Steps to Reproduce
terraform apply
terraform apply
### Debug Output
[aws_vpc_ipv6_ipam.log](https://github.com/hashicorp/terraform-provider-aws/files/9909427/aws_vpc_ipv6_ipam.log)
### Panic Output
_No response_
### Important Factoids
_No response_
### References
_No response_
### Would you like to implement a fix?
_No response_