Error on second apply of aws_vpc with ipv6_ipam_pool_id

We have recently gone through all the steps to bring our own IPv6 to AWS. We have IPv6 IPAM pools set up in our account, so the next step was to create VPCs using our IPv6. We updated our Terraform to use ipv6_ipam_pool_id when creating the VPC, and the initial plan and apply work great. HOWEVER, when I do a second plan/apply (without changing any of the Terraform code) I get an error. Here's some minimal Terraform code to reproduce the issue:

resource "aws_vpc" "brittest" {
  cidr_block        = "10.155.224.0/22"
  ipv6_ipam_pool_id = "ipam-pool-<REDACTED>"
  tags              = { "Name" = "Terrabritt" }
}

When I do the second plan I get:

  # aws_vpc.brittest will be updated in-place
  ~ resource "aws_vpc" "brittest" {
        id                                   = "vpc-<REDACTED>"
      ~ ipv6_ipam_pool_id                    = "IPAM Managed" -> "ipam-pool-<REDACTED>"
        tags                                 = {
            "Name" = "Terrabritt"
        }
        # (19 unchanged attributes hidden)
    }

If I check my Terraform state, I see the pool ID is indeed "IPAM Managed" and not the pool I provided:

% grep ipv6 terraform.tfstate
            "assign_generated_ipv6_cidr_block": false,
            "ipv6_association_id": "vpc-cidr-assoc-<REDACTED>",
            "ipv6_cidr_block": "<REDACTED>",
            "ipv6_cidr_block_network_border_group": "us-east-1",
            "ipv6_ipam_pool_id": "IPAM Managed",
            "ipv6_netmask_length": 0,

The AWS web console also displays "IPAM Managed". And if I do an apply, it errors out. So I think there is some kind of bug here, either with aws_vpc or on the AWS side; a second apply should not be changing the pool ID. I realize that ipv6_ipam_pool_id is hard to test because it requires bringing your own IPv6 pool to AWS. Is anyone using this successfully?
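In case it helps anyone hitting the same thing: as a stopgap (not a fix), Terraform's lifecycle ignore_changes meta-argument can suppress the perpetual diff on this attribute. This is just a sketch based on the reproduction above; it assumes you are OK with Terraform never reconciling ipv6_ipam_pool_id after creation:

resource "aws_vpc" "brittest" {
  cidr_block        = "10.155.224.0/22"
  ipv6_ipam_pool_id = "ipam-pool-<REDACTED>"
  tags              = { "Name" = "Terrabritt" }

  lifecycle {
    # The API reads back the literal string "IPAM Managed" instead of the
    # pool ID, so ignore drift on this attribute to avoid the bogus
    # in-place update on every subsequent plan/apply.
    ignore_changes = [ipv6_ipam_pool_id]
  }
}

This only masks the drift; it won't help if you ever need to change the pool ID in place.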

I have the same issue

I filed a github issue as well.

Oops… wrong issue. This is the right one: