How to run `terraform validate` on a module that is supplied its providers via aliases

We have a private Terraform module that operates on 2 AWS accounts.

The module has:

terraform {
  required_providers {
    aws = {
      configuration_aliases = [
        aws.current_account,
        aws.management_account
      ]
    }
  }
}

This is then called as follows:

module "tailscale_connection" {
  source  = "app.terraform.io/digitickets/tailscale-connection/aws"
  version = "0.0.1"

  vpc_id      = module.vpc.vpc_id
  vpc_cidr    = var.aws_vpc_cidr
  cluster     = var.cluster
  environment = var.environment

  providers = {
    aws.current_account    = aws
    aws.management_account = aws.management
  }
}

This all works great (i.e. everything is where it should be and everyone’s happy).

I’ve tried incorporating our usual tooling for terraform modules (pre-commit, pipelines, etc.) and have come across an issue with terraform validation:

$ terraform validate
╷
│ Error: Provider configuration not present
│ 
│ To work with aws_route.tailscale_route its original provider configuration at provider["registry.terraform.io/hashicorp/aws"].management_account is required, but it has been removed. This occurs
│ when a provider configuration is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy aws_route.tailscale_route, after which you can
│ remove the provider configuration again.
╵
╷
│ Error: Provider configuration not present
│ 
│ To work with data.aws_vpc.tailscale_vpc its original provider configuration at provider["registry.terraform.io/hashicorp/aws"].management_account is required, but it has been removed. This occurs
│ when a provider configuration is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy data.aws_vpc.tailscale_vpc, after which you
│ can remove the provider configuration again.
╵
╷
│ Error: Provider configuration not present
│ 
│ To work with data.aws_route_table.tailscale_rt its original provider configuration at provider["registry.terraform.io/hashicorp/aws"].management_account is required, but it has been removed. This
│ occurs when a provider configuration is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy data.aws_route_table.tailscale_rt,
│ after which you can remove the provider configuration again.
╵
╷
│ Error: Provider configuration not present
│ 
│ To work with aws_vpc_peering_connection.tailscale_current its original provider configuration at provider["registry.terraform.io/hashicorp/aws"].current_account is required, but it has been
│ removed. This occurs when a provider configuration is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy
│ aws_vpc_peering_connection.tailscale_current, after which you can remove the provider configuration again.
╵
╷
│ Error: Provider configuration not present
│ 
│ To work with aws_vpc_peering_connection_accepter.tailscale_management its original provider configuration at provider["registry.terraform.io/hashicorp/aws"].management_account is required, but it
│ has been removed. This occurs when a provider configuration is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy
│ aws_vpc_peering_connection_accepter.tailscale_management, after which you can remove the provider configuration again.
╵

Is there a way around this so that validation can take place with the rest of the code? (Not that validation really seems to do much sometimes … it misses a LOT that should be picked up before the apply stage, but that’s down to the provider, not terraform itself.)

Hi @rquadling,

As you’ve seen, there are some constructs that only make sense in a called module rather than a root module, and terraform validate today (like most other commands) is designed to work on a root module, so it gets tripped up by these features.

My usual technique for validating and otherwise testing shared modules in isolation is to create a subdirectory under the module which contains a root module that serves only to be a valid call into the module being tested. This is analogous to writing a simple main program to test a library in a general-purpose language.
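
Assuming a fairly typical module layout, the result looks something like this (the top-level filenames are just illustrative; the validate/ subdirectory is the new part):

.
├── main.tf
├── variables.tf
├── versions.tf
└── validate/
    └── validate.tf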

For example, you could make a subdirectory validate containing validate.tf with the following content:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  alias = "a"

  # ...
}

provider "aws" {
  alias = "b"

  # ...
}

module "test" {
  source = "../"

  # ...

  providers = {
    aws.current_account    = aws.a
    aws.management_account = aws.b
  }
}

You can then run terraform init and terraform validate in that subdirectory to get feedback on the validity of the entire configuration, which will include your child module.
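
Concretely, from the module’s root directory that is just:

$ cd validate
$ terraform init
$ terraform validate

terraform init here only has to install the AWS provider, and terraform validate never configures providers, so no backend and no AWS credentials are needed for this step.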

This is essentially a subset of the conventions of the Module Testing Experiment, so if you name your subdirectory tests/simple instead, and you make sure it has sufficient supporting infrastructure that it can be used with terraform plan and terraform apply, then you can optionally also use terraform test to automate the validate/plan/apply sequence against this test configuration, and thus your module.
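
To sketch what that looks like (terraform test is experimental, so treat the details as provisional; the file name simple.tf is just illustrative):

.
├── main.tf
├── variables.tf
└── tests/
    └── simple/
        └── simple.tf   # a small root module that calls the module under test with source = "../.."

$ terraform test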
