How to create hierarchical module structures

Thank you for explaining that. Once the configuration_aliases are declared, the providers are passed into the nf_cis_benchmark module like so:

module "nf_cis_benchmark" {
  source = "./modules/nf_cis_benchmark"

  name                = local.name
  environment         = local.environments[terraform.workspace]
  region              = data.aws_region.current.name
  organization_id     = data.aws_organizations_organization.org.id
  account_id          = data.aws_caller_identity.current.account_id
  workspace           = terraform.workspace
  workspace_iam_roles = var.workspace_iam_roles[terraform.workspace]

  providers = {
    aws.us-east-1      = aws
    aws.af-south-1     = aws.af-south-1
    aws.ap-east-1      = aws.ap-east-1
    aws.ap-northeast-1 = aws.ap-northeast-1
    aws.ap-northeast-2 = aws.ap-northeast-2
    aws.ap-south-1     = aws.ap-south-1
    aws.ap-southeast-1 = aws.ap-southeast-1
    aws.ap-southeast-2 = aws.ap-southeast-2
    aws.ca-central-1   = aws.ca-central-1
    aws.eu-central-1   = aws.eu-central-1
    aws.eu-north-1     = aws.eu-north-1
    aws.eu-south-1     = aws.eu-south-1
    aws.eu-west-1      = aws.eu-west-1
    aws.eu-west-2      = aws.eu-west-2
    aws.eu-west-3      = aws.eu-west-3
    aws.me-south-1     = aws.me-south-1
    aws.sa-east-1      = aws.sa-east-1
    aws.us-east-2      = aws.us-east-2
    aws.us-west-1      = aws.us-west-1
    aws.us-west-2      = aws.us-west-2
  }
}
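
Presumably the nf_cis_benchmark module itself declares the aliases it expects via configuration_aliases in its own required_providers block, the same pattern shown for the vpc module further down; a minimal sketch, trimmed to a few of the regions:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"

      configuration_aliases = [
        aws.us-east-1,
        aws.af-south-1,
        # ...and so on, one entry per alias used in the providers map above
        aws.us-west-2
      ]
    }
  }
}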

Then the nf_cis_benchmark module should call the vpc module, which uses the providers like so:

module "vpc" {
  count  = (var.environment != "logging" && var.environment != "billing") ? 1 : 0
  source = "./modules/vpc"

  aws_s3_bucket       = aws_s3_bucket.vpc_flow_log.id
  aws_s3_bucket_arn   = aws_s3_bucket.vpc_flow_log.arn
  workspace_iam_roles = var.workspace_iam_roles

  providers = {
    aws.us-east-1      = aws
    aws.af-south-1     = aws.af-south-1
    aws.ap-east-1      = aws.ap-east-1
    aws.ap-northeast-1 = aws.ap-northeast-1
    aws.ap-northeast-2 = aws.ap-northeast-2
    aws.ap-south-1     = aws.ap-south-1
    aws.ap-southeast-1 = aws.ap-southeast-1
    aws.ap-southeast-2 = aws.ap-southeast-2
    aws.ca-central-1   = aws.ca-central-1
    aws.eu-central-1   = aws.eu-central-1
    aws.eu-north-1     = aws.eu-north-1
    aws.eu-south-1     = aws.eu-south-1
    aws.eu-west-1      = aws.eu-west-1
    aws.eu-west-2      = aws.eu-west-2
    aws.eu-west-3      = aws.eu-west-3
    aws.me-south-1     = aws.me-south-1
    aws.sa-east-1      = aws.sa-east-1
    aws.us-east-2      = aws.us-east-2
    aws.us-west-1      = aws.us-west-1
    aws.us-west-2      = aws.us-west-2
  }
}
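
For that call to resolve, the vpc module also needs to declare the inputs being passed; a rough sketch (the types here are assumptions):

variable "aws_s3_bucket" {
  type = string
}

variable "aws_s3_bucket_arn" {
  type = string
}

variable "workspace_iam_roles" {
  type = map(string)
}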

The vpc module then calls a flow_log module per region, like so:


module "af-south-1" {
  source = "./modules/flow_log"

  providers = {
    aws = aws.af-south-1
  }

  log_destination      = var.aws_s3_bucket
  log_destination_type = "s3"
  traffic_type         = "REJECT"
  aws_vpc_ids          = data.aws_vpcs.af-south-1.ids
}
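
That call assumes a matching per-region data source in the vpc module, along the lines of the sa-east-1 one shown later in the thread:

# af-south-1
data "aws_vpcs" "af-south-1" {
  provider = aws.af-south-1
}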

The flow_log module's main.tf looks like so:

resource "aws_flow_log" "flow_log" {
  count = length(var.aws_vpc_ids)

  log_destination      = var.log_destination
  log_destination_type = var.log_destination_type
  traffic_type         = var.traffic_type
  vpc_id               = var.aws_vpc_ids[count.index]

  depends_on = [var.log_destination]

  # Tags
  tags = {
    Name             = var.aws_vpc_ids[count.index]
    cost_environment = local.environments[terraform.workspace] == "production" ? "production" : "non-production"
    cost_category    = "SEC"
    cost_team_owner  = "MOPRAV"
  }
}
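
Note that the tags reference local.environments and terraform.workspace, so the flow_log module needs its own locals block for that map (the values below are placeholders; the real mapping lives in the root module):

locals {
  environments = {
    # hypothetical workspace-to-environment mapping
    "nf-sandbox" = "sandbox"
    "nf-prod"    = "production"
  }
}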

The question is how to pass the providers from the root module through nf_cis_benchmark down to vpc so that each flow_log module is created in the appropriate AWS account.

The error that I get is

 Error: Cannot override provider configuration
│ 
│   on modules/nf_cis_benchmark/vpc.tf line 26, in module "vpc":
│   26:         aws.us-west-1  = aws.us-west-1,
│ 
│ Provider aws.us-west-1 is configured within the module module.nf_cis_benchmark.module.vpc and cannot be overridden.

I am trying to follow the documentation listed here: Providers Within Modules - Configuration Language - Terraform by HashiCorp.

Hi @EvanGertis,

I think that message is saying that there is still a provider "aws" block inside the module with alias = "us-west-1" and so the explicitly-configured one is conflicting with the passed-in one. Does that seem like a plausible explanation?
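
In other words, if modules/nf_cis_benchmark/modules/vpc still contains a block along these lines:

provider "aws" {
  alias  = "us-west-1"
  region = "us-west-1"
}

then that in-module configuration conflicts with the one being passed in via the providers argument, and the fix is to delete it and declare the alias with configuration_aliases instead.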

After removing the provider blocks with alias = "us-west-1" from the module's main.tf and rerunning terraform init, I get the following error:

│ Error: Failed to query available provider packages
│ 
│ Could not retrieve the list of available versions for provider hashicorp/aws: no available releases match the given constraints >= 13.7.0

Hmm this message is from the provider installer, which only cares about providers themselves, not about provider configurations. That is, it only uses the fact that your configuration depends on hashicorp/aws, and doesn’t make any use of the different configurations you’ve written out for it.

So it seems like something else is going on here which perhaps the other error was just masking before. Can you run terraform providers and see if the provider dependencies in there seem reasonable? Reasonable here means that all of the modules which use the AWS provider have a valid version constraint or no constraints at all.

If you have any custom provider installation settings in your CLI Configuration then that could be relevant too, because it might make your Terraform have a narrower view of which versions are available than it would have if it contacted the origin registry directly.
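
For example, a provider_installation block like the following in the CLI configuration file (~/.terraformrc or terraform.rc) forces installation from a local mirror, which could hide newer registry releases (the path here is just a placeholder):

provider_installation {
  filesystem_mirror {
    path    = "/usr/local/share/terraform/providers"
    include = ["hashicorp/*"]
  }

  direct {
    exclude = ["hashicorp/*"]
  }
}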

 Warning: Provider aws.us-west-1 is undefined
│ 
│   on modules/nf_cis_benchmark/vpc.tf line 26, in module "vpc":
│   26:         aws.us-west-1  = aws.us-west-1,
│ 
│ Module module.nf_cis_benchmark.module.vpc does not declare a provider
│ named aws.us-west-1.
│ If you wish to specify a provider configuration for the module, add an
│ entry for aws.us-west-1 in the required_providers block within the
│ module.

The goal is to pass the providers from the top module nf_cis_benchmark down to vpc and then down to flow_log. Within the vpc module I have a main.tf configuration file that includes the following config block:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 13.7.0"
    }
  }
}

Does this need to be modified?

The main.tf configuration file for the nf_cis_benchmark is constructed like so

provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "af-south-1"
  region = "af-south-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-east-1"
  region = "ap-east-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-northeast-1"
  region = "ap-northeast-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-northeast-2"
  region = "ap-northeast-2"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-south-1"
  region = "ap-south-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-southeast-1"
  region = "ap-southeast-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-southeast-2"
  region = "ap-southeast-2"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ca-central-1"
  region = "ca-central-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-central-1"
  region = "eu-central-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-north-1"
  region = "eu-north-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-south-1"
  region = "eu-south-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-west-1"
  region = "eu-west-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-west-2"
  region = "eu-west-2"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-west-3"
  region = "eu-west-3"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "me-south-1"
  region = "me-south-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "sa-east-1"
  region = "sa-east-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "us-east-2"
  region = "us-east-2"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "us-west-1"
  region = "us-west-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "us-west-2"
  region = "us-west-2"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}




terraform {
  required_version = ">= 0.13.7"

  backend "s3" {
    bucket         = "nf-mop-tf-state"
    key            = "security/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "nf-terraform-state-lock"
  }
}

# Current Account ID
data "aws_caller_identity" "current" {}

data "aws_region" "current" {}

The main.tf configuration file for the vpc module is constructed like so

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 13.7.0"
      configuration_aliases = [ 
        aws.af-south-1, 
        aws.ap-east-1, 
        aws.ap-northeast-1,
        aws.ap-northeast-2,
        aws.ap-south-1,
        aws.ap-southeast-1,
        aws.ap-southeast-2,
        aws.ca-central-1,
        aws.eu-central-1,
        aws.eu-north-1,
        aws.eu-south-1,
        aws.eu-west-1,
        aws.eu-west-2,
        aws.eu-west-3,
        aws.me-south-1,
        aws.sa-east-1,
        aws.us-east-2,
        aws.us-west-1,
        aws.us-west-2
        ]
    }
  }
}

I am passing the providers from the nf_cis_benchmark module down to the vpc module like so

module "vpc" {
  count  = (var.environment != "logging" && var.environment != "billing") ? 1 : 0
  source = "./modules/vpc"

  aws_s3_bucket       = aws_s3_bucket.vpc_flow_log.id
  aws_s3_bucket_arn   = aws_s3_bucket.vpc_flow_log.arn
  workspace_iam_roles = var.workspace_iam_roles

  providers = {
    aws.us-east-1      = aws
    aws.af-south-1     = aws.af-south-1
    aws.ap-east-1      = aws.ap-east-1
    aws.ap-northeast-1 = aws.ap-northeast-1
    aws.ap-northeast-2 = aws.ap-northeast-2
    aws.ap-south-1     = aws.ap-south-1
    aws.ap-southeast-1 = aws.ap-southeast-1
    aws.ap-southeast-2 = aws.ap-southeast-2
    aws.ca-central-1   = aws.ca-central-1
    aws.eu-central-1   = aws.eu-central-1
    aws.eu-north-1     = aws.eu-north-1
    aws.eu-south-1     = aws.eu-south-1
    aws.eu-west-1      = aws.eu-west-1
    aws.eu-west-2      = aws.eu-west-2
    aws.eu-west-3      = aws.eu-west-3
    aws.me-south-1     = aws.me-south-1
    aws.sa-east-1      = aws.sa-east-1
    aws.us-east-2      = aws.us-east-2
    aws.us-west-1      = aws.us-west-1
    aws.us-west-2      = aws.us-west-2
  }
}

The unexpected error that I receive when I run terraform init is

│ Error: Failed to query available provider packages
│ 
│ Could not retrieve the list of available versions for provider hashicorp/aws: locked provider registry.terraform.io/hashicorp/aws 3.57.0 does not match configured version constraint >= 13.7.0; must use terraform init -upgrade to allow
│ selection of new versions

After deleting the lock file (.terraform.lock.hcl), I now get

│ Error: Failed to query available provider packages
│ 
│ Could not retrieve the list of available versions for provider hashicorp/aws: no available releases match the given constraints >= 13.7.0

Hi @EvanGertis,

Looking a bit closer at exactly what that error message is reporting, it seems to be correct that there aren’t any released versions of hashicorp/aws that match that constraint: the current newest version I see in the registry is 3.65.0.

Do you think that constraint might’ve been intended to be >= 3.7.0 instead, which would then admit the latest version of the provider currently published in the registry?
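
i.e., presumably the vpc module's constraint was meant to look something like this, with the configuration_aliases list left unchanged:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.7.0"
      # configuration_aliases = [ ... ] as before
    }
  }
}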

@apparentlymart Thank you, I’ve moved past that.

I’m now running into an issue configuring the provider

# sa-east-1
data "aws_vpcs" "sa-east-1" {
  provider = aws.sa-east-1
}

module "sa-east-1" {
  source = "./modules/flow_log"

  providers = {
    aws.sa-east-1 = aws.sa-east-1
  }

  log_destination      = var.aws_s3_bucket
  log_destination_type = "s3"
  traffic_type         = "REJECT"
  aws_vpc_ids          = data.aws_vpcs.sa-east-1.ids
}

This is in the vpc module.

│ Warning: Provider aws.sa-east-1 is undefined
│ 
│   on modules/nf_cis_benchmark/modules/vpc/vpc.tf line 201, in module "sa-east-1":
│  201:         aws.sa-east-1 = aws.sa-east-1
│ 
│ Module module.nf_cis_benchmark.module.vpc.module.sa-east-1 does not declare a provider named aws.sa-east-1.
│ If you wish to specify a provider configuration for the module, add an entry for aws.sa-east-1 in the required_providers block within the module.

This message is saying that inside your ./modules/flow_log there isn't a declaration stating that this module expects an aliased provider configuration named sa-east-1.

That is, the configuration_aliases argument we were talking about earlier:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"

      configuration_aliases = [ aws.sa-east-1 ]
    }
  }
}

This is how Terraform will know that this module expects a configuration with that alias and thus resolve the providers argument in the module call.
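
Put another way, the left-hand side of each providers entry has to be a name the called module actually declares, either the default aws or an alias listed in configuration_aliases, so either of these shapes can work depending on how flow_log is written:

# if flow_log declares configuration_aliases = [ aws.sa-east-1 ]
providers = {
  aws.sa-east-1 = aws.sa-east-1
}

# if flow_log only uses the default provider
providers = {
  aws = aws.sa-east-1
}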

Thank you, I believe you bring up a good point. However, the module nf_cis_benchmark calls vpc, which then calls flow_log. We need to create a flow_log per AWS region, i.e.:

# sa-east-1
data "aws_vpcs" "sa-east-1" {
  provider = aws.sa-east-1
}

module "sa-east-1" {
  source = "./modules/flow_log"

  providers = {
    aws = aws.sa-east-1
  }

  log_destination      = var.aws_s3_bucket
  log_destination_type = "s3"
  traffic_type         = "REJECT"
  aws_vpc_ids          = data.aws_vpcs.sa-east-1.ids
}

Notice that I changed

aws.sa-east-1 = aws.sa-east-1

to

aws = aws.sa-east-1

I fixed it by adding a main.tf configuration file to the flow_log module like so


terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
      configuration_aliases = [ aws ]
    }
  }
}

The issue that I am running into now is related to AWS permissions. The question is: does the root module pass AWS permissions down to the sub-modules? For example,

module "nf_cis_benchmark" {
  source = "./modules/nf_cis_benchmark"

  name               = local.name
  environment        = local.environments[terraform.workspace]
  region             = data.aws_region.current.name
  organization_id    = data.aws_organizations_organization.org.id
  account_id         = data.aws_caller_identity.current.account_id
  workspace          = terraform.workspace
  workspace_iam_role = var.workspace_iam_roles[terraform.workspace]

  providers = {
    aws.us-east-1      = aws.us-east-1
    aws.af-south-1     = aws.af-south-1
    aws.ap-east-1      = aws.ap-east-1
    aws.ap-northeast-1 = aws.ap-northeast-1
    aws.ap-northeast-2 = aws.ap-northeast-2
    aws.ap-south-1     = aws.ap-south-1
    aws.ap-southeast-1 = aws.ap-southeast-1
    aws.ap-southeast-2 = aws.ap-southeast-2
    aws.ca-central-1   = aws.ca-central-1
    aws.eu-central-1   = aws.eu-central-1
    aws.eu-north-1     = aws.eu-north-1
    aws.eu-south-1     = aws.eu-south-1
    aws.eu-west-1      = aws.eu-west-1
    aws.eu-west-2      = aws.eu-west-2
    aws.eu-west-3      = aws.eu-west-3
    aws.me-south-1     = aws.me-south-1
    aws.sa-east-1      = aws.sa-east-1
    aws.us-east-2      = aws.us-east-2
    aws.us-west-1      = aws.us-west-1
    aws.us-west-2      = aws.us-west-2
  }
}

In this case, account_id is my user's account ID:

account_id = data.aws_caller_identity.current.account_id

I haven’t had issues provisioning resources before and I haven’t changed my ~/.aws/credentials file, but now I’m getting

│ Error: error reading SQS Queue (https://sqs.us-east-1.amazonaws.com/{ENV-ACCOUNT-ID}/nf-cisbenchmark-cloudwatch-alerts-nf-sandbox): AccessDenied: Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied.
│       status code: 403, request id: db812f03-1d4d-55c0-9e75-750c48210c83

Where ENV-ACCOUNT-ID is the account id for the environment.

The question I am trying to answer is how to pass the AWS account ID down from the root configuration to the nf_cis_benchmark module.

Hi @EvanGertis,

Terraform Core doesn’t directly interact with credentials itself, but passing a provider configuration should effectively pass in whatever credentials that configuration is using.

A way to think about it is that Terraform creates one instance of the provider plugin per distinct provider configuration. If you pass a provider configuration from the root module into a child module, the child makes its requests against exactly the same plugin process, which is configured with credentials just once. The remote AWS API therefore can't see any difference between requests made in the root and requests made in the module, as long as both resources are associated with the same provider configuration and thus the same plugin process.
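
One way to confirm which account a given configuration resolves to is to read aws_caller_identity through both the default and an aliased provider and compare the results; a quick sketch (the output names are just for illustration):

data "aws_caller_identity" "default" {}

data "aws_caller_identity" "us_west_1" {
  provider = aws.us-west-1
}

output "default_account_id" {
  value = data.aws_caller_identity.default.account_id
}

output "us_west_1_account_id" {
  value = data.aws_caller_identity.us_west_1.account_id
}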

I’m running into an unusual issue. Previously, I was able to run terraform apply from the root directory, but since I’ve changed the structure to a module hierarchy I am getting a permission-denied error.

Error: error reading S3 bucket Public Access Block (nf-cisbenchmark-nf-sandbox-cloudtrail): AccessDenied: Access Denied
│       status code: 403, request id: xxxxx, host id: xxxxxx=

The issue was in the list of providers: I needed to add a default provider (one with no alias).

provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

In addition to the original list of providers


provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "af-south-1"
  region = "af-south-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-east-1"
  region = "ap-east-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-northeast-1"
  region = "ap-northeast-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-northeast-2"
  region = "ap-northeast-2"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-south-1"
  region = "ap-south-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-southeast-1"
  region = "ap-southeast-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ap-southeast-2"
  region = "ap-southeast-2"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "ca-central-1"
  region = "ca-central-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-central-1"
  region = "eu-central-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-north-1"
  region = "eu-north-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-south-1"
  region = "eu-south-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-west-1"
  region = "eu-west-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-west-2"
  region = "eu-west-2"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "eu-west-3"
  region = "eu-west-3"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "me-south-1"
  region = "me-south-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "sa-east-1"
  region = "sa-east-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "us-east-2"
  region = "us-east-2"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "us-west-1"
  region = "us-west-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "us-west-2"
  region = "us-west-2"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}




terraform {
  required_version = ">= 0.13.7"

  backend "s3" {
    bucket         = "nf-mop-tf-state"
    key            = "security/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "nf-terraform-state-lock"
  }
}

# Current Account ID
data "aws_caller_identity" "current" {}

data "aws_region" "current" {}