How to create hierarchical module structures

I am working on a project to implement the CIS Foundations Benchmark with Terraform. I've already implemented the framework in Terraform; however, I am running into issues when dealing with multiple AWS accounts. My goal is to remove 9 AWS S3 buckets associated with CloudTrail logs, which should reduce our AWS bill by roughly $10k.

My goal is to create a module hierarchy like the one shown in the figure below (roughly: root module → nf_cis_benchmark → vpc → flow_log).

My plan is to design a module that conditionally provisions S3 buckets and CloudTrail trails based on the AWS environment. My approach has been to use an expression like this:

for_each = var.environment == "billing" ? toset(["this"]) : [] 
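As an aside, it can help to keep both branches of the conditional the same type, since Terraform tries to unify the types of the two result expressions; a common variant with the same intent is:

for_each = var.environment == "billing" ? toset(["this"]) : toset([])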

For example,

resource "aws_cloudtrail" "nfcisbenchmark" {
  for_each = var.environment == "billing" ? toset(["this"]) : [] 
  name           = "${var.name}"
  s3_bucket_name                = aws_s3_bucket.nfcisbenchmark_cloudtrail[each.key].id
  enable_logging                = true
  # 3.2 Ensure CloudTrail log file validation is enabled (Automated)
  enable_log_file_validation    = true
  # 3.1 Ensure CloudTrail is enabled in all regions (Automated)
  is_multi_region_trail         = true
  # CIS Benchmark 3.1 Ensure CloudTrail is enabled in all regions
  # ensuring that a multi-regions trail exists will ensure that Global Service Logging
  # is enabled for a trail by default to capture recording of events generated on AWS
  # global services
  include_global_service_events = true
  is_organization_trail         = "${var.environment == "billing"? true : false}"
  # 3.7 Ensure CloudTrail logs are encrypted at rest using KMS CMKs (Automated)
  kms_key_id                    = aws_kms_key.nfcisbenchmark.arn
  depends_on                    = [aws_s3_bucket.nfcisbenchmark_cloudtrail]
  cloud_watch_logs_role_arn     = aws_iam_role.cloudwatch.arn
  cloud_watch_logs_group_arn    = "${aws_cloudwatch_log_group.nfcisbenchmark.arn}:*"

  event_selector {
    # 3.11 Ensure that Object-level logging for read events is enabled for S3 bucket (Automated)
    read_write_type           = "All"
    include_management_events = true
  }

  // Tags
  tags = {
    Name              = "${var.name}-cloudtrail"
    cost_environment  = "${var.environment == "production"? "production" : "non-production"}"
    cost_category     = "SEC"
    cost_team_owner   = "MOPRAV"
  }
}

In this scenario the CloudTrail trail should only be created if the environment is "billing". I am facing issues with creating the S3 buckets conditionally. Given,

resource "aws_s3_bucket" "nfcisbenchmark_cloudtrail" {
  for_each      = var.environment == "logging" ? toset(["this"]) : []
  bucket        = var.nf_logging_bucket_name
  acl           = "private"
  force_destroy = true
  # 3.6 Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket (Automated)
  logging {
    target_bucket = aws_s3_bucket.log_bucket_cloudtrail[each.key]
    target_prefix = "log/"
  }
}

Every instance of aws_s3_bucket.log_bucket_cloudtrail needs to be referenced with each.key.
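For reference, a minimal sketch of the companion access-log bucket keyed the same way might look like this (the bucket name and ACL here are assumptions, not the original code):

resource "aws_s3_bucket" "log_bucket_cloudtrail" {
  for_each = var.environment == "logging" ? toset(["this"]) : toset([])
  bucket   = "${var.nf_logging_bucket_name}-access-logs" # hypothetical name
  acl      = "log-delivery-write"                        # assumption: access-log target ACL
}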

Expected:
run terraform apply and the resources are provisioned

Actual:

on modules/nf_cis_benchmark/s3.tf line 45, in resource "aws_s3_bucket" "nfcisbenchmark_cloudtrail":
│   45:     target_bucket = aws_s3_bucket.log_bucket_cloudtrail[each.key]
│     ├────────────────
│     │ aws_s3_bucket.log_bucket_cloudtrail is object with 1 attribute "this"
│     │ each.key is "this"

Any advice on how to remedy this issue would be greatly appreciated.

Hi @EvanGertis,

It looks like you’ve only shared part of the error message here; unfortunately it’s the portion that identifies which part of the configuration caused the error, not the lines that say what the error actually is.

Could you share the full error message? The vertical bar characters on the far left of the messages delimit the error, and I’d like to see all of the lines that are grouped together in that way in order to understand what exactly Terraform was reporting.

@apparentlymart I’ve modified the style for conditionally provisioning the resource. The example above was for a CloudTrail trail. However, I’m trying to repeat the same pattern for creating VPC flow logs. In the code below,

module "ap-east-1" {
  count  = local.environments[terraform.workspace] != "logging" || local.environments[terraform.workspace] != "billing" ? 1 : 0
  source = "./modules/flow_log"
  providers = {
    aws = aws.ap-east-1
  }
  log_destination      = module.nf_cis_benchmark.aws_s3_bucket_vpc_flow_log[count.index]
  log_destination_type = "s3"
  traffic_type         = "REJECT"

  aws_vpc_ids = data.aws_vpcs.ap-east-1.ids
}

I am trying to conditionally create a VPC flow log based on the environment in use. I’ve defined the output aws_s3_bucket_vpc_flow_log from the nf_cis_benchmark module as

output "aws_s3_bucket_vpc_flow_log" {
    value = "${aws_s3_bucket.vpc_flow_log}"
}
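If downstream callers only need specific attributes, outputting those directly avoids handing a whole resource object around; a sketch under that assumption:

output "aws_s3_bucket_vpc_flow_log_arn" {
  value = aws_s3_bucket.vpc_flow_log.arn
}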

However, when I run terraform plan I get the following error message back:

────────────────
│     │ count.index is 0
│     │ module.nf_cis_benchmark.aws_s3_bucket_vpc_flow_log is object with 26 attributes
│ 
│ The given key does not identify an element in this collection value. An object only supports looking up attributes by name, not by numeric index.
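The value here is a single bucket resource exported as a whole object, so it has to be indexed by attribute name rather than by position; for example (attribute chosen for illustration, since aws_flow_log expects the destination ARN for S3):

log_destination = module.nf_cis_benchmark.aws_s3_bucket_vpc_flow_log.arn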

I am making progress on the restructuring. At this point I have created a root module

nf_cis_benchmark

which configures CloudTrail, CloudWatch metrics, Config, IAM, KMS, S3, SNS, SQS, organization, and VPC resources. This module then calls another module

vpc

The vpc module then takes inputs from the nf_cis_benchmark module

module "vpc" {
  count               = var.environment != "logging" || var.environment != "billing" ? 1 : 0
  source              = "./modules/vpc"
  aws_s3_bucket       = aws_s3_bucket.vpc_flow_log.id
  aws_s3_bucket_arn   = aws_s3_bucket.vpc_flow_log.arn
  workspace_iam_roles = var.workspace_iam_roles
}

which then calls a flow_log module for each respective AWS region, like so:

module "af-south-1" {
  source = "./modules/flow_log"
  providers = {
    aws = aws.af-south-1
  }
  log_destination      = module.nf_cis_benchmark.aws_s3_bucket_vpc_flow_log
  log_destination_type = "s3"
  traffic_type         = "REJECT"
  aws_vpc_ids          = data.aws_vpcs.af-south-1.ids
}

Each provider is declared in main.tf of the vpc module like so

provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn = var.workspace_iam_roles
  }
}

provider "aws" {
  alias  = "af-south-1"
  region = "af-south-1"

  assume_role {
    role_arn = var.workspace_iam_roles
  }
}

However, when I run terraform plan I get an unexpected error.

 Error: Module module.nf_cis_benchmark.module.vpc contains provider configuration
│ 
│ Providers cannot be configured within modules using count, for_each or depends_on.

Hi @EvanGertis,

It sounds like you have a provider block inside your VPC module, which is a legacy pattern no longer recommended, and indeed explicitly not supported for a module you intend to use with count or for_each.

Provider configurations belong in your root module, but you can pass them in to a child module in order to give the child module access to additional (that is, “aliased”) provider configurations.

Inside the VPC module you might declare the following to say that the module expects two AWS provider configurations with the given aliases:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"

      configuration_aliases = [ aws.us-east-1, aws.af-south-1 ]
    }
  }
}

(note: this configuration_aliases mechanism is a relatively new syntax in Terraform. If you’re using a Terraform version prior to v1.0 then you may need to write this differently; if you let me know which version you are using then I can hopefully translate it to the older variant needed for that version.)
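For reference, on Terraform v0.13/v0.14 the equivalent declaration was an empty "proxy" provider configuration block inside the child module, containing nothing but the alias:

# pre-v0.15 "proxy" declarations inside the child module
provider "aws" {
  alias = "us-east-1"
}

provider "aws" {
  alias = "af-south-1"
}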

Now in your root module you can declare those two provider configurations, and pass them in to the VPC module explicitly so that the VPC module can use both of them:

provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn = var.workspace_iam_roles
  }
}

provider "aws" {
  alias = "af-south-1"

  region = "af-south-1"
 
  assume_role {
    role_arn = var.workspace_iam_roles
  }
}

module "vpc" {
  source = "./modules/vpoc"
  count  = var.environment != "logging" || var.environment != "billing" ? 1 : 0 
  providers = {
    aws.us-east-1 = aws
    aws.af-south-1 = aws.af-south-1
  }

  aws_s3_bucket       = aws_s3_bucket.vpc_flow_log.id
  aws_s3_bucket_arn   = aws_s3_bucket.vpc_flow_log.arn
  workspace_iam_roles = var.workspace_iam_roles
}

The special providers argument in a module block overrides Terraform’s default behavior of just passing the default (unaliased) configuration of each provider into the child module, so you can specify exactly which configurations the module should use. In this case, I passed the root module’s default AWS provider configuration as the child module’s aws.us-east-1 configuration, and the root module’s aws.af-south-1 configuration as the child module’s aws.af-south-1 configuration.

Inside the module then you can use provider = aws.us-east-1 or provider = aws.af-south-1 in each of the resource or data blocks to specify which of the two configurations each resource should belong to.
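For instance, a resource pinned to one of the passed-in configurations might look like this (the resource itself is purely illustrative):

resource "aws_s3_bucket" "regional_logs" {
  provider = aws.af-south-1 # use the aliased configuration passed in by the caller
  bucket   = "example-af-south-1-logs"
}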

Thank you for explaining that. I’ve now passed the providers into the nf_cis_benchmark module:

module "nf_cis_benchmark" {
  source              = "./modules/nf_cis_benchmark"
  name                = local.name
  environment         = local.environments[terraform.workspace]
  region              = data.aws_region.current.name
  organization_id     = data.aws_organizations_organization.org.id
  account_id          = data.aws_caller_identity.current.account_id
  workspace           = terraform.workspace
  workspace_iam_roles = var.workspace_iam_roles[terraform.workspace]
  providers = {
    aws.us-east-1      = aws
    aws.af-south-1     = aws.af-south-1
    aws.ap-east-1      = aws.ap-east-1
    aws.ap-northeast-1 = aws.ap-northeast-1
    aws.ap-northeast-2 = aws.ap-northeast-2
    aws.ap-south-1     = aws.ap-south-1
    aws.ap-southeast-1 = aws.ap-southeast-1
    aws.ap-southeast-2 = aws.ap-southeast-2
    aws.ca-central-1   = aws.ca-central-1
    aws.eu-central-1   = aws.eu-central-1
    aws.eu-north-1     = aws.eu-north-1
    aws.eu-south-1     = aws.eu-south-1
    aws.eu-west-1      = aws.eu-west-1
    aws.eu-west-2      = aws.eu-west-2
    aws.eu-west-3      = aws.eu-west-3
    aws.me-south-1     = aws.me-south-1
    aws.sa-east-1      = aws.sa-east-1
    aws.us-east-2      = aws.us-east-2
    aws.us-west-1      = aws.us-west-1
    aws.us-west-2      = aws.us-west-2
  }
}
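For this providers map to resolve, the nf_cis_benchmark module itself needs to declare every alias it expects to receive; a truncated sketch of its terraform block:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      configuration_aliases = [
        aws.us-east-1,
        aws.af-south-1,
        # ...one entry per region alias passed in above
      ]
    }
  }
}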

Then the nf_cis_benchmark module should call the vpc module which uses the providers like so

module "vpc" {
  count               = var.environment != "logging" || var.environment != "billing" ? 1 : 0
  source              = "./modules/vpc"
  aws_s3_bucket       = aws_s3_bucket.vpc_flow_log.id
  aws_s3_bucket_arn   = aws_s3_bucket.vpc_flow_log.arn
  workspace_iam_roles = var.workspace_iam_roles
  providers = {
    aws.us-east-1      = aws
    aws.af-south-1     = aws.af-south-1
    aws.ap-east-1      = aws.ap-east-1
    aws.ap-northeast-1 = aws.ap-northeast-1
    aws.ap-northeast-2 = aws.ap-northeast-2
    aws.ap-south-1     = aws.ap-south-1
    aws.ap-southeast-1 = aws.ap-southeast-1
    aws.ap-southeast-2 = aws.ap-southeast-2
    aws.ca-central-1   = aws.ca-central-1
    aws.eu-central-1   = aws.eu-central-1
    aws.eu-north-1     = aws.eu-north-1
    aws.eu-south-1     = aws.eu-south-1
    aws.eu-west-1      = aws.eu-west-1
    aws.eu-west-2      = aws.eu-west-2
    aws.eu-west-3      = aws.eu-west-3
    aws.me-south-1     = aws.me-south-1
    aws.sa-east-1      = aws.sa-east-1
    aws.us-east-2      = aws.us-east-2
    aws.us-west-1      = aws.us-west-1
    aws.us-west-2      = aws.us-west-2
  }
}

The vpc module then calls a flow_log module like so:


module "af-south-1" {
  source = "./modules/flow_log"
  providers = {
    aws = aws.af-south-1
  }
  log_destination      = module.nf_cis_benchmark.aws_s3_bucket_vpc_flow_log
  log_destination_type = "s3"
  traffic_type         = "REJECT"
  aws_vpc_ids          = data.aws_vpcs.af-south-1.ids
}

The main.tf of the flow_log module looks like so:

resource "aws_flow_log" "flow_log" {
  count                = length(var.aws_vpc_ids)
  log_destination      = var.log_destination
  log_destination_type = var.log_destination_type
  traffic_type         = var.traffic_type
  vpc_id               = var.aws_vpc_ids[count.index]
  depends_on           = [var.log_destination]

  # Tags
  tags = {
    Name             = var.aws_vpc_ids[count.index]
    cost_environment = local.environments[terraform.workspace] == "production" ? "production" : "non-production"
    cost_category    = "SEC"
    cost_team_owner  = "MOPRAV"
  }
}
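For completeness, the flow_log module presumably declares matching input variables; a minimal sketch with types inferred from their usage above:

variable "log_destination" {
  type = string # destination ARN for the flow logs
}

variable "log_destination_type" {
  type = string # e.g. "s3"
}

variable "traffic_type" {
  type = string # "ACCEPT", "REJECT", or "ALL"
}

variable "aws_vpc_ids" {
  type = list(string) # one flow log is created per VPC ID
}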

The question is how to pass the providers from the root module through nf_cis_benchmark to vpc so that each flow_log module is created in the appropriate AWS account.

The error that I get is

 Error: Cannot override provider configuration
│ 
│   on modules/nf_cis_benchmark/vpc.tf line 26, in module "vpc":
│   26:         aws.us-west-1  = aws.us-west-1,
│ 
│ Provider aws.us-west-1 is configured within the module module.nf_cis_benchmark.module.vpc and cannot be overridden.

I am trying to follow the documentation listed here: Providers Within Modules - Configuration Language - Terraform by HashiCorp.

Hi @EvanGertis,

I think that message is saying that there is still a provider "aws" block inside the module with alias = "us-west-1" and so the explicitly-configured one is conflicting with the passed-in one. Does that seem like a plausible explanation?

After removing the provider blocks with alias = "us-west-1" from the module and rerunning terraform init, I get the following error:

│ Error: Failed to query available provider packages
│ 
│ Could not retrieve the list of available versions for provider hashicorp/aws: no available releases match the given constraints >= 13.7.0

Hmm this message is from the provider installer, which only cares about providers themselves, not about provider configurations. That is, it only uses the fact that your configuration depends on hashicorp/aws, and doesn’t make any use of the different configurations you’ve written out for it.

So it seems like something else is going on here which perhaps the other error was just masking before. Can you run terraform providers and see if the provider dependencies in there seem reasonable? Reasonable here means that all of the modules which use the AWS provider have a valid version constraint, or no constraints at all.

If you have any custom provider installation settings in your CLI Configuration then that could be relevant too, because it might make your Terraform have a narrower view of which versions are available than it would have if it contacted the origin registry directly.

 Warning: Provider aws.us-west-1 is undefined
│ 
│   on modules/nf_cis_benchmark/vpc.tf line 26, in module "vpc":
│   26:         aws.us-west-1  = aws.us-west-1,
│ 
│ Module module.nf_cis_benchmark.module.vpc does not declare a provider
│ named aws.us-west-1.
│ If you wish to specify a provider configuration for the module, add an
│ entry for aws.us-west-1 in the required_providers block within the
│ module.

The goal is to pass the providers from the top module nf_cis_benchmark down to vpc and then down to flow_log. Within the vpc module I have a main.tf configuration file that includes the following config block:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 13.7.0"
    }
  }
}

Does this need to be modified?

The main.tf configuration file for the nf_cis_benchmark module is constructed like so:

provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

provider "aws" {
  alias  = "af-south-1"
  region = "af-south-1"

  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

# ...plus one identical aliased provider block per remaining region, varying
# only the alias and region: ap-east-1, ap-northeast-1, ap-northeast-2,
# ap-south-1, ap-southeast-1, ap-southeast-2, ca-central-1, eu-central-1,
# eu-north-1, eu-south-1, eu-west-1, eu-west-2, eu-west-3, me-south-1,
# sa-east-1, us-east-2, us-west-1, us-west-2

terraform {
  required_version = ">= 0.13.7"

  backend "s3" {
    bucket         = "nf-mop-tf-state"
    key            = "security/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "nf-terraform-state-lock"
  }
}

# Current Account ID
data "aws_caller_identity" "current" {}

data "aws_region" "current" {}

The main.tf configuration file for the vpc module is constructed like so

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 13.7.0"
      configuration_aliases = [ 
        aws.af-south-1, 
        aws.ap-east-1, 
        aws.ap-northeast-1,
        aws.ap-northeast-2,
        aws.ap-south-1,
        aws.ap-southeast-1,
        aws.ap-southeast-2,
        aws.ca-central-1,
        aws.eu-central-1,
        aws.eu-north-1,
        aws.eu-south-1,
        aws.eu-west-1,
        aws.eu-west-2,
        aws.eu-west-3,
        aws.me-south-1,
        aws.sa-east-1,
        aws.us-east-2,
        aws.us-west-1,
        aws.us-west-2
      ]
    }
  }
}

I am passing the providers from the nf_cis_benchmark module down to the vpc module like so

module "vpc" {
  count               = var.environment != "logging" || var.environment != "billing" ? 1 : 0
  source              = "./modules/vpc"
  aws_s3_bucket       = aws_s3_bucket.vpc_flow_log.id
  aws_s3_bucket_arn   = aws_s3_bucket.vpc_flow_log.arn
  workspace_iam_roles = var.workspace_iam_roles
  providers = {
    aws.us-east-1      = aws
    aws.af-south-1     = aws.af-south-1
    aws.ap-east-1      = aws.ap-east-1
    aws.ap-northeast-1 = aws.ap-northeast-1
    aws.ap-northeast-2 = aws.ap-northeast-2
    aws.ap-south-1     = aws.ap-south-1
    aws.ap-southeast-1 = aws.ap-southeast-1
    aws.ap-southeast-2 = aws.ap-southeast-2
    aws.ca-central-1   = aws.ca-central-1
    aws.eu-central-1   = aws.eu-central-1
    aws.eu-north-1     = aws.eu-north-1
    aws.eu-south-1     = aws.eu-south-1
    aws.eu-west-1      = aws.eu-west-1
    aws.eu-west-2      = aws.eu-west-2
    aws.eu-west-3      = aws.eu-west-3
    aws.me-south-1     = aws.me-south-1
    aws.sa-east-1      = aws.sa-east-1
    aws.us-east-2      = aws.us-east-2
    aws.us-west-1      = aws.us-west-1
    aws.us-west-2      = aws.us-west-2
  }
}

The unexpected error that I receive when I run terraform init is

│ Error: Failed to query available provider packages
│ 
│ Could not retrieve the list of available versions for provider hashicorp/aws: locked provider registry.terraform.io/hashicorp/aws 3.57.0 does not match configured version constraint >= 13.7.0; must use terraform init -upgrade to allow
│ selection of new versions

After deleting the .terraform.lock.hcl file, I now get

│ Error: Failed to query available provider packages
│ 
│ Could not retrieve the list of available versions for provider hashicorp/aws: no available releases match the given constraints >= 13.7.0

Hi @EvanGertis,

Looking a bit closer at exactly what that error message is reporting, it seems to be correct that there aren’t any released versions of hashicorp/aws that match that constraint: the current newest version I see in the registry is 3.65.0.

Do you think that constraint might’ve been intended to be >= 3.7.0 instead, which would then admit the latest version of the provider currently published in the registry?
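In other words, the constraint would become:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.7.0"
    }
  }
}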


@apparentlymart Thank you, I’ve moved past that.

I’m now running into an issue configuring the provider in the vpc module:

# sa-east-1
data "aws_vpcs" "sa-east-1" {
  provider = aws.sa-east-1
}

module "sa-east-1" {
  source = "./modules/flow_log"
  providers = {
    aws.sa-east-1 = aws.sa-east-1
  }
  log_destination      = var.aws_s3_bucket
  log_destination_type = "s3"
  traffic_type         = "REJECT"

  aws_vpc_ids = data.aws_vpcs.sa-east-1.ids
}


│ Warning: Provider aws.sa-east-1 is undefined
│ 
│   on modules/nf_cis_benchmark/modules/vpc/vpc.tf line 201, in module "sa-east-1":
│  201:         aws.sa-east-1 = aws.sa-east-1
│ 
│ Module module.nf_cis_benchmark.module.vpc.module.sa-east-1 does not declare a provider named aws.sa-east-1.
│ If you wish to specify a provider configuration for the module, add an entry for aws.sa-east-1 in the required_providers block within the module.

This message is saying that inside your ./modules/flow_log there isn’t a declaration that this module expects an aliased provider configuration named sa-east-1.

That is, the configuration_aliases argument we were talking about earlier:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"

      configuration_aliases = [ aws.sa-east-1 ]
    }
  }
}

This is how Terraform will know that this module expects a configuration with that alias and thus resolve the providers argument in the module call.
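With that declaration in place, the call site can then map one of the parent's configurations onto the declared alias, along these lines:

module "sa-east-1" {
  source = "./modules/flow_log"
  providers = {
    aws.sa-east-1 = aws.sa-east-1
  }
  # ...other arguments as before
}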

Thank you, I believe you bring up a good point. However, the module nf_cis_benchmark calls vpc, which then calls flow_log, and we need to create a flow_log per AWS region, i.e.

# sa-east-1
data "aws_vpcs" "sa-east-1" {
  provider = aws.sa-east-1
}

module "sa-east-1" {
  source = "./modules/flow_log"
  providers = {
    aws = aws.sa-east-1
  }
  log_destination      = var.aws_s3_bucket
  log_destination_type = "s3"
  traffic_type         = "REJECT"

  aws_vpc_ids = data.aws_vpcs.sa-east-1.ids
}

Notice that I changed

aws.sa-east-1 = aws.sa-east-1

to

aws = aws.sa-east-1

I fixed it by adding a main.tf configuration file to the flow_log module like so


terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
      configuration_aliases = [ aws ]
    }
  }
}