Terraform code with multiple state files and Bitbucket pipelines

Hello all.

I’m slowly moving infrastructure from CloudFormation to Terraform, and I’ve run into a problem: everything is getting too big and too complicated!

The way I decided to structure everything is:

├── CHANGELOG.md
├── README.md
├── bitbucket-pipelines.yml
├── data.tf
├── env.tf
├── main.tf
├── modules
│   ├── acm
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── application
│   │   ├── README.md
│   │   ├── alb-listener-rules.tf
│   │   ├── alb.tf
│   │   ├── data.tf
│   │   ├── env.tf
│   │   ├── outputs.tf
│   │   ├── route53.tf
│   │   ├── s3.tf
│   │   ├── security-groups.tf
│   │   └── variables.tf
│   ├── database
│   │   ├── README.md
│   │   ├── cloudwatch.tf
│   │   ├── data.tf
│   │   ├── ebs-awsbau-masterdb.tf
│   │   ├── ebs-prod-master.tf
│   │   ├── ec2-awsbau-masterdb.tf
│   │   ├── ec2-prod-master.tf
│   │   ├── efs.tf
│   │   ├── env.tf
│   │   ├── iam.tf
│   │   ├── kms.tf
│   │   ├── outputs.tf
│   │   ├── route53.tf
│   │   ├── s3.tf
│   │   ├── scripts
│   │   │   ├── user_data.sh
│   │   │   ├── user_data_pgpool.sh
│   │   │   └── user_data_slave01.sh
│   │   ├── security-groups-rules.tf
│   │   ├── security-groups.tf
│   │   ├── ssh-key.tf
│   │   └── variables.tf
│   ├── monitoring
│   │   ├── README.md
│   │   ├── cloudwatch.tf
│   │   ├── data.tf
│   │   ├── ec2.tf
│   │   ├── env.tf
│   │   ├── iam.tf
│   │   ├── kms.tf
│   │   ├── outputs.tf
│   │   ├── rds.tf
│   │   ├── route53.tf
│   │   ├── scripts
│   │   │   ├── user_data.sh
│   │   │   └── user_data_zabbix.sh
│   │   ├── security-groups.tf
│   │   ├── ssm.tf
│   │   └── variables.tf
│   ├── network
│   │   ├── README.md
│   │   ├── bastion.tf
│   │   ├── data.tf
│   │   ├── env.tf
│   │   ├── iam.tf
│   │   ├── kms.tf
│   │   ├── outputs.tf
│   │   ├── peering.tf
│   │   ├── provider.tf
│   │   ├── route53.tf
│   │   ├── scripts
│   │   │   └── user_data.sh
│   │   ├── security-group.tf
│   │   ├── variables.tf
│   │   ├── versions.tf
│   │   └── vpc.tf
│   ├── route53
│   │   ├── README.md
│   │   ├── data.tf
│   │   ├── env.tf
│   │   ├── iam.tf
│   │   ├── outputs.tf
│   │   ├── private-hosted-zones.tf
│   │   ├── provider.tf
│   │   ├── public-hosted-zones.tf
│   │   ├── variables.tf
│   │   └── versions.tf
│   └── route53-redirect
│       ├── README.md
│       ├── acm_certificate.tf
│       ├── alb-listener-rule.tf
│       ├── alb-listener.tf
│       ├── bitbucket-pipelines.yml
│       ├── cloudfront.tf
│       ├── data.tf
│       ├── env.tf
│       ├── route53.tf
│       ├── s3.tf
│       ├── scripts
│       │   └── s3_bucket_objects
│       │       ├── 76636065.png
│       │       ├── error.html
│       │       └── index.html
│       └── variables.tf
├── notify-slack-requirements.tf
├── outputs.tf
├── provider.tf
├── variables.tf
└── versions.tf

As you can see, even though route53 and network sit under modules/, they each have their own Terraform state file, which is why each of them also contains its own provider.tf (and versions.tf).
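For context, each of those standalone roots carries its own backend configuration, roughly like this (the bucket, key, and region below are placeholders, not my real values):

# modules/network/provider.tf
# This directory is effectively its own root module: it keeps its own state,
# so it configures its own backend and provider.
terraform {
  backend "s3" {
    bucket = "example-terraform-states"   # placeholder bucket name
    key    = "network/terraform.tfstate"  # a separate state file per stack
    region = "us-east-1"                  # placeholder region
  }
}

provider "aws" {
  region = "us-east-1"
}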

My biggest issue is with Bitbucket Pipelines. For each of these separate stacks I need to match on a branch-name pattern, like in the example below:

pipelines:
  branches:
    prod/*:
      - step:
          name: Security Scan
          script:
            # Run a security scan for sensitive data.
            # See more security tools at https://bitbucket.org/product/features/pipelines/integrations?&category=security
            - pipe: atlassian/git-secrets-scan:0.5.1
      - step:
          name: Terraform init
          script:
            - ./terraform init
      - step:
          name: Terraform validate
          script:
            - ./terraform workspace select prod || ./terraform workspace new prod
            - ./terraform validate
      - step:
          name: Terraform format
          script:
            - ./terraform fmt -check -recursive
      - step:
          name: Terraform plan
          oidc: true
          script:
            - ./terraform workspace select prod || ./terraform workspace new prod
            - ./terraform plan -out=plan.tfplan
      - step:
          # https://github.com/antonbabenko/terraform-cost-estimation
          name: Terraform Cost Estimation
          oidc: true
          script:
            - ./terraform workspace select prod || ./terraform workspace new prod
            - ./terraform plan -out=plan.tfplan > /dev/null && ./terraform show -json plan.tfplan | curl -s -X POST -H "Content-Type: application/json" -d @- https://cost.modules.tf/
      - step:
          name: Deploy to Production
          trigger: manual
          deployment: Production
          oidc: true
          script:
            - ./terraform workspace select prod || ./terraform workspace new prod
            - ./terraform plan -out=plan.tfplan
            - ./terraform apply -input=false -auto-approve plan.tfplan

Because I am dealing with multiple Terraform state files, I have to work like this:

pipelines:
  branches:
    prod/*:
      - step:
          # ... the full pipeline from the example above ...
    prod-network/*:
      - step:
          script:
            - cd modules/network
            - terraform init
            - terraform plan
            - terraform apply
    prod-route53/*:
      - step:
          script:
            - cd modules/route53
            - terraform init
            - terraform plan
            - terraform apply
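One idea I have been toying with (untested, and the anchor name is made up) is YAML anchors, which bitbucket-pipelines.yml supports: define the shared step settings once under definitions and override what differs per stack:

definitions:
  steps:
    - step: &tf-apply
        name: Terraform apply
        oidc: true
        trigger: manual
        script:
          - echo "overridden per stack below"

pipelines:
  branches:
    prod-network/*:
      - step:
          <<: *tf-apply
          name: Apply network
          script:
            - cd modules/network
            - terraform init
            - terraform plan -out=plan.tfplan
            - terraform apply -input=false plan.tfplan
    prod-route53/*:
      - step:
          <<: *tf-apply
          name: Apply route53
          script:
            - cd modules/route53
            - terraform init
            - terraform plan -out=plan.tfplan
            - terraform apply -input=false plan.tfplan

That still duplicates the script bodies, though, since YAML merge keys can't splice lists together, so it really only deduplicates the step settings.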

Do you guys have a suggestion? Am I doing it “right”?