Multiple provider aliases calling a single module

We’re trying to deploy a single module, which creates an EC2 instance running HashiCorp Boundary as a worker, onto multiple AWS accounts.

This runs inside a CI/CD pipeline in a Docker container, and the problem is that the container gets killed by the oom-killer. As I understand it, each provider alias starts its own provider plugin process, so memory usage multiplies with the number of aliases, which kills the container.

We tried setting -parallelism=1, which made no difference.

Any suggestions on how to deploy a single module onto multiple AWS accounts?

There are currently 12 provider blocks in total, but we have only been able to deploy 7 of them successfully.
Here is an example from our current providers.tf:

provider "aws" {
  alias  = "account_a"
  region = local.region
  assume_role {
    role_arn    = var.cicd_role_a
    external_id = var.external_id
  }
}

module "worker-a" {
  source = "./modules"
  providers = {
    aws = aws.account_a
  }
  vpc_id                       = "vpc-0cf8db565b93419"
  subnet_id                    = "subnet-0df6b0c846553da"
  ec2_target_security_group_id = ["sg-005b59f7e8531da"]
}

provider "aws" {
  alias  = "account_b"
  region = local.region
  assume_role {
    role_arn    = var.cicd_role_b
    external_id = var.external_id
  }
}

module "worker-b" {
  source = "./modules"
  providers = {
    aws = aws.account_b
  }
  vpc_id                       = "vpc-03a09510fdf9015"
  subnet_id                    = "subnet-013994fddbbfd0e"
  ec2_target_security_group_id = ["sg-0debe481da6b8a2"]
}
...
plus 9 more similar provider/module pairs
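
For context, the shared module's input interface presumably looks something like this (a sketch reconstructed from the module calls above; the types are inferred from the values and the descriptions are assumptions):

# ./modules/variables.tf (sketch; variable names taken from the calls above)
variable "vpc_id" {
  type        = string
  description = "VPC the Boundary worker is deployed into"
}

variable "subnet_id" {
  type        = string
  description = "Subnet for the worker's EC2 instance"
}

variable "ec2_target_security_group_id" {
  type        = list(string)
  description = "Security group IDs attached to the worker instance"
}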

Realistically? Give the container more memory, or split this into 12 separate Terraform configurations that run independently.

Each provider configuration, aliases included, runs as its own plugin process, and an AWS provider process can easily consume a few hundred MB, so 12 aliases in one configuration multiplies the footprint accordingly. That's also why -parallelism=1 made no difference: it only limits how many resource operations run concurrently; it doesn't reduce the number of provider processes Terraform keeps running.
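
A minimal sketch of the split, assuming one root configuration per account (the paths and variable names here are illustrative, not taken from the original setup):

# accounts/account_a/main.tf -- one root configuration per account (sketch)
provider "aws" {
  region = var.region

  assume_role {
    role_arn    = var.cicd_role    # per-account role ARN, passed in by the pipeline
    external_id = var.external_id
  }
}

module "worker" {
  source = "../../modules"         # the shared Boundary worker module

  vpc_id                       = var.vpc_id
  subnet_id                    = var.subnet_id
  ec2_target_security_group_id = var.ec2_target_security_group_id
}

The pipeline then runs terraform init/plan/apply in each account directory in turn, so only one AWS provider process is alive at any time. Each account also gets its own state, so a failure in one account no longer blocks the others.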