I have split the AWS DMS service into 3 submodules:
dms-endpoint
dms-instance
dms-task
Endpoints and instances can be created independently, in parallel, but the DMS task needs outputs from both of those modules.
Here is the task's terragrunt.hcl file:
locals {
  # Automatically load environment-level variables
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))

  # Extract out common variables for reuse
  env = local.environment_vars.locals.environment
}
# Terragrunt will copy the Terraform configurations specified by the source parameter,
# along with any files in the working directory, into a temporary folder, and execute
# your Terraform commands in that folder.
terraform {
  source = "../../../../../modules/aws-dms-task/"
  // source  = "git::git@github.com:gruntwork-io/terragrunt-infrastructure-modules-example.git//mysql?ref=v0.4.0"
  // source  = "terraform-aws-modules/rds/aws"
  // version = "~> 3.0"
}
# Include all settings from the root terragrunt.hcl file
include {
  path = find_in_parent_folders()
}

dependencies {
  paths = ["../aws-dms-instance", "../aws-dms-endpoint"]
}
dependency "aws_dms_instance" {
  config_path  = "../aws-dms-instance"
  // config_path = find_in_parent_folders("aws_dms_instance")
  skip_outputs = false

  // https://myshittycode.com/2019/10/30/terragrunt-plan-all-while-passing-outputs-between-modules/
  // If the infra is completely fresh, the target module cannot obtain any dependency
  // outputs, because they don't exist yet.
  mock_outputs = {
    replication_instance_small_arn = {
      replication_instance_arn = "sample-arn"
    }
  }
}
dependency "aws_dms_endpoint" {
  config_path  = "../aws-dms-endpoint"
  // config_path = find_in_parent_folders("aws_dms_endpoint")
  skip_outputs = false

  // https://myshittycode.com/2019/10/30/terragrunt-plan-all-while-passing-outputs-between-modules/
  // If the infra is completely fresh, the target module cannot obtain any dependency
  // outputs, because they don't exist yet.
  mock_outputs = {
    aws_dms_endpoint_source = {
      source_endpoint_arn = "sample-arn"
    }
    aws_dms_endpoint_target = {
      target_endpoint_arn = "sample-arn"
    }
  }
}
# These are the variables we have to pass in to use the module specified in the
# terragrunt configuration above.
inputs = {
  task_configuration_small = {
    small = {
      migration_type           = "full-load-and-cdc"
      replication_instance_arn = lookup(dependency.aws_dms_instance.outputs.replication_instance_small_arn, "small1", "")
      replication_task_id      = "testing-task"
      // The key fwgodba1_sql2k12_dickies_bi is a pointer that can be constructed/generated
      // from an Excel control table.
      source_endpoint_arn       = lookup(dependency.aws_dms_endpoint.outputs.aws_dms_endpoint_source, "fwgodba1_sql2k12_dickies_bi", "")
      target_endpoint_arn       = lookup(dependency.aws_dms_endpoint.outputs.aws_dms_endpoint_target, "aws_redshift", "")
      table_mappings            = file("fwgodba1_sql2k12/dickies_bi/small/table_mapping/dms.table_mapping.fwgodba1_sql2k12.dickies_bi.dbo.small.json")
      replication_task_settings = file("fwgodba1_sql2k12/dickies_bi/small/task_settings/dms.taks_settings.fwgodba1_sql2k12.dickies_bi.dbo.small.json")
    }
  }
}
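For context on how the mocks interact with lookup(): the mock map for the instance dependency only contains the key replication_instance_arn, so a lookup for "small1" against the mock falls through to the empty-string default. Below is a sketch (not my actual config) of the same dependency block with mocks restricted to plan-time commands via Terragrunt's mock_outputs_allowed_terraform_commands attribute:

```hcl
# Sketch only: same dependency as above, but mocks are limited to validate/plan,
# so apply must use the real dependency outputs instead of the mock values.
dependency "aws_dms_instance" {
  config_path = "../aws-dms-instance"

  mock_outputs_allowed_terraform_commands = ["validate", "plan"]
  mock_outputs = {
    # Note: lookup(outputs.replication_instance_small_arn, "small1", "") against
    # this mock returns the "" default, because the mock map has no "small1" key.
    replication_instance_small_arn = {
      replication_instance_arn = "sample-arn"
    }
  }
}
```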
Now, when I run `terragrunt run-all apply tfplan` in the root directory for the first time, when no previous infra exists, I get this exception:
aws_dms_replication_task.replication_task_small["small"]: Creating...
╷
│ Error: InvalidParameterValueException: The parameter EndpointArn must be provided and must not be blank.
│ status code: 400, request id: 6fc79681-929b-44d0-8a0e-0073cb9d8e38
│
│ with aws_dms_replication_task.replication_task_small["small"],
│ on main.tf line 2, in resource "aws_dms_replication_task" "replication_task_small":
│ 2: resource "aws_dms_replication_task" "replication_task_small" {
│
╵
But when running it a second time, everything is fine, because the endpoint module returns its outputs without issues. Is this an issue with my config, with Terragrunt, or with the Terraform AWS provider?