Terraform overwrites the first S3 trigger in a Lambda deployment

Hi Team

I have deployed a Lambda function with S3 as a trigger using Terraform, but I am facing a problem.

I am adding multiple buckets as triggers, each with a prefix, as shown below. The problem becomes clear if you look at the bucket-list.json file: in my case the same bucket appears with multiple prefixes.

So when I run the Terraform code, only two triggers end up configured; the first trigger ("bucket": "bucket_test123", "prefix": "inte/") gets overwritten by Terraform.

Can you help me figure out how to preserve the first trigger while running the Terraform script dynamically?

This is my bucket-list file, which I read in my code:

cat bucket-list.json
{
  "dev2722": [
    {
      "bucket": "bucket_test123",
      "prefix": "inte/",
      "suffix": ""
    },
    {
      "bucket": "allservices-cicd",
      "prefix": "/",
      "suffix": ""
    },
    {
      "bucket": "bucket_test123",
      "prefix": "test/",
      "suffix": ""
    }
  ]
}

This is my main.tf file

data "archive_file" "zip2" {
  type        = "zip"
  source_file = var.lambda_zip_file_name
  output_path = "${var.lambda_zip_file_name}.zip"
}

resource "aws_lambda_function" "test_lambda" {
  filename         = data.archive_file.zip2.output_path
  function_name    = var.lambda_function_name
  role             = var.lambda_role
  handler          = "test.lambda_handler"
  runtime          = var.runtime_env
  source_code_hash = filebase64sha256(data.archive_file.zip2.output_path) # redeploy when the Lambda code changes
}

############ S3 bucket notification to Lambda trigger

resource "aws_lambda_permission" "allow_bucket" {
  count         = length(var.bucket_name)
  statement_id  = "AllowExecutionFromS3Bucket-${count.index}"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.test_lambda.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::${var.bucket_name[count.index]}"
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  count  = length(var.bucket_name)
  bucket = var.bucket_name[count.index]

  lambda_function {
    lambda_function_arn = aws_lambda_function.test_lambda.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = var.bucket_prefix[count.index]
    filter_suffix       = var.bucket_suffix[count.index]
  }

  lifecycle {
    ignore_changes = all
  }
}
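From what I understand, aws_s3_bucket_notification manages the entire notification configuration of a bucket, so two resource instances pointing at the same bucket overwrite each other. I was wondering whether grouping the entries by bucket and emitting one notification resource per bucket with a dynamic block would work. This is an untested sketch, and bucket_triggers is a hypothetical module variable carrying the decoded JSON list of objects instead of the three parallel lists I use today:

```hcl
# Hypothetical input: the raw list of {bucket, prefix, suffix} objects.
variable "bucket_triggers" {
  type = list(object({
    bucket = string
    prefix = string
    suffix = string
  }))
}

locals {
  # Group the flat list by bucket name (Terraform grouping mode, note the "..."),
  # e.g. { "bucket_test123" = [<inte/ entry>, <test/ entry>], ... }
  triggers_by_bucket = {
    for t in var.bucket_triggers : t.bucket => t...
  }
}

resource "aws_lambda_permission" "allow_bucket" {
  for_each      = local.triggers_by_bucket
  statement_id  = "AllowExecutionFromS3Bucket-${each.key}"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.test_lambda.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::${each.key}"
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  for_each = local.triggers_by_bucket
  bucket   = each.key

  # One lambda_function block per prefix/suffix pair on this bucket,
  # so all filters live in a single notification configuration.
  dynamic "lambda_function" {
    for_each = each.value
    content {
      lambda_function_arn = aws_lambda_function.test_lambda.arn
      events              = ["s3:ObjectCreated:*"]
      filter_prefix       = lambda_function.value.prefix
      filter_suffix       = lambda_function.value.suffix
    }
  }
}
```

Would something like this be the right direction, or is there a better pattern?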

This is my variables file:

locals {
  bucket_data = jsondecode(file("…/…/bucket-list.json"))

  # get all dev bucket info
  bucket = [for buc in local.bucket_data.dev2722 : buc.bucket]
  value  = [for val in local.bucket_data.dev2722 : val.prefix]
  key    = [for key in local.bucket_data.dev2722 : key.suffix]
}

module "my_lambda" {
  source               = "…/modules/lambda"
  lambda_function_name = "da-airflow-lambda-trigger-dag"
  lambda_zip_file_name = "…/…/test.py"
  runtime_env          = "python3.8"
  lambda_role          = "arn:aws:iam::xxxxxxxx:role/xxxxxxxxx"
  bucket_name          = local.bucket
  bucket_prefix        = local.value
  bucket_suffix        = local.key
}

I would really appreciate any solution you can suggest to overcome this issue.