Redshift cluster logging setup works in console, not in Terraform

I’ve been troubleshooting this for too long and could use some help from someone who has done it. I’ve created a fork of the AWS Redshift module and added my own pieces: an S3 bucket for logging, the roles Redshift needs for S3 access, key grants in a backup snapshot region, and so on. These are the parts of the module relevant to the issue:

data "aws_redshift_service_account" "this" {
  provider = aws.primary
}

resource "aws_s3_bucket" "logs" {
  provider = aws.primary
  bucket = "${var.environment_prefix}-redshift-logs"

  force_destroy = true

  tags = merge(
    local.tags,
    var.tags,
    { "Name" = "${var.environment_prefix}-redshift-logs" }
  )
}

resource "aws_s3_bucket_policy" "logs" {
  provider = aws.primary
  bucket = aws_s3_bucket.logs.id
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "${data.aws_redshift_service_account.this.arn}"
        ]
      },
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::${var.environment_prefix}-redshift-logs/*"
      ]
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "${data.aws_redshift_service_account.this.arn}"
        ]
      },
      "Action": [
        "s3:GetBucketAcl"
      ],
      "Resource": [
        "arn:aws:s3:::${var.environment_prefix}-redshift-logs"
      ]
    }
  ]
}
EOF
}
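
(Side note: the same policy can also be built with an aws_iam_policy_document data source, which catches JSON mistakes at plan time. A sketch of the same two statements, reusing the module’s existing names:)

data "aws_iam_policy_document" "logs" {
  # Statement 1: let the Redshift service account write log objects
  statement {
    effect    = "Allow"
    actions   = ["s3:PutObject"]
    resources = ["${aws_s3_bucket.logs.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = [data.aws_redshift_service_account.this.arn]
    }
  }

  # Statement 2: let it read the bucket ACL
  statement {
    effect    = "Allow"
    actions   = ["s3:GetBucketAcl"]
    resources = [aws_s3_bucket.logs.arn]

    principals {
      type        = "AWS"
      identifiers = [data.aws_redshift_service_account.this.arn]
    }
  }
}

The bucket policy resource would then set policy = data.aws_iam_policy_document.logs.json instead of the heredoc.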

I’m not convinced this is needed, but I also created an IAM role that Redshift can assume, with S3 access attached:

data "aws_iam_policy" "s3" {
  provider = aws.primary
  name = "AmazonS3FullAccess"
}

data "aws_iam_policy_document" "assume_role" {
  provider = aws.primary

  statement {
    actions = [
      "sts:AssumeRole",
    ]

    principals {
      type        = "Service"
      identifiers = ["redshift.amazonaws.com"]
    }

    effect = "Allow"
  }
}

resource "aws_iam_role" "assume_role" {
  provider = aws.primary
  name                = "RedshiftS3LogAccess"
  assume_role_policy  = data.aws_iam_policy_document.assume_role.json
  managed_policy_arns = [
    data.aws_iam_policy.s3.arn
  ]

  tags = merge(
    local.tags,
    var.tags,
    { "Name" = "RedshiftS3LogAccess" }
  )
}
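
(If the role does turn out to be necessary, I’d rather scope it down from AmazonS3FullAccess to just the log bucket. An untested sketch; the "logs_only" and "RedshiftLogBucketAccess" names are hypothetical:)

resource "aws_iam_policy" "logs_only" {
  provider = aws.primary
  name     = "RedshiftLogBucketAccess" # hypothetical name

  # Grants only what audit logging needs, limited to the log bucket
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = ["s3:PutObject", "s3:GetBucketAcl"]
        Resource = [
          aws_s3_bucket.logs.arn,
          "${aws_s3_bucket.logs.arn}/*"
        ]
      }
    ]
  })
}

The role’s managed_policy_arns would then reference aws_iam_policy.logs_only.arn instead of the AmazonS3FullAccess data source.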

I then attached the role and turned on logging in the aws_redshift_cluster resource:

resource "aws_redshift_cluster" "this" {
  provider = aws.primary

  ...

  # Logging
  logging {
    enable        = var.enable_logging
    bucket_name   = aws_s3_bucket.logs.arn
    s3_key_prefix = local.s3_log_prefix
  }

  ...

  iam_roles = concat(
    [aws_iam_role.assume_role.arn],
    var.cluster_iam_roles
  )

  ...
}

I’m running this Terraform under an assumed role, like this:

provider "aws" {
  alias  = "primary"
  region = local.region

  # Assumes the role in the warehouse account that allows full access
  # to create and destroy Redshift clusters in production
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/WarehouseFullAccess"
  }
}

It always fails with an InsufficientS3BucketPolicyFault telling me to check my IAM permissions. Yet when I sign in to the console as the same IAM user this Terraform runs as, switch to the same WarehouseFullAccess role, and open the Redshift cluster properties, I can enable logging with the exact same bucket and key prefix with no errors. The debug output doesn’t show me the exact API calls being made, so I can’t tell whether this is a Terraform bug or whether I’m missing something.

Thanks for any help anyone has on this one - I’m stumped.

Jon


Hi @linczakjw,

did you manage to solve your issue? We are running into exactly the same problem using the Terraform AWS provider ~> 4.8.
Setting up the logging bucket works in the console, but with Terraform we get the InsufficientS3BucketPolicyFault.

Apologies for not seeing this earlier, but yes. It was because we were passing the bucket’s ARN to the bucket_name parameter in the logging {} block rather than the bucket’s ID (its name). Switching to the ID solved our issue.
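
For anyone else who hits this, the working block ended up looking like:

  # Logging
  logging {
    enable        = var.enable_logging
    bucket_name   = aws_s3_bucket.logs.id # the bucket name, not the ARN
    s3_key_prefix = local.s3_log_prefix
  }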