block_device_mappings fails in the EKS module's managed node group

I am trying to launch an EKS cluster using Terraform, and I want each node in a node group to have an extra EBS volume attached to it.

My code:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  for_each = {
    for keyx, valuex in var.eks_creation : keyx => valuex
    if valuex.create_cluster && var.enable_eks_cluster_creation_flag
  }
  cluster_name                    = each.value.cluster_name
  cluster_version                 = "1.22"
  vpc_id                          = module.vpc[each.value.attach_to_vpc_name].vpc_id
  subnet_ids                      = module.vpc[each.value.attach_to_vpc_name].private_subnets
  cluster_endpoint_public_access  = each.value.cluster_endpoint_public_access
  cluster_endpoint_private_access = each.value.cluster_endpoint_private_access
  cluster_enabled_log_types       = each.value.cluster_enabled_log_types
  # enable_irsa                   = each.value.enable_irsa
  cluster_addons = { for key, value in each.value.cluster_addons : key => value }
  create_kms_key = true
  cluster_encryption_config = [{
    resources = ["secrets"]
  }]
  eks_managed_node_group_defaults = {
    ami_type  = "AL2_x86_64"
    disk_size = 20
    vpc_security_group_ids = [aws_security_group.node_group_one[each.key].id]
    attach_cluster_primary_security_group = true
  }
  eks_managed_node_groups = {
    one = {
      name           = "node-group-1"
      instance_types = ["t2.small"]
      min_size       = 1
      max_size       = 3
      desired_size   = 1
      disk_size      = 30
      labels = {
        "node" = "node-group-1"
        "app"  = "testing"
      }
      block_device_mappings = {
        sdc = {
          device_name = "/dev/sdc"
          ebs = {
            volume_size           = 50
            volume_type           = "gp2"
            encrypted             = true
            kms_key_id            = aws_kms_key.eks.arn
            delete_on_termination = false
          }
        }
      }
      pre_bootstrap_user_data = <<-EOT
      echo 'DevOps- Because Developers Need Heroes'
      EOT
    }
    two = {
      name           = "node-group-2"
      instance_types = ["t2.small"]
      min_size       = 1
      max_size       = 2
      desired_size   = 1
      disk_size      = 30
      labels = {
        "node" = "node-group-2"
        "app"  = "testing"
      }
      pre_bootstrap_user_data = <<-EOT
      echo 'foo bar'
      EOT
      taints  = [{
        key = "dedicated"
        value  = "testing"
        effect = "NO_SCHEDULE"
      }]
    }
  }
}

If I do not include the block_device_mappings object in the node group, the whole configuration works fine and the node group, ASG, launch templates, etc. are created successfully.
But if I add the block_device_mappings section, as I did in eks_managed_node_groups["one"], running terraform apply fails with an error.

The custom security group attached to the cluster is:

resource "aws_security_group" "node_group_two" {
  for_each = {
    for keyx, valuex in var.eks_creation : keyx => valuex
    if valuex.create_cluster && var.enable_eks_cluster_creation_flag
  }
  name_prefix = "node_group_two"
  vpc_id      = module.vpc[each.value.attach_to_vpc_name].vpc_id
  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    cidr_blocks = [
      "0.0.0.0/0",
    ]
  }
  egress {
    from_port = 0
    to_port   = 0
    protocol  = "tcp"
    cidr_blocks = [
      "0.0.0.0/0",
    ]
  }
}

In the AWS console, the creation of node-group-1 (which has the block_device_mappings object) fails,
while node-group-2 (which does not have it) is created successfully.

What could be the reason?
What do I need to add in the eks module section to make it work?
Please help me.

Hi sharma0202udit,
I ran into the same issue. I resolved it by creating the KMS key myself, making sure the correct service roles were granted on it, and then specifying that key as the kms_key_id.
Example below:

module "ebs_kms_key" {
  source  = "terraform-aws-modules/kms/aws"
  version = "~> 1.5"

  description = "Customer managed key to encrypt EKS managed node group volumes"

  # Policy
  key_administrators = [
    data.aws_caller_identity.current.arn
  ]

  key_service_roles_for_autoscaling = [
    # required for the ASG to manage encrypted volumes for nodes
    "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling",
    # required for the cluster / persistentvolume-controller to create encrypted PVCs
    module.eks.cluster_iam_role_arn,
  ]

  # Aliases
  aliases = ["eks/new-cluster/ebs"]
}
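
With that key in place, you can reference it from the node group's block device mapping instead of aws_kms_key.eks. Below is a minimal sketch of the relevant part of the node group, assuming the key_arn output of the ebs_kms_key module above and a data "aws_caller_identity" "current" {} data source elsewhere in the configuration:

  eks_managed_node_groups = {
    one = {
      # ... other settings unchanged ...
      block_device_mappings = {
        sdc = {
          device_name = "/dev/sdc"
          ebs = {
            volume_size           = 50
            volume_type           = "gp2"
            encrypted             = true
            # key created by the ebs_kms_key module above; its policy grants
            # the AWSServiceRoleForAutoScaling role use of the key
            kms_key_id            = module.ebs_kms_key.key_arn
            delete_on_termination = false
          }
        }
      }
    }
  }

The likely reason node-group-1 failed is that the Auto Scaling service-linked role could not use the original key to create the encrypted volumes; granting it access via key_service_roles_for_autoscaling is what makes the mapping work.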