Hi,
I want to provision CloudWatch alarms for an ElastiCache Redis replication group. I have the following code.
resource "aws_elasticache_replication_group" "this" {
  replication_group_id       = var.replication_group_id
  description                = var.description
  port                       = var.port
  node_type                  = var.node_type
  parameter_group_name       = var.parameter_group_name
  security_group_ids         = var.security_group_ids
  subnet_group_name          = var.subnet_group_name
  at_rest_encryption_enabled = true
  automatic_failover_enabled = true
  multi_az_enabled           = true
  num_node_groups            = var.num_node_groups
  replicas_per_node_group    = var.replicas_per_node_group
}
resource "aws_cloudwatch_metric_alarm" "cache_memory" {
  for_each = toset(tolist(flatten(aws_elasticache_replication_group.this.member_clusters)))

  alarm_name          = "${each.key}-freeable-memory"
  alarm_description   = "Elasticache ${each.key} average freeable memory is less than ${var.alarm_memory_threshold_bytes} bytes"
  comparison_operator = "LessThanThreshold"
  evaluation_periods  = 1
  metric_name         = "FreeableMemory"
  namespace           = "AWS/ElastiCache"
  period              = 600
  statistic           = "Average"
  threshold           = var.alarm_memory_threshold_bytes
  alarm_actions       = var.alarm_actions

  dimensions = {
    CacheClusterId = each.key
  }
}
I am getting the following error. I tried adding depends_on, but it does not fix it.
╷
│ Error: Invalid for_each argument
│
│ on modules/redis/alerts.tf line 3, in resource "aws_cloudwatch_metric_alarm" "cache_memory":
│ 3: for_each = toset(tolist(flatten(aws_elasticache_replication_group.this.member_clusters)))
│ ├────────────────
│ │ aws_elasticache_replication_group.this.member_clusters is a set of string, known only after apply
│
│ The "for_each" set includes values derived from resource attributes that
│ cannot be determined until apply, and so Terraform cannot determine the
│ full set of keys that will identify the instances of this resource.
│
│ When working with unknown values in for_each, it's better to use a map
│ value where the keys are defined statically in your configuration and where
│ only the values contain apply-time results.
│
│ Alternatively, you could use the -target planning option to first apply
│ only the resources that the for_each value depends on, and then apply a
│ second time to fully converge.
╵
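From the error text, it sounds like for_each needs keys that are known at plan time, so I sketched out building the member cluster IDs from my input variables instead of from the resource attribute. This assumes the usual ElastiCache member naming convention of "<replication_group_id>-<4-digit shard>-<3-digit node>" for cluster-mode-enabled groups (e.g. "myredis-0001-001"), which I have not verified against my actual cluster names. Is this the right approach?

```hcl
locals {
  # Derive member cluster IDs from plan-time variables so the for_each keys
  # are statically known. Each shard has 1 primary plus
  # var.replicas_per_node_group replicas, hence the "+ 2" upper bound in
  # range(). The "%s-%04d-%03d" format is an assumption based on the usual
  # naming convention and should be checked against the real
  # member_clusters values.
  member_cluster_ids = toset(flatten([
    for shard in range(1, var.num_node_groups + 1) : [
      for node in range(1, var.replicas_per_node_group + 2) :
      format("%s-%04d-%03d", var.replication_group_id, shard, node)
    ]
  ]))
}

resource "aws_cloudwatch_metric_alarm" "cache_memory" {
  for_each = local.member_cluster_ids
  # ...same alarm arguments as above, with each.key as CacheClusterId...
}
```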
Thanks