Error: Invalid for_each argument

I am getting this error when running `terraform plan`:

```
Error: Invalid for_each argument
18:52:19 on antivirus.tf line 50, in module "antivirus_incremental_scan":
18:52:19   50: for_each = { for idx, instance in local.ec2_instances : instance.name => instance }
18:52:19
18:52:19 The "for_each" value depends on resource attributes that cannot be determined
18:52:19 until apply, so Terraform cannot predict how many instances will be created.
18:52:19 To work around this, use the -target argument to first apply only the
18:52:19 resources that the for_each depends on.
18:52:19
18:52:19
18:52:19 Error: Invalid for_each argument
18:52:19 on antivirus.tf line 68, in module "antivirus_weekly_scan":
18:52:19   68: for_each = { for idx, instance in local.ec2_instances : instance.name => instance }
18:52:19
18:52:19 The "for_each" value depends on resource attributes that cannot be determined
18:52:19 until apply, so Terraform cannot predict how many instances will be created.
18:52:19 To work around this, use the -target argument to first apply only the
18:52:19 resources that the for_each depends on.
```

The Terraform configuration I am using is this:

```hcl
data "aws_instances" "virus_scan_instances" {
  instance_tags = {
    Client_Prefix = var.client_prefix
    Capability    = var.capability
    Environment   = var.environment
  }
}

data "aws_instance" "all" {
  count       = length(data.aws_instances.virus_scan_instances.ids)
  instance_id = data.aws_instances.virus_scan_instances.ids[count.index]
}

locals {
  ec2_instances = [
    for instance in data.aws_instance.all : {
      id   = instance.id
      name = lookup(instance.tags, "Name", "No Name")
    }
  ]
}

# Use a conditional to create maintenance windows only if instances exist

resource "aws_ssm_maintenance_window" "daily_virus_scan" {
  for_each = length(local.ec2_instances) > 0 ? { for idx, instance in local.ec2_instances : instance.id => instance } : {}

  name        = "${var.client_prefix}-${var.environment}-${var.capability}-maintenance-window-antivirus-daily"
  schedule    = "cron(0 3 ? * MON-SAT)"
  duration    = 1
  cutoff      = 0
  description = "A window for virus scans to run once per day"
  tags        = local.common_tags
}

resource "aws_ssm_maintenance_window" "weekly_virus_scan" {
  for_each = length(local.ec2_instances) > 0 ? { for idx, instance in local.ec2_instances : instance.id => instance } : {}

  name        = "${var.client_prefix}-${var.environment}-${var.capability}-maintenance-window-antivirus-weekly"
  schedule    = "cron(0 3 ? * SUN *)"
  duration    = 1
  cutoff      = 0
  description = "A window for virus scans to run once per week"
  tags        = local.common_tags
}

module "antivirus_incremental_scan" {
  for_each = { for idx, instance in local.ec2_instances : instance.name => instance }

  source = "../../../modules/aws_services/systems_manager/maintenance_task_shell_script"

  enabled               = true
  node_name             = "${each.value.name}_incremental_antivirus_task"
  maintenance_window_id = aws_ssm_maintenance_window.daily_virus_scan[each.key].id
  node_instance_id      = each.value.id
  log_output_bucket     = "${var.bucket_prefix}-antivirus-${var.region}"
  log_output_bucket_key = "${var.client_prefix}/ec2-antivirus-logs/${var.environment}/weekly_scan/${each.value.name}"
  description           = "A job to run an incremental antivirus scan on target instance on files that are under 24 hours old"

  commands_to_run = [
    "sudo /usr/local/bin/virusscan --environment ${var.environment} -i 1440 -a quarantine-move -e /opt/eipaas/quarantine,/sys,/proc,/dev,/var/run/docker,/var/lib/docker,/etc/puppetlabs,/var/lib/clamav,/opt/nomad,/var/log,/opt/graphite/storage/whisper"
  ]
}

module "antivirus_weekly_scan" {
  for_each = { for idx, instance in local.ec2_instances : instance.name => instance }

  source = "../../../modules/aws_services/systems_manager/maintenance_task_shell_script"

  enabled               = true
  node_name             = "${each.value.name}_weekly_antivirus_task"
  maintenance_window_id = aws_ssm_maintenance_window.weekly_virus_scan[each.key].id
  node_instance_id      = each.value.id
  log_output_bucket     = "${var.bucket_prefix}-antivirus-${var.region}"
  log_output_bucket_key = "${var.client_prefix}/ec2-antivirus-logs/${var.environment}/full_scan/${each.value.name}"
  description           = "A job to run a full antivirus scan on target instance"

  commands_to_run = [
    "sudo /usr/local/bin/virusscan --environment ${var.environment} -a quarantine-move -e /opt/eipaas/quarantine,/sys,/proc,/dev,/var/run/docker,/var/lib/docker,/etc/puppetlabs,/var/lib/clamav,/opt/nomad,/var/log,/opt/graphite/storage/whisper"
  ]
}
```

Can anybody help me figure out what I am doing wrong in the code?

For reference: I am using Terraform version 0.13.5.

Can you edit your post and wrap the whole code block in triple backticks and get rid of the paired quotes? That will make it much easier to read.

At a high level, I think the error is telling you what the issue is: you’re trying to do a for_each on a value that Terraform can’t determine at plan time.

```
18:52:19 Error: Invalid for_each argument
18:52:19 on antivirus.tf line 68, in module "antivirus_weekly_scan":
18:52:19   68: for_each = { for idx, instance in local.ec2_instances : instance.name => instance }
18:52:19 The "for_each" value depends on resource attributes that cannot be determined
18:52:19 until apply, so Terraform cannot predict how many instances will be created.
18:52:19 To work around this, use the -target argument to first apply only the
18:52:19 resources that the for_each depends on.
```

Maybe try refreshing or target-applying the data sources first? A simplified test of something similar seemed to work okay for me, even before having applied.
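For what it’s worth, the `-target` workaround the error message suggests would look roughly like this (resource addresses taken from your config; adjust to match):

```
# First apply only the data sources the for_each depends on
terraform apply -target=data.aws_instances.virus_scan_instances -target=data.aws_instance.all

# Then run the full plan/apply once those values are known
terraform plan
terraform apply
```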

```hcl
data "aws_instances" "virus_scan_instances" {}

data "aws_instance" "all" {
  count       = length(data.aws_instances.virus_scan_instances.ids)
  instance_id = data.aws_instances.virus_scan_instances.ids[count.index]
}

locals {
  ec2_instances = [
    for idx, id in data.aws_instances.virus_scan_instances.ids : {
      id   = id
      name = lookup(data.aws_instance.all[idx].tags, "Name", "No Name")
    }
  ]
}

resource "null_resource" "example" {
  # Assuming you only need the ID _or_ name in a given resource, simpler to do it this way
  for_each = length(local.ec2_instances) > 0 ? toset([for item in local.ec2_instances : item.id]) : []

  provisioner "local-exec" {
    command = "echo This command will execute whenever the configuration changes"
  }
}
```
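If you do need both fields downstream, a map keyed on the instance ID also works with for_each, since the keys stay known before apply (a sketch against the same locals as above):

```hcl
resource "null_resource" "example_map" {
  # Keys are instance IDs; each.value still carries the Name tag
  for_each = { for item in local.ec2_instances : item.id => item }

  triggers = {
    instance_name = each.value.name
  }
}
```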

A couple more notes:

If your data is coming from data sources, the plan output will show which of those could not be read, along with a hint as to why.

I don’t think it’s causing the issue, but why have data.aws_instance.all lookup the exact same data which was already returned by data.aws_instances.virus_scan_instances?

I don’t see any use of depends_on here, but if you are using that in a parent module call it could be adding unnecessary dependencies.
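For example, a parent module call along these lines (names hypothetical) would defer every data source read inside the module until apply, which produces exactly this kind of for_each error:

```hcl
module "antivirus" {
  source = "./modules/antivirus"

  # depends_on at the module level delays ALL reads inside the module,
  # including data sources, until the listed resources have been applied
  depends_on = [aws_s3_bucket.antivirus_logs]
}
```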

I think most importantly here, you are using a very old version of Terraform which may not be behaving correctly in this situation. It’s hard for others to diagnose configuration problems when the culprit may be Terraform itself, so I would suggest upgrading to a supported version as soon as possible.
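Once you’ve upgraded, it’s worth pinning a minimum version so the config can’t silently be run with an old CLI again; for example (the exact version constraint is just illustrative):

```hcl
terraform {
  # 0.13.x is long out of support; require a modern release
  required_version = ">= 1.5.0"
}
```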


I was confused by this in the OP’s question at first too (especially because the naming was a little confusing to me), and was going to suggest the same thing, but when I looked at the actual resource docs and played with it a bit locally, it made more sense.

I believe data.aws_instances.virus_scan_instances is returning a single object for the results of the search, with id as the region, and ids as a list of instance IDs, but it doesn’t seem to include the instance name, which the OP wants (and the instance_tags attribute doesn’t contain what you’d expect it to – it’s tags provided as search criteria, vs. a data structure with tags from the found instances). So they’re building a list of instance IDs matching the search from data.aws_instances.virus_scan_instances, and then using data.aws_instance.all to retrieve the tags (including Name tags) for instances that have them.

Thanks @wyardley, I will try your solution.

Hi @wyardley @jbardin, I tried the below code in some environments. It is not giving me the for_each error anymore, but I still have to test in other environments. Is this logic correct, or does it need some improvement?

```hcl
data "aws_instances" "virus_scan_instances" {
  filter {
    name   = "tag:Client_Prefix"
    values = [var.client_prefix]
  }
  filter {
    name   = "tag:Environment"
    values = [var.environment]
  }
  filter {
    name   = "tag:Capability"
    values = [var.capability]
  }
}

locals {
  ec2_instances = [
    for instance in data.aws_instances.virus_scan_instances.ids : {
      id   = instance
      name = lookup(data.aws_instances.virus_scan_instances[instance].tags, "Name", "No Name")
    }
  ]

  instance_count = length(local.ec2_instances)

  instance_map = { for instance in local.ec2_instances : instance.name => instance }
}

resource "null_resource" "antivirus_incremental_scan" {
  count = local.instance_count > 0 ? 1 : 0 # Only create if instances exist

  provisioner "local-exec" {
    command = "echo 'Creating incremental antivirus scans for instances.'"
  }
}

# Define the Maintenance Windows

resource "aws_ssm_maintenance_window" "daily_virus_scan" {
  name        = "${var.client_prefix}-${var.environment}-${var.capability}-maintenance-window-antivirus-daily"
  schedule    = "cron(0 3 ? * MON-SAT)"
  duration    = 1
  cutoff      = 0
  description = "A window for virus scans to run once per day"
  tags        = local.common_tags
}

# Use count to create the antivirus tasks for incremental scans

module "antivirus_incremental_scan" {
  count = local.instance_count > 0 ? local.instance_count : 0

  source = "../../../modules/aws_services/systems_manager/maintenance_task_shell_script"

  enabled               = true
  node_name             = "${local.ec2_instances[count.index].name}_incremental_antivirus_task"
  maintenance_window_id = aws_ssm_maintenance_window.daily_virus_scan.id
  node_instance_id      = local.ec2_instances[count.index].id
  log_output_bucket     = "${var.bucket_prefix}-antivirus-${var.region}"
  log_output_bucket_key = "${var.client_prefix}/ec2-antivirus-logs/${var.environment}/weekly_scan/${local.ec2_instances[count.index].name}"
  description           = "A job to run an incremental antivirus scan on target instance"

  commands_to_run = [
    "sudo /usr/local/bin/virusscan --environment ${var.environment} -i 1440 -a quarantine-move -e /opt/eipaas/quarantine,/sys,/proc,/dev,/var/run/docker,/var/lib/docker,/etc/puppetlabs,/var/lib/clamav,/opt/nomad,/var/log,/opt/graphite/storage/whisper"
  ]
}

# Weekly Virus Scan

resource "aws_ssm_maintenance_window" "weekly_virus_scan" {
  name        = "${var.client_prefix}-${var.environment}-${var.capability}-maintenance-window-antivirus-weekly"
  schedule    = "cron(0 3 ? * SUN *)"
  duration    = 1
  cutoff      = 0
  description = "A window for virus scans to run once per week"
  tags        = local.common_tags
}

module "antivirus_weekly_scan" {
  for_each = local.instance_count > 0 ? { for instance in local.ec2_instances : instance.id => instance } : {}

  source = "../../../modules/aws_services/systems_manager/maintenance_task_shell_script"

  enabled               = true
  node_name             = "${each.value.name}_weekly_antivirus_task"
  maintenance_window_id = aws_ssm_maintenance_window.weekly_virus_scan.id
  node_instance_id      = each.value.id
  log_output_bucket     = "${var.bucket_prefix}-antivirus-${var.region}"
  log_output_bucket_key = "${var.client_prefix}/ec2-antivirus-logs/${var.environment}/full_scan/${each.value.name}"
  description           = "A job to run a full antivirus scan on target instance"

  commands_to_run = [
    "sudo /usr/local/bin/virusscan --environment ${var.environment} -a quarantine-move -e /opt/eipaas/quarantine,/sys,/proc,/dev,/var/run/docker,/var/lib/docker,/etc/puppetlabs,/var/lib/clamav,/opt/nomad,/var/log,/opt/graphite/storage/whisper"
  ]
}
```

Oh yes, thanks, I missed aws_instance vs aws_instances there!

@saurabh71669, if you want others to review what you’ve posted, you really need to take wyardley’s advice and fix the formatting on your posts. As it stands, a lot of what you’ve presented is unreadable.