Hello,
I’m testing the new terraform test feature of the 1.6.x branch!
It has been a very handy tool so far, but I'm facing a very frustrating corner case that I can't solve in a way I would consider clean.
I'm basically trying to use variables to hold the expected values of my tests, to avoid hardcoding them in several asserts.
My module under test has been simplified to its minimal implementation:
output "out" {
description = "Output for test"
value = ["A", "B", "C"]
}
Now I would simply like to declare what I consider a list and make it evolve depending on the test case:
variables {
  expected_value = ["A", "B", "C"]
}

# Test basic configuration with no extra component on several environments
run "run1" {
  command = plan

  variables {
    expected_value = ["B", "C", "D"]
  }

  assert {
    condition     = length(output.out) == length(var.expected_value)
    error_message = "Fail"
  }
}

# Test basic configuration with no extra component on a single environment
run "run2" {
  command = plan

  variables {
    expected_value = ["A", "B"]
  }

  assert {
    condition     = length(output.out)+1 == length(var.expected_value)
    error_message = "Fail"
  }
}
The first run “run1” executes successfully because the size of the “list” (in fact a tuple) is the same, but “run2” fails…
$ terraform test
tests\config1_fail.tftest.hcl... in progress
run "run1"... pass
run "run2"... fail
╷
│ Error: Invalid value for input variable
│
│ on tests\config1_fail.tftest.hcl line 10, in run "run1":
│ 10: expected_value = ["B", "C", "D"]
│
│ The given value is not suitable for var.expected_value declared at
│ tests\config1_fail.tftest.hcl:24,22-32: tuple required.
╵
tests\config1_fail.tftest.hcl... tearing down
tests\config1_fail.tftest.hcl... fail
tests\config1_fail_workaround_fail.tftest.hcl... in progress
run "run1"... fail
I tried to use tolist() as a workaround (the config1_fail_workaround_fail.tftest.hcl run above) but faced a “Functions may not be called here” error; the sketch below shows roughly what I attempted.
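For reference, this is roughly what the attempt looked like (reconstructed; the exact block may have differed):

variables {
  # Wrapping the literal in tolist() to force a list type is rejected
  # with "Functions may not be called here."
  expected_value = tolist(["B", "C", "D"])
}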
Any help getting this to work without needing as many variable names as runs?
Hi @sebastien.latre, I think you’ve found a bug here. It seems to be something that was introduced during the 1.6.x series and then fixed in the soon-to-be-released 1.7.x series.
First, a small note that I think there’s a typo in your condition in run2. I think the condition should be:
condition = length(output.out) == length(var.expected_value)+1
Instead of
condition = length(output.out)+1 == length(var.expected_value)
With that fix in the condition, I see the tests pass on Terraform v1.6.0 and v1.6.1. They also pass on the Terraform v1.7.0-rc1 release candidate that is currently available for the upcoming 1.7.x series. I do see the same error as you, regardless of the condition, on all the other releases in the v1.6 series, though.
Are you in a position to easily upgrade to the v1.7 series when it releases? That should be later this month.
For the 1.6 series, you can give this variable an explicit type by defining it within the main configuration and then simply never referencing it. This is a bit of a hack, and if we hadn’t already fixed this bug for the upcoming release I’d be encouraging you to file an issue in our repository. But it might tide you over until you can upgrade to v1.7.
# main.tf

output "out" {
  description = "Output for test"
  value       = ["A", "B", "C"]
}

variable "expected_value" {
  type = list(string)
}
Once I add the variable definition to the main configuration, the tests pass for me. This is because the tests can use that definition to deduce the correct type: both ["B", "C", "D"] and ["A", "B"] are converted to list(string), instead of each being inferred as a tuple with its own length.
Hello @liamcervante !
Thanks for your quick answer.
Indeed, my condition is wrong; that’s just a mistake in my minimal reproducible example.
I also forgot to mention that I’m using 1.6.6.
The workaround you’re proposing bothers me because it modifies the module itself. I’d rather go with separate variable names (sketched below)…
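For illustration, the separate-names approach would look roughly like this (a sketch; the _run1/_run2 variable names are made up):

variables {
  # One variable per run, so each keeps a single, consistent tuple type
  expected_value_run1 = ["B", "C", "D"]
  expected_value_run2 = ["A", "B"]
}

run "run1" {
  command = plan

  assert {
    condition     = length(output.out) == length(var.expected_value_run1)
    error_message = "Fail"
  }
}

run "run2" {
  command = plan

  assert {
    condition     = length(output.out) == length(var.expected_value_run2) + 1
    error_message = "Fail"
  }
}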
Any chance to get this fixed in the 1.6.x branch?
Unfortunately, we don’t currently have any more releases planned for the 1.6.x series. If that changes, I can make sure we include a fix for this, but for now it’s better to assume that we won’t be making any more changes to v1.6.x.
OK, let’s wait for the 1.7.x branch then.
It’s not a big deal, honestly.
Thanks for your analysis on this one.