In Terraform, local-exec will march on even if a single Ansible playbook fails.
I’d like to ensure a local-exec fails immediately, with an appropriate exit code, as soon as that happens. I think this should be the default behaviour, but if you have any ideas I’d love to know.
From Terraform’s perspective, the command value is just an opaque string to be passed to the shell, so it’s the shell’s responsibility to decide how to handle errors.
If you are using bash then you should be able to arrange for the behavior you want by adding the following to the start of your script:
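# abort the script as soon as any command exits with a non-zero status
set -e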
As you’ve seen, Terraform will halt processing if the result is unsuccessful, but the definition of what is unsuccessful is up to the shell that is processing the commands, not up to Terraform itself. Depending on what you’re running, you might also consider using -o pipefail and other bash options to select behaviors appropriate for what your script is expecting.
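For example, something along these lines should stop the apply as soon as ansible-playbook exits with a non-zero status. It’s just a sketch: the interpreter path, inventory, and playbook names are placeholders for whatever you’re actually running.

provisioner "local-exec" {
  # Run the command with bash explicitly so the options below are available.
  interpreter = ["/bin/bash", "-c"]

  command = <<-EOT
    # -e: exit on the first failing command
    # -u: treat unset variables as errors
    # -o pipefail: a failure anywhere in a pipeline fails the whole pipeline
    set -euo pipefail
    ansible-playbook -i inventory.ini playbook.yml
  EOT
}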
I’d like to slightly necro this, as I’m struggling even after following the advice on this thread.
I’ve got the following provisioner:
provisioner "local-exec" {
  command = <<-EOT
    if [[ "${var.region}" = "us-west-2" || "${var.region}" = "us-east-2" ]]
    then
      echo "Region Valid"
    else
      exit 1
    fi
  EOT
}
Seemingly no matter what I do in the conditional, the Terraform run continues on failure. I’ve validated the conditions and can confirm that the script is definitely exiting 1, yet Terraform just keeps on keeping on. Has the default behavior for local-exec changed? I’ve also tried set -e and placed a command that will fail in my then block, but still no dice.
In this second example, not only will Terraform stop executing, it will also taint the resource the “local-exec” is attached to, so on the next run you don’t need to recreate the infra, just re-run Ansible against the infra you already created.
It looks like you already saw that in order for Terraform to consider a local-exec provisioner as “failed” the overall command you run needs to exit with a non-successful status, which you’ve achieved here by chaining the steps together with && so that the first failure will abort the chain.
In a situation where the creation of an object (which includes running provisioners) fails partway through, Terraform can’t tell how far the process got and so as you’ve seen it will plan to destroy the object and create a new one in order to start the process again. Terraform represents the need to do that using the “tainted” status.
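As an aside: if you’re confident the object itself did finish being created and only the provisioner step failed, you can clear that tainted status by hand instead of letting Terraform replace the object. The address here is just a stand-in for whichever resource Terraform marked as tainted:

terraform untaint example_virtual_machine.example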
If you want Terraform to consider running that command as a separate operation from creating the object the provisioner is embedded within, I think that will mean moving the provisioner out into a separate resource block which can then fail independently of the first one. The hashicorp/null provider has a special resource type null_resource which is intentionally designed to do nothing at all so that you can associate provisioner blocks with it for actions that must happen independently of any “real” resources:
resource "example_virtual_machine" "example" {
  # (configuration of your VM, just as a
  # placeholder because you didn't share that
  # part of your configuration.)
}

resource "null_resource" "example" {
  triggers = {
    # This resource will be re-created, and thus
    # the provisioner re-run, each time the
    # VM's IP address changes. Adjust this to
    # whatever is a suitable attribute to represent
    # the VM being replaced.
    instance_ip_addr = example_virtual_machine.example.private_ip
  }

  provisioner "local-exec" {
    command = format("git clone %s /tmp/%s && cd /tmp/%s && git checkout %s && ansible-playbook %s.yml --extra-vars '%s ip=%s' && rm -rf /tmp/%s", var.ansible.repo, local.repo_tmp_dir, local.repo_tmp_dir, var.ansible.branch, var.ansible.playbook, var.ansible_extra_vars, local.private_ip, local.repo_tmp_dir)
  }
}
With this structure, when the provisioner fails Terraform will record that null_resource.example failed, but it will already have considered example_virtual_machine.example to have succeeded. Therefore a subsequent terraform plan will only plan to “replace” null_resource.example (re-run the provisioner) and will leave example_virtual_machine.example unchanged.
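Conversely, if you ever want to re-run the provisioner even though the VM hasn’t changed, you can force replacement of just the null resource. On newer Terraform versions that’s the -replace planning option (older versions can use terraform taint to the same effect):

terraform apply -replace="null_resource.example"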