Terminate a Terraform deployment based on output from a PowerShell script that was invoked through Terraform

Hello all!
– I am deploying an entire AWS Elastic Beanstalk environment through Terraform.
– I have a PowerShell script that fetches some values from within the existing Elastic Beanstalk environment that I am trying to “re-deploy” through Terraform.
– I am able to invoke this PowerShell script from within my Terraform code. I can also return certain values, or an exit code of 0 or 1, from the script back to the Terraform code.
– Back in the Terraform code, I need a way to terminate the deployment – or continue it as normal – based on the values returned from my PowerShell script.

The last bullet is what I have not been able to achieve. I tried triggers, ordering, and depends_on, but none of them worked.

Note: At this time, I am not able to make Terraform itself do what the PowerShell script is doing. So I must be able to terminate Terraform based on what I get back from my script.

Please throw all your ideas at me! Any pointers on how to achieve this will hugely help!!

Hi @Vin_Rao,

If you are running PowerShell through a local-exec provisioner then it should treat an unsuccessful exit status as an error and block any later operations that depend on whichever resource has the provisioner.

If that isn’t working for you, please share code demonstrating what you’ve tried so far and more details about how it behaved and how that was different to what you wanted to have happen. Thanks!
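As a minimal illustration of that behavior (resource, script, and role names here are assumed for the sketch, not taken from this thread): if the script exits non-zero, the provisioner fails during the apply phase and anything that depends on its resource is blocked.

```hcl
resource "null_resource" "preflight_check" {
  provisioner "local-exec" {
    # powershell.exe -File propagates the script's exit code; a non-zero
    # exit fails this provisioner and halts the apply for dependents.
    command = "powershell.exe -ExecutionPolicy Bypass -File ./scripts/Check.ps1"
  }
}

resource "aws_iam_instance_profile" "example" {
  name = "ExampleProfile"
  role = "ExampleRole"

  # This resource's actions will not be executed if the check above fails.
  depends_on = [null_resource.preflight_check]
}
```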

Hi @apparentlymart. Thank you so much for responding!

[1.]
I am trying to return one of two things based on some checking I do in the AWS environment:

  1. exit 0 and string value that says “No errors.”

  2. exit 1 and string value that says “Exception because check failed.”

In both these cases, I want to be able to capture the string value, report it from Terraform in my TeamCity build log, and, if the exit code is 1, abort the entire Terraform deployment immediately.

I am able to get the string value if I call my PS script as a data "external" in Terraform and use:

output "xxx" {
  value = data.external.xxx
}

But this output value is not helping me in any way to abort the remaining steps in Terraform.

[2.]
Running my Terraform as below, with a null_resource that executes the PS script, does not seem to run the PS script at all. I am currently trying to find the issue.

resource "null_resource" "min_max_values_match" {
  triggers = {
    always_run = "${timestamp()}"
  }

  provisioner "local-exec" {
    command = "powershell.exe -ExecutionPolicy Bypass -Command ^& ${path.module}/scripts/Ensure-EBS-ASG-MinMax-Values-Match.ps1 -ApplicationName ${var.elastic_beanstalk_application_name} -Deployment ${var.deployment} -AWSRegionName ${var.aws_region} -AB ${var.a_b_environment}"
  }
}

I am still trying different ways to accomplish this as I type this out. But hoping you may have suggestions that might work?

I will await your much needed response!

Thanks,
Vin

Here is my half-baked TF file:

data "external" "min_max_values_match" {
  program = [
    "powershell.exe",
    "-ExecutionPolicy",
    "Bypass",
    "-File",
    "C:/terraform/LocalPSMinMaxMatch-2.ps1"
  ]
}

output "do_values_match" {
  value = data.external.min_max_values_match.result.valuesMatch
}

resource "null_resource" "min_max_values_match" {
  triggers = {
    always_run = "${timestamp()}"
  }
  provisioner "local-exec" {
    command = "echo do_values_match"
  }
}

resource "aws_iam_instance_profile" "vin_test_instance_profile" {
  name = "VinTestProfile"
  role = "$VinTestRole"

  depends_on = [
    null_resource.min_max_values_match
  ]
}

I want to somehow abort Terraform before it starts to create the “aws_iam_instance_profile” resource if my PS script returns an exit code of 1, or a specific “do_values_match” value, and/or if my “min_max_values_match” null_resource does not get created.

I hope this gives you the idea.

Hi @Vin_Rao,

I think first it’s important to think about the differences between data resources (data blocks) and managed resources (resource blocks).

During the planning phase Terraform treats these quite differently:

  • For managed resources, Terraform evaluates the configuration and passes it to the relevant provider to ask it to predict what the new state would be if that configuration were applied. If the provider is written correctly then this should make no changes to your real infrastructure.

    Provisioners (provisioner blocks) are declared inside a resource block but are a special concept belonging to Terraform itself, not to any particular provider. Aside from some basic type checking and argument name validation, provisioner blocks have absolutely no effect during the planning phase.

  • For data resources, Terraform evaluates the configuration but then does one of two things depending on the result:

    • If the configuration contains any values that won’t be known until the apply phase – shown as (known after apply) in the plan description – Terraform concludes that it cannot read the data during the plan phase and so it instead just remembers that it will need to read the data during the apply phase.
    • If the configuration is entirely known values and the data resource doesn’t depend on any managed resources that have pending changes, Terraform immediately reads the data source during the plan phase and uses its results for the remainder of the planning work. It will not read the data again during the apply phase.

Failure of either the planning request for a managed resource or the read request for a data resource will halt Terraform’s work and prevent taking any further actions for dependent resources, but this means you must make sure that the action that might fail does actually happen during the planning phase.
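As an illustration of those two cases (resource and file names assumed for the sketch): a data source whose arguments are all constants is read during the plan phase, while one that references a managed resource attribute with pending changes is deferred to the apply phase.

```hcl
# Read during the plan phase: all arguments are constant values.
data "external" "plan_time" {
  program = ["powershell.exe", "-ExecutionPolicy", "Bypass", "-File", "C:/terraform/check.ps1"]
}

# Deferred to the apply phase: the query depends on an attribute of a
# managed resource that is (known after apply) in the plan.
data "external" "apply_time" {
  program = ["powershell.exe", "-ExecutionPolicy", "Bypass", "-File", "C:/terraform/check.ps1"]
  query = {
    environment_name = aws_elastic_beanstalk_environment.example.name
  }
}
```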


The hashicorp/external provider’s external data source implements “reading” by executing the program you specify and then parsing the result as JSON. If the program exits with a non-zero exit status then the provider will consider it to be a planning error.

Your data "external" block uses only static constant values as arguments, so Terraform should always be able to read from this data source during the planning phase.

Based on your example it seems like you’ve implemented C:/terraform/LocalPSMinMaxMatch-2.ps1 as a program that always succeeds but sets a valuesMatch property in its JSON response, which I assume is set either to "true" or "false" depending on the outcome.
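For reference, a minimal sketch of what such a script might look like (the min/max values are placeholders; the real script would presumably fetch them from AWS). The external data source's protocol requires the program to print a single JSON object whose values are all strings:

```powershell
# Hypothetical sketch of LocalPSMinMaxMatch-2.ps1 — assumed logic, not the
# actual script from this thread.
$min = 1   # placeholder: in reality fetched from the EB/ASG environment
$max = 1

if ($min -eq $max) {
  @{ valuesMatch = "true" } | ConvertTo-Json
} else {
  @{ valuesMatch = "false" } | ConvertTo-Json
  # Alternative (the first option described below): fail instead of
  # reporting "false", which makes the data source read itself error out:
  #   Write-Error "Min/max values do not match"
  #   exit 1
}
```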

That declaration alone will not prevent any further execution because returning "false" is still a successful result. There are two main ways you can make that data resource fail:

  • Change your script to exit with a non-zero status when the values don’t match, instead of printing a JSON object that sets a property to "false". Anything which depends on that data resource will be blocked from being evaluated if the program fails.

  • Keep your program as-is but add a postcondition which tells Terraform that the data source result is only valid if valuesMatch is "true":

    data "external" "min_max_values_match" {
      program = [
        "powershell.exe",
        "-ExecutionPolicy",
        "Bypass",
        "-File",
        "C:/terraform/LocalPSMinMaxMatch-2.ps1",
      ]
    
      lifecycle {
        postcondition {
          condition     = data.external.min_max_values_match.result.valuesMatch
          error_message = "Values do not match."
        }
      }
    }
    

    The lifecycle block and everything inside it is handled by Terraform Core and is valid for all data blocks, regardless of resource type. After reading from this data source, Terraform will evaluate the condition expression. If its result converted to a boolean is false then it will halt and show the error_message.

    You will need to write depends_on = [data.external.min_max_values_match] inside any resource or data block whose evaluation should be blocked by an error. That will then guarantee that those downstream resources cannot be evaluated if the postcondition fails.

    The postcondition is effectively an extra check that can potentially return an error based on custom rules written in your module, without any need to modify the data source implementation itself.
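As a sketch of that depends_on wiring, using the instance profile from the earlier example (the role reference here is hypothetical):

```hcl
resource "aws_iam_instance_profile" "vin_test_instance_profile" {
  name = "VinTestProfile"
  role = "VinTestRole"   # hypothetical role name for the sketch

  # If the postcondition on the data source fails, Terraform will not
  # evaluate this resource at all.
  depends_on = [data.external.min_max_values_match]
}
```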

I think using a data block is probably the better option here because it can be dealt with during the planning phase, but you’ll need to make one of the changes I described above to get the effect you want.


If you want to do this with a local-exec provisioner then that’s also valid, but because provisioners only run during the apply phase (after the object has actually been created), you can only cause errors during the apply phase with this technique.

To make this work you will need to make sure that the command you run returns a non-zero exit status when it fails. The echo do_values_match command you showed in your example will always succeed, so it cannot possibly block progress.

Again you will need to use depends_on with any resource whose execution should be blocked by the error. Since provisioners are an apply-time feature, a provisioner cannot block another resource from having actions planned, but it can block a resource from having its planned actions executed during the apply phase.
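Applied to the earlier null_resource example, that would mean replacing the echo with the actual script invocation (a sketch, assuming the script is changed to exit non-zero on failure):

```hcl
resource "null_resource" "min_max_values_match" {
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    # powershell.exe -File propagates the script's exit code, so a
    # non-zero exit fails this provisioner during the apply phase.
    command = "powershell.exe -ExecutionPolicy Bypass -File C:/terraform/LocalPSMinMaxMatch-2.ps1"
  }
}
```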

Hi @apparentlymart,

Thank you so much for the detailed information and guidance you are providing me!

From the two ways you suggested, I think I would need to use the 2nd way, because when I abort my Terraform plan, I want a way to include the reason for aborting (which, in this case, will be that the ‘values did not match’) in the Build Log in TeamCity.

Also, I am having trouble using the code snippet you gave me. The validation error:

condition = data.external.min_max_values_match.result.valuesMatch

│ Configuration for data.external.min_max_values_match may not refer to itself.

I see that it is indeed referring to itself. Am I supposed to add this condition block in the resource that depends on the external data instead of inside itself? What am I missing?

Hi @Vin_Rao,

Sorry for sharing an incorrect example. I had initially written it a different way and forgot to update it fully when I simplified it.

A postcondition block can refer to the same object it is checking by using the special self symbol, which is aliased to different objects depending on the context. In this context it refers to the result of reading from the data source.

  lifecycle {
    postcondition {
      condition     = self.result.valuesMatch
      error_message = "Values do not match."
    }
  }

I think the above will work, if you replace the lifecycle block in my original example with this updated one.
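Putting the two snippets together, the corrected data block would be:

```hcl
data "external" "min_max_values_match" {
  program = [
    "powershell.exe",
    "-ExecutionPolicy",
    "Bypass",
    "-File",
    "C:/terraform/LocalPSMinMaxMatch-2.ps1",
  ]

  lifecycle {
    postcondition {
      # self refers to this data source's own result after reading.
      condition     = self.result.valuesMatch
      error_message = "Values do not match."
    }
  }
}
```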

Yes, shortly after I posted, I realized I can use self. I am trying that out, but hitting some other roadblocks.
I am hoping this will get me ahead. I will let you know!
Thank you so much!

Hi @apparentlymart,

Sorry for the delay in communication here, I had to set this aside to look into an urgent matter.

I tried what you suggested. The problem here is that we are using a very old version of Terraform - version 0.12.31 - and we will unfortunately not be upgrading anytime soon.

And the lifecycle block is not supported in this version.

This was a rather neat solution, which I was hoping would work, but for the old TF version we are on.

Would you have any other suggestions? :neutral_face:

Thank you so much,
Vin

Hi @Vin_Rao,

Unfortunately it has been many years since I last thought about Terraform v0.12, so I don’t have anything else to suggest.