Ahh ha! I can see what the issue is here.
In short - this is an Azure DevOps issue, not a Terraform issue:
In Azure DevOps, jobs run on a pipeline agent. If you are using Microsoft-managed pipeline agents, then each job is assigned a different, newly created pipeline agent. The detail is here: Jobs in Azure Pipelines - Azure Pipelines | Microsoft Learn, but the pertinent points are:
> You can organize your pipeline into jobs. Every pipeline has at least one job. A job is a series of steps that run sequentially as a unit. In other words, a job is the smallest unit of work that can be scheduled to run.
And
> Agent pool jobs (e.g. Microsoft-hosted agent pools)
> These are the most common type of jobs and they run on an agent in an agent pool.
> When using Microsoft-hosted agents, each job in a pipeline gets a fresh agent.
If I summarise your YAML pipeline:

```yaml
stage: Init
- job: TerraformInitJob
  steps:
  - script (init clean up)
  - task: (TerraformInstaller)
  - task: TerraformTaskV4@4 (INIT)
stage: Validate
- job: TerraformValidate
  steps:
  - task: TerraformInstaller@0
  - task: TerraformTaskV4@4 (Validate)
stage: Plan
- job: TerraformPlanJob
  steps:
  - task: TerraformTaskV4@4 (PLAN)
```
What is currently happening is that your first job (TerraformInitJob) executes its 3 tasks, the last of which is the `init`. The job then finishes and the agent is disposed of.
Your next job (TerraformValidate) then starts on a new pipeline agent which is back to the baseline config. As this agent has none of the data/files that are created/downloaded by an `init`, such as the providers, the command errors. You have, correctly, run an installer task here to ensure the required Terraform version is installed, but you have not run an `init` step.
Your 3rd job, should the pipeline get there, would have exactly the same issue. As there is no installer task it would use whatever version is pre-installed on the agent (if there is one), and as there is no `init` task, it would error with a similar message to your validate.
Excluding the creation and use of self-hosted pipeline agents, which can persist files/data across jobs, you can approach this in a few different ways, two of which are:
- Keep your current pipeline structure and do an install and init in every job
- Combine some/all of your stages and jobs and ensure that each job still has the install and init steps.
My simple approach is as follows (but there are many different approaches):
In the ‘deployment’ pipeline you have just 2 stages: a plan stage and an apply stage.
The plan stage does (in a single job):
- install
- init
- plan (saving to a plan file)
- Store plan as pipeline artifact
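A sketch of that plan job in pipeline YAML. The service connection and backend storage names (`my-service-connection`, `tfstate-rg`, etc.) are placeholders, not values from your pipeline - substitute your own:

```yaml
# Plan stage: install, init, plan to a file, publish the plan as an artifact.
- stage: Plan
  jobs:
  - job: TerraformPlanJob
    steps:
    - task: TerraformInstaller@0
      inputs:
        terraformVersion: 'latest'
    - task: TerraformTaskV4@4
      displayName: 'Terraform init'
      inputs:
        provider: 'azurerm'
        command: 'init'
        backendServiceArm: 'my-service-connection'      # placeholder
        backendAzureRmResourceGroupName: 'tfstate-rg'   # placeholder
        backendAzureRmStorageAccountName: 'tfstatestg'  # placeholder
        backendAzureRmContainerName: 'tfstate'          # placeholder
        backendAzureRmKey: 'terraform.tfstate'          # placeholder
    - task: TerraformTaskV4@4
      displayName: 'Terraform plan'
      inputs:
        provider: 'azurerm'
        command: 'plan'
        commandOptions: '-out=$(Build.ArtifactStagingDirectory)/tfplan'
        environmentServiceNameAzureRM: 'my-service-connection'  # placeholder
    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: '$(Build.ArtifactStagingDirectory)/tfplan'
        artifact: 'tfplan'
```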
The Apply stage does (in a single job):
- install
- init
- retrieve pipeline artifact
- Apply (using plan file)
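And a matching sketch of the apply job, again with placeholder names. Because it is a new job on a fresh agent, the install and init are repeated before the saved plan is applied:

```yaml
# Apply stage: fresh agent, so install and init again, then download and
# apply the saved plan file.
- stage: Apply
  dependsOn: Plan
  jobs:
  - job: TerraformApplyJob
    steps:
    - task: TerraformInstaller@0
      inputs:
        terraformVersion: 'latest'
    - task: TerraformTaskV4@4
      displayName: 'Terraform init'
      inputs:
        provider: 'azurerm'
        command: 'init'
        backendServiceArm: 'my-service-connection'  # placeholder, plus the
        # same backendAzureRm* inputs as the plan stage
    - task: DownloadPipelineArtifact@2
      inputs:
        artifact: 'tfplan'
        path: '$(Pipeline.Workspace)'
    - task: TerraformTaskV4@4
      displayName: 'Terraform apply'
      inputs:
        provider: 'azurerm'
        command: 'apply'
        commandOptions: '$(Pipeline.Workspace)/tfplan'
        environmentServiceNameAzureRM: 'my-service-connection'  # placeholder
```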
A `terraform plan` runs an implicit `validate` anyway - so if the module does not validate, the plan stage will error.
If you don’t want the apply to run without a review of the plan:
The Apply stage could either use a ‘manual validation’ task in an additional job inserted to run first, or use the ‘deployment job’ job type to associate the run with an environment, with validations and approvals set on the environment.
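For the first option, the gating job could look something like this (the notify address and timeout are illustrative). Note that `ManualValidation@0` only runs in an agentless job, i.e. `pool: server`:

```yaml
- stage: Apply
  jobs:
  - job: WaitForApproval
    pool: server                 # ManualValidation runs only in agentless jobs
    steps:
    - task: ManualValidation@0
      timeoutInMinutes: 1440     # auto-reject after 1 day (illustrative)
      inputs:
        notifyUsers: 'reviewer@example.com'  # placeholder
        instructions: 'Review the published tfplan artifact before approving.'
  - job: TerraformApplyJob
    dependsOn: WaitForApproval   # apply only runs once approved
    steps: []                    # install / init / retrieve artifact / apply
```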
You could also create a separate CI pipeline that runs a validate when the module is checked-in/merged, shifting that check ‘left’ and hopefully ensuring the code base is valid before any deployment pipeline is run.
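That CI pipeline can be very small. A sketch, where the branch name is an assumption; `terraform init -backend=false` downloads the providers that `validate` needs without touching any backend state:

```yaml
# ci-validate.yml - validates the module on check-in
trigger:
  branches:
    include:
    - main            # placeholder branch name

steps:
- task: TerraformInstaller@0
  inputs:
    terraformVersion: 'latest'
- script: |
    terraform init -backend=false
    terraform validate
  displayName: 'Validate Terraform module'
```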
Hope that helps
Happy Terraforming