Azure Terraform Issues with the API Publisher

Team,

I am working as a senior software engineer at Delta Dental of California, and we recently migrated to Azure Cloud. We are planning to use Terraform for Infrastructure as Code and have started learning it.

We are publishing APIs in Azure API Management using Terraform code, and that part works. However, when we deploy another set of APIs, Terraform deletes the previously created APIs.

Is there any way in Terraform to keep the previously created APIs without destroying them on every deployment?

We are currently blocked, and I need your support on this.

Below is the issue I am facing:

  1. Published our first API (demo-api) using a Terraform resource (a simplified sketch follows this list). - Successful
  2. We are able to test this API from API Management.
  3. Published our second API (demo2-api). - The second API is published, but it destroyed the first API.
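For illustration, a minimal sketch of the kind of resource block involved, using the azurerm provider (all names and values here are placeholders, not our actual main.tf):

# Hypothetical example; resource group and APIM instance names are illustrative.
resource "azurerm_api_management_api" "demo_api" {
  name                = "demo-api"
  resource_group_name = "example-rg"
  api_management_name = "example-apim"
  revision            = "1"
  display_name        = "Demo API"
  path                = "demo"
  protocols           = ["https"]
}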

We have created a pipeline which uses Terraform to create the resources from our .tf files. We are using a single pipeline to deploy all the APIs, but we pass the API details at runtime.

Is there any way to keep all the previous versions without deleting them? I appreciate your help on this.

pool:
  name: Azure Pipelines
  vmImage: windows-latest

workspace:
  clean: all

trigger:
  - master

stages:
  - stage: Hub_SBOX_Middleware_ITS_Demo_API
    jobs:
      - job:
        displayName: 'SBOX Hub ITS Demo API'
        pool:
          name: Azure Pipelines
          vmImage: windows-2019
        variables:
          - group: Azure Platform Non-Prod Secrets
          - group: dev-apigateway
        steps:
          - task: CopyFiles@2
            displayName: 'Copy Non-Prod main.tf'
            inputs:
              SourceFolder: $(Build.SourcesDirectory)
              Contents: "*"
              TargetFolder: $(Build.SourcesDirectory)
          - task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
            displayName: 'Install Terraform 0.13.3'
            inputs:
              terraformVersion: 0.13.3
              workingDirectory: "$(System.DefaultWorkingDirectory)/NonProd"
          - script: 'terraform init -backend-config="access_key=$(tfstateaccesskey)" -backend-config="key=tfstatehubnonprdapimdemoapi"'
            displayName: 'terraform init'
          - script: 'terraform validate'
            displayName: 'terraform validate'
          - script: terraform plan -var-file ./terraform.tfvars -var="environment=dev" -var="subscription_id=$(hubnonprodid)" -var="tenant_id=$(tenantid)" -var="client_id=$(dd-ado-automation-spn-id)" -var="client_secret=$(dd-ado-automation-spn-secret)" -var="backend_subscr_id=$(informationtechnologynonprodid)" -out tf.plan
            displayName: 'terraform plan'
          - script: terraform apply -auto-approve -input=false ./tf.plan
            displayName: 'terraform apply'
          - task: DeleteFiles@1
            displayName: 'Delete files from $(Build.SourcesDirectory)'
            inputs:
              SourceFolder: '$(Build.SourcesDirectory)'
              Contents: '**'
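One thing worth noting: the terraform init step always passes the same backend key (tfstatehubnonprdapimdemoapi), so every pipeline run reads and writes a single state file. Assuming an azurerm backend (a sketch only; our real backend block may differ), the partial configuration behind that init call looks like:

terraform {
  backend "azurerm" {
    # storage_account_name and container_name omitted for brevity;
    # access_key and key are supplied at init time by the pipeline:
    #   terraform init -backend-config="access_key=..." \
    #                  -backend-config="key=tfstatehubnonprdapimdemoapi"
  }
}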

Hi @bsudabathula,
do you have different code repositories when deploying API1 and API2, or do you use different parameters with the same code?

It looks like you have different deployments that reuse a single state.

Hi,

We are using different parameters with the same code, and we are passing those at runtime through the pipeline.
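Roughly like this, simplified (the variable names are illustrative, not our exact code): a single resource definition that gets re-planned with different -var values on each run.

# Illustration of the pattern only; variable names are placeholders.
resource "azurerm_api_management_api" "api" {
  name                = var.api_name   # "demo-api" on run 1, "demo2-api" on run 2
  resource_group_name = var.resource_group_name
  api_management_name = var.apim_name
  revision            = "1"
  display_name        = var.api_display_name
  path                = var.api_path
  protocols           = ["https"]
}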

Without using different state-file locations, or having all resource definitions within the same code-base, you're basically applying changes to your current deployment, which includes deleting previous deployments. On the second run, Terraform compares the shared state (which still holds demo-api) against a configuration that now describes only demo2-api, so it plans to destroy demo-api.

Thanks for the quick response. You mean to say that we shouldn't destroy the previous state-file locations and should create a new state-file location for each and every deployment. Am I correct?

IMHO, there are two options.
a) Add all your resource configurations to the code-base and still use a single state file.
b) Use a distinct state file for each deployment. If a specific deployment has to be destroyed or updated, use the corresponding state file.
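For option (a), one way to keep every API in a single code-base and a single state file is to drive the resource from a map instead of run-time parameters (a sketch; variable and API names are illustrative, and var.resource_group_name / var.apim_name are assumed to be declared elsewhere):

variable "apis" {
  type = map(object({
    display_name = string
    path         = string
  }))
  default = {
    "demo-api"  = { display_name = "Demo API",  path = "demo" }
    "demo2-api" = { display_name = "Demo2 API", path = "demo2" }
  }
}

resource "azurerm_api_management_api" "api" {
  for_each            = var.apis          # one instance per map entry
  name                = each.key
  resource_group_name = var.resource_group_name
  api_management_name = var.apim_name
  revision            = "1"
  display_name        = each.value.display_name
  path                = each.value.path
  protocols           = ["https"]
}

Adding a new API then becomes a new map entry rather than a new pipeline run with different parameters. For option (b), the existing -backend-config="key=..." flag in your init step can be parameterized per API, so each deployment gets its own state file.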

Ref: State - Terraform by HashiCorp