Variable management with tfe_variable requires applying twice

Hi there!

I’m running into a silly issue while managing and using a variable in the same Terraform project. Here’s how things look in my project:

resource "tfe_variable" "lambda_timeout" {
  key          = "lambda_timeout"
  value        = "120"
  category     = "terraform"
  workspace_id = var.workspace_id
  description  = "Lambda timeout in seconds"
}

variable "lambda_timeout" {
  type        = string
  description = "Lambda timeout in seconds"
}

resource "aws_lambda_function" "my_lambda_function" {
  function_name = "my_lambda_function"
  timeout = var.lambda_timeout
  ...
}

As you can see above, I provision a Lambda function in my AWS environment with a configurable execution timeout. I am using Terraform Cloud, but I want to manage the value of the variable from the code rather than directly through the TF Cloud UI. To do so, I am using the tfe_variable resource available in the tfe provider.

This actually works: I am able to control the value of my lambda_timeout variable from the code while also seeing the value in the UI, which is great.

After applying the above code, I end up in the following state:

  • my Terraform Cloud workspace has a lambda_timeout variable with a value of 120
  • my lambda function is set up with a 120-second timeout in my AWS environment

The issue I am now facing is when I want to change the timeout to, say, 300 seconds:

  • I update my code and set the value to 300 in my tfe_variable.lambda_timeout resource
  • I run terraform apply a first time to update the variable value in Terraform Cloud
  • I run terraform apply a second time to update my lambda function with the new timeout value

When I apply the first time, Terraform only detects the change to the variable value; it does not detect that this change should also update the Lambda function’s timeout in AWS. In hindsight this makes sense: var.lambda_timeout is populated from the value currently stored in the workspace, and that stored value is only updated by the apply itself, so the Lambda change only shows up in the next plan.

Not the end of the world, of course, but it’s something that’d be very annoying when running as part of an automation pipeline.

Is there any way to change the variable value and update the lambda timeout in a single terraform apply? Am I doing anything wrong? Thanks for helping :pray:

Hi!

Is there any way to change the variable value and update the lambda timeout in a single terraform apply?

There is indeed! You could reference the variable value directly:

resource "aws_lambda_function" "my_lambda_function" {
  function_name = "my_lambda_function"
  timeout = tfe_variable.lambda_timeout.value
  ...
}

The bigger takeaway here is (as you already identified) that you’re applying a configuration change to the environment in which this configuration is executed at the same time that you’re using that change to apply something else. In other words, you’re using the same configuration to provision both your Lambda function and the workspace it’s being provisioned from.

Instead, I would recommend one of the following:

  • Since you’re wanting to configure the function timeout in this same configuration, there’s not much utility in configuring a variable that you then use as a normal input variable - so perhaps you shouldn’t use a variable at all, and instead use the literal value in your Lambda resource.
  • OR, if you do want to use variables from the TFC workspace, separate these concerns: A common pattern is to use a ‘bootstrap’ or ‘meta’ workspace in Terraform Cloud with the tfe provider to provision your other workspaces! Provision workspace variables in one, to be used as input variables for this Lambda function in the other. Run Triggers are a way to automate changes from one to be picked up by the other, if that’s something you’re interested in (see the sketch after this list).
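
To illustrate the run-trigger idea, here’s a minimal, untested sketch (the workspace names and var.organization_id are hypothetical) of wiring two workspaces together so that an apply in the variables workspace automatically queues a run in the workspace that consumes them:

resource "tfe_workspace" "variables" {
  name         = "bootstrap-variables" # hypothetical upstream workspace
  organization = var.organization_id
}

resource "tfe_workspace" "lambda" {
  name         = "lambda-infrastructure" # hypothetical downstream workspace
  organization = var.organization_id
}

# Queue a run in the lambda workspace whenever the variables workspace applies
resource "tfe_run_trigger" "lambda_after_variables" {
  workspace_id  = tfe_workspace.lambda.id
  sourceable_id = tfe_workspace.variables.id
}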

Hope that helps!


Hey Chris!

Thanks a lot for the helpful reply. I clearly missed the obvious fact that I can use the value of the tfe_variable resource, which definitely solves my problem as I can now apply my changes in a single run.

The alternative patterns that you suggested are things I have also considered, but I believe they don’t fit what I am trying to achieve. I intentionally simplified our use case to expose the problem. Here’s what our setup actually looks like:

Our infrastructure
Amazon Web Services

We are provisioning our infrastructure (essentially Lambda / API Gateway / EventBridge) to two AWS accounts that represent our DEV and PROD environments. While the infrastructure is very similar in both accounts, it differs in some configuration that is environment-dependent.

As an example, we are provisioning a lambda function with an environment variable that is named RESOURCE_URL whose value depends on the environment.

Terraform-Cloud repository
Our bootstrap/meta workspace

As we are currently migrating from local Terraform with a shared backend in AWS S3 to Terraform Cloud, we have created a new repository that allows us to manage our workspaces in Terraform Cloud. Here’s how this looks for now:

// main.tf
provider "tfe" {
  token    = var.tfe_token
  version  = "~> 0.15.0"
}

// state.tf
terraform {
  required_version = ">= 0.13.5"

  backend "remote" {
    organization = "MyOrg"

    workspaces {
      name = "terraform-cloud"
    }
  }
}

// variables.tf
variable "tfe_token" {
  type        = string
  description = "The Terraform Cloud token used with the tfe provider"
}

variable "oauth_token_id" {
  type        = string
  description = "The OAuth Token ID for GitHub"
}

variable "organization_id" {
  type        = string
  description = "The organization id in Terraform Cloud"
}

variable "aws_access_key_id_dev" {
  type        = string
  description = "AWS_ACCESS_KEY_ID for DEV"
}

variable "aws_secret_access_key_dev" {
  type        = string
  description = "AWS_SECRET_ACCESS_KEY for DEV"
}

variable "aws_access_key_id_prod" {
  type        = string
  description = "AWS_ACCESS_KEY_ID for PROD"
}

variable "aws_secret_access_key_prod" {
  type        = string
  description = "AWS_SECRET_ACCESS_KEY for PROD"
}

variable "repositories" {
  description = "List of GitHub repositories configured with Terraform Cloud"
  default = [
    "my-first-repository",
    "my-second-repository",
    "my-third-repository",
  ]
}

// cloud-dev.tf

resource "tfe_workspace" "tfe_workspaces_dev" {
  for_each                  = toset(var.repositories)
  name                      = "${each.key}-dev"
  organization              = var.organization_id
  terraform_version         = "0.13.5"
}

resource "tfe_variable" "workspace_id_dev" {
  for_each     = toset(var.repositories)
  key          = "workspace_id"
  value        = tfe_workspace.tfe_workspaces_dev[each.key].id
  category     = "terraform"
  workspace_id = tfe_workspace.tfe_workspaces_dev[each.key].id
  description  = "Workspace id"
}

resource "tfe_variable" "environment_dev" {
  for_each     = toset(var.repositories)
  key          = "environment"
  value        = "dev"
  category     = "terraform"
  workspace_id = tfe_workspace.tfe_workspaces_dev[each.key].id
  description  = "Workspace environment"
}

resource "tfe_variable" "tfe_token_dev" {
  for_each     = toset(var.repositories)
  key          = "tfe_token"
  value        = var.tfe_token
  category     = "terraform"
  workspace_id = tfe_workspace.tfe_workspaces_dev[each.key].id
  description  = "The Terraform Cloud token used with the tfe provider"
  sensitive    = true
}

resource "tfe_variable" "aws_access_key_id_dev" {
  for_each     = toset(var.repositories)
  key          = "AWS_ACCESS_KEY_ID"
  value        = var.aws_access_key_id_dev
  category     = "env"
  workspace_id = tfe_workspace.tfe_workspaces_dev[each.key].id
  description  = "AWS access key id"
  sensitive    = true
}

resource "tfe_variable" "aws_secret_access_key_dev" {
  for_each     = toset(var.repositories)
  key          = "AWS_SECRET_ACCESS_KEY"
  value        = var.aws_secret_access_key_dev
  category     = "env"
  workspace_id = tfe_workspace.tfe_workspaces_dev[each.key].id
  description  = "AWS secret access key"
  sensitive    = true
}

// cloud-prod.tf

resource "tfe_workspace" "tfe_workspaces_prod" {
  for_each                  = toset(var.repositories)
  name                      = "${each.key}-prod"
  organization              = var.organization_id
  terraform_version         = "0.13.5"

  vcs_repo {
    identifier      = "MyOrg/${each.key}"
    branch          = "master"
    oauth_token_id  = var.oauth_token_id
  }
}

resource "tfe_variable" "workspace_id_prod" {
  for_each     = toset(var.repositories)
  key          = "workspace_id"
  value        = tfe_workspace.tfe_workspaces_prod[each.key].id
  category     = "terraform"
  workspace_id = tfe_workspace.tfe_workspaces_prod[each.key].id
  description  = "Workspace id"
}

resource "tfe_variable" "environment_prod" {
  for_each     = toset(var.repositories)
  key          = "environment"
  value        = "prod"
  category     = "terraform"
  workspace_id = tfe_workspace.tfe_workspaces_prod[each.key].id
  description  = "Workspace environment"
}

resource "tfe_variable" "tfe_token_prod" {
  for_each     = toset(var.repositories)
  key          = "tfe_token"
  value        = var.tfe_token
  category     = "terraform"
  workspace_id = tfe_workspace.tfe_workspaces_prod[each.key].id
  description  = "The Terraform Cloud token used with the tfe provider"
  sensitive    = true
}

resource "tfe_variable" "aws_access_key_id_prod" {
  for_each     = toset(var.repositories)
  key          = "AWS_ACCESS_KEY_ID"
  value        = var.aws_access_key_id_prod
  category     = "env"
  workspace_id = tfe_workspace.tfe_workspaces_prod[each.key].id
  description  = "AWS access key id"
  sensitive    = true
}

resource "tfe_variable" "aws_secret_access_key_prod" {
  for_each     = toset(var.repositories)
  key          = "AWS_SECRET_ACCESS_KEY"
  value        = var.aws_secret_access_key_prod
  category     = "env"
  workspace_id = tfe_workspace.tfe_workspaces_prod[each.key].id
  description  = "AWS secret access key"
  sensitive    = true
}

As you may have noticed above, we are using the CLI-driven Terraform workflow for our DEV workspaces (no vcs_repo block) and the VCS-driven workflow for our PROD workspaces.

Workspace-specific repository
Supports AWS resources for a given business domain

While we provision our workspaces in Terraform Cloud from the meta workspace as shown above, we only want to configure variables that are used across workspaces from there; workspace-specific variables are configured from the workspace-specific repositories.

Cross-workspace variables:

  • AWS Credentials
  • Terraform Cloud token
  • Environment name

Workspace-specific variables:

  • RESOURCE_URL
  • LOG_LEVEL

Here is how things now look in a workspace-specific repository:

// main.tf

provider "aws" {
  version = "~> 3.0"
  region  = "eu-central-1"
}

provider "tfe" {
  token    = var.tfe_token
  version  = "~> 0.15.0"
}

// state.tf

terraform {
  required_version = ">= 0.13.5"

  backend "remote" {
    hostname      = "app.terraform.io"
    organization  = "MyOrg"

    workspaces {
      prefix = "my-first-repository-"
    }
  }
}

// env.tf

locals {
  environment = tomap({
    dev = {
      log_level = "debug"
      resource_url = "my-dev-domain.com/api/resource"
    }
    prod = {
      log_level = "error"
      resource_url = "my-prod-domain.com/api/resource"
    }
  })

  log_level = local.environment[var.environment].log_level
  resource_url = local.environment[var.environment].resource_url
}

// variables.tf

variable "workspace_id" {
  type        = string
  description = "Workspace id"
}

variable "environment" {
  type        = string
  description = "Workspace environment"
}

variable "tfe_token" {
  type        = string
  description = "The Terraform Cloud token to be used"
}

resource "tfe_variable" "log_level" {
  key          = "log_level"
  value        = local.log_level
  category     = "terraform"
  workspace_id = var.workspace_id
  description  = "Log level"
}

resource "tfe_variable" "resource_url" {
  key          = "resource_url"
  value        = local.resource_url
  category     = "terraform"
  workspace_id = var.workspace_id
  description  = "Resource Url"
}

// my-lambda-function.tf

resource "aws_lambda_function" "my-lambda-function" {
  function_name = "my-lambda-function"
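  # role, handler, runtime, etc. omitted for brevity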

  environment {
    variables = {
      LOG_LEVEL = tfe_variable.log_level.value
      RESOURCE_URL = tfe_variable.resource_url.value
    }
  }
}

This is a snapshot of our current progress in the migration to Terraform Cloud. There are some obvious optimizations to be made (one is sketched below) and a few pain points, but this seems to be working and covers our requirements.
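
As an example of such an optimization, here’s an untested sketch of how the duplicated dev/prod resources in the meta workspace could be collapsed by iterating over every repository/environment pair (the resource and local names are made up for illustration; the prod-only vcs_repo block would additionally need a dynamic block or similar conditional):

locals {
  environments = ["dev", "prod"]

  # One entry per repository/environment pair, e.g. "my-first-repository-dev"
  workspace_pairs = {
    for pair in setproduct(var.repositories, local.environments) :
    "${pair[0]}-${pair[1]}" => { repository = pair[0], environment = pair[1] }
  }
}

resource "tfe_workspace" "all" {
  for_each          = local.workspace_pairs
  name              = each.key
  organization      = var.organization_id
  terraform_version = "0.13.5"
}

resource "tfe_variable" "environment" {
  for_each     = local.workspace_pairs
  key          = "environment"
  value        = each.value.environment
  category     = "terraform"
  workspace_id = tfe_workspace.all[each.key].id
  description  = "Workspace environment"
}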

To be clear, and to answer your comments: I also realize that having Terraform variables for log_level and resource_url is unnecessary, but we like the idea of being able to see the current value of these variables in the Terraform Cloud UI, especially when more than one engineer may be working on the same DEV environment in the same repository.


We’d be highly interested in your feedback on all this, and we’re happy to share our experiences. Cheers :slight_smile:


Interesting!

Thanks for sharing all this, it’s great to see how users are organizing things in different ways to meet their needs. In the end, if this is what works for you, that’s fantastic :ok_hand:

while we want to configure workspace-specific variables from workspace-specific repositories.

we like the idea of being able to see the current value of these variables in the Terraform Cloud UI

Given these two requirements, I would normally say that just putting the workspace-specific values in the configuration for that component would solve both, as the value would be present in plans, applies, and state in the Terraform Cloud UI for that workspace, for anyone to see.

However - and correct me if I’m wrong - it sounds like you might find a lot of value in seeing the value in something like the Variables tab, rather than having it buried in the State tab or having to click through the latest run.

I’m pleased to report that if that’s the case, there are some improvements we’re planning that may give you exactly the sort of visibility you’re looking for here, and without provisioning a workspace variable just for this purpose. Stay tuned!


As an aside, I noticed that you’re using a very old version (relatively speaking) of the TFC/E provider in your configuration examples (0.15.x); I would wager this is likely because the project README stated 0.15 in its “Installation” example for the longest time and was never updated on each new release. This has since been corrected, and I strongly recommend you use the latest version (0.24.x at this moment) when you can! There are several major enhancements and a few breaking-change releases between those two versions.

Cheers! :grinning_face_with_smiling_eyes:


Hey @chrisarcand!

Yes, that’s exactly what I want, and it’s actually what I get by managing the remote variables with tfe_variable.

resource "tfe_variable" "log_level" {
  key          = "log_level"
  value        = local.log_level
  category     = "terraform"
  workspace_id = var.workspace_id
  description  = "Log level"
}

resource "aws_lambda_function" "my_lambda_function" {
  function_name = "my-lambda-function"

  environment {
    variables = {
      LOG_LEVEL = tfe_variable.log_level.value
    }
  }
}

I can indeed see the value of the log_level variable in the Variables tab.

And I can see it being applied to my lambda function’s environment in the plan / apply runs:

Terraform will perform the following actions:

  # aws_lambda_function.my_lambda_function will be updated in-place
  ~ resource "aws_lambda_function" "my_lambda_function" {
       
      ...

      ~ environment {
          ~ variables = {
              ~ "LOG_LEVEL" = "info" -> "debug"
            }
        }

    }

  # tfe_variable.log_level will be updated in-place
  ~ resource "tfe_variable" "log_level" {
        category     = "terraform"
        description  = "Log level"
        hcl          = false
        id           = "xxxxxxxxxxxxxx"
        key          = "log_level"
        sensitive    = false
      ~ value        = (sensitive value)
        workspace_id = "xxxxxxxxxxxxxx"
    }

All working as expected. The only thing I’m currently worried about is a warning about values for undeclared variables that has started appearing in my runs.

Should I declare the log_level variable to get rid of it, even though I never actually use var.log_level?

// This variable declaration is missing from my Terraform project
variable "log_level" {
  type        = string
  description = "Log level"
}

Should I declare the log_level variable to get rid of it, even though I never actually use var.log_level?

I personally would, yes - because any warnings or deprecations are noise that I never want to see in my output, but also because the warning holds true: these TFC variables [that you’re using directly from their values in state, and not as true input variables] are provided to the run environment in TFC and will indeed error in a future release as they aren’t declared. :+1:t2:

I’m pleased to report that if that’s the case, there are some improvements we’re planning that may give you exactly the sort of visibility you’re looking for here, and without provisioning a workspace variable just for this purpose. Stay tuned!

@LaurentEsc I happened upon this thread again and want to share that the feature I was hinting at here is now generally available; within the Workspace UI, we now list the workspace’s [non-sensitive] state outputs!

Given this, if visibility into non-sensitive data is still your main motive, I’d now just output the values you’re interested in and view them there.
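
For example (a minimal sketch, assuming the locals from your env.tf):

// outputs.tf

output "log_level" {
  value       = local.log_level
  description = "Log level applied to this environment"
}

output "resource_url" {
  value       = local.resource_url
  description = "Resource URL applied to this environment"
}

These values would then show up as state outputs in the workspace UI, with no tfe_variable resources needed purely for visibility.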

Cheers!