Conditional creation

Hello,
this is the old subject of conditionally creating a resource only if it does not already exist.
In my situation, I'm attempting to create a key pair if none exists, so that it can be used in an EC2 resource.

The issue:
I'm getting "no matching EC2 Key Pair found" and the workflow terminates.

 Error: no matching EC2 Key Pair found
│ 
│   with data.aws_key_pair.proj_key,
│   on main.tf line 10, in data "aws_key_pair" "proj_key":
│   10: data "aws_key_pair" "proj_key" {
│ 

Expectation:
The workflow should continue with the key creation.
Question:
What am I doing wrong and how do I fix it?

Code:


data "aws_key_pair" "proj_key" {
  key_name = "Proj_key"
}

resource "aws_key_pair" "vm_key" {
  count = length(data.aws_key_pair.proj_key.id) != 0 ? 0 : 1
  key_name   = "cc" #var.public_key_file_name
  public_key = file("variables.tf")
  #  public_key = file(var.public_key_file_name)
}

module "vm" {
  source        = "../../modules/ec2"
  instance_type = var.instance_type
  key_name   = length(data.aws_key_pair.proj_key.id) != 0 ? data.aws_key_pair.proj_key.key_name : aws_key_pair.vm_key[0].key_name

(Related: How to Use Conditional Expressions to Create a Resource if it Does Not Exist – HashiCorp Help Center)

Hi @AndrewZ,

I think the missing part of your current solution is the means for the caller of this module to specify whether to use an existing object or to create a new one.

A typical way to express that is an optional input variable which can either specify a public key string or can be left as null to tell the module that it should expect to find an existing key with the given name.

variable "key_name" {
  type = string
}

variable "public_key" {
  type    = string
  default = null
}

The idea in the above is that key_name is always required while public_key is optional. If var.public_key is set then that means that the caller intends to declare a new key pair object named after var.key_name. If var.public_key is left unset then that means the caller intends to use an already-created key with the name given in var.key_name.

The rest of the module could then look like this:

data "aws_key_pair" "proj_key" {
  count = var.public_key != null ? 0 : 1

  key_name = var.key_name
}

resource "aws_key_pair" "vm_key" {
  count = var.public_key != null ? 1 : 0

  key_name   = var.key_name
  public_key = var.public_key
}

locals {
  # local.key_name will always match var.key_name but
  # additionally depends on both of the resources
  # above, so that Terraform can see that the module
  # below depends on these resources.
  key_name = one(concat(
    data.aws_key_pair.proj_key[*].key_name,
    aws_key_pair.vm_key[*].key_name,
  ))
}

module "vm" {
  source = "../../modules/ec2"

  instance_type = var.instance_type
  key_name      = local.key_name
}

The error message you showed indicates that your data block is currently in your root module, which means that the decision about whether or not to set var.public_key would be made on the command line or in the .tfvars files, rather than in a calling module block. Therefore you might prefer to change var.public_key to instead be var.public_key_file and pass the path into the file(...) function to make it more like your original example. The decision between passing in key data directly or a path to a file containing a key isn’t really important for this question, so I showed the simpler form above.
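If you did go the file-based route, a minimal sketch might look like this (the var.public_key_file name is just an assumption for illustration, not something from your original configuration):

variable "public_key_file" {
  type    = string
  default = null
}

resource "aws_key_pair" "vm_key" {
  count = var.public_key_file != null ? 1 : 0

  key_name   = var.key_name
  public_key = file(var.public_key_file)
}

The data block's count condition would then test var.public_key_file instead of var.public_key.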

The crucial part is that both the resource "aws_key_pair" block and the data "aws_key_pair" block have opposite count expressions, so that exactly one of them will be declared at any time. Notice that the 0 and 1 results are inverted in each to create that effect.

This is different from your example because your data block didn't have a count argument at all, and so you told Terraform to always require an existing object. The count expression in your resource block also wasn't quite correct, because you were comparing against the length of the data resource's id string rather than the number of instances of that data resource.

Thank you @apparentlymart for the detailed explanation. It occurred to me that I had over-complicated my particular situation. Nevertheless, the explanation is very useful to me, as I will undoubtedly run into a similar situation again.

The only situation where I need to determine whether the object exists is after an import operation.
In that case, what should the proper course of action be?
I obviously do not want to manually switch a variable on/off every time I deploy the code to a different environment.

The length expression is what I added based on the error message, which said that I have to use length since count is not available.
Is this the proper application of length:

length(aws_key_pair.vm_key[0])

This is exactly what you need to do.

You would set the variables @apparentlymart suggested to different values in each environment, depending on whether you want to create a new key or use an existing one.
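
For example, with per-environment .tfvars files (the file names and the key material below are made up, purely to illustrate):

# dev.tfvars – this environment creates its own key pair
key_name   = "Proj_key"
public_key = "ssh-ed25519 AAAA... dev"

# prod.tfvars – this environment uses the key pair that already exists,
# so public_key is left unset and falls back to its null default
key_name = "Proj_key"

You would then select the right file per environment, e.g. terraform apply -var-file=dev.tfvars.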

An alternative approach is to have the code always create the key (so you remove the data source entirely), and then for environments where the resource already exists you'd use terraform import. Note that this option is only suitable if nothing else is managing the key, as doing an import tells everyone that Terraform is now the sole owner and maintainer of the resource.
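
For a key pair, the one-off import in an environment that already has one might look something like this (assuming the resource block no longer uses count and the existing key is named Proj_key; the AWS provider imports aws_key_pair resources by key name):

terraform import aws_key_pair.vm_key Proj_key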

Indeed, @stuart-c’s alternative of removing the data block is a plausible one if you want to sometimes issue a key outside of Terraform and import it only during the initial set-up of your new environment. In that case, Terraform will believe that it is always managing that object, but then in some cases you will use terraform import to bring an existing object under management as if Terraform had been the one to create it, and then Terraform will manage it moving forward.

As @stuart-c noted, this approach is appropriate only if each environment will have its own key, because Terraform assumes that each resource block has exclusive ownership of the object it's bound to. If you intend to share the same key object between any two environments then the conditional solution we originally discussed would be the appropriate answer.

Fortunately, for key pair objects in particular it's typically not a problem to have more of them: the key material is provided from outside anyway, so there isn't anything stopping you from creating multiple key pairs that all store the same public key data. That way you can meet the requirement of having a separate object per environment without necessarily issuing separate key material for each environment.
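
As a rough sketch of that idea (var.environment and the key file path here are assumptions for illustration):

# One aws_key_pair object per environment, all loading the same public key material
resource "aws_key_pair" "vm_key" {
  key_name   = "Proj_key-${var.environment}"
  public_key = file("${path.root}/keys/shared.pub")
}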

@stuart-c and @apparentlymart, thank you for the information. I now understand the conceptual approach better.