How to do a remote-exec operation if I am using Terraform Cloud

Hi All,
I am trying to do a remote-exec like the one below for an EC2 instance:

connection {
  type        = "ssh"
  user        = "ubuntu"
  private_key = file("private_keys/${var.keypair_name}.pem")
  host        = aws_instance.bastion[0].public_ip
}

provisioner "file" {
  source      = "script/install.sh"
  destination = "/home/ubuntu/install.sh"
}

provisioner "remote-exec" {
  inline = [
    "chmod +x /home/ubuntu/install.sh"
  ]
}

When I was running Terraform operations from my local machine, I could store the private key locally and point to it as above.

But now I have moved to Terraform Cloud, where I am mapping my GitHub repo and branch to the workspace. The problem is that, as a best practice, I cannot commit my private key along with my code. Is there any other way to do a remote-exec from Terraform Cloud without adding my .pem file as part of the code?


Did you find a solution?

Team, has anyone found a solution to the same problem? We are also using the key to connect initially and for local runs. Now that we are in Terraform Cloud, what is the best approach?

This sort of key-management complexity is why Provisioners are a Last Resort: it’s complicated to strike a suitable security compromise where the necessary keys are available only in the situations where they are needed, without becoming an ongoing security liability.

Instead, the situation in the original comment here seems like a great example of a problem that can be solved by using an AMI that has cloud-init installed (which is true of most mainstream Linux distribution images, including the official Ubuntu ones) and then passing a configuration to cloud-init via EC2’s “user data” mechanism.

This approach is superior because the configuration data passes into your EC2 instance indirectly, through the EC2 control plane, rather than directly over the network using SSH. That removes the need for Terraform to access the system at all, and can avoid issuing a keypair for the system entirely if you are using an immutable-infrastructure approach where the system needs no further manual maintenance once it is running.

Since the original example just mentioned a shell script without any context about what it does, I can’t be sure that its work isn’t already a built-in feature of cloud-init that can be activated through declarative configuration. In the general case, though, you can typically send just a shell script to cloud-init and it will run that script on first system boot, so the most direct translation of the original example in this topic would be the following:

resource "aws_instance" "example" {
  # ...

  user_data = file("${path.module}/script/install.sh")
}
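As a variation, if the script’s work maps onto built-in cloud-init modules, a declarative #cloud-config document can replace the shell script entirely. The following is only a sketch, assuming install.sh does nothing more than install and start a package; the nginx package here is a placeholder:

resource "aws_instance" "example" {
  # ...

  # cloud-init recognizes the "#cloud-config" header and treats the body
  # as declarative configuration rather than a script to execute.
  user_data = <<-EOT
    #cloud-config
    packages:
      - nginx # placeholder; substitute whatever install.sh actually installs
    runcmd:
      - systemctl enable --now nginx
  EOT
}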

There’s a full tutorial on using cloud-init on EC2 with Terraform in Provision Infrastructure with Cloud-init, on HashiCorp Learn.

Thanks for the detailed reply. However, we have existing hybrid Terraform environments with multiple provisioners, where Chef is a core component of post-deployment tasks, including app deployment.

We are looking to migrate with minimum effort. Most of the work is already done as part of a POC; we are just deciding the best way to handle the key for the initial connection. Hence the question.

If you need to keep using provisioners for a transitional period after adopting Terraform Cloud, you could adopt a strategy of having your Terraform configuration generate and use a single-purpose SSH key that exists only for provisioning, using the tls_private_key resource type from the hashicorp/tls provider:

terraform {
  required_providers {
    tls = {
      source = "hashicorp/tls"
    }
  }
}

resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "example" {
  key_name   = "example"
  public_key = tls_private_key.example.public_key_openssh
}

resource "aws_instance" "example" {
  # ...
  key_name = aws_key_pair.example.key_name

  connection {
    # ...
    private_key = tls_private_key.example.private_key_pem
  }
}

With the above configuration, the hashicorp/tls provider will generate a new RSA keypair as part of creating tls_private_key.example, and then the remaining resource configurations will register the public part of that keypair with AWS and then use the private part to access the instance over SSH.

This will result in your private key being in the Terraform state, so be sure to configure your workspace appropriately to control who has access to the state. You may wish to arrange for your provisioner script to revoke the temporary key as part of its work (by deleting the public key file generated during the system boot process) so that it will become useless immediately after creation is complete.
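For example, here is a sketch of how the file/remote-exec steps from the original question could be wired up to this generated key and then revoke it as their final action. This assumes the official Ubuntu AMI conventions, where cloud-init installs the registered public key into /home/ubuntu/.ssh/authorized_keys at first boot:

resource "aws_instance" "example" {
  # ...
  key_name = aws_key_pair.example.key_name

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = tls_private_key.example.private_key_pem
    host        = self.public_ip
  }

  provisioner "file" {
    source      = "script/install.sh"
    destination = "/home/ubuntu/install.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /home/ubuntu/install.sh",
      "/home/ubuntu/install.sh", # presumably the script is then executed
      # Remove the temporary public key so it cannot be used again after
      # provisioning completes; the current SSH session stays alive.
      "rm -f /home/ubuntu/.ssh/authorized_keys",
    ]
  }
}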

I’m suggesting this only as a way to keep using provisioners as an interim step on the path to removing the use of provisioners in a later step.

Yes, this is one of the approaches suggested. Thanks. We are also thinking of baking a username/password into the AMI itself, so that no keys are used at all.

Surely you can add the private key as a variable in Terraform Cloud and then use that? E.g.

variable "tf_cloud_ssh_private_key" {
  type        = string
  description = "(optional) Private SSH key defined as a workspace variable in terraform cloud"
  default     = null
}

connection {
  type        = "ssh"
  user        = "ubuntu"
  private_key = var.tf_cloud_ssh_private_key != null ? var.tf_cloud_ssh_private_key : file("private_keys/${var.keypair_name}.pem")
  host        = aws_instance.bastion[0].public_ip
}
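One caveat with this approach: be sure to mark the variable as sensitive in the Terraform Cloud workspace so the key is write-only in the UI (and consider sensitive = true in the variable block, as above). Multi-line PEM content can normally be pasted straight into the variable’s value field.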