Retrieving DNS name from aws_efs_file_system resource

I am using the following code to retrieve the DNS name of an AWS EFS file system. Validation passes and the plan is created fine, but when I run the apply it errors out. I looked for the /tmp log file the error mentions but could not find it, and since the instance was not created I was not able to SSH in to look at the logs on the remote host.

provisioner "remote-exec" {
  inline = [
    "sudo echo ${aws_efs_file_system.cluster_efs.dns_name} > /tmp/nfs.txt"
  ]
}
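
For context, that provisioner sits inside my aws_instance resource, roughly like the sketch below (the resource name, AMI user, and key path are stand-ins, since I have not shown that part of my config here):

resource "aws_instance" "head_node" {
  # ... ami, instance_type, key_name, etc.

  # remote-exec needs a connection block so Terraform can SSH in.
  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"            # assumed default user for the AMI
    private_key = file("~/.ssh/id_rsa") # assumed key path
  }

  provisioner "remote-exec" {
    inline = [
      "sudo echo ${aws_efs_file_system.cluster_efs.dns_name} > /tmp/nfs.txt"
    ]
  }
}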

I have the following resource:

----cut----
"root_module": {
  "resources": [
    {
      "address": "aws_efs_file_system.cluster_efs",
      "mode": "managed",
      "type": "aws_efs_file_system",
      "name": "cluster_efs",
      "provider_name": "aws",
      "schema_version": 0,
      "values": {
        "arn": "arn:aws:elasticfilesystem:ca-central-1:083230063072:file-system/fs-fc78d411",
        "creation_token": "cluster_efs",
        "dns_name": "fs-fc78d411.efs.ca-central-1.amazonaws.com",
        "encrypted": true,
        "id": "fs-fc78d411",
        "kms_key_id": "arn:aws:kms:ca-central-1:083230063072:key/387e89d0-53cd-42e0-a10f-06c0f956dd26",
        "lifecycle_policy": [],
        "performance_mode": "generalPurpose",
        "provisioned_throughput_in_mibps": 0,
        "reference_name": null,
        "tags": {
          "Name": "EfsExample"
        },
        "throughput_mode": "bursting"
      }
    },
----cut----

I have enabled TF_LOG=TRACE and set TF_LOG_PATH to a log file.

The error message from "terraform apply" looks like this:

Error: error executing "/tmp/terraform_229133998.sh": Process exited with status 1

The last few lines of the TF_LOG_PATH output corresponding to the error look like this:

2020/02/18 18:50:17 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2020/02/18 18:50:17 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate using fcntl flock
2020-02-18T18:50:17.210-0800 [DEBUG] plugin: plugin process exited: path=/home/nyue/projects/TerraformTutorial_git/mpi-ec2-compute-cluster/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.49.0_x4 pid=19861
2020-02-18T18:50:17.210-0800 [DEBUG] plugin: plugin exited
2020-02-18T18:50:17.217-0800 [DEBUG] plugin: plugin process exited: path=/home/nyue/anaconda3/envs/terraform/bin/terraform pid=19839
2020-02-18T18:50:17.217-0800 [DEBUG] plugin: plugin exited

Hi @nyue,

If you are trying to pass this value into an EC2 instance, I’d recommend using user_data instead of provisioners, because the data can then travel over a channel provided by the EC2 API rather than through all the complexity of connecting and logging in over SSH.

If you are using a common Linux distribution image with cloud-init installed (most do) then you can simply put a script to run on first boot directly in the user_data argument, and cloud-init will detect it and execute it for you:

resource "aws_instance" "example" {
  # ...

  user_data = <<-EOT
    echo '${aws_efs_file_system.cluster_efs.dns_name}' >/tmp/nfs.txt
  EOT
}

cloud-init runs this script early during the boot process, and it runs as root, so it should be able to write files anywhere on the system. cloud-init has some other capabilities too: if you configure it with its cloud-config YAML format instead of a plain shell script, you can ask it directly to create a file, giving more control over the ownership and permissions of that file:

  # JSON is valid YAML, so jsonencode can generate the cloud-config payload;
  # the "#cloud-config" marker line is how cloud-init recognizes the format.
  user_data = <<-EOT
    #cloud-config
    ${jsonencode({
      write_files = [
        {
          path        = "/tmp/nfs.txt"
          content     = aws_efs_file_system.cluster_efs.dns_name
          owner       = "root:root"
          permissions = "0755"
        },
      ]
    })}
  EOT

An additional advantage of using cloud-init rather than provisioners is that by default cloud-init keeps a log of everything it did during boot, which you can analyze if things aren’t working as you expect:

cloud-init analyze show

Using that, it will likely be easier to debug what’s going on with creating your file if you still see a similar problem after switching to user_data with cloud-init. This easier troubleshooting is one of several reasons why Terraform treats provisioners as a last resort.
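
If you need the raw logs rather than the timing analysis, cloud-init writes them to /var/log on the instance by default:

# stdout/stderr from your user_data script:
sudo tail -n 50 /var/log/cloud-init-output.log
# cloud-init's own activity log:
sudo less /var/log/cloud-init.log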

Thank you for the suggestion, Martin. I will give it a go.

Does referencing those variables still work if I am reading user_data from a file? e.g.

  user_data       = file("head_node_setup.sh")

Trying to understand when expansion/interpolation occurs.

Cheers

The file function interprets the content of the given file literally, so it will not perform any template processing.

However, the templatefile function does interpret the given file as a template, using the map given as its second argument to provide the data. So, for example, you could write this to pass the EFS DNS name to the template:

  user_data = templatefile("${path.module}/head_node_setup.sh", {
    efs_hostname = aws_efs_file_system.cluster_efs.dns_name
  })

Then the EFS hostname value in that template file would appear in expressions as efs_hostname instead of aws_efs_file_system.cluster_efs.dns_name.
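
For example, head_node_setup.sh might look something like this (a sketch; the actual commands are up to you):

#!/bin/bash
# Rendered by templatefile: ${efs_hostname} below is replaced with the
# EFS DNS name before cloud-init runs the script on first boot.
echo '${efs_hostname}' > /tmp/nfs.txt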

If you intend to use the cloud-config syntax instead of shell syntax, as in my second example in the previous comment, you can follow the advice under Generating JSON or YAML from a template by making your external file consist of the #cloud-config marker line followed by a single call to jsonencode:

#cloud-config
${jsonencode({
  write_files = [
    {
      path        = "/tmp/nfs.txt"
      content     = efs_hostname
      owner       = "root:root"
      permissions = "0755"
    },
  ]
})}

Thank you @apparentlymart, the templatefile approach worked for my use case. I will stick with templatefile rather than the jsonencode approach, because there are a number of yum-related commands I need to run and it feels more natural to run them via a script file.
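
In case it is useful to anyone else, the template ended up along these lines (a sketch; the package name and mount options are assumptions rather than my exact script):

#!/bin/bash
# head_node_setup.sh -- cloud-init runs this as root on first boot;
# templatefile substitutes ${efs_hostname} before the instance sees it.
yum install -y nfs-utils    # assumed package; amazon-efs-utils would also work
mkdir -p /mnt/efs
mount -t nfs4 -o nfsvers=4.1 '${efs_hostname}:/' /mnt/efs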