Using local_file resource with Terraform cloud

I’m trying to use Terraform Cloud for remote state storage and runs. I start a few servers and then use the outputs to generate an Ansible inventory file with a template and a local_file resource.
This works great when running locally, but when I try it with the remote backend I get a permission denied error. Is there a way to get the file written on the local machine when triggering runs from the CLI?

This is the resource code that I wrote, which works locally:

resource "local_file" "inventory" {
  content = templatefile("${path.module}/templates/hosts.tpl",
    {
      worker      = module.worker.*.instance_ip
      coordinator = module.coordinator.*.instance_ip
    }
  )
  filename = "../hosts.generated"
}

And the error I get when trying to run with remote backend:

local_file.inventory: Creating...
╷
│ Error: open ../hosts.generated: permission denied
│ 
│   with local_file.inventory,
│   on main.tf line 55, in resource "local_file" "inventory":
│   55: resource "local_file" "inventory" {
│ 
╵

I just found out you can switch the workspace to Local Execution Mode in Settings → General. But will this work as intended when using version control integration? Is there a more proper way to handle an Ansible inventory using Terraform Cloud?

Hi @Alko89,

I expect this is happening because the path ../ traverses outside of the Terraform configuration directory and into some system directory in the remote execution environment which, indeed, isn’t writable by the Terraform process.

However, I would expect this to work if you specify a path that’s inside the configuration tree somewhere. If you know there will only ever be one instance of this module then you could use ${path.module}/hosts.generated, but that’s problematic if the configuration is inside a module which uses count or for_each, because all of the instances of the module will share the same path.module. In that shared-module situation I’d suggest instead having the calling module pass in the path to use, so that it can systematically assign a different path to each instance.
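For example, a minimal sketch of that calling-module approach might look like this, with a hypothetical `inventory_path` input variable on the shared module:

```hcl
# In the shared module: accept the target path from the caller
variable "inventory_path" {
  type        = string
  description = "Path where the generated Ansible inventory should be written"
}

resource "local_file" "inventory" {
  content = templatefile("${path.module}/templates/hosts.tpl",
    {
      worker      = module.worker.*.instance_ip
      coordinator = module.coordinator.*.instance_ip
    }
  )
  filename = var.inventory_path
}
```

The calling module can then give each instance a distinct path, e.g. `inventory_path = "${path.root}/hosts-${each.key}.generated"` when using for_each.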

Of course the other thing to consider here is that you don’t have direct access to the remote filesystem in Terraform Cloud, and the directory in question is ephemeral and discarded immediately after terraform apply is finished, and so depending on your goals here creating a local file might not be useful, and so a different strategy that works only in-memory inside Terraform might work better.

If you’re just reading the file back in elsewhere in the same Terraform configuration then that should work okay, although it’s still a bit clunky and so I’d typically still try to find a solution which avoids using a temporary file to pass data between two parts of the same Terraform configuration.

I’m not reading the file back in Terraform. I’m constructing an inventory for use in Ansible playbooks. A remote filesystem doesn’t help me much here.

There are a few options I found:

  • One is to run everything locally and use the cloud only for storage, which I guess is not ideal.

  • Create outputs for all addresses in the different modules and use a single module, run locally, to construct the inventory. This allows all other modules to run with the remote backend and requires only one to run locally.

  • Develop a service that will query remote states and construct the inventory.

Personally I’m leaning toward the second option.
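A minimal sketch of what that second option could look like, assuming a hypothetical `example-org` Terraform Cloud organization, a `servers` workspace, and output names `worker_ips` / `coordinator_ips` exposed by the other configuration:

```hcl
# Locally-run inventory module: read addresses from the remote workspace state
data "terraform_remote_state" "servers" {
  backend = "remote"

  config = {
    organization = "example-org"
    workspaces = {
      name = "servers"
    }
  }
}

resource "local_file" "inventory" {
  content = templatefile("${path.module}/templates/hosts.tpl",
    {
      worker      = data.terraform_remote_state.servers.outputs.worker_ips
      coordinator = data.terraform_remote_state.servers.outputs.coordinator_ips
    }
  )
  filename = "${path.module}/hosts.generated"
}
```

Because this module runs locally, the local_file write succeeds, while the workspaces that actually manage the servers keep running remotely.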

Hi @Alko89,

Indeed, for what you described an output value does seem like the most straightforward option, particularly if your account has access to the remote state for this workspace so you can more easily retrieve the content for use in another command:

output "ansible_inventory" {
  value = templatefile("${path.module}/templates/hosts.tpl",
    {
      worker      = module.worker.*.instance_ip
      coordinator = module.coordinator.*.instance_ip
    }
  )
}

Then for whatever step you run next in order to put this data where Ansible can use it, you can use terraform output -raw ansible_inventory to get the raw content of that output value, e.g. to redirect it to a file, or pipe it into another command, or similar.
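For example, a follow-up step on a machine with access to the workspace might look like this (playbook name is just an illustration):

```shell
# Render the output value to a file for Ansible to consume
terraform output -raw ansible_inventory > hosts.generated

# Then run the playbook against the generated inventory
ansible-playbook -i hosts.generated site.yml
```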


Yes, I went with the second option and I’m quite happy with the result. The inventory module doesn’t need a remote state of its own, it has access to all the remote states from other projects, and I can potentially run it on multiple machines where I want to generate the inventory.

I guess this is the way to go, or maybe you can recommend a more integrated option? I’ve been having a look at Nomad, but some of our services require a dedicated machine, and as far as I understood Nomad deploys services on any server that has sufficient resources available, so in that case Ansible seems like a better choice.

Hi @Alko89,

Nomad isn’t really my area of expertise, so I don’t have a definite answer to your question, but I can say that Nomad offers a constraint stanza which allows you to define rules for where a particular job might be deployed, and I expect you could define a very specific constraint that would effectively match only one node, such as by constraining by hostname.
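For illustration, a constraint pinning a job to a single node by hostname might look something like this (job structure and hostname are assumptions, not a tested configuration):

```hcl
job "dedicated-service" {
  group "app" {
    # Restrict placement to the one node with this hostname
    constraint {
      attribute = "${attr.unique.hostname}"
      value     = "db-server-01"
    }

    # ... task definitions ...
  }
}
```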

With that said, if you’d like to discuss that further I’d recommend starting a new topic in the Nomad category of this forum, because the folks there are much more qualified to give suggestions like that!


Hi,

I am running into the same problem with GitHub Actions. I am running Terraform via the Terraform Cloud API and Ansible via a Docker image in GitHub Actions.

Is there any way I can generate the hosts file from Terraform and pass it back into the repo from Terraform Cloud?

Any help would be much appreciated.

EDIT:

I have replaced local_file with github_repository_file. This creates the file in a new repo for Ansible to run in GitHub Actions, but I was not able to pass it along directly because of the tfstate.
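For reference, that replacement might look something like this (repository name and branch are assumptions; the content expression mirrors the earlier local_file example):

```hcl
# Commit the rendered inventory into a Git repository instead of the
# ephemeral remote-run filesystem
resource "github_repository_file" "inventory" {
  repository = "ansible-inventory"
  branch     = "main"
  file       = "hosts.generated"

  content = templatefile("${path.module}/templates/hosts.tpl",
    {
      worker      = module.worker.*.instance_ip
      coordinator = module.coordinator.*.instance_ip
    }
  )
}
```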