How best to use `import {}` for multiple environments?

So, we have a repo with several terraform.tfvars files. Each one holds all the values for the environment it deploys into. The environments are deliberately identical; keeping them that way is a business requirement, not just a technical one.

The pipeline runs the deployments in parallel, each path using one of the terraform.tfvars files.

To date, good or bad, this has worked well for us. Admittedly, it was a supplied “solution” that we never changed.

As part of upgrading to Terraform v1.5, we would like to use the import {} feature. What we have found is that our current setup is not ideal for this. In fact, terraform import would require a developer to do the work on their own host, so the change would not be peer-reviewed in the way we want everything to be.

By using import {}, combined with variables, we can do this sort of thing:

variable "id_of_the_thing_being_imported" {
  description = "The ID of the thing being imported"
  type        = string
}

import {
  to = aws_resource.imported
  id = var.id_of_the_thing_being_imported
}

And then each terraform.tfvars file has the appropriate entry:

  1. live_1_terraform.tfvars:
    id_of_the_thing_being_imported = "id-one"

  2. live_2_terraform.tfvars:
    id_of_the_thing_being_imported = "id-two"
    

And this is sort of where we’re stuck.

Firstly, the resources are really managed by modules, so module.some_name.resource_type.another_name is the more usual address pattern.
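For what it's worth, the import {} block does accept a module-qualified address as its target, so a sketch along these lines should work (the module and resource names here are placeholders):

```hcl
# Placeholder names: a module "some_name" that declares
# resource "aws_resource" "another_name" inside it.
import {
  # The target can be a resource address inside a module.
  to = module.some_name.aws_resource.another_name
  id = var.id_of_the_thing_being_imported
}
```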

Secondly, when we run terraform plan, we add the -generate-config-out= option. We can give each environment a unique filename, but the pipelines run from a clean checkout, so committing the generated file for each environment back to the repository would end up hard-coding the resources from all the environments into the repo.

Thirdly, where does moved {} come in with regard to import {}? Can we combine them in some way to help our situation?

What is the “expected” solution here?

We use IaC via VCS rather than have developers run terraform apply from their hosts. Peer review of code changes is important.

How should we set up parallel environments? Creating a separate repo for each environment, when the only real difference would be the terraform.tfvars files, just increases the maintenance burden for no real gain (at the moment).

Currently, the approach we’ve used for terraform import is to suspend the pipeline and have a trusted administrator do the work on their host. When all the imports have been done locally and the local plan results in no additional changes, that is pushed to master (without the pipeline running), a plan-only PR is made, verified as a no-op, and then merged. It is a lot of work.

From what we can tell, the import {} block is really for doing some of the work we currently do manually, not a full “one button” solution.

Is there anything we’re missing that can simplify the work?

Hi @rquadling,

One reason why import may not fit into a “pipeline” workflow like this is that it’s not clear what a pipeline workflow for import would be in general. The act of “importing” is typically a one-off action: something created before, or outside of, Terraform is brought under Terraform management, and then managed normally from that point onwards.

Config generation is also meant to assist with the adoption of existing infrastructure, but because Terraform cannot know exactly how a particular resource needs to be configured per the remote service, nor how it should relate to the rest of the configuration, it can only serve as a template to help the user build the actual config. In most cases the configuration is expected to already exist, especially when it’s within a module which is considered read-only during operation.

The moved block is used for refactoring modules, so while it can interact with import in that moved instances must be taken into account when creating the import plan, I’m not sure how it would factor into what you are doing.
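To illustrate what the moved block is for (the addresses below are placeholders): it records a refactor, such as relocating a resource into a module, so that Terraform updates the state address instead of destroying and recreating the object. Any import block then has to target the current, post-refactor address:

```hcl
# Placeholder addresses: this records that a resource previously at the
# root was moved into module "some_name" during a refactor, so Terraform
# treats it as the same object rather than destroy-and-recreate.
moved {
  from = aws_resource.example
  to   = module.some_name.aws_resource.example
}
```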

It sounds like what you are doing here is going to be outside of any “expected” solution. Can you explain more why you are continuously importing resources, rather than having Terraform create and manage the resources itself?

The main issue we have is the way state is managed by Terraform.

If I am on my host importing a resource (say, using terraform import as I have done), that “one-off” means I have to make absolutely sure no one runs a deployment, because the Terraform code is only local while the state is remote.

The main thing we keep finding is resources that already exist but were not managed by Terraform. If the cloud providers exposed a single state for ALL resources, then Terraform could easily create the resource blocks and we’d not need to worry about actual “importing”, as the resource and the state to manage it would already exist.

The dual state (the “physical” infrastructure, accessible via a hodgepodge of API calls, and the Terraform state, which is currently S3-backed) is part of the issue.

Now, I know this is NOT Terraform’s fault. The cloud providers each have their own way of doing things, and Terraform tries its very hardest to make all of them work in a consistent manner.

So with regard to importing, it really is still a human, one-off task that requires a suspension of deployments until the work is completed.

Happy with that. It was more about whether this can be done by a team, rather than one person.

Thanks @rquadling,

There’s not really any way around the fact that the act of importing requires modifying both the state and the configuration. Optimally the configuration is added before the actual import process happens, which means that importing will essentially be a normal plan and apply.

While it’s tempting to want the config generation to handle all the details for the user, there simply is not enough information for Terraform to produce a useful configuration in many cases. The usual workflow is: the user makes the config change to handle the import, tests it via the plan output, and adjusts the configuration until the import happens with no unexpected changes. Once satisfied with the proposed outcome, that config change can be planned and applied using the normal processes.
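As a sketch of that config-first workflow (using the placeholder aws_resource type from earlier in the thread): the resource block is written by hand to match the real object, paired with the import block, and iterated on until the plan shows only the import with no accompanying changes.

```hcl
# Hand-written to match the existing object; the attribute shown is a
# placeholder. Adjust until `terraform plan` shows the import and
# nothing else.
resource "aws_resource" "imported" {
  name = "the-existing-thing"
}

import {
  to = aws_resource.imported
  id = var.id_of_the_thing_being_imported
}
```

Because both blocks live in committed configuration, the whole change can go through a normal PR and a plan-only review before anything touches state.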

Just like with other multi-user changes to configuration, the workflow is better controlled outside of the Terraform CLI, since some sort of external coordination will always be required.