Terraform plan wants to create multiple existing resources

Hello guys,

A few weeks ago I started building the environment for one of our projects. At the beginning the idea was to keep the basic infrastructure in the root folder of the project. Now we want to use modules. For this purpose I moved the files into the modules folder and commented out the imported resource blocks. In the root folder I have the same files as in the modules/project folder, but they contain just the module blocks, like this:

```hcl
module "app_role" {
  source = "../modules/project"

  providers = {
    aws.provider = aws.provider
  }

  somevariable = var.somevariable
}
```

My folder structure is:

```
├── modules
│   └── project
│       ├── locals.tf
│       ├── main.tf
│       ├── outputs.tf
│       ├── secrets.tf
│       └── variables.tf
├── run.sh
├── st
│   ├── imports.tf
│   ├── locals.tf
│   ├── main.tf
│   ├── providers.tf
│   ├── roles.tf
│   ├── secrets.tf
│   └── variables.tf
```

For the backend I use S3. When I run `terraform state list` I see multiple modules and data sources, but no managed resources:

```
data.aws_acm_certificate.ssl
data.aws_elastic_beanstalk_hosted_zone.current
module.aws_route53_record.data.aws_acm_certificate.ssl
module.aws_route53_record.data.aws_elastic_beanstalk_hosted_zone.current
module.eb_environment.data.aws_acm_certificate.ssl
module.eb_environment.data.aws_elastic_beanstalk_hosted_zone.current
module.eb_iam_role.data.aws_acm_certificate.ssl
module.eb_iam_role.data.aws_elastic_beanstalk_hosted_zone.current
module.eb_client_api.data.aws_acm_certificate.ssl
module.eb_client_api.data.aws_elastic_beanstalk_hosted_zone.current
module.iam.data.aws_acm_certificate.ssl
module.iam.data.aws_elastic_beanstalk_hosted_zone.current
```

When I run `terraform get -update`, `terraform init`, and `terraform plan`, Terraform proposes to create multiple resources, but they already exist:

```
module.secretsmanager_secret.aws_iam_instance_profile.app_instance_profile will be created
module.secretsmanager_secret.aws_iam_role.app_role will be created
module.secretsmanager_secret.aws_route53_record.client will be created

Plan: 70 to add, 0 to change, 0 to destroy.
```

How can I fix this problem?

Regards,

Hi @ivaylo.bumbovski,

You haven’t provided enough information to know exactly what has happened here, but the output summary of

Plan: 70 to add, 0 to change, 0 to destroy

means that those resources are not in your state for whatever reason. I don’t know if the state was deleted, or the resources were actually destroyed by something else you may have done.

The best advice I can offer is to roll back the state and config to see if you can recover from where you left off originally.

If you have more questions, we'll need the exact steps you took to get into this situation. When refactoring the configuration, you will want to run `terraform plan` between each step to carefully verify that only the changes you expect are going to be applied.

Hi @jbardin,

The steps I took were to add the *.tf files to the root folder of the project, import the existing resources, and modify the imported configuration to match them. This is not a new project, but it never had a state file before, because a previous colleague forgot to configure the S3 backend.

After that I moved the configuration files into the “modules/project” folder and started building the modules. Terraform didn’t ask me to destroy or change any of the existing resources. I read on the internet that I can restore the state file, but for the changes to take effect I would have to change the DynamoDB settings and unlock the state file, which I don’t want to do because the lock table is used by other projects.

Regards,

If you moved the resource configurations into modules, Terraform would then have planned to destroy the old addresses and create new ones. To fix this you should have added the corresponding `moved` blocks to keep the existing state intact and associate it with the new config addresses. If that was never done, I can’t say how you lost the prior state of the resources.
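For illustration, a `moved` block maps an old state address to its new module address so the plan is a no-op rather than a destroy/create. The addresses below are hypothetical, based on the names appearing earlier in this thread:

```hcl
# Hypothetical example: the role previously lived at the root address
# aws_iam_role.app_role and now lives inside module "app_role".
moved {
  from = aws_iam_role.app_role
  to   = module.app_role.aws_iam_role.app_role
}
```

You need one `moved` block per resource that changed address; they can be removed once every workspace using the configuration has applied the refactoring.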

If you can’t recover the prior state, the only way to move forward is to import the existing resources into the new config locations.
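If you are on Terraform 1.5 or later, this re-import can be done declaratively with `import` blocks, so the import shows up in the plan before anything changes. The address and ID below are illustrative, not taken from your configuration:

```hcl
# Hypothetical example: bind an existing IAM role to its new module address.
# "id" is the real-world identifier AWS knows the resource by (the role name here).
import {
  to = module.app_role.aws_iam_role.app_role
  id = "app_role"
}
```

On older Terraform versions the equivalent is the CLI form, e.g. `terraform import 'module.app_role.aws_iam_role.app_role' app_role`, run once per resource.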

I will try to use the moved blocks tomorrow and see if the results are satisfactory. Today I moved the configurations back from the modules/project folder into the root folder and ran plan, and now I only have to update 3 resources. Most probably I will apply these changes soon and then move the configurations back to the modules folder using `moved` block refactoring declarations.

Thank you very much for your advice!

Regards,

Hi @jbardin

Reverting to the last known-good Terraform state file from S3 did the trick, and I was able to move the resources into the modules with the `terraform state mv` command.
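For anyone following along, `terraform state mv` rewrites the address of an existing state entry without touching the real infrastructure; the addresses below are illustrative:

```sh
# Preview the move first; -dry-run reports what would change without modifying state.
terraform state mv -dry-run aws_iam_role.app_role module.app_role.aws_iam_role.app_role

# Then perform the move for real, one resource at a time.
terraform state mv aws_iam_role.app_role module.app_role.aws_iam_role.app_role
```

With a remote S3 backend this writes a new state version, so the previous one remains available for rollback if something goes wrong.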

Do you know how to use the same modules for the other working environment?

I’m not sure what you mean by that, other than that modules are how you compose pieces of configuration. You can source the same module into multiple configurations, and there are thousands of example public modules in the Terraform Registry.

Never mind. All is sorted out. Thanks again for the hint about the moved block.

Regards,