I have upgraded Terraform from version 0.15.1 to 1.0.11 on an AWS Linux EC2 instance that I use for deployments. I have separate environments for dev, stage, prod, etc. After the upgrade I ran terraform init and terraform init -upgrade in each environment. Now when I run terraform plan I get:
“Note: Objects have changed outside of Terraform”
“Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.”
“No changes. Your infrastructure matches the configuration.”
“Your configuration already matches the changes detected above. If you’d like to update the Terraform state to match, create and apply a refresh-only plan: terraform apply -refresh-only”
The proposed changes are shown with yellow ~ (modifications) and green + (additions).
Has anyone encountered this situation?
Terraform is reporting external changes made to the resources (either from the state of the remote service, or caused by the provider itself). Have you tried the directions shown in the output to refresh the state?
I haven’t tried anything yet. The truth is that no one made any changes to the Terraform files or the infrastructure. My only guess is that terraform init -upgrade added new properties to the modules, and it needs to update those.
The changes could be caused by a few things, but note the exact phrase “Objects have changed outside of Terraform”: these changes are not related to the configuration. They only show differences between the stored state and the latest values read from the provider.
An updated provider may contain changes to a particular resource’s schema, or just some normalization of the stored data, which needs to be updated with the next apply. Because these are changes outside of Terraform, there is nothing for Terraform to do aside from storing the new values; it is only reporting the differences in case there are unexpected critical changes, or in case the changes trigger other changes in the configuration.
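As a side note, if some of those externally detected differences are on attributes you deliberately do not want Terraform to track, the ignore_changes lifecycle argument mentioned in the plan output can suppress them. A minimal sketch, where the resource and bucket names and the choice of the tags attribute are hypothetical examples, not taken from this thread:

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # hypothetical name

  lifecycle {
    # Differences in tags made outside Terraform will no longer
    # appear as changes in the plan for this resource.
    ignore_changes = [tags]
  }
}
```

This only hides specific attribute drift; it would not be the right tool for the schema-normalization differences described above, which simply need to be stored on the next apply.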
Hello again jbardin,
I found a workaround to fix that without running terraform apply -refresh-only.
- I created an S3 bucket in a Terraform file, s3.tf, which I use for AWS S3 resources. (I have a separate .tf file for each type of AWS resource I need.)
- Ran terraform apply to create the S3 bucket.
- Commented out the bucket created in step 1.
- Ran terraform apply again to delete the bucket.
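For reference, the temporary resource in step 1 could look something like this; the resource label and bucket name are hypothetical, since the thread does not show the actual s3.tf contents:

```hcl
# Temporary bucket added to s3.tf, applied, then commented out and applied again.
# The point is not the bucket itself: each plain `terraform apply` also performs
# the refresh step, which stores the updated values in state as a side effect.
resource "aws_s3_bucket" "tmp_refresh_trigger" {
  bucket = "my-temporary-refresh-bucket" # hypothetical name
}
```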
Now when I run terraform plan, the message no longer appears.
Did the creation and deletion of the bucket imply the -refresh-only action?
Thank you for all your comments!
I appreciate it!
If you run terraform apply with no options, then by default it will perform both the “refresh” actions and the planning actions.
-refresh-only disables the planning actions so you can resynchronize the Terraform state with the outside changes without making any new changes to your infrastructure, such as creating a new S3 bucket as you did here.
Gentlemen, @jbardin and @apparentlymart, thank you so much for your comments and insightful knowledge. I was able to update the Terraform state in all my environments without breaking anything. As feedback, I noticed that when terraform plan was run again a few days later, it sometimes proposed the same refresh-only changes I had already applied. It looks like the AWS provider was somehow re-reporting the same stuff. This happened only a few times, and after the second refresh-only apply it no longer appeared. It was specifically for two Lambda function policies I had for CloudFront distributions.
Anyway, all is well now, thanks again!!!