Terraform could not find remote state, destroyed something regardless

I’m not sure if this is a bug or something I missed in the documentation, and I’m not sure whether it’s an issue with the provider plugin or Terraform core.

I’m running Terraform v0.13.7 and I store the remote state of the EKS clusters I manage in S3. I was running terraform destroy --force, and my remote state config was as follows.

terraform {
  backend "s3" {
    bucket         = "tfstate.myco-stage-demo-3.us-west-2.devops.myco.ninja"
    key            = "myco-stage-demo-3.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-remote-state-lock"
  }
}

Here’s some of the output I received. The bucket name was wrong and I did get an error, but I expected Terraform to stop; instead it started destroying another cluster. I’m not sure how or why it did this.

Initializing modules...

Initializing the backend...
Backend configuration changed!

Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.



Error: Error inspecting states in the "s3" backend:
    S3 bucket does not exist.

The referenced S3 bucket must have been previously created. If the S3 bucket
was created within the last minute, please wait for a minute or two and try
again.

Error: NoSuchBucket: The specified bucket does not exist
	status code: 404, request id: E7N6Y5YRDS0Y4MMK, host id: oPcg7zPZoYhzz303RQZxwjKlHaHVifS/nrhIjntL4dDqEFf0Vzkgftn0DuX6iruV6qgOkYEF5Xc=


Prior to changing backends, Terraform inspects the source and destination
states to determine what kind of migration steps need to be taken, if any.
Terraform failed to load the states. The data in both the source and the
destination remain unmodified. Please resolve the above error and try again.


Acquiring state lock. This may take a few moments...
data.aws_region.current: Refreshing state... [id=us-west-2]
data.aws_vpcs.found: Refreshing state... [id=us-west-2]
data.aws_ami.eks-worker: Refreshing state... [id=ami-035810617acd47976]
aws_iam_role.cluster: Refreshing state... [id=myco-stage.us-west-2.cluster]
aws_iam_role.node: Refreshing state... [id=myco-stage.us-west-2.node-assume]
data.template_file.user-data-spark-server: Refreshing state... [id=e6a1988e846f83a30e87340c15cca1a04232b7b8b0ce5588fb51a0dcb55a1b83]
aws_eks_cluster.eks: Refreshing state... [id=myco-stage]
aws_iam_role_policy_attachment.service-policy: Destroying... [id=myco-stage.us-west-2.cluster-20210617144851699700000005]
aws_security_group_rule.node-ingress-vpn[0]: Destroying... [id=sgrule-3878891829]
module.sg-http.aws_security_group_rule.ingress-rules[0]: Destroying... [id=sgrule-3877826277]
aws_iam_role_policy_attachment.node-secrets-readonly: Destroying... [id=myco-stage.us-west-2.node-assume-20210617144851485000000002]
aws_iam_role_policy.autoscaler-node-policy: Destroying... [id=myco-stage.us-west-2.node-assume:autoscaler-inline-policy]
aws_security_group_rule.node-ingress-cluster-443: Destroying... [id=sgrule-2106866134]
aws_autoscaling_group.general: Destroying... [id=myco-stage.general.asg-grp]
aws_autoscaling_group.mongo: Destroying... [id=myco-stage.mongo.asg-grp]
aws_autoscaling_group.devops: Destroying... [id=myco-stage.devops.asg-grp]

Why, when the remote state cannot be found, does Terraform go ahead and try to delete something else?

I doubt I would be able to reproduce this in debug mode.

Hi @g11,

The output you’ve shared seems to be the output from terraform init followed by the output from terraform destroy, rather than just the output from terraform destroy.

What you shared here looks like what I’d expect to see if you ran a command like terraform init; terraform destroy, where ; has the typical Unix shell meaning: two sequential commands, where the second runs regardless of the outcome of the first.
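
For example, the difference looks like this in a shell such as bash (illustrative commands only, not your exact script):

# ';' runs the second command even if the first one fails
terraform init; terraform destroy --force

# '&&' only runs the second command if the first one succeeded
terraform init && terraform destroy --force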

Are you using Terraform in some sort of wrapper script that automatically runs init followed by destroy, rather than running terraform destroy directly? If so, I think you’d get the behavior you expected by changing that script to check the exit status of terraform init and skip terraform destroy if the initialization didn’t succeed.
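
A minimal sketch of what that check could look like (a hypothetical wrapper; adapt it to whatever your actual script does):

#!/bin/sh
# Only proceed to destroy if init exited successfully (exit status 0)
if terraform init -input=false; then
  terraform destroy --force
else
  echo "terraform init failed; skipping destroy" >&2
  exit 1
fi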