It’s not easy to tell here as you don’t show the content of your backend config file, but it seems you are using a completely separate backend (S3 bucket) for dev and prod, presumably to keep them isolated and under different access controls?
If this is the case then there is no need to use the -migrate-state option. The -migrate-state option will attempt to copy existing state from the old backend to a new one.
When you run the init, the backend config directs Terraform where to look for the shared state, and the workspace is initially set to the ‘default’ workspace. Technically there is no need (if you are using completely separate buckets) to then change the workspace, as prod and dev are already isolated, but switching to a named workspace causes no issues.
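As a sketch (the backend file name is hypothetical), that flow for one environment might look like:

```shell
terraform init -backend-config="dev.s3.tfbackend"   # starts in the 'default' workspace
terraform workspace new dev       # optional: create and switch to a named workspace
terraform workspace show          # prints the current workspace name
```

The workspace commands are optional here; with separate buckets the isolation already comes from the backend config itself.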
At this point Terraform will work against the dev or prod state backend, as per your config.
When you pass terraform a different backend config (e.g. when you move from working with dev to prod), the -migrate-state option causes Terraform to ‘copy’ the state from the backend configured in the previous run (in this example dev) to the backend configured in the current run (prod). The same happens in the opposite direction.
In short - you should not need the -migrate-state option unless you are moving where you are storing the state for a given configuration (which I don’t believe is what you are trying to achieve here).
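For illustration, fully separate backends would mean a partial backend config file per environment, along the lines of the below (bucket names, key and region are made up):

```hcl
# dev.s3.tfbackend (hypothetical)
bucket = "mycorp-dev-tfstate"
key    = "myproject/terraform.tfstate"
region = "eu-west-1"
```

```hcl
# prod.s3.tfbackend (hypothetical)
bucket = "mycorp-prod-tfstate"
key    = "myproject/terraform.tfstate"
region = "eu-west-1"
```

With that layout you would simply run terraform init -backend-config="dev.s3.tfbackend" (or the prod file) without -migrate-state, since no state ever needs to move between the buckets.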
The behaviour you are seeing here is because of a local copy/cache of the backend configuration that is created. If you have ever wondered why you only need to provide the -backend-config to the init and not to the plan or apply, this is why.
After you initialize, Terraform creates a .terraform/ directory locally. This directory contains the most recent backend configuration, including any authentication parameters you provided to the Terraform CLI.
The fact you are seeing that message leads me to think you are running the init on your local computer, or on another machine (or pipeline agent) that is not ‘cleaning’ the directory between runs. Terraform therefore sees an already configured ‘backend’ and prompts you to use either:
-migrate-state - which will copy the state from the ‘old’ backend to the new one
or
-reconfigure - which will update the local backend configuration to the new configuration, without touching the state
To get the outcome you desire you can either:
- remove the ./.terraform directory entirely (or just the ./.terraform/terraform.tfstate file) before running the init with a changed backend config, or
- use the -reconfigure option on your init (probably the preferred/simpler approach)
See below for an illustration of both methods.
C:\..\..\..\temp terraform init -backend-config="a.azurerm.tfbackend"
Initializing the backend...
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
...
Terraform has been successfully initialized!
C:\..\..\..\temp ls
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 08/02/2024 17:43 .terraform
-a--- 08/02/2024 17:35 2263 .terraform.lock.hcl
la--- 08/02/2024 17:33 253 a.azurerm.tfbackend
la--- 08/02/2024 17:33 251 b.azurerm.tfbackend
la--- 08/02/2024 17:20 444 main.tf
C:\..\..\..\temp terraform init -backend-config="b.azurerm.tfbackend"
Initializing the backend...
╷
│ Error: Backend configuration changed
│
│ A change in the backend configuration has been detected, which may require migrating existing state.
│
│ If you wish to attempt automatic migration of the state, use "terraform init -migrate-state".
│ If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".
╵
C:\..\..\..\temp rm -r .\.terraform\
C:\..\..\..\temp terraform init -backend-config="b.azurerm.tfbackend"
Initializing the backend...
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes....
Terraform has been successfully initialized!
C:\..\..\..\temp terraform init -backend-config="a.azurerm.tfbackend"
Initializing the backend...
╷
│ Error: Backend configuration changed
│
│ A change in the backend configuration has been detected, which may require migrating existing state.
│
│ If you wish to attempt automatic migration of the state, use "terraform init -migrate-state".
│ If you wish to store the current configuration with no changes to the state, use "terraform init -reconfigure".
╵
C:\..\..\..\temp terraform init -backend-config="a.azurerm.tfbackend" -reconfigure
Initializing the backend...
Terraform has been successfully initialized!
Note that -reconfigure can safely be passed even if you are not changing your backend config on that run.
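So, assuming hypothetical per-environment backend files, a simple pattern when switching a working directory between environments is to always pass -reconfigure along with the backend config:

```shell
terraform init -reconfigure -backend-config="dev.s3.tfbackend"
terraform plan
# ...later, pointing the same working directory at prod:
terraform init -reconfigure -backend-config="prod.s3.tfbackend"
terraform plan
```

This avoids the “Backend configuration changed” prompt entirely, without ever copying state between the backends.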
Wow… Thank you for the kind explanation! I completely understand now!
Thank you once again
I have one more question, is it okay to use S3 backends from different AWS accounts? Or is it better to use a single S3 backend and separate them by Workspaces?
First approach) S3 backends from different AWS accounts + Workspaces Dev, Prod
Second approach) Single S3 backend + Workspaces Dev, Prod
The answer to your last question is very much ‘it depends’ - on your security and isolation requirements and on your organisation’s policies (SDLC, security, SoD, etc.).
At its most simple, you use a single storage resource in a single account, shared by all terraform projects and deployments, with each deployment using ‘workspaces’ to isolate the differing environments. However, this means you have to give access to that resource to everyone who is developing or deploying. As the state can contain sensitive information (which is accessible in plain text by anyone with access to the state files), this often means unauthorised people are able to access state from production or from other projects, which is rarely ideal.
The next option is splitting ‘PROD’ and ‘Non-PROD’ environments for all projects between two different storage resources and accounts, using different security principals for each backend. This allows you to set the security differently, ensuring that prod is ‘controlled’ more tightly than non-prod. But a team working on one project may still be able to access state from another project (which may not be desirable).
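As a sketch of the ‘different security principals’ idea, the S3 backend supports per-config credentials, so each environment’s (hypothetical) backend file can point at a different AWS account - for example via a named AWS profile (on recent Terraform versions an assume_role block is another option):

```hcl
# prod.s3.tfbackend (hypothetical) - state lives in the prod account
bucket  = "mycorp-prod-tfstate"
key     = "myproject/terraform.tfstate"
region  = "eu-west-1"
profile = "mycorp-prod"   # AWS CLI profile holding credentials for the prod account
```

Only principals that can use the prod credentials can then read or write prod state, regardless of which workspace they select.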
Then it can get progressively more complex but with increasingly more fine-grained access control and isolation:
Different backend state storage per project for prod and non-prod
Different backend state storage per environment, per project (dev, test, acceptance, prod…)
etc…
Again, how your GitOps and deployment pipelines (not just your devs and engineers) relate to each project and environment, and the use of differing credentials to ensure isolation, will feed into what is best in your circumstances.