I set up a remote backend in an AWS S3 bucket. I created two different workspaces in this backend by running terraform workspace new prod-2 and terraform workspace new prod.
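For context, the setup on the original machine was roughly this (a minimal sketch; the bucket and backend block already existed at this point):

```
terraform init                  # initialize against the S3 backend
terraform workspace new prod    # create and switch to the "prod" workspace
terraform workspace new prod-2  # create and switch to the "prod-2" workspace
terraform workspace list        # shows: default, prod, prod-2
```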
I set up a different set of infrastructure in AWS in each workspace. This process works great from the machine I set everything up on.
However, when I clone my Terraform repo into another folder on another machine and initialize it against the backend with terraform init, the workspaces don't register.
Is there a file I should have committed when I originally created the workspaces that might be in my gitignore? How do I carry workspace state from machine to machine?
I would expect you to be in the default workspace if you are using a clean checkout. Do the other two workspaces not show if you do a terraform workspace list?
What does your backend block look like?
Yeah, that's the issue. If I make a clean checkout and run terraform workspace list, I don't see any workspaces other than default. From there I run terraform init, thinking that might pull the workspaces out of my backend, but it doesn't.
I've also tried creating a new workspace with the same name, but it doesn't detect the existing workspace and just assumes I'm creating all new infrastructure from scratch.
It doesn’t sound like you are using a remote state backend successfully. Can you see the objects in the S3 bucket change? What is the code you are using in your backend block?
It seems the backend is working for the most part: it registered an update I made this morning, and whenever I run terraform apply it does remember the latest state.
Here’s what the backend block looks like:
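(The actual values are redacted; the block is shaped roughly like this, with everything in angle brackets being a placeholder rather than my real config:)

```
terraform {
  backend "s3" {
    bucket = "<bucket>"                       # placeholder bucket name
    key    = "<path>/terraform_prod.tfstate"  # placeholder path to the state object
    region = "<region>"                       # placeholder AWS region
  }
}
```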
When I use a workspace, the key in this block seems to be ignored and the object path ends up looking more like this:
<bucket>/env:/prod-2/environment/prod/<module-name>/database/<region>/terraform_prod.tfstate
Note that at the moment I'm only able to use the workspace from the computer/folder I originally set it up from. I'm still not able to see the workspace from a clean checkout.
Yes, workspaces are stored as S3 objects under a slightly modified key.
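Roughly speaking, the default workspace lives at the key you configure, and every other workspace lives under a prefix in front of that key (configurable with workspace_key_prefix, which defaults to env:):

```
default workspace:    s3://<bucket>/<key>
workspace "prod-2":   s3://<bucket>/env:/prod-2/<key>
```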
I’d suggest turning on debug logging for Terraform so that it displays all the API calls being made as it talks to S3. Then try doing an init/workspace list, etc. to see what is happening. You should see requests to fetch objects from S3. Maybe there is a permission problem with part of that (although I’d have expected you to see an error message)?
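Something along these lines should do it (TF_LOG sets the log level and TF_LOG_PATH writes the output to a file):

```
TF_LOG=TRACE TF_LOG_PATH=./terraform-debug.log terraform init
TF_LOG=TRACE TF_LOG_PATH=./terraform-debug.log terraform workspace list
```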
I set TF_LOG=TRACE and ran terraform init, and the only logging I see is “Initializing the backend…” with no details. My initial idea was to use Charles to monitor the traffic on my MacBook, but then I realized I'm on the new M1 chip / Big Sur, so that software is not yet compatible. Any other suggestions for monitoring network traffic? I'm currently looking for solutions, but the options seem pretty sparse.
My use case here is that I'm migrating from an old laptop to a new one; the old one had some liquid spilled on it and barely works, but unfortunately I have to keep running my Terraform scripts on it until the new setup works.
The AWS creds are the same on both machines, so I don't think it is a permissions thing.
The only thing I can think to try is starting from scratch, migrating my data, etc. to a new environment, but this time without using workspaces. Before going there I will try a couple of experiments with new shell projects, with and without workspaces, to see if this is a one-off or if I'm just missing something in my configs. But the docs are so simple, with no edge-case discussion, that I struggle to see how I'm doing anything wrong…
OK, I figured this thing out, and as originally suspected, there was a problem with my setup. In my repo there is a discrepancy between the .tfstate file path I specified locally and what was in the S3 bucket inside that workspace. How it ever worked on the original machine with this discrepancy is well beyond me (it still does), but I'm guessing I changed and committed it after running terraform init.
So in summary, the path specified by the key property in the backend block in my repo looked like this:
...<path>/terraform_prod.tfstate
And the actual path in the S3 bucket looked like this:
...<path>/terraform_dev.tfstate
Therefore, when I ran terraform init in a clean checkout, the contents of the S3 workspace were ignored and my local project initialized a new Terraform backend as though the workspace never existed, which, for the key I had actually specified, it didn't.
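For anyone hitting the same thing: a quick sanity check is to list what is actually stored under the workspace prefix in the bucket and compare it to the key in your backend block (the bucket and workspace names below are placeholders):

```
aws s3 ls s3://<bucket>/env:/prod/ --recursive
```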
Thank you very much for being helpful in this matter. Next time anything like this happens I will be extra vigilant about the possibility that the error is caused by problems in my own code (which you quickly recognized), as it seems like using workspaces with Terraform does indeed work great.