I have 1,000+ DNS records across several .tf files in a single cloud workspace. It’s come time to break these up into a number of individual projects, but I’m stumped as to how best to distribute the state across the roughly 30–40 new workspaces this will create.
Ideally, I’m looking for a tool that will import resources that already exist in the upstream API and recover their state from the larger state file. How should I go about this?
When I have been in similar “state split” scenarios, I have completely ruled out using `terraform import`, because it can only handle one resource instance at a time, and the quality of the import implementation varies between providers.
I have found two techniques that can be useful.
The first is built around `terraform state rm`:
- Make copies of the source state file for each destination state file
- On each destination state file, use `terraform state rm` to remove all the resources that shouldn’t end up in that split.
- Use a custom script to reset the `"lineage"` value in each destination state to a new UUID, so that in future the “You’re uploading the wrong state to this workspace” check can tell the split workspaces apart.
- Upload new state files to the destination workspaces.
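The lineage-reset step in that list is the only part that needs custom code. Here’s a minimal sketch of what such a script might look like in Python, assuming the JSON state file format (`lineage` and `serial` are real top-level keys in Terraform state; the function name and file paths are my own):

```python
import json
import uuid


def reset_lineage(state_path: str) -> str:
    """Give a copied state file a fresh lineage so the backend treats it
    as a distinct state, and return the new lineage UUID."""
    with open(state_path) as f:
        state = json.load(f)

    new_lineage = str(uuid.uuid4())
    state["lineage"] = new_lineage
    # With a brand-new lineage, the old serial is meaningless; start over.
    state["serial"] = 1

    with open(state_path, "w") as f:
        json.dump(state, f, indent=2)
    return new_lineage
```

You would run this once per destination copy (e.g. over each `project-*.tfstate`) after the `terraform state rm` passes, and before uploading.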
The second technique is just a variation:
Write a custom script that does all of the state processing using direct JSON manipulation, instead of `terraform state rm`.
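As a sketch of that variant, assuming the v4 state file format (a top-level `resources` array whose entries have `type` and `name` keys): filter the resources with a predicate instead of removing them one address at a time, and stamp each result with a new lineage. The function name and the example predicate are illustrative, not part of any tool.

```python
import json
import uuid


def split_state(source: dict, keep) -> dict:
    """Return a new state dict containing only the resources for which
    keep(type, name) is True, with a fresh lineage and reset serial."""
    dest = dict(source)  # shallow copy; we replace the resources list below
    dest["resources"] = [
        r for r in source.get("resources", [])
        if keep(r["type"], r["name"])
    ]
    dest["lineage"] = str(uuid.uuid4())  # distinct lineage per split
    dest["serial"] = 1
    return dest


# Hypothetical usage: route one project's records into its own state file.
# with open("terraform.tfstate") as f:
#     source = json.load(f)
# project_a = split_state(source, lambda t, n: n.startswith("project_a_"))
# with open("project-a.tfstate", "w") as f:
#     json.dump(project_a, f, indent=2)
```

The advantage over the first technique is speed: one pass over the JSON per destination, rather than hundreds of `terraform state rm` invocations, each of which rewrites the whole file.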