I want to be able to audit the state of resources across multiple statefiles vs what is in my cloud provider (current case is AWS Config, but there are others to do).
I had asked a question in GitHub, which James kindly answered. My understanding is that as of v1.0.0, the statefiles are fully internal, so if you want to read them, use the CLI.
There are a few challenges there:
- I have hundreds of statefiles, sometimes more, which leads to lots of fork/exec calls
- I may or may not have access to the config, which means running `tf show -json`, which requires running `tf init` first to get modules, will fail
Is there a “proper” way to parse statefiles directly, or is the CLI really the only way to get at them?
According to Terraform v1.x Compatibility Promises | Terraform | HashiCorp Developer, no guarantees are made about the format of state files, meaning that yes, the CLI is the only means of inspecting state that is backed by a guarantee.
That said, if I were in your circumstances myself, I’d likely decide I was just going to parse the state JSON myself anyway, and accept that I might be required to write new code to interoperate with a future Terraform version.
This is exactly right.
The compatibility promises are there to give guidance on what parts of Terraform we intend to preserve in future versions even if that means making tough compromises on how Terraform can evolve.
Terraform’s ability to losslessly preserve all necessary data from one run to the next is crucial though, and so new Terraform features are very likely to require changes to the state snapshot format. Often those changes have been forward-compatible lately, but we can’t be sure that this will always be true. To make compatibility guarantees about the state snapshot format would severely constrain how Terraform can evolve in future.
However, this is still a piece of data on your own computer, so there’s no technical reason why you can’t try to interpret it for other reasons. As @maxb noted, you just need to accept the consequence that each time you upgrade to a newer version of Terraform you may need to update your analysis code, and you may need to refer to the Terraform source code to understand what changes are required (since we only document changes to explicitly-supported formats).
For a one-time bulk analysis I expect this consequence wouldn’t be a concern because you’d complete the analysis once with whatever state format is currently present and then discard the analysis program after your work is complete.
If you want to perform ongoing analysis, on the other hand, it may be better to accept the additional step of asking Terraform CLI to produce its external-facing representation. That format does not need to losslessly round-trip and so there is more room for maneuver when preserving compatibility in future releases.
It might interest you to know that Terraform Cloud itself does something similar: after each run it captures the public-facing state format and saves it for future use. That data then allows, for example, showing resource instance metadata in the Terraform Cloud UI without having to change Terraform Cloud each time the internal state format changes. This does rely on running Terraform in automation that can reliably always capture the latest state JSON after every apply, though; it would not be viable for just directly running Terraform CLI at the command line.
To address the original question directly: there is no HashiCorp-supported library for parsing the internal state format in other applications, so if you do decide to accept the risk of needing to update your integration when you update Terraform, you will need to write some integration code yourself. I’d suggest writing it carefully to ignore anything that isn’t directly related to your goal, so that hopefully only future changes directly related to that goal will require non-trivial updates to your integration code.
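To make that deliberately-narrow parsing concrete, here is a minimal Go sketch. The field names (`version`, `resources`, `mode`, `type`, `name`, `instances`, `attributes`) are what current v4-format state snapshots happen to contain; they are unsupported and could change in any Terraform release, which is exactly the risk being accepted here:

```go
// Minimal, unsupported parser for the *internal* Terraform state
// snapshot format (version 4). Decodes only the fields this audit
// needs and ignores everything else.
package main

import (
	"encoding/json"
	"fmt"
)

type stateFile struct {
	Version   int        `json:"version"`
	Serial    uint64     `json:"serial"`
	Resources []resource `json:"resources"`
}

type resource struct {
	Mode      string     `json:"mode"` // "managed" or "data"
	Type      string     `json:"type"`
	Name      string     `json:"name"`
	Instances []instance `json:"instances"`
}

type instance struct {
	// Raw provider-specific attributes; decoding these fully would
	// require the provider schemas, so keep them opaque.
	Attributes map[string]json.RawMessage `json:"attributes"`
}

// addresses returns "type.name" for every managed resource in the state.
func addresses(raw []byte) ([]string, error) {
	var st stateFile
	if err := json.Unmarshal(raw, &st); err != nil {
		return nil, err
	}
	if st.Version != 4 {
		// Fail loudly rather than misread a future format.
		return nil, fmt.Errorf("unexpected state version %d", st.Version)
	}
	var out []string
	for _, r := range st.Resources {
		if r.Mode != "managed" {
			continue
		}
		out = append(out, r.Type+"."+r.Name)
	}
	return out, nil
}

func main() {
	sample := []byte(`{
	  "version": 4,
	  "serial": 7,
	  "resources": [
	    {"mode": "managed", "type": "aws_instance", "name": "web",
	     "instances": [{"attributes": {"id": "i-0abc"}}]}
	  ]
	}`)
	addrs, err := addresses(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // [aws_instance.web]
}
```

The version check is the important part: when a future Terraform release bumps the snapshot version, the tool stops with an error instead of silently misreading data.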
I think the reason I like the HashiCorp folks and communities is that I get detailed explanations, which keep me from going too far off-road but leave the opportunity to look for better ways. This thread is a good example of that.
So for one-off analysis, I will do it. For repetitive use, could there be a library surface to the `show -json` command? I can run `tf show -json`, which must execute something and return it in the standard format without exposing the internal format. Is there an exported Go function I could use for that?

So rather than the unsupported “I will parse the internal statefile JSON and try to keep up with changes”, I get “call the exported Terraform function that converts internal statefile JSON to publicly supported JSON, and then parse that”.
I don’t believe `terraform` is designed to be used as a library; the expectation is that you just run the command (shell exec, etc.) if needed. It does accept various things like environment variables and JSON output options to make that fairly easy. My understanding is that all the main tools (e.g. Terragrunt and even Terraform Cloud) just do this.
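As a sketch of that run-the-command approach in Go: the struct below decodes only a slice of the documented `terraform show -json` output (`format_version` and `values.root_module.resources[].address`; recursion into `child_modules` is omitted for brevity). `main` feeds in a canned sample so the sketch runs without Terraform on PATH; in real use you would pipe `showJSON(dir)` into `rootAddresses` instead:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// showState matches just the parts of the documented
// `terraform show -json` output that this audit needs.
type showState struct {
	FormatVersion string `json:"format_version"`
	Values        struct {
		RootModule struct {
			Resources []struct {
				Address string                     `json:"address"`
				Values  map[string]json.RawMessage `json:"values"`
			} `json:"resources"`
		} `json:"root_module"`
	} `json:"values"`
}

// rootAddresses decodes show output and returns root-module addresses.
func rootAddresses(raw []byte) ([]string, error) {
	var s showState
	if err := json.Unmarshal(raw, &s); err != nil {
		return nil, err
	}
	var out []string
	for _, r := range s.Values.RootModule.Resources {
		out = append(out, r.Address)
	}
	return out, nil
}

// showJSON runs `terraform show -json` in dir. The directory must
// already be initialized with `terraform init`.
func showJSON(dir string) ([]byte, error) {
	cmd := exec.Command("terraform", "show", "-json")
	cmd.Dir = dir
	return cmd.Output()
}

func main() {
	// Trimmed sample of the documented output format.
	sample := []byte(`{"format_version":"1.0","values":{"root_module":{"resources":[{"address":"aws_instance.web","values":{}}]}}}`)
	addrs, err := rootAddresses(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // [aws_instance.web]
}
```

Because this parses the documented external format rather than the internal one, it is the variant most likely to survive Terraform upgrades unchanged.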
Indeed, the vast majority of the Go code has been moved into an `internal` package so that the Go toolchain will explicitly prevent attempts to import the code as a library.
The design philosophy exhibited by the Terraform code today does not allow for that.
Understood. It would be nice, but I guess terraform itself is the heavyweight here. I haven’t looked at the other tools internally, but I should.
What is the issue with using `terraform show -json`?
I guess not. It does make the config and modules a necessary prerequisite, as `show -json` relies on those modules? Or did I misunderstand it? That isn’t too rough a burden, but it adds to the requirements. I guess it will matter when there are lots of statefiles and things may have gotten stale (10 statefiles tend to stay in sync; 500 seem to become a mess easily without some solid audit tooling that flags things regularly, some sort of continuous validation).

It also means fork/exec, which is the same issue. 10 statefiles = 10 fork/execs? No big deal. At 500-1000, with that big graph in memory (each new process is a fork/exec, not a new thread), it can become a burden.
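If the fork/exec count is the worry, one common mitigation is to bound how many `terraform show -json` child processes are alive at once, so 500-1000 statefiles never means 500-1000 simultaneous processes. A Go sketch using a counting semaphore (the directory names and limit are placeholders of my own):

```go
package main

import (
	"fmt"
	"os/exec"
	"sync"
)

// showAll runs `terraform show -json` across many working directories,
// but never with more than `limit` child processes at a time.
func showAll(dirs []string, limit int) map[string][]byte {
	sem := make(chan struct{}, limit) // counting semaphore
	var mu sync.Mutex
	var wg sync.WaitGroup
	results := make(map[string][]byte)

	for _, dir := range dirs {
		wg.Add(1)
		go func(dir string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it

			cmd := exec.Command("terraform", "show", "-json")
			cmd.Dir = dir
			out, err := cmd.Output()
			if err != nil {
				return // in real code, collect errors per directory
			}
			mu.Lock()
			results[dir] = out
			mu.Unlock()
		}(dir)
	}
	wg.Wait()
	return results
}

func main() {
	// Placeholder directories; without an initialized working
	// directory (or Terraform on PATH) each run fails and is skipped,
	// so results will simply be empty here.
	res := showAll([]string{"stacks/a", "stacks/b"}, 8)
	fmt.Println(len(res), "directories decoded")
}
```

This caps process count but not the work itself; each invocation still pays the cost of starting Terraform and its provider plugins, as noted below.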
`terraform show` does require an initialized working directory. In particular, it needs to be able to run the provider plugins because their schemas are required to decode the provider-specific data in the state and prepare it for export.
Even if there were some way to run the JSON-producing code in Terraform without launching Terraform as a child process, it would still need to run the provider plugins as child processes and so I don’t think it would be much of an improvement.
For an ongoing integration that needs the documented JSON format, I would recommend following Terraform Cloud’s approach of having your automation around Terraform automatically run `terraform show -json` after each apply and save the result into a location where you can retrieve it later without running Terraform CLI again.
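A minimal sketch of that wrapper as a pipeline step. The S3 bucket name, key layout, and `TF_WORKSPACE` fallback are placeholders of my own, not anything standard:

```shell
#!/usr/bin/env sh
set -eu

# Apply as usual...
terraform apply -auto-approve

# ...then immediately capture the documented JSON view of the state,
# while the working directory is guaranteed to be initialized and current.
terraform show -json > state.json

# Ship it somewhere the audit tooling can read it later without running
# Terraform again. (Bucket and key layout are placeholders.)
aws s3 cp state.json "s3://example-audit-bucket/states/${TF_WORKSPACE:-default}.json"
```

The key property is ordering: the capture happens in the same automation step as the apply, so the saved JSON can never lag behind the real state.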
Oh, I like that approach. It is not that different from “create software source packages and distribute them when you create software binary packages; that is the moment when you are guaranteed to have all of the source.” Same here; at that moment, you are guaranteed to have all that you need, and the exact versions, etc., so generate the json right then and there.
I am curious as to what TFC uses it for? I have used TFC heavily before, but not for a while.
Terraform Cloud uses it as the foundation of quite a few features, along with similar saving of the JSON plan output after the plan phase.
However, the most prominent use is probably the summary of output values and resource instances on a workspace’s main page in the web UI. The other uses are less obvious because they are just a small part of a larger feature, like the workspace healthcheck system which can see the results of preconditions, postconditions, and check blocks via the JSON plan and state.
But essentially, the JSON output of `tf show -json` becomes the “standard format” for passing this data around, while the config remains the source and the statefile remains internal. Got it.