Request for Feedback on Terraform Test JUnit XML output

Hi everyone,

Today’s Terraform CLI v1.8.0-alpha20240131 release includes a new experimental feature for generating a JUnit XML report alongside the normal test result output.

Since the JUnit XML format is not formally specified, and each consumer therefore interprets it a little differently, we’d love your help in making sure that the test report structure we’ve chosen will produce useful and pleasant reports in as many different CI/CD tools (and other consuming software) as possible.

Since JUnit XML was originally designed for JUnit’s testing conventions in Java, fitting it onto Terraform’s own model requires some tradeoffs. This experiment implements a set of decisions that seem plausible based on how the different JUnit XML elements have been interpreted in various third-party docs, but the real test will be how the output is interpreted by real software in practice.
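To give a rough sense of the shape before you try it: each .tftest.hcl file is reported as a testsuite element, and each run block within it as a testcase, along the lines of this illustrative (not generated) excerpt:

<testsuites>
  <testsuite name="tests/example.tftest.hcl" tests="2" skipped="0" failures="1" errors="0">
    <testcase name="passing_run"></testcase>
    <testcase name="failing_run">
      <failure message="Test run failed"></failure>
      <system-err><![CDATA[
(the full diagnostic text for the failure appears here)
]]></system-err>
    </testcase>
  </testsuite>
</testsuites>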

With all of that said then, if you’re interested in having JUnit XML reports from testing your Terraform modules then it would be very helpful to try this first experiment in your favorite CI/CD/etc software, and share screenshots (or other information) of how that software interpreted the reports. Please reply to this message if you have any feedback to share.

I understand that many of you will be using terraform test with modules that contain internal details you might not want to expose publicly in a screenshot, so I’ve also prepared a contrived example configuration that you could use instead.

Depending on what feedback we get here, there might well be subsequent alpha releases with other variations to test. If so, I’ll announce them downthread in later comments, in the hope of keeping all of this feedback together in one place so we can all see what software has been tested against which Terraform alpha releases.

Thanks in advance for any contributions!

Hi @apparentlymart ,

I tested the functionality, but found one error in the XML file that gets generated by the terraform test command.

Below is the file that was generated for me.

Here on line 1 of the XML, the closing double quote is missing after UTF-8, and hence Azure DevOps fails to publish the result because of an XML syntax error. See the Azure DevOps screenshot below.
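The declaration at issue is along these lines (illustrative; the remainder of the file is omitted here):

As generated (missing the closing quote after UTF-8):
<?xml version="1.0" encoding="UTF-8?><testsuites>

Corrected:
<?xml version="1.0" encoding="UTF-8"?><testsuites>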

Can you fix this error?

Huh, interesting! I’m sorry about that. I must have messed that up when preparing the change to be merged, since it was certainly valid XML while I was testing it earlier. :man_facepalming:

I’ll prepare a fix for that to go in a future alpha release, but so we can still proceed with the goal of finding out if the structure of the XML is a good fit for existing tools, could you manually add that missing quote and load the result into your favorite tool for testing?

I know that won’t be so easy for systems that are generating the XML as part of a pipeline, since there won’t be a manual step then, so I will get it fixed for the next round, but hopefully we can still see if this direction seems broadly right with some existing software in the meantime.


Update: I’ve merged a change to the main branch in the Terraform repository, so the bug mentioned above will be fixed in the next v1.8.0 alpha.

In the meantime, if you’d like to build your own copy from source to test, instead of using the alpha, you can clone the Terraform git repository and then build it with the special option to enable experiments, as the build process would normally do for an alpha release:

go build -o terraform -ldflags="-X 'main.experimentsAllowed=yes'"

Sorry for that oversight. It’s what I get for stuffing in code cleanups right at the last moment. :confounded: If this gets stabilized then it’ll have better test coverage to catch silly mistakes like this, but our experiment dev process is optimized more for faster iteration.


Hi,

I can confirm the issue stated by @mefarazahmad

When I add the missing " in the XML file in my pipeline and then use the PublishTestResults@2 task in Azure Pipelines, the test results are imported as expected.

@apparentlymart - the following may add some more detail around the mapping of the test result attributes (at least as far as Azure DevOps uses them): PublishTestResults@2 - Publish Test Results v2 task - Result Formats Mapping | Microsoft Learn

I am just putting together some tests using multiple test files, run blocks and resources. I will post findings/detail/comments here when done.

Hi, just to confirm that adding the double quote did give the expected result.

Not sure if I am asking for too much, but the duration timestamp is 0, which is actually incorrect.

Other than that, it looks good to see the test case name in the test results in Azure DevOps. Thanks a lot @apparentlymart .

As @ExtelligenceIT suggested, we need to try out a couple more test scenarios.

Thanks for sharing this screenshot!

It’s interesting that Azure Devops chose to render the durations as zero. Terraform doesn’t currently measure test execution time at all and so doesn’t include any of the time-related attributes.

I had hoped that would cause tools to either not show time at all, or to show “unknown” as the duration, or similar. So it’s helpful to see that it’s basically mandatory to measure test runtime if you want to use JUnit XML, and thus we’ll probably need to add test time measurement to the test harness before we could stabilize this.

Thanks again!

Here are the details of my testing so far.

Current Test: 3x Test files, each with multiple run blocks. Two test files crafted to deliberately error. 12 ‘tests’ in total across the 3 files.

XML Output:

<?xml version="1.0" encoding="UTF-8"?><testsuites>
  <testsuite name="tests\api_connection_azureblob.tftest.hcl" tests="5" skipped="0" failures="1" errors="0">
    <testcase name="setup"></testcase>
    <testcase name="create_storage_account"></testcase>
    <testcase name="azureblob-managed_id"></testcase>
    <testcase name="azureblob-service_principal"></testcase>
    <testcase name="azureblob-access_key">
      <failure message="Test run failed"></failure>
      <system-err><![CDATA[
Error: Test assertion failed

  on tests\api_connection_azureblob.tftest.hcl line 87:
  (source code not available)

api type name did not match expected
]]></system-err>
    </testcase>
  </testsuite>
  <testsuite name="tests\api_connection_azurefile.tftest.hcl" tests="3" skipped="0" failures="0" errors="0">
    <testcase name="setup"></testcase>
    <testcase name="create_storage_account"></testcase>
    <testcase name="azurefile-access_key"></testcase>
  </testsuite>
  <testsuite name="tests\api_connection_keyvault.tftest.hcl" tests="4" skipped="0" failures="1" errors="0">
    <testcase name="setup"></testcase>
    <testcase name="create_keyvault"></testcase>
    <testcase name="keyvault-managed_id">
      <failure message="Test run failed"></failure>
      <system-err><![CDATA[
Error: Test assertion failed

  on tests\api_connection_keyvault.tftest.hcl line 52:
  (source code not available)

[{"error":{"code":"ConfigurationNeeded","message":"Parameter value
missing."},"status":"Error","target":"vaultName"}]
]]></system-err>
    </testcase>
    <testcase name="keyvault-service_principal"></testcase>
  </testsuite>
</testsuites>

This was rendered in the Azure Devops Tests page as follows:

Overall test page showing all tests and detail of the high-level test run

Showing detail of a successful test:

Showing detail of a failed test :

Detail of attached log showing error for the same test:

Some observations based upon the above and also the Azure Devops Documentation for result formats mapping

The separate test suites (the individual test files) appear to be ignored when in a single XML file, and all tests are grouped together under the test-run level only - I will do some manual splitting of the same XML file into an XML file per test suite (.tftest), run it through a dummy pipeline to see what occurs, and report back.

The start & completed datetimes and durations, as they are not logged in the XML, are all reported as 0 (duration) or as the completion time of all of the tests in the pipeline (start/end). Based upon the documentation, for these to be displayed correctly the following are required (an illustrative excerpt follows these lists):
For the ‘test run’:

  • Date started /testsuites/testsuite.Attributes["timestamp"].Value
  • Date completed /testsuites/testsuite.Attributes["timestamp"].Value + SUM(/testsuites/testsuite/testcase.Attributes["time"].Value) for all test cases in the test suite

For each test result:

  • Date started /testsuites/testsuite.Attributes["timestamp"].Value
  • Date completed /testsuites/testsuite.Attributes["timestamp"].Value + /testsuites/testsuite/testcase.Attributes["time"].Value
  • Duration /testsuites/testsuite/testcase.Attributes["time"].Value
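To make the above concrete, here is a purely illustrative excerpt (the file name, timestamp and durations are invented, and Terraform does not emit these attributes today) showing where they would sit:

  <testsuite name="tests\example.tftest.hcl" tests="2" skipped="0" failures="0" errors="0" timestamp="2024-02-15T10:30:00">
    <!-- time = duration of the run block, in fractional seconds -->
    <testcase name="setup" time="4.5"></testcase>
    <testcase name="create_storage_account" time="31.2"></testcase>
  </testsuite>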

The reported error message Test run failed should probably contain the returned error from the actual test - perhaps something like Test assertion failed: api type name did not match expected (taken from the output log; it may be worth reviewing the machine-readable UI output to see whether these are available as separate elements). This appears to be needed in the following XML elements:
Error message:
/Testsuites/testsuite/testcase/failure.Attributes["message"].Value Or /Testsuites/testsuite/testcase/error.Attributes["message"].Value Or /Testsuites/testsuite/testcase/skipped.Attributes["message"].Value

The Stack Trace field in ADO could be populated with the current value used for the <system-err> node via the following:

Stack trace /Testsuites/testsuite/testcase/failure.InnerText Or /Testsuites/testsuite/testcase/error.InnerText

And would contain:

Error: Test assertion failed

  on tests\api_connection_azureblob.tftest.hcl line 87:
  (source code not available)

api type name did not match expected

The content of <system-err> should contain the console error log:
Console error log: /Testsuites/testsuite/testcase/system-err
And, if appropriate, the normal console log output should be populated via:
Console log: /Testsuites/testsuite/testcase/system-out
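Purely as an illustration of the mapping suggested above (the message text is taken from the earlier failing test case; the structure is a suggestion, not current Terraform output), a test case might then look like:

    <testcase name="azureblob-access_key">
      <!-- suggested: the specific assertion message rather than a fixed placeholder -->
      <failure message="Test assertion failed: api type name did not match expected"></failure>
      <!-- the full diagnostic text remains available as the console error log / stack trace -->
      <system-err><![CDATA[
Error: Test assertion failed

  on tests\api_connection_azureblob.tftest.hcl line 87:
  (source code not available)

api type name did not match expected
]]></system-err>
    </testcase>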

I realise there is a lot here - happy to discuss item by item if needed. I will submit, in a separate reply, details of what occurs if each .tftest maps to a single test.xml file as opposed to them being within a single file.

Cheers!

Further Azure DevOps testing related to separate .xml files per test suite (.tftest.hcl file), and the introduction of an additional attribute to allow visualisation of tests ‘per file’ when multiple test suites are included in a single XML file (as currently occurs)

Separate XML output files per testsuite (.tftest.hcl file)

I took the XML file from my previous post and split it into 3 separate XML files and then used the ‘publish test results’ task in the Azure devops pipeline to import them in a simulated test/build pipeline. This resulted in the following when viewed by ‘test-run’:

This appears to divide the tests as expected, but the use of the title provided in the pipeline task instead of the file name (a restriction of the way the Azure DevOps task imports the file) means that, whilst they are split in a ‘per-test-file’ manner, it is not possible to determine the actual test file. This could be a problem with large test suites.

Viewing by ‘test-file’ groups all test runs together:

Introduction of additional attribute to allow visualisation of tests ‘per file’

As per the Azure mapping docs, and also after reviewing GitHub - testmoapp/junitxml: JUnit XML file format & JUnit XML examples (format specification, description & conventions), I manually added the following attribute to each test case, containing the relevant tftest.hcl name: /testsuites/testsuite/testcase.Attributes["classname"].Value.

XML Excerpt from file:

  <testsuite name="tests\api_connection_azurefile.tftest.hcl" tests="3" skipped="0" failures="0" errors="0" >
    <testcase name="setup" classname="api_connection_azurefile.tftest.hcl"></testcase>
    <testcase name="create_storage_account" classname="api_connection_azurefile.tftest.hcl"></testcase>
    <testcase name="azurefile-access_key" classname="api_connection_azurefile.tftest.hcl"></testcase>
  </testsuite>

Which resulted in the following output when the same task was used in the dummy build pipeline to import the single XML results file:

By ‘test run’

By ‘test file’

This appears to be a good initial solution, allowing a single XML output file per test run to be provided while still allowing the results to be grouped by tftest.hcl file.

HTH

Cheers

Thanks for these experiments!

Indeed, I was a little unsure what to do with “class name”, since that’s obviously a very object-oriented-programming-specific concept from this format’s origins in the Java world, and there isn’t any directly comparable concept in Terraform.

It seems like Azure DevOps has chosen to interpret it generically as “label of a specific item being tested”, in which case I agree that it would be helpful to populate it as you suggested.

However, I think we should wait to see how other tools are treating this data before assuming that’s a good change. In particular, I’m curious to see if any other software prefers to make use of the explicit test suites and thus might end up with a confusing presentation when the same filename is both the test suite name and the “class name”.

Thanks again!


Hi all,

Just wanted to note that today’s Terraform CLI v1.8.0-alpha20240214 includes the correction for the syntax error in the <?xml ... ?> processing instruction discussed earlier, in case anyone was waiting for that fix to be included in an official build.

Nothing else about the JUnit XML has changed yet; we’re still interested to see how the current format is handled by a variety of different CI/CD/etc tools that support that format.

Thanks!


We had already tried the JUnit XML feature in our own project and found the same syntax error (which we fixed manually at the time). Great that you could fix it in an official alpha release.

I will try its integration with GitLab’s reporting capabilities within the next couple of days.

As promised, we’d like to share some feedback for this new feature. We are displaying test results using GitLab’s reporting functionality: GitLab CI/CD artifacts reports types | GitLab

The new feature is very helpful to quickly get an overview of passed and failed tests. This is how it looks in a merge request:

Clicking on ‘View details’ gives more details about the test run:

Full reports (from inside pipelines) look as follows:

One can then click on the job and get the results for all tests.

In this view, the Suite and Filename columns are not filled, and the Duration column is always 0. Also in this view, one can get more details about passed and failed runs:

Overall, this is a very useful feature, and that’s why we’d vote to keep it in an official Terraform release.

Is there anything that you would want us to further test or comment on?

Thanks for the detailed feedback and screenshots!

This seems to concur with the previous experience with Azure DevOps in the following ways:

  • The way we’re describing the test scenarios (each separate .tftest.hcl file) doesn’t seem to match what these tools are expecting. It seems like we should try moving the test scenario name into the “classname” attribute instead, to see if that makes it visible to these tools.

  • Test time durations are effectively mandatory in this format, because tools assume that if they are absent then the test took zero seconds to run, rather than (what we had hoped for) treating it as “test duration unknown”.

    This one is trickier because the test harness doesn’t currently measure the total duration of each test step and scenario at all, so we’ll need to add that to the test harness itself before we could include that information in the JUnit XML output.

This is very helpful since it gives two specific ideas for improvement in the next round of the experiment. I expect we’ll continue in this way until we’ve found a good compromise that matches the assumptions made by various consuming software. Thanks again!

Adding the below for future reference. The report below is from CodeCatalyst workflows on ingestion of a JUnit-formatted results XML file.

Single file of tests and run blocks

Summary view of results
The file with the test reference is considered a test suite, with the individual run block names as tests.


Failed tests:

Multiple files of tests

Thanks for sharing these, @quixoticmonk!

It’s interesting to see that this tool apparently does support multiple named test suites, unlike the others we were discussing earlier.

I don’t see any obvious place where the JUnit classname attribute would appear in this UI. Do you know if this tool supports that attribute? One way to test it would be to manually add a classname attribute to each of the testcase elements that Terraform generated; it doesn’t really matter what you set them to for testing purposes, since the goal is just to see where in the UI those values would appear, if anywhere.

Thanks!

Yeah. Amazon CodeCatalyst’s JUnit report renderer honored the testsuite classification in that example. Here is an additional example from the dummy file I uploaded. The error_message from the run block is getting mapped to system-err; if it were mapped to the failure element’s message attribute instead, the rendered report would benefit.

No difference with the classname attribute added:

<?xml version="1.0" encoding="UTF-8"?><testsuites>
  <testsuite name="tests/main.tftest.hcl" tests="2" skipped="0" failures="1" errors="0">
    <testcase name="test_1" classname="tests/main.tftest.hcl">
      <failure message="Test run failed"></failure>
      <system-err><![CDATA[
Error: Test assertion failed

  on tests/main.tftest.hcl line 9:
  (source code not available)

The dimension values didn't match the expected list
]]></system-err>
    </testcase>
    <testcase name="test_2" classname="tests/main.tftest.hcl"></testcase>
  </testsuite>
  <testsuite name="tests/main2.tftest.hcl" tests="2" skipped="0" failures="1" errors="0">
    <testcase name="test_1" classname="tests/main2.tftest.hcl">
      <failure message="Test run failed"></failure>
      <system-err><![CDATA[
Error: Test assertion failed

  on tests/main2.tftest.hcl line 9:
  (source code not available)

The dimension values didn't match the expected list
]]></system-err>
    </testcase>
    <testcase name="test_2" classname="tests/main2.tftest.hcl"></testcase>
  </testsuite>
</testsuites>

Thanks!

The idea of putting the full diagnostics into system-err came from experience with an earlier experimental incarnation of terraform test, where I tested the output with some JUnit XML consuming software I had available at the time. I found that some of those applications didn’t work well when the failure message included multiple lines of text.

However, this is exacerbated by the fact that the current implementation treats test failures as error diagnostics just like any other error. Ideally we’d be able to treat the test failures as special and capture just their error_message into the failure field. I implemented this just as an alternative renderer for the existing implementation and so I’m not sure how easy that change would be, but I agree that it would make the test results easier to consume.

Hi everyone! Thanks for your feedback on the previous experimental builds.

Last week we released Terraform CLI v1.9.0-alpha20240501, which includes some changes made in response to the feedback. In particular:

  • The test scenario filename also gets populated into the classname attribute of each test case, in addition to its previous placement as the name of a test suite, because some consuming software ignores test suites entirely and uses the test runs exclusively.
  • We now set the time attribute for each run to the duration of the run execution in fractional seconds. It appears that in most software this field is effectively required, in that its absence is treated as time="0" rather than as “time unspecified”.
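To illustrate both changes concretely (the file name, run names and durations below are invented, not taken from a real run):

  <testsuite name="tests/example.tftest.hcl" tests="2" skipped="0" failures="0" errors="0">
    <!-- classname repeats the test scenario filename; time is the run duration in fractional seconds -->
    <testcase name="setup" classname="tests/example.tftest.hcl" time="3.7"></testcase>
    <testcase name="verify_outputs" classname="tests/example.tftest.hcl" time="12.4"></testcase>
  </testsuite>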

I would appreciate it if some of the folks who tried the previous builds would repeat their experiments with the new build and let me know if the changes have made the rendered test results easier to understand.

Thanks in advance for any additional feedback!


(This new version does not address the third main piece of feedback, about including the specific test failure messages in the failure message instead of just a fixed placeholder. It’s already clear where in the JUnit XML structure that information would go, so I think we can safely assume we’ll be able to do that at some later point without changing any other details of the output; it therefore doesn’t need to block stabilizing this feature if the other questions are now resolved.)

Hi @apparentlymart - Here is some further output & feedback regarding the newest release:

Attached is the junit.xml file in case it is of use.

The output contains a selection of:

  • success
  • a failure due to failed test assertion
  • a failure due to an error within the test (referencing an invalid/missing attribute within the assert condition)
  • a failure due to not passing all required variables to a module during the test.

Below is a selection of outputs as they appear within the Azure DevOps Pipeline tests section after importing the test results (within the pipeline) using the PublishTestResults@2 - Publish Test Results v2 task.

Initial Test screen:

Showing top-level detail/debug

Example of Failed Test Error & Stack Trace - Error in test and failed assert

Example of failure due to missing variable

Full list of tests by test-run (default) with no outcome filtering

Full list of tests by test file, also with additional date started & completed columns

junit file
test.junit.xml.txt (2.9 KB)

If you think of anything else that would be useful, please let me know.

Thanks

Thanks for sharing these results, @ExtelligenceIT!

One thing I notice in your screenshots is that the JUnit XML renderer is apparently rendering diagnostic messages in a context that doesn’t have the configuration source code available, and so the source code snippets are not included in the messages. I expect that’s a relatively minor bug that should be possible to fix, and won’t change anything about how the JUnit XML is structured overall, so I’ll just make a note of that but I don’t think it’s something that needs to block us from moving forward with the current design.

I see that Azure Devops did find the new time attributes and is rendering the duration of each test, which is nice to see.

I also notice that the final screenshot includes the name of each test scenario file (like tests/azureblob.tftest.hcl) and so seems to be reacting to the new classname attribute we added, but that doesn’t seem to be visible in the other screenshots. Do you know why that is?

I’m particularly curious about the one you labelled “Full list of test by test-run (default) with no outcome filtering”, which lists four different setup cases. I notice in the attached JUnit XML file you have four different testsuite elements that each has a testcase with name="setup". Each of them has a different classname value, but Azure Devops doesn’t seem to include that in the report, making it hard to figure out which result belongs to which scenario. Am I understanding correctly what’s shown in that screenshot? Do you know of anything we could change in the XML to cause a presentation more like the last screenshot where the results are grouped by either classname or testsuite name? :thinking: