Possible to inspect the configuration of another resource referenced by ID and fail the plan?

We have heard from a number of (understandably!) frustrated users of the honeycombio_trigger resource who hit apply-time errors when the honeycombio_query they associate with their trigger isn't suitable for use by that trigger.

Triggers in Honeycomb have strict requirements about the query they end up executing; most notably (and the one users trip over most often), a trigger's frequency cannot be more than four times (4x) the query's duration.

I'm curious if there is a way, via plan modification (as I'm quite sure it is not possible during validation), to get at the configuration of the trigger's query and run these validation-ish checks so they error during the plan, avoiding the frustration of apply-time failures (made even more frustrating in PR-driven workflows like TFC).

I've thought through making additional API calls to help solve this, but it feels like a dead end: it is very(!) likely that someone is creating the query and the trigger in the same configuration's plan/apply run, and as a result I can't call the API's GET endpoint to check the value, as the query's ID will be "Unknown" until we're in the apply stage anyway.
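
For concreteness, the shape of that dead end would be something like the plan modifier below on the trigger's query ID. The hypotheticalQueryAPI client, its GetQueryDuration call, and the attribute names are made up for illustration; the unknown-value bail-out at the top is the crux of the problem:

```go
// Sketch only: a plan modifier that tries to look up the referenced query via
// the API at plan time. The client interface and attribute names are made up.
package trigger

import (
	"context"

	"github.com/hashicorp/terraform-plugin-framework/path"
	"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
	"github.com/hashicorp/terraform-plugin-framework/types"
)

// hypotheticalQueryAPI stands in for the provider's Honeycomb API client.
type hypotheticalQueryAPI interface {
	GetQueryDuration(ctx context.Context, queryID string) (int64, error)
}

type queryCompatibilityModifier struct {
	client hypotheticalQueryAPI
}

func (m queryCompatibilityModifier) Description(context.Context) string {
	return "Checks that the referenced query is usable by this trigger."
}

func (m queryCompatibilityModifier) MarkdownDescription(ctx context.Context) string {
	return m.Description(ctx)
}

func (m queryCompatibilityModifier) PlanModifyString(ctx context.Context, req planmodifier.StringRequest, resp *planmodifier.StringResponse) {
	// If the query is created in the same plan/apply, its ID is unknown at
	// plan time, so there is nothing to look up and the check is skipped,
	// which is exactly why this only helps some of the time.
	if req.PlanValue.IsUnknown() || req.PlanValue.IsNull() {
		return
	}

	// Grab the trigger's planned frequency to compare against.
	var frequency types.Int64
	resp.Diagnostics.Append(req.Plan.GetAttribute(ctx, path.Root("frequency"), &frequency)...)
	if resp.Diagnostics.HasError() || frequency.IsUnknown() || frequency.IsNull() {
		return
	}

	duration, err := m.client.GetQueryDuration(ctx, req.PlanValue.ValueString())
	if err != nil {
		resp.Diagnostics.AddAttributeError(req.Path, "Unable to look up query", err.Error())
		return
	}
	if frequency.ValueInt64() > 4*duration {
		resp.Diagnostics.AddAttributeError(req.Path,
			"Trigger frequency incompatible with query duration",
			"The trigger's frequency cannot be more than four times the query's duration.")
	}
}
```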

If I'm out of luck here, I'm curious what folks would suggest as "the most Terraform" way to solve this. Ironically, r/trigger used to take the Query Specification itself rather than relying on r/query to create the query and provide the ID, but that didn't feel like a great Terraform DX to me :upside_down_face:

Hi @jharley :wave: Interesting topic, and it sounds like you have done some good investigative work here already.

Terraform resources (data resources or managed resources) are always considered separate, both in state storage and in operations, so your intuitions here about potential options are pretty accurate. If you attempted to do an API lookup during planning (a valid approach), but the separate resource has not been created in the API yet, then you would not get the validation. Passing the query configuration through to the trigger configuration guarantees that the trigger managed resource has all the information to perform the validation itself, but is less convenient than passing an identifier and being able to manage the resources separately (especially if the query is reusable elsewhere).

Upfront, I'm not sure there is going to be a "best practices" type of answer here, but maybe some of these ideas can offer inspiration for your situation. It probably goes without saying that you could go back to accepting the query specification directly in the trigger managed resource, as it was before, but it really depends on your practitioner use cases and needs.

One option would be to allow two configuration styles and ensure via configuration validation that they cannot both be defined at the same time (rough sketch after this list):

  • Configuration via query identifier as it is today. Queries remain “reusable”, separate managed resources but without the configuration-time or plan-time validation.
  • Configuration via query specification where the trigger managed resource also internally manages and validates the query.
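
A rough sketch of the mutual-exclusion piece, assuming the terraform-plugin-framework-validators module; the triggerResource type and attribute names are illustrative:

```go
// Sketch of configuration validation across the two attribute styles.
package trigger

import (
	"context"

	"github.com/hashicorp/terraform-plugin-framework-validators/resourcevalidator"
	"github.com/hashicorp/terraform-plugin-framework/path"
	"github.com/hashicorp/terraform-plugin-framework/resource"
)

type triggerResource struct{} // stands in for the real r/trigger implementation

func (r *triggerResource) ConfigValidators(context.Context) []resource.ConfigValidator {
	return []resource.ConfigValidator{
		// Exactly one of the two configuration styles must be used; swap in
		// resourcevalidator.Conflicting if neither being set should be allowed.
		resourcevalidator.ExactlyOneOf(
			path.MatchRoot("query_id"),
			path.MatchRoot("query_json"),
		),
	}
}
```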

Another, maybe slightly more preferable, option could be to allow both configuration styles at the same time (see the sketch after this list):

  • Requiring configuration via query identifier as it is today. Queries remain “reusable”, separate managed resources.
  • Optionally allowing configuration also via query specification, but only to enable the additional validation in the trigger managed resource.
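
The validation itself could then live in the trigger resource's ValidateConfig. A minimal sketch, assuming illustrative query_json and frequency attributes (both in seconds) and a time_range field in the Query Specification JSON:

```go
// Sketch of resource-level validation when query_json is configured.
package trigger

import (
	"context"
	"encoding/json"

	"github.com/hashicorp/terraform-plugin-framework/path"
	"github.com/hashicorp/terraform-plugin-framework/resource"
	"github.com/hashicorp/terraform-plugin-framework/types"
)

type triggerResource struct{} // stands in for the real r/trigger implementation

func (r *triggerResource) ValidateConfig(ctx context.Context, req resource.ValidateConfigRequest, resp *resource.ValidateConfigResponse) {
	var queryJSON types.String
	var frequency types.Int64
	resp.Diagnostics.Append(req.Config.GetAttribute(ctx, path.Root("query_json"), &queryJSON)...)
	resp.Diagnostics.Append(req.Config.GetAttribute(ctx, path.Root("frequency"), &frequency)...)
	if resp.Diagnostics.HasError() {
		return
	}

	// Only the query_json style enables this check, and unknown values (e.g.
	// interpolated from elsewhere) cannot be validated until apply.
	if queryJSON.IsNull() || queryJSON.IsUnknown() || frequency.IsNull() || frequency.IsUnknown() {
		return
	}

	var spec struct {
		TimeRange int64 `json:"time_range"`
	}
	if err := json.Unmarshal([]byte(queryJSON.ValueString()), &spec); err != nil {
		resp.Diagnostics.AddAttributeError(path.Root("query_json"), "Invalid Query Specification", err.Error())
		return
	}

	// The rule described earlier in the thread: the trigger's frequency cannot
	// be more than four times the query's duration.
	if spec.TimeRange > 0 && frequency.ValueInt64() > 4*spec.TimeRange {
		resp.Diagnostics.AddAttributeError(path.Root("frequency"),
			"Trigger frequency incompatible with query duration",
			"The trigger's frequency cannot be more than four times the query's duration.")
	}
}
```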

Beyond that, there are other configuration-based options such as lifecycle preconditions and friends, but understandably that could be less than ideal, as every practitioner would have to tease out the validation logic in their own configuration, even with an example of it in the managed resource documentation.

Courtesy of the upcoming Terraform 1.8 provider-defined functions feature, you could, however, make this easier for practitioners by offering a function that performs the necessary validation logic to catch the configuration issue (e.g. validate_trigger_frequency(query_duration, trigger_frequency), or passing in the whole query specification instead of the duration).
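
Hedging that the provider-defined functions API was still settling around the 1.8 timeframe, a rough sketch against a recent terraform-plugin-framework release might look something like this (names are illustrative):

```go
// Sketch of a provider-defined function for the frequency/duration rule.
package provider

import (
	"context"

	"github.com/hashicorp/terraform-plugin-framework/function"
)

type validateTriggerFrequencyFunction struct{}

func (f validateTriggerFrequencyFunction) Metadata(_ context.Context, req function.MetadataRequest, resp *function.MetadataResponse) {
	resp.Name = "validate_trigger_frequency"
}

func (f validateTriggerFrequencyFunction) Definition(_ context.Context, req function.DefinitionRequest, resp *function.DefinitionResponse) {
	resp.Definition = function.Definition{
		Summary: "Returns true if the trigger frequency is compatible with the query duration.",
		Parameters: []function.Parameter{
			function.Int64Parameter{Name: "query_duration"},
			function.Int64Parameter{Name: "trigger_frequency"},
		},
		Return: function.BoolReturn{},
	}
}

func (f validateTriggerFrequencyFunction) Run(ctx context.Context, req function.RunRequest, resp *function.RunResponse) {
	var queryDuration, triggerFrequency int64

	resp.Error = function.ConcatFuncErrors(resp.Error, req.Arguments.Get(ctx, &queryDuration, &triggerFrequency))
	if resp.Error != nil {
		return
	}

	// The rule from earlier in the thread: frequency must not exceed 4x the
	// query's duration.
	resp.Error = function.ConcatFuncErrors(resp.Error, resp.Result.Set(ctx, triggerFrequency <= 4*queryDuration))
}
```

Practitioners could then call it from a precondition, check block, or variable validation, e.g. something like provider::honeycombio::validate_trigger_frequency(query_duration, frequency).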

Another option in that space without provider-defined functions is offering additional computed attributes in the query managed resource to help practitioners write this validation logic themselves (e.g. max_trigger_frequency or similar).
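
A sketch of that, with an illustrative max_trigger_frequency computed attribute on the query resource:

```go
// Sketch of exposing a computed helper attribute on the query resource.
package query

import (
	"context"

	"github.com/hashicorp/terraform-plugin-framework/resource"
	"github.com/hashicorp/terraform-plugin-framework/resource/schema"
)

type queryResource struct{} // stands in for the real r/query implementation

func (r *queryResource) Schema(_ context.Context, _ resource.SchemaRequest, resp *resource.SchemaResponse) {
	resp.Schema = schema.Schema{
		Attributes: map[string]schema.Attribute{
			// ...existing attributes (dataset, query_json, etc.) elided...
			"max_trigger_frequency": schema.Int64Attribute{
				Computed:    true,
				Description: "Largest trigger frequency (in seconds) compatible with this query's duration.",
			},
		},
	}
}
```

The resource's Create and Read logic would populate it from the query's duration, and practitioners could then reference it from a lifecycle precondition or variable validation in their trigger configuration.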

I'm going to stop here because that is a lot of potential solutions for your situation, and while none of them may be ideal, I hope these additional threads give you something to pull on.


Thanks for the speedy and thoughtful reply, @bflad! I appreciate the confirmation that there really isn't a way to validate across resources with a consistent success rate (meaning: the API-client-based plan modifier would only work some of the time).

I’ll ponder the options above and write back with what I move ahead with.

Just closing the loop here for future travellers: I ended up adding query_json as an alternative attribute to query_id on the resource, and when query_json is provided, resource-level validation can happen to help reduce the frustrating "broke during apply" behaviour.

PR is here: r/trigger: re-introduce query_json for deeper validation by jharley · Pull Request #487 · honeycombio/terraform-provider-honeycombio · GitHub

Thanks, again, @bflad :pray:
