Strange "couldn't find resource" issue following provider update

Hi there

I am seeing a strange issue since updating to v4.67.0 of the Terraform AWS provider.

Basically, prior to the update, I had multiple jobs executing that created Load Balancers (amongst other things) and an "ssl_negotiation_policy" for the port 443 listener (which is created as part of the LB itself). There were no issues previously; however, I now seem to be getting random "couldn't find resource" errors for the ssl_negotiation_policy when it tries to create it, and it's strangely intermittent.

I have set the TF_LOG env var to DEBUG to see if I could glean further info, but all it gives me is this:

"Response contains error diagnostic: diagnostic_summary=“reading ELB Classic SSL Negotiaton Policy (policy name): couldn’t find resource”


[ERROR] vertex “aws_lb_ssl_negotiation_policy.XYZ” error: reading ELBV Classic SSL Negotiation Policy (xyz policy): couldn’t find resource.

I did notice in the DEBUG logs a couple of "Rate exceeded" messages from when it was creating the LB, which contains the 443 listener that the policy is later applied to. Would that cause this issue, or is it normal to see those messages? The SSL policy is only attempted later on. However, in the other failed job there were no such throttling errors at all.

It's odd: if I kick off all of the jobs together (they use separate keys for the TF state path, so there's no conflict there), a random one or two will fail, and each time it's a different one. Does this suggest it could be a throttling issue with the API?
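In case it's relevant, my provider block is currently on the defaults. I'm considering setting max_retries explicitly to give the SDK more room to back off; the region and value below are placeholders, not my real setup:

```hcl
# Placeholder provider config – region/value here are examples only.
provider "aws" {
  region = "eu-west-1"

  # The v4 provider defaults to 25 retries; raising it gives the SDK
  # more headroom to back off on "Rate exceeded" throttling errors.
  max_retries = 40
}
```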


AWS TF Provider: v4.67.0
TF Binary: v1.2.1

Would really appreciate any tips. It's a really weird one and difficult to troubleshoot.


I am going to try staggering the jobs at different times to see if it gives a different result.

I am assuming that a rate limit / throttling error on the API would affect everything in that account, but how would Terraform deal with it in terms of dependency sequencing?
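For context, the resources involved look roughly like this (anonymised; all names and the certificate ARN are placeholders). The load_balancer reference is what I'd expect to give Terraform the ordering, i.e. the policy should only be created after the ELB and its 443 listener exist:

```hcl
# Anonymised sketch – names and ARN are placeholders, not my real config.
resource "aws_elb" "example" {
  name               = "example-lb"
  availability_zones = ["eu-west-1a"]

  # The 443 listener the SSL negotiation policy later attaches to.
  listener {
    instance_port      = 443
    instance_protocol  = "https"
    lb_port            = 443
    lb_protocol        = "https"
    ssl_certificate_id = "arn:aws:iam::123456789012:server-certificate/example"
  }
}

resource "aws_lb_ssl_negotiation_policy" "example" {
  name          = "example-ssl-policy"
  # Referencing the ELB creates an implicit dependency, so Terraform
  # should only attempt this after the ELB (and its listener) exists.
  load_balancer = aws_elb.example.id
  lb_port       = 443
}
```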