Any way to change default timeout of 10m when queueing jobs?

Is there any way to change the timeout when running a remote CLI apply in TF Cloud?

I know there’s a -lock-timeout option with apply, but is that what’s happening here? The docs are unclear.

Our scenario is we have a bunch of jobs that get queued up and never get triggered because terraform apply times out waiting in CI/CD…

Preparing the remote apply...

To view this run in a browser, visit:
https://app.terraform.io/app/XXX/XXX/runs/run-XXX

Waiting for 1 run(s) to finish before being queued...
Waiting for 1 run(s) to finish before being queued... (30s elapsed)
Waiting for 1 run(s) to finish before being queued... (1m0s elapsed)
Waiting for 1 run(s) to finish before being queued... (1m30s elapsed)
Waiting for 1 run(s) to finish before being queued... (2m0s elapsed)
Waiting for 1 run(s) to finish before being queued... (2m30s elapsed)
Waiting for 1 run(s) to finish before being queued... (3m0s elapsed)
Waiting for 1 run(s) to finish before being queued... (3m30s elapsed)
Waiting for 1 run(s) to finish before being queued... (4m0s elapsed)
Waiting for 1 run(s) to finish before being queued... (4m30s elapsed)
Waiting for 1 run(s) to finish before being queued... (5m0s elapsed)
Waiting for 1 run(s) to finish before being queued... (5m30s elapsed)
Waiting for 1 run(s) to finish before being queued... (6m0s elapsed)
Waiting for 1 run(s) to finish before being queued... (6m30s elapsed)
Waiting for 1 run(s) to finish before being queued... (7m0s elapsed)
Waiting for 1 run(s) to finish before being queued... (7m30s elapsed)
Waiting for 1 run(s) to finish before being queued... (8m0s elapsed)
Waiting for 1 run(s) to finish before being queued... (8m30s elapsed)
Waiting for 1 run(s) to finish before being queued... (9m0s elapsed)
Error: The operation was canceled.

I’m not sure this is mentioned anywhere in the documentation, but looking at the remote backend, I believe it should wait indefinitely unless you provide -lock-timeout, in which case it will attempt to cancel the run once that threshold has passed. The “operation was canceled” in your output suggests that either the run was canceled by some other process (the UI, for example) or a timeout was provided.

If you have provided a lock timeout, try either removing it entirely or raising the value, and make sure no other action was taken to cancel the run elsewhere.
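For reference, the flag in question looks like this (the 30m value is just an example; pick something appropriate for how long your runs typically sit in the queue):

```shell
# -lock-timeout controls how long Terraform waits to acquire the state lock
# before giving up. Omitting the flag means it waits indefinitely.
terraform apply -lock-timeout=30m

# The same flag is accepted by plan:
terraform plan -lock-timeout=30m
```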


Thanks for humoring me, this was “user error.” For some reason I completely glossed over the 10-minute timeout I snuck into my GitHub Workflow config. It wasn’t clear from the output that the timeout was coming from the Workflow and not Terraform, thanks!

Hi Chiefy,

I am a newbie in Terraform, and I am facing the same issue. How did you fix it?
Can you please help me here? I am triggering the pipeline using Jenkins. Thanks in advance.

The issue was that I had set jobs.&lt;job_id&gt;.timeout-minutes in my GitHub Workflow YAML. If you omit this, it defaults to 360 (6 hours). If you do set a timeout, you should also set a -lock-timeout on your terraform apply command (the lock timeout should be shorter than the workflow timeout so Terraform has time to finish and clean up).
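As a sketch of that relationship in the workflow YAML (job and step names here are made up, and the specific timeout values are just illustrative):

```yaml
jobs:
  terraform:
    runs-on: ubuntu-latest
    # GitHub cancels the whole job after this many minutes (default: 360).
    # My mistake was setting this to 10, shorter than the time runs were
    # spending queued in Terraform Cloud.
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      # Keep -lock-timeout shorter than timeout-minutes above, so Terraform
      # can give up and exit cleanly before GitHub cancels the job.
      - run: terraform apply -auto-approve -lock-timeout=45m
```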

Also be sure to check your Terraform Cloud queue. If you are running in CLI mode in GitHub Workflows, then even if a GH job fails, the TF Cloud run may keep holding the state lock, so further jobs will just sit in the queue forever.
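If a stale run is holding the lock, you can discard it from the UI, or via the Terraform Cloud API. A rough sketch, assuming you have an API token ($TFC_TOKEN), your workspace ID ($WORKSPACE_ID, ws-…), and the stuck run’s ID ($RUN_ID, run-…):

```shell
# List runs for the workspace to find the one holding the lock
curl -s \
  -H "Authorization: Bearer $TFC_TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID/runs"

# Discard the stuck run so queued runs can proceed
curl -s -X POST \
  -H "Authorization: Bearer $TFC_TOKEN" \
  -H "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/runs/$RUN_ID/actions/discard"
```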

Thank you, let me try this. But my jobs are waiting in the queue in the TF workspace for approval, even though I use the -auto-approve option in the pipeline script.

Are you sure your workspace is using “Remote Execution” mode and not set to “Manual Apply” mode?

Yes, using Remote mode and running ‘terraform apply -auto-approve’ through the pipeline script. No “Manual Apply”.