How to overcome AWS credential session timeout when storing tfstate in S3

Configuring the Terraform state backend using S3

I am using an in-house custom provider and storing the tfstate in AWS S3.
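For reference, the backend is wired up along these lines (all names are placeholders, and this assumes an empty `backend "s3" {}` block is declared in the configuration):

```
# Configure the S3 state backend at init time
# (bucket/key/region are placeholder values)
terraform init \
  -backend-config="bucket=my-tfstate-bucket" \
  -backend-config="key=envs/dev/terraform.tfstate" \
  -backend-config="region=us-east-1"
```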

So basically I need the AWS credentials to be valid from the start to the end of the Terraform run, because the .tfstate is written to S3 at the very end.

I am getting the credentials for the AWS provider from Vault (not via the Vault provider; a script retrieves them), and they have a TTL of 1 hour.
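The retrieval script does roughly the following (a sketch; the `aws/` mount path and `my-role` role name are assumptions, adjust to your setup):

```
# Fetch short-lived AWS credentials from Vault's AWS secrets engine
# (mount path "aws" and role "my-role" are placeholders)
CREDS_JSON=$(vault read -format=json aws/creds/my-role)

export AWS_ACCESS_KEY_ID=$(echo "$CREDS_JSON" | jq -r '.data.access_key')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS_JSON" | jq -r '.data.secret_key')

# The lease ID is what you would renew to extend the 1-hour TTL
LEASE_ID=$(echo "$CREDS_JSON" | jq -r '.lease_id')
```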

Terraform initialisation or starting Terraform (apply/destroy) isn't a problem, because if the credentials are invalid it throws an error before doing anything.

However, if creating the resources takes more than an hour, or if I start just 5 minutes before the session timeout, then once all the resources are created/destroyed, storing the .tfstate to S3 fails.

So re-running will create duplicate resources, or destroying resources will fail.
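For recovery, I know the state Terraform saves locally on a failed write can be pushed once fresh credentials are in place, along these lines:

```
# When the final write fails, Terraform leaves errored.tfstate
# in the working directory; push it with fresh credentials so a
# re-run sees the real state instead of creating duplicates
terraform state push errored.tfstate
```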

How can I overcome this? How can I automatically refresh the credentials?

Note: Terraform produces an errored state file when this happens, but we would like to not end up in a failure in the first place.
So basically, an option in Terraform to execute a command/script periodically to overcome the session timeout would be helpful.
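In the absence of such an option, the best workaround I can think of is a wrapper script that renews the Vault lease in the background while terraform runs. A rough sketch, assuming the credentials come from Vault's AWS secrets engine with a renewable lease (mount path, role name, and interval are placeholders):

```
#!/usr/bin/env bash
set -euo pipefail

# Fetch credentials and capture the lease ID (paths/roles are placeholders)
CREDS_JSON=$(vault read -format=json aws/creds/my-role)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS_JSON" | jq -r '.data.access_key')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS_JSON" | jq -r '.data.secret_key')
LEASE_ID=$(echo "$CREDS_JSON" | jq -r '.lease_id')

# Renew the lease every 20 minutes in the background so the 1-hour TTL
# never expires mid-run (this only works for renewable leases such as
# iam_user credentials; STS tokens cannot be renewed)
(
  while true; do
    sleep 1200
    vault lease renew "$LEASE_ID" >/dev/null || break
  done
) &
RENEWER_PID=$!

# Stop the renewer when terraform exits, whether it succeeds or fails
trap 'kill "$RENEWER_PID" 2>/dev/null' EXIT

terraform apply -auto-approve
```

If the lease is not renewable (e.g. STS-based credentials), I suppose the remaining option is asking the Vault operator to raise the TTL on the secrets engine's lease configuration, but a native Terraform hook would still be cleaner.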