Bootstrap problem when creating the TF state backend

We would like to switch from the default local filesystem backend to an S3 bucket as the backend for the Terraform state, so that state is shared safely when working on the TF code as a team. However, I think we have a bootstrap problem:
Of course we want to manage the bucket with Terraform as well. In our current setup, we can simply create the bucket first, switch the backend configuration afterwards, and everything is fine.
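For context, switching the backend later is just a matter of adding a `backend` block and re-running `terraform init`; something like the following sketch, where the bucket name, key, region, and table name are all placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-team-tf-state"          # hypothetical bucket name
    key            = "project/terraform.tfstate" # path of the state object inside the bucket
    region         = "eu-central-1"              # hypothetical region
    dynamodb_table = "tf-state-lock"             # optional: enables state locking
    encrypt        = true
  }
}
```

After adding this block, `terraform init` detects the backend change and offers to copy the existing local state into the bucket.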

But I would assume that TF code should be written so that it can be deployed to an empty cloud tenant and build everything from scratch. In that case, the S3 bucket does not exist yet, but Terraform will still try to access it, since the backend configuration tells it to store its state there, and it will therefore fail.

What is the best practice for letting Terraform create the resources it needs in order to run at all? Currently my only idea is: create the bucket manually each time the TF code is deployed to a new tenant, before Terraform runs for the first time, and import the bucket into Terraform's state once it can access the bucket. But that doesn't seem very elegant to me. Is there a better way?

What we do is keep a separate repo for the “bootstrap” resources (in our case the S3 bucket for state and a DynamoDB table for locking) containing the Terraform that manages them. Once, and only once, we run that Terraform manually with no backend configured, so the state lands in a local file, to create the bucket before adding the backend and migrating the state into it.
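As a rough sketch, such a bootstrap repo might contain little more than the bucket and the lock table; all names and the region below are hypothetical, and on the first run there is deliberately no `backend` block, so the state goes to a local `terraform.tfstate`:

```hcl
provider "aws" {
  region = "eu-central-1" # hypothetical region
}

# S3 bucket that will hold all Terraform state files.
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-team-tf-state" # hypothetical name; must be globally unique
}

# Keep old state versions so a bad write can be rolled back.
resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# DynamoDB table used by the S3 backend for state locking.
resource "aws_dynamodb_table" "tf_lock" {
  name         = "tf-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # attribute name the S3 backend expects

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

After the first apply succeeds, you add a `backend "s3"` block pointing at the bucket this code just created and run `terraform init -migrate-state`, which moves the local state file into the bucket. From then on the bootstrap repo manages its own backend like any other project.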

From then on any changes are managed via our Jenkins pipelines.

OK, thanks. This is our first “big” TF project, so we weren’t exactly sure about best practices. Your approach sounds reasonable.