I am looking for advice. I want to use Terraform to ease the burden of installing on-site servers across multiple customer sites.
We use the ESXi provider, which deploys the servers as we require them. However, we have many customers that we build the servers for. The basic configuration of the servers is the same, but each has some minor differences: hostname, IP address, number of nodes in the k8s cluster, etc.
Currently, we have around 20 customer sites, but hopefully this will slowly grow to many more.
I am thinking the best approach is to create a module for the installation, use S3 for the backend state, and then create a new main file for each site, with a different S3 bucket for each site.
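Roughly the layout I have in mind (directory and module names are just placeholders):

```
modules/
  site-install/        # shared module: ESXi VMs, k8s nodes, etc.
customers/
  acme/
    main.tf            # calls modules/site-install with this customer's values
    backend.tf         # points at this customer's S3 bucket
  globex/
    main.tf
    backend.tf
```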
But is this really the best way to proceed?
That sounds about right.
Have a module (or modules) for things which are pretty similar, using variables to configure the differences. Then a different root module for each customer which uses the module(s) as needed, each having a different backend configuration.
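For example, something along these lines (variable names and values are just illustrative; the actual ESXi resources would live inside the module):

```hcl
# modules/site-install/variables.tf -- the shared module exposes only the per-site differences
variable "hostname_prefix" { type = string }
variable "network_cidr"    { type = string }
variable "k8s_node_count"  { type = number }

# customers/acme/main.tf -- one root module per customer, passing that customer's values
module "site" {
  source = "../../modules/site-install"

  hostname_prefix = "acme"
  network_cidr    = "10.20.0.0/24"
  k8s_node_count  = 3
}
```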
I would personally have a different git repo for that code for each customer (so things are well separated) as well as a different S3 bucket with versioning enabled, but you could combine them into a single git repo with directories and a single S3 bucket with different keys.
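For example, each customer's root module would carry its own backend block, pointing either at its own bucket or at a shared bucket with a per-customer key (bucket names and region here are made up):

```hcl
# customers/acme/backend.tf -- one S3 bucket per customer
terraform {
  backend "s3" {
    bucket = "acme-terraform-state"
    key    = "site/terraform.tfstate"
    region = "eu-west-1"
  }
}

# ...or a single shared bucket with a different key per customer, e.g.
#   bucket = "company-terraform-state"
#   key    = "customers/acme/terraform.tfstate"
```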
Many thanks for getting back to me.
I thought about the separate repositories, and there are advantages to this; I was just concerned we could end up with hundreds of repositories. I wasn't sure whether hundreds of repositories or hundreds of folders would be better.
That’s very much up to you. We have hundreds of repositories, which allows us to organise them into different GitHub Organisations, possibly with different access permissions. We also version each repo for traceability and map each repo to a separate CI/CD job.
But fewer repos can also work.
Thanks for the advice. Greatly appreciated. Lots to ponder, but I think lots of repos is the answer long term.