Can someone suggest an approach to writing TF code in a cloud-agnostic way? For example, I want to provision a VPC on AWS/GCP/Azure using some factored-out input/output variables and common TF code, maybe a VPC module that I can control to choose which cloud to provision on.
Hi @sourrabhgupta,
The typical way to work across multiple cloud platforms in Terraform is to identify some level of abstraction you can reasonably provide across multiple different implementations, and then write one or more modules for each target platform that have similar input variables and/or output values but a different internal implementation.
For example, you might be able to present “Kubernetes cluster” as a cross-platform abstraction. In that case you could write a Kubernetes cluster module for AWS and another one for Azure, where both of them produce a Kubernetes API base URL. The two are then interchangeable for anything that expects to work with a Kubernetes API base URL.
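As a rough sketch of what that interface alignment might look like (the module layout, variable name, and `kubernetes_api_url` output name here are placeholders I’ve invented, and the cluster resources and their supporting objects are elided):

```hcl
# modules/kubernetes-cluster-aws/ (hypothetical layout)
variable "cluster_name" {
  type = string
}

# ... aws_eks_cluster.this and its supporting IAM/VPC resources elided ...

output "kubernetes_api_url" {
  # EKS exposes the API server endpoint directly on the cluster resource.
  value = aws_eks_cluster.this.endpoint
}
```

```hcl
# modules/kubernetes-cluster-azure/ (hypothetical layout)
variable "cluster_name" {
  type = string
}

# ... azurerm_kubernetes_cluster.this and its supporting resources elided ...

output "kubernetes_api_url" {
  # AKS exposes the API server host via its generated kubeconfig.
  value = azurerm_kubernetes_cluster.this.kube_config[0].host
}
```

Because both modules expose the same `kubernetes_api_url` output, a caller (for example, a `kubernetes` or `helm` provider configuration) can consume either one without caring which platform is behind it.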
Writing a single module that allows dynamically choosing which platform to use (e.g. as an input variable) is typically not productive because there is not normally any overlap between the resource types used with one platform and the resource types used with another. Instead, users of your modules will choose which platform to use by choosing which modules to call.
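To make that concrete with the VPC case from your question: each root configuration just calls the module for the platform it targets. The module paths and input names below are hypothetical; the point is only that the calling interface stays the same while the implementation inside each module is entirely platform-specific.

```hcl
# Root configuration for an AWS deployment (module path and inputs are
# hypothetical; inside, the module would create aws_vpc, subnets, etc.)
module "network" {
  source     = "./modules/vpc-aws"
  name       = "app-network"
  cidr_block = "10.0.0.0/16"
}

# A separate root configuration targeting Google Cloud would instead call
# the equivalent module, keeping the same inputs and outputs:
#
#   module "network" {
#     source     = "./modules/vpc-google"
#     name       = "app-network"
#     cidr_block = "10.0.0.0/16"
#   }

# Downstream code then depends only on the shared output, e.g.:
#   module.network.network_id
```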
I would also urge caution around how generic you go. While at one level the various different public and private cloud solutions provide similar capabilities, they are also implemented very differently. If you go too generic you can end up with a solution that might work at some level but is significantly more complex or costly than what you would achieve by using facilities that are not identical between different cloud providers.
I’d also be mindful of how different parts are going to be used together. For example, you’d never link an AWS EC2 instance to a Google Cloud network subnet, so does it matter if the Terraform modules have slightly different (and more cloud-aligned) interfaces?