It sounds like you have a situation where the configuration for one provider refers to an object that's not yet created by some other provider. In your case, I assume it's the hashicorp/kubernetes provider configuration referring to something from the hashicorp/google provider.
In that case it's indeed not typically possible to configure across multiple systems in a single Terraform configuration, because Terraform needs to configure a provider in order to plan against that provider, but at the time of the first plan the hashicorp/kubernetes provider configuration is incomplete. There's more about this in the Provider Configuration documentation:
> You can use expressions in the values of these configuration arguments, but can only reference values that are known before the configuration is applied. This means you can safely reference input variables, but not attributes exported by resources (with an exception for resource arguments that are specified directly in the configuration).
How a provider reacts to an incomplete configuration is up to that provider, but IIRC the Kubernetes provider in particular reacts by trying to connect to a default location, such as localhost, and so you see "Connection Refused" if you are running Terraform on a system where Kubernetes isn't running.
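To make that concrete, here's a rough sketch of the kind of configuration that hits this problem. The resource and attribute names here are illustrative (a GKE cluster managed by the hashicorp/google provider), not taken from your configuration:

```hcl
# Hypothetical example: the kubernetes provider configuration depends on
# attributes of a cluster that doesn't exist yet during the first plan.
resource "google_container_cluster" "example" {
  name     = "example"
  location = "us-central1"
}

provider "kubernetes" {
  # These attributes are unknown until the cluster has been created, so on
  # the initial plan the provider falls back to its defaults (localhost),
  # producing the "Connection Refused" error.
  host = "https://${google_container_cluster.example.endpoint}"
  cluster_ca_certificate = base64decode(
    google_container_cluster.example.master_auth[0].cluster_ca_certificate
  )
}
```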
The most robust answer to this problem is to split into two configurations where one is responsible for establishing the underlying cluster and then another is responsible for configuring that cluster.
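As a sketch of that split, the cluster configuration would publish its connection details as outputs, and the second configuration would read them back, for example via `terraform_remote_state`, so that the kubernetes provider's arguments are fully known before that configuration ever plans. The backend settings and output names below are illustrative assumptions:

```hcl
# In the second ("cluster contents") configuration, read the outputs
# published by the first ("cluster") configuration. The backend type,
# bucket, and output names are hypothetical.
data "terraform_remote_state" "cluster" {
  backend = "gcs"
  config = {
    bucket = "my-terraform-state" # hypothetical bucket name
    prefix = "cluster"
  }
}

provider "kubernetes" {
  # These values come from objects that already exist by the time this
  # configuration runs, so they are known at plan time.
  host                   = data.terraform_remote_state.cluster.outputs.host
  cluster_ca_certificate = data.terraform_remote_state.cluster.outputs.cluster_ca_certificate
}
```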
However, since this is only a bootstrapping problem (that is, it goes away once you’ve created everything for the first time and move into a mode of just updating these objects), this can also be a valid situation to use Resource Targeting, allowing you to perform an initial run to establish the cluster and then finally converge on a complete system by running an untargeted apply:
```shell
# first create just the cluster and its prerequisites:
terraform apply -target=google_container_cluster.example

# then, after that succeeds, apply everything to converge:
terraform apply
```
After this initial bootstrapping process you can then just use a normal `terraform apply` on an ongoing basis, as long as you don't destroy or replace the container cluster.