Google Cloud runs Kubernetes workflow node in us-central1 instead of the region of the zone specified in Terraform files

Hi,

I have these settings specified in main.tf file:

provider "google" {
  project = var.project_id
  # Derive the region from the zone using regex ("us-west1-a" -> "us-west1")
  region = regex("^([a-z]+-[a-z]+[0-9]+)-[a-z]+$", var.zone)[0]
  zone = var.zone
}

metadata.display.yaml:

apiVersion: blueprints.cloud.google.com/v1alpha1
kind: BlueprintMetadata
metadata:
  name: marketplace-tools-display
  annotations:
    autogenSpecType: SINGLE_VM
    config.kubernetes.io/local-config: "true"
spec:
  ui:
    input:
      variables:
        project_id:
          name: project_id
          title: Project Id
          invisible: true
        goog_cm_deployment_name:
          name: goog_cm_deployment_name
          title: Goog Cm Deployment Name
        zone:
          name: zone
          title: Zone
          xGoogleProperty:
            type: ET_GCE_ZONE

metadata.yaml:

apiVersion: blueprints.cloud.google.com/v1alpha1
kind: BlueprintMetadata
metadata:
  name: marketplace-tools
  annotations:
    autogenSpecType: SINGLE_VM
    config.kubernetes.io/local-config: "true"
spec:
  info:
    title: Google Cloud Marketplace Terraform Module
    version: 8.1.0.0
    actuationTool:
      flavor: Terraform
      version: ">= 1.2"
    description: {}
    softwareGroups:
      - type: SG_OS
        software:

....

  content: {}
  interfaces:
    variables:
      - name: project_id
        description: The ID of the project in which to provision resources.
        varType: string
        required: true
      - name: goog_cm_deployment_name
        description: The name of the deployment and VM instance.
        varType: string
        required: true
      - name: zone
        description: The zone for the solution to be deployed.
        varType: string
        defaultValue: us-west1-a
....

I also explicitly select us-west1-a as the zone on the Terraform UI deployment screen.
Even after that, Google Cloud's backend still uses the us-central1 location for Kubernetes-related operations.
Because of this, I have to grant access for us-central1 so the service account is permitted to deploy in that region.
If I don't grant that access, I get this error: Virtual Machine deployment failed
“us-central1” violates constraint “constraints/gcp.resourceLocations” on the resource “projects/projecttest-2251/locations/us-central1/deployments/tf-1”.

Is there any way to fix this so Google Cloud's backend operations also run in the us-west1 region? I can confirm, however, that the final deployment of the instance does happen in the us-west1-a zone.
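For context, the "access" I'm granting is an exception on the resource-locations constraint from the error message. As a rough Terraform sketch (the project-level override and the value-group names are my assumption, not something Marketplace generates):

```hcl
resource "google_org_policy_policy" "allow_us_central1" {
  # Project-level override of the org policy cited in the error.
  # NOTE: hypothetical sketch; adjust the parent and allowed values to your org.
  name   = "projects/${var.project_id}/policies/gcp.resourceLocations"
  parent = "projects/${var.project_id}"

  spec {
    rules {
      values {
        # Value groups assumed here; plain region names also work.
        allowed_values = ["in:us-west1-locations", "in:us-central1-locations"]
      }
    }
  }
}
```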

I think there’s a layer missing here: you mention Kubernetes-related configs in GCP’s “marketplace”, but you haven’t included links to the docs you’re following or any Terraform code beyond the provider config. Are you following something like this?

(side note: substr(var.zone, 0, length(var.zone) - 2) might be a simpler (and, arguably, easier-to-read) way to get the region from the zone.)
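To illustrate, here's a quick sketch of both derivations side by side (the zone default is assumed for the example):

```hcl
variable "zone" {
  type    = string
  default = "us-west1-a"
}

locals {
  # Drop the trailing "-a"/"-b"/... suffix: "us-west1-a" -> "us-west1"
  region_from_substr = substr(var.zone, 0, length(var.zone) - 2)

  # Same result via regex: capture everything before the final "-<letter>"
  region_from_regex = regex("^([a-z]+-[a-z]+[0-9]+)-[a-z]+$", var.zone)[0]
}
```

Both yield "us-west1" for "us-west1-a"; the substr version just has less to parse mentally.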

Also, normally, service accounts aren’t restricted by region AFAIK (other than org level constraints, which is what it sounds like you’re running up against).

Yes, that’s the document I followed. It’s still unclear why GCP’s backend deploys its internal workflow nodes in the us-central1 location, although the final deployment of the VM instance does happen in the user-specified zone, us-west1-a.

I don’t know, and this isn’t a process I’m super familiar with. I think it’s fair to say that most other folks here also won’t be.

But if it’s running in GCP infrastructure, it’s not surprising that whatever does the initial bootstrapping of the instance that runs the terraform apply (if that’s what’s happening, versus running it locally) might not follow the provider configuration, for obvious reasons.

So I’d look closely at whatever you’re using to spin that up, and/or ask in a GCP-specific venue related to the tooling you’re using.