Starting k8s cluster && creating deployments in same apply

Hi all, I’m attempting to use Terraform to both create a GKE cluster and stand up Kubernetes deployments/statefulsets/etc. I’d like to do the entire thing in one apply without any non-Terraform steps (e.g. pointing my kubeconfig at the newly created cluster). Has anyone done this before and have any advice? The cluster creation works as expected, but then my deployment/pod initializations fail with “no route to host”. That makes sense to me, since I’m never telling Terraform to reference the new cluster, but I’m unclear on how to correct it. Any help would be greatly appreciated.


Hi @spexican924

Welcome to the forums!

It is not reliably possible to create the K8s cluster and deploy resources into the new cluster in a single terraform apply.

From:
https://www.terraform.io/docs/providers/kubernetes/index.html#stacking-with-managed-kubernetes-cluster-resources

IMPORTANT WARNING: When using interpolation to pass credentials to the Kubernetes provider from other resources, these resources SHOULD NOT be created in the same apply operation where Kubernetes provider resources are also used. This will lead to intermittent and unpredictable errors which are hard to debug and diagnose. The root issue lies with the order in which Terraform itself evaluates the provider blocks vs. actual resources. Please refer to this section of Terraform docs for further explanation.

The best practice in this case is to ensure that the cluster itself and the Kubernetes provider resources are managed in separate apply operations. Data sources can be used to convey values between the two stages as needed, as in the sketch below.
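For example, since the original question is about GKE, the second stage might look roughly like this. This is only a sketch: the cluster name and location are placeholders, and it assumes the cluster was already created by a separate, earlier apply.

data "google_client_config" "default" {}

# Look up the cluster created by the first apply
data "google_container_cluster" "my_cluster" {
  name     = "my-gke-cluster" # placeholder
  location = "us-central1"    # placeholder
}

# Configure the Kubernetes provider from the data source, not from a
# resource, so this stage never depends on cluster creation
provider "kubernetes" {
  load_config_file = false
  host             = "https://${data.google_container_cluster.my_cluster.endpoint}"
  token            = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(
    data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate
  )
}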


Ummm… That doesn’t sound right. I’ve done exactly that using a managed k8s cluster on DigitalOcean.com.

variable "do_token" {}

data "digitalocean_kubernetes_versions" "my-first-tf-k8s-cluster" {}


resource "digitalocean_kubernetes_cluster" "my-first-tf-k8s-cluster" {
  name    = "my-first-tf-k8s-cluster"
  region  = "tor1"
  # Grab the latest version slug from `doctl kubernetes options versions`
  version = data.digitalocean_kubernetes_versions.my-first-tf-k8s-cluster.latest_version
  tags    = ["dev"]

  node_pool {
    name       = "worker-pool"
    size       = "s-1vcpu-2gb"
    node_count = 2
  }
}

provider "kubernetes" {
  load_config_file = false
  host  = digitalocean_kubernetes_cluster.my-first-tf-k8s-cluster.endpoint
  token = digitalocean_kubernetes_cluster.my-first-tf-k8s-cluster.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.my-first-tf-k8s-cluster.kube_config[0].cluster_ca_certificate
  )
}

provider "digitalocean" {
  token = var.do_token
}

terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      version = "1.22.0"
    }
  }
}

resource "kubernetes_deployment" "jellyfin-deployment" {
  metadata {
    name = "jellyfin"
    labels = {
      App = "jellyfin"
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        App = "jellyfin"
      }
    }
    template {
      metadata {
        labels = {
          App = "jellyfin"
        }
      }
      spec {
        container {
          image = "linuxserver/jellyfin:latest"
          name = "jellyfin"
          port {
            container_port = 4096
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "jellyfin-lb" {
  metadata {
    name = "jellyfin-lb"
  }
  spec {
    selector = {
      App = kubernetes_deployment.jellyfin-deployment.metadata[0].labels.App
    }
    port {
      port = 80
      target_port = 8096
    }
   type = "LoadBalancer"
 }
}

resource "digitalocean_domain" "default" {
  name = "jellyfin.example.com"
  ip_address = kubernetes_service.jellyfin-lb.load_balancer_ingress[0].ip
}

It seems to work fine!
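For reference, assuming the DigitalOcean token is supplied via Terraform’s standard TF_VAR_ environment-variable mechanism, the whole thing goes up in one apply:

export TF_VAR_do_token="<your DigitalOcean API token>"
terraform init
terraform apply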

While it is possible in some cases, and may work for you, it is not a recommended practice, because there are conditions under which it fails, leading to unreliable applies.
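If you do want to keep everything in one configuration, one common middle ground (a sketch, not an officially supported pattern; -target is intended for exceptional use) is to create the cluster first with a targeted apply, then run a full apply once the Kubernetes provider can actually reach it:

# First create just the cluster...
terraform apply -target=digitalocean_kubernetes_cluster.my-first-tf-k8s-cluster
# ...then apply the remaining Kubernetes resources
terraform apply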