Resources are run in unexpected order


I have a weird issue: I have a chain of dependencies that, at first glance, doesn’t seem to be respected, and I’m not sure why.

I’m creating some kubernetes resources (with the kubernetes provider) that depend on a null_resource, which in turn checks whether a pod is ready.

resource "kubernetes_manifest" "metallb_bgppeers_0" {
        depends_on = [ null_resource.metallb_controller_wait ]

This null_resource depends on the kube_workers resource, which provisions the kubernetes worker virtual machines:

resource "null_resource" "metallb_controller_wait" {
  depends_on = [ vsphere_virtual_machine.kube_workers ]

The kube_workers resource depends on control plane node 2, which in turn depends on control plane node 1.
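Put together, the intended chain looks roughly like this (a simplified sketch based on the snippets above; the kubectl wait command inside the null_resource is an assumption about how the readiness check might be implemented, since the original wait logic isn’t shown):

```hcl
# Workers depend on control plane 2, which depends on control plane 1
# (chain abbreviated).
resource "vsphere_virtual_machine" "kube_workers" {
  depends_on = [vsphere_virtual_machine.kube_controlplane_2]
  # ... VM configuration ...
}

# Waits until the MetalLB controller pod is ready before anything
# downstream is created.
resource "null_resource" "metallb_controller_wait" {
  depends_on = [vsphere_virtual_machine.kube_workers]

  provisioner "local-exec" {
    # Hypothetical readiness check; the real command is not shown above.
    command = "kubectl -n metallb-system wait --for=condition=Ready pod -l app=metallb --timeout=600s"
  }
}

resource "kubernetes_manifest" "metallb_bgppeers_0" {
  depends_on = [null_resource.metallb_controller_wait]
  # ... manifest ...
}
```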

The issue I’m having is that terraform starts reading the data sources (which I know it needs to start with), and right after that it immediately starts on these kubernetes resources for some reason:

data.terraform_remote_state.vpc: Reading...
data.terraform_remote_state.vpc: Read complete after 0s
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-2]
data.vsphere_virtual_machine.vault_template: Reading...
data.vsphere_virtual_machine.kubernetes_template: Reading...
data.vsphere_compute_cluster.cluster: Reading...
data.vsphere_virtual_machine.consul_template: Reading...
data.vsphere_virtual_machine.router_template: Reading...
data.vsphere_datastore.datastore: Reading...
data.vsphere_virtual_machine.dns_recursor_template: Reading... Read complete after 1s [id=network-14]
data.vsphere_datastore.datastore: Read complete after 1s [id=datastore-2078]
data.vsphere_compute_cluster.cluster: Read complete after 1s [id=domain-c7]
data.vsphere_virtual_machine.vault_template: Read complete after 1s [id=423e4784-44dd-84a3-79e5-0234a06c9f85]
data.vsphere_virtual_machine.consul_template: Read complete after 1s [id=423e30b5-4ff2-a1b7-9376-29e8a1f758cb]
data.vsphere_virtual_machine.dns_recursor_template: Read complete after 1s [id=423e6128-eeca-09e4-0beb-0bbe783bea70]
data.vsphere_virtual_machine.kubernetes_template: Read complete after 1s [id=423e31e5-f045-0e1a-6e0b-625e0dbc6ffa]
data.vsphere_virtual_machine.router_template: Read complete after 1s [id=423e6b0a-428c-693d-8407-e252ec852d21]
│ Error: Invalid configuration for API client
│   with kubernetes_manifest.metallb_bgppeers_0,
│   on line 422, in resource "kubernetes_manifest" "metallb_bgppeers_0":
│  422: resource "kubernetes_manifest" "metallb_bgppeers_0" {
│ Get "": dial tcp connect: no route to host

I’m not sure where to start debugging.

I have two .tf files here. The first imports the terraform state from the stage where I install vault and consul, and then creates the vault certificate; the second provisions kubernetes, and its resources depend on the vault certificates being set up.
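The state import in the first file is the usual terraform_remote_state pattern; a minimal sketch (the backend type and path here are assumptions, the real backend settings are not shown):

```hcl
# Hypothetical backend configuration for illustration only.
data "terraform_remote_state" "vpc" {
  backend = "local"
  config = {
    path = "../consul-vault/terraform.tfstate"
  }
}
```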

There is no existing terraform state here; it’s starting from scratch.

I ran terraform graph to understand this better, but I still don’t. In any case, the graph is consistent with what I’m seeing when I do terraform apply:

digraph {
	compound = "true"
	newrank = "true"
	subgraph "root" {
		"[root] data.template_cloudinit_config.kubernetes (expand)" [label = "data.template_cloudinit_config.kubernetes", shape = "box"]
		"[root] data.vsphere_compute_cluster.cluster (expand)" [label = "data.vsphere_compute_cluster.cluster", shape = "box"]
		"[root] data.vsphere_datacenter.datacenter (expand)" [label = "data.vsphere_datacenter.datacenter", shape = "box"]
		"[root] data.vsphere_datastore.datastore (expand)" [label = "data.vsphere_datastore.datastore", shape = "box"]
		"[root] (expand)" [label = "", shape = "box"]
		"[root] data.vsphere_virtual_machine.consul_template (expand)" [label = "data.vsphere_virtual_machine.consul_template", shape = "box"]
		"[root] data.vsphere_virtual_machine.dns_recursor_template (expand)" [label = "data.vsphere_virtual_machine.dns_recursor_template", shape = "box"]
		"[root] data.vsphere_virtual_machine.kubernetes_template (expand)" [label = "data.vsphere_virtual_machine.kubernetes_template", shape = "box"]
		"[root] data.vsphere_virtual_machine.router_template (expand)" [label = "data.vsphere_virtual_machine.router_template", shape = "box"]
		"[root] data.vsphere_virtual_machine.vault_template (expand)" [label = "data.vsphere_virtual_machine.vault_template", shape = "box"]
		"[root] kubernetes_manifest.metallb_bgpadvertisment (expand)" [label = "kubernetes_manifest.metallb_bgpadvertisment", shape = "box"]
		"[root] kubernetes_manifest.metallb_bgppeers_0 (expand)" [label = "kubernetes_manifest.metallb_bgppeers_0", shape = "box"]
		"[root] kubernetes_manifest.metallb_bgppeers_1 (expand)" [label = "kubernetes_manifest.metallb_bgppeers_1", shape = "box"]
		"[root] kubernetes_manifest.metallb_ipaddresspool (expand)" [label = "kubernetes_manifest.metallb_ipaddresspool", shape = "box"]
		"[root] local_file.kube1_root_ca_pem_bundle (expand)" [label = "local_file.kube1_root_ca_pem_bundle", shape = "box"]
		"[root] local_file.root_ca_pem_bundle (expand)" [label = "local_file.root_ca_pem_bundle", shape = "box"]
		"[root] null_resource.metallb_controller_wait (expand)" [label = "null_resource.metallb_controller_wait", shape = "box"]
		"[root] provider[\"\"]" [label = "provider[\"\"]", shape = "diamond"]
		"[root] provider[\"\"]" [label = "provider[\"\"]", shape = "diamond"]
		"[root] provider[\"\"]" [label = "provider[\"\"]", shape = "diamond"]
		"[root] provider[\"\"]" [label = "provider[\"\"]", shape = "diamond"]
		"[root] provider[\"\"]" [label = "provider[\"\"]", shape = "diamond"]
		"[root] provider[\"\"]" [label = "provider[\"\"]", shape = "diamond"]
		"[root] provider[\"\"]" [label = "provider[\"\"]", shape = "diamond"]
		"[root] tls_private_key.kube1_root_ca_key (expand)" [label = "tls_private_key.kube1_root_ca_key", shape = "box"]
		"[root] tls_private_key.root_ca_key (expand)" [label = "tls_private_key.root_ca_key", shape = "box"]
		"[root] tls_self_signed_cert.kube1_root_ca (expand)" [label = "tls_self_signed_cert.kube1_root_ca", shape = "box"]
		"[root] tls_self_signed_cert.root_ca (expand)" [label = "tls_self_signed_cert.root_ca", shape = "box"]
		"[root] var.bootstrap_expect" [label = "var.bootstrap_expect", shape = "note"]
		"[root] var.consul_datacenter" [label = "var.consul_datacenter", shape = "note"]
		"[root] var.consul_domain" [label = "var.consul_domain", shape = "note"]
		"[root] var.consul_keygen" [label = "var.consul_keygen", shape = "note"]
		"[root] var.consul_servers" [label = "var.consul_servers", shape = "note"]
		"[root] var.consul_template" [label = "var.consul_template", shape = "note"]
		"[root] var.dns" [label = "var.dns", shape = "note"]
		"[root] var.dns_recursor_ips" [label = "var.dns_recursor_ips", shape = "note"]
		"[root] var.dns_recursor_template" [label = "var.dns_recursor_template", shape = "note"]
		"[root] var.dns_servers" [label = "var.dns_servers", shape = "note"]
		"[root]" [label = "", shape = "note"]
		"[root] var.kube_certificate_key" [label = "var.kube_certificate_key", shape = "note"]
		"[root] var.kube_join_token" [label = "var.kube_join_token", shape = "note"]
		"[root] var.kubernetes_servers" [label = "var.kubernetes_servers", shape = "note"]
		"[root] var.kubernetes_template" [label = "var.kubernetes_template", shape = "note"]
		"[root] var.public_gw" [label = "var.public_gw", shape = "note"]
		"[root] var.router_instances" [label = "var.router_instances", shape = "note"]
		"[root] var.router_template" [label = "var.router_template", shape = "note"]
		"[root] var.targetnode" [label = "var.targetnode", shape = "note"]
		"[root] var.ui_config" [label = "var.ui_config", shape = "note"]
		"[root] var.vault_servers" [label = "var.vault_servers", shape = "note"]
		"[root] var.vault_template" [label = "var.vault_template", shape = "note"]
		"[root] var.vault_transit" [label = "var.vault_transit", shape = "note"]
		"[root] var.virtual_router" [label = "var.virtual_router", shape = "note"]
		"[root] var.vsphere_cluster_name" [label = "var.vsphere_cluster_name", shape = "note"]
		"[root] var.vsphere_datastore_name" [label = "var.vsphere_datastore_name", shape = "note"]
		"[root] var.vsphere_folder_name" [label = "var.vsphere_folder_name", shape = "note"]
		"[root] var.vsphere_network_name" [label = "var.vsphere_network_name", shape = "note"]
		"[root] vault_mount.kube1_etcd_ca (expand)" [label = "vault_mount.kube1_etcd_ca", shape = "box"]
		"[root] vault_mount.kube1_root (expand)" [label = "vault_mount.kube1_root", shape = "box"]
		"[root] vault_mount.pki_int (expand)" [label = "vault_mount.pki_int", shape = "box"]
		"[root] vault_mount.root (expand)" [label = "vault_mount.root", shape = "box"]
		"[root] vault_pki_secret_backend_cert.etcd_apiserver_client (expand)" [label = "vault_pki_secret_backend_cert.etcd_apiserver_client", shape = "box"]
		"[root] vault_pki_secret_backend_cert.etcd_healthcheck_client (expand)" [label = "vault_pki_secret_backend_cert.etcd_healthcheck_client", shape = "box"]
		"[root] vault_pki_secret_backend_cert.etcd_peer (expand)" [label = "vault_pki_secret_backend_cert.etcd_peer", shape = "box"]
		"[root] vault_pki_secret_backend_cert.etcd_server (expand)" [label = "vault_pki_secret_backend_cert.etcd_server", shape = "box"]
		"[root] vault_pki_secret_backend_config_ca.ca_config (expand)" [label = "vault_pki_secret_backend_config_ca.ca_config", shape = "box"]
		"[root] vault_pki_secret_backend_config_ca.kube1_ca_config (expand)" [label = "vault_pki_secret_backend_config_ca.kube1_ca_config", shape = "box"]
		"[root] vault_pki_secret_backend_config_urls.config_urls (expand)" [label = "vault_pki_secret_backend_config_urls.config_urls", shape = "box"]
		"[root] vault_pki_secret_backend_config_urls.kube1_config_urls (expand)" [label = "vault_pki_secret_backend_config_urls.kube1_config_urls", shape = "box"]
		"[root] vault_pki_secret_backend_intermediate_cert_request.ica1 (expand)" [label = "vault_pki_secret_backend_intermediate_cert_request.ica1", shape = "box"]
		"[root] vault_pki_secret_backend_intermediate_cert_request.kube1_etcd_ca (expand)" [label = "vault_pki_secret_backend_intermediate_cert_request.kube1_etcd_ca", shape = "box"]
		"[root] vault_pki_secret_backend_intermediate_set_signed.ica1 (expand)" [label = "vault_pki_secret_backend_intermediate_set_signed.ica1", shape = "box"]
		"[root] vault_pki_secret_backend_intermediate_set_signed.kube1_etcd_ca (expand)" [label = "vault_pki_secret_backend_intermediate_set_signed.kube1_etcd_ca", shape = "box"]
		"[root] vault_pki_secret_backend_role.etcd_ca (expand)" [label = "vault_pki_secret_backend_role.etcd_ca", shape = "box"]
		"[root] vault_pki_secret_backend_role.etcd_ca_clients (expand)" [label = "vault_pki_secret_backend_role.etcd_ca_clients", shape = "box"]
		"[root] vault_pki_secret_backend_role.kubernetes (expand)" [label = "vault_pki_secret_backend_role.kubernetes", shape = "box"]
		"[root] vault_pki_secret_backend_role.server_role (expand)" [label = "vault_pki_secret_backend_role.server_role", shape = "box"]
		"[root] vault_pki_secret_backend_root_sign_intermediate.ica1 (expand)" [label = "vault_pki_secret_backend_root_sign_intermediate.ica1", shape = "box"]
		"[root] vault_pki_secret_backend_root_sign_intermediate.kube1_etcd_ca (expand)" [label = "vault_pki_secret_backend_root_sign_intermediate.kube1_etcd_ca", shape = "box"]
		"[root] vsphere_virtual_machine.etcd (expand)" [label = "vsphere_virtual_machine.etcd", shape = "box"]
		"[root] vsphere_virtual_machine.kube_controlplane_1 (expand)" [label = "vsphere_virtual_machine.kube_controlplane_1", shape = "box"]
		"[root] vsphere_virtual_machine.kube_controlplane_2 (expand)" [label = "vsphere_virtual_machine.kube_controlplane_2", shape = "box"]
		"[root] vsphere_virtual_machine.kube_workers (expand)" [label = "vsphere_virtual_machine.kube_workers", shape = "box"]
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] provider[\"\"]"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] var.bootstrap_expect"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] var.consul_datacenter"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] var.consul_domain"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] var.consul_keygen"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] var.consul_servers"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] var.dns_servers"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] var.kube_certificate_key"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] var.kube_join_token"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] var.ui_config"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] vault_pki_secret_backend_cert.etcd_apiserver_client (expand)"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] vault_pki_secret_backend_cert.etcd_healthcheck_client (expand)"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] vault_pki_secret_backend_cert.etcd_peer (expand)"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] vault_pki_secret_backend_cert.etcd_server (expand)"
		"[root] data.template_cloudinit_config.kubernetes (expand)" -> "[root] vault_pki_secret_backend_config_ca.kube1_ca_config (expand)"
		"[root] data.vsphere_compute_cluster.cluster (expand)" -> "[root] data.vsphere_datacenter.datacenter (expand)"
		"[root] data.vsphere_compute_cluster.cluster (expand)" -> "[root] var.vsphere_cluster_name"
		"[root] data.vsphere_datacenter.datacenter (expand)" -> "[root] provider[\"\"]"
		"[root] data.vsphere_datastore.datastore (expand)" -> "[root] data.vsphere_datacenter.datacenter (expand)"
		"[root] data.vsphere_datastore.datastore (expand)" -> "[root] var.vsphere_datastore_name"
		"[root] (expand)" -> "[root] data.vsphere_datacenter.datacenter (expand)"
		"[root] (expand)" -> "[root] var.vsphere_network_name"
		"[root] data.vsphere_virtual_machine.consul_template (expand)" -> "[root] data.vsphere_datacenter.datacenter (expand)"
		"[root] data.vsphere_virtual_machine.consul_template (expand)" -> "[root] var.consul_template"
		"[root] data.vsphere_virtual_machine.dns_recursor_template (expand)" -> "[root] data.vsphere_datacenter.datacenter (expand)"
		"[root] data.vsphere_virtual_machine.dns_recursor_template (expand)" -> "[root] var.dns_recursor_template"
		"[root] data.vsphere_virtual_machine.kubernetes_template (expand)" -> "[root] data.vsphere_datacenter.datacenter (expand)"
		"[root] data.vsphere_virtual_machine.kubernetes_template (expand)" -> "[root] var.kubernetes_template"
		"[root] data.vsphere_virtual_machine.router_template (expand)" -> "[root] data.vsphere_datacenter.datacenter (expand)"
		"[root] data.vsphere_virtual_machine.router_template (expand)" -> "[root] var.router_template"
		"[root] data.vsphere_virtual_machine.vault_template (expand)" -> "[root] data.vsphere_datacenter.datacenter (expand)"
		"[root] data.vsphere_virtual_machine.vault_template (expand)" -> "[root] var.vault_template"
		"[root] kubernetes_manifest.metallb_bgpadvertisment (expand)" -> "[root] null_resource.metallb_controller_wait (expand)"
		"[root] kubernetes_manifest.metallb_bgpadvertisment (expand)" -> "[root] provider[\"\"]"
		"[root] kubernetes_manifest.metallb_bgppeers_0 (expand)" -> "[root] null_resource.metallb_controller_wait (expand)"
		"[root] kubernetes_manifest.metallb_bgppeers_0 (expand)" -> "[root] provider[\"\"]"
		"[root] kubernetes_manifest.metallb_bgppeers_0 (expand)" -> "[root] var.router_instances"
		"[root] kubernetes_manifest.metallb_bgppeers_1 (expand)" -> "[root] null_resource.metallb_controller_wait (expand)"
		"[root] kubernetes_manifest.metallb_bgppeers_1 (expand)" -> "[root] provider[\"\"]"
		"[root] kubernetes_manifest.metallb_bgppeers_1 (expand)" -> "[root] var.router_instances"
		"[root] kubernetes_manifest.metallb_ipaddresspool (expand)" -> "[root] null_resource.metallb_controller_wait (expand)"
		"[root] kubernetes_manifest.metallb_ipaddresspool (expand)" -> "[root] provider[\"\"]"
		"[root] local_file.kube1_root_ca_pem_bundle (expand)" -> "[root] provider[\"\"]"
		"[root] local_file.kube1_root_ca_pem_bundle (expand)" -> "[root] tls_private_key.kube1_root_ca_key (expand)"
		"[root] local_file.kube1_root_ca_pem_bundle (expand)" -> "[root] tls_self_signed_cert.kube1_root_ca (expand)"
		"[root] local_file.kube1_root_ca_pem_bundle (expand)" -> "[root] tls_self_signed_cert.root_ca (expand)"
		"[root] local_file.root_ca_pem_bundle (expand)" -> "[root] provider[\"\"]"
		"[root] local_file.root_ca_pem_bundle (expand)" -> "[root] tls_self_signed_cert.root_ca (expand)"
		"[root] null_resource.metallb_controller_wait (expand)" -> "[root] provider[\"\"]"
		"[root] null_resource.metallb_controller_wait (expand)" -> "[root] vsphere_virtual_machine.kube_workers (expand)"
		"[root] provider[\"\"] (close)" -> "[root] kubernetes_manifest.metallb_bgpadvertisment (expand)"
		"[root] provider[\"\"] (close)" -> "[root] kubernetes_manifest.metallb_bgppeers_0 (expand)"
		"[root] provider[\"\"] (close)" -> "[root] kubernetes_manifest.metallb_bgppeers_1 (expand)"
		"[root] provider[\"\"] (close)" -> "[root] kubernetes_manifest.metallb_ipaddresspool (expand)"
		"[root] provider[\"\"] (close)" -> "[root] local_file.kube1_root_ca_pem_bundle (expand)"
		"[root] provider[\"\"] (close)" -> "[root] local_file.root_ca_pem_bundle (expand)"
		"[root] provider[\"\"] (close)" -> "[root] null_resource.metallb_controller_wait (expand)"
		"[root] provider[\"\"] (close)" -> "[root] data.template_cloudinit_config.kubernetes (expand)"
		"[root] provider[\"\"] (close)" -> "[root] tls_private_key.kube1_root_ca_key (expand)"
		"[root] provider[\"\"] (close)" -> "[root] tls_self_signed_cert.kube1_root_ca (expand)"
		"[root] provider[\"\"] (close)" -> "[root] tls_self_signed_cert.root_ca (expand)"
		"[root] provider[\"\"] (close)" -> "[root] vault_pki_secret_backend_cert.etcd_apiserver_client (expand)"
		"[root] provider[\"\"] (close)" -> "[root] vault_pki_secret_backend_cert.etcd_healthcheck_client (expand)"
		"[root] provider[\"\"] (close)" -> "[root] vault_pki_secret_backend_cert.etcd_peer (expand)"
		"[root] provider[\"\"] (close)" -> "[root] vault_pki_secret_backend_cert.etcd_server (expand)"
		"[root] provider[\"\"] (close)" -> "[root] vault_pki_secret_backend_config_urls.kube1_config_urls (expand)"
		"[root] provider[\"\"] (close)" -> "[root] vault_pki_secret_backend_intermediate_set_signed.ica1 (expand)"
		"[root] provider[\"\"] (close)" -> "[root] vault_pki_secret_backend_role.kubernetes (expand)"
		"[root] provider[\"\"] (close)" -> "[root] vault_pki_secret_backend_role.server_role (expand)"
		"[root] provider[\"\"] (close)" -> "[root] data.vsphere_virtual_machine.consul_template (expand)"
		"[root] provider[\"\"] (close)" -> "[root] data.vsphere_virtual_machine.dns_recursor_template (expand)"
		"[root] provider[\"\"] (close)" -> "[root] data.vsphere_virtual_machine.router_template (expand)"
		"[root] provider[\"\"] (close)" -> "[root] data.vsphere_virtual_machine.vault_template (expand)"
		"[root] provider[\"\"] (close)" -> "[root] vsphere_virtual_machine.kube_workers (expand)"
		"[root] root" -> "[root] provider[\"\"] (close)"
		"[root] root" -> "[root] provider[\"\"] (close)"
		"[root] root" -> "[root] provider[\"\"] (close)"
		"[root] root" -> "[root] provider[\"\"] (close)"
		"[root] root" -> "[root] provider[\"\"] (close)"
		"[root] root" -> "[root] provider[\"\"] (close)"
		"[root] root" -> "[root] provider[\"\"] (close)"
		"[root] root" -> "[root] var.dns_recursor_ips"
		"[root] root" -> "[root] var.public_gw"
		"[root] root" -> "[root] var.targetnode"
		"[root] root" -> "[root] var.vault_servers"
		"[root] root" -> "[root] var.vault_transit"
		"[root] root" -> "[root] var.virtual_router"
		"[root] tls_private_key.kube1_root_ca_key (expand)" -> "[root] provider[\"\"]"
		"[root] tls_private_key.root_ca_key (expand)" -> "[root] provider[\"\"]"
		"[root] tls_self_signed_cert.kube1_root_ca (expand)" -> "[root] tls_private_key.root_ca_key (expand)"
		"[root] tls_self_signed_cert.root_ca (expand)" -> "[root] tls_private_key.root_ca_key (expand)"
		"[root] vault_mount.kube1_etcd_ca (expand)" -> "[root] vault_pki_secret_backend_config_ca.ca_config (expand)"
		"[root] vault_mount.kube1_root (expand)" -> "[root] provider[\"\"]"
		"[root] vault_mount.pki_int (expand)" -> "[root] provider[\"\"]"
		"[root] vault_mount.root (expand)" -> "[root] provider[\"\"]"
		"[root] vault_pki_secret_backend_cert.etcd_apiserver_client (expand)" -> "[root] var.kubernetes_servers"
		"[root] vault_pki_secret_backend_cert.etcd_apiserver_client (expand)" -> "[root] vault_pki_secret_backend_role.etcd_ca_clients (expand)"
		"[root] vault_pki_secret_backend_cert.etcd_healthcheck_client (expand)" -> "[root] var.kubernetes_servers"
		"[root] vault_pki_secret_backend_cert.etcd_healthcheck_client (expand)" -> "[root] vault_pki_secret_backend_role.etcd_ca_clients (expand)"
		"[root] vault_pki_secret_backend_cert.etcd_peer (expand)" -> "[root] var.kubernetes_servers"
		"[root] vault_pki_secret_backend_cert.etcd_peer (expand)" -> "[root] vault_pki_secret_backend_role.etcd_ca (expand)"
		"[root] vault_pki_secret_backend_cert.etcd_server (expand)" -> "[root] var.kubernetes_servers"
		"[root] vault_pki_secret_backend_cert.etcd_server (expand)" -> "[root] vault_pki_secret_backend_role.etcd_ca (expand)"
		"[root] vault_pki_secret_backend_config_ca.ca_config (expand)" -> "[root] local_file.root_ca_pem_bundle (expand)"
		"[root] vault_pki_secret_backend_config_ca.ca_config (expand)" -> "[root] vault_mount.root (expand)"
		"[root] vault_pki_secret_backend_config_ca.kube1_ca_config (expand)" -> "[root] local_file.kube1_root_ca_pem_bundle (expand)"
		"[root] vault_pki_secret_backend_config_ca.kube1_ca_config (expand)" -> "[root] vault_mount.kube1_root (expand)"
		"[root] vault_pki_secret_backend_config_urls.config_urls (expand)" -> "[root] vault_pki_secret_backend_config_ca.ca_config (expand)"
		"[root] vault_pki_secret_backend_config_urls.kube1_config_urls (expand)" -> "[root] vault_pki_secret_backend_config_ca.kube1_ca_config (expand)"
		"[root] vault_pki_secret_backend_intermediate_cert_request.ica1 (expand)" -> "[root] vault_mount.pki_int (expand)"
		"[root] vault_pki_secret_backend_intermediate_cert_request.kube1_etcd_ca (expand)" -> "[root] vault_mount.kube1_etcd_ca (expand)"
		"[root] vault_pki_secret_backend_intermediate_set_signed.ica1 (expand)" -> "[root] vault_pki_secret_backend_root_sign_intermediate.ica1 (expand)"
		"[root] vault_pki_secret_backend_intermediate_set_signed.kube1_etcd_ca (expand)" -> "[root] vault_pki_secret_backend_root_sign_intermediate.kube1_etcd_ca (expand)"
		"[root] vault_pki_secret_backend_role.etcd_ca (expand)" -> "[root] vault_pki_secret_backend_intermediate_set_signed.kube1_etcd_ca (expand)"
		"[root] vault_pki_secret_backend_role.etcd_ca_clients (expand)" -> "[root] vault_pki_secret_backend_intermediate_set_signed.kube1_etcd_ca (expand)"
		"[root] vault_pki_secret_backend_role.kubernetes (expand)" -> "[root] vault_mount.pki_int (expand)"
		"[root] vault_pki_secret_backend_role.kubernetes (expand)" -> "[root] vault_pki_secret_backend_config_urls.config_urls (expand)"
		"[root] vault_pki_secret_backend_role.server_role (expand)" -> "[root] vault_mount.pki_int (expand)"
		"[root] vault_pki_secret_backend_role.server_role (expand)" -> "[root] vault_pki_secret_backend_config_urls.config_urls (expand)"
		"[root] vault_pki_secret_backend_root_sign_intermediate.ica1 (expand)" -> "[root] vault_mount.root (expand)"
		"[root] vault_pki_secret_backend_root_sign_intermediate.ica1 (expand)" -> "[root] vault_pki_secret_backend_intermediate_cert_request.ica1 (expand)"
		"[root] vault_pki_secret_backend_root_sign_intermediate.kube1_etcd_ca (expand)" -> "[root] vault_pki_secret_backend_intermediate_cert_request.kube1_etcd_ca (expand)"
		"[root] vsphere_virtual_machine.etcd (expand)" -> "[root] data.template_cloudinit_config.kubernetes (expand)"
		"[root] vsphere_virtual_machine.etcd (expand)" -> "[root] data.vsphere_compute_cluster.cluster (expand)"
		"[root] vsphere_virtual_machine.etcd (expand)" -> "[root] data.vsphere_datastore.datastore (expand)"
		"[root] vsphere_virtual_machine.etcd (expand)" -> "[root] (expand)"
		"[root] vsphere_virtual_machine.etcd (expand)" -> "[root] data.vsphere_virtual_machine.kubernetes_template (expand)"
		"[root] vsphere_virtual_machine.etcd (expand)" -> "[root] var.dns"
		"[root] vsphere_virtual_machine.etcd (expand)" -> "[root]"
		"[root] vsphere_virtual_machine.etcd (expand)" -> "[root] var.vsphere_folder_name"
		"[root] vsphere_virtual_machine.kube_controlplane_1 (expand)" -> "[root] vsphere_virtual_machine.etcd (expand)"
		"[root] vsphere_virtual_machine.kube_controlplane_2 (expand)" -> "[root] vsphere_virtual_machine.kube_controlplane_1 (expand)"
		"[root] vsphere_virtual_machine.kube_workers (expand)" -> "[root] vsphere_virtual_machine.kube_controlplane_2 (expand)"
	}
}

Exporting it to svg makes things somewhat clearer, but I still don’t follow the logic. I’m attaching the picture of the relevant part:

The kubernetes_manifest resources depend on the kubernetes provider, but another kubernetes provider node, labelled (close), depends on these resources, and I’m guessing that’s the reason why they start so soon.

Can this be related to the fact that I’m copying the state from the consul/vault stage, where I’m also pinning the version of the kubernetes provider? The reason I’m doing this is that I cannot define required_providers twice, so I define it directly there and then copy the terraform state.

Although, when not copying the terraform state at all, I end up in the same situation.
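For reference, the provider pinning in question is the standard required_providers block, which can only appear once per module (the version constraint here is an assumption):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0" # hypothetical pin
    }
  }
}
```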

There’s something else that might be going on: the kubernetes resources connect to the kubernetes API anyway, just to check that the cluster is alive, without necessarily trying to apply what I’ve defined there. That means I cannot use them unless a cluster is already available, which of course there isn’t, because I’m provisioning it before this.

And this confirms it:

This resource requires API access during planning time. This means the cluster has to be accessible at plan time and thus cannot be created in the same apply operation. We recommend only using this resource for custom resources or resources not yet fully supported by the provider.

I can’t believe this :slight_smile:

If you want to deploy things into a Kubernetes cluster and you also want to create the cluster itself, you’d normally have two separate root modules. Then you’d run the first root module to create the cluster, followed by the other root module to deploy the applications.
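A sketch of that two-stage workflow (the directory names are assumptions):

```shell
# First root module: provision the cluster itself.
cd cluster/
terraform init && terraform apply

# Second root module: deploy workloads. By now the API server is reachable,
# so kubernetes_manifest can fetch schemas at plan time.
cd ../workloads/
terraform init && terraform apply
```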

Yeah, I get that. The restriction, though, doesn’t make a lot of sense, and it’s yet another tough limitation of terraform. Because, again, it expects a kubernetes cluster in the cloud that is readily available.

The point was exactly that: I needed to add a core component, which is part of the whole cluster provisioning. There’s no logic for me in creating yet another config (state) for a few CRDs which are strictly bound to this provisioning stage.

The result is that I need to fragment the automation even more and to generally turn to even more little hacks.

I get it. You’re annoyed, what you thought was an easy problem turned out to be a hard problem.

But, you’re letting your annoyance provoke you into verbally lashing out.

I opened up the code of the kubernetes_manifest resource, and … it’s complicated. I make no claim to fully understand it, but I seriously doubt the authors wrote all that complexity just to inconvenience you.

It looks to me like:

  • Terraform requires a plan in advance of the intended final state
  • Kubernetes requires type information from the server to combine with an input manifest to fully determine the final state

These requirements of different products come together to force the design decision that you’re claiming doesn’t make sense.

Right. It still doesn’t make sense from the perspective of the actual user, the one who uses terraform without being able to understand all of its internal workings, but who has a reasonably good understanding of how terraform behaves when using it in practice.

That said, I would strongly question that “it makes no sense” counts as “lashing out verbally” :slight_smile: I am annoyed sure, and I still think I’m right to be so. But it is what it is.

Hi @lethargosapatheia,

The kubernetes_manifest resource type in the hashicorp/kubernetes provider is unusual in that the provider itself has no awareness of the schemas available in the cluster; this resource type aims to be compatible with whatever schemas are available in your particular cluster.

But to achieve that requires that the provider access the cluster during the planning step, because it’s not possible to properly plan without knowing what the schema ought to be.

If you use other resource types in this provider whose schemas are hard-coded into the provider then this network access during planning isn’t needed, but in that case you can only work with object types the provider already knows about.

In this case it isn’t really clear how to have it both ways: either the provider hard-codes a schema but it’s rigid in what it can support, or it fetches the schema dynamically but therefore needs the cluster to already be running to understand the meaning of the configuration.

As others have said the most typical answer is to separate the underlying cluster provisioning from the management of objects in the cluster so that this isn’t an issue. But if you want to have these all in the same configuration then there is a different answer that relies on the fact that this problem only arises during initial bootstrapping and there’s no problem for subsequent updates when the cluster is already running:

terraform apply -target=null_resource.metallb_controller_wait

If you use the above then you give Terraform permission to ignore anything that isn’t needed to get that one resource running, and so it should be able to plan getting the cluster created.

Once that partial plan is applied you can then just run terraform apply normally in the future as long as your plans don’t include replacing the Kubernetes cluster.
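In full, the bootstrap sequence would look something like this (the resource address is from the thread; the rest is routine):

```shell
# Step 1: partial apply that stops once the cluster and the wait resource
# exist, skipping the kubernetes_manifest resources that need plan-time
# API access.
terraform apply -target=null_resource.metallb_controller_wait

# Step 2: now that the cluster is reachable, a normal apply can plan the
# kubernetes_manifest resources too.
terraform apply
```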


I appreciate your thorough explanations. In the end I decided to create a separate configuration (with its own state) for this, as it’s the cleanest way, I guess, despite the fragmentation. Otherwise it makes things more confusing.