Cannot Deploy Private AKS Cluster onto Existing VNet

Key Issue

With private cluster enabled, there is no documentation available covering the required networking or VNet configuration. I suspect the networking/VNet configuration is insufficient and causes the error below, and I have found no way to resolve it.
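For context, the cluster configuration below attaches to the existing VNet via a subnet data source (`data.azurerm_subnet.subnet.id`). A minimal sketch of what that lookup might look like — the subnet and VNet names here are placeholders, not taken from the actual environment:

```hcl
# Hypothetical lookup of the pre-existing subnet referenced as
# data.azurerm_subnet.subnet in the cluster resource below.
# "aks-subnet" and "existing-vnet" are placeholder names.
data "azurerm_subnet" "subnet" {
  name                 = "aks-subnet"
  virtual_network_name = "existing-vnet"
  resource_group_name  = var.resource_group_name
}
```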

Terraform (and AzureRM Provider) Version

Terraform version: v0.12.29
AzureRM version: 2.22

Affected Resource(s)

azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "aks" {
  name                    = local.aks_full_name
  location                = var.location
  resource_group_name     = var.resource_group_name
  dns_prefix              = var.dns_prefix
  private_cluster_enabled = true
  kubernetes_version      = "1.16.10"


  default_node_pool {
    name                  = "default"
    node_count            = var.node_count
    type                  = "VirtualMachineScaleSets"
    vnet_subnet_id        = data.azurerm_subnet.subnet.id
    vm_size               = var.vm_size 
    enable_node_public_ip = false
    os_disk_size_gb       = var.os_disk_size_gb
    
    # enable_auto_scaling = var.enable_auto_scaling
    # max_count = var.max_node_count
    # min_count = 1
  }

  network_profile {
    network_plugin     = "kubenet"
    service_cidr       = "10.1.0.0/16"
    docker_bridge_cidr = "172.17.0.1/16"
    dns_service_ip     = "10.1.0.10"
    outbound_type      = "loadBalancer"
    load_balancer_sku  = "standard"

    load_balancer_profile {
      managed_outbound_ip_count = 1
    }
  }
  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }
  addon_profile {
    http_application_routing {
      enabled = false
    }

    kube_dashboard {
      enabled = true
    }

    oms_agent {
      enabled                    = true
      log_analytics_workspace_id = data.azurerm_log_analytics_workspace.prodla.id
    }
  }

  role_based_access_control {
    enabled = true
  }
}
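As an aside, when deploying onto an existing VNet with kubenet, a frequently reported cause of `ControlPlaneAddOnsNotReady`-style failures is the service principal lacking permissions on the subnet, since AKS must manage the kubenet route table there. A sketch of granting that permission, assuming a `var.client_object_id` variable holding the service principal's object ID (not present in the original config):

```hcl
# Grant the cluster's service principal network rights on the existing
# subnet so AKS can create/associate the kubenet route table.
# var.client_object_id (the SP's object ID) is an assumed variable.
resource "azurerm_role_assignment" "aks_subnet" {
  scope                = data.azurerm_subnet.subnet.id
  role_definition_name = "Network Contributor"
  principal_id         = var.client_object_id
}
```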

Expected Behavior

Deploy a private AKS cluster onto the existing VNet (the VNet currently has no special configuration).

Actual Behavior

Error: waiting for creation of Managed Kubernetes Cluster "aks-jm-test3-wu2-development" (Resource Group "RG-DevTest-AKS"): Code="ControlPlaneAddOnsNotReady" Message="Pods not in Running status: kubernetes-dashboard,metrics-server,tunnelfront,coredns,coredns-autoscaler"

on …\aks\aks.tf line 5, in resource "azurerm_kubernetes_cluster" "aks":
5: resource "azurerm_kubernetes_cluster" "aks" {

Steps to Reproduce

terraform apply

Same here

Terraform version: v0.13.0
AzureRM version: 2.23.0

Creation of the NodePool timed out.

For me, this was caused by the resource group being in a location that did not support some of the resources. Moving my resource group to US East 2 resolved the issue.