In the current `azurerm_kubernetes_cluster` implementation, setting `private_cluster_enabled = true` automatically creates a private endpoint in the subnet referenced by `vnet_subnet_id` in the `default_node_pool` block.
In my scenario, I have multiple address spaces on my VNet, and I would like to create my AKS workers with Azure CNI on an address range that is not reachable from on-prem, while the private endpoint picks an IP from another address space, one that is reachable from on-prem through VPN/ExpressRoute.
This way I could have as many pods (= IP consumption) as I want on a specific range, while consuming only a limited number of IPs from the other address range.
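To illustrate, here is a minimal sketch of the layout I have in mind (resource names, CIDRs, and sizing are hypothetical):

```hcl
resource "azurerm_virtual_network" "example" {
  name                = "aks-vnet"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  # Two address spaces: one routable from on-prem, one not.
  address_space = ["10.0.0.0/16", "192.168.0.0/16"]
}

resource "azurerm_subnet" "nodes" {
  # Large range, NOT routable from on-prem: nodes/pods can consume IPs freely.
  name                 = "aks-nodes"
  virtual_network_name = azurerm_virtual_network.example.name
  resource_group_name  = azurerm_resource_group.example.name
  address_prefixes     = ["192.168.0.0/16"]
}

resource "azurerm_subnet" "endpoint" {
  # Small range, reachable from on-prem via VPN/ExpressRoute.
  name                 = "aks-endpoint"
  virtual_network_name = azurerm_virtual_network.example.name
  resource_group_name  = azurerm_resource_group.example.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_kubernetes_cluster" "example" {
  name                    = "private-aks"
  location                = azurerm_resource_group.example.location
  resource_group_name     = azurerm_resource_group.example.name
  dns_prefix              = "privateaks"
  private_cluster_enabled = true

  default_node_pool {
    name           = "default"
    node_count     = 1
    vm_size        = "Standard_DS2_v2"
    vnet_subnet_id = azurerm_subnet.nodes.id
    # Today the API server's private endpoint is created in this subnet.
    # What I want is an argument to place it in azurerm_subnet.endpoint
    # instead; as far as I can tell, no such argument exists.
  }

  network_profile {
    network_plugin = "azure"
  }

  identity {
    type = "SystemAssigned"
  }
}
```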
I don’t think this is possible with the current implementation, and I don’t really understand why. Could it be that this scenario has not been considered?
Thanks for your help.