I configured an AKS cluster to use a system-assigned managed identity to access other Azure resources:
```hcl
resource "azurerm_subnet" "aks" {
  name                 = var.aks_subnet_name
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = module.network.vnet_name
  address_prefix       = var.aks_subnet
  service_endpoints    = ["Microsoft.KeyVault"]
}

resource "azurerm_kubernetes_cluster" "aks_main" {
  name                = module.aks_name.result
  depends_on          = [azurerm_subnet.aks]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  dns_prefix          = "aks-${local.name}"
  kubernetes_version  = var.k8s_version

  addon_profile {
    oms_agent {
      # For monitoring containers
      enabled                    = var.addons.oms_agent
      log_analytics_workspace_id = azurerm_log_analytics_workspace.example.id
    }

    kube_dashboard {
      enabled = true
    }

    azure_policy {
      # If we want to enforce policy definitions in the future.
      # Check requirements: https://docs.microsoft.com/en-ie/azure/governance/policy/concepts/policy-for-kubernetes
      enabled = var.addons.azure_policy
    }
  }

  default_node_pool {
    name                 = "default"
    orchestrator_version = var.k8s_version
    node_count           = var.default_node_pool.node_count
    vm_size              = var.default_node_pool.vm_size
    type                 = "VirtualMachineScaleSets"
    availability_zones   = var.default_node_pool.zones
    # availability_zones = ["1", "2", "3"]
    max_pods              = 250
    os_disk_size_gb       = 128
    vnet_subnet_id        = azurerm_subnet.aks.id
    node_labels           = var.default_node_pool.labels
    enable_auto_scaling   = var.default_node_pool.cluster_auto_scaling
    min_count             = var.default_node_pool.cluster_auto_scaling_min_count
    max_count             = var.default_node_pool.cluster_auto_scaling_max_count
    enable_node_public_ip = false
  }

  # Configuring AKS to use a system-assigned managed identity
  identity {
    type = "SystemAssigned"
  }

  network_profile {
    load_balancer_sku = "standard"
    outbound_type     = "loadBalancer"
    network_plugin    = "azure"
    # For non-Azure network policies, see:
    # https://azure.microsoft.com/nl-nl/blog/integrating-azure-cni-and-calico-a-technical-deep-dive/
    network_policy     = "calico"
    dns_service_ip     = "10.0.0.10"
    docker_bridge_cidr = "172.17.0.1/16"
    service_cidr       = "10.0.0.0/16"
  }

  lifecycle {
    ignore_changes = [
      default_node_pool,
      windows_profile,
    ]
  }
}
```
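Once the cluster exists, the ids of its system-assigned identity can be surfaced as outputs to verify what was provisioned. This is just a sketch; the output names are mine:

```hcl
# Hypothetical outputs: expose the ids of the cluster's managed identities.
# The control-plane identity (what "SystemAssigned" creates):
output "aks_identity_principal_id" {
  value = azurerm_kubernetes_cluster.aks_main.identity[0].principal_id
}

# The kubelet (node pool) identity, for comparison:
output "aks_kubelet_object_id" {
  value = azurerm_kubernetes_cluster.aks_main.kubelet_identity[0].object_id
}
```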
I want to use that managed identity (the service principal created by the AKS cluster resource above) to grant it roles such as Network Contributor over a subnet:
```hcl
resource "azurerm_role_assignment" "aks_subnet" {
  # Give the AKS system-assigned identity access to the AKS subnet
  # by assigning it the Network Contributor role.
  scope                = azurerm_subnet.aks.id
  role_definition_name = "Network Contributor"
  principal_id         = azurerm_kubernetes_cluster.aks_main.identity[0].principal_id

  # principal_id                     = azurerm_kubernetes_cluster.aks_main.kubelet_identity[0].object_id
  # principal_id                     = data.azurerm_user_assigned_identity.test.principal_id
  # skip_service_principal_aad_check = true
}
```
But the output I got after terraform apply is:
```
Error: authorization.RoleAssignmentsClient#Create: Failure responding
to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error.
Status=403 Code="AuthorizationFailed"
Message="The client 'afd5bd09-c294-4597-9c90-e1ee293e5f3a' with object id
'afd5bd09-c294-4597-9c90-e1ee293e5f3a' does not have authorization
to perform action 'Microsoft.Authorization/roleAssignments/write'
over scope '/subscriptions/77dfff95-fbd3-4a15-b97a-b7182939e61a/resourceGroups/rhd-spec-prod-main-6loe4lpkr0hd8/providers/Microsoft.Network/virtualNetworks/rhd-spec-prod-main-wdaht6cn7s3s8/subnets/aks-subnet/providers/Microsoft.Authorization/roleAssignments/8733864c-a5f7-a6a9-a61d-6393989f0ad1'
or the scope is invalid. If access was recently granted, please refresh your credentials."

  on aks.tf line 23, in resource "azurerm_role_assignment" "aks_subnet":
  23: resource "azurerm_role_assignment" "aks_subnet" {
```
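If I understand the message correctly, the client object id it complains about is the credential running `terraform apply` rather than the AKS identity. To confirm which identity that is, I believe it can be inspected with the `azurerm_client_config` data source (a sketch; the output name is mine):

```hcl
# Sketch: the object_id reported here should match the one in the 403 message,
# i.e. the principal Terraform authenticates as.
data "azurerm_client_config" "current" {}

output "terraform_client_object_id" {
  value = data.azurerm_client_config.current.object_id
}
```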
It seems the service principal that is being created does not have enough privileges to perform a role assignment over the subnet, or maybe I have the `scope`
attribute wrong: I am passing the AKS subnet id there.
What am I doing wrong?