Role assignments for AKS node resource group get re-created

I need to assign different roles on the Node resource group created automatically by AKS.
I currently do it this way:

```hcl
data "azurerm_resource_group" "rg_workers" {
  name = azurerm_kubernetes_cluster.aks.node_resource_group
}

resource "azurerm_role_assignment" "cilium_operator_role" {
  scope                            = data.azurerm_resource_group.rg_workers.id
  role_definition_name             = "cilium-operator-role"
  principal_id                     = azurerm_kubernetes_cluster.aks.kubelet_identity.0.object_id
  skip_service_principal_aad_check = true
}
```

The main issue with this approach is that the ID of the Node resource group is only known after apply, since it is fetched through the data source. As a side effect, the role assignment gets re-created on every Terraform run in which the AKS cluster is modified. This is a huge caveat for AKS upgrades: the role is needed by the network plugin I'm using (Cilium), but it only gets re-assigned AFTER the AKS cluster is updated, so during an upgrade the new nodes come up in a NotReady state until someone adds the role back.
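One workaround I've been considering (untested sketch): build the scope string myself from the subscription ID and the `node_resource_group` name, instead of reading it through the resource-group data source, so the scope no longer depends on an apply-time lookup. This assumes `node_resource_group` is known at plan time, which I believe is only the case when it is set explicitly on the cluster resource:

```hcl
# Untested sketch: construct the node resource group scope directly,
# avoiding the data source whose id is only known after apply.
data "azurerm_subscription" "current" {}

resource "azurerm_role_assignment" "cilium_operator_role" {
  # Resulting scope: /subscriptions/<sub-id>/resourceGroups/<node-rg-name>
  scope                            = "${data.azurerm_subscription.current.id}/resourceGroups/${azurerm_kubernetes_cluster.aks.node_resource_group}"
  role_definition_name             = "cilium-operator-role"
  principal_id                     = azurerm_kubernetes_cluster.aks.kubelet_identity.0.object_id
  skip_service_principal_aad_check = true
}
```

Not sure this fully solves the ordering problem, since `principal_id` still comes from the cluster's kubelet identity.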

Anybody got a better approach for role assignments on the Node resource group?