Azurerm_hdinsight_kafka An argument named "min_tls_version" is not expected here

Hi

Can you please check the error below and suggest a solution?

#############################
on Modules/HDInsight/main.tf line 88, in resource "azurerm_hdinsight_kafka_cluster" "hdi_kafka_cluster":
  88: min_tls_version = "1.2"

An argument named "min_tls_version" is not expected here.

##[error]Bash exited with code '1'.
##[error]Bash wrote one or more lines to the standard error stream.
##[error]
Error: Unsupported argument
#############################

resource "azurerm_hdinsight_kafka_cluster" "hdi_kafka_cluster" {
   for_each  =  { for v in local.hd_insight_cluster : v.name  =>  v }     # create a temporary map (of maps) for for_each statement
   
   name                 =  each.value.name
   resource_group_name  =  each.value.resource_group
   location             =  each.value.location
   cluster_version      =  each.value.cluster_version
   tier                 =  each.value.tier
   min_tls_version      =  "1.2"
   
   component_version {
      kafka       =     each.value.component_version
   }
   
   gateway {
      enabled     =     each.value.gateway.enabled
      username    =     each.value.gateway.username
      password    =     var.cluster_kv_ksc_map["Standard"].secrets["hdi-gw-password"].value
   }
   
   storage_account_gen2 {
      is_default                    =  true
      filesystem_id                 =  local.sa_dl_g2_fs_ids[each.value.storage_account_gen2.sa_data_lake_gen2_fs_name]
      storage_resource_id           =  local.storage_account_ids[each.value.storage_account_gen2.storage_account_name]
      managed_identity_resource_id  =  local.user_msi_ids[each.value.storage_account_gen2.user_msi_name]
   }
   
   roles {
      head_node {
         vm_size              =  each.value.head_node.vm_size
         username             =  each.value.head_node.username
         ssh_keys             =  [ var.cluster_kv_ksc_map["Standard"].secrets["ssh-pub-key"].value ]
         virtual_network_id   =  local.vnet_ids[each.value.head_node.vnet_name]
         subnet_id            =  var.subnet_ids[each.value.head_node.snet_name]
      }
      
      worker_node {
         vm_size                    =  each.value.worker_node.vm_size
         username                   =  each.value.worker_node.username
         ssh_keys                   =  [ var.cluster_kv_ksc_map["Standard"].secrets["ssh-pub-key"].value ]
         virtual_network_id         =  local.vnet_ids[each.value.worker_node.vnet_name]
         subnet_id                  =  var.subnet_ids[each.value.worker_node.snet_name]
         target_instance_count      =  each.value.worker_node.target_instance_count
         number_of_disks_per_node   =  each.value.worker_node.number_of_disks_per_node
      }
      
      zookeeper_node {
         vm_size              =  each.value.zookeeper_node.vm_size
         username             =  each.value.zookeeper_node.username
         ssh_keys             =  [ var.cluster_kv_ksc_map["Standard"].secrets["ssh-pub-key"].value ]
         virtual_network_id   =  local.vnet_ids[each.value.zookeeper_node.vnet_name]
         subnet_id            =  var.subnet_ids[each.value.zookeeper_node.snet_name]
      }
   }
   
   monitor {
      log_analytics_workspace_id    =  var.log_analytics.workspace_id
      primary_key                   =  var.log_analytics.primary_shared_key
   }
   
   # prevent deletion of resource
   lifecycle {
   #   prevent_destroy = true
      ignore_changes = [
         cluster_version,
         component_version[0].kafka,
      ]
   }
   
   #min_tls_version      =  each.value.min_tls_version
   
   # TAGs
   tags  =  var.tags

   #
   depends_on             =  [module.dlf2_msi.user_msi,
                              module.sa_data_lake_gen2_fs.sa_data_lake_gen2_fs]
}

Hi @ramakote899,
would you kindly format all of your code properly using triple back-ticks?

Which provider version are you using?

Hi,
We are using the versions below.
Please check and suggest.

* provider version = "= 2.34.0"
* terraform version = 0.14.3
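
For reference, a pin like that is usually expressed in a `required_providers` block; a minimal sketch (the exact constraint strings are illustrative, not taken from this module):

```hcl
terraform {
  required_version = ">= 0.14.3"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      # "= 2.34.0" pins exactly this release; a constraint such as
      # "~> 2.34" would also allow newer 2.x releases.
      version = "= 2.34.0"
    }
  }
}
```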

##############################################################################
######################### HDInsight cluster ##################################
##############################################################################

```hcl
resource "azurerm_hdinsight_kafka_cluster" "hdi_kafka_cluster" {
   for_each  =  { for v in local.hd_insight_cluster : v.name  =>  v }     # create a temporary map (of maps) for for_each statement

   name                 =  each.value.name
   resource_group_name  =  each.value.resource_group
   location             =  each.value.location
   cluster_version      =  each.value.cluster_version
   tier                 =  each.value.tier
   min_tls_version      =  "1.2"

   component_version {
      kafka       =     each.value.component_version
   }

   gateway {
      enabled     =     each.value.gateway.enabled
      username    =     each.value.gateway.username
      password    =     var.cluster_kv_ksc_map["Standard"].secrets["hdi-gw-password"].value
   }

   storage_account_gen2 {
      is_default                    =  true
      filesystem_id                 =  local.sa_dl_g2_fs_ids[each.value.storage_account_gen2.sa_data_lake_gen2_fs_name]
      storage_resource_id           =  local.storage_account_ids[each.value.storage_account_gen2.storage_account_name]
      managed_identity_resource_id  =  local.user_msi_ids[each.value.storage_account_gen2.user_msi_name]
   }

   roles {
      head_node {
         vm_size              =  each.value.head_node.vm_size
         username             =  each.value.head_node.username
         ssh_keys             =  [ var.cluster_kv_ksc_map["Standard"].secrets["ssh-pub-key"].value ]
         virtual_network_id   =  local.vnet_ids[each.value.head_node.vnet_name]
         subnet_id            =  var.subnet_ids[each.value.head_node.snet_name]
      }

      worker_node {
         vm_size                    =  each.value.worker_node.vm_size
         username                   =  each.value.worker_node.username
         ssh_keys                   =  [ var.cluster_kv_ksc_map["Standard"].secrets["ssh-pub-key"].value ]
         virtual_network_id         =  local.vnet_ids[each.value.worker_node.vnet_name]
         subnet_id                  =  var.subnet_ids[each.value.worker_node.snet_name]
         target_instance_count      =  each.value.worker_node.target_instance_count
         number_of_disks_per_node   =  each.value.worker_node.number_of_disks_per_node
      }

      zookeeper_node {
         vm_size              =  each.value.zookeeper_node.vm_size
         username             =  each.value.zookeeper_node.username
         ssh_keys             =  [ var.cluster_kv_ksc_map["Standard"].secrets["ssh-pub-key"].value ]
         virtual_network_id   =  local.vnet_ids[each.value.zookeeper_node.vnet_name]
         subnet_id            =  var.subnet_ids[each.value.zookeeper_node.snet_name]
      }
   }

   monitor {
      log_analytics_workspace_id    =  var.log_analytics.workspace_id
      primary_key                   =  var.log_analytics.primary_shared_key
   }

   # prevent deletion of resource
   lifecycle {
   #   prevent_destroy = true
      ignore_changes = [
         cluster_version,
         component_version[0].kafka,
      ]
   }

   #min_tls_version      =  each.value.min_tls_version

   # TAGs
   tags  =  var.tags

   #
   depends_on             =  [module.dlf2_msi.user_msi,
                              module.sa_data_lake_gen2_fs.sa_data_lake_gen2_fs]
}
```

Please check

Hi @tbugfinder,
Can you please check and suggest?
Is this a product issue?

Actually, TLS 1.2 is the default anyway.

Could you try setting `tls_min_version = "1.2"`?
It looks like the docs and the code might be out of sync.
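
In other words, assuming provider 2.34.0 (where the documented name and the name the provider code accepts appear to disagree), the change would just be:

```hcl
resource "azurerm_hdinsight_kafka_cluster" "hdi_kafka_cluster" {
  # ...
  # min_tls_version = "1.2"   # name used in the docs; rejected by provider 2.34.0
  tls_min_version   = "1.2"   # name the provider code accepts at 2.34.0
  # ...
}
```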

Thanks a lot, @tbugfinder!
`tls_min_version` worked, although I have another policy error creating the cluster. I am checking on it.

How did you check the code? Can I look it up myself in the future?
Could you please share the code location?

I created an issue for that.

Hi @tbugfinder,

I am getting the policy violations below while creating an HDInsight Kafka cluster via Terraform.

The AdditionalInfo payload lists four PolicyViolation entries, all from the policy set "Astra Compute Policies" (astra-az-ap-com-001, assigned at management group e741d71c-c6b6-47b0-803c-0f3b32b07556) with effect "deny":

* astra-az-ap-com-004, "Ensure HDInsight Clusters Resource Provider Connection Is Outbound": properties.networkProperties.resourceProviderConnection is not "Outbound".
* astra-az-ap-com-005, "Ensure HDInsight Clusters Utilize Encryption at Rest Using CMK": properties.diskEncryptionProperties.vaultUri is not set.
* astra-az-ap-com-006, "Ensure HDInsight Clusters Utilize Encryption in Transit": properties.encryptionInTransitProperties.isEncryptionInTransitEnabled is not "true".
* astra-az-ap-com-008, "Ensure HDInsight Clusters Have Private Link Enabled": properties.networkProperties.privateLink is not "Enabled".

Can you please check and suggest how I can set networkProperties to Outbound and enable Private Link? I do not see those options in the Terraform resource azurerm_hdinsight_kafka_cluster.
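
For what it's worth, later versions of the azurerm provider (well after 2.34.0) added a `network` block to this resource that maps to those ARM properties; a sketch, assuming an upgraded provider:

```hcl
resource "azurerm_hdinsight_kafka_cluster" "hdi_kafka_cluster" {
  # ... existing arguments ...

  # Maps to properties.networkProperties.resourceProviderConnection
  # and properties.networkProperties.privateLink in the ARM API.
  network {
    connection_direction = "Outbound"
    private_link_enabled = true
  }
}
```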

Do you also create / own the VNET or do you just consume it within your deployment?

Hi @tbugfinder,

I just use the vnet_name that is already provided, as part of this HDInsight creation.
But I can ask for changes if something needs to be set on the VNet.
Where do I need to set the outbound or private-endpoint network properties?
Can you please give me an example?

```hcl
      head_node {
         vm_size              =  each.value.head_node.vm_size
         username             =  each.value.head_node.username
         ssh_keys             =  [ var.cluster_kv_ksc_map["Standard"].secrets["ssh-pub-key"].value ]
         virtual_network_id   =  local.vnet_ids[each.value.head_node.vnet_name]
         subnet_id            =  var.subnet_ids[each.value.head_node.snet_name]
      }
```

The private endpoint is basically a service-mapped NIC within the subnet(s).
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/private_endpoint
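
A minimal sketch of that resource; all names here are hypothetical placeholders, and the supported subresource names for HDInsight should be confirmed against the provider docs:

```hcl
# Hypothetical example: resource names, subnet key, and cluster key are placeholders.
resource "azurerm_private_endpoint" "hdi_kafka_pe" {
  name                = "pe-hdi-kafka"
  location            = var.location
  resource_group_name = var.resource_group_name
  subnet_id           = var.subnet_ids["snet-hdinsight"]

  private_service_connection {
    name                           = "psc-hdi-kafka"
    private_connection_resource_id = azurerm_hdinsight_kafka_cluster.hdi_kafka_cluster["example"].id
    is_manual_connection           = false
    subresource_names              = ["cluster"]   # assumption: verify the HDInsight subresource/group IDs
  }
}
```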

Overall it looks like your admin has set up multiple policies, and four of those are violated, correct? The admin should be able to explain the HDInsight network requirements in your environment.

Inbound and outbound traffic might be configured using the NSGs within the VNet.
I didn't find ready-to-use Terraform code that applies to your "restricted" environment.
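
As a starting point, HDInsight management traffic can be allowed on the subnet's NSG via the "HDInsight" Azure service tag; a sketch with illustrative names and priority (confirm the exact rules your environment requires):

```hcl
# Sketch: allow inbound HDInsight management traffic using the "HDInsight"
# service tag. Variable names and the priority are illustrative.
resource "azurerm_network_security_rule" "hdi_mgmt_inbound" {
  name                        = "AllowHDInsightManagement"
  priority                    = 300
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "443"
  source_address_prefix       = "HDInsight"   # Azure service tag
  destination_address_prefix  = "*"
  resource_group_name         = var.resource_group_name
  network_security_group_name = var.nsg_name
}
```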