Databricks provider error: somehow resource id is not set

I’m writing an internal module for managing our Azure Databricks resources. The first iteration that simply created a workspace ran fine. However, I am now trying to add clusters and instance pools and running into an issue. It appears to be an Azure auth issue:

Error: cannot configure azure-client-secret auth: cannot get workspace: somehow resource id is not set. Attributes used: azure_client_id, azure_client_secret, azure_tenant_id. Please check for details

  on ../resources/ line 12, in data "databricks_spark_version" "spark_version":
  12: data "databricks_spark_version" "spark_version" {

I included the proper depends_on statements for the data blocks and for the cluster and instance pool, but I still get this error. Based on some Google searching, I tried moving the cluster and instance pool to a sub-module, but that produces the same error.

Provider versions:
databricks: "0.4.3"
azurerm: "2.90.0"

Terraform version: 1.0.4


Do you have the Databricks environment variables also in place?
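For reference, the provider can also pick up Azure service-principal credentials from the environment instead of the provider block. A minimal sketch (all values below are placeholders):

```shell
# Placeholders -- substitute your own tenant, SP, and workspace values.
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="11111111-1111-1111-1111-111111111111"
export ARM_CLIENT_SECRET="my-sp-secret"
export DATABRICKS_HOST="https://adb-1234567890123456.7.azuredatabricks.net"
```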

No, I am passing authentication info through the provider block:

provider "databricks" {
  alias               = "test_01"
  host                = module.test_module_01.workspace_url
  azure_tenant_id     = "tenant-id"
  azure_client_id     = "sp-client-id"
  azure_client_secret = "sp-client-secret"

  # ARM_USE_MSI environment variable is recommended
  azure_use_msi = true
}

Then calling the provider in my module block:

module "test_module_01" {
  source = "../"

  providers = {
    databricks = databricks.test_01
  }
}
<truncated file for space>

Hm, if azure_use_msi should be used, couldn’t azure_*_id variables be dropped?

Yes, that is true. I added them as a desperate attempt to fix my issue.

(Morgan Freeman voice) It did not.
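For anyone following along, an MSI-only provider block (with the service-principal attributes dropped) would look roughly like this; the alias and host are taken from the config above:

```hcl
provider "databricks" {
  alias         = "test_01"
  host          = module.test_module_01.workspace_url
  azure_use_msi = true
}
```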

This is the line of code which raises the error.

How about adding azure_workspace_resource_id and verifying the Contributor role within the subscription?
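Something along these lines, assuming the workspace is created in the same configuration (the resource name `this` is a placeholder):

```hcl
provider "databricks" {
  host                        = azurerm_databricks_workspace.this.workspace_url
  azure_workspace_resource_id = azurerm_databricks_workspace.this.id
}
```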

Good idea, but that did not work.

@plieberg have you solved the issue? I’m asking as I have practically the same issue, and it started happening within the last week.


Same for us. Started happening this week (just returned after break).

I’m wondering if this is related to the following GitHub issue with token {} support.

No, this is still an issue.

I’ve been scratching my head at this for a few days now too. For me the issue seems to be triggered when two conditions are met: a Databricks workspace is created with an azurerm version that predates the introduction of public_network_access_enabled, and then azurerm is upgraded, which forces that attribute to true and breaks the Databricks provider for inexplicable reasons. I am currently trying to figure out whether it’s possible to set public_network_access_enabled = true without breaking the Databricks provider. The upgrade also forces azurerm_storage_data_lake_gen2_path to be recreated, which is not good either.
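To make the condition concrete, this is roughly where the attribute sits; a sketch of pinning it explicitly (resource names are placeholders):

```hcl
resource "azurerm_databricks_workspace" "this" {
  name                          = "my-workspace"
  resource_group_name           = azurerm_resource_group.this.name
  location                      = azurerm_resource_group.this.location
  sku                           = "premium"
  public_network_access_enabled = true
}
```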

It is absolutely a bug in azurerm. I am tired of their half-baked products. I hope this saves others some time.

@ekhaydarov Do you know which version of azurerm that was? Trying to see if back-leveling to that version lets me get this module I am building to work.

My latest attempt was to create the workspace with my module and then come back and add plain resource code to create an instance pool, but it fails for the same reason, even with depends_on specified for the data blocks and the resources.
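For context, a minimal instance pool with the depends_on in place would look roughly like this (a sketch; the names, sizes, and the `smallest` data source are placeholders):

```hcl
resource "databricks_instance_pool" "this" {
  instance_pool_name = "test-pool"
  min_idle_instances = 0
  max_capacity       = 2
  node_type_id       = data.databricks_node_type.smallest.id

  idle_instance_autotermination_minutes = 10

  # Force the workspace to exist before the provider makes any API call.
  depends_on = [azurerm_databricks_workspace.this]
}
```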

I opened a GitHub issue for this; feel free to give it a thumbs up or comment:

Today I also ran into this authentication mess, and I think it got fixed by updating azure-cli.

Please raise an issue on the Databricks provider issue tracker if this behavior persists.

Attributes used: azure_client_id, azure_client_secret, azure_tenant_id. Please check Terraform Registry for details

I think you need to add the service principal to the Databricks workspace. Technically, Azure Databricks requires azure_workspace_resource_id in headers only the first time the SPN makes a call to the Azure Databricks APIs, but I’ve tried to make it explicit in the Terraform provider.

I have struggled with a similar issue for a couple of days now. I had no infrastructure in place before running terraform apply. It failed on one of my data resources while trying to fetch latest_lts, but without a workspace there is no endpoint to fetch it from.

After reading the troubleshooting guide (link):

“Most data resources make an API call to a workspace. If a workspace doesn’t exist yet, an authentication is not configured for provider error is raised. To work around this issue and guarantee proper lazy authentication with data resources, you should add depends_on = [azurerm_databricks_workspace.this] or depends_on = [databricks_mws_workspaces.this] to the body.”

I added a depends_on to my data resources, and after that it went through.

data "databricks_node_type" "smallest" {
  local_disk = true
  depends_on = [azurerm_databricks_workspace.my_workspace]
}

data "databricks_spark_version" "latest_lts" {
  long_term_support = true
  depends_on = [azurerm_databricks_workspace.my_workspace]
}