datalakestore.Client#GetProperties: Failure sending request: StatusCode=0

I have a Gen2 storage account created months ago, along with a container in it. It was working fine until last week, when it started failing during terraform plan with the error below:

Error: retrieving File System "container" in Storage Account "storage": datalakestore.Client#GetProperties: Failure sending request: StatusCode=0 -- Original Error: context deadline exceeded
│
│ with azurerm_storage_data_lake_gen2_filesystem.filesystem,
│ on storage.tf line 210, in resource "azurerm_storage_data_lake_gen2_filesystem" "filesystem":
│ 210: resource "azurerm_storage_data_lake_gen2_filesystem" "filesystem" {

We are using below versions:
azurerm = "2.94.0"
Terraform required version >= 1.0.8
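For context, this is roughly how those versions are pinned; the block below is just a sketch of a typical setup, not copied from our actual config:

```hcl
terraform {
  required_version = ">= 1.0.8"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.94.0"
    }
  }
}
```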

Can anybody help us with this issue?

"context deadline exceeded" is how Go describes timeout errors. So the implication here is that Azure took so long to respond that Terraform gave up waiting for it.

Hopefully this gives you a lead you can follow - it’s the kind of problem which is highly likely to be unique to something in your particular environment.

The same plan completed within seconds on 30th May, but now it does not complete even after 5 minutes. Something has definitely changed.

It's true that this is the general error used for a timeout implemented with the Go standard library's timeout feature. It's unfortunate that it appears in Terraform's UI as-is.

The rest of the text before that message suggests that it’s still the Azure provider detecting and reporting this situation though, rather than Terraform itself. I mention this only because it suggests that this timeout is part of the provider’s own behavior, and so the provider might offer ways to extend this timeout.

For example, it’s a common convention for resource types that can potentially run for a long time to support a block called timeouts inside the resource block that can customize the timeouts enforced by the provider for that resource. I’m not sure if this one supports it, but hopefully the docs have something to say about it.
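Assuming this resource does support it, the block would look roughly like the sketch below. The durations and the storage account reference here are illustrative only; check the azurerm provider docs for azurerm_storage_data_lake_gen2_filesystem to confirm which timeouts it accepts:

```hcl
resource "azurerm_storage_data_lake_gen2_filesystem" "filesystem" {
  name               = "container"
  storage_account_id = azurerm_storage_account.storage.id

  # Customize the timeouts enforced by the provider for this resource.
  # Values below are examples, not recommendations.
  timeouts {
    create = "30m"
    read   = "10m"
    delete = "30m"
  }
}
```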

We finally found our problem: it was related to several networking changes. The same setup had been working previously until one day it simply stopped:

  • We allowed the Domain Controllers' NSG to communicate with this VNet.
  • We changed the target sub-resource on the private endpoint created for this storage account: it was previously blob and we switched it to dfs (rough sketch after this list).
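
In case it helps anyone hitting the same thing, here is a sketch of the private endpoint change. The resource names and references are placeholders for our environment; the relevant part is subresource_names:

```hcl
resource "azurerm_private_endpoint" "storage_dfs" {
  name                = "pe-storage-dfs"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  subnet_id           = azurerm_subnet.example.id

  private_service_connection {
    name                           = "psc-storage-dfs"
    private_connection_resource_id = azurerm_storage_account.storage.id
    is_manual_connection           = false
    # Previously "blob"; for the Data Lake Gen2 (dfs) endpoint it must be "dfs".
    subresource_names = ["dfs"]
  }
}
```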