Variables and conditionals with dynamic blocks not working as expected (azurerm provider)

I am trying to use the dynamic blocks introduced in version 0.12. With this version it is possible, for example, to declare a single resource that creates virtual machines (Windows / Linux).

The problem I’m having is how to declare the condition that decides what kind of machine to create; I have tried several options, but they all hit the same problem.

I do not understand what is happening. If I declare the value I need directly in locals there are no problems, but doing it that way defeats the purpose.

Any help is welcome.
Thank you!

Issue description:
In the example that follows, if I use this condition I get an error because Terraform tries to create both dynamic blocks, even when the result of “os_win” is “true” (it is being evaluated correctly):

locals {
  os_win          = substr(lookup(var.image, "publisher", "value_not_declared"), 0, 9) == "Microsoft" ? true : false
  dynamic_linux   = !local.os_win ? { dummy_create = true } : {}
  dynamic_windows = local.os_win ? { dummy_create = true } : {}
}

If I declare it manually (as a literal rather than an expression), it works correctly, but as I said earlier, it does not make sense to do so.

locals {
  os_win          = false
  dynamic_linux   = !local.os_win ? { dummy_create = true } : {}
  dynamic_windows = local.os_win ? { dummy_create = true } : {}
}

All code:

terraform -version
Terraform v0.12.5
+ provider.azurerm v1.31.0
# variable "image" {
#   type = map
#   default = {
#     publisher = "Canonical"
#     offer     = "UbuntuServer"
#     sku       = "16.04-LTS"
#     version   = "latest"
#   }
# }

variable "image" {
  type = map
  default = {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter"
    version   = "latest"
  }
}
locals {
  os_win = substr(lookup(var.image, "publisher", "value_not_declared"), 0, 9) == "Microsoft" ? true : false
  #os_win          = true
  dynamic_linux   = !local.os_win ? { dummy_create = true } : {}
  dynamic_windows = local.os_win ? { dummy_create = true } : {}
}

resource "azurerm_resource_group" "main" {
  name     = "testjc-resources"
  location = "West US 2"
}

resource "azurerm_virtual_network" "main" {
  name                = "testjc-network"
  address_space       = [""]
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefix       = ""
}

resource "azurerm_network_interface" "main" {
  name                = "testjc-nic"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_virtual_machine" "main" {
  name                  = "testjc-vm"
  location              = azurerm_resource_group.main.location
  resource_group_name   = azurerm_resource_group.main.name
  network_interface_ids = [azurerm_network_interface.main.id]
  vm_size               = "Standard_DS1_v2"

  storage_image_reference {
    publisher = lookup(var.image, "publisher")
    offer     = lookup(var.image, "offer")
    sku       = lookup(var.image, "sku")
    version   = lookup(var.image, "version")
  }

  storage_os_disk {
    name              = "myosdisk1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "testadmin"
    admin_password = "Password1234!"
  }

  dynamic "os_profile_linux_config" {
    for_each = local.dynamic_linux

    content {
      disable_password_authentication = false
    }
  }

  dynamic "os_profile_windows_config" {
    for_each = local.dynamic_windows

    content {
      enable_automatic_upgrades = false
      provision_vm_agent        = true
    }
  }
}

terraform plan

Error: "os_profile_windows_config": conflicts with os_profile_linux_config

  on line 81, in resource "azurerm_virtual_machine" "main":
  81: resource "azurerm_virtual_machine" "main" {

Error: "os_profile_linux_config": conflicts with os_profile_windows_config

  on line 81, in resource "azurerm_virtual_machine" "main":
  81: resource "azurerm_virtual_machine" "main" {

Hi @jcmaasb,

Unfortunately I think this situation is running up against a limitation of the Terraform SDK, which the Azure provider uses to interact with Terraform.

Terraform is asking the provider to validate the configuration when local.os_win isn’t known yet, and so Terraform isn’t yet able to determine the values of local.dynamic_linux and local.dynamic_windows, so it must conservatively assume that a block of both types might appear in the result. The provider then rejects that, because it has a rule in its schema that these two cannot be used together.

As long as the AzureRM provider is still Terraform 0.11 compatible, and thus using the 0.11-oriented Terraform SDK, I don’t think there is any way you could write this that would pass the validation. The new 0.12-oriented SDK is starting development now, so a future provider version using it is likely to address this, but that won’t happen in the near future.

For the moment I think the best option would be to write the resource block itself twice – once with the Windows config block and once with the Linux config block – and use a conditional count on each one to ensure that only one is used at a time.
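To illustrate that workaround, here is a rough sketch (resource names are illustrative, and the shared arguments from your existing resource are omitted for brevity):

```hcl
# Two copies of the resource; the conditional count ensures
# that exactly one of them is created for a given image.
resource "azurerm_virtual_machine" "linux" {
  count = local.os_win ? 0 : 1

  name = "testjc-vm"
  # ... all of the shared arguments from your existing resource ...

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

resource "azurerm_virtual_machine" "windows" {
  count = local.os_win ? 1 : 0

  name = "testjc-vm"
  # ... all of the shared arguments from your existing resource ...

  os_profile_windows_config {
    enable_automatic_upgrades = false
    provision_vm_agent        = true
  }
}
```

Because the conflicting nested blocks now live in separate resource blocks, the provider’s schema-level validation never sees them together, and the count expressions ensure only one virtual machine is actually planned.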

It’s possible that the provider code could work around this itself by handling the conflict constraint in a more flexible way: rather than declaring that the two are always incompatible, it could instruct the SDK to allow both and then do its own checking of the image operating system to dynamically choose which one is allowed. By implementing the logic itself, the provider could then handle the case where the image isn’t known yet by skipping that validation check, deferring it until plan time or apply time once a specific value is known. It may be worth starting that discussion in an issue in the AzureRM provider repository, though there may well be specific details about the Azure API that make what I suggested here infeasible, once the AzureRM provider team investigates.

Hi @apparentlymart,
Thank you very much for your quick response.

I will follow all of your suggestions and open an issue in the AzureRM provider repository so that they can assess whether it is possible to make improvements in the future.