TIPS: How to implement module depends_on emulation

Good morning. I thought I would share a workaround I put together to emulate the missing “depends_on” support for modules. It is fairly simple and will be easy to swap out once Terraform implements built-in depends_on support for modules.

First, create a file in your module and add the following to it:

    Add the following line to the resource in this module that depends on the completion of external module components:

    depends_on = ["null_resource.module_depends_on"]

    This will force Terraform to wait until the dependent external resources are created before proceeding with the creation of the
    resource that contains the line above.

    This is a hack until Terraform officially supports module depends_on.

variable "module_depends_on" {
  default = [""]
}

resource "null_resource" "module_depends_on" {
  triggers = {
    value = "${length(var.module_depends_on)}"
  }
}

Next, follow the commented section and add the depends_on = ["null_resource.module_depends_on"] line to the resource in your module that you want Terraform to wait for. In my case, it was in the azurerm_virtual_machine resource:

resource "azurerm_virtual_machine" "VM" {
  # ...
  depends_on = ["null_resource.module_depends_on"]
  # ...
}

Finally, when you call the module in a Terraform .tf file, you simply do it like this. The interesting line is the one called module_depends_on. Put as many resource references as you need in this list:

module "jumpbox01" {
  source = ""

  module_depends_on = ["${module.FWCore01.firewall}", "${module.FWMGMT01.firewall}"]
  name              = "${var.envprefix}-jumpbox01"
  # ...
}


Thanks for sharing this tip, @bernardmaltais!

The key insight here is that variables are nodes in the dependency graph too, and so you can use them as a “hub” for passing dependencies across the module boundary.

If you are using Terraform 0.12 or later, this pattern is more straightforward because depends_on can refer directly to variables:

variable "vm_depends_on" {
  type    = any
  default = null
}

resource "azurerm_virtual_machine" "example" {
  depends_on = [var.vm_depends_on]

  # ...
}

Since this variable is being used only for its dependencies and not for its value, I defined it as having type any to get the most flexibility. The caller of this module can then use vm_depends_on in the same way as the first-class depends_on meta-argument:

module "example" {
  source = "..."

  vm_depends_on = [module.fw_core01.firewall]
}

One nice side-benefit of this approach is that you can potentially offer multiple different depends_on-like variables so that different parts of the module can have different dependencies. Keeping the dependencies tightly scoped will improve the performance of terraform apply because it will allow more actions to potentially run concurrently, but of course that comes at the expense of some usability for the caller of having to think about the dependencies of different parts of the module separately.
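For instance, a module could expose two separately scoped dependency variables like this. This is only an illustrative sketch, not code from the thread: the variable names, the DNS record resource, and the split of dependencies are hypothetical.

```hcl
# Sketch: two independently scoped "dependency hub" variables, so the
# caller can sequence each part of the module separately.
variable "vm_depends_on" {
  type    = any
  default = null
}

variable "dns_depends_on" {
  type    = any
  default = null
}

resource "azurerm_virtual_machine" "example" {
  # Waits only on whatever the caller passes in vm_depends_on.
  depends_on = [var.vm_depends_on]
  # ...
}

resource "azurerm_dns_a_record" "example" {
  # Waits only on dns_depends_on, so it can be created concurrently
  # with the VM's dependencies.
  depends_on = [var.dns_depends_on]
  # ...
}
```

The caller can then pass a different dependency list to each variable, and Terraform is free to start the DNS record as soon as its own dependencies are satisfied, without waiting for the VM's.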


Thank you for this improved solution! Much cleaner. I will implement this in my modules in the future.

Thanks for the workaround…
Tried this and it worked well… but sometimes I see this error:
Error: error updating S3 Bucket (mys3bucket) tags: error setting resource tags (mys3bucket): NoSuchBucket: The specified bucket does not exist
status code: 404, request id: <>, host id: <>

A rerun fixes the issue.


I’m trying to create an Azure Kubernetes cluster followed by a Kubernetes secret, but unfortunately the above solution is not working for me.

My module tree structure is:

The file root/module_kubernetes_secret/ looks as follows:

variable "target_group_depends_on" {
  type    = any
  default = null
}

resource "kubernetes_secret" "secret" {
  depends_on = [var.target_group_depends_on]
  # ...
}


The file root/module_kubernetes_cluster/ looks as follows:

output "kubconfig" {
  value      = azurerm_kubernetes_cluster.k8s-cluster.kubconfig
  depends_on = [azurerm_kubernetes_cluster.k8s-cluster]
}

The file root/staging/ looks as follows:

module "k8s" {
  source = "…/module_kubernetes_cluster"
}

module "secret" {
  source = "…/module_kubernetes_secret"
  target_group_depends_on = "module.k8s.kubconfig"
}

Can you help me with this ?


I’ve run into a similar issue on Azure when creating a subnet before the associated vnet had finished being created. Without a way to force the subnet to wait for the network creation, the two ran in parallel and the subnet process always failed on the first run, but succeeded on subsequent runs because the vnet already existed.

I found Apparentlymart’s reply got me moving in the right direction. I wanted to expand on it further for an interaction between two modules, as an aid to help others jump ahead on the learning curve.

In the code snippets below, I’ve only included the code that is relevant to this example for forcing the Subnet resource creation to wait on the output from the Virtual Network module.


variable "subnet_depends_on" {
  description = "Variable to force module to wait for the Virtual Network creation to finish"
}


resource "azurerm_subnet" "subnet" {
  depends_on = [var.subnet_depends_on]
  # ...
}


resource "azurerm_virtual_network" "virtual_network" {
  # ...
}


output "id" {
  description = "The Azure assigned ID generated after the Virtual Network resource is created and available."
  value       = azurerm_virtual_network.virtual_network.id
}


provider "azurerm" {
  # ...
}

module "subnet" {
  source            = "../modules/subnet"
  subnet_depends_on = module.vnet.id
}

module "vnet" {
  source = "../modules/vnet"
}
