Access module output variables the same way as resource output attributes

Hi there, I am trying to convert repetitive multi-environment Terraform code to modules, using Terraform: Up and Running as a guide, and I'm running into an issue. I am trying to access module output variables the same way as resource output attributes, but it does not seem to be working. I am sure I am doing something wrong, but I am following the book.

terraform --version
Terraform v0.12.6
+ provider.aws v2.22.0

Tree:

.
├── global
│   ├── main.tf
│   ├── output.tf
│   └── var.tf
├── main.tf
├── modules
│   └── cluster
│       ├── dnodes.sh
│       ├── enodes.sh
│       ├── main.tf
│       └── var.tf
├── prod
│   └── cluster
│       └── main.tf
└── provider.tf

global/output.tf

output "sg" {
    value = "${aws_security_group.sg.name}"
}

output "profile" {
    value = "${aws_iam_instance_profile.profile.name}"
}

Just one instance resource in modules/cluster/main.tf

resource "aws_instance" "dnode" {
  count                   = "${var.dnodes}"
  ami                     = "${var.ami}"
  instance_type           = "${var.instance_type_dnode}"
  subnet_id               = "${var.subnet}"
  key_name                = "${var.key}"
  vpc_security_group_ids  = ["${module.global.sg}"]
  iam_instance_profile    = "${module.global.profile}"
  user_data               = "${file("dnodes.sh")}"
}

prod/cluster/main.tf

provider "aws" {
  region  = "us-west-2"
  profile = "prod"
}

module "cluster" {
  source                 = "../../modules/cluster"
  availability_zone      = "us-west-2a"
  environment            = "PROD"
  iam_instance_profile   = "${module.global.profile}"
  key                    = "AWS_PROD_ML_OREGON"
  sg                     = "${module.global.sg}"
}

module "global" {
  source      = "../../global"
  environment = "PROD"
  vpc         = "vpc-d86ec3be"
}

terraform plan:

Error: Reference to undeclared module

  on modules/cluster/main.tf line 7, in resource "aws_instance" "dnode":
   7:   vpc_security_group_ids  = ["${module.global.sg}"]

No module call named "global" is declared in global.cluster.


Error: Reference to undeclared module

  on modules/cluster/main.tf line 7, in resource "aws_instance" "dnode":
   7:   vpc_security_group_ids  = ["${module.global.sg}"]

No module call named "global" is declared in prod.cluster.


Error: Reference to undeclared module

  on modules/cluster/main.tf line 8, in resource "aws_instance" "dnode":
   8:   iam_instance_profile    = "${module.global.profile}"

No module call named "global" is declared in global.cluster.


Error: Reference to undeclared module

  on modules/cluster/main.tf line 8, in resource "aws_instance" "dnode":
   8:   iam_instance_profile    = "${module.global.profile}"

No module call named "global" is declared in prod.cluster.


Error: Reference to undeclared module

  on modules/cluster/main.tf line 34, in resource "aws_instance" "enode":
  34:   vpc_security_group_ids  = ["${module.global.sg}"]

No module call named "global" is declared in global.cluster.


Error: Reference to undeclared module

  on modules/cluster/main.tf line 34, in resource "aws_instance" "enode":
  34:   vpc_security_group_ids  = ["${module.global.sg}"]

No module call named "global" is declared in prod.cluster.

Hi @wblakecannon,

Your module "global" block is in the prod/cluster directory, so only .tf files in that directory can refer to module.global.

In modules/cluster, you must instead access those values via input variables on that module. Judging by the module "cluster" block, you already have input variables iam_instance_profile and sg defined on that module, so you can use var.iam_instance_profile and var.sg to access those values from the files in the modules/cluster directory:

resource "aws_instance" "dnode" {
  count                   = "${var.dnodes}"
  ami                     = "${var.ami}"
  instance_type           = "${var.instance_type_dnode}"
  subnet_id               = "${var.subnet}"
  key_name                = "${var.key}"
  vpc_security_group_ids  = ["${var.sg}"]
  iam_instance_profile    = "${var.iam_instance_profile}"
  user_data               = "${file("dnodes.sh")}"
}
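
For that to work, modules/cluster/var.tf would also need declarations for those two variables. A minimal sketch (variable names assumed from the module "cluster" block shown above):

variable "sg" {
  description = "Name of the security group to attach to instances"
}

variable "iam_instance_profile" {
  description = "Name of the IAM instance profile to attach to instances"
}

In Terraform 0.12 these can be declared without a type, though adding type = string would make the module's interface more explicit.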

Thanks for your reply. I am thoroughly confused about how to do this. I updated my Terraform per your recommendations and now I get:

Error: Reference to undeclared output value

  on prod/cluster/main.tf line 10, in module "cluster":
  10:   iam_instance_profile = "${module.global.profile}"

An output value with the name "profile" has not been declared in
module.prod.module.global.


Error: Reference to undeclared output value

  on prod/cluster/main.tf line 10, in module "cluster":
  10:   iam_instance_profile = "${module.global.profile}"

An output value with the name "profile" has not been declared in
module.global.module.global.


Error: Reference to undeclared output value

  on prod/cluster/main.tf line 12, in module "cluster":
  12:   sg                   = "${module.global.sg}"

An output value with the name "sg" has not been declared in
module.global.module.global.


Error: Reference to undeclared output value

  on prod/cluster/main.tf line 12, in module "cluster":
  12:   sg                   = "${module.global.sg}"

An output value with the name "sg" has not been declared in
module.prod.module.global.

Hi @wblakecannon,

Unfortunately it’s hard to follow exactly how you’ve structured your configuration from what you’ve shared. These new error messages include module addresses like module.global.module.global, which suggests that you are referring to the global module from inside itself, or that something else is going on that I can’t see from the snippets here.

Since this configuration has lots of different parts, could you perhaps put it into a temporary GitHub repository and share a link to the repository? That’ll make it easier to see how all of these modules are connected together and what exactly each of them is expecting (via Input Variables) and exporting (via Output Values).

Sure. Here’s the temporary git repo https://github.com/wblakecannon/terraform-ml

Just a warning… I have no idea what I’m doing!

Thanks @wblakecannon! Much easier to see what is depending on what now.

It looks like the update you made after my first comment did fix that first round of errors, and these new errors are an unrelated problem. This error is saying that module.global must explicitly export outputs named sg and profile for its calling module to refer to. You can declare these in a file such as global/output.tf like this:

output "sg" {
  value = aws_security_group.sg.id
}

output "profile" {
  value = aws_iam_instance_profile.profile.name
}

Another thing I noticed reading through the configuration is that your root main.tf is creating two instances of the module in prod/cluster, one in a module "prod" block and another in a module "global" block. Having two blocks here will cause Terraform to try to create all of the resources inside this module twice, which may either lead to duplicate objects or naming conflicts depending on the constraints of the remote APIs. This is also why each of the error messages in your output appeared twice (once for module.global.module.global and once for module.prod.module.global).

Perhaps what you intended was for the root main.tf to only have the module "prod" block, and then the “prod” module in turn has a module "global" block referring to the module in global. If you do that then Terraform will only create one of each requested instance.

The name “prod” suggests that you’re using that module to represent an environment, though. While having all of your environments together in a single Terraform configuration is possible, in most cases we recommend having a separate configuration root per environment so that you can apply changes to each one separately. In your case, doing that would mean deleting the .tf files in the root and then running Terraform from your prod/cluster directory:

cd prod/cluster
terraform init
terraform apply

If you were to then make another environment named “staging” then you might create a new directory staging/cluster that contains a similar set of module instantiations, but with different arguments passed in to the modules to reflect the differences between the environments. To apply changes to the “staging” environment you’d run Terraform from that other directory instead:

cd staging/cluster
terraform init
terraform apply

Each of these configurations would have its own instance of the global module and thus would have its own versions of all of the resources described inside, keeping the two environments separated from one another.
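
Putting that recommendation together with your existing tree, the layout might end up looking something like this (the staging directory is hypothetical, added only to illustrate the pattern):

.
├── global
│   ├── main.tf
│   ├── output.tf
│   └── var.tf
├── modules
│   └── cluster
│       ├── dnodes.sh
│       ├── enodes.sh
│       ├── main.tf
│       └── var.tf
├── prod
│   └── cluster
│       ├── main.tf
│       └── provider.tf
└── staging
    └── cluster
        ├── main.tf
        └── provider.tf

Note there are no .tf files in the root anymore; each environment directory is its own configuration root with its own state.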

Thanks, Martin, for the time you put forth helping me out on this issue I was having. I’m not new to Terraform but just couldn’t figure out why my modules weren’t working.

I can now run a terraform plan, so I should be good to go. I just think I need to do more playing around and configuring to make this work.

Our nonprod environments (stage, dev, test, etc.) are in one VPC and our prod environments are in another VPC. That’s why I made the global module. However, I’m not sure that will work in nonprod, because with the way I have it set up now I’d have to create all of that shared infrastructure in just one environment. Anyway, I have the ball rolling now. Thanks again for your help.