How to retrieve the value returned by a null_resource?

Hi All,

I have the null_resource below to retrieve the Prometheus service's cluster IP:

 resource "null_resource" "get_prometheus_ip" {
  provisioner "local-exec" {
    command = "kubectl get svc prometheus-server -n monitoring | awk -F' ' '{print $3}' | tail -1"
  }
}

I want to use its returned result in another place:

resource "helm_release" "prometheus-adapter" {
  name = "prometheus-adapter"
  chart = "${path.module}/helm/charts/stable/prometheus-adapter/"
  namespace = "default"

  // prometheus URL
  set {
    name = "prometheus.url"
    value = "http://${returnedValueHere}"
  }
}

Is this doable, and is there also a better way to do it?

Thanks :slight_smile:


The local-exec provisioner doesn’t directly support output. The example at https://www.terraform.io/docs/provisioners/local-exec.html demonstrates redirecting the command’s output to a file, which can later be read using the file() function or the local_file data source (https://www.terraform.io/docs/providers/local/d/file.html).

Use of depends_on may be necessary to ensure correct ordering.


Thank you very much for your answer; this works, and I will publish the solution :slight_smile:

The solution, as @jeremykatz suggested, is as follows:

resource "null_resource" "get_prometheus_ip" {
  triggers  =  { always_run = "${timestamp()}" }
  provisioner "local-exec" {
    command = "kubectl get svc prometheus-server -n monitoring | awk -F' ' '{print $3}' | tail -1 | tr -d '\n' >> ${path.module}/prometheus_private_ips.txt"
  }
  depends_on = ["helm_release.prometheus"]
}

data "local_file" "prometheus-ip" {
    filename = "${path.module}/prometheus_private_ips.txt"
  depends_on = ["null_resource.get_prometheus_ip"]
}

resource "helm_release" "prometheus-adapter" {
  name = "prometheus-adapter"
  chart = "${path.module}/helm/charts/stable/prometheus-adapter/"
  namespace = "default"

  // prometheus URL
  set {
    name = "prometheus.url"
    value = "http://${data.local_file.prometheus-ip.content}"
  }
}

I realize I’m late to this topic, but I just wanted to note that because the command you are running seems to be a read-only command that gathers data, another option is to use the external data source to run a wrapper script that gathers the data and returns it in JSON format. That avoids creating temporary files on disk and reading them back in.

Provisioners are intended for operations whose value comes from their side-effects, like starting up services or similar. While we can use them to read data by exploiting the side-effect of creating a temporary file on disk, the external data source more directly represents the intent to gather some data without side-effects.
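
For example, a rough sketch of that approach for the original kubectl use case might look something like this (untested; the script name and the cluster_ip key are arbitrary choices for illustration):

data "external" "prometheus_ip" {
  # Wrapper script that must print a JSON object of string values on stdout.
  program = ["bash", "${path.module}/get-prometheus-ip.sh"]
}

with a wrapper script along these lines:

#!/usr/bin/env bash
set -euo pipefail

# Ask the API server for the ClusterIP field directly rather than parsing table output.
ip=$(kubectl get svc prometheus-server -n monitoring -o jsonpath='{.spec.clusterIP}')

# The external data source expects a flat JSON object of strings on stdout.
jq -n --arg cluster_ip "$ip" '{"cluster_ip": $cluster_ip}'

The Helm value could then reference "http://${data.external.prometheus_ip.result.cluster_ip}" without writing anything to disk.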

With that said, if what you did with the local_file data source is meeting your needs then that’s fine! I’m just sharing the above in case it’s useful to you or to others.



I also use the external data source for a similar use case.

data "external" "policy_document" {
  program = ["bash", "${path.module}/get-policy-doc.bash"]

  query = {
    lb_cntrl_version = var.lb_cntrl_version
  }
}

resource "aws_iam_policy" "this" {
  name        = local.identifier
  policy = base64decode(data.external.policy_document.result.ecoded_doc)
}

And here is the script.

#!/usr/bin/env bash

set -euo pipefail

eval "$(jq -r '@sh "lb_cntrl_version=\(.lb_cntrl_version)"')"

policy_document=$(curl -sS https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v${lb_cntrl_version}/docs/install/iam_policy.json)

ecoded_doc=$(echo $policy_document | base64 -w 0)

jq -n --arg ecoded_doc "$ecoded_doc" '{"ecoded_doc":$ecoded_doc}'

Hello there, I just wanted to add that I accomplish this with Ansible Tower/AWX runs by using the awxkit pip module and supplying the --monitor flag (a rough sketch of that follows the null_resource example below); see: HashiQube - A Development Lab Using All the HashiCorp Products and other Popular Applications such as Docker, Kubernetes, Traefik, Ansible, AWX Tower and loads more.

With a null_resource, I also accomplish it by tailing the user-data output log; see: https://github.com/star3am/terraform-hashicorp-hashiqube/blob/master/modules/aws-hashiqube/main.tf#L190 and https://github.com/star3am/terraform-hashicorp-hashiqube/blob/master/modules/shared/startup_script#L2

resource "null_resource" "debug" {
  count = var.debug_user_data == true ? 1 : 0

  triggers = {
    timestamp = local.timestamp
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    host        = aws_eip.hashiqube.public_ip
    private_key = file(var.ssh_private_key)
  }

  provisioner "remote-exec" {
    inline = [
      # https://developer.hashicorp.com/terraform/language/resources/provisioners/remote-exec#scripts
      # See Note in the link above about: set -o errexit
      "set -o errexit",
      "while [ ! -f /var/log/user-data.log ]; do sleep 5; done;",
      "tail -f /var/log/user-data.log | { sed '/ USER-DATA END / q' && kill $$ || true; }",
      "exit 0"
    ]
    on_failure = continue
  }

  depends_on = [
    aws_instance.hashiqube,
    aws_eip_association.eip_assoc
  ]
}
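
For completeness, the awxkit approach mentioned above boils down to something roughly like this (a hedged sketch: the job template ID is a placeholder, the awx CLI is assumed to be authenticated via its usual environment variables, and the exact arguments can vary between awxkit versions):

resource "null_resource" "awx_job" {
  provisioner "local-exec" {
    # Launch an AWX job template and stream its output until the job finishes.
    # "42" is a placeholder job template ID.
    command = "awx job_templates launch 42 --monitor"
  }
}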