Hi,
I'm provisioning a cluster with Terraform and want to single out the etcd nodes' IPs after the cluster is done, to trigger an Ansible playbook and manipulate the udev write-cache.
So far I have got as far as this:
kubectl get nodes --selector=node-role.kubernetes.io/etcd -o jsonpath='{$.items[*].status.addresses[?(@.type=="InternalIP")].address}' --kubeconfig=kube_config_openstack
which singles out the etcd node IPs. But the next step is a bit unclear to me: how do I get that information into a local file that I can later use as the hosts file for my Ansible playbook?
/Anders.
Hi @anders.johansson,
depending on your overall environment there might be more options.
How do you orchestrate execution between Terraform and Ansible? Do you have CI/CD tooling in place? If so, the kubectl command could perhaps be executed there.
Within Terraform alone, you could also use a local-exec provisioner and run the kubectl command there, redirecting its output to a file.
If nothing really fits well, null_resource is an option, too:
resource "null_resource" "example1" {
  provisioner "local-exec" {
    # Runs a Perl one-liner that writes the current time to completed.txt.
    command     = "open WFH, '>completed.txt' and print WFH scalar localtime"
    interpreter = ["perl", "-e"]
  }
}
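Applied to your case, a minimal sketch might look like the following. The `depends_on` target and the output file name are assumptions; replace them with whatever resource actually creates your cluster and whatever path your Ansible tooling expects.

```hcl
# Sketch only: run the kubectl query once the cluster exists and
# write the etcd node IPs to a local file for Ansible to consume.
resource "null_resource" "etcd_ips" {
  # Hypothetical dependency; point this at your actual cluster resource.
  depends_on = [rancher2_cluster.cluster]

  provisioner "local-exec" {
    command = "kubectl get nodes --selector=node-role.kubernetes.io/etcd -o jsonpath='{$.items[*].status.addresses[?(@.type==\"InternalIP\")].address}' --kubeconfig=kube_config_openstack > etcd_hosts.txt"
  }
}
```

Note that a local-exec provisioner only runs when the null_resource is created; you may want a `triggers` argument if the node set can change between applies.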
Hi @anders.johansson,
I'm not specifically familiar with kubectl, so my advice here will be more general, but a typical way to use Terraform as part of a broader pipeline with subsequent steps is to declare output values in your root module, and then after terraform apply has succeeded you can run terraform output -json to get those finalized output values in a machine-readable format for use with downstream software.
The JSON representation of the output values is unlikely to be directly accepted by your downstream software, but since JSON is a commonly-supported format it’s typically possible to write a small amount of glue code to decode Terraform’s format and then either use it directly to run some other software or generate a new file that is in a different format that the other software knows how to parse.
If you’re running this in an automated pipeline then the typical pattern I’ve seen is to have the job which runs terraform apply
capture the output values JSON as an artifact, and then downstream jobs can retrieve that artifact in order to process it further.
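For example, a root module might expose the addresses like this (the resource name and attribute here are assumptions about an OpenStack-based setup; adapt them to wherever your module actually knows the etcd addresses):

```hcl
# Hypothetical output exposing etcd node IPs gathered elsewhere in the module.
output "etcd_node_ips" {
  value = openstack_compute_instance_v2.etcd[*].access_ip_v4
}
```

After a successful apply, `terraform output -json etcd_node_ips` prints the list as a JSON array, which a small piece of glue code (or an Ansible dynamic inventory script) can parse into a hosts file.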
Thanks @tbugfinder, I will take a look at this.
@apparentlymart, that is also an awesome tip; I hadn't thought of it.
Thanks, both of you, for pointing me in the right direction.
I ended up using the null_resource with the local-exec provisioner.