Hello everyone,
I’m trying to work around a limitation of my cloud provider, which doesn’t offer the features I need.
Therefore I need to fall back to direct API calls.
For that, I need dynamically generated tokens coming from another module (because these tokens can be reused).
I tried to pass them directly to the provisioner:
resource "null_resource" "this" {
triggers = {
[...]
}
provisioner "local-exec" {
environment = {
ACCESS_TOKEN = var.access_token
}
command = "./execute_command.sh [...]"
working_dir = "${path.module}/assets"
interpreter = ["bash", "-c"]
}
provisioner "local-exec" {
when = destroy
environment = {
ACCESS_TOKEN = var.access_token
}
command = "./execute_command.sh [...]"
working_dir = "${path.module}/assets"
interpreter = ["bash", "-c"]
}
}
But I’m receiving the well-known error:
│ Destroy-time provisioners and their connection configurations may only
│ reference attributes of the related resource, via 'self', 'count.index', or
│ 'each.key'.
│
│ References to other resources during the destroy phase can cause dependency
│ cycles and interact poorly with create_before_destroy.
And if I put the token in the triggers block, the token that was generated long ago and saved in the state seems to be used instead of a fresh one, and that one is for sure obsolete.
resource "null_resource" "this" {
triggers = {
access_token = var.access_token
}
I searched for solutions on the web but wasn’t able to find a proper one.
Can you help me with providing a transient/dynamic value to a provisioner?
Or should I redesign my code?
Thanks in advance!
Hi @sebastien.latre,
You can do this with the new (well, available since v1.4) terraform_data managed resource type. As per the null_resource documentation, the terraform_data resource is recommended over null_resource.
Below is an example of using a variable and getting it through to the provisioner command via the self attribute. I am using PowerShell as the interpreter in my example, but the concept is the same with a different interpreter.
variable "access_token" {
type = string
default = "ABCD1234"
sensitive = true
}
resource "terraform_data" "this" {
input = var.access_token
triggers_replace = [timestamp()]
provisioner "local-exec" {
environment = {
ACCESS_TOKEN = self.input
}
command = "echo $ENV:ACCESS_TOKEN >> output.txt"
interpreter = ["PowerShell", "-Command"]
}
}
Hope that helps.
Happy Terraforming
Hello, thanks for this idea.
To be honest, I thought about it, but these two resources basically share the same concepts, so I didn’t go for it at first. I still ran a test, and it seems to prove me right.
The input data is saved into the state, so that is the value used at destroy time. Unfortunately, by then the token has most probably expired, so the destroy fails…
I’m looking for a smart way to give my script transient information that is NOT saved into the state, therefore neither in the triggers argument of null_resource nor in the input argument of terraform_data.
And I’d rather not pass it through files (which would require yet another dummy resource).
OK, I understand. I thought that by “reusable” the tokens would be long-lived, but if they are likely to have expired, then yes, the destroy call using the local provisioner script with the “original” token (from state) isn’t going to work. This was in your original post but I missed that detail.
I’ve tried a few things that I thought might work, but I think I have run into the same blocks as you have, e.g. the provisioners using the information from state captured at apply time. I also tried the terracurl provider, but found that the destroy elements are constructed at apply time as well; so, although the destroy elements (url, body, etc.) can be different, since they are constructed at plan/apply time they do not use updated data (from data resources or variables) at destroy time.
A solution I might be tempted to follow is to make the external script self-contained, e.g. obtaining a token and performing the API call itself, using the identity of the context the script runs in to authenticate and get the required token for the API call. But this, of course, depends upon that being possible. A rough sketch of the idea is below.
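Something like this, purely hypothetical: obtain_token.sh stands for whatever mechanism the script’s execution context offers to authenticate and fetch a fresh token.

resource "terraform_data" "this" {
  provisioner "local-exec" {
    when        = destroy
    working_dir = "${path.module}/assets"
    interpreter = ["bash", "-c"]
    # The script obtains a fresh token itself at run time, so nothing
    # secret (or stale) has to come from variables or from the state.
    command = "ACCESS_TOKEN=$(./obtain_token.sh) ./execute_command.sh [...]"
  }
}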
Interesting problem! I would be interested in knowing what final solution you reach.
I already thought of a self-contained script as well (and it’s possible), but recreating the token in each of the resources I need would lead to copy/pasting code in several locations, and I don’t want that.
It would also trigger several token creations where only one is required…
I also don’t have a centralized location in which to store the obtained token for downstream resources.
(Environment variables would have been perfect, but they are not modifiable from within Terraform.)
A potential idea (but ugly in my opinion) is to write the tokens to a file and read the file back wherever needed through the shell’s source builtin.
This is again some variant of a self-contained script, I would say.
The drawback of this is that there is no longer a clear dependency between the code that creates the token and the code that consumes it (because it ends up outside of Terraform).
Honestly, any other idea to change that is welcome!
As I didn’t get any better suggestion, I finally implemented the “authentication through file” approach.
Let me detail a bit what I did (in case it can help others):
A first resource does the authentication (so that it can be shared across several sub-resources) and writes the tokens to a file.
Then the sub-resource that needs the tokens reads the file and gets the tokens internally (so this happens outside of the state).
Note: technically speaking, as I’m on the bash side, I’m sourcing the file to get the tokens as environment variables. A simplified sketch follows.
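Here is a minimal sketch of that setup, with hypothetical names: authenticate.sh stands for whatever writes export ACCESS_TOKEN=... lines to tokens.env, and execute_command.sh is the consumer script from earlier.

# First resource: authenticates once (shared across several
# sub-resources) and writes the tokens to a file, outside of the state.
resource "terraform_data" "auth" {
  provisioner "local-exec" {
    working_dir = "${path.module}/assets"
    interpreter = ["bash", "-c"]
    # Hypothetical helper writing lines like `export ACCESS_TOKEN=...`.
    command = "./authenticate.sh > tokens.env"
  }
}

# Sub-resource: sources the file at run time, so the tokens never pass
# through variables, triggers, or input, and never land in the state.
resource "terraform_data" "consumer" {
  depends_on = [terraform_data.auth]

  provisioner "local-exec" {
    working_dir = "${path.module}/assets"
    interpreter = ["bash", "-c"]
    command     = "source ./tokens.env && ./execute_command.sh [...]"
  }

  provisioner "local-exec" {
    when        = destroy
    working_dir = "${path.module}/assets"
    interpreter = ["bash", "-c"]
    # At destroy time the file is read again, so whatever tokens are
    # current on disk are used instead of a value captured in the state.
    command     = "source ./tokens.env && ./execute_command.sh [...]"
  }
}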