You thought you were late - check me out!
I’m fairly new to Terraform and I’m looking at a module that uses a null_resource to derive a value and write it to a file, then uses an external data source to read that file and return the value. This pattern seems to confuse Terraform on runs after the first one: if state hasn’t changed, Terraform won’t call the null_resource provider to regenerate the file, but it also never returns values from state. Instead it returns the value from the file, and if the file is gone it just doesn’t work.
@apparentlymart is this combination of null_resource and external data source what you intended to recommend? Have you seen this problem before? Is there a better way to implement the pattern?
Pay particular attention to the first line in the null resource defined in foudelsahi’s solution:
```hcl
triggers = {
  always_run = "${timestamp()}"
}
```
And also the docs for null_resource:
- triggers (Map of String) A map of arbitrary strings that, when changed, will force the null resource to be replaced, re-running any associated provisioners.
Note that the string `"${timestamp()}"` will always change (unless you plan & apply in the same millisecond… haha), and thus will always re-run the provisioner.
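By contrast, a common way to avoid re-running on every apply is to key `triggers` on the actual inputs, so the provisioner only re-runs when those inputs change. A rough sketch (the script path and variable names here are hypothetical, not from the thread):

```hcl
resource "null_resource" "reserve" {
  # Re-run the provisioner only when the inputs change,
  # instead of on every plan via timestamp().
  triggers = {
    vnet_name   = var.vnet_name
    subnet_name = var.subnet_name
  }

  provisioner "local-exec" {
    command = "./reserve.sh ${var.vnet_name} ${var.subnet_name}"
  }
}
```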
Hi @doug-fish! I’ve moved this to a new topic because that other topic was pretty old and it’s nice to keep each topic focused on only one question.
With that said, in the comment you were referring to I was suggesting using `external` instead of `null_resource`, rather than using the two together. That advice only applies if you’re talking about only retrieving some data, and not also creating an important side-effect at the same time.
For example, if it’s important to your situation that you end up with a file on disk containing the information then `external` wouldn’t be appropriate for that alone, though you could potentially get the same effect by using the `external` data source first and then writing its result to disk using the `local_file` resource.
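That combination could look roughly like this (a sketch only; the script name and the `value` result key are hypothetical):

```hcl
# Fetch the information via an external program, which must
# print a JSON object of string values to stdout.
data "external" "lookup" {
  program = ["./lookup.sh"]
}

# Then persist the retrieved value to disk as a managed file.
resource "local_file" "result" {
  filename = "${path.module}/result.txt"
  content  = data.external.lookup.result.value
}
```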
I’d also note that Terraform is not really intended for managing local files on disk, as the warning in the documentation mentions. The hashicorp/local provider exists as a pragmatic way to help solve some problems on the edges of Terraform’s scope, but I’d suggest exhausting all other possible solutions first and treating the creation of a temporary file as a last-resort solution. If you can say a little more about your underlying problem, rather than the way you’re currently trying to solve it, I might be able to suggest some alternative strategies.
@apparentlymart Thanks for the organization and response!
The fundamental problem is to model IP address reservation in Infoblox IPAM, representing the IP address ranges used as Azure private ranges assigned to virtual networks and subnets. The scripting has been configured to know which IP ranges it is responsible for managing. It takes a request for a particular Azure VNET name and Azure subnet, and interacts with IPAM to determine whether there is already an existing range assigned or whether this is a new subnet request. In either case it needs to return the VNET and subnet ranges for use by other Terraform configuration that adds the ranges to the VNET and creates the subnet.
It takes somewhat longer to reserve a new address as compared to returning an existing address.
We are currently approaching this problem using a null_resource for the “reservation” scripting, and an external data source to return the information.
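For concreteness, that approach might be sketched as follows (the script names, argument passing, and result keys are all hypothetical, based only on the description above):

```hcl
# Run the reservation script; re-run only when the request changes.
resource "null_resource" "reserve" {
  triggers = {
    vnet_name   = var.vnet_name
    subnet_name = var.subnet_name
  }

  provisioner "local-exec" {
    command = "./reserve.sh ${var.vnet_name} ${var.subnet_name}"
  }
}

# Ask IPAM for the assigned ranges after the reservation runs.
# The program must print a JSON object of string values to stdout.
data "external" "ranges" {
  program    = ["./lookup_ranges.sh", var.vnet_name, var.subnet_name]
  depends_on = [null_resource.reserve]
}

# Other configuration would then reference, e.g.:
#   data.external.ranges.result.vnet_cidr
#   data.external.ranges.result.subnet_cidr
```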
Do you think this is a reasonable way to go about it? What approach would you take to this problem?