Vault provider tries to connect to Vault before it has actually been deployed

Hello,

I’m trying to set up a Vault cluster for which I will only obtain the token after it has been deployed. Unfortunately the Vault provider does a connectivity check beforehand and fails because it cannot connect to Vault, which doesn’t yet exist in the first place. Is there any way I can skip this check or postpone it until the Vault resource has finished, or something to that effect?

I don’t really get how this is supposed to work. I guess I’m not supposed to combine providers like that in the same state anyway?

I was just reading this: https://github.com/hashicorp/terraform/issues/2430
I’m guessing this is still the case now too.

In my case, given that I’m generating the Kubernetes certificates through Vault, which I deploy after Consul (because I need the DNS names) and, of course, before Kubernetes (as I’m injecting the certificates through cloud-init - I guess that’s not necessarily clever because of the security concerns, but let’s ignore that for now), this looks like an almost impossible situation :slight_smile:

If you are deploying something and then also want to set things up within that system, the normal way would be to have two different root modules (i.e. two different states) and only run the second one after the first has completed.

Yeah, I thought that might be the case. Any ideas how I can decently share the common variables like, I don’t know, network subnets and such?

We use the remote state data source.

I’m now reading the documentation, where it says:

Although terraform_remote_state only exposes output values, its user must have access to the entire state snapshot, which often includes some sensitive information.

Does that mean that I have to explicitly create output variables in the initial stage for all the global variables before I can use them?
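
So something like this in the first stage, I guess (just a sketch; network_subnet is a made-up example of one of those variables):

output "network_subnet" {
  value = var.network_subnet
}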

For the second stage, would something like this make sense?

data "terraform_remote_state" "core" {
  backend = "local"

  config = {
    path = "${path.module}/../1-core/terraform.tfstate"
  }
}

Am I understanding this correctly?

Yes, remote state can only access items explicitly shared via outputs, similarly to how modules work.

Do you know if there’s a way of doing this through modules by exposing the variables to the whole Terraform state? That is to say, having them available outside the module block as well, for everything that I’m running through the .tf files?

No… But you could have multiple modules load the same YAML file, to share data that way.
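
For example, something along these lines (just a sketch; the common.yaml name and path are only illustrative):

locals {
  # Decode a shared YAML file that both root modules can read
  common = yamldecode(file("${path.module}/../common.yaml"))
}

You can then reference values such as local.common.network_subnet anywhere in that module.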

So basically, with remote state, not only am I unable to use global variables directly, I also need to double the number of entries, as each variable will need a corresponding output variable, right?

Therefore I cannot keep the original variable name reference, because this will be an output variable that belongs to the root module? So that means that for my other main.tf (the second Terraform state) I cannot keep the same variable reference, and I basically need yet another pattern there, different from the original/initial state? Am I understanding this correctly?

I don’t think I understand the part about multiple modules loading the same YAML file. And don’t you mean HCL anyway?

Remote state is a similar concept to how modules work.

So if you want to pass things from, say, the root module to a sub-module, you need to have a variable defined in that module. And if you want to pass a value from a sub-module back to the root module, you need to define an output in the module.

You can then reference that output using something like module.<name>.<output>
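
A quick sketch of that pattern (all names here are made up):

# In the sub-module, e.g. ./modules/network
variable "cidr" {
  type = string
}

output "cidr" {
  value = var.cidr
}

# In the root module
module "network" {
  source = "./modules/network"
  cidr   = "10.0.0.0/16"
}

# The value is then available elsewhere in the root module as module.network.cidr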

The idea is similar with remote state. In your root module you define output blocks with whatever you want to share.

Then somewhere else entirely (in a totally different root module) you can set up the remote state data source to point to the S3 bucket, file, etc. containing the state file and reference those outputs: data.terraform_remote_state.<name>.outputs.<output>
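
Using the data source from your example above, that reference would look something like this (assuming the first stage defines an output named subnets; the name is just illustrative, and the locals block is only there to give the expression somewhere to live):

locals {
  # Read the "subnets" output from the 1-core state via the "core" data source
  core_subnets = data.terraform_remote_state.core.outputs.subnets
}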

I’m not quite sure what you are meaning when you talk about “global variables”, “double the entries” or “original reference”. Could you explain a bit more? Maybe via an example?


Yeah, by global variable I simply meant any random variable defined in variables.tf and/or terraform.tfvars that both stages (each with their own state) would need.

In the meantime I was able to overcome this dogmatic insanity by simply using symlinks for the variable files (which also include data sources, actually, such as the datacenter ID, name, whatever in vSphere), and it works OK for now.

I just hope I won’t have to start programming around the vSphere API, because I’d find it easier to do :slight_smile:

[later edit:]
Just to be clearer (regarding your question): first you define the variable (like you normally do), then you double that entry, in that you have to define a corresponding output variable (so if you have 12 variables that need to be used in both stages, you’re going to have 24 entries). And then that variable cannot be referenced in the second stage as var.variable_name but, as you say, as data.terraform_remote_state.<name>.outputs.<output>, which kind of screws up the pattern for the second stage, whose syntax is otherwise very similar to the one in the first stage - for example when defining the cloud-init configuration file through template configs, where I use lots of variables and all that.

Symlinks are such an elegant solution to this horrid prospect, I have to say.