How to debug "Vault token contains non-printable characters" errors

I’m getting the following error when doing a terraform plan of Vault configuration/resources:

Error: configured Vault token contains non-printable characters and cannot be used

  on target/main.tf line 10, in provider "vault":
  10: provider "vault" {

The Vault token is passed in via the VAULT_TOKEN environment variable, set in the docker-compose.yml file. The token string itself is correct, as it works as expected with the Vault CLI tool.

While I’m by no means sure, I suspect the issue may be related to mangled or extra characters being added to the token string, since it is provided as a KMS-encrypted string and decrypted on the fly via shush. The problem is that there is no way to tell what the actual string parsed by Vault is that produces the error. I’ve tried the TRACE and DEBUG log levels in Terraform and neither shows the token (see [1] for an output sample).

If I just run env in the Docker image entrypoint before calling terraform, the value of the VAULT_TOKEN variable looks correctly printed. In fact, if I copy the string from the terminal and paste it into the Vault CLI, it authenticates successfully against the Vault server.
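One way to check for this kind of mangling (a debugging sketch of my own, not something from the logs) is to dump the variable's raw bytes inside the container; control characters that env renders invisibly, such as a trailing newline or carriage return, show up explicitly:

```shell
# Dump the raw bytes of VAULT_TOKEN; control characters such as \n or \r
# appear explicitly in od's output even though `env` hides them.
printf '%s' "$VAULT_TOKEN" | od -An -c

# Count bytes outside the printable ASCII range; a non-zero count means
# the token would trip a non-printable-character check.
count=$(printf '%s' "$VAULT_TOKEN" | tr -d '[:print:]' | wc -c)
echo "non-printable bytes: $count"
```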

Any ideas of what may be the issue or how to debug it?

Software versions:

  • Terraform: v0.12.5
  • Terraform Vault provider: v2.0.0_x4

P.S. Opening this topic in the Vault category as it is the component that is returning the error, per [2]. Feel free to categorize as Terraform if needed.

[1] https://gist.github.com/frodera/e89d1396855e76fcc626b67ce91fe4bb
[2] https://github.com/hashicorp/vault/blob/e4136718ad4337abd1886160e5f2b4e760666d84/api/client.go#L759

Hi Francisco,

Vault and its provider are careful not to log the token so as not to expose it to those who shouldn’t have it. You could try replacing the Vault provider in your terraform config with something like local-exec, which could then print the token so you can debug what’s going on.
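As a sketch of that idea (hypothetical and untested on my side): a null_resource with a local-exec provisioner can dump the token's raw bytes to a scratch file without routing it through the Vault provider, e.g.:

```hcl
# Hypothetical debugging stand-in for the vault provider block.
# Writes the raw bytes of VAULT_TOKEN to a scratch file so any stray
# characters become visible; remove this resource (and the file) once
# you are done debugging.
resource "null_resource" "debug_vault_token" {
  provisioner "local-exec" {
    command = "printf '%s' \"$VAULT_TOKEN\" | od -An -c > /tmp/vault-token-bytes.txt"
  }
}
```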

I have to say that I’ve never heard of a setup like yours, where the token is being encrypted by another system. That doesn’t mean it’s wrong or bad, but it does make me wonder if there’s a simpler way to do it. Care to expand on your use case?

Thanks @ncabatoff for your comments.

First of all, we have found a workaround for this by calling shush directly in the container entrypoint. The problem seems to occur only when using environment variables. I have not had time to dig any deeper, but it looks like it may be an issue with shush.

As for our setup, we are encrypting in KMS the Vault token used by Terraform’s Vault provider. AFAIK this token is required by Terraform to apply the desired config settings on the remote Vault server.
Is there a better, more secure or convenient way of doing this?

I don’t know if there’s a better option than what you’re doing, because I don’t know any of the details of your setup. How are you invoking Terraform? Is it a human, or some process like Jenkins? What environment are you running it in, e.g. a cloud VM, an on-prem k8s cluster, etc.?

Assuming that you’re running within AWS, a good option might be to deploy Vault Agent with auto-auth and caching enabled. Then you could have Vault Agent authenticate to Vault using the AWS auth method (see e.g. https://learn.hashicorp.com/vault/developer/vault-agent-aws).

Then the question becomes how to get the Terraform Vault provider to ask the agent for a token. I learned today that Vault has a notion called a “token helper”: https://www.vaultproject.io/docs/commands/token-helper.html. So you could create a ~/.vault file containing a token_helper config setting that points to a shell script which asks Vault Agent to create a child token, then emits it on stdout. I haven’t tried this, but looking at the code it should work.
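An untested sketch of such a helper (the agent address and file paths are assumptions on my part): Vault invokes the helper with get, store, or erase as its first argument, and for get it must print a token to stdout. Pointing VAULT_ADDR at the agent's listener lets the token creation ride the agent's auto-auth token:

```shell
#!/bin/sh
# Hypothetical token helper script, referenced from ~/.vault as:
#   token_helper = "/usr/local/bin/vault-token-helper.sh"
case "$1" in
  get)
    # Ask the local Vault Agent (assumed to listen on 127.0.0.1:8100)
    # to create a child token and print only the token field.
    VAULT_ADDR=http://127.0.0.1:8100 vault token create -field=token
    ;;
  store|erase)
    # Nothing to do: the agent owns and renews the parent token.
    exit 0
    ;;
esac
```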

If this sounds appealing I suggest filing an issue against the terraform vault provider to add explicit Vault Agent support so that this can be streamlined.
