Enabling HTTPS for Vault inside a container

We are running Vault inside a Docker container.

The Vault documentation mentions that to enable HTTPS we should specify the paths of the .cer and .pem files in Vault's config.hcl file.
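For reference, a TLS listener stanza in config.hcl typically looks something like the following (the file paths here are assumptions for illustration):

```hcl
# Hypothetical paths; point these at wherever your cert and key actually live.
listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/vault/tls/vault.cer"
  tls_key_file  = "/vault/tls/vault.pem"
}
```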

Should we do the same when Vault is running inside the container?

Can anyone help me with this?

Is this straight Docker or an orchestration system like Nomad, Kubernetes, etc.?

Generally you’d use volume mounts to expose those files into the container - for Kubernetes that would be using Secrets, while for straight Docker it would mean mounting a path from the underlying server into the container.
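As a sketch of what that mount looks like for straight Docker (the host path and image name are assumptions):

```shell
# Expose the host's cert directory read-only inside the container.
docker run -d --name vault \
  -v /etc/vault/tls:/vault/tls:ro \
  hashicorp/vault server
```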

Thanks Stuart. It’s just Docker. Instead of mounting a path from the underlying server into the container, would mapping the container port onto the server port on which HTTPS is enabled also work?

Please correct me if I am wrong.

There are two things you need to do:

  1. Map the path from the server into the container for the certificate files
  2. Expose port 443 from the container to the outside world
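Putting both steps together, a hedged docker run sketch (paths and image name are assumptions) might look like:

```shell
# Step 1: map the certificate files (and config) from the host, read-only.
# Step 2: publish the container's TLS listener (8200) on host port 443.
docker run -d --name vault \
  -p 443:8200 \
  -v /etc/vault/tls:/vault/tls:ro \
  -v /etc/vault/config.hcl:/vault/config.hcl:ro \
  hashicorp/vault server
```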

If I don’t map the path from the server into the container for the certificate files, and instead just map port 8200 in the container to port 443 on the server (i.e. the port on which HTTPS is enabled on the server), it should also work, right?

You can map the files easily enough as read-only (:ro). For certs I’d rather map them individually than map the whole cert directory. I’m just a little paranoid about mapping drives.

Yes, you either have TLS enabled or not on port 8200; 443 is not necessary when you enable TLS on a listener.

Normally you map 443 to 8200 on a load balancer as TLS pass-through, then enable TLS on the 8200 listener. That way the TLS session terminates on the node.

So then, I can enable HTTPS on port 443 on the server and map it to port 8200 on the container, instead of mapping the certificate files from the server into the container?

It isn’t an OR, it is a BOTH.

You need to expose the port as well as mapping the certificate files.

Depends on how you use Vault and what your VAULT_ADDR is set to:

If VAULT_ADDR points at port 443, then: -p 443:8200

If VAULT_ADDR points at port 8200, then: -p 8200:8200

As @stuart-c said, you need to map and include the cert files.
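To spell that out with a hypothetical hostname:

```shell
# If VAULT_ADDR is https://vault.example.internal (implicit port 443):
docker run -p 443:8200 ...

# If VAULT_ADDR is https://vault.example.internal:8200:
docker run -p 8200:8200 ...
```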

I followed the steps, but got the following error while executing Vault commands inside the container:

vault status

Error checking seal status: Get "": x509: cannot validate certificate for because it doesn't contain any IP SANs

I referred to the following GitHub issue: SSL/TLS Question · Issue #212 · hashicorp/vault

and when I add -tls-skip-verify it works:

vault status -tls-skip-verify

Is this because the .cer has a different hostname than the container on which it is executing?
Can I ignore this error?

Make sure the name you connect with is in the SAN list when generating the SSL cert. In k8s you need to define every pod/container name combo in the SAN list. Ours has about 20 aliases defined.
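As an illustration, here is one way to generate a self-signed cert whose SAN list includes both DNS names and an IP address (the names are assumptions, and a real deployment would get its certs from your CA; this needs OpenSSL 1.1.1+ for -addext):

```shell
# Generate a key and a self-signed cert with DNS and IP SAN entries in one step.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout vault.key -out vault.crt -days 365 \
  -subj "/CN=vault.example.internal" \
  -addext "subjectAltName=DNS:vault.example.internal,DNS:localhost,IP:127.0.0.1"

# Confirm the SANs actually made it into the cert.
openssl x509 -in vault.crt -noout -ext subjectAltName
```

Including IP:127.0.0.1 in the SAN list is what makes the "doesn't contain any IP SANs" error go away when you address Vault by IP.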

You can ignore it, but that’s not exactly the best way to do it.

Thanks Aram. This is very helpful.
Since Vault is running inside the container, can I define api_addr and cluster_addr in the Vault config file as below, or should I use the Docker host domain name?

api_addr = "
cluster_addr = "
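For illustration, assuming the container is reachable from its peers under a hypothetical name vault0.example.internal, those addresses would point at the container's own API and cluster ports:

```hcl
# Hypothetical name; use whatever name peers (and the cert SANs) can resolve.
api_addr     = "https://vault0.example.internal:8200"
cluster_addr = "https://vault0.example.internal:8201"
```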

The following messages are displayed in the Vault logs:

[ERROR] core: error during forwarded RPC request: error="rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing remote error: tls: internal error""
2021-10-20T15:57:10.612Z [ERROR] core: forward request error: error="error during forwarding RPC request"

http: TLS handshake error
remote error: tls: unknown certificate

Your SSL cert is not set up correctly. Verify the cert and key for the name you’re using, and make sure it’s either in a Kubernetes secret or mounted.

But from the browser I am able to make HTTPS requests. Maybe this is because the certificate has a CN corresponding to the Docker host and not the container?

When you’re on the host, the default is to access via so set your VAULT_ADDR to avoid the error.
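For example, assuming the certificate’s CN/SAN includes the hypothetical name vault.example.internal:

```shell
# Point the CLI at a name the certificate actually covers.
export VAULT_ADDR=https://vault.example.internal:8200
```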