Having trouble bringing up Vault node w/ integrated storage

I am having trouble bringing up a Vault node with integrated storage and am getting the following errors. Does anyone have a quick answer?

[root@ip-172-31-94-240 system]$ systemctl status vault
● vault.service - "HashiCorp Vault Service"
Loaded: loaded (/etc/systemd/system/vault.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Fri 2023-12-22 23:11:03 UTC; 1s ago
Process: 5231 ExecStart=/usr/bin/vault server -config=/etc/vault.d/vault.hcl (code=exited, status=1/FAILURE)
Main PID: 5231 (code=exited, status=1/FAILURE)

Dec 22 23:11:03 ip-172-31-94-240.ec2.internal systemd[1]: vault.service: main process exited, code=exited, status=1/FAILURE
Dec 22 23:11:03 ip-172-31-94-240.ec2.internal systemd[1]: Unit vault.service entered failed state.
Dec 22 23:11:03 ip-172-31-94-240.ec2.internal systemd[1]: vault.service failed.

Here is my vault.hcl file:
listener "tcp" {
  address         = "0.0.0.0:8200"
  cluster_address = "0.0.0.0:8201"
  tls_disable     = 1
}

storage "raft" {
  path    = "/vault-data"
  node_id = "ip-172-31-94-240.ec2.internal"
}

ui = true
#log_level = "ERROR"
#api_addr = "http://vault.gswhv.com:8200"
#cluster_name = "my-vault-cluster"
#cluster_addr = "https://vault-node-a.gswhv.com:8201"

Here is my vault.service file:
[Unit]
Description="HashiCorp Vault Service"
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/vault.d/vault.hcl

[Service]
User=vault
Group=vault
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
Capabilities=CAP_IPC_LOCK+ep
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/bin/vault server -config=/etc/vault.d/vault.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
StandardOutput=/logs/vault/output.log
StandardError=/logs/vault/error.log
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
StartLimitIntervalSec=60
StartLimitBurst=3
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Are you able to start Vault from the CLI pointing at your config file? Based on your example config, I think you are missing some parameters. I took your example and added a few things:

ui            = true
cluster_addr  = "http://127.0.0.1:8201"
api_addr      = "https://127.0.0.1:8200"
disable_mlock = true

storage "raft" {
  path    = "./vault-data"
  node_id = "ip-foo.bar"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  cluster_address = "0.0.0.0:8201"
  tls_disable = 1
}
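To see why the unit is actually failing on your box, you can also run Vault in the foreground against the config and check the journal (the binary and config paths below are taken from your unit file):

sudo -u vault /usr/bin/vault server -config=/etc/vault.d/vault.hcl

journalctl -u vault.service -n 50 --no-pager

That usually shows the actual parse or storage error rather than just the exit status.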

Then I can start Vault in dev mode to test:

vault server -config=vault.hcl -dev -dev-root-token-id=root

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variables:

    $ export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: ZMvGHvI6x63WvNilMoW5sqHsi4E/pFyPNp3rm52O00Q=
Root Token: root

Development mode should NOT be used in production installations!

From another terminal I can verify that I can access Vault:

export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN=root
vault auth list
Path      Type     Accessor               Description                Version
----      ----     --------               -----------                -------
token/    token    auth_token_54cb3b16    token based credentials    n/a

If you remove the cluster_addr, as in your example, it throws an error:

Cluster address must be set when using raft storage

This tutorial provides an example of deploying with integrated storage:

I will add these values this afternoon!! Thanks for the help!!

Thank you!! I was able to make the modifications to the vault.hcl file. It worked!!

I was able to init and unseal it!!
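For reference, the flow was roughly this (default key shares, so three unseal keys are needed; the root token placeholder is whatever init printed):

vault operator init
vault operator unseal        # repeated three times with different unseal keys
vault login <initial-root-token>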

Key                  Value
---                  -----
token                s.uFmwflmRWYQf9u2Hql81uoBh
token_accessor       rmG5JeOvP2rtVfG9I5hum5BC
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

[root@ip-172-31-94-240 vault.d]$ vault operator raft list-peers
Node                             Address           State     Voter
----                             -------           -----     -----
ip-172-31-94-240.ec2.internal    127.0.0.1:8201    leader    true

I am now going to create two more nodes and make it a 3-node cluster. Do I need to change the following when bringing up the other two nodes?
cluster_addr = "http://127.0.0.1:8201"

Also, are there any temporary licenses for Vault Enterprise that last longer than 30 minutes? I just saw that in the UI.

Awesome! And yes, cluster_addr is the address advertised to the other Vault servers, so it needs to be reachable by the other nodes in the cluster.
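For example, the second node's config would use its own addresses, and you can optionally add a retry_join block pointing at the first node so it joins automatically (the IPs and hostname below are placeholders):

api_addr     = "http://<node-b-private-ip>:8200"
cluster_addr = "http://<node-b-private-ip>:8201"

storage "raft" {
  path    = "/vault-data"
  node_id = "<node-b-hostname>"

  retry_join {
    leader_api_addr = "http://<node-a-private-ip>:8200"
  }
}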

More info on that (and other settings) can be found here:

You can contact your account manager to get a trial license for Vault Enterprise. Another option (though you won't be installing Vault yourself) is to use HCP Vault, which runs Vault Enterprise:

I was able to get a license. I changed cluster_addr to the public IP of my EC2 instance.
This is getting fun!!

I am trying to join the 2nd and 3rd nodes to the cluster and I keep getting the following error. Any ideas?

Error unsealing: Error making API request.

URL: PUT http://127.0.0.1:8200/v1/sys/unseal
Code: 500. Errors:

  • failed to decrypt encrypted stored keys: cipher: message authentication failed

Never mind!! I got it working. I went ahead and deployed a whole new set of three hosts and started clean. The api_addr is the public IP and the cluster_addr is the private IP. See below:

listener "tcp" {
  address         = "0.0.0.0:8200"
  cluster_address = "0.0.0.0:8201"
  tls_disable     = 1
}

storage "raft" {
  path    = "/vault-data"
  node_id = "ip-172-31-41-20.ec2.internal"
}

ui = true
#log_level = "ERROR"
api_addr = "http://54.242.37.207:8200"
#cluster_name = "my-vault-cluster"
cluster_addr = "http://172.31.41.20:8201"
disable_mlock = true
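For anyone following along, joining each follower looked roughly like this (the join address is the leader's API endpoint, reachable here via its private IP; the unseal keys are the ones from the initial init on the first node):

export VAULT_ADDR=http://127.0.0.1:8200
vault operator raft join http://172.31.41.20:8200
vault operator unseal        # repeated with the cluster's unseal keys until unsealed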

I was able to form a 3-node cluster:
[root@ip-172-31-41-20 ec2-user]# vault operator raft list-peers
Node                             Address               State       Voter
----                             -------               -----       -----
ip-172-31-41-20.ec2.internal     172.31.41.20:8201     leader      true
ip-172-31-47-55.ec2.internal     172.31.47.55:8201     follower    true
ip-172-31-21-199.ec2.internal    172.31.21.199:8201    follower    true

Thanks a ton for your help!!!

Glad you got it working!

You should be able to set the API address to the internal IP as well, unless you have a use case for that to be public.
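For example, something like this on node A (the private IP taken from your config above):

api_addr     = "http://172.31.41.20:8200"
cluster_addr = "http://172.31.41.20:8201"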

Making it public for now. Thanks for all your help!!
