Service Registration Error: "redirect address must not be empty"

Hi all,

I’m very new to Vault, and this question comes from playing around with the dev environment in the Getting Started docs. But, if you have the time, I would appreciate the help! :slight_smile:

I’m running a dev Consul agent on loopback:8500. My Vault config is really simple. (I’ll post it, if the answer to my problem isn’t obvious.)

When my Consul agent is running and I then start my Vault server, I get:

[ERROR] service_registration.consul: error running service registration: redirect address must not be empty

Vault still runs, but I’m just curious. I’ve done some hunting here and in the Google Group… No luck. Just trying to get the basics down before my exam tomorrow. gulp

Thanks!

John

I don’t think we’ll need the Vault configuration so much as Consul’s service check configuration for Vault, but posting both would be fine, too.

Ah, OK. Well, I’m literally just running

consul agent -dev

(With the binary in my path, obviously.) My Vault config is:

disable_mlock = true

storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = 1
}

Can you remove the listener stanza and try again?
So just using

disable_mlock = true

storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}

as your Vault configuration.

If it is successful, you should be able to visit http://localhost:8500/ui/dc1/kv and see a vault/ entry.
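If you prefer the command line, something like this should show the same thing (off the top of my head, not verified here):

# List the keys Vault wrote under the vault/ prefix (CLI and HTTP API)
consul kv get -recurse vault/
curl 'http://127.0.0.1:8500/v1/kv/vault/?keys'

Both should list the keys Vault has written once the storage backend is working.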

Yes, Vault still seems to be up and running with that smaller configuration, as you suggested, but I’m still seeing that service registration error. (The first warning below repeated until I started Consul.)

Vault writing to STDOUT
BEGINS


2020-04-30T14:22:01.303+0100 [WARN] storage migration check error: error="Get http://127.0.0.1:8500/v1/kv/vault/core/migration: dial tcp 127.0.0.1:8500: connect: connection refused"
2020-04-30T14:22:03.303+0100 [WARN] no api_addr value specified in config or in VAULT_API_ADDR; falling back to detection if possible, but this value should be manually set
2020-04-30T14:22:03.304+0100 [ERROR] service_registration.consul: error running service registration: redirect address must not be empty
==> Vault server configuration:

         Api Address: https://127.0.0.1:8200
                 Cgo: disabled
     Cluster Address: https://127.0.0.1:8201
           Log Level: info
               Mlock: supported: true, enabled: false
       Recovery Mode: false
             Storage: consul (HA available)
             Version: Vault v1.4.0

==> Vault server started! Log data will stream in below:

ENDS

Consul agent -dev
BEGINS

==> Starting Consul agent...
Version: 'v1.7.2'
Node ID: 'e71d3ff1-9026-214f-e254-c5106591862c'
Node name: 'hobbiton'
Datacenter: 'dc1' (Segment: '')
Server: true (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false

==> Log data will now stream in as it occurs:

2020-04-30T14:22:02.761+0100 [DEBUG] agent: Using random ID as node ID: id=e71d3ff1-9026-214f-e254-c5106591862c
2020-04-30T14:22:02.763+0100 [DEBUG] agent.tlsutil: Update: version=1
2020-04-30T14:22:02.764+0100 [DEBUG] agent.tlsutil: OutgoingRPCWrapper: version=1
2020-04-30T14:22:02.765+0100 [INFO] agent.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e71d3ff1-9026-214f-e254-c5106591862c Address:127.0.0.1:8300}]"
2020-04-30T14:22:02.765+0100 [INFO] agent.server.serf.wan: serf: EventMemberJoin: hobbiton.dc1 127.0.0.1
2020-04-30T14:22:02.766+0100 [INFO] agent.server.serf.lan: serf: EventMemberJoin: hobbiton 127.0.0.1
2020-04-30T14:22:02.766+0100 [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=udp
2020-04-30T14:22:02.766+0100 [INFO] agent.server.raft: entering follower state: follower="Node at 127.0.0.1:8300 [Follower]" leader=
2020-04-30T14:22:02.766+0100 [INFO] agent.server: Adding LAN server: server="hobbiton (Addr: tcp/127.0.0.1:8300) (DC: dc1)"
2020-04-30T14:22:02.766+0100 [INFO] agent.server: Handled event for server in area: event=member-join server=hobbiton.dc1 area=wan
2020-04-30T14:22:02.768+0100 [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=tcp
2020-04-30T14:22:02.773+0100 [INFO] agent: Started HTTP server: address=127.0.0.1:8500 network=tcp
2020-04-30T14:22:02.773+0100 [INFO] agent: Started gRPC server: address=127.0.0.1:8502 network=tcp
2020-04-30T14:22:02.773+0100 [INFO] agent: started state syncer
==> Consul agent running!
2020-04-30T14:22:02.829+0100 [WARN] agent.server.raft: heartbeat timeout reached, starting election: last-leader=
2020-04-30T14:22:02.829+0100 [INFO] agent.server.raft: entering candidate state: node="Node at 127.0.0.1:8300 [Candidate]" term=2
2020-04-30T14:22:02.829+0100 [DEBUG] agent.server.raft: votes: needed=1
2020-04-30T14:22:02.829+0100 [DEBUG] agent.server.raft: vote granted: from=e71d3ff1-9026-214f-e254-c5106591862c term=2 tally=1
2020-04-30T14:22:02.829+0100 [INFO] agent.server.raft: election won: tally=1
2020-04-30T14:22:02.829+0100 [INFO] agent.server.raft: entering leader state: leader="Node at 127.0.0.1:8300 [Leader]"
2020-04-30T14:22:02.830+0100 [INFO] agent.server: cluster leadership acquired
Processing server acl mode for: hobbiton - 0
2020-04-30T14:22:02.830+0100 [INFO] agent.server: Cannot upgrade to new ACLs: leaderMode=0 mode=0 found=true leader=127.0.0.1:8300
2020-04-30T14:22:02.830+0100 [INFO] agent.server: New leader elected: payload=hobbiton
2020-04-30T14:22:02.830+0100 [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
2020-04-30T14:22:02.850+0100 [INFO] agent.server.connect: initialized primary datacenter CA with provider: provider=consul
2020-04-30T14:22:02.850+0100 [INFO] agent.leader: started routine: routine="CA root pruning"
2020-04-30T14:22:02.850+0100 [DEBUG] agent.server: Skipping self join check for node since the cluster is too small: node=hobbiton
2020-04-30T14:22:02.850+0100 [INFO] agent.server: member joined, marking health alive: member=hobbiton
2020-04-30T14:22:02.970+0100 [DEBUG] agent: Skipping remote check since it is managed automatically: check=serfHealth
2020-04-30T14:22:02.970+0100 [INFO] agent: Synced node info
2020-04-30T14:22:02.970+0100 [DEBUG] agent: Node info in sync
2020-04-30T14:22:03.303+0100 [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/vault/core/migration from=127.0.0.1:60616 latency=50.628µs
2020-04-30T14:22:03.305+0100 [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:60616 latency=1.503339ms
2020-04-30T14:22:03.307+0100 [DEBUG] agent.http: Request finished: method=GET url=/v1/kv/vault/core/seal-config from=127.0.0.1:60616 latency=47.199µs
2020-04-30T14:22:04.576+0100 [DEBUG] agent: Skipping remote check since it is managed automatically: check=serfHealth
2020-04-30T14:22:04.576+0100 [DEBUG] agent: Node info in sync
2020-04-30T14:22:04.831+0100 [DEBUG] agent.tlsutil: OutgoingRPCWrapper: version=1
2020-04-30T14:23:02.830+0100 [DEBUG] agent.server: Skipping self join check for node since the cluster is too small: node=hobbiton

ENDS

Including

api_addr = "http://127.0.0.1:8200"

in my Vault config gets rid of the error. But I’m confused, as I thought api_addr was only needed for clusters.
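Once that error is gone, the dev Consul agent should also show the registration itself; a quick way to check (just a sketch, assuming the default service name of vault):

# Check that Vault registered itself as a service in the Consul catalog
consul catalog services
curl 'http://127.0.0.1:8500/v1/catalog/service/vault'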


Hi!

Apologies for being late to the party on this one. Yes, that is a new change that came out with 1.4.0. It wasn’t included in the changelog because the api_addr is described as being used for plugin back-ends, and Consul is a plugin back-end, so if it worked without it before, it was more of a happy accident than an expected behavior.
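For what it’s worth, the warning in the logs above also mentions VAULT_API_ADDR, so setting the environment variable before starting the server should be an equivalent alternative to the config-file entry (a sketch; vault.hcl is just a stand-in for your config file name):

# Equivalent to setting api_addr in the config file, per the warning message
export VAULT_API_ADDR="http://127.0.0.1:8200"
vault server -config=vault.hcl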

Question for you: would you have noticed this change if it had been in the changelog? The upgrade guide? Or in the Vault configuration docs? You aren’t the first person to encounter this, and I’m thinking about where it would be most helpful to document it.

Thanks!


Wow. Didn’t expect that. Thanks for the thorough response. To be honest, this is my first week with Vault (getting trained up to deploy it at customer sites eventually), so I was looking everywhere for answers: the configuration docs, the Google group, the Learn Getting Started guide, the Consul docs, Google, DevOps Stack Exchange. :slight_smile:

I guess I could see arguments for all three places you’ve referenced. I’ll let some more experienced folks weigh in on that, I think.

Cheers,

John


Ah cool! I have the immediate ability to update the configuration docs, so I’ll start with them and perhaps branch out from there. Thanks for the info!


I’m still getting this error:

==> Vault server configuration:

         Api Address: http://127.0.0.1:8200
                 Cgo: disabled
     Cluster Address: https://127.0.0.1:8201
          Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
           Log Level: info
               Mlock: supported: false, enabled: false
       Recovery Mode: false
             Storage: consul (HA available)
             Version: Vault v1.4.1

==> Vault server started! Log data will stream in below:

2020-05-20T14:16:01.882+0200 [INFO]  proxy environment: http_proxy= https_proxy= no_proxy=
2020-05-20T14:16:01.884+0200 [WARN]  no `api_addr` value specified in config or in VAULT_API_ADDR; falling back to detection if possible, but this value should be manually set
2020-05-20T14:16:01.884+0200 [ERROR] service_registration.consul: error running service registration: redirect address must not be empty

My config:

disable_mlock = true

storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}


listener "tcp" {
api_addr = "127.0.0.1:8200"
tls_disable = 1
}

Am I missing something?

Thank you in advance

Unless I’m mistaken, I don’t think the api_addr property belongs in the listener block; you have to set it globally, so directly under your disable_mlock property, for example. The listener block will need an address property, though, like your storage block (with the proper port, of course).
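Put differently, the layout being described would look roughly like this (a sketch reusing the dev addresses from this thread, not a config I’ve run):

disable_mlock = true
api_addr      = "http://127.0.0.1:8200"

storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}

listener "tcp" {
  # the listener's own address, as opposed to the global api_addr
  address     = "127.0.0.1:8200"
  tls_disable = 1
}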


Setting the api_addr globally fixed the issue.

disable_mlock = true
api_addr = "http://127.0.0.1:8200"

storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}

listener "tcp" {
  tls_disable = 1
}

Thanks!