Prometheus Metrics Are Empty

I am trying to enable Prometheus metrics on a Vault cluster running in Kubernetes, installed via the Helm chart. I added this to my Vault configuration:

    telemetry {
      disable_hostname = true
      prometheus_retention_time = "12h"
    }
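With the Helm chart, note that this stanza has to end up in the rendered server configuration (e.g. under `server.standalone.config` or `server.ha.config` in the chart values). If Prometheus will scrape the pods directly without presenting a token, Vault can also allow unauthenticated access to the metrics endpoint from the listener stanza — a sketch, assuming the default tcp listener (the address and TLS settings here are examples; keep whatever your chart renders):

```hcl
listener "tcp" {
  address     = "[::]:8200"
  tls_disable = true  # example only; keep the TLS settings your chart renders

  # Optional: allow GET /v1/sys/metrics without a Vault token,
  # so the Prometheus scraper does not need a token at all.
  telemetry {
    unauthenticated_metrics_access = true
  }
}
```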

I also created a policy (note that Vault policy paths are written without a leading slash):

    path "sys/metrics*" {
      capabilities = ["read"]
    }
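For when you move off the root token, here is a sketch of registering that policy and minting a token that carries it (the file and policy names below are just examples):

```shell
# Write the policy to a file. Vault policy paths have no leading slash.
cat > metrics-policy.hcl <<'EOF'
path "sys/metrics*" {
  capabilities = ["read"]
}
EOF

# Register the policy and create a token with it (needs a running Vault):
#   vault policy write prometheus-metrics metrics-policy.hcl
#   vault token create -policy=prometheus-metrics

cat metrics-policy.hcl
```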

It creates the policy, but that is probably moot since I’m using the root token for now. Anyway, when I do a read operation on the `sys/metrics` path it returns nothing:

    (⎈ |supervisor:vault)~ % vault read /sys/metrics
    (⎈ |supervisor:vault)~ %

Can you offer any guidance on why this is empty? Note: I restarted one of the pods just to make sure the new configuration was picked up.

Try curling your Vault endpoint at `/v1/sys/metrics?format=prometheus` and see what it says. If Prometheus is not enabled, it will come back and tell you so.

If I curl the endpoint, I get lots of metrics. But when I add `format=prometheus` at the end, I get “no matches found”.
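That “no matches found” message is almost certainly from zsh, not Vault: an unquoted `?` is a glob character in zsh, so the shell tries to expand the URL as a filename pattern and aborts before curl even runs. Quoting the URL avoids this — a sketch with a placeholder address (the token header is only needed if unauthenticated metrics access is not enabled):

```shell
# Unquoted, zsh treats '?format=prometheus' as a glob pattern and fails
# with "no matches found" when no filename matches. Quote the whole URL:
VAULT_ADDR="http://127.0.0.1:8200"  # placeholder; use your cluster's address
url="$VAULT_ADDR/v1/sys/metrics?format=prometheus"

# Then:  curl --header "X-Vault-Token: $VAULT_TOKEN" "$url"
echo "$url"
```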

Update: at some point today this started working with curl, but the vault read command still isn’t coming back with anything.