Unable to access consul service data in nomad job spec template

I have a brand-new pair of Nomad and Consul clusters (first time I've set this up). Jobs created in Nomad populate Consul services as expected. I have used nomad setup consul to authorise new jobs in Nomad to generate Consul tokens. These get generated properly (according to the Consul UI, which lists a new token for each new task).

However, if I try to look up a Consul service in the job spec template, it never works.

read_config:
{{ range service "influxdb" }}
  - url: {{ .Address }}:{{ .Port }}
{{ end }}
# test

resolves to:

read_config:
# test

To investigate this further, I tried the nomad-cluster job (as one that I didn’t create) - same results.

Interestingly, if you use:

{{ range services }}
{{ .Name }}
{{ end }}

it will print out the names of all expected services - so it is appropriately connecting to Consul, right?

If I query the service via the HTTP API, it returns the correct info for the service.
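For reference, the HTTP API check was along these lines (the agent address and token variable are assumptions; `/v1/catalog/service/<name>` is the standard catalog endpoint):

```shell
# Look up the influxdb service in the Consul catalog via the local agent.
# CONSUL_HTTP_TOKEN is assumed to hold a token with read access.
curl --silent \
  --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" \
  http://127.0.0.1:8500/v1/catalog/service/influxdb
```

This returns the expected address and port for the service.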

Weirdly, if I enable debug logging, the config for the call to consul-template is logged. If I log in to the UI using that token, I am not able to read the services:

There don't seem to be any registered services in this Consul cluster, or you may not have service:read and node:read access to this view. Use Terraform, Kubernetes CRDs, Vault, or the Consul CLI to register Services.
The policy applied to that token is:
service_prefix "" {
  policy = "read"
}

key_prefix "" {
  policy = "read"
}

service "" {
  policy = "read"  
}

node "" {
  policy = "read"
}

The first two rules were added by nomad setup consul. I added the latter two in direct response to the UI message, though I would expect service and service_prefix to behave the same.
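Worth noting: consul-template's `service` function queries the health endpoint (`/v1/health/service/<name>`) rather than the catalog, so the failure can be reproduced outside the template like this (the token variable is hypothetical, standing in for the task token seen in the debug logs):

```shell
# Reproduce the query that {{ service "influxdb" }} issues, using the
# task's Consul token. TASK_TOKEN is a placeholder for the logged token.
# consul-template filters to passing instances by default, hence ?passing.
curl --silent \
  --header "X-Consul-Token: ${TASK_TOKEN}" \
  "http://127.0.0.1:8500/v1/health/service/influxdb?passing"
```

With the policy above, this returns an empty result rather than the service instances.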

Any idea what I am missing?

Hi @joe-bowman,

Welcome to the HashiCorp Forums!

Instead of the newly added rules, add the following, which should work for you. Service health lookups return node data as well as service data, so the token needs read access on nodes - and a node "" rule only matches a node whose name is literally the empty string, whereas node_prefix "" matches every node.

node_prefix "" {
  policy = "read"
}

I hope this helps.
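For completeness, a minimal combined policy covering both the template lookups and the rules added by nomad setup consul would look like this (a sketch; prefix values unchanged from the original policy):

```hcl
service_prefix "" {
  policy = "read"
}

key_prefix "" {
  policy = "read"
}

node_prefix "" {
  policy = "read"
}
```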

Works perfectly, thank you!