Nomad ACL returning 403 even with submit-job policy permission

Hi Team,

This is a query regarding nomad job run demo.hcl.
When I run this in the CLI, I see a PUT request sent to the load balancer.

The command works with the admin token I have been using until now. However, when I use a new non-admin token created with this policy:

namespace "default" {
  policy = "read"
  capabilities = ["submit-job","dispatch-job","read-logs","alloc-exec"]
}

the same command returns a 403 error:

nomad job run load_balancer/haproxy.nomad
Error submitting job: Unexpected response code: 403 (Permission denied)

I have tried this with:

nomad -v 
Nomad v1.0.4 (9294f35f9aa8dbb4acb6e85fa88e3e2534a3e41a)

The policy has the submit-job capability, so why does it return 403? Am I missing something in the policy?

In the Jobs section of the Nomad API docs, I only found GET and POST endpoints: Jobs - HTTP API | Nomad by HashiCorp

Thanks

Hi @surajthakur,

The nomad job run command uses the Job Register API endpoint and is a write operation. I would therefore suggest changing policy = "read" to policy = "write". As for the PUT you observed: Nomad's HTTP API generally treats PUT and POST interchangeably, so the CLI's PUT lands on the same Register endpoint that the docs list as POST.

If you wanted to test a policy scoped just to the nomad job run command, it would look something like:

namespace "default" {
  policy       = "write"
  capabilities = ["submit-job"]
}
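
Assuming the policy above is saved as submit-job.hcl (the file and token names here are placeholders), applying it and attaching it to a token would look something like:

# register (or update) the policy under a chosen name
nomad acl policy apply -description "Allow job submission" submit-job ./submit-job.hcl

# create a client token carrying that policy
nomad acl token create -name="submit-job" -policy="submit-job" -type="client"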

Thanks,
jrasell and the Nomad team

Thanks @jrasell for the update. We will update the policy.

Doesn’t that necessarily mean you’re giving more access than should be required? If the ‘submit-job’ capability is all that is required, a full ‘write’ policy is overkill. The fact that this is resulting in a 403 implies that the job register API requires more than just the ‘submit-job’ capability, or that something is broken in the capability system.
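
For what it's worth, the policy field is just coarse-grained shorthand for a bundle of fine-grained capabilities, so one way to avoid over-granting is to drop the shorthand and list capabilities explicitly. A minimal sketch (the exact set needed beyond submit-job is an assumption):

namespace "default" {
  # no coarse policy disposition; grant only the named capabilities
  capabilities = ["list-jobs", "read-job", "submit-job"]
}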

I just ran through a reproduction of this on Nomad 1.0.4, and it worked without having to change the policy on the default namespace to write. I was able to use the policy as provided in the ticket. Perhaps there is something in the job itself that is running afoul of a different permission?

I have my reproduction script in this gist. I’m not sure what might have happened in @surajthakur’s environment.
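
Roughly, the check boils down to creating a token from the ticket's policy and submitting a stock job with it, something like this (the token secret is a placeholder):

# generate a stock example job (writes example.nomad)
nomad job init

# submit it with the restricted token
export NOMAD_TOKEN=<secret-id of the restricted token>
nomad job run example.nomad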

Wanted to at least chime in with the results of my experiment.

Regards,
Charlie V.

Just in case anyone else is having this issue and is using host volumes, I had to add:

host_volume "*" {
  policy = "write"
}

So the default namespace read policy with the submit-job capability, plus a wildcard host_volume block with write policy, was enough.
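
Putting the thread together, the full working policy would look something like this (the namespace block from the original question plus the host volume rule):

namespace "default" {
  policy       = "read"
  capabilities = ["submit-job", "dispatch-job", "read-logs", "alloc-exec"]
}

# required when the submitted job mounts host volumes
host_volume "*" {
  policy = "write"
}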