Federated DCs - ACL Tokens with Local scope show up (only) in the Primary DC - how should Nomad deal with that?

Hi *,

I have a set of federated Consul datacenters with ACLs and Connect enabled, and a Nomad cluster on top of them, which I want to use to run Connect-enabled jobs.

Everything looks good. But when I try to start this Nomad job

job "egw" {

  datacenters = ["prod1"]
  region = "de-west"
  node_pool = "static-clients"

  group "egw" {
    count = 2
    network {
      mode = "bridge"
    }

    service {
      name = "egw"
      connect {
        gateway {
          proxy {}
          terminating {
            service {
              name = "postgres"
            }
            service {
              name = "redis-svc"
            }
          }
        }
      }
    }
  }
}

(taken from GitHub - nairnavin/practical-nomad-consul: Set up a 3 Tier application (classic springboot petclinic) in a Nomad / Consul cluster leveraging features of service mesh, ingress and terminating gateways, load balancers etc.), I get the error: "failed to derive SI token: Unexpected response code: 500 (rpc error making call: Local tokens are disabled)".

What I see is that only my primary Consul datacenter shows the "Local" tag in the UI. If I create a token via the UI in one of the secondary datacenters, I cannot see it in the datacenter where it was created, only in the primary datacenter.

This article (How Nomad Manages ACL Tokens/Policies for Consul Service Mesh – HashiCorp Help Center) mentions that the SI tokens Nomad generates are created with Local scope only and are no longer replicated.
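Do I perhaps need ACL token replication enabled in the secondary datacenters, so that local tokens can be created there at all? My (unverified) understanding is that a Consul server config in a secondary DC would then look something like this sketch; the token value is obviously a placeholder:

```hcl
# Sketch of a Consul *server* config in a secondary, federated datacenter
# (my assumption, not confirmed anywhere yet).
acl {
  enabled                  = true
  default_policy           = "deny"
  # Without token replication, secondaries reject local-token creation,
  # which would match the "Local tokens are disabled" RPC error above.
  enable_token_replication = true
  tokens {
    # A token created in the primary DC with ACL write permission;
    # "REPLACEME-replication-token" is a placeholder value.
    replication = "REPLACEME-replication-token"
  }
}
```

If that is the intended setup, I would expect Nomad's locally scoped SI tokens to work in each secondary, but I could not find this spelled out anywhere.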

But how are Nomad-created Connect jobs supposed to work across federated datacenters, then?

Kind regards,
Töns