(Connect) Badly configuring a service's advertised port with an upstream allows ingress traffic to bypass it and reach the upstream service

Whilst playing with Connect and figuring out the various options (via Nomad), I came across the following situation, which feels unsafe but could easily be typo’d in config…

Normally: I’ve got an ingress gateway which sees a service A registered in Consul on :5000… Instead of dialing :5000 directly, it sends the request to :12345 and hits the proxy sidecar, with a header (or something else) telling it the request is meant for port 5000.
The sidecar sends it to localhost:5000, hitting the service, which in turn calls an upstream service on localhost:5002. That goes back through the same proxy, which sends it on to upstream service B… fine.
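For concreteness, the working setup described above would look roughly like this in the Nomad job spec (a sketch only, using the names from the bad config at the end of this post; the ingress gateway configuration is omitted):

```hcl
service {
  name = "test-client"    # service A, actually listening on 5000
  port = "5000"

  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "test-service"   # upstream service B
          local_bind_port  = 5002             # the app dials localhost:5002
        }
      }
    }
  }
}
```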

BUT, if I “accidentally” advertise service A in Consul as being on port 5002, then:
The gateway sends the request to :12345, with the header saying it’s meant for 5002…
The sidecar sends it to localhost:5002, which misses the service entirely and instead loops straight back into the proxy itself, which sees it as a request for the upstream… That upstream request goes through the same intentions checks, which all pass, and the upstream proxy sends it on to upstream service B!

So despite only having an ingress to service A, by “badly” configuring the port I advertise for service A in Consul, it seems I can send requests to service B…

Note: I haven’t actually managed a successful request, because service B is gRPC and I think something in service B isn’t happy with my HTTP formatting, but I can see in the logs that proxy B is seeing the request and successfully authing it.

Is there a use case for this, or is it something that could potentially be picked up when trying to deploy a job: specifying a service port that matches an upstream’s local bind port would cause an error at deploy time. Example bad config:

service {
  name = "test-client"
  port = "5002"   # <-- should be 5000

  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "test-service"
          local_bind_port  = 5002
        }
      }
    }
  }
}
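The deploy-time check suggested above could, in principle, be quite simple: flag any job where the advertised service port collides with an upstream’s local_bind_port. A minimal sketch (in Python, purely illustrative; this is not Nomad’s actual validation code, and the function name is my own):

```python
# Hypothetical deploy-time check: reject a service whose advertised port
# collides with any of its sidecar's upstream local_bind_port values,
# since traffic for the service would loop back into the proxy instead.

def find_port_collisions(service_port, upstream_bind_ports):
    """Return the upstream bind ports that shadow the service's own port."""
    return [p for p in upstream_bind_ports if p == int(service_port)]

# Values from the bad config above: service advertised on 5002,
# upstream test-service also bound to localhost:5002.
collisions = find_port_collisions("5002", [5002])
print(collisions)  # -> [5002], so the job should fail validation
```

With the corrected config (service port 5000), the same check returns an empty list and the job would deploy as normal.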