Best practices for deploying Patroni on top of a two-datacenter federated Consul cluster

Hi all. I am planning to deploy Patroni using a two-DC federated Consul setup. I already have the federated clusters working:

[erauser@hvidldiodb12 ~]$ consul members -wan
Node                           Address             Status  Type    Build   Protocol  DC   Partition  Segment
hvidldiodb12.phx.aexp.com.dc1  10.13.102.155:8302  alive   server  1.15.2  2         dc1  default    <all>
hvidldiodb13.phx.aexp.com.dc1  10.13.102.156:8302  alive   server  1.15.2  2         dc1  default    <all>
hvidldiodb14.phx.aexp.com.dc1  10.13.102.157:8302  alive   server  1.15.2  2         dc1  default    <all>
hvidldiodb15.phx.aexp.com.dc2  10.13.102.158:8302  alive   server  1.15.2  2         dc2  default    <all>
hvidldiodb16.phx.aexp.com.dc2  10.13.102.159:8302  alive   server  1.15.2  2         dc2  default    <all>
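
For completeness, here is roughly how the WAN join is configured on the servers. This is a simplified sketch assuming the plain retry_join_wan approach (no mesh gateways); the file layout and any options not shown here are omitted:

# consul.hcl on a dc2 server -- simplified sketch, not the full config
datacenter     = "dc2"
server         = true
retry_join_wan = ["10.13.102.155", "10.13.102.156", "10.13.102.157"]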

The Patroni services run on the same hosts as the Consul agents.
Here is the part of the Patroni configuration related to Consul:

namespace: /service/
scope: tddiodb11-consul
name: host_1
log:
    dir: /var/log/patroni
    level: DEBUG
restapi:
    connect_address: 10.13.102.155:8679
    listen: 10.13.102.155:8679

consul:
    host: 127.0.0.1:8500
    scheme: http
    dc: dc1
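
On the dc2 hosts (used further below) the configuration is identical apart from the host-specific fields. For hvidldiodb15 it would look roughly like this (the member name host_4 is just a placeholder, not something from my actual setup):

name: host_4    # placeholder name; IP taken from the members output above
restapi:
    connect_address: 10.13.102.158:8679
    listen: 10.13.102.158:8679

consul:
    host: 127.0.0.1:8500
    scheme: http
    dc: dc1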

So far so good: I managed to start three nodes (all from dc1) successfully:

+ Cluster: tddiodb11-consul ------------+---------+----+-----------+
| Member | Host          | Role         | State   | TL | Lag in MB |
+--------+---------------+--------------+---------+----+-----------+
| host_1 | 10.13.102.155 | Replica      | running |  4 |         0 |
| host_2 | 10.13.102.156 | Leader       | running |  4 |           |
| host_3 | 10.13.102.157 | Sync Standby | running |  4 |         0 |
+--------+---------------+--------------+---------+----+-----------+

But when attempting to start Patroni on a node from dc2 (same Patroni configuration, so still pointing at dc: dc1), this happens:

2023-05-11 10:04:59,733 DEBUG: http://127.0.0.1:8500 "PUT /v1/session/create?dc=dc1 HTTP/1.1" 500 85
2023-05-11 10:04:59,733 WARNING: Retry got exception: 500 rpc error making call: rpc error making call: apply failed: Missing node registration
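
(In case it helps anyone reproduce this outside Patroni: the failing log line corresponds to the raw Consul session API call below, issued against the local agent on the dc2 host.)

curl -sS -X PUT "http://127.0.0.1:8500/v1/session/create?dc=dc1"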

I am not sure what is missing here. It is actually possible to write from dc2 into dc1 using consul kv put:

[erauser@hvidldiodb15 patroni-consul]$ hostname
hvidldiodb15.phx.aexp.com
[erauser@hvidldiodb15 patroni-consul]$ consul members -wan
Node                           Address             Status  Type    Build   Protocol  DC   Partition  Segment
hvidldiodb12.phx.aexp.com.dc1  10.13.102.155:8302  alive   server  1.15.2  2         dc1  default    <all>
hvidldiodb13.phx.aexp.com.dc1  10.13.102.156:8302  alive   server  1.15.2  2         dc1  default    <all>
hvidldiodb14.phx.aexp.com.dc1  10.13.102.157:8302  alive   server  1.15.2  2         dc1  default    <all>
hvidldiodb15.phx.aexp.com.dc2  10.13.102.158:8302  alive   server  1.15.2  2         dc2  default    <all>
hvidldiodb16.phx.aexp.com.dc2  10.13.102.159:8302  alive   server  1.15.2  2         dc2  default    <all>
[erauser@hvidldiodb15 patroni-consul]$ consul kv put -datacenter=dc1 mydata 2
Success! Data written to: mydata
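
Since the error mentions a missing node registration, I assume it may be relevant to compare what each datacenter's catalog actually knows about. The registered nodes can be listed per datacenter with:

consul catalog nodes -datacenter=dc1
consul catalog nodes -datacenter=dc2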

Any hints will be much appreciated.

Regards,
Gerardo