Considerations when grouping two EKS clusters into one Consul cluster (admin partitions)

I am grouping two EKS clusters into one Consul cluster.
I am using Consul's admin partitions feature, and I have run several tests, but communication between services is still not working smoothly.
The process I followed is as follows.

  1. Deploy the Consul servers on the first EKS cluster
  2. Create the proxy-defaults config entry
  3. Deploy the Consul clients on the second EKS cluster (joined to the first cluster's servers)
  4. Deploy the application on the first EKS cluster, along with its service-defaults
  5. Deploy the application on the second EKS cluster, along with its service-defaults
  6. Deploy an exported-services config entry so the application on the second EKS cluster can communicate with the first (a sketch of this config entry follows the list)
  7. Create the ingress gateway
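
For context, the exported-services step was done with a Kubernetes CRD roughly like the one below; the service and partition names here are placeholders rather than my actual values.

```yaml
# ExportedServices config entry making a service owned by the second cluster's
# partition visible to the default partition (names below are illustrative).
# Applied in the second EKS cluster.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ExportedServices
metadata:
  name: second-partition        # must match the partition that owns the services
spec:
  services:
    - name: backend             # service running in the second EKS cluster
      consumers:
        - partition: default    # partition of the first EKS cluster
```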

Without ACLs, I can communicate between the services normally here.
However, when ACLs are enabled on the first EKS cluster and the partition token is added to the second EKS cluster, several ACL-related errors occur.

I must enable ACLs (for OIDC integration, audit logging, etc.).
Please help me, everyone.

To address ACL issues when grouping two EKS clusters into one Consul cluster with admin partitions, ensure the following:

  1. Configure ACLs correctly: bootstrap the ACL system on the server cluster, define appropriate policies and roles, and use partition-specific tokens. The partition token has to be created on the first (server) cluster and supplied to the second cluster as a Kubernetes secret referenced in its Helm values (a sketch follows this list).
  2. Verify that exported services and service intentions are properly set up for cross-partition communication; with ACLs enabled and a default-deny policy, a missing intention is a common cause of blocked traffic (see the ServiceIntentions sketch below).
  3. Check the OIDC integration and ensure identities from the auth method are bound to the intended Consul ACL policies and roles.
  4. Ensure network policies and DNS settings don’t block communication or misresolve names.
  5. Troubleshoot by examining agent and sidecar proxy logs for ACL "permission denied" errors and reviewing the permissions attached to each token.
  6. Use a Consul Enterprise version that fully supports admin partitions (1.11 or later) and addresses known ACL issues.
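
For point 1, here is a minimal sketch of what the second (workload) cluster's Helm values can look like once ACLs and admin partitions are enabled. The partition name, secret names, and addresses below are assumptions, and the exact value keys can differ between consul-k8s chart versions, so treat this as an outline rather than a drop-in config:

```yaml
# Minimal sketch of Helm values for the second EKS cluster (the workload cluster).
# Partition name, secret names, and addresses are placeholders.
global:
  enableConsulNamespaces: true
  adminPartitions:
    enabled: true
    name: "second-partition"            # partition owned by this cluster
  tls:
    enabled: true
    caCert:
      secretName: consul-ca-cert        # CA cert copied from the server cluster
      secretKey: tls.crt
  acls:
    manageSystemACLs: true
    partitionToken:                     # token created on the server cluster and
      secretName: consul-partitions-acl-token   # copied here as a k8s secret
      secretKey: token                  # (some chart versions use bootstrapToken)
server:
  enabled: false                        # servers run only in the first EKS cluster
externalServers:
  enabled: true
  hosts: ["<consul-server-address>"]    # server address reachable from this cluster
  k8sAuthMethodHost: "<this-cluster-api-server-url>"
connectInject:
  enabled: true
```

The partition token itself is created against the server cluster (via the Consul CLI or HTTP API, with sufficient permissions for the partition) and then stored in the second cluster as the Kubernetes secret referenced above.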
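For point 2, a cross-partition intention needs to name the source partition explicitly. A hedged sketch, applied in the cluster that owns the destination service (service and partition names are placeholders):

```yaml
# ServiceIntentions allowing a service in the default partition (first EKS cluster)
# to call a service exported from the second partition.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: backend
spec:
  destination:
    name: backend                # service in the second EKS cluster's partition
  sources:
    - name: frontend             # calling service in the first EKS cluster
      partition: default
      action: allow
```

If this intention is missing, or the source partition does not match, the connection is rejected once ACLs are running with a default-deny policy, which would produce the kind of errors described above.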