Bootstrap secondary cluster in mesh gateway federation

Hi,

I am trying to connect multiple Consul clusters with mesh gateway federation.
My main issue is how to bootstrap the secondary clusters so that I can create tokens using Vault.
My primary datacenter works well with the mesh gateway, but how do I bootstrap the secondary cluster so that I can create a token for the secondary cluster's mesh gateway and the default tokens for the secondary Consul cluster?
Setting the replication token via the HTTP API requires a Consul token if ACLs are enabled in the cluster.
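
For reference, the API call I mean is the agent token endpoint; roughly something like this (the address and the token values are placeholders for my setup):

curl -k \
  --header "X-Consul-Token: <existing-management-token>" \
  --request PUT \
  --data '{"Token": "<replication-token-secret>"}' \
  https://127.0.0.1:8501/v1/agent/token/replication

which is exactly the chicken-and-egg problem: the call itself already needs a valid Consul token.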

Any help is highly appreciated

Since last week I have been investigating this issue further. I added debug logging to the consul-envoy service on the federation mesh gateway.

The current state is that the primary cluster can be reached by the secondary clusters, but the replication is not taking place. Looking at the consul-envoy logs, there are these lines about the secondary cluster:

[2023-08-14 11:56:29.089][12849][trace][filter] [source/extensions/filters/network/sni_cluster/sni_cluster.cc:16] [C10170] sni_cluster: new connection with server name consul-server-0-lon1.server.lon1.consul
[2023-08-14 11:56:29.089][12849][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:394] [C10170] Cluster not found consul-server-0-lon1.server.lon1.consul and no on demand cluster set.
[2023-08-14 11:56:29.089][12849][debug][connection] [source/common/network/connection_impl.cc:139] [C10170] closing data_to_write=0 type=1
[2023-08-14 11:56:29.089][12849][debug][connection] [source/common/network/connection_impl.cc:250] [C10170] closing socket: 1
[2023-08-14 11:56:29.089][12849][trace][connection] [source/common/network/connection_impl.cc:423] [C10170] raising connection event 1

Since replication has not finished, I can't start the Consul mesh gateway in the secondary cluster.

I use the latest Consul version and the compatible Envoy version.

Did anyone face the same issue and find a solution?

Hi @fmp88

For replication to occur, you can set the replication token in the secondary servers' configuration files under acl.tokens.replication. Once you set this and restart the Consul servers in the secondary, replication from the primary will start.

If you don’t want to set the token in the config file, you will have to first manually bootstrap the cluster using consul acl bootstrap and then set the replication token using consul acl set-agent-token replication <secret-id>.
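
As a rough sketch of that flow (it assumes you already have a replication policy in the primary; the policy name and secret ID below are placeholders for your setup):

# in the primary DC: create a token to be used for replication
consul acl token create -description "ACL replication token" -policy-name "<replication-policy>"

# on each secondary server: set it as the replication token
consul acl set-agent-token replication <secret-id-of-that-token>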

ref: Agents - Configuration File Reference | Consul | HashiCorp Developer

I hope this helps.

Hi @Ranjandas,

thank you for your response.

So my process so far is that I start and set up the primary cluster, and then the secondary cluster with the following configuration, including the replication token:

datacenter = "lon1"
data_dir = "/opt/consul/data"

primary_datacenter = "fra1"

tls {
  defaults {
    verify_incoming = false
    verify_outgoing = true

    #ca_file = "/opt/consul/agent-certs/ca.pem"
    ca_file = "/usr/local/share/ca-certificates/vault-ca.crt"
    cert_file = "/opt/consul/agent-certs/agent.pem"
    key_file = "/opt/consul/agent-certs/agent.key"
  }

  internal_rpc {
    verify_server_hostname = true
  }
}

bind_addr = "{{ GetPrivateInterfaces | include \"network\" \"192.168.32.0/24\" | attr \"address\" }}"

retry_join = ["192.168.32.8"]

encrypt = "...="

acl {
  enabled = true
  default_policy = "deny"
  enable_token_persistence = true

  enable_token_replication = true

  tokens = {
    replication = "..."
  }

}


primary_gateways = ["...:8443"]

connect {
  enabled = true

  enable_mesh_gateway_wan_federation = true

}

config_entries {
  bootstrap = [
    {
      kind = "proxy-defaults"
      name = "global"
      mesh_gateway = {
        mode = "local"
      }
    }
  ]
}

ports {
  "http"  = -1
  "https" = 8501

  "grpc" = -1
  "grpc_tls" = 8503
}

telemetry {
  "prometheus_retention_time" = "24h"
  "enable_host_metrics" = true
}

ui_config {
  enabled = true
}
client_addr = "0.0.0.0"

server = true
bootstrap_expect = 5

log_file  = "/var/log/consul/ops.log"
log_level = "debug"

I updated all secondary clusters and restarted them, at which point I can see the secondary clusters in the dropdown menu of the primary cluster's UI.
But when looking at the replication status on the secondary cluster, I only see this:

{"Enabled":true,"Running":true,"SourceDatacenter":"fra1","ReplicationType":"tokens","ReplicatedIndex":0,"ReplicatedRoleIndex":0,"ReplicatedTokenIndex":0,"LastSuccess":"0001-01-01T00:00:00Z","LastError":"2023-08-15T08:07:28Z","LastErrorMessage":"failed to retrieve remote ACL policies: rpc error getting client: failed to get conn: Remote DC has no server currently reachable"}

and in the primary Consul cluster envoy logs

[2023-08-15 08:09:37.717][2546][debug][filter] [source/extensions/filters/listener/tls_inspector/tls_inspector.cc:117] tls:onServerName(), requestedServerName: consul-server-1-lon1.server.lon1.consul
[2023-08-15 08:09:37.717][2546][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:233] [C24098] new tcp proxy session
[2023-08-15 08:09:37.717][2546][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:394] [C24098] Cluster not found consul-server-1-lon1.server.lon1.consul and no on demand cluster set.

If any other information is missing, please let me know.

@fmp88,

Can your secondary Consul servers talk to the primary mesh gateways?

The initial replication happens with the Consul servers in the secondary talking directly to the mesh gateways in the primary.

Once replication has started, you will be able to start the mesh gateways in the secondary, and later the communication to the primary switches to using the secondary mesh gateways.
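
A quick way to check that path is a plain TCP test from each secondary Consul server to the primary mesh gateway's WAN address (the IP is a placeholder; 8443 is the port from your config):

nc -vz <primary-mesh-gateway-wan-ip> 8443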

@Ranjandas,

At least some communication between the secondary Consul servers and the primary Consul servers through the primary mesh gateway works, because I can see the secondary cluster in the dropdown UI menu of the primary cluster. But somehow the replication data on the way back does not reach the secondary clusters. That's when I see timeouts in the logs of both clusters (primary and secondary).

I opened all TCP/UDP ports just for debugging, but this also didn't help.

@fmp88, could you paste the logs from the leader here? In addition, I would recommend restarting the leader agent to see whether the issue gets resolved, as most of the secondary initialization routines happen on the leader.
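
If you are not sure which server is currently the leader, something like the following on any server should show it (assuming the CLI is pointed at your HTTPS port and has a token with operator read permissions):

consul operator raft list-peers -http-addr=https://127.0.0.1:8501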

@Ranjandas, this is the log of the primary cluster leader:

2023-08-15T08:37:25.708Z [INFO]  agent.server.memberlist.wan: memberlist: Suspect consul-server-0-lon1.lon1 has failed, no acks received
2023-08-15T08:37:25.710Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send UDP ping: read tcp 192.168.30.13:36604->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:37:25.713Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send indirect UDP ping: read tcp 192.168.30.13:36606->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:37:25.716Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send indirect UDP ping: read tcp 192.168.30.13:36618->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:37:25.718Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send indirect UDP ping: read tcp 192.168.30.13:36626->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:37:26.209Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.32.9:8302: read tcp 192.168.30.13:36628->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:37:26.211Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.32.11:8302: read tcp 192.168.30.13:36642->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:37:26.213Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.8:8302: read tcp 192.168.30.13:36650->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:37:26.516Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to forward ack: read tcp 192.168.30.13:36666->192.168.30.14:8443: read: connection reset by peer from=192.168.32.8:8302
2023-08-15T08:37:26.756Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to forward ack: read tcp 192.168.30.13:36680->192.168.30.14:8443: read: connection reset by peer from=192.168.32.10:8302

After restart and new leader election

2023-08-15T08:43:06.134Z [INFO]  agent.server: Handled event for server in area: event=member-join server=consul-server-1-lon1.lon1 area=wan
2023-08-15T08:43:06.396Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.9:8302: read tcp 192.168.30.12:42854->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:43:06.399Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.11:8302: read tcp 192.168.30.12:42856->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:43:06.896Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.32.10:8302: read tcp 192.168.30.12:42860->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:43:06.898Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.12:8302: read tcp 192.168.30.12:42872->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:43:06.901Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.32.12:8302: read tcp 192.168.30.12:42888->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:43:07.396Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.8:8302: read tcp 192.168.30.12:42898->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:43:07.398Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.32.9:8302: read tcp 192.168.30.12:42908->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:43:07.896Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.10:8302: read tcp 192.168.30.12:42920->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:43:07.899Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.32.12:8302: read tcp 192.168.30.12:42928->192.168.30.14:8443: read: connection reset by peer
2023-08-15T08:43:07.901Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.8:8302: read tcp 192.168.30.12:42930->192.168.30.14:8443: read: connection reset by peer

and envoy logs

[2023-08-15 08:44:37.688][2546][debug][connection] [source/common/network/connection_impl.cc:250] [C40672] closing socket: 1
[2023-08-15 08:44:37.858][2546][debug][filter] [source/extensions/filters/listener/tls_inspector/tls_inspector.cc:117] tls:onServerName(), requestedServerName: consul-server-1-lon1.server.lon1.consul
[2023-08-15 08:44:37.858][2546][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:233] [C40673] new tcp proxy session
[2023-08-15 08:44:37.858][2546][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:394] [C40673] Cluster not found consul-server-1-lon1.server.lon1.consul and no on demand cluster set.
[2023-08-15 08:44:37.858][2546][debug][connection] [source/common/network/connection_impl.cc:139] [C40673] closing data_to_write=0 type=1

This is the configuration for the consul-envoy service on the primary Consul cluster:

[Unit]
Description=Consul Envoy
After=syslog.target network.target

[Service]
ExecStart=/usr/bin/consul \
          connect envoy \
          -http-addr=https://127.0.0.1:8501 \
          -ca-file=/opt/consul/agent-certs/ca.pem \
          -client-cert=/opt/consul/agent-certs/agent.pem \
          -client-key=/opt/consul/agent-certs/agent.key \
          -gateway=mesh -register \
          -service "consul-mesh-gw-fra1" \
          -address "192.168.30.14:8443" \
          -wan-address "...:8443" \
          -expose-servers \
          -token-file /etc/consul.d/tokens/gateway \
          -- -l debug \
          --log-path /opt/consul/envoy_logs.txt
ExecStop=/bin/sleep 5
Restart=always

[Install]
WantedBy=multi-user.target
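
For completeness: I assume this unit is installed under a name like consul-envoy.service (the name is just my choice here) and managed with the usual systemd commands:

systemctl daemon-reload
systemctl enable --now consul-envoy.service
journalctl -u consul-envoy.service -f    # follow the gateway logs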

@fmp88, sorry, that wasn't clear: I wanted you to do all those steps on the secondary cluster. Could you please share the logs from the secondary leader and then restart it?

@Ranjandas my mistake

2023-08-15T10:12:42.710Z [WARN]  agent.server.rpc: RPC request to DC is currently failing as no server can be reached: datacenter=fra1
2023-08-15T10:12:42.711Z [ERROR] agent.acl: Error resolving token: error="Error communicating with the ACL Datacenter: Remote DC has no server currently reachable"
2023-08-15T10:12:42.943Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:12:43.945Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:12:44.297Z [WARN]  agent.server.rpc: RPC request to DC is currently failing as no server can be reached: datacenter=fra1
2023-08-15T10:12:44.298Z [ERROR] agent.acl: Error resolving token: error="Error communicating with the ACL Datacenter: Remote DC has no server currently reachable"
2023-08-15T10:12:44.298Z [WARN]  agent.server.rpc: RPC request to DC is currently failing as no server can be reached: datacenter=fra1
2023-08-15T10:12:44.299Z [ERROR] agent.acl: Error resolving token: error="Error communicating with the ACL Datacenter: Remote DC has no server currently reachable"
2023-08-15T10:12:44.299Z [WARN]  agent: Coordinate update blocked by ACLs: accessorID=primary-dc-down
2023-08-15T10:12:44.946Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:12:45.948Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:12:46.006Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send UDP ping: Remote DC has no server currently reachable
2023-08-15T10:12:46.532Z [DEBUG] agent.server.serf.wan: serf: forgoing reconnect for random throttling
2023-08-15T10:12:46.949Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:12:47.250Z [INFO]  agent: (WAN) joining: wan_addresses=["*.fra1/192.0.2.2"]
2023-08-15T10:12:47.251Z [DEBUG] agent.server.memberlist.wan: memberlist: Failed to join 192.0.2.2:8302: Remote DC has no server currently reachable
2023-08-15T10:12:47.252Z [WARN]  agent: (WAN) couldn't join: number_of_nodes=0
  error=
  | 1 error occurred:
  | \t* Failed to join 192.0.2.2:8302: Remote DC has no server currently reachable
  | 
  
2023-08-15T10:12:47.252Z [WARN]  agent: Join cluster failed, will retry: cluster=WAN retry_interval=30s
  error=
  | 1 error occurred:
  | \t* Failed to join 192.0.2.2:8302: Remote DC has no server currently reachable
  | 
  
2023-08-15T10:12:47.951Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:12:48.952Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:12:49.125Z [DEBUG] agent.server.memberlist.lan: memberlist: Initiating push/pull sync with: consul-server-0-lon1 192.168.32.11:8301

After restarting Consul:

2023-08-15T10:14:59.858Z [DEBUG] agent.server.cert-manager: server management token watch fired - resetting leaf cert watch
2023-08-15T10:14:59.858Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:14:59.862Z [WARN]  agent.leaf-certs: handling error in Manager.Notify: error="rpc error making call: CA has not finished initializing" index=1
2023-08-15T10:14:59.862Z [DEBUG] agent.server.cert-manager: got cache update event: correlationID=leaf error="rpc error making call: CA has not finished initializing"
2023-08-15T10:14:59.862Z [ERROR] agent.server.cert-manager: failed to handle cache update event: error="leaf cert watch returned an error: rpc error making call: CA has not finished initializing"
2023-08-15T10:14:59.863Z [WARN]  agent.leaf-certs: handling error in Manager.Notify: error="rpc error making call: CA has not finished initializing" index=1
2023-08-15T10:14:59.865Z [WARN]  agent.leaf-certs: handling error in Manager.Notify: error="rpc error making call: CA has not finished initializing" index=1
2023-08-15T10:14:59.866Z [WARN]  agent.leaf-certs: handling error in Manager.Notify: error="rpc error making call: CA has not finished initializing" index=1
2023-08-15T10:15:00.860Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:15:01.863Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:15:02.865Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:15:03.474Z [WARN]  agent.leaf-certs: handling error in Manager.Notify: error="rpc error making call: CA has not finished initializing" index=1
2023-08-15T10:15:03.807Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.8:8302: Remote DC has no server currently reachable
2023-08-15T10:15:03.808Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.12:8302: Remote DC has no server currently reachable
2023-08-15T10:15:03.866Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:15:04.307Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.12:8302: Remote DC has no server currently reachable
2023-08-15T10:15:04.807Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.12:8302: Remote DC has no server currently reachable
2023-08-15T10:15:04.868Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:15:05.807Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send UDP ping: Remote DC has no server currently reachable
2023-08-15T10:15:05.870Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:15:06.873Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:15:07.866Z [INFO]  agent.server.serf.wan: serf: EventMemberJoin: consul-server-2-syd1.syd1 192.168.33.10
2023-08-15T10:15:07.868Z [INFO]  agent.server: Handled event for server in area: event=member-join server=consul-server-2-syd1.syd1 area=wan
2023-08-15T10:15:07.882Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:15:08.307Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.10:8302: Remote DC has no server currently reachable
2023-08-15T10:15:08.508Z [WARN]  agent.leaf-certs: handling error in Manager.Notify: error="rpc error making call: CA has not finished initializing" index=1
2023-08-15T10:15:08.883Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:15:09.306Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send gossip to 192.168.33.9:8302: Remote DC has no server currently reachable
2023-08-15T10:15:09.885Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:15:10.807Z [ERROR] agent.server.memberlist.wan: memberlist: Failed to send UDP ping: Remote DC has no server currently reachable
2023-08-15T10:15:10.887Z [DEBUG] agent.server.cert-manager: CA has not finished initializing
2023-08-15T10:15:10.917Z [DEBUG] agent.router.manager: cycled away from server: server="consul-server-1-fra1.fra1 (Addr: tcp/192.168.30.13:8300) (DC: fra1)"
2023-08-15T10:15:10.918Z [ERROR] agent.server.rpc: RPC failed to server in DC: server=192.168.30.13:8300 datacenter=fra1 method=ACL.TokenRead error="rpc error getting client: failed to get conn: dial tcp 192.168.32.9:0->138.197.180.162:8443: i/o timeout"
2023-08-15T10:15:10.918Z [ERROR] agent.acl: Error resolving token: error="Error communicating with the ACL Datacenter: rpc error getting client: failed to get conn: dial tcp 192.168.32.9:0->138.197.180.162:8443: i/o timeout"
2023-08-15T10:15:10.919Z [WARN]  agent: Coordinate update blocked by ACLs: accessorID=primary-dc-down
2023-08-15T10:15:10.919Z [ERROR] agent.acl: Error resolving token: error="Error communicating with the ACL Datacenter: rpc error getting client: failed to get conn: dial tcp 192.168.32.9:0->138.197.180.162:8443: i/o timeout"

Is 138.197.180.162 your primary DC mesh gateway? If yes, I think there is some network issue, as the log shows an i/o timeout.

Can you verify the network connectivity once again?

I tried netcat to check whether the port is reachable and open, from the secondary to the primary mesh gateway, and it works.

This IP “192.0.2.2” surprises me, because it is not part of any CIDR range I use.

2023-08-15T11:59:56.553Z [WARN]  agent: Join cluster failed, will retry: cluster=WAN retry_interval=30s
  error=
  | 1 error occurred:
  | \t* Failed to join 192.0.2.2:8302: Remote DC has no server currently reachable
  | 

It is an IP address specifically assigned for use in documentation or example code.

It shows you have configuration that was taken directly from some sort of example, and not updated to be appropriate for your environment.

Hi @maxb,
interesting point about that being a special address.

I posted my configuration earlier in this thread; if you see anything wrong, please let me know.

Thanks

That address is just a placeholder IP that is used in WAN Federation.

ref: https://github.com/hashicorp/consul/blob/217107f6276f26ada3cca2fedcd1a8fb50180e17/agent/retry_join.go#L65-L70

Did you try netcat from the secondary Consul servers (not just the secondary mesh gateways) to the primary mesh gateways? Is it working?

Hi @fmp88,

From your DC names and the public IP, I guessed that you are running on DigitalOcean, so I tested the same setup and ended up with the same issue as yours.

I figured out that it is due to how the droplet networking is set up. Consul is trying to reach the primary mesh gateway via the private interface on the VM, which doesn't work.

You can test the same behaviour by using curl to force the traffic via a specific interface:

curl --interface eth0 www.google.com   <== this works
curl --interface eth1 www.google.com   <== this doesn't
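
Another way to see which interface the kernel picks for the gateway address is a routing lookup (using the primary mesh gateway IP from your logs as an example):

ip route get 138.197.180.162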

The fix is to add the following iptables rule so that Consul is able to talk to the primary mesh gateway and trigger the initial replication.

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Once the above rule is added, you will see that the replication succeeds. After this, you can go ahead and start the secondary mesh gateway, and all requests will then start to flow through the mesh gateways.

Once this is done, you can remove the POSTROUTING rule (iptables -F -t nat) and everything will continue to work.
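
If you prefer not to flush the whole nat table, you can also list and delete just that one rule (same rule spec as above):

iptables -t nat -L POSTROUTING -n -v --line-numbers
iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE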

root@ubuntu-s-1vcpu-1gb-lon1-01:~# curl https://localhost:8501/v1/acl/replication?pretty -k
{
    "Enabled": true,
    "Running": true,
    "SourceDatacenter": "fra1",
    "ReplicationType": "tokens",
    "ReplicatedIndex": 62,
    "ReplicatedRoleIndex": 1,
    "ReplicatedTokenIndex": 334,
    "LastSuccess": "2023-08-16T02:43:33Z",
    "LastError": "2023-08-16T02:42:55Z",
    "LastErrorMessage": "failed to retrieve remote ACL tokens: rpc error getting client: failed to get conn: Remote DC has no server currently reachable"
}
root@ubuntu-s-1vcpu-1gb-lon1-01:~# consul members -wan
Node            Address         Status  Type    Build   Protocol  DC    Partition  Segment
server-01.fra1  10.19.0.5:8302  alive   server  1.16.1  2         fra1  default    <all>
server-01.lon1  10.16.0.5:8302  alive   server  1.16.1  2         lon1  default    <all>

# Testing a cross-dc query

root@ubuntu-s-1vcpu-1gb-lon1-01:~# consul catalog services -datacenter fra1
consul
mesh-gateway

Logs from secondary:

2023-08-16T02:58:48.552Z [DEBUG] agent: Node info in sync
2023-08-16T02:58:48.552Z [DEBUG] agent: Service in sync: service=mesh-gateway
2023-08-16T02:58:48.552Z [DEBUG] agent: Check in sync: check=service:mesh-gateway
2023-08-16T02:58:49.473Z [DEBUG] agent.server.memberlist.wan: memberlist: Stream connection from=10.16.0.5:36004
2023-08-16T02:58:50.737Z [DEBUG] agent: Check status updated: check=service:mesh-gateway status=passing
2023-08-16T02:59:00.563Z [DEBUG] agent.server.replication.acl.token: finished fetching acls: amount=7
2023-08-16T02:59:00.563Z [DEBUG] agent.server.replication.acl.token: acl replication: local=7 remote=7
2023-08-16T02:59:00.564Z [DEBUG] agent.server.replication.acl.token: acl replication: deletions=0 updates=0
2023-08-16T02:59:00.564Z [DEBUG] agent.server.replication.acl.token: ACL replication completed through remote index: index=334
2023-08-16T02:59:00.738Z [DEBUG] agent: Check status updated: check=service:mesh-gateway status=passing
2023-08-16T02:59:10.740Z [DEBUG] agent: Check status updated: check=service:mesh-gateway status=passing

I hope this helps.

Hi @Ranjandas,

I just tried your recommendation and it worked perfectly.

Thank you very very much
