Consul nodes come under heavy load at large scale. Is there an optimization method?

I want to optimize consul performance when storing a very large number of objects.

My scenario is as follows:

I plan to add 100,000 objects to consul, and about 11 services watch these objects. After adding only 13,000 objects, memory pressure on the consul node is already high and the node hits a performance bottleneck.

Each of these 11 services establishes 13,000 connections to consul, which causes a performance bottleneck on the consul node.

The number of network connections on consul is as follows:

[root@ ~]# netstat -antpl | grep "8500" | wc -l
143327

# Explanation: 13000 watches * 11 services = 143000, which accounts for nearly all of the 143327 connections.
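To make the watch pattern concrete, here is a minimal sketch of what each service does, written with the Go client (github.com/hashicorp/consul/api). The "objects/" key layout and the code itself are illustrative assumptions, not our actual implementation. Because every in-flight blocking query occupies its own HTTP/1.1 connection, each service holds roughly one connection per watched object.

// Illustrative only: one goroutine per object, each running its own
// blocking query. Every in-flight long poll holds a dedicated TCP
// connection to the agent's HTTP port (8500), so 13,000 objects x 11
// services gives roughly 143,000 connections.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func watchKey(kv *api.KV, key string) {
	var lastIndex uint64
	for {
		// Blocking query: returns when the key changes or WaitTime elapses.
		pair, meta, err := kv.Get(key, &api.QueryOptions{
			WaitIndex: lastIndex,
			WaitTime:  5 * time.Minute,
		})
		if err != nil {
			log.Printf("get %s: %v", key, err)
			time.Sleep(time.Second) // back off before retrying
			continue
		}
		lastIndex = meta.LastIndex
		if pair != nil {
			log.Printf("%s updated at index %d", key, lastIndex)
		}
	}
}

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// 13,000 concurrent watches -> 13,000 connections from this one service.
	for i := 0; i < 13000; i++ {
		go watchKey(client.KV(), fmt.Sprintf("objects/%d", i))
	}
	select {}
}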

I have already raised consul's connection limits as follows, but the node's resources are still the limiting factor:

{
    "limits": {
        "http_max_conns_per_client": 1000000,
        "rpc_max_conns_per_client": 1000000
    }
}

Is there a solution that can support scaling out to this size?
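For example, would consolidating the per-object watches into a single blocking prefix query per service be the right direction? Below is a minimal sketch with the Go client (github.com/hashicorp/consul/api); the "objects/" KV prefix is an assumption about how the objects are stored, not our actual layout. The trade-off I see is that every change returns the whole prefix payload, but each service would hold only one long-poll connection instead of 13,000.

// Illustrative alternative: one blocking prefix query per service instead
// of one query per object. Each service then holds a single long-poll
// connection to the agent regardless of how many objects exist.
package main

import (
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	var lastIndex uint64
	for {
		// Returns when anything under the (hypothetical) "objects/" prefix
		// changes, or after WaitTime with no change.
		pairs, meta, err := kv.List("objects/", &api.QueryOptions{
			WaitIndex: lastIndex,
			WaitTime:  5 * time.Minute,
		})
		if err != nil {
			log.Printf("list failed: %v", err)
			time.Sleep(time.Second) // back off before retrying
			continue
		}
		if meta.LastIndex == lastIndex {
			continue // wait timed out with no change
		}
		lastIndex = meta.LastIndex
		log.Printf("prefix changed: %d keys at index %d", len(pairs), lastIndex)
	}
}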