Making many connections to the database through one Boundary session causes high latency

I see almost no delay if I make one connection through one Boundary session, but there are huge delays if I make even 50 connections to the MySQL database through one session, and my cluster becomes inoperable. I cannot even see the active sessions in the Admin UI.

The same 50 connections are handled without any problems when connecting to MySQL directly (a sketch of the test harness follows the results below).

Results when connecting to MySQL directly, without Boundary:

  • Average number of seconds to run all queries: 14.456 seconds

Results when connecting to MySQL through Boundary:

  • Average number of seconds to run all queries: 187.119 seconds
  • Also a lot of errors: ERROR: Lost connection to MySQL server during query
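
For context, here is a minimal sketch of the kind of load test behind these numbers, assuming PyMySQL, a local listener opened by `boundary connect` on 127.0.0.1, and placeholder credentials and query; the direct test is the same script pointed at the MySQL host itself.

```python
# Hedged sketch of the load test: open N connections in parallel (all through
# the single local proxy that one `boundary connect` session exposes), run the
# same query on each, and report the total wall-clock time.
# Host, port, credentials, and the query are hypothetical placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import pymysql  # pip install pymysql

PROXY_HOST = "127.0.0.1"   # listener opened by `boundary connect`
PROXY_PORT = 3306          # or whatever port the Boundary CLI prints
N_CONNECTIONS = 50

def run_query(_):
    conn = pymysql.connect(
        host=PROXY_HOST,
        port=PROXY_PORT,
        user="app",            # placeholder credentials
        password="secret",
        database="appdb",
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM orders")  # placeholder query
            cur.fetchall()
    finally:
        conn.close()

start = time.monotonic()
with ThreadPoolExecutor(max_workers=N_CONNECTIONS) as pool:
    list(pool.map(run_query, range(N_CONNECTIONS)))
print(f"All queries finished in {time.monotonic() - start:.3f} seconds")
```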

Is there perhaps some kind of limit on the number of connections per session?

What kind of system is the Boundary worker? Is it possibly struggling under load or running out of bandwidth?

My configuration:

  • Boundary is running in HCP (HashiCorp Cloud Platform)
  • Vault is running in HCP
  • MySQL is on a Linux server in DigitalOcean

I did some tests:

  • Tried using a self-managed worker installed on the same Linux machine as MySQL
  • Tried using the HCP-managed Boundary workers

I got the same results in both cases:
Almost no delay if I make one connection through one session, but huge delays if I make even 50 connections to the database through one session, and my cluster becomes inoperable.
The same load is handled without any problems by a direct connection to MySQL.

It looks like the problem is in the HCP Boundary cluster itself. When I make 100 connections in one session, my HCP Boundary cluster's web interface hangs and I can't open any pages.
Is there a way to increase the capacity of the HCP Boundary cluster in HashiCorp Cloud? Or please suggest what else might be the problem.

Can you run a test using the self-managed worker, configuring the ingress worker filter on that target to select the self-managed worker, taking the HCP-managed worker out of the equation? That would help us understand whether it's a capacity issue on the managed worker or something else.
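
For example, here is a minimal sketch of setting those filters on the target through the controller's HTTP API; the `ingress_worker_filter`/`egress_worker_filter` fields, the `type = ["self-managed"]` worker tag, and the address, token, and target ID are all assumptions/placeholders, so adjust them to your setup (the same change can be made in the Admin UI or CLI).

```python
# Hedged sketch: route a target's ingress and egress hops through a worker
# tagged type = ["self-managed"], via Boundary's HTTP API.
# Field names, tag values, address, token, and target ID are assumptions.
import requests

BOUNDARY_ADDR = "https://<cluster-id>.boundary.hashicorp.cloud"
TOKEN = "<boundary auth token>"
TARGET_ID = "ttcp_1234567890"   # placeholder target ID

headers = {"Authorization": f"Bearer {TOKEN}"}

# Read the target first; updates need its current version (optimistic locking).
target = requests.get(
    f"{BOUNDARY_ADDR}/v1/targets/{TARGET_ID}", headers=headers
).json()

update = {
    "version": target["version"],
    # Select only workers carrying the self-managed tag for both hops.
    "ingress_worker_filter": '"self-managed" in "/tags/type"',
    "egress_worker_filter": '"self-managed" in "/tags/type"',
}

resp = requests.patch(
    f"{BOUNDARY_ADDR}/v1/targets/{TARGET_ID}", headers=headers, json=update
)
resp.raise_for_status()
print("Update response status:", resp.status_code)
```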

I ran some tests and noticed something interesting.
In each test I made 100 connections in one Boundary session.

I tested the following configurations:

Connecting to MySQL through Boundary with both the ingress and egress worker filters set to the self-managed worker:
This worked very well; all of my connections were handled without the cloud cluster's web interface freezing up.

Then I tried connecting to MySQL through Boundary with both ingress and egress on the default HCP-managed workers.
The cloud cluster's web interface hung again.

Now I cannot connect to my cluster through either the web interface or the CLI. I can't even change my target settings.

Here’s what it looks like:

[Screenshot_3]