Boundary for NFS access

I am using HCP Boundary with a local worker to connect to our NFS share. It mounts fine, but the I/O is significantly slower than when I mount the share directly.

What would cause this? Isn’t Boundary essentially just a transparent proxy?

Each Boundary connection is a single-port TCP proxy, so depending on the NFS version you’re running, that might be slow or might not even work. What does your NFS setup look like outside of Boundary?
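(For reference, on the client side each connection looks roughly like the following; the target ID here is made up:

    # Proxies exactly one local TCP port to the single port defined on the target.
    boundary connect -target-id ttcp_1234567890 -listen-port 9049

Anything the protocol does on other ports, or over UDP, never goes through that tunnel.)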

It’s a TrueNAS NFSv3 TCP server: rpcbind on port 111, mountd on port 800, nfsd on 2049.

Boundary maps port 2049 => 9049, and we mount by specifying the mountd port and host outside of Boundary:

127.0.0.1:/mnt/tank/shared /local nfs soft,intr,mountport=800,mounthost=10.10.165.10,vers=3,port=9049,tcp
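For testing outside of fstab, the equivalent one-shot mount is roughly the following, assuming the Boundary listener is already up on 127.0.0.1:9049:

    # Same options as the fstab entry above; run once the Boundary session is established.
    mount -t nfs -o soft,intr,mountport=800,mounthost=10.10.165.10,vers=3,port=9049,tcp \
        127.0.0.1:/mnt/tank/shared /local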

So the environment looks like this (pardon my ASCII art)?

   [Client]            [Worker]         [File server]
+-NFS client        Boundary worker -------> NFS
|                           ^       2049/tcp
| 9049/tcp                  |
v                           |
Boundary tunnel ------------+

I don’t have an NFS server handy to test with (though it’s not that many years since I would have…), but I looked around and found a lot of questions and discussion in line with what I remember about NFSv3 being very slow through port forwards or TCP proxies. Sometimes it seems to be about UDP not being handled; in other cases it’s that NFS is extremely sensitive to even small amounts of added latency on the connection.
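One quick thing worth checking is what the client actually negotiated; nfsstat (from nfs-utils) or /proc/mounts should show proto=tcp and vers=3 in the mount options if everything really is going over the TCP proxy:

    # Show the options the kernel actually negotiated for the NFS mount.
    nfsstat -m
    # Or read them straight from the kernel's mount table:
    grep nfs /proc/mounts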

Just for testing: if, instead of going through Boundary, you port-forward the TCP connection directly on the worker host (bypassing the Boundary proxy but still routing through that machine), do you see the same slowdown? Can you also do a test with something else that pushes a lot of data, like SCP’ing a large file to a host directly vs. through Boundary?
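Concretely, the tests I have in mind look something like this; usernames, hosts, paths, and the target ID are placeholders, and the second test assumes you also have a Boundary target that reaches the file server’s SSH port:

    # 1. Plain TCP forward on the worker host, bypassing Boundary entirely
    #    (SSH shown; socat would also work), then remount against 127.0.0.1:9049 and compare I/O.
    ssh -N -L 9049:10.10.165.10:2049 user@worker-host

    # 2. Bulk-transfer comparison with SCP.
    dd if=/dev/urandom of=/tmp/testfile bs=1M count=1024    # ~1 GiB of test data
    time scp /tmp/testfile user@10.10.165.10:/tmp/          # direct to the file server
    boundary connect -target-id ttcp_1234567890 -listen-port 2222 &
    time scp -P 2222 /tmp/testfile user@127.0.0.1:/tmp/     # same copy through Boundary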

I’ll see if I can duplicate your setup in a couple of VMs in my lab and if I can, whether I can reproduce what you’re seeing.

That setup is correct, though I’m not 100% sure whether there is an additional hop with HCP Boundary. I will run those tests and get back.