Boundary connect usage in script

I’m trying to write a GitLab CI script that uses Boundary to connect to a VM and send a package with scp.
I can authenticate to Boundary and open a connection to the target, but then I’m stuck: I can’t run any further command without a failure.

What I’m trying to do is this:

boundary connect ssh -target-id=$TARGET -host-id=$HOST_ID -username=$USER -- -4 -f -NL $PORT:localhost:22 -i ~/.ssh/$PRIVATE_KEY -o StrictHostKeyChecking=no &

I’m trying to run the boundary connect command in the background so that I can immediately use scp through the open $PORT.

scp -P $PORT package.tar $USER@localhost:/path/to/package

When I add the “&” at the end of the boundary connect command, the tunnel doesn’t seem to come up correctly. If I leave it off, the next command never runs, because boundary connect blocks.
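One thing worth ruling out when backgrounding a tunnel like this is a race: the scp can fire before the forwarded port is actually listening. A minimal sketch of a wait loop, assuming bash with `/dev/tcp` support (the `wait_for_port` helper is something I made up for illustration, not a Boundary feature):

```shell
# Poll until a TCP port accepts connections, or give up after TIMEOUT seconds.
# Usage: wait_for_port HOST PORT TIMEOUT
wait_for_port() {
  local host=$1 port=$2 timeout=$3 waited=0
  # The subshell opens (and immediately closes) fd 3 against bash's
  # special /dev/tcp path; it exits 0 only once the port is reachable.
  until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]; then
      return 1
    fi
    sleep 1
  done
}

# After `boundary connect ... &`, wait for the tunnel before copying:
# wait_for_port localhost "$PORT" 30 &&
#   scp -P "$PORT" package.tar "$USER@localhost:/path/to/package"
```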

Do you have any idea how I could run a boundary connect command that can be followed by other commands?

Thanks in advance for your help!

I don’t have a one-line solution for you, but I did some experimentation with a dev-mode Boundary and this did successfully SSH and forward ports:

TARGET="ttcp_1234567890"
HOST_ID="hst_1234567890"
USER="[my local user]"
PORT="31337"
PRIVATE_KEY="id_rsa"

boundary connect ssh -target-id $TARGET -host-id $HOST_ID \
-username $USER -- -4 -L $PORT:localhost:22 -i ~/.ssh/$PRIVATE_KEY \
-o StrictHostKeyChecking=no

Obviously that doesn’t quite get you there, since it doesn’t go to the background after forwarding the ports. When I added -N and -f, I got a success result in $?, and the SSH service logs showed a successful auth followed immediately by pam_unix(sshd:session): session closed for user [my user]. So something closes the session right after a successful auth when it runs through Boundary; my guess is that Boundary itself sees the session “close” when SSH forks into the background.

That said, I did make this work by using a generic boundary connect and bash coprocs. (Just backgrounding the generic boundary connect command would probably work too, but you’d need to redirect the output somewhere so you can capture the assigned proxy port from the command output.)

# Spawn `boundary connect` in a coproc, with JSON output
coproc BOUNDARY_PROXY ( boundary connect -target-id $TARGET -host-id $HOST_ID -format json )

# The coproc gives us an array of two file descriptors -- index 0 is the stdout fd
read -r -u ${BOUNDARY_PROXY[0]} BOUNDARY_PROXY_INFO

# Parse the JSON for some info with jq
BOUNDARY_PROXY_ADDR=$(echo "$BOUNDARY_PROXY_INFO" | jq -r '.address')
BOUNDARY_PROXY_PORT=$(echo "$BOUNDARY_PROXY_INFO" | jq -r '.port')
BOUNDARY_SESSION_ID=$(echo "$BOUNDARY_PROXY_INFO" | jq -r '.session_id')

echo "Boundary proxy is running on ${BOUNDARY_PROXY_ADDR}:${BOUNDARY_PROXY_PORT}"
# `coproc name` etc. gives you the PID of the process in the variable name_PID
echo "Boundary process ID is ${BOUNDARY_PROXY_PID}"

ssh -p $BOUNDARY_PROXY_PORT $BOUNDARY_PROXY_ADDR -4 -f -NL $PORT:localhost:22 -i ~/.ssh/$PRIVATE_KEY -o StrictHostKeyChecking=no

[do stuff]

echo "Stopping Boundary session"
boundary sessions cancel -id $BOUNDARY_SESSION_ID

# Canceling the session terminates the SSH connection but doesn't actually terminate the proxy
kill -INT $BOUNDARY_PROXY_PID
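For reference, the plain-backgrounding variant mentioned above can be sketched like this: redirect the JSON output to a temp file, poll until the line lands, then parse it the same way. The boundary invocation itself is unchanged from the coproc version; only the temp file and polling loop are new.

```shell
# Same idea as the coproc version, but with an ordinary background job:
# capture the connect command's JSON line in a temp file and poll for it.
PROXY_OUT=$(mktemp)
boundary connect -target-id $TARGET -host-id $HOST_ID -format json > "$PROXY_OUT" &
BOUNDARY_PROXY_PID=$!

# Wait until the proxy has printed its JSON before trying to parse it
until [ -s "$PROXY_OUT" ]; do
  sleep 0.2
done

BOUNDARY_PROXY_ADDR=$(jq -r '.address' "$PROXY_OUT")
BOUNDARY_PROXY_PORT=$(jq -r '.port' "$PROXY_OUT")
BOUNDARY_SESSION_ID=$(jq -r '.session_id' "$PROXY_OUT")

# ...same ssh / scp / `boundary sessions cancel` steps as above, then:
kill -INT $BOUNDARY_PROXY_PID
rm -f "$PROXY_OUT"
```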

That indeed solved my problem!
Thank you very much !

coprocs are a handy tool for this sort of thing! 🙂

(The other thing you can do, if you really want an all-purpose hammer for cases coprocs don’t cover, is to orchestrate containers to do it, but that’s like using a sledgehammer to tap in finishing nails for a job like this.)