I’ve been using Boundary to execute kubectl commands against a cluster, mostly successfully. It seems that Boundary’s `connect kube` helper is able to introspect the session and get the underlying hostname used for the TLS connection to the k8s API. While kubectl lets me override the TLS server name for verification purposes, Helm does not. Even if it did, I don’t get enough information about the connection from environment variables or the {{boundary.X}} template strings. Helm also doesn’t have a mechanism to disable TLS verification (as of v3.8.0, anyway).
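For concreteness, here’s roughly what the working kubectl path looks like when I wire it up by hand through `boundary connect -exec`; the target ID, API server hostname, and CA file are placeholders for my environment:

```
# Proxy through Boundary and run kubectl against the local listener,
# overriding the TLS server name so certificate verification still passes.
boundary connect \
  -target-id ttcp_1234567890 \
  -exec kubectl -- \
  --server=https://{{boundary.addr}} \
  --tls-server-name=kubernetes.example.com \
  --certificate-authority=./cluster-ca.crt \
  get pods
```

Helm exposes no equivalent of `--tls-server-name`, so the same trick doesn’t carry over.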
It would seem the only workaround is to use Helm to render the chart to a file and then use kubectl to apply it. Not ideal, but doable.
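Something along these lines, reusing the same placeholder target ID and hostname as above:

```
# Render the chart locally (no cluster connection needed), then apply the
# resulting manifest through the Boundary proxy with kubectl.
helm template my-release ./my-chart > rendered.yaml

boundary connect \
  -target-id ttcp_1234567890 \
  -exec kubectl -- \
  --server=https://{{boundary.addr}} \
  --tls-server-name=kubernetes.example.com \
  --certificate-authority=./cluster-ca.crt \
  apply -f rendered.yaml
```

The obvious downside is losing Helm’s release tracking (history, rollback, `helm list`).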
Anyone have thoughts on making this a more seamless process? Are there any upcoming features that we’re waiting on to simplify this workflow?
One idea I’m considering is exposing a load balancer with its own custom certificate that’s valid for 127.0.0.1, so that Helm’s verification passes against the local proxy address, but that does seem like a bit of a kludge.
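If I went that route, the certificate would just need 127.0.0.1 in its SANs; something like this self-signed cert (hypothetical file names) should do, with Helm then trusting it via the kubeconfig’s certificate-authority entry:

```
# Self-signed cert whose SAN covers 127.0.0.1, so TLS verification succeeds
# when Helm talks to the load balancer listener on localhost.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -subj "/CN=kubernetes-local" \
  -addext "subjectAltName=IP:127.0.0.1,DNS:localhost" \
  -keyout localhost.key -out localhost.crt
```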