I don't know k8s well enough to answer, but isn't HTTPS just another ingress? It doesn't matter if it's the UI or the API.
As long as you have the listener defined in Vault's config, it'll answer.
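For reference, a minimal TLS listener stanza in the Vault server config looks something like this (the certificate paths and bind address here are placeholders, not taken from this thread):

```hcl
# Sketch of a Vault TLS listener; file paths are placeholders.
listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/vault/tls/tls.crt"
  tls_key_file  = "/vault/tls/tls.key"
}
```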
Hey, thanks for answering, but no: to make services discoverable you need to set up a load balancer to proxy to the pods.
I can only make HTTPS API calls when I use kubectl port forwarding, and that is only for testing.
What I'm missing is documentation on how to set up this load balancer, with HTTPS support, for Vault in a real k8s cluster.
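For what it's worth, the usual way to expose the Vault pods outside the cluster without port forwarding is a Service of type LoadBalancer (or an Ingress in front of it). A minimal sketch, assuming the pods carry the `app.kubernetes.io/name: vault` label used by the official helm chart (the service name and namespace here are made up):

```yaml
# Sketch: expose Vault externally via a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: vault-external   # illustrative name
  namespace: vault       # illustrative namespace
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: vault
  ports:
    - name: https
      port: 8200
      targetPort: 8200
```

Since Vault terminates TLS itself on port 8200 (per its listener config), the LB can pass traffic through without doing its own TLS termination.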
Did you change the API address? It's usually the same address as the UI: {url:8200}/ui is the UI and {url:8200}/ is the API. If you changed it, then you need to set up a second LB.
The reason for allowing you to change it is to let you use an externally facing UI but an internally-only facing API address. If you're using Kubernetes then everything is external, and you need to control the access via the LB, not by this configuration.
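To illustrate the setting being described: the advertised API address lives in the server config alongside the UI toggle. A sketch, with a placeholder hostname:

```hcl
# Sketch: advertise an external API address; the listener itself is unchanged.
# "vault.example.com" is a placeholder hostname.
ui       = true
api_addr = "https://vault.example.com:8200"
```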
Thanks for the answer
I didn't change anything. I've got it working, but only via kubectl port forwarding from my local PC.
The setup using an LB in a real Kubernetes cluster, with SSL support, is what's missing from the documentation: how to connect those pieces end to end.
That doesn't make any sense unless you're using a different IP/name. SSL certificates are bound to the IP address and/or hostname. When you define your listener stanza and declare your SSL certificates, that entire IP address is used by both the API and the UI; in fact, the UI itself makes API calls.
But once again, what I'm trying to achieve is access to Vault in k8s from an external client without using kubectl port forwarding, and as I understand it there is no way to do that without configuring an LB.
I hope this isn't a distraction from this discussion, as this is exactly what I'm trying to set up, but I'm having issues with the endpoints. Various activities in the vault agent injector workflow seem to reference different endpoints for Vault.
For now, I've set up a load balancer pointing to my instance group, listening on port 8200. I set up the LB manually for now, but will look at automating the LB portion later. I then have DNS for the TLS cert I created pointing to that LB.
I configured the annotation on my test deployment to `vault.hashicorp.com/tls-server-name: "URLFORVAULTSERVER"`. It appears to work for the first part of the workflow, but when it tries to "get" the kv secret, it reverts to talking to the internal k8s service (`vaultservice.vaultns.svc`). Is there a setting I can apply at the helm chart level to make it use the same DNS globally?
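In case it helps, the vault-helm chart has an `injector.externalVaultAddr` value, and there is also a per-pod `vault.hashicorp.com/service` annotation that overrides the Vault address the injected agent talks to; I'm going from memory, so check the current chart docs. A sketch (the hostname is a placeholder):

```yaml
# values.yaml sketch: point the injector at one global Vault address.
injector:
  externalVaultAddr: "https://vault.example.com:8200"
---
# Or per-deployment, in the pod template's annotations:
annotations:
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/service: "https://vault.example.com:8200"
```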
Thanks… I appreciate that. Yeah, I've seen various pieces of information for the k8s injector feature, and I'm not sure which is the most complete.
My config looks similar to yours, but I haven't configured any CA settings, because I'm using Let's Encrypt to get the TLS certs, and the built-in CAs seem to work. The issue with this, though, is that Let's Encrypt can't issue certs for non-internet-routable domains, which seems to be what causes the problems when the workflow references the internal k8s service URL (vault.vault.svc, for instance).
And, if we were to attempt to use this as a central Vault for our other k8s clusters, I wouldn't be able to route to an internal service in those separate clusters. So getting all the pieces of the injector to talk to a common endpoint is where I'm currently stuck.
I'm learning this as well, so I'll let you know if I find definitive info on that.
It appears to be tied to the webhook, but what it literally does, I'm not sure. I'm guessing it's a controller of some sort: when the annotations invoke it, it does the work of patching the pod to add the sidecar that manages the secrets for that pod.
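That matches my understanding: the injector is a mutating admission webhook. Roughly, the chart registers something like the following with the API server, so pod-create requests get sent to the injector service, which returns a patch adding the agent sidecar (names here are illustrative, not copied from the chart):

```yaml
# Sketch of the kind of webhook registration involved; names are illustrative.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: vault-agent-injector-cfg
webhooks:
  - name: vault.hashicorp.com
    clientConfig:
      service:
        name: vault-agent-injector-svc
        namespace: vault
        path: /mutate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
        operations: ["CREATE"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
```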