How to expose the Vault HTTPS API installed in k8s to outside clients

hey
i have a problem finding explanations on how to expose the HTTPS API to outside clients.
The [Vault on Kubernetes Reference Architecture](Vault on Kubernetes Reference Architecture | Vault - HashiCorp Learn)
page ends with a short explanation that leaves no further info on how to
configure the Vault HTTPS API to be accessible to the outside world.
This tutorial uses disabled TLS,
and this other tutorial
works on minikube, which acts differently than a real k8s cluster, and is also without TLS.
The funny thing is that the main tutorial configuring the Vault server cluster in k8s does use TLS.

I just want to make a curl API call with TLS that will give me the secret I set.
How do I do this?
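As a sketch of the kind of call I mean, assuming a KV v2 engine mounted at `secret/`, a secret stored at `secret/mysecret`, and a hostname `vault.example.com` that is reachable from outside the cluster (all placeholders):

```
# read a KV v2 secret over TLS from an external client
curl --cacert vault.ca \
     --header "X-Vault-Token: $VAULT_TOKEN" \
     https://vault.example.com:8200/v1/secret/data/mysecret
```

The `--cacert` flag is only needed if the server certificate is signed by a private CA.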

I don't know k8s well enough to answer, but isn't HTTPS just another ingress? Doesn't matter if it's the UI or the API.
As long as you have the listener defined in Vault's config it'll answer.

Hey, thanks for answering, but no: to make services discoverable you need to set up a load balancer to proxy to the pods.
I can make HTTPS API calls only when I use
kubectl with port forwarding, but this is only for testing.
I'm missing documentation on how to set up this load balancer with HTTPS support, in the Vault context, on a real k8s cluster.
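For testing, the port-forward approach looks roughly like this (the service name `vault` and namespace `vault` are assumptions from the Helm chart defaults; check yours with `kubectl get svc`):

```
# forward local port 8200 to the Vault service inside the cluster (testing only)
kubectl -n vault port-forward svc/vault 8200:8200

# in another shell, talk to it over TLS using the cluster's CA bundle
VAULT_ADDR=https://127.0.0.1:8200 VAULT_CACERT=vault.ca vault status
```

Note that if the server certificate doesn't cover `127.0.0.1`, you may also need `VAULT_TLS_SERVER_NAME` set to a hostname the cert was issued for.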


I think the latest Helm chart has LB support.
Might try going through the examples here and find what you need.
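If it's the chart's built-in LB support, the values look something like this (a sketch; `ui.serviceType` is the knob I have in mind, but verify against your chart version's values file):

```yaml
# Helm values sketch: expose Vault's UI Service as a cloud LoadBalancer
ui:
  enabled: true
  serviceType: LoadBalancer
  externalPort: 8200
```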


Thanks for your answer, but that is for the UI; I need it for the Vault HTTPS API.

Did you change the API address? It's usually the same address as the UI: {url:8200}/ui is the UI and {url:8200}/ is the API. If you changed it, then you need to set up a second LB.
The reason for allowing you to change it is to let you use an externally facing UI but an internal-only API address. If you're using Kubernetes then everything is external, and you need to control the access via the LB, not by this configuration.
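The setting in question is `api_addr` in the server config; left unset, the UI and API share the listener's address. A sketch with placeholder hostnames:

```
# address clients (and the UI's API calls) are redirected to
api_addr     = "https://vault.example.com:8200"
# address used for server-to-server raft/cluster traffic
cluster_addr = "https://vault-0.vault-internal:8201"
```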

Thanks for the answer.
I didn't change anything, and I've got it working, but only via kubectl port forwarding from my local PC.
The setup using an LB in real Kubernetes with SSL support is what's missing from the documentation:
how to connect those end to end.

That doesn't make any sense unless you're using a different IP/name. SSL certificates are bound to either the IP address and/or hostname. When you define your listener stanza and declare your SSL certificates, that entire IP address is used by both the API and the UI — in fact, the UI is making API calls itself.

Please post your config.

this is my config:

global:
  enabled: true
  tlsDisable: false
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-tls/vault.ca
server:
  extraVolumes:
  - type: secret
    name: vault-tls

  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: false
      config: |
        ui = true

        listener "tcp" {
          address = "0.0.0.0:8200"
          cluster_address = "0.0.0.0:8201"
          tls_cert_file = "/vault/userconfig/vault-tls/foo.crt"
          tls_key_file = "/vault/userconfig/vault-tls/foo.key"
          tls_client_ca_file = "/vault/userconfig/vault-tls/foo.ca"
        }

        storage "raft" {
          path = "/vault/data"
        }

        service_registration "kubernetes" {}

But once again, what I'm trying to achieve is access from k8s to an external client without using kubectl port forwarding, and as I understand it there is no way to do that without configuring an LB.
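One way to get that LB without changing the chart is a plain Service of type `LoadBalancer` in front of the server pods. A sketch — the selector labels here are assumptions based on the Helm chart's defaults, so verify them with `kubectl get pods --show-labels` first:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vault-external
  namespace: vault
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: vault
    component: server
  ports:
  - name: https
    port: 8200
    targetPort: 8200
```

Because Vault terminates TLS itself in your listener stanza, the LB just passes TCP through; the cert you mounted must cover whatever DNS name you point at the LB's external IP.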

I hope this isn't a distraction from this discussion, as this is exactly what I'm trying to set up, but I'm having issues with the endpoints. Various activities in the Vault agent injector workflow seem to reference different endpoints for Vault.

For now, I've set up a load balancer pointing to my instance group, listening on port 8200. I set up the LB manually for now, but will look at automating the LB portion later. I then have DNS for the TLS cert I created pointing to that LB.

I configured the annotation on my test deployment to `vault.hashicorp.com/tls-server-name: "URLFORVAULTSERVER"`. It appears to work for the first part of the workflow, but when it tries to "get" the kv secret, it reverts to talking to the internal k8s service ("vaultservice.vaultns.svc"). Is there a setting I can apply at the Helm chart level to make it use the same DNS globally?
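For what it's worth, there appear to be two places to override the address the injected agent talks to — a per-pod annotation and a chart-level value. Both sketched below with a placeholder hostname; double-check the names against the injector docs for your chart version:

```yaml
# per-deployment: point the injected agent at the external Vault address
annotations:
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/service: "https://vault.example.com:8200"
  vault.hashicorp.com/tls-server-name: "vault.example.com"
```

And at the Helm chart level, `injector.externalVaultAddr: "https://vault.example.com:8200"` is the value that should apply the same address globally instead of the internal service DNS.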

Please share — you are welcome to share any info in this discussion,
as I've discovered there is no info about this subject on the net or in the Vault docs.

Thanks… I appreciate that. Yeah, I've seen various pieces of information on the k8s injector feature, and I'm not sure which is the most complete.

My config looks similar to yours, but I haven't configured any CA settings, because I'm using Let's Encrypt to get the TLS certs, and the built-in CAs seem to work. The issue with this, though, is that Let's Encrypt can't hand out certs for non-internet-routable domains, which seems to be what causes the issues when the workflow references the internal k8s service URL (vault.vault.svc, for instance).

And if we were to attempt to use this as a central Vault for our other k8s clusters, I wouldn't be able to route to an internal service in those separate clusters. So getting all the pieces of the injector to talk to a common endpoint is where I'm currently stuck.

Thanks. OK, one thing I want to know:
what is the job of the injector?
I can see it hanging there, but what is its actual job?

I'm learning this as well, so I'll let you know if I find definitive info on that.

It appears to be tied to the webhook, but what it's literally doing, I'm not sure. I'm guessing it's a controller of some sort: when the annotations invoke it, it does the work of patching the pod to add the sidecar that manages the secrets for the pod.
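My understanding, sketched as config: the injector is a mutating admission webhook, and when a pod carries annotations like the ones below it patches in a `vault-agent` init/sidecar container that fetches the secret and writes it to a shared volume. The role name and secret path here are placeholder assumptions:

```yaml
annotations:
  vault.hashicorp.com/agent-inject: "true"
  # Kubernetes auth role the agent logs in with (placeholder)
  vault.hashicorp.com/role: "myapp"
  # render secret/data/db into a file named "db"
  vault.hashicorp.com/agent-inject-secret-db: "secret/data/db"
```

The application container then reads the rendered secret from `/vault/secrets/db` without needing any Vault awareness itself.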