K8s Vault - New deployment - CLI on pod fails "403 permission denied" - SOLVED

Hello,
I am trying to get a new deployment of Vault operational within my K8s cluster. What I have working so far:

  1. 1 x external (OS-based) Vault used only for auto-unseal (transit seal; a sketch of the external-Vault side follows this list)
  2. 5-node K8s Vault cluster deployed
  3. Vault pods are running
  4. Initialized Vault
  5. Vault auto-unseals itself via the external Vault
  6. Performed vault login with the root token
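
For reference, the external-Vault side of the transit auto-unseal looks roughly like this (a sketch; the key name matches the seal stanza in my overrides further down, and the token handed to the K8s cluster also needs a policy permitting encrypt/decrypt on this key):

$ vault secrets enable transit
$ vault write -f transit/keys/k8-clst01-vault-vault01-autounseal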

At this phase, I am trying to build a CA for Cert-Manager to use. When I exec into pod vault-0 (the active HA node), I cannot run any vault commands besides “vault status”, as I receive the following error message:

Error enabling audit device: Error making API request.

URL: PUT https://127.0.0.1:8200/v1/sys/audit/file
Code: 403. Errors:

* permission denied

Additional observations:

  • Cannot enable auditing within the pod
  • No logs are created under /vault/logs
  • Pod logs only show system-level events (pods joining Raft, etc…); nothing about access or permission denied
  • Tried changing the ENV VAULT_ADDR to

I can enable the PKI engine and add secrets via the UI, so the Vault application itself appears to be working. The problem seems to be something with the pod/CLI.
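
For reference, the CLI command behind the failing PUT to /v1/sys/audit/file is along these lines (the file_path value here is just an example), and it returns the 403 shown above:

/ $ vault audit enable file file_path=/vault/logs/vault-audit.log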

Install procedure:
I basically followed this tutorial (Vault on Kubernetes).

Environment:

  • Kubernetes: kubeadm v1.24.2
  • Vault: v1.12.0 (Helm chart v0.22.1)

Outputs:

  • Pods

  NAME                                    READY   STATUS    RESTARTS   AGE     IP                NODE                               NOMINATED NODE   READINESS GATES
vault-0                                 1/1     Running   0          4d18h   192.168.193.124   k8-clst01-wkr01.tlb1.net   <none>           <none>
vault-1                                 1/1     Running   0          4d18h   192.168.142.72    k8-clst01-wkr05.tlb1.net   <none>           <none>
vault-2                                 1/1     Running   0          4d18h   192.168.70.151    k8-clst01-wkr02.tlb1.net   <none>           <none>
vault-3                                 1/1     Running   0          4d18h   192.168.238.57    k8-clst01-wkr03.tlb1.net   <none>           <none>
vault-4                                 1/1     Running   0          4d18h   192.168.43.254    k8-clst01-wkr04.tlb1.net   <none>           <none>
vault-agent-injector-6f9f84c6ff-254jd   1/1     Running   0          4d18h   192.168.43.250    k8-clst01-wkr04.tlb1.net   <none>           <none>
vault-agent-injector-6f9f84c6ff-7fgvj   1/1     Running   0          4d18h   192.168.193.123   k8-clst01-wkr01.tlb1.net   <none>           <none>
vault-agent-injector-6f9f84c6ff-dwvb2   1/1     Running   0          4d18h   192.168.70.129    k8-clst01-wkr02.tlb1.net   <none>           <none>
vault-agent-injector-6f9f84c6ff-lfh4z   1/1     Running   0          4d18h   192.168.238.61    k8-clst01-wkr03.tlb1.net   <none>           <none>
vault-agent-injector-6f9f84c6ff-tv8kt   1/1     Running   0          4d18h   192.168.142.105   k8-clst01-wkr05.tlb1.net   <none>           <none>
vault-csi-provider-4jcc7                1/1     Running   0          4d18h   192.168.142.68    k8-clst01-wkr05.tlb1.net   <none>           <none>
vault-csi-provider-bdmdr                1/1     Running   0          4d18h   192.168.70.142    k8-clst01-wkr02.tlb1.net   <none>           <none>
vault-csi-provider-k7xfw                1/1     Running   0          4d18h   192.168.238.62    k8-clst01-wkr03.tlb1.net   <none>           <none>
vault-csi-provider-ldxfz                1/1     Running   0          4d18h   192.168.193.127   k8-clst01-wkr01.tlb1.net   <none>           <none>
vault-csi-provider-xp6rz                1/1     Running   0          4d18h   192.168.43.253    k8-clst01-wkr04.tlb1.net   <none>           <none>
  • PVC

NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
audit-vault-0   Bound    pvc-cd88c61a-fee9-4b7c-a7c6-6d824d8ae48a   10Gi       RWO            rook-ceph-block-hdd7k   33m
audit-vault-1   Bound    pvc-cc1a5490-570a-4dd0-86cd-0a6734e4b0b5   10Gi       RWO            rook-ceph-block-hdd7k   33m
audit-vault-2   Bound    pvc-bcb9dc8b-ddeb-42c3-905f-ab037bbf08d5   10Gi       RWO            rook-ceph-block-hdd7k   33m
audit-vault-3   Bound    pvc-1365e1eb-6017-4d5e-9a11-7af0e77219a6   10Gi       RWO            rook-ceph-block-hdd7k   33m
audit-vault-4   Bound    pvc-d9035cc1-3cb3-44b5-8729-a3bf47f83496   10Gi       RWO            rook-ceph-block-hdd7k   33m
data-vault-0    Bound    pvc-8dfc7311-be12-49d2-b009-221fc6d0a558   10Gi       RWO            rook-ceph-block-hdd7k   33m
data-vault-1    Bound    pvc-d11f2372-d156-483c-9ae5-babcd2e075f0   10Gi       RWO            rook-ceph-block-hdd7k   33m
data-vault-2    Bound    pvc-fdde9b59-6173-41ee-aa85-48588a3573df   10Gi       RWO            rook-ceph-block-hdd7k   33m
data-vault-3    Bound    pvc-d462e7e9-2f82-437f-b9e2-64498cb3b20a   10Gi       RWO            rook-ceph-block-hdd7k   33m
data-vault-4    Bound    pvc-1f07e410-5ab3-4aae-b372-3d5a0ba506c5   10Gi       RWO            rook-ceph-block-hdd7k   33m
  • Vault status

Key                      Value
---                      -----
Recovery Seal Type       shamir
Initialized              true
Sealed                   false
Total Recovery Shares    5
Threshold                3
Version                  1.12.0
Build Date               2022-10-10T18:14:33Z
Storage Type             raft
Cluster Name             vault-cluster-87f386ab
Cluster ID               2cef5433b-2tf8-1f75-b528-a87f68fdf37a
HA Enabled               true
HA Cluster               https://vault-0.vault-internal:8201
HA Mode                  active
Active Since             2022-11-21T14:22:52.898530738Z
Raft Committed Index     54
Raft Applied Index       54
  • Helm overrides.yaml

# Hashicorp Vault v1.12.0 Helm chart based on v0.22.1
#
# Modified:
#   Date: Nov. 4, 2022 @ 1200
#   By: Brandt Winchell
---
global:
  # enabled is the master enabled switch. Setting this to true or false
  # will enable or disable all the components within this chart by default.
  enabled: true

  # TLS for end-to-end encrypted transport
  tlsDisable: false

injector:
  # True if you want to enable vault agent injection.
  # @default: global.enabled
  enabled: "-"

  replicas: 5

  # If true, will enable a node exporter metrics endpoint at /metrics.
  metrics:
    enabled: true

  # Extra annotations to attach to the webhook
  webhookAnnotations:
    cert-manager.io/inject-ca-from: "{{ .Release.Namespace }}/vault-injector-selfsigned-ca"

  certs:
    # secretName is the name of the secret that has the TLS certificate and
    # private key to serve the injector webhook. If this is null, then the
    # injector will default to its automatic management mode that will assign
    # a service account to the injector to generate its own certificates.
    secretName: vault-sec-injector-tls

  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m


server:
  # If true, or "-" with global.enabled true, Vault server will be installed.
  # See vault.mode in _helpers.tpl for implementation details.
  enabled: "-"


  # Configure the logging verbosity for the Vault server.
  # Supported log levels include: trace, debug, info, warn, error
  logLevel: "info"

  # Configure the logging format for the Vault server.
  # Supported log formats include: standard, json
  logFormat: "standard"

  resources:
     requests:
       memory: 2Gi
       cpu: 1000m
     limits:
       memory: 4Gi
       cpu: 2000m

  # Ingress allows ingress services to be created to allow external access
  # from Kubernetes to access Vault pods.
  # If deployment is on OpenShift, the following block is ignored.
  # In order to expose the service, use the route section below
  ingress:
    enabled: true
    labels: {}
      # traffic: external
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"

    # Optionally use ingressClassName instead of deprecated annotation.
    # See: https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
    ingressClassName: nginx

    # As of Kubernetes 1.19, all Ingress Paths must have a pathType configured. The default value below should be sufficient in most cases.
    # See: https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types for other possible values.
    pathType: Prefix

    # When HA mode is enabled and K8s service registration is being used,
    # configure the ingress to point to the Vault active service.
    activeService: true
    hosts:
      - host: "k8-clst01-vault-vault.tlb1.tlb1.net"
        paths: []

  # Used to define custom readinessProbe settings
  readinessProbe:
    enabled: true
    # If you need to use a http path instead of the default exec
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"

    # When a probe fails, Kubernetes will try failureThreshold times before giving up
    failureThreshold: 2
    # Number of seconds after the container has started before probe initiates
    initialDelaySeconds: 5
    # How often (in seconds) to perform the probe
    periodSeconds: 5
    # Minimum consecutive successes for the probe to be considered successful after having failed
    successThreshold: 1
    # Number of seconds after which the probe times out.
    timeoutSeconds: 3
  # Used to enable a livenessProbe for the pods
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    # When a probe fails, Kubernetes will try failureThreshold times before giving up
    failureThreshold: 2
    # Number of seconds after the container has started before probe initiates
    initialDelaySeconds: 60
    # How often (in seconds) to perform the probe
    periodSeconds: 5
    # Minimum consecutive successes for the probe to be considered successful after having failed
    successThreshold: 1
    # Number of seconds after which the probe times out.
    timeoutSeconds: 3

  # extraEnvironmentVars is a list of extra environment variables to set with the stateful set. These could be
  # used to include variables required for auto-unseal.
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-certs/tls.crt
    VAULT_SKIP_VERIFY: true
    VAULT_TOKEN: hvs.XXXXXXXXXXXXXXXXXXXXXXX

  # volumes is a list of volumes made available to all containers. These are rendered
  # via toYaml rather than pre-processed like the extraVolumes value.
  # The purpose is to make it easy to share volumes between containers.
  volumes:
    - name: vol-vault-sec-vaultsrv-tls
      secret:
        secretName: vault-sec-vaultsrv-tls

  # volumeMounts is a list of volumeMounts for the main server container. These are rendered
  # via toYaml rather than pre-processed like the extraVolumes value.
  # The purpose is to make it easy to share volumes between containers.
  volumeMounts:
    - name: vol-vault-sec-vaultsrv-tls
      mountPath: "/vault/userconfig/tls-certs"
      readOnly: true


  # Enables a headless service to be used by the Vault Statefulset
  service:
    enabled: true

  # This configures the Vault Statefulset to create a PVC for data
  # storage when using the file or raft backend storage engines.
  # See https://www.vaultproject.io/docs/configuration/storage/index.html to know more
  dataStorage:
    enabled: true
    # Size of the PVC created
    size: 10Gi
    # Location where the PVC will be mounted.
    mountPath: "/vault/data"
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: rook-ceph-block-hdd7k
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce
    # Annotations to apply to the PVC
    annotations: {}

  # This configures the Vault Statefulset to create a PVC for audit
  # logs.  Once Vault is deployed, initialized, and unsealed, Vault must
  # be configured to use this for audit logs.  This will be mounted to
  # /vault/audit
  # See https://www.vaultproject.io/docs/audit/index.html to know more
  auditStorage:
    enabled: true
    # Size of the PVC created
    size: 10Gi
    # Location where the PVC will be mounted.
    mountPath: "/vault/audit"
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: rook-ceph-block-hdd7k
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce
    # Annotations to apply to the PVC
    annotations: {}

  # Run Vault in "standalone" mode. This is the default mode that will deploy if
  # no arguments are given to helm. This requires a PVC for data storage to use
  # the "file" backend.  This mode is not highly available and should not be scaled
  # past a single replica.
  standalone:
    enabled: false

  # Run Vault in "HA" mode. There are no storage requirements unless the audit log
  # persistence is required.  In HA mode Vault will configure itself to use Consul
  # for its storage backend.  The default configuration provided will work the Consul
  # Helm project by default.  It is possible to manually configure Vault to use a
  # different HA backend.
  ha:
    enabled: true
    replicas: 5

    # Enables Vault's integrated Raft storage.  Unlike the typical HA modes where
    # Vault's persistence is external (such as Consul), enabling Raft mode will create
    # persistent volumes for Vault to store data according to the configuration under server.dataStorage.
    # The Vault cluster will coordinate leader elections and failovers internally.
    raft:

      # Enables Raft integrated storage
      enabled: true
      # Set the Node Raft ID to the name of the pod
      setNodeId: true

      # Note: Configuration files are stored in ConfigMaps so sensitive data
      # such as passwords should be either mounted through extraSecretEnvironmentVars
      # or through a Kube secret.  For more information see:
      # https://www.vaultproject.io/docs/platform/k8s/helm/run#protecting-sensitive-vault-configurations
      config: |
        ui = true
        disable_mlock = true

        listener "tcp" {
          address            = "[::]:8200"
          cluster_address    = "[::]:8201"
          tls_client_ca_file = "/vault/userconfig/tls-certs/ca.crt"
          tls_cert_file      = "/vault/userconfig/tls-certs/tls.crt"
          tls_key_file       = "/vault/userconfig/tls-certs/tls.key"
          tls_min_version    = "tls12"
          
          // Enable unauthenticated metrics access (necessary for Prometheus Operator)
          telemetry {
            unauthenticated_metrics_access = "true"
          }
        }

        seal "transit" {
          // Connection configuration
          address         = "https://hashivault43.tlb1.net:8200"
          tls_skip_verify = "true"

          // Key configuration
          key_name        = "k8-clst01-vault-vault01-autounseal"
          mount_path      = "transit/"
          disable_renewal = "false"
          
        }

        storage "raft" {
          path = "/vault/data"
          retry_join {
            leader_api_addr         = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file     = "/vault/userconfig/tls-certs/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-certs/tls.crt"
            leader_client_key_file  = "/vault/userconfig/tls-certs/tls.key"
          }
          retry_join {
            leader_api_addr         = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file     = "/vault/userconfig/tls-certs/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-certs/tls.crt"
            leader_client_key_file  = "/vault/userconfig/tls-certs/tls.key"
          }
          retry_join {
            leader_api_addr         = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file     = "/vault/userconfig/tls-certs/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-certs/tls.crt"
            leader_client_key_file  = "/vault/userconfig/tls-certs/tls.key"
          }
          retry_join {
            leader_api_addr         = "https://vault-3.vault-internal:8200"
            leader_ca_cert_file     = "/vault/userconfig/tls-certs/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-certs/tls.crt"
            leader_client_key_file  = "/vault/userconfig/tls-certs/tls.key"
          }
          retry_join {
            leader_api_addr         = "https://vault-4.vault-internal:8200"
            leader_ca_cert_file     = "/vault/userconfig/tls-certs/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-certs/tls.crt"
            leader_client_key_file  = "/vault/userconfig/tls-certs/tls.key"
          }

          autopilot {
            cleanup_dead_servers           = "true"
            last_contact_threshold         = "200ms"
            last_contact_failure_threshold = "10m"
            max_trailing_logs              = 250000
            min_quorum                     = 5
            server_stabilization_time      = "10s"
          }
        }

        service_registration "kubernetes" {}


# secrets-store-csi-driver-provider-vault
csi:
  # True if you want to install a secrets-store-csi-driver-provider-vault daemonset.
  #
  # Requires installing the secrets-store-csi-driver separately, see:
  # https://github.com/kubernetes-sigs/secrets-store-csi-driver#install-the-secrets-store-csi-driver
  #
  # With the driver and provider installed, you can mount Vault secrets into volumes
  # similar to the Vault Agent injector, and you can also sync those secrets into
  # Kubernetes secrets.
  enabled: true

  image:
    repository: "hashicorp/vault-csi-provider"
    tag: "1.2.0"
    pullPolicy: IfNotPresent

  resources:
    requests:
      cpu: 50m
      memory: 128Mi
    limits:
      cpu: 50m
      memory: 128Mi
...
  • Pod ENV list:

POD_IP=192.168.142.124
VAULT_CACERT=/vault/userconfig/tls-certs/tls.crt
VAULT_PORT_8201_TCP_PROTO=tcp
VAULT_ACTIVE_PORT_8200_TCP=tcp://172.16.41.209:8200
VAULT_STANDBY_PORT_8201_TCP_ADDR=172.16.77.15
VAULT_STANDBY_SERVICE_PORT_HTTPS_INTERNAL=8201
VAULT_ACTIVE_PORT=tcp://172.16.41.209:8200
VAULT_SERVICE_HOST=172.16.158.201
VAULT_ACTIVE_SERVICE_PORT=8200
KUBERNETES_SERVICE_PORT=443
VAULT_ACTIVE_PORT_8201_TCP=tcp://172.16.41.209:8201
KUBERNETES_PORT=tcp://172.16.0.1:443
HOST_IP=10.253.50.208
VAULT_STANDBY_PORT_8200_TCP_PORT=8200
VAULT_AGENT_INJECTOR_SVC_PORT_443_TCP_PORT=443
VAULT_STANDBY_PORT_8201_TCP_PORT=8201
VAULT_AGENT_INJECTOR_SVC_PORT_443_TCP_PROTO=tcp
VAULT_STANDBY_PORT_8200_TCP_PROTO=tcp
VAULT_STANDBY_SERVICE_PORT_HTTPS=8200
HOSTNAME=vault-0
VAULT_STANDBY_PORT_8201_TCP_PROTO=tcp
VAULT_ADDR=https://127.0.0.1:8200
VAULT_STANDBY_SERVICE_HOST=172.16.77.15
SHLVL=1
HOME=/home/vault
VAULT_API_ADDR=https://192.168.142.124:8200
OLDPWD=/
VAULT_PORT_8200_TCP=tcp://172.16.158.201:8200
VAULT_PORT=tcp://172.16.158.201:8200
VAULT_PORT_8201_TCP=tcp://172.16.158.201:8201
VAULT_SERVICE_PORT=8200
SKIP_CHOWN=true
VAULT_AGENT_INJECTOR_SVC_PORT_443_TCP=tcp://172.16.25.127:443
VAULT_AGENT_INJECTOR_SVC_SERVICE_PORT_HTTPS=443
VAULT_STANDBY_PORT_8200_TCP=tcp://172.16.77.15:8200
VAULT_SKIP_VERIFY=true
VAULT_STANDBY_SERVICE_PORT=8200
VAULT_STANDBY_PORT_8201_TCP=tcp://172.16.77.15:8201
VAULT_STANDBY_PORT=tcp://172.16.77.15:8200
VAULT_AGENT_INJECTOR_SVC_SERVICE_HOST=172.16.25.127
VERSION=
NAME=vault
VAULT_K8S_POD_NAME=vault-0
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=172.16.0.1
VAULT_AGENT_INJECTOR_SVC_PORT=tcp://172.16.25.127:443
VAULT_AGENT_INJECTOR_SVC_SERVICE_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
VAULT_K8S_NAMESPACE=vault
VAULT_CLUSTER_ADDR=https://vault-0.vault-internal:8201
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
VAULT_RAFT_NODE_ID=vault-0
VAULT_ACTIVE_PORT_8200_TCP_ADDR=172.16.41.209
VAULT_ACTIVE_PORT_8201_TCP_ADDR=172.16.41.209
VAULT_TOKEN=hvs.XXXXXXXXXXX
VAULT_LOG_LEVEL=info
VAULT_ACTIVE_SERVICE_PORT_HTTPS_INTERNAL=8201
VAULT_ACTIVE_PORT_8200_TCP_PORT=8200
KUBERNETES_PORT_443_TCP=tcp://172.16.0.1:443
VAULT_ACTIVE_PORT_8201_TCP_PORT=8201
VAULT_ACTIVE_SERVICE_PORT_HTTPS=8200
VAULT_ACTIVE_PORT_8200_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
VAULT_LOG_FORMAT=standard
VAULT_ACTIVE_PORT_8201_TCP_PROTO=tcp
VAULT_ACTIVE_SERVICE_HOST=172.16.41.209
KUBERNETES_SERVICE_HOST=172.16.0.1
PWD=/tmp
VAULT_PORT_8200_TCP_ADDR=172.16.158.201
VAULT_PORT_8201_TCP_ADDR=172.16.158.201
VAULT_SERVICE_PORT_HTTPS_INTERNAL=8201
SKIP_SETCAP=true
VAULT_PORT_8200_TCP_PORT=8200
VAULT_AGENT_INJECTOR_SVC_PORT_443_TCP_ADDR=172.16.25.127
VAULT_SERVICE_PORT_HTTPS=8200
VAULT_STANDBY_PORT_8200_TCP_ADDR=172.16.77.15
VAULT_PORT_8201_TCP_PORT=8201
VAULT_PORT_8200_TCP_PROTO=tcp

You appear not to be logged in to Vault within the pod.

Just running commands within the Vault pod doesn’t give you any special permissions. You still need to log in as normal.

Regarding step 6 (perform vault login): I get the success message back that I am expecting.

user@k8-clst01-ctr01:~$ kubectl exec -it -n vault vault-0 -- /bin/sh
/ $ vault login
Token (will be hidden):
WARNING! The VAULT_TOKEN environment variable is set! The value of this
variable will take precedence; if this is unwanted please unset VAULT_TOKEN or
update its value accordingly.

Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                hvs.XXXXXXXXXXXX
token_accessor       XXXXXXXXXXXXX
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

You are apparently not logged in with this token when you attempt to manage the audit devices.

So with my OS-based Vaults, I run these steps in order:

  1. SSH into the Vault node
  2. Set ENV for VAULT_ADDR & VAULT_SKIP_VERIFY (since I'm using self-signed certificates); see the sketch after this list
  3. Run ‘vault login’
    a. enter the root token into the prompt
  4. Enter whatever CLI commands I want (audit, auth, policy, etc…)
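
Concretely, the shell side of steps 2 and 3 (the host name is a placeholder):

$ export VAULT_ADDR='https://vault-node.example.net:8200'
$ export VAULT_SKIP_VERIFY=true   # self-signed certificate
$ vault login
Token (will be hidden):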

The VAULT_TOKEN ENV is set up by the ‘vault login’ process, so subsequent CLI actions do not require logging in again.

With this K8s build, that is not the case. I can never get any commands to run except:

  • vault operator init
  • vault status

Any other ‘vault’ command comes back with the above error.

So am I missing something with the K8s build where the ENV is set up wrong, or do you need to log in with every command?

Thanks

Actually no it is not - it’s impossible for a child process like vault login to affect the environment of the parent shell. The token is actually written to ~/.vault-token.

If you intend to make use of vault login, the VAULT_TOKEN environment variable should not be set. That is what Vault is warning you about in the login output above.
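
In other words, inside the pod the sequence would be roughly this (the audit log path is just an example):

/ $ unset VAULT_TOKEN      # stop the env var from overriding the token helper
/ $ vault login            # writes the token to ~/.vault-token
/ $ vault audit enable file file_path=/vault/audit/vault-audit.log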

Thanks.
That led me down the rabbit hole to understand this better.

That is what was happening to me. I was setting the VAULT_TOKEN ENV via the Helm chart so the seal stanza could use it to auto-unseal the Vault.

Because of that, both were set, but the environment variable takes precedence (as you mentioned).
So now I have the token set within the seal stanza itself. Since that token is locked down via policy, it is not a huge concern there.
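
In other words, roughly this in the config (the transit seal stanza accepts a token parameter directly; value redacted):

seal "transit" {
  address         = "https://hashivault43.tlb1.net:8200"
  tls_skip_verify = "true"
  token           = "hvs.XXXXXXXXXXX"   // instead of the VAULT_TOKEN env var

  key_name        = "k8-clst01-vault-vault01-autounseal"
  mount_path      = "transit/"
  disable_renewal = "false"
}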

I also learned that vault login writes the token into the helper file (~/.vault-token), and that file is never cleared, either immediately or over time; I could not find any situation where it auto-cleared the file and made you log in again. So the next person who execs into the pod will already have access to your Vault. Not good.
That is why it is recommended to set the environment variable instead once you gain shell access to Vault. That way, when you exit the shell, you have to provide the token again to gain access.
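
A sketch of that pattern, including cleaning up after any earlier vault login:

/ $ rm -f ~/.vault-token           # remove a token left behind by 'vault login'
/ $ export VAULT_TOKEN=hvs.XXXXXXXXXXX
/ $ vault audit list               # works for this shell session only
/ $ exit                           # the token disappears with the shell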

Please bear in mind that execing into a Vault pod is not generally how people would interact with a production Vault.

All communication is via the Vault HTTP API, which is accessible just fine from outside the pod, and being inside the pod does not give you any special privileges.

Execing into the pod is just a convenient shortcut for working with a Vault instance in Kubernetes that you haven’t finished setting up an ingress or exposed service for, or if you don’t have a management workstation with the Vault CLI installed.
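
For example, from a workstation with the Vault CLI you can reach the same API through a port-forward without ever entering the pod (the vault-active service name is assumed from the default chart):

$ kubectl -n vault port-forward svc/vault-active 8200:8200 &
$ export VAULT_ADDR='https://127.0.0.1:8200'
$ export VAULT_SKIP_VERIFY=true   # or point VAULT_CACERT at your CA file
$ vault status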

Equally, you generally would not interact with a production Vault using a non-expiring root token.

Once Vault is in production, usually all root tokens are revoked, and all access is via acquiring time-limited tokens from an auth method login operation.
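
As an illustration only (userpass and all names here are hypothetical; any auth method works the same way):

$ vault auth enable userpass
$ vault write auth/userpass/users/alice password='changeme' policies='pki-admin' token_ttl=1h
$ vault login -method=userpass username=alice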