Upgrade Version from 1.12.1 to 1.12.2 not working as intended

I am trying to upgrade Vault from 1.12.1 to 1.12.2 using Helm, by changing the image tag in my values.yaml from 1.12.1 to 1.12.2. The cluster is running in AKS. The upgrade completes successfully and the image picked up by the pod is 1.12.2, but after deleting the pods and letting them recreate themselves with the updated version, the vault-version label still shows 1.12.1. I further confirmed, by checking the version within the pod, that the version does not get updated. Is this a bug, or am I missing something that the new version requires?

The image looks fine to me:

$ docker run --rm -it hashicorp/vault:1.12.2 /bin/vault version
Vault v1.12.2 (415e1fe3118eebd5df6cb60d13defdc01aa17b03), built 2022-11-23T12:53:46Z

At the current level of detail you’ve given, that’s really all that can be said.

If you’d like further help, you’ll need to share more detailed, precise information about what you are doing and what you see on your screen.

I am upgrading Vault from 1.12.1 to 1.12.2 using the Vault Helm chart. Within my values.yaml I change the following values:

injector.image.tag=1.1.0
injector.agentImage.tag=1.12.2
server.image.tag=1.12.2
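
For reference, this is roughly how those overrides sit in values.yaml for the chart (a sketch of just the relevant keys; everything else is left at the chart defaults):

injector:
  image:
    tag: "1.1.0"       # vault-k8s injector image
  agentImage:
    tag: "1.12.2"      # Vault image used by injected agent sidecars
server:
  image:
    tag: "1.12.2"      # Vault server image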

The steps I’m executing against my helm chart:

helm upgrade --install vault . --namespace vault --version=0.22.1
kubectl delete pods --selector="vault-active=false"
kubectl delete pods --selector="vault-active=active"

Within the pod I can see that the image that’s being picked up is 1.12.2, but the vault-version label is still showing 1.12.1. See below:

Name:             vault-0
Namespace:        vault
Priority:         0
Service Account:  vault
Start Time:       Fri, 16 Dec 2022 18:56:06 -0500
Labels:           app.kubernetes.io/instance=vault
                  app.kubernetes.io/name=vault
                  component=server
                  controller-revision-hash=vault-7d5784c448
                  helm.sh/chart=vault-0.21.0
                  statefulset.kubernetes.io/pod-name=vault-0
                  vault-active=false
                  vault-initialized=true
                  vault-perf-standby=false
                  vault-sealed=false
                  vault-version=1.12.1
Annotations:      <none>
Status:           Running
IP:               
IPs:
  IP:          
Controlled By:  StatefulSet/vault
Containers:
  vault:
    Container ID:  containerd://63ecf7abd3c669086ccc8d8a6be055c5528eaee17b0316e970ef4a779b537d30
    Image:         vscojfrogrhel.com/infra-images/vault/vault:1.12.2
    Image ID:      vscojfrogrhel.com/infra-images/vault/vault@
    Ports:         8200/TCP, 8201/TCP, 8202/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
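
To rule out a stale label, the running binary can also be queried directly inside the pod (a sketch, using the pod name above):

$ kubectl exec -n vault vault-0 -- vault version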

One point to note: I am able to downgrade to 1.11.3 successfully, but when upgrading from 1.11.3 to 1.12.2 it still installs 1.12.1 instead.

I don’t think you’re managing to upgrade what you think you’re upgrading, then.

One tell-tale is that you’re attempting to use one chart version, --version=0.22.1, but your pod shows it was deployed from a different chart version: helm.sh/chart=vault-0.21.0.

Fairly new to deploying and maintaining Helm charts here. At the moment my helm.sh/chart version is vault-0.21.0. Even if I run the helm upgrade command with --version=0.21.0, I get the same results. Is the Helm chart version somehow tied to a single Vault version?

My understanding was that I could upgrade my Vault cluster to the latest version without having to upgrade my Helm chart version. Sorry for any confusion this might be causing.

No, it just provides a default version if one is not otherwise overridden in values.
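
If it helps, the chart’s built-in default tags can be inspected without deploying anything, e.g. (assuming the hashicorp Helm repo is configured):

$ helm show values hashicorp/vault --version 0.22.1 | grep -A 3 'image:'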

My point is rather, that even when you were intending to deploy chart version 0.22.1, it clearly wasn’t actually being deployed, since the pod metadata didn’t match that.
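
A quick way to confirm which chart version each revision of the release was actually deployed from (assuming the release name vault):

$ helm history vault -n vault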

At this point, I suspect some kind of issue which is simply not visible from the information you’ve provided, and really needs someone who can see more details of your Kubernetes setup and practices.

If you can’t get someone to help locally, it might be a good idea to set up a new Vault in a new Kubernetes cluster, and compare what happens differently there.
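
For that comparison, a minimal sketch of a fresh install against the upstream chart (the namespace and release names here are just placeholders):

$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm install vault hashicorp/vault --version 0.22.1 \
    --namespace vault-test --create-namespace \
    --set server.image.tag=1.12.2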

Thanks Max, I appreciate all your input on this. We executed the upgrade on our production AKS cluster, which had been untouched since v1.11.3, and got the same results as on our test cluster: it only updated to v1.12.1. It doesn’t seem to be related to cluster configuration, so we will be taking a deeper look into how our Helm settings are configured.