Updating the Consul Helm chart in Kubernetes restarts all Consul pods

I am using Helm chart version 31.1 to deploy Consul servers, an agent per Kubernetes node, Consul mesh gateways, the injector, and the controller. When I change the memory requested for the injector pod, why do all my Consul pods get terminated and restarted? I would expect only the pod affected by my change to be restarted.

I also set PodDisruptionBudgets, but the helm upgrade does not seem to respect them.
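For reference, since the chart is deployed via Terraform, a PDB can be declared there too. This is a minimal sketch with illustrative names and labels (check your chart's actual pod labels); note that PDBs only guard voluntary evictions through the eviction API (e.g. node drains), so a deploy that deletes and recreates pods directly can bypass them:

```terraform
# Hypothetical PDB for the Consul server pods, assuming they carry
# the labels app=consul, component=server in the "consul" namespace.
resource "kubernetes_pod_disruption_budget_v1" "consul_server" {
  metadata {
    name      = "consul-server-pdb"
    namespace = "consul"
  }

  spec {
    # Allow at most one server pod to be down at a time during
    # voluntary disruptions (drains, evictions).
    max_unavailable = "1"

    selector {
      match_labels = {
        app       = "consul"
        component = "server"
      }
    }
  }
}
```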

edit: I think it was because `recreate_pods` is set to `true` in the Terraform that deploys the Helm chart.
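To illustrate, here is a minimal sketch of a `helm_release` with that flag (resource name, repository, and the `set` value are illustrative, not taken from the actual config). With `recreate_pods = true`, any `terraform apply` that touches the release deletes and recreates all of the chart's pods, regardless of which values changed:

```terraform
resource "helm_release" "consul" {
  name       = "consul"
  repository = "https://helm.releases.hashicorp.com"
  chart      = "consul"

  # Example of the kind of change being made (illustrative path).
  set {
    name  = "connectInject.resources.requests.memory"
    value = "100Mi"
  }

  # true forces ALL chart pods to be recreated on every upgrade.
  # false (the default) lets Kubernetes roll only the workloads
  # whose rendered manifests actually changed.
  recreate_pods = false
}
```

Setting it back to `false` (or removing it; `false` is the default, and the argument is deprecated in newer provider versions) should limit restarts to the injector Deployment.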

Yeah, that shouldn’t happen, so I think what you said about the Terraform is the issue.