Local Helm cache size is too big

Hi Terraform Team,

I am using Terraform to deploy and manage my cloud environment, and I use a lot of Helm charts. As a result, the Helm cache on my local machine has grown very large. I tried deleting these caches, but afterwards terraform plan reported differences between my local Terraform files and the state file, so it looks like deleting them causes configuration changes. Since the cache is around 30GB and takes too much disk space on my computer, how can I handle this?

This feels more like a Helm problem than a Terraform one, but to help confirm that, could you please post the output of terraform plan showing these unexpected differences?

I see 2 differences in one of my projects:

# module.istio.helm_release.kiali_operator will be updated in-place
  ~ resource "helm_release" "kiali_operator" {
        id                         = "kiali-operator"
        name                       = "kiali-operator"
      ~ version                    = "1.46.0" -> "v1.46.0" 

 # module.eks.helm_release.aws_ebs_csi_driver[0] will be updated in-place
  ~ resource "helm_release" "aws_ebs_csi_driver" {
        id                         = "ebs-csi"
        name                       = "ebs-csi"
      ~ version                    = "2.5.1" -> ">=2.0.4, <2.6.0"
        # (26 unchanged attributes hidden)

If I restore the Helm cache files, all of these differences go away.

I have used Terraform and Helm, but I’ve never worked with terraform-provider-helm.

However, referring to the Terraform Registry, it seems that the version attribute is documented as:

  • version - (Optional) Specify the exact chart version to install. If this is not specified, the latest version is installed.

so it would seem the provider is not written to support passing version expressions, and you need to update your .tf source code to say explicitly:

version = "1.46.0"


version = "2.5.1"


That it worked at all with an extra v prefix, or with range expressions, appears to have been accidental, and not fully supported.
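For instance, the pinned blocks might look roughly like this (resource and release names taken from the plan output above; all other arguments omitted, so this is only a sketch):

```hcl
resource "helm_release" "kiali_operator" {
  name    = "kiali-operator"
  # ... repository, chart, etc. as in your existing configuration ...
  version = "1.46.0"   # exact version, no "v" prefix
}

resource "helm_release" "aws_ebs_csi_driver" {
  name    = "ebs-csi"
  # ... repository, chart, etc. as in your existing configuration ...
  version = "2.5.1"    # exact version, not a range expression
}
```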

@maxb Thank you for your explanation.

The cache files are under /Users/sam/Library/Caches/helm/repository. There are tons of files, and the total size is almost 30GB.
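If you just want to see where the space is going, something like this can help (a sketch: the default path is the macOS location above, and HELM_REPOSITORY_CACHE, the variable Helm 3 itself honours, overrides it):

```shell
# Report the size of the Helm repository cache and list the biggest entries.
CACHE_DIR="${HELM_REPOSITORY_CACHE:-$HOME/Library/Caches/helm/repository}"
if [ -d "$CACHE_DIR" ]; then
  du -sh "$CACHE_DIR"                 # total size
  ls -lhS "$CACHE_DIR" | head -n 10   # ten largest entries first
else
  echo "no cache directory at $CACHE_DIR"
fi
```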

I just have several questions here:
1: Why is this difference related to deleting the Helm cache? As I mentioned before, I did not have any issue with the existing configuration as long as I did not delete the Helm cache on my local computer.
2: Why does the cache keep growing? (It’s almost 30GB.)
3: I just found another issue. After deleting (renaming) the cache, Terraform cannot upgrade the helm_release resource any more. It’s so weird; I’m not sure what role the cache plays here. Below is my example. I tested several times, and as long as I delete (rename) the cache, the upgrade fails 100% of the time.

# module.eks.helm_release.aws_ebs_csi_driver[0] will be updated in-place
  ~ resource "helm_release" "aws_ebs_csi_driver" {
        id                         = "ebs-csi"
        name                       = "ebs-csi"
      ~ version                    = "2.5.1" -> "2.13.0"

But I got an error:

failed to download "https://github.com/kubernetes-sigs/aws-ebs-csi-driver/releases/download/helm-chart-aws-ebs-csi-driver-2.13.0/aws-ebs-csi-driver-2.13.0.tgz" at version "2.13.0"

If I put the cache back, everything returns to normal.

Can’t I delete the cache? Since it keeps accumulating, it will eventually eat all of my disk space. What can I do?

Please bear in mind that I’ve never actually used terraform-provider-helm, only Helm and Terraform separately, and am somewhat guessing here…

Total guess here, but maybe terraform-provider-helm evaluates range expressions against the contents of the local cache. When the cache contains something they match, they resolve to a specific version; when it doesn’t, Terraform continues to process them in their original string representation.
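To illustrate that guess (this is a toy model in plain shell, not the provider’s actual logic), resolving ">=2.0.4, <2.6.0" against a list of cached versions would pick 2.5.1, while an empty cache would leave nothing to resolve:

```shell
# Toy model (NOT the provider's real code) of resolving the range
# ">=2.0.4, <2.6.0" against whatever chart versions exist in a local cache.
cached="2.0.4 2.5.1 2.13.0"   # pretend these .tgz versions are cached
lo=2.0.4
hi=2.6.0
matches=""
for v in $cached; do
  # keep v when lo <= v and v < hi, comparing with version sort (sort -V)
  if [ "$(printf '%s\n%s\n' "$lo" "$v" | sort -V | head -n1)" = "$lo" ] &&
     [ "$v" != "$hi" ] &&
     [ "$(printf '%s\n%s\n' "$v" "$hi" | sort -V | head -n1)" = "$v" ]; then
    matches="$matches $v"
  fi
done
# highest matching cached version wins; an empty cache leaves the range unresolved
resolved=$(printf '%s\n' $matches | sort -V | tail -n1)   # $matches unquoted to split
echo "resolved: ${resolved:-<none - range string kept as-is>}"
```

With the sample cache this prints `resolved: 2.5.1`, which would explain why the plan shows a concrete version while the cache exists and reverts to the raw range string once it is deleted.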

Does Helm include any cache maintenance commands? I’m not sure it does. I think it just keeps saving things to disk until the user decides to delete some of them manually.
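Lacking a built-in cleanup command, manual pruning with find is one option. Here is a sketch, demonstrated on a throwaway directory (GNU touch/find assumed) so it is safe to run as-is; only point it at the real cache after checking the dry run:

```shell
#!/bin/sh
# Prune cache files untouched for more than 90 days, demonstrated on a
# temporary stand-in directory rather than the real Helm cache.
CACHE=$(mktemp -d)                                        # stand-in for the cache dir
touch "$CACHE/fresh-chart-1.0.0.tgz"                      # recently used
touch -d '120 days ago' "$CACHE/stale-chart-0.1.0.tgz"    # long untouched (GNU touch)
find "$CACHE" -type f -mtime +90 -print                   # dry run: list what would go
find "$CACHE" -type f -mtime +90 -delete                  # delete files idle > 90 days
ls "$CACHE"                                               # only the fresh file remains
```

Anything deleted this way simply gets re-downloaded by Helm the next time it is needed, though per your question 3 the provider’s behaviour after deletion seems to need investigating first.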

Sorry, no idea at this point - the URL works for me, and I don’t use AWS EKS so I can’t try to reproduce your error with reasonable effort.

I think at this point you would be best served reporting at least your question 1 to the terraform-provider-helm issue tracker - it feels like a bug in the provider to me.

@maxb Thank you so much.