Helm chart components installed without the Helm chart

Can anyone explain this one? Is this a bug? I run terraform apply to install two Terraform modules that are wrappers for Helm charts: one installs the CRDs, the other installs the actual application. The CRD chart gets installed, but the regular Helm chart does not. Yet here's the weird part: all of its components are installed anyway.

The terraform apply (which I had to run a second time for another reason) fails. It fails because it cannot install the Helm chart: installing the chart's components fails, so the Helm install as a whole fails.

module "fission_crds" {
  source = "../../modules/helm/fission-crds"
}

module "fission" {
  source  = "app.terraform.io/MyCompany/fission/helm" 
  version = "2.0.0"
  primary_domain = "${var.region}.aws.${var.base_domain}"
  fission_repository = "index.docker.io"
  depends_on = [module.fission_crds]
}

Afterwards, no helm chart for fission is installed.

helm ls -A | grep fission
fission-crds                	default    	1

But all of its components are installed.

k get all -n fission
NAME                                  READY   STATUS    RESTARTS   AGE
pod/buildermgr-58f45cf5f7-t44cr       1/1     Running   0          39h
pod/executor-65cf66f689-xwj88         1/1     Running   0          39h
pod/kubewatcher-6778dc9959-w7j9g      1/1     Running   0          39h
pod/mqtrigger-keda-849d9c8d8b-zxlf7   1/1     Running   0          39h
pod/router-6d9954f848-d9m2p           1/1     Running   0          39h
pod/storagesvc-5c8bb4fbc5-7dmzk       1/1     Running   0          39h
pod/timer-b496d64b9-4zps7             1/1     Running   0          39h
pod/webhook-54f8479989-tqx89          1/1     Running   0          39h

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/executor          ClusterIP   172.20.60.50     <none>        80/TCP    39h
service/router            ClusterIP   172.20.145.226   <none>        80/TCP    39h
service/storagesvc        ClusterIP   172.20.32.153    <none>        80/TCP    39h
service/webhook-service   ClusterIP   172.20.100.61    <none>        443/TCP   39h

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/buildermgr       1/1     1            1           39h
deployment.apps/executor         1/1     1            1           39h
deployment.apps/kubewatcher      1/1     1            1           39h
deployment.apps/mqtrigger-keda   1/1     1            1           39h
deployment.apps/router           1/1     1            1           39h
deployment.apps/storagesvc       1/1     1            1           39h
deployment.apps/timer            1/1     1            1           39h
deployment.apps/webhook          1/1     1            1           39h

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/buildermgr-58f45cf5f7       1         1         1       39h
replicaset.apps/executor-65cf66f689         1         1         1       39h
replicaset.apps/kubewatcher-6778dc9959      1         1         1       39h
replicaset.apps/mqtrigger-keda-849d9c8d8b   1         1         1       39h
replicaset.apps/router-6d9954f848           1         1         1       39h
replicaset.apps/storagesvc-5c8bb4fbc5       1         1         1       39h
replicaset.apps/timer-b496d64b9             1         1         1       39h
replicaset.apps/webhook-54f8479989          1         1         1       39h

Please show us the list of secrets and configmaps in the fission namespace.

Can you reproduce this in a clean Kubernetes cluster?

Have you turned on Terraform debug logging to get a better idea of what the apply that runs Helm is doing?
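
For reference, a minimal sketch of how you could capture that debug output (the log path here is just an example):

# Hypothetical example: capture Terraform/provider debug output during the apply
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform-apply.log   # example path
terraform apply

# Then look for what the helm provider logged for the release, e.g.:
grep -i fission ./terraform-apply.log | less

Provider debug output lands in the same log, so it should show what the helm provider attempted for the fission release.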

This was a clean K8s cluster that was created during the terraform apply.

Secrets/ConfigMaps:

$ k get secrets,cm -n fission
NAME                                   TYPE                 DATA   AGE
secret/fission-webhook-certs           Opaque               3      40h
secret/sh.helm.release.v1.fission.v1   helm.sh/release.v1   1      40h

NAME                         DATA   AGE
configmap/feature-config     1      40h
configmap/kube-root-ca.crt   1      40h

Also, here is some code from elsewhere in the configuration; not sure if it's useful.

# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.5"
  #…
}

module "management_node_group" {
  source  = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  version = "~> 19.5"
  #…
}

module "eks_addons" {
  source = "../../modules/aws/eks_addons"
  #…
}

# provider.tf
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

Well, there's the secret in which Helm is tracking your fission release, so it looks like everything is working as expected, except for the helm list operation.
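
One thing worth checking, as a hedged diagnostic sketch: by default helm list only shows releases that are deployed or failed, so a release stuck in another state (for example pending-install) will not show up without --all. The status is also recorded as a label on the release secret:

# List releases in every state, not just deployed/failed
helm ls -A -a

# The status Helm recorded is a label on the release secret
kubectl get secret sh.helm.release.v1.fission.v1 -n fission --show-labels

If the status label is anything other than deployed, that would explain why the release disappears from the default helm ls output.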

I wish that were all it is, but Terraform actually fails too, because it detects that the Helm chart is not installed. This blocks the Terraform run.

So I am thinking this is some kind of bug, but I am not sure where the bug is, since neither Terraform nor Helm can detect that the Helm chart was installed. What is the missing piece that was not installed, such that the chart is not detected by either system?
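
For completeness, this is roughly how I can check what Terraform itself has in state for that module (the exact resource address inside the module depends on its internals, so the address below is illustrative):

terraform state list | grep -i fission
# then inspect whatever address the module actually exposes, e.g.:
terraform state show 'module.fission.helm_release.this'   # address is a guess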

I do not think anyone else is going to be able to tell from the information disclosed so far.

Is there any chance you could put together a set of Terraform files and a Helm chart that you can share, and that reproduces the problem?

FYI - there’s only one system involved - Helm. terraform-provider-helm just embeds a copy of the Helm code.
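
Since the provider goes through the same release storage that Helm itself uses, you can read back exactly what it recorded by decoding the release secret. A sketch (Helm stores the release as base64-encoded gzipped JSON inside the secret, and the Kubernetes API base64-encodes that value again, hence the double decode; jq is assumed):

kubectl get secret sh.helm.release.v1.fission.v1 -n fission \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip -c | jq '.info.status'

That prints the status the provider recorded for the release, independent of what helm ls chooses to display.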

So what did the Terraform Helm provider do to install a Helm chart in a way that is not detectable by Helm?

You’re outright ignoring what I said: