Cleanup or upgrade is not working on Kubernetes; how do we fix this?

We installed Consul using the Helm chart as shown below.

helm install -f config.yaml --version "0.31.1" consul hashicorp/consul -n consul

Later, when we wanted to install a different version, we uninstalled Consul using this command:


helm uninstall consul -n consul

And then deleted the entire namespace:

kubectl delete ns consul
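As a side note, depending on the chart version, `helm uninstall` may leave CRDs (and custom resources blocked by finalizers) behind even after the namespace is deleted. A quick way to check for leftovers before reinstalling (the `consul` grep pattern is an assumption based on the chart's default naming) is:

```shell
# List any Consul CRDs that survived the uninstall
kubectl get crd | grep consul

# List any custom resources of those types still present in any namespace
kubectl get crd -o name | grep consul | cut -d/ -f2 \
  | xargs -I{} kubectl get {} --all-namespaces
```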

But when installing the same release (or any other release) on the same cluster after this cleanup, we get the error below.


helm install -f config.yaml --version "0.31.1" consul hashicorp/consul -n consul
helm : W1209 23:28:27.095858    3592 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use 
apiextensions.k8s.io/v1 CustomResourceDefinition
At line:1 char:1
+ helm install -f config.yaml --version "0.31.1" consul hashicorp/consu ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (W1209 23:28:27....ourceDefinition:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError
W1209 23:30:09.027146    5344 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1209 23:30:09.266524    5344 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1209 23:30:09.491669    5344 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1209 23:30:09.720970    5344 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1209 23:30:09.955124    5344 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1209 23:30:10.189302    5344 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1209 23:30:10.417455    5344 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1209 23:30:10.642720    5344 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Error: INSTALLATION FAILED: create: failed to create: the server was unable to return a response in the time allotted, but may still be processing the request (post secrets)
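Before retrying the install, it may be worth checking whether anything from the previous release is still registered; a sketch of such a check (the grep patterns are assumptions based on the chart's default resource names):

```shell
# Is any Helm release still tracked in any namespace?
helm list --all-namespaces

# Leftover cluster-scoped resources from the previous install
kubectl get crd | grep consul
kubectl get clusterrole,clusterrolebinding | grep consul
kubectl get mutatingwebhookconfiguration | grep consul
```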

My config file:


global:
  name: consul
  datacenter: dc1
  image: hashicorp/consul:1.10.0
  metrics:
    enabled: true
    enableGatewayMetrics: true
server:
  replicas: 1
  securityContext:
    runAsNonRoot: true
#  bootstrapExpect: 1
#  extraConfig: |
#    {
#      "telemetry": {
#        "prometheus_retention_time": "8h",
#        "disable_hostname": true
#      }
#    }
client:
  enabled: true
  securityContext:
    runAsNonRoot: true

#  extraConfig: |
#    {
#      "telemetry": {
#        "prometheus_retention_time": "1m",
#        "disable_hostname": true
#      }
#    }
controller:
  enabled: true
syncCatalog:
  enabled: true
  toConsul: true
  toK8S: false
  default: false
connectInject:
  enabled: true
  default: false
  envoyExtraArgs: "-l debug"
  metrics:
    defaultEnabled: true
    defaultEnableMerging: true
  transparentProxy:
    defaultEnabled: true
ui:
  enabled: true
  service:
    enabled: true
    type: NodePort
#  metrics:
#    enabled: true
#    provider: "prometheus"
#    baseURL: http://prometheus-server
ingressGateways:
  enabled: false
  securityContext:
    runAsNonRoot: true
  defaults:
    replicas: 1
    service:
      type: LoadBalancer
      ports:
        - port: 80
        - port: 443
meshGateway:
  enabled: false
  replicas: 1

Any suggestions?
You can try this on an AKS test environment and check it once for reference and more details.

Hey @ukreddy-erwin

Which version of Helm are you using? From the error message, it looks like the (Helm) server did not respond in time, which would suggest a pre-v3 Helm (Helm v3 has no server component). We now only support Helm v3 and higher.

I am using version 3 only.

I’m not able to reproduce the same error after following your steps. Note though that I had to kubectl create ns consul before the second install.

My helm version is:

$ helm version
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.5"}

The second installation worked and all pods are running and ready.

We have one cluster in AKS, where we deployed the Consul Helm chart in the consul namespace. It created many CRDs.

Then, using these CRDs, we internally created one more namespace, applicationns.

When we deleted Consul, it was deleted successfully.

Then, when we tried to delete applicationns, it was stuck in the Terminating state for a long time.

So we followed this link and force-deleted the namespace.

Now, when I run "kubectl get ns", the namespace is no longer listed, but:

kubectl get serviceintentions -n applicationns
NAME     SYNCED   LAST SYNCED   AGE
servi1   True     41d           42d
servi2   True     41d           42d
servi3   True     41d           42d

Please suggest how to clean these up; there are many custom resources like these, and none of them are getting deleted either.

If you’ve uninstalled Consul, then the only way to delete the custom resources is to use kubectl edit and change finalizers: [....] to an empty array. This will then allow them to be deleted by Kubernetes.
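A non-interactive way to do the same thing is kubectl patch. This sketch clears the finalizers on every serviceintentions resource in the namespace (the namespace and resource type match the output above; other Consul CRD types would need the same treatment):

```shell
# Clear finalizers so Kubernetes can garbage-collect the resources
for r in $(kubectl get serviceintentions -n applicationns -o name); do
  kubectl patch "$r" -n applicationns --type merge \
    -p '{"metadata":{"finalizers":[]}}'
done

# Then delete them (resources already marked for deletion will vanish
# as soon as the finalizers are cleared)
kubectl delete serviceintentions --all -n applicationns
```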