Kubernetes token using exec plugin for the Helm provider

I am running Terraform inside a pod that uses STS AssumeRole to authenticate with AWS. Below are the provider versions:
aws v3.57.0
helm v2.3.0
kubernetes v2.4.1

EKS version: 1.21

The Terraform configuration has AWS resources as well as a Helm release. The Helm provider is configured to authenticate using an EKS token.

When I use the Helm provider configuration below, terraform apply succeeds:

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

However, when the Helm provider is configured to use the exec plugin, terraform apply fails with: Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
    }
  }
}

I need to use the exec plugin, because the token from the data source remains valid only for a short while, whereas the exec plugin fetches a fresh token each time the Kubernetes API is invoked.
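For reference, my understanding is that the exec block above should be running the equivalent of the following CLI call from inside the pod (the cluster name here is a placeholder for data.aws_eks_cluster.cluster.name), which may help reproduce the failure outside Terraform:

```shell
# Fetch a short-lived EKS token the same way the exec plugin does.
# "my-cluster" is a placeholder for the real cluster name.
aws eks get-token --cluster-name my-cluster
```

If this command returns an ExecCredential JSON when run under the same assumed role, the exec plugin should in principle have working credentials too.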