EKS terraform apply failing - Error: unexpected state 'FAILED', wanted target 'ACTIVE'. last error: %!s(<nil>)

I am using the Terraform AWS provider to provision an EKS cluster, and the apply fails with this error:

Error: unexpected state 'FAILED', wanted target 'ACTIVE'. last error: %!s(<nil>)

I also enabled TF_LOG=TRACE, but that is not showing anything helpful either. Has anyone got ideas on what the issue is and how to debug it?
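Since the Terraform error itself carries no detail (the %!s(<nil>) looks like just a nil error being formatted), the failure reason apparently has to be pulled from the AWS side. A minimal sketch of what could be queried, assuming the AWS CLI is configured for the same account and region (the cluster name below is a placeholder):

# Placeholder name -- substitute the cluster_name passed to the eks module.
aws eks describe-cluster --name my-eks-cluster --query 'cluster.status'

# The CreateCluster API call, including any validation error it returned,
# should show up in CloudTrail:
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateCluster \
  --max-results 5

The relevant trace output from the failed apply is below.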

module.eks.aws_eks_cluster.this[0]: Still creating… [14m11s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [14m21s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [14m31s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [14m41s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [14m51s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [15m1s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [15m11s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [15m21s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [15m31s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [15m41s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [15m51s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [16m1s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [16m11s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [16m21s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [16m31s elapsed]
module.eks.aws_eks_cluster.this[0]: Still creating… [16m41s elapsed]

Error: unexpected state 'FAILED', wanted target 'ACTIVE'. last error: %!s(<nil>)

on .terraform/modules/eks/terraform-aws-modules-terraform-aws-eks-908c656/cluster.tf line 9, in resource "aws_eks_cluster" "this":
9: resource "aws_eks_cluster" "this" {

module.eks.aws_eks_cluster.this[0]: Still creating… [30m40s elapsed]
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.cluster_version” is waiting for “module.eks.aws_eks_cluster.this[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.cluster_id” is waiting for “module.eks.aws_eks_cluster.this[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.aws_security_group_rule.workers_ingress_cluster_https[0]” is waiting for “module.eks.local.worker_security_group_id”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “helm_release.traefik-devint” is waiting for “kubernetes_namespace.devint”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.kubeconfig” is waiting for “module.eks.data.template_file.kubeconfig[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.data.template_file.userdata[1]” is waiting for “module.eks.aws_eks_cluster.this[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.local.worker_security_group_id” is waiting for “module.eks.aws_security_group.workers[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “kubernetes_namespace.devint (prepare state)” is waiting for “provider.kubernetes”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.kubernetes_config_map.aws_auth (prepare state)” is waiting for “provider.kubernetes”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.kubeconfig_filename” is waiting for “module.eks.local_file.kubeconfig[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “helm_release.kube2iam (prepare state)” is waiting for “provider.helm”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.data.null_data_source.node_groups[0]” is waiting for “module.eks.kubernetes_config_map.aws_auth[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.config_map_aws_auth” is waiting for “module.eks.kubernetes_config_map.aws_auth[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.cluster_certificate_authority_data” is waiting for “module.eks.aws_eks_cluster.this[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “data.helm_repository.stable (prepare state)” is waiting for “provider.helm”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.data.template_file.userdata[0]” is waiting for “module.eks.aws_eks_cluster.this[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.workers_asg_names” is waiting for “module.eks.aws_autoscaling_group.workers[1]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “output.kubectl_config” is waiting for “module.eks.output.kubeconfig”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.local_file.kubeconfig[0]” is waiting for “module.eks.data.template_file.kubeconfig[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.random_pet.workers[0]” is waiting for “module.eks.aws_launch_configuration.workers[1]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “data.aws_eks_cluster.cluster” is waiting for “module.eks.output.cluster_id”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “kubernetes_namespace.devint” is waiting for “module.eks.output.workers_user_data”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “helm_release.aws-cluster-autoscaler (prepare state)” is waiting for “provider.helm”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.aws_autoscaling_group.workers[0]” is waiting for “module.eks.random_pet.workers[1]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.null_resource.wait_for_cluster[0]” is waiting for “module.eks.aws_eks_cluster.this[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.aws_security_group_rule.workers_ingress_self[0]” is waiting for “module.eks.local.worker_security_group_id”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “helm_release.traefik-devint (prepare state)” is waiting for “provider.helm”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “helm_release.external-dns (prepare state)” is waiting for “provider.helm”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.cluster_oidc_issuer_url” is waiting for “module.eks.aws_eks_cluster.this[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “provisioner.local-exec (close)” is waiting for “module.eks.null_resource.wait_for_cluster[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.cluster_arn” is waiting for “module.eks.aws_eks_cluster.this[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.module.node_groups.var.cluster_name” is waiting for “module.eks.data.null_data_source.node_groups[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “provider.helm (close)” is waiting for “helm_release.kube2iam”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “provider.local (close)” is waiting for “module.eks.local_file.kubeconfig[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.aws_security_group.workers[0]” is waiting for “module.eks.aws_eks_cluster.this[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “output.cluster_endpoint” is waiting for “module.eks.output.cluster_endpoint”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “provider.aws (close)” is waiting for “module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “root” is waiting for “meta.count-boundary (EachMode fixup)”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “provider.kubernetes” is waiting for “data.aws_eks_cluster.cluster”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.workers_asg_arns” is waiting for “module.eks.aws_autoscaling_group.workers[1]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.worker_security_group_id” is waiting for “module.eks.local.worker_security_group_id”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “meta.count-boundary (EachMode fixup)” is waiting for “data.helm_repository.stable (prepare state)”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.aws_security_group_rule.workers_ingress_cluster[0]” is waiting for “module.eks.local.worker_security_group_id”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.workers_user_data” is waiting for “module.eks.data.template_file.userdata[1]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “provider.kubernetes (close)” is waiting for “kubernetes_namespace.devint”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]” is waiting for “module.eks.local.worker_security_group_id”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “provider.helm” is waiting for “data.aws_eks_cluster.cluster”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.output.cluster_endpoint” is waiting for “module.eks.aws_eks_cluster.this[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “output.config_map_aws_auth” is waiting for “module.eks.output.config_map_aws_auth”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.random_pet.workers[1]” is waiting for “module.eks.aws_launch_configuration.workers[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “provider.null (close)” is waiting for “module.eks.data.null_data_source.node_groups[0]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.aws_autoscaling_group.workers[1]” is waiting for “module.eks.random_pet.workers[1]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “helm_release.aws-cluster-autoscaler” is waiting for “module.eks.output.cluster_oidc_issuer_url”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.aws_launch_configuration.workers[1]” is waiting for “module.eks.data.template_file.userdata[1]”
2020/03/04 10:06:18 [TRACE] dag/walk: vertex “module.eks.data.template_file.kubeconfig[0]” is waiting for “module.eks.aws_eks_cluster.this[0]”
2020/03/04 10:06:19 [TRACE] dag/walk: vertex “helm_release.external-dns” is waiting for “module.eks.output.config_map_aws_auth”
2020/03/04 10:06:19 [TRACE] dag/walk: vertex “module.eks.kubernetes_config_map.aws_auth[0]” is waiting for “module.eks.null_resource.wait_for_cluster[0]”
2020/03/04 10:06:19 [TRACE] dag/walk: vertex “module.eks.aws_launch_configuration.workers[0]” is waiting for “module.eks.data.template_file.userdata[1]”
2020/03/04 10:06:19 [TRACE] dag/walk: vertex “provider.template (close)” is waiting for “module.eks.data.template_file.userdata[0]”
2020-03-04T10:06:19.702-0500 [DEBUG] plugin.terraform-provider-aws_v2.51.0_x4: 2020/03/04 10:06:19 [ERROR] WaitForState exceeded refresh grace period
2020/03/04 10:06:19 [DEBUG] module.eks.aws_eks_cluster.this[0]: apply errored, but we’re indicating that via the Error pointer rather than returning it: timeout while waiting for state to become ‘ACTIVE’ (last state: ‘CREATING’, timeout: 30m0s)
2020/03/04 10:06:19 [TRACE] module.eks: eval: *terraform.EvalMaybeTainted
2020/03/04 10:06:19 [TRACE] EvalMaybeTainted: module.eks.aws_eks_cluster.this[0] encountered an error during creation, so it is now marked as tainted
2020/03/04 10:06:19 [TRACE] module.eks: eval: *terraform.EvalWriteState
2020/03/04 10:06:19 [TRACE] EvalWriteState: recording 5 dependencies for module.eks.aws_eks_cluster.this[0]
2020/03/04 10:06:19 [TRACE] EvalWriteState: writing current state object for module.eks.aws_eks_cluster.this[0]
2020/03/04 10:06:19 [TRACE] module.eks: eval: *terraform.EvalApplyProvisioners
2020/03/04 10:06:19 [TRACE] EvalApplyProvisioners: aws_eks_cluster.this[0] is tainted, so skipping provisioning
2020/03/04 10:06:19 [TRACE] module.eks: eval: *terraform.EvalMaybeTainted
2020/03/04 10:06:19 [TRACE] EvalMaybeTainted: module.eks.aws_eks_cluster.this[0] was already tainted, so nothing to do
2020/03/04 10:06:19 [TRACE] module.eks: eval: *terraform.EvalWriteState
2020/03/04 10:06:19 [TRACE] EvalWriteState: recording 5 dependencies for module.eks.aws_eks_cluster.this[0]
2020/03/04 10:06:19 [TRACE] EvalWriteState: writing current state object for module.eks.aws_eks_cluster.this[0]
2020/03/04 10:06:19 [TRACE] module.eks: eval: *terraform.EvalIf
2020/03/04 10:06:19 [TRACE] module.eks: eval: *terraform.EvalIf
2020/03/04 10:06:19 [TRACE] module.eks: eval: *terraform.EvalWriteDiff
2020/03/04 10:06:19 [TRACE] module.eks: eval: *terraform.EvalApplyPost
2020/03/04 10:06:19 [ERROR] module.eks: eval: *terraform.EvalApplyPost, err: timeout while waiting for state to become ‘ACTIVE’ (last state: ‘CREATING’, timeout: 30m0s)
2020/03/04 10:06:19 [ERROR] module.eks: eval: *terraform.EvalSequence, err: timeout while waiting for state to become ‘ACTIVE’ (last state: ‘CREATING’, timeout: 30m0s)
2020/03/04 10:06:19 [TRACE] [walkApply] Exiting eval tree: module.eks.aws_eks_cluster.this[0]
2020/03/04 10:06:19 [TRACE] vertex “module.eks.aws_eks_cluster.this[0]”: visit complete
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.cluster_version” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.cluster_oidc_issuer_url” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.cluster_endpoint” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.data.template_file.userdata[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.null_resource.wait_for_cluster[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.data.template_file.userdata[1]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.cluster_certificate_authority_data” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.aws_security_group.workers[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “provisioner.local-exec (close)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.workers_user_data” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.cluster_arn” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.data.template_file.kubeconfig[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.cluster_id” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.kubeconfig” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “provider.template (close)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “output.cluster_endpoint” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “output.kubectl_config” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.local.worker_security_group_id” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.aws_security_group_rule.workers_egress_internet[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.worker_security_group_id” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.aws_security_group_rule.workers_ingress_self[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “data.aws_eks_cluster_auth.cluster” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.aws_security_group_rule.cluster_https_worker_ingress[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.aws_security_group_rule.workers_ingress_cluster_https[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.aws_launch_configuration.workers[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.aws_launch_configuration.workers[1]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “data.aws_eks_cluster.cluster” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.random_pet.workers[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.local_file.kubeconfig[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.random_pet.workers[1]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “provider.kubernetes” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “provider.helm” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.aws_security_group_rule.workers_ingress_cluster[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “provider.local (close)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “helm_release.kube2iam (prepare state)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “kubernetes_namespace.devint (prepare state)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.aws_autoscaling_group.workers[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.kubeconfig_filename” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “provider.random (close)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “helm_release.traefik-devint (prepare state)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.kubernetes_config_map.aws_auth (prepare state)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “helm_release.aws-cluster-autoscaler (prepare state)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “data.helm_repository.stable (prepare state)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “helm_release.external-dns (prepare state)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.kubernetes_config_map.aws_auth[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.aws_autoscaling_group.workers[1]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.workers_asg_names” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.data.null_data_source.node_groups[0]” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.workers_asg_arns” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.output.config_map_aws_auth” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “provider.aws (close)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “module.eks.module.node_groups.var.cluster_name” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “provider.null (close)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “output.config_map_aws_auth” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “kubernetes_namespace.devint” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “helm_release.aws-cluster-autoscaler” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “helm_release.traefik-devint” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “helm_release.kube2iam” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “helm_release.external-dns” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “provider.kubernetes (close)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “provider.helm (close)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “meta.count-boundary (EachMode fixup)” errored, so skipping
2020/03/04 10:06:19 [TRACE] dag/walk: upstream of “root” errored, so skipping
2020/03/04 10:06:19 [TRACE] statemgr.Filesystem: have already backed up original terraform.tfstate to terraform.tfstate.backup on a previous write
2020/03/04 10:06:19 [TRACE] statemgr.Filesystem: state has changed since last snapshot, so incrementing serial to 324
2020/03/04 10:06:19 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate

Error: timeout while waiting for state to become 'ACTIVE' (last state: 'CREATING', timeout: 30m0s)

on .terraform/modules/eks/terraform-aws-modules-terraform-aws-eks-908c656/cluster.tf line 9, in resource "aws_eks_cluster" "this":
9: resource "aws_eks_cluster" "this" {

2020/03/04 10:06:19 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2020/03/04 10:06:19 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate using fcntl flock
2020-03-04T10:06:19.719-0500 [DEBUG] plugin: plugin process exited: path=/Users/personaleks/Public/aws/terraform-dev/.terraform/plugins/darwin_amd64/terraform-provider-null_v2.1.2_x4 pid=47632
2020-03-04T10:06:19.719-0500 [DEBUG] plugin: plugin process exited: path=/Users/personaleks/Public/aws/terraform-dev/.terraform/plugins/darwin_amd64/terraform-provider-local_v1.4.0_x4 pid=47635
2020-03-04T10:06:19.719-0500 [DEBUG] plugin: plugin exited
2020-03-04T10:06:19.719-0500 [DEBUG] plugin: plugin exited
2020-03-04T10:06:19.720-0500 [DEBUG] plugin: plugin process exited: path=/Users/personaleks/Public/aws/terraform-dev/.terraform/plugins/darwin_amd64/terraform-provider-random_v2.2.1_x4 pid=47633
2020-03-04T10:06:19.720-0500 [DEBUG] plugin: plugin exited
2020-03-04T10:06:19.720-0500 [DEBUG] plugin: plugin process exited: path=/Users/personaleks/Public/aws/terraform-dev/.terraform/plugins/darwin_amd64/terraform-provider-template_v2.1.2_x4 pid=47630
2020-03-04T10:06:19.720-0500 [DEBUG] plugin: plugin exited
2020-03-04T10:06:19.723-0500 [DEBUG] plugin: plugin process exited: path=/Users/personaleks/Public/aws/terraform-dev/.terraform/plugins/darwin_amd64/terraform-provider-aws_v2.51.0_x4 pid=47631
2020-03-04T10:06:19.723-0500 [DEBUG] plugin: plugin exited
2020-03-04T10:06:19.724-0500 [DEBUG] plugin: plugin process exited: path=/Users/personaleks/Public/aws/terraform-dev/terraform pid=47628
2020-03-04T10:06:19.724-0500 [DEBUG] plugin: plugin exited
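The later part of the log shows a different failure: the aws_eks_cluster create timed out at the 30-minute limit while the cluster was still CREATING, and the resource was marked tainted, so the next apply will want to destroy and recreate it. A rough sketch of one recovery path, assuming the AWS console shows the cluster did eventually reach ACTIVE (the resource address is taken from the log above):

# Confirm the cluster is in state (it should show as tainted in the plan):
terraform state list | grep aws_eks_cluster

# If the cluster finished creating on the AWS side, keep it instead of
# recreating it, then re-run the apply:
terraform untaint 'module.eks.aws_eks_cluster.this[0]'
terraform apply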

Hello.

I just got the same error when updating through Terraform. It works fine through the AWS console, though.

Did you ever find a solution? :thinking:

Cheers!