> terragrunt apply
Acquiring state lock. This may take a few moments...
Releasing state lock. This may take a few moments...
ERRO[0014] 1 error occurred:
* exit status 1
I don’t use terragrunt myself - and there are no details in the error output at all to explain what’s going wrong. You would be able to draw on a larger pool of people who could possibly answer, if you could remove terragrunt from the example entirely, and show an error from terraform itself.
You have used count in your module block. That means module.eks_infra is now a list. Expressions such as module.eks_infra.cluster_endpoint are therefore now wrong, and need to be written as module.eks_infra[0].cluster_endpoint.
The expression module.eks_infra.*.cluster_name is also wrong in context, as it will result in a list, and a nested list makes no sense in a list of command line arguments. That needs to use the [0] style too.
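To illustrate the indexing issue above, here is a minimal sketch (the module source path and output names are assumptions based on the thread, not the poster's actual code):

```hcl
# Because count is set, module.eks_infra is a LIST of module instances.
module "eks_infra" {
  source = "./modules/eks_infra"
  count  = var.enable_eks ? 1 : 0
}

# References to its outputs must therefore be indexed:
#   module.eks_infra.cluster_endpoint      # error: no such attribute on a list
#   module.eks_infra[0].cluster_endpoint   # correct when count = 1
#
# The splat form returns a list, which is usually not what you want
# in a single-valued context such as a command-line argument:
#   module.eks_infra.*.cluster_name        # a list of names, e.g. ["my-cluster"]
```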
On top of these issues, I don’t understand what you’re trying to do here - you’re configuring your kubernetes provider with both a token and an exec block?
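For reference, a hedged sketch of how the provider is usually configured one way or the other (the output names cluster_endpoint, cluster_ca, and cluster_name are assumptions for illustration):

```hcl
provider "kubernetes" {
  host                   = module.eks_infra[0].cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_infra[0].cluster_ca)

  # Option A: fetch short-lived credentials at plan/apply time via exec.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks_infra[0].cluster_name]
  }

  # Option B: a static token (mutually exclusive with exec in practice;
  # pick one credential source, not both).
  # token = data.aws_eks_cluster_auth.this.token
}
```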
It is also not clear what you expect to happen re Kubernetes access when var.enable_eks is false.
Lastly, I must warn you, trying to create a Kubernetes cluster, and manipulate objects inside that cluster via the Kubernetes API, in the same Terraform configuration is subject to lots of caveats, and IMO generally best avoided entirely.
Unfortunately I can’t unuse terragrunt as it’s a shared project which I just joined.
Regarding the * comment, I switched to [0] and I’m still getting the same results…
About what I’m trying to achieve:
I have a module (let’s say its name is main) which “calls” other modules, e.g. the rds, eks_infra and ecs modules.
The main module is given some enable/disable flags, e.g. enable_rds and enable_eks, to conditionally enable its child modules.
My issue is how I should configure the kubernetes provider if it’s dependent on the eks_infra module.
Let’s say sometimes var.enable_eks would be true → I want to have the eks_infra module.
And sometimes var.enable_eks would be false → I DON’T want to have the eks_infra module.
I hope I made my intentions clear, once again thanks for your detailed comment
I totally understand, but unfortunately, as I’m a new contributor to this project, I can’t move too many things around.
I use the kubernetes provider to provision addons like karpenter, configure the Ingress class and so on, all of which require the kubernetes provider to be configured against the cluster created by the eks_infra module.
Oh… so when var.enable_eks is false, you have no use for the kubernetes provider at all? You’re just trying to feed it meaningless blank values, and will never make use of it?
That could be a problem. There’s no first-class supported Terraform way to do what you want to do, so you’re basically hoping to find some kind of stub configuration that is sufficient to convince the provider not to error. I’m not even sure if that’s possible.
I guess it might suffice to just set all of the provider settings to null in that case, similar to what you’re already doing.
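A sketch of that “all nulls when disabled” idea, to make the suggestion concrete (the output names cluster_endpoint, cluster_ca and cluster_token are hypothetical; as noted above, there is no guarantee the provider will accept this):

```hcl
# Hypothetical stub configuration: every argument falls back to null
# when var.enable_eks is false. This is NOT an officially supported
# pattern and the provider may still error at plan or apply time.
provider "kubernetes" {
  host                   = var.enable_eks ? module.eks_infra[0].cluster_endpoint : null
  cluster_ca_certificate = var.enable_eks ? base64decode(module.eks_infra[0].cluster_ca) : null
  token                  = var.enable_eks ? module.eks_infra[0].cluster_token : null
}
```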
Problems moving forward:
You didn’t address what I said about conflicting token and exec settings in your provider block.
I know nothing about terragrunt, so as long as it’s failing with a meaningless error, my recommendation is always going to be to try to reproduce the problem without it.
Tried that, meaning inserting null values into the provider based on the current module outputs.
As you mentioned, it did cause problems and unwanted behaviors…
We decided to move all the EKS-related resources (infra, entities, and providers) to a separate module, meaning no enable_eks flags → only apply or destroy to activate/deactivate the module.