TypeError: Cannot read properties of undefined (reading 'required_providers')

After upgrading cdktf to 0.17.3, my stack fails to synth with the error TypeError: Cannot read properties of undefined (reading 'required_providers')

To summarize:

const app = new App();
const stack = new TerraformStack(app, 'stack');
const eks1 = new EKSCluster(stack, 'eks1', {
  ...
});
const eks2 = new EKSCluster(stack, 'eks2', {
  ...
});
app.synth();

I originally created an app with a single stack and added multiple EKS clusters to that one stack. The problem is that I now have a stack called “stack” in cdktf.out that does not have “required_providers”, since it isn’t actually a terraform_stack.

The synthesized cdk.tf.json of “stack” is this:

{
  "//": {
    "metadata": {
      "backend": "local",
      "stackName": "anedot",
      "version": "0.17.3"
    },
    "outputs": {
    }
  }
}

I tried adding a provider to the “stack” TerraformStack. Doing that does allow me to run cdktf synth successfully; however, if I then try to deploy any of the eks stacks, cdktf sees each one as an entirely new stack and says it needs to destroy and recreate everything.

It was suggested that I look into the Terraform migrate documentation, but that does not apply to this situation. I am not trying to move resources from one stack to another; I am trying to move entire stacks under another stack.

The structure is currently
app → stack (no provider) → stack-eks1(with provider),stack-eks2(with provider)

But I need to go to either one of the following

  1. app → stack (with provider) → stack-eks1(with provider),stack-eks2(with provider)
  2. app → stack-eks1(with provider),stack-eks2(with provider)

The issue is that doing either will make cdktf think stack-eks1 needs to be destroyed and rebuilt entirely. I need to be able to update the higher-level stack/app while leaving stack-eks1 and stack-eks2 unchanged.

I wouldn’t expect adding a provider to a stack to cause any changes. That said, while nested stacks are theoretically supported, they have not been tested and bugs are definitely possible.

There are two main reasons I could see for stack-wide changes.
One would be the stored state getting lost. In the example stack given, you are using local state, which can lead to issues if the state file is deleted, or even if its name no longer matches the expected name.
The other would be the generated resource identifiers changing. This can happen quite easily when restructuring. It can be worked around by overriding the logical id per resource, or by overriding the algorithm at the stack level.
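To confirm that the identifiers are what changed, you can diff the logical ids in the synthesized output before and after the upgrade. A minimal sketch (the helper name is mine; it only assumes the standard Terraform JSON layout, where resources sit under `resource.<type>.<logical id>` in `cdktf.out/stacks/<stack>/cdk.tf.json`):

```typescript
import * as fs from "fs";

// Minimal shape of the synthesized Terraform JSON we care about.
interface CdktfJson {
  resource?: Record<string, Record<string, unknown>>;
}

// List every "<type>.<logical id>" address in a synthesized stack.
// Diffing this list between cdktf versions shows exactly which ids
// changed (and therefore what Terraform wants to destroy/recreate).
function listLogicalIds(synth: CdktfJson): string[] {
  const ids: string[] = [];
  for (const [type, instances] of Object.entries(synth.resource ?? {})) {
    for (const logicalId of Object.keys(instances)) {
      ids.push(`${type}.${logicalId}`);
    }
  }
  return ids.sort();
}

// Convenience wrapper for reading straight from cdktf.out.
function listLogicalIdsFromFile(path: string): string[] {
  return listLogicalIds(JSON.parse(fs.readFileSync(path, "utf8")));
}
```

For example, `listLogicalIdsFromFile("cdktf.out/stacks/eks1/cdk.tf.json")` run against the old and new synth output makes the rename visible before you touch any state.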

> This can be worked around by overriding the logical id per resource or also by overriding the algorithm at the stack level.

@jsteinich Could you explain how to do this? And perhaps give an example at the code level?
I do think this is my issue, but when I look at the tfstate file I can’t see how the logical id is being determined.

@jsteinich sorry to bother you, but any feedback would be great

Overriding per resource would look like:

const cluster = new EKSCluster(stack, 'eks1', { /* ... */ });
cluster.overrideLogicalId('my_resource_id');

Overriding at the stack level would involve subclassing TerraformStack and overriding the allocateLogicalId function. You’d likely remove the stack index check from the default implementation so that the generated ids match your existing resource ids.
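A minimal sketch of the stack-level idea. The id format below is an assumption for illustration — cdktf’s real default hashes the construct path, so you’d need to match whatever ids appear in your previous cdk.tf.json/tfstate rather than copy this verbatim:

```typescript
// Sketch: derive a logical id from a construct path like "stack/eks1/cluster",
// dropping the leading stack segment (the "stack index check" mentioned above).
// The exact join format is an assumption; align it with your existing ids.
function stableLogicalId(constructPath: string): string {
  const parts = constructPath.split("/").filter((p) => p.length > 0);
  return parts.slice(1).join("_");
}

// Wiring it into a stack would look roughly like this (requires cdktf,
// shown as a comment to keep the sketch self-contained):
//
// import { TerraformStack, TerraformElement } from "cdktf";
// import { Node } from "constructs";
//
// class StableIdStack extends TerraformStack {
//   protected allocateLogicalId(element: TerraformElement | Node): string {
//     const node = element instanceof Node ? element : element.node;
//     return stableLogicalId(node.path);
//   }
// }
```

With that in place, every resource under the stack gets its id from the construct path alone, so moving the stack around in the tree no longer renames resources.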

I actually fixed the issue by running terraform state mv (as `terraform state mv 'OLD_ADDRESS' 'NEW_ADDRESS'`) on each resource. For some reason the id naming convention changed drastically when upgrading to the new version of cdktf/terraform; it might be a bug with nested stacks.

There was (is) a longstanding difference between running a cdktf app directly vs running it through the cli. As a result, some pieces that had been deprecated for several versions and subsequently removed inadvertently caught users off guard.