I am sorry about the confusion.
Below is an image of how our Git repo is handled at the moment.
Each workload (WL) has its own state, vars, data, and main .tf files. We use separate .tf files to represent each instance that we create with Terraform (as in, there are no instances defined in main.tf). See below.
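As a rough sketch of that layout (the folder and file names below are illustrative, not our exact repo):

```
terraform/
├── admin/
│   ├── main.tf
│   ├── data.tf
│   ├── vars.tf
│   ├── terraform.tfvars
│   ├── instance-a.tf        # one .tf file per instance
│   └── terraform.tfstate
└── cyber/
    ├── main.tf              # same content as admin's main.tf
    ├── data.tf
    ├── vars.tf
    └── terraform.tfvars
```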
A while back, that cyber directory did not exist. I created it by pulling the main, state, vars, and data .tf files from another WL folder and pasting them into cyber, not knowing any better at the time. I adjusted the vars and data files to reflect what AWS has for that WL.
Present day, I am attempting to import because our team members created a lot of resources outside of Terraform, and we want to track them and move the state into an S3 backend so we can collaborate.
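For the S3 piece, what I'm aiming at is roughly a backend block like this (the bucket, key, region, and lock-table names are placeholders, not our real ones):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tfstate-bucket"   # placeholder bucket name
    key            = "cyber/terraform.tfstate"  # one key per WL
    region         = "us-east-1"                # placeholder region
    dynamodb_table = "example-tf-locks"         # optional: DynamoDB table for state locking
    encrypt        = true
  }
}
```

After adding a block like this, `terraform init -migrate-state` moves the existing local state into the bucket.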
Right now, when I run terraform init in the cyber folder on our server, TRFM, this is what happens:
I then run terraform validate to verify that everything is good to go, and it reports the configuration is valid.
I run terraform plan and it hangs; after waiting 5 minutes I stop it.
I then retry with the -lock=false option and it hangs again. I then remove the TEST file just to get a state file that shows the information in it.
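For reference, the commands involved there are roughly these (the lock ID is a placeholder; Terraform prints the real one in its lock error message):

```shell
# Plan hangs, so I kill it after ~5 minutes, then retry without taking the state lock
terraform plan -lock=false

# If a stale lock is the culprit, Terraform can release it explicitly;
# <LOCK_ID> is a placeholder copied from the lock error output
terraform force-unlock <LOCK_ID>
```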
Here is the first result I received that looked suspicious to me. Below is what populated after my plan:
For the output above, fences-default references the tfvars file, which lists the SG with the proper Name in our AWS for this workload. I deleted that data reference, and this was the response after another terraform plan:
I did the apply to get the state file; the terraform state list output is as follows:
In that state file, it references another WL's VPC ID, Admin's. Everything else in that state file also shows the Admin workload's infrastructure.
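If those Admin entries really don't belong in Cyber's state, my understanding is they can be dropped from state without touching the actual AWS resources (the resource address below is an example, not necessarily one from our state list):

```shell
# Back up the state file before doing any state surgery
cp terraform.tfstate terraform.tfstate.manual-backup

# Remove an entry from state WITHOUT destroying the real AWS resource
terraform state rm aws_vpc.main

# Verify what is left in state afterward
terraform state list
```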
Things to note:
- We have different KMS key IDs per workload, annotated in the tfvars file along with the variables that exist only in that workload (WL).
- When I go to the other WLs in their respective folders and run a terraform plan, they also show that the security group cannot be found, just like the Cyber one. I am assuming they are also trying to pull their information from the Admin WL.
- The main.tf files do not change across WLs, but each WL has its own copy. I was using it for terraform import on the Admin workload, and it worked flawlessly for that WL.
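For completeness, the import pattern that worked for Admin was along these lines (the resource address and SG ID here are placeholders, not our real values):

```shell
# Map an existing AWS security group into Terraform state;
# sg-0123456789abcdef0 is a placeholder ID
terraform import aws_security_group.fences_default sg-0123456789abcdef0

# Confirm the imported resource now appears in state
terraform state list
```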