State file pulling data from another folder

Good afternoon,

I am new to Terraform and am trying to figure out a problem that has me stumped. We currently have different workloads (Dev, Stage, Prod, Cyber, Admin, etc.), and each workload has its own state file. Back before I knew anything about Terraform, I noticed that our Git didn't have a repo for Cyber, so I thought I could just copy-paste it from Admin and it would all work out, with just the different .tf files. I did adjust our tfvars to match what AWS had, though. Fast forward to today: I am trying to update the state file for Cyber (the folder I copied the Terraform files into from Admin), and for some reason when I run terraform init to get the state, it is showing information from the Admin WL (VPC, SGs, and all). They are in different folders, though. I am lost at this point. Any and all help would be greatly appreciated.

Please have a read through the Guide to asking for help in this forum.

In particular, you need to show us details of what you did and what happened as a result, not just give us a brief recap that may conceal useful details.

I am sorry about the confusion.
Below is an image of how our Git is set up at the moment.
[screenshot of our Git repository layout]
Each WL has its own state, vars, data, and main .tf files. We use separate .tf files to represent each instance that we create with TF (as in, there are no instances in the main.tf). See below.


A while back, we did not have that cyber directory, so I created it by pulling the main, state, vars, and data .tf files from the Admin directory and pasting them into cyber, not knowing any better. I adjusted the vars and data to reflect what AWS has for that WL.

Present day, I am attempting to import resources because our team members created a lot of them outside of TF, and we want to track them and put the state in S3 to be able to collaborate.
Right now, when I run terraform init in the cyber folder on our server, TRFM, this is what happens:


I then run terraform validate to verify that everything is good to go, and it shows the configuration is valid.

I run terraform plan and it gets stuck; after waiting 5 minutes I stop it.


I then use the -lock=false option and it gets stuck again. I then remove the TEST file just to get a state file that shows the information in it.

Here is the first result I received that seemed suspicious to me. Below is what was shown after my plan:

For the information above, the fences-default data source references the terraform.tfvars value, which lists the SG with the proper Name in our AWS for this workload. I deleted that data reference, and this was the response after another terraform plan:
[screenshot of the plan output]

I did the apply to get the state file; terraform state list shows the following:

That state file references another WL's VPC ID, the Admin one. Everything else in that state file also shows the Admin workload's infrastructure.

Things to note: we have different kmskeyids per workload, which are set in the tfvars file, along with the variables that exist only in that workload (WL). I also go to the other WLs in their respective folders, run a terraform plan, and it shows that it cannot find the security group, just like in Cyber. I am assuming it is also trying to get the information from the Admin WL. The main.tf files do not change between WLs, but each WL has its own copy. I was using this setup for terraform import on the Admin workload, and it worked flawlessly for that WL (an example of what I ran is sketched below).
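To illustrate, the imports I ran looked roughly like this (the resource name, attributes, and instance ID here are made up, not our real ones): a placeholder resource block in its own .tf file, then the import command.

resource "aws_instance" "example_server" {
  # Placeholder attributes; after the import, a terraform plan shows
  # any differences between this block and the real instance in AWS.
  ami           = var.ami_lin
  instance_type = "t3.medium"
}

# terraform import aws_instance.example_server i-0123456789abcdef0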

main.tf
[screenshot of main.tf]

Please copy & paste code & output rather than using screenshots, as screenshots are much harder to deal with.

Are you using remote state? If so, when you created the new repository did you also adjust the location where you store the remote state? It sounds a bit like this wasn't done and therefore two repos are sharing the same state…
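If you are using a remote backend, each repository/workload needs its own state location. For example, with an S3 backend you would normally use a distinct key per workload (the bucket name and key below are just placeholders, not your real ones):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"       # placeholder bucket name
    key    = "cyber/terraform.tfstate"  # distinct key per workload
    region = "us-gov-east-1"
  }
}

If two repos point at the same bucket and key, they will read and write the same state.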


My files are as follows:
data.tf

 data "aws_subnet" "subnet_1" {
  filter {
    name = "tag:Alias"
    values = [var.subnet_1]
  }
}

 data "aws_subnet" "subnet_2" {
  filter {
    name = "tag:Alias"
    values = [var.subnet_2]
  }
}

data "aws_subnet" "subnet_3" {
  filter {
    name = "tag:Alias"
    values = [var.subnet_3]
  }
}

data "aws_security_group" "fences-connections" {
  filter {
    name = "tag:Name"
    values = [var.fences-connections]
  }
}

data "aws_security_group" "fences-default" {
  filter {
    name = "tag:Name"
    values = [var.fences-default]
  }
}

data "aws_vpc" "vpc" {
}

main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region  = "us-gov-east-1"
  profile = "default"
}

terraform.tfvars

aws_region = "us-east-1"
ami_lin = "ami-for-linux"
ami_win = "ami-for-windows"
fences_keypair = "fences-prod-ec2-keypair"
kmskeyid = "arn:kms-key"
network = "10.0.XXX"
iamrole_ec2 = "ec2"
iamrole_wlmv2 = "ec2-workload-manager"
wlname = "XXXCbr"
wlid = "XX"
invgrp = "cyber"
subnet_1 = "Private_Subnet_1"
subnet_2 = "Private_Subnet_2"
subnet_3 = "Private_Subnet_3"
fences-connections = "WL-SG"
fences-default = "custom-default-SG"
customernumber = "0204"

variables.tf

variable "aws_region" {
}

variable "ami_lin" {
}

variable "ami_win" {
}

variable "fences_keypair" {
}

variable "kmskeyid" {
}

variable "network" {
}

variable "iamrole_ec2" {
}

variable "iamrole_wlmv2" {
}

variable "wlname" {
}

variable "wlid" {
}

variable "invgrp" {
}

variable "subnet_1" {
}

variable "subnet_2" {
}

variable "subnet_3" {
}

variable "fences-connections" {
}

variable "fences-default" {
}

variable "customernumber" {
}

We are not using remote state yet; I need to get this figured out first. With only those files in the folder, I do a terraform apply to get/create a new state file, and the state it creates contains the information from another folder.
The only difference between these .tf files across workloads is the terraform.tfvars file, which contains the appropriate information for that workload.
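For reference, our main.tf has no backend block at all, so my understanding is that Terraform should just be using a local state file in whichever folder I run it from. As I understand it, that default is equivalent to something like this (a sketch only; we do not actually have this block):

terraform {
  backend "local" {
    # The default local backend writes terraform.tfstate in the
    # working directory where terraform is run.
    path = "terraform.tfstate"
  }
}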