Detaching a root volume after instance creation

Hi,
This is regarding Terraform with AWS. I am trying to create a volume from a snapshot and attach it as the root volume.

Is there a way to detach the root volume after the instance is created and then use a volume attachment to attach the new volume as the root volume?

Please let me know if there are options to do this.

data "aws_ebs_snapshot" "test" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "tag:Name"
    values = ["Prakash-test-snap"]
  }
}

######aws_instance######
resource "aws_instance" "test-dr-instance" {
  count                  = 1
  ami                    = "${data.aws_ami.test-ami.id}"
  instance_type          = "m5.large"
  key_name               = "${var.key_pair_name}"
  vpc_security_group_ids = ["${var.security_id}"]
  subnet_id              = "${var.subnet_id}"
  availability_zone      = "${var.az2}"

  tags = {
    Name        = "test-dr-inst"
    Customer    = "Prakash-test"
    Environment = "test"
  }

  private_ip                  = "10.246.10.1"
  associate_public_ip_address = false

  lifecycle {
    ignore_changes = ["ami"]
  }
}

resource "aws_volume_attachment" "testing-dr" {
  device_name = "/dev/sda1"
  volume_id   = "${aws_ebs_volume.test-dr.id}"
  instance_id = "${aws_instance.test-dr-instance.id}"
}

####Ebs_volume####
resource "aws_ebs_volume" "test-dr" {
  availability_zone = "${var.az2}"
  snapshot_id       = "${data.aws_ebs_snapshot.test.id}"
  type              = "gp2"

  tags = {
    Name    = "test-dr"
    Project = "${var.Project}"
  }
}

The above configuration gives an error that /dev/sda1 is already in use.

Hi @bnprakash,

The root volume for an instance is normally created automatically based on an EBS snapshot attached to the selected AMI. Swapping to a different root volume while the instance is already running sounds like a strange requirement that would in effect be like unplugging the main hard drive of your server while it’s running. In principle you can set things up to make that work (making sure everything is loaded into RAM already, for example) but it’s not a common or straightforward operation.
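
For illustration, the normal way to have an instance boot directly from a particular snapshot is to register an AMI whose root block device points at that snapshot, rather than swapping volumes afterwards. I haven't tested this against your setup, but a rough sketch based on the snapshot data source in your configuration (the resource name and device name here are just examples) would be:

resource "aws_ami" "test-dr-ami" {
  # Register an AMI whose root device is created from the snapshot, so the
  # instance boots from that volume directly instead of swapping it in later.
  name                = "test-dr-ami"
  virtualization_type = "hvm"
  root_device_name    = "/dev/sda1"

  ebs_block_device {
    device_name = "/dev/sda1"
    snapshot_id = "${data.aws_ebs_snapshot.test.id}"
    volume_type = "gp2"
  }
}

The instance's ami argument would then reference "${aws_ami.test-dr-ami.id}" instead of the existing data.aws_ami lookup.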

Perhaps instead you could consider creating an AMI that has a very simple root image which is set up to poll until a second volume becomes attached dynamically and then, once it is attached, use pivot_root to swap to that new volume as the root filesystem while leaving the original volume still attached but now idle.

The above process is similar to how a Linux system might boot from an initrd, aside from the extra requirement of waiting for the secondary volume to be dynamically attached. It’s quite an advanced system configuration that will require some detailed understanding of the Linux boot process to pull off, so I can’t give specific guidance on it here, but I suspect it’s the closest you could get to your intended goal on EC2. Note that this is now more of a Linux configuration question than a Terraform question; you can use Terraform to attach the volume but what happens after that is up to the software running in the EC2 instance.

If you think about your underlying goal rather than this specific strategy to achieve it then you might find a less complicated solution. If you can find a solution that doesn’t require swapping out the root filesystem at all (e.g. one that can use a secondary filesystem in a more “normal” way) then I expect you’ll be able to achieve that more easily with existing components, rather than having to develop a new base AMI yourself.
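
For example, a minimal variation of the aws_volume_attachment in your configuration that attaches the restored volume as a secondary device, rather than trying to take over /dev/sda1, might look like this (the device name is just an example):

resource "aws_volume_attachment" "testing-dr-secondary" {
  # Attach the snapshot-based volume as an additional disk; inside the
  # instance it shows up as a secondary drive that can be mounted normally.
  device_name = "/dev/sdf"
  volume_id   = "${aws_ebs_volume.test-dr.id}"
  instance_id = "${aws_instance.test-dr-instance.id}"
}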

Thank you @apparentlymart for the quick reply.

I am trying this on a Windows instance.
I have my lifecycle configured for snapshots, so I want to check whether this option can help me instead of creating an AMI and using that.

I created an instance with Terraform, detached the volume manually, and ran the configuration again, and that works perfectly.

But I am working on automating all of the steps, so please share any ideas you have.
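
One idea, as an untested sketch: the stop/detach steps could be scripted with a local-exec provisioner that runs before the attachment. This assumes the AWS CLI is installed and configured wherever Terraform runs, and the references follow the resource names above:

resource "null_resource" "detach_original_root" {
  # Stop the instance, then detach the root volume that was created from the
  # AMI, so that /dev/sda1 is free for the snapshot-based volume.
  provisioner "local-exec" {
    command = <<-EOT
      aws ec2 stop-instances --instance-ids ${aws_instance.test-dr-instance.id}
      aws ec2 wait instance-stopped --instance-ids ${aws_instance.test-dr-instance.id}
      aws ec2 detach-volume --volume-id ${aws_instance.test-dr-instance.root_block_device.0.volume_id}
      aws ec2 wait volume-available --volume-ids ${aws_instance.test-dr-instance.root_block_device.0.volume_id}
    EOT
  }
}

The aws_volume_attachment would then need depends_on = ["null_resource.detach_original_root"] so the attachment only runs after the original root volume has been detached.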

Hi @apparentlymart,
my case is different but also related to the root device. I also asked on the terraform aws forum (#14955), but so far nobody has helped me.

This case, like @bnprakash's, concerns the root device, but we need a parameter to tell Terraform to re-attach the original root device after re-creation (I think before starting the EC2 instance), so that for every destroy/create cycle caused by other EC2 configuration changes that force the resource to be replaced, the root device persists. I think Terraform could do that fairly easily, since it already has all the IDs it needs, and I can't see any drawback.

This is particularly useful for Windows machines, which tend to save some data on the root (C:) device even when instructed to install on a secondary device (E:, …).

Am I going about this the wrong way? Are there other viable solutions?