EBS Volume Being Read as Root Device

One of my instances was preventing me from logging in, so I stopped it, detached its volume, spun up a new instance, and attached the old volume. The problem is, the old volume is being treated as the root device. Because of this, I still cannot log in to the new instance (although I can if I don't attach the old volume).
Is there any way to fix this issue using the AWS Management Console?

It seems like you have attached your old volume at "/dev/sda1". Detach your old volume and attach it to "/dev/sdf" instead.
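If you prefer the command line over the console, the equivalent steps look roughly like this (a sketch with placeholder IDs; substitute your own):

    # Stop the instance first, since the old volume is currently
    # acting as its root device.
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf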

This is caused by the filesystems on the two volumes having the same label (the value returned by, e.g., e2label /dev/xvda1). The server correctly starts booting from the first volume, but the bootloader then sees the second volume carrying the label it expects for the root filesystem, and continues booting with the second volume as root. This is an OS-level behavior, not visible to the AWS infrastructure.
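You can verify this from inside the instance. A minimal sketch, assuming ext-family filesystems and that the second volume shows up as /dev/xvdf1 (device names vary by AMI):

    # Identical labels on both volumes confirm the conflict.
    sudo e2label /dev/xvda1
    sudo e2label /dev/xvdf1
    # Relabel the second volume so the bootloader stops mistaking it for root.
    sudo e2label /dev/xvdf1 oldroot

Note that if the old volume's /etc/fstab or bootloader configuration references the old label, booting from that volume later will need a matching adjustment.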
Workaround: don't attach the second volume until after the instance has booted. EBS volumes can be attached to an instance at any time -- they don't have to be present when the instance is started. After unmounting, they can also be detached at any time, with the instance still running.
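Once the instance is up and the second volume has been attached, the remaining steps happen inside the OS; a sketch assuming the volume appears as /dev/xvdf1:

    # Identify the newly attached device (names vary by AMI and virtualization).
    lsblk
    # Mount its filesystem somewhere other than /.
    sudo mkdir -p /mnt/oldroot
    sudo mount /dev/xvdf1 /mnt/oldroot
    # Unmount again before detaching the volume.
    sudo umount /mnt/oldroot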

To resolve this, I first made a snapshot of the old volume. I then created a new AMI using that snapshot, including the old volume as extra storage so that it is explicitly defined not to be treated as the root device.
I then created a new instance from that AMI and was finally able to log in. From there, I just mounted the volume.
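For reference, the CLI equivalent of building such an AMI is register-image with an explicit block device mapping. A sketch with placeholder snapshot IDs, where snap-aaaa1111 stands for a snapshot of a known-good root volume and snap-bbbb2222 for the snapshot of the old volume:

    # The old volume's snapshot is mapped to /dev/sdf, so it is
    # explicitly NOT the root device.
    aws ec2 register-image \
        --name "recovery-ami" \
        --root-device-name /dev/xvda \
        --virtualization-type hvm \
        --architecture x86_64 \
        --block-device-mappings \
            'DeviceName=/dev/xvda,Ebs={SnapshotId=snap-aaaa1111}' \
            'DeviceName=/dev/sdf,Ebs={SnapshotId=snap-bbbb2222}'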

Related

EC2 - New Instance Vs (remount) EBS-backed instance

I have an Ubuntu 14.04 EC2 instance running with an EBS volume
I regularly take snapshots
I launched a new Ubuntu 16.04 instance.
I detached the root volume
I created an EBS volume from the snapshot above
I re-attached the volume.
I see all the data, and my servers seem to work on the new instance, e.g. Mongo, app servers, etc.
My question is (other than app data):
What are the differences between the new instance and the instance launched via an existing EBS?
Is the existing-EBS-launched instance supposed to work like the old instance without any changes, out-of-the-box?
What are the differences between the new instance and the instance launched via an existing EBS?
Answer: First of all, understand what EBS is. In simple terms, it is a block storage volume for use with an Amazon EC2 instance.
So whenever you launch a new instance from an existing EBS volume, all the data and any manual changes you previously made on the disk are automatically reflected in the new instance, since you are using the same disk (block storage). It's only when you want some kind of modification, such as changing the key pair, that you detach the volume, make the modifications, and attach the volume (disk) again.
Is the existing-EBS-launched instance supposed to work like the old instance without any changes, out-of-the-box?
Answer: Yes, an instance launched from an existing EBS volume works like the old instance; it differs only by whatever modifications you applied to the new instance. Suppose while launching you changed the instance type or key pair, or attached a different security group: those changes will be reflected, while all the manual work done on the disk remains the same.
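Expressed as CLI calls, the workflow from the question looks roughly like this (all IDs and the Availability Zone are placeholders; the new volume must be created in the same AZ as the instance, and the device name must match the AMI's root device name, often /dev/sda1 on Ubuntu):

    # Create a volume from the snapshot, in the instance's AZ.
    aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
        --availability-zone us-east-1a
    # Stop the new instance, swap its root volume, then start it again.
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    aws ec2 detach-volume --volume-id vol-ORIGINAL-ROOT
    aws ec2 attach-volume --volume-id vol-FROM-SNAPSHOT \
        --instance-id i-0123456789abcdef0 --device /dev/sda1
    aws ec2 start-instances --instance-ids i-0123456789abcdef0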

Reattaching an EBS volume to a new instance that was previously accidentally deleted

Just want to preface this question by saying I've checked a few other similar questions but none really answered mine.
So the situation (hypothetically) is this: I have an EC2 instance running with one EBS volume as its root device. I forgot to turn on termination protection, oops, and I accidentally delete my server. Luckily, I set my EBS volume to persist after termination (side question: can you verify this setting without using the API?)
Now, I have an AMI that is a week old. So I want to create a new EC2 instance, but I want to attach the orphaned EBS volume to it, since that has the newest data, settings and whatnot. How can I achieve this?
Am I missing some information here? Is the EC2 instance just a shell, where the EBS volume is essentially my server? Should I just take a snapshot from my EBS volume, create an AMI from that and then launch a new instance that will be the same as the orphaned one?
And while I'm here asking questions, one for the road: you can either create a volume or an image from a snapshot; why would you prefer one over the other?
Thank you in advance.
I've found an indirect answer to one of my many questions.
You can reattach a root device volume. On a Linux instance that has no EBS volumes attached, when attaching the EBS volume, name it /dev/xvda and it will register as the root device.
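A sketch of that reattachment via the CLI (placeholder IDs; the device name must match the AMI's root device name, which is /dev/xvda on some Linux AMIs and /dev/sda1 on others):

    # Attach the orphaned volume as the root device of the stopped instance.
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/xvda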

cloning an amazon machine instance

I have two Amazon machine instances running. Both of them are m3.xlarge instances. One of them has the right software and configuration that I want to use. I want to create a snapshot of the EBS volume for that machine and use it as the EBS volume to boot the second machine from. Can I do that and expect it to work without shutting down the first machine?
It is well described in the AWS documentation...
"You can take a snapshot of an attached volume that is in use. However, snapshots only capture data that has been written to your Amazon EBS volume at the time the snapshot command is issued. This might exclude any data that has been cached by any applications or the operating system. If you can pause any file writes to the volume long enough to take a snapshot, your snapshot should be complete. However, if you can't pause all file writes to the volume, you should unmount the volume from within the instance, issue the snapshot command, and then remount the volume to ensure a consistent and complete snapshot.
I use Amazon as well, with 3 different clusters. With one of my clusters, after setting up 25 servers, I realized there was a small issue in the configuration, and I had live traffic going to them, so I couldn't shut them down.
You can snapshot the first machine's volume while it's still running; I had to do this myself. It took a little while, but ultimately it worked out. Please note that Amazon cannot guarantee the consistency of the disk when doing this.
I did a snapshot of the entire thing, fixed what needed to be fixed, spooled up 25 new servers, and terminated the other 25 (easier than modifying volumes, etc.). But you can create a new volume from the snapshot, attach it to an instance, and do what needs to be done to get it to boot off that volume without much of a headache.
Since I went the easy route of spooling up new instances after my snapshot was complete, I can't walk you through getting a running instance to boot off a new volume.

EC2 Instance doesn't start after Reattaching the Volume

I'm trying to launch an instance from backup snapshots.
I followed the procedure here:
Go to the snapshot section of the AWS tools.
Create a volume from the snapshot.
Create an EC2 instance (make sure it's an EBS-backed instance; if it's the same kind as the original snapshot, you'll be fine)
Stop the instance
Detach the existing EBS volume from the instance
Attach the volume you just created; make sure you give it the same device name as the volume that was previously attached.
Start the instance back up.
Not quite sure what an EBS-backed instance is.
Everything works fine, but after I reattach the volume, the instance I created can't start; when I press start, it stays pending for a while and then stops again.
What might the problem be?
Thanks in advance.
I have also done this before. It works. When you reattach the volume, check the device name; it should be identical to the original root device name. If the root device name is different, the instance can't start.
This worked for me.
Basically, the auto-populated "Device" attachment information wasn't right.
When I tried starting the EC2 instance, the error clearly stated that the volume isn't attached at /dev/xvda.
Now navigate back to Volumes and detach the previously attached volume.
Attach the volume again, providing the "Device" value shown in the error message, /dev/xvda in this case.
Before detaching the volume, note down the volume ID and the device name at which it is currently attached.
Then, while reattaching another volume, you need to use that same device name; otherwise the instance will not start.
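To avoid guessing, you can read the expected root device name from the instance before detaching anything; a sketch with placeholder IDs:

    # The device name you must reuse when reattaching.
    aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
        --query 'Reservations[].Instances[].RootDeviceName' --output text
    # Reattach the replacement volume at exactly that name, e.g. /dev/xvda.
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/xvda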

LAMP server on EC2 (Amazon Linux Micro Instance)

I've launched an instance of the Basic 32-bit Amazon Linux AMI, which has an 8GB volume as its root device. If I terminate it, the EBS volume is destroyed as well. What I'd like to know is whether or not my data is protected (for example, the Apache document root, or MySQL data) if the server crashes? A lot of tutorials seem to indicate that another EBS volume should be created and my data stored on that, but I'm not really seeing why two EBS volumes are needed.
Or is the current setup okay for a web server setup?
Many thanks in advance for your help!
When you spin an EC2 instance up, the root volume is ephemeral - that is, when the instance is terminated, the root volume is destroyed** (taking any data you put there with it). It doesn't matter how you partition that ephemeral volume and where you tuck your data on it - when it is destroyed, everything contained in that volume is lost.
So if the data in the volume is entirely transient and fully recoverable/retrievable from somewhere else the next time you need it, there's no problem; terminate the instance, then spin a new one up and re-acquire the data you need to carry on working.
However, if the data is NOT transient, and needs to be persisted so that work can carry on after an instance crash (and by crash, I mean something that terminates the instance or otherwise renders it inoperable and unrecoverable) then your data MUST NOT be on the root volume, but should be on another EBS volume which is attached to the instance. If and when that instance terminates or breaks irretrievably, your data is safe on that other volume - it can then be re-attached to a new instance for work to continue.
** the exception is where your instance is EBS-backed and you swapped root volumes - in this case, the root volume is left behind after the instance terminates because it wasn't part of the 'package' created by the AMI when you started it.
The other volume would be needed in case your server breaks and you cannot start it. In such a case you would just remove the initial server, create a second one, and attach the additional storage to the new server. You cannot attach the root volume of one server to another while the first server is still using it.
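A sketch of setting up such a separate data volume (IDs, the AZ, and the xvdf device name are placeholders; run mkfs only on a brand-new, empty volume):

    # Create and attach a fresh volume in the instance's AZ.
    aws ec2 create-volume --size 20 --availability-zone us-east-1a
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf
    # Inside the instance: format once, then mount where your data lives.
    sudo mkfs -t ext4 /dev/xvdf
    sudo mkdir -p /data
    sudo mount /dev/xvdf /data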