EC2 Instance doesn't start after Reattaching the Volume

I'm trying to launch an instance from backup snapshots.
I followed this procedure:
Go to the Snapshots section of the AWS console.
Create a volume from the snapshot.
Create an EC2 instance (make sure it's an EBS-backed instance; if it's the same kind as the original snapshot you'll be fine).
Stop the instance.
Detach the existing EBS volume from the instance.
Attach the volume you just created, making sure you give it the same device name as the volume that was previously attached.
Start the instance back up.
I'm not quite sure what an EBS-backed instance is.
Everything works fine, but after I reattach the volume, the instance I created can't start. When I press start, it stays pending for a while and then stops again.
What might be the problem?
Thanks in advance.
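
If you prefer to script these steps instead of clicking through the console, below is a minimal sketch using Python and boto3. The snapshot ID, instance ID, Availability Zone and device name are placeholders; the important detail is that the new volume is attached under the same device name the original root volume used.

    import boto3

    ec2 = boto3.client("ec2")

    SNAPSHOT_ID = "snap-0123456789abcdef0"   # placeholder: your backup snapshot
    INSTANCE_ID = "i-0123456789abcdef0"      # placeholder: the EBS-backed instance
    AZ = "us-east-1a"                        # must match the instance's Availability Zone

    # 1. Create a volume from the snapshot and wait until it is available.
    new_vol = ec2.create_volume(SnapshotId=SNAPSHOT_ID, AvailabilityZone=AZ)["VolumeId"]
    ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol])

    # 2. Stop the instance.
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

    # 3. Look up the current root device name and its volume, then detach it.
    inst = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"][0]["Instances"][0]
    root_device = inst["RootDeviceName"]     # e.g. /dev/xvda or /dev/sda1
    old_vol = next(m["Ebs"]["VolumeId"] for m in inst["BlockDeviceMappings"]
                   if m["DeviceName"] == root_device)
    ec2.detach_volume(VolumeId=old_vol)
    ec2.get_waiter("volume_available").wait(VolumeIds=[old_vol])

    # 4. Attach the new volume under the SAME device name, then start the instance.
    ec2.attach_volume(VolumeId=new_vol, InstanceId=INSTANCE_ID, Device=root_device)
    ec2.start_instances(InstanceIds=[INSTANCE_ID])

If the device name does not match the instance's registered root device name, the start attempt goes to pending and then back to stopped, which matches the behaviour described above.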

I have also done this before, and it works. When you reattach the volume, check the device name: it should be identical to the original root volume's device name. If the root device name is different, the instance can't start.

This worked for me.
Basically the "Device" attachment information that is auto-populated wasn't right.
When I tried starting the EC2 instance the error read.
The error ,clearly states that volume isn't attached at(/dev/xvda)
Now, navigate back to volumes. Undo the previous attached volume.
Attach the volume again by providing the "Device" info as provided in the error message, /dev/xvda in this case.

Before detaching the volume, note down the volume ID and the device name it is currently attached as.
Then, when attaching the replacement volume, you must use that same device name; otherwise the instance will not start.
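
A quick way to record that information before detaching, sketched with boto3 (the instance ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")
    inst = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])["Reservations"][0]["Instances"][0]

    # The registered root device name, and every attached volume with its device name.
    print("Root device name:", inst["RootDeviceName"])
    for mapping in inst["BlockDeviceMappings"]:
        print(mapping["DeviceName"], "->", mapping["Ebs"]["VolumeId"])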

Related

My AWS EC2 EBS backups: are they volume or instance backups?

I have created some EBS backups over the years, but I can't remember if they were volume or instance backups. Is there some way to tell by looking at one or more field(s) in the list, e.g., at https://ap-southeast-1.console.aws.amazon.com/ec2/v2/home?region=ap-southeast-1#Snapshots:sort=desc:startTime, or in the detailed "description" when I click on one of the snapshots? (the detailed description looking as in the snapshot below, for example) Unfortunately, there isn't a field that says "EBS backup type" that takes a value of "instance" or "volume". As indicated in this stackoverflow question, for example, both types are stored as "EBS Snapshots", so as I understand it then, both will appear mixed together in the same list of EBS snapshots.
Most of the previous questions, e.g., this stackoverflow question, or other pages I've found from searching, have been about the differences between volume and instance backups, and how one might choose one or the other. However, I'm not asking about that, but just if there is any way I can tell what type my previous backups are. Or do I just have to tag the type myself or put it as part of the description string?
UPDATE
From looking at the VolumeID of the snapshot (vol-0565abe0e54ad4adf in the image, for example), I'm guessing that if an existing EC2 instance is using that volume, then that particular snapshot was an instance snapshot? But it could also have been a volume snapshot of that same volume?
UPDATE 2
It appears there is some confusion regarding what I'm referring to (from the answers and comments posted so far). I'm not using DLM but the EC2 console (see image below); "Snapshots" is the place I navigate to.
Then, when I click on "Create snapshot", I see the following, which shows the options of volume and instance (the first question). This may be a new option, as I don't remember seeing it before.
An EBS snapshot is a backup of a single EBS volume. The EBS snapshot contains all the data stored on the EBS volume at the time the EBS snapshot was created.
An AMI image is a backup of an entire EC2 instance. Associated with an AMI image are EBS snapshots. Those EBS snapshots are the backups of the individual EBS volumes attached to the EC2 instance at the time the AMI image was created.
To find snapshots associated with volumes that are still in use, match each snapshot's VolumeID against the VolumeIDs of the in-use volumes and output the SnapshotID of each match.
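
A rough sketch of that matching with boto3, assuming the snapshots are owned by your own account:

    import boto3

    ec2 = boto3.client("ec2")

    # Collect the IDs of volumes that are currently in use (attached to an instance).
    in_use = set()
    for page in ec2.get_paginator("describe_volumes").paginate(
            Filters=[{"Name": "status", "Values": ["in-use"]}]):
        in_use.update(v["VolumeId"] for v in page["Volumes"])

    # Print snapshots whose source VolumeId matches one of those in-use volumes.
    for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            if snap["VolumeId"] in in_use:
                print(snap["SnapshotId"], "was taken from in-use volume", snap["VolumeId"])
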
A snapshot is taken of a single volume, so it will always be a backup of that individual volume rather than the complete EC2 instance.
To restore this snapshot, you would restore it to create a new EBS volume that could then be attached to an EC2 instance.
If, however, your instance runs from a single volume, you can go one step further. Instead of restoring the snapshot to an EBS volume, you can create an AMI from the snapshot. This AMI can then be used to launch further instances using the base image taken from the snapshot.
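
As a sketch of that last step with boto3 (the snapshot ID, image name, device name and architecture are assumptions you would adjust to match the original instance):

    import boto3

    ec2 = boto3.client("ec2")

    # Register an AMI whose root device is created from the snapshot.
    image = ec2.register_image(
        Name="restored-from-snapshot",        # placeholder name
        Architecture="x86_64",                # must match the OS in the snapshot
        VirtualizationType="hvm",
        RootDeviceName="/dev/xvda",           # must match what the OS expects
        BlockDeviceMappings=[{
            "DeviceName": "/dev/xvda",
            "Ebs": {"SnapshotId": "snap-0123456789abcdef0",
                    "DeleteOnTermination": True,
                    "VolumeType": "gp3"},
        }],
    )
    print("New AMI:", image["ImageId"])
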
I suspect you are using Data Lifecycle Manager (DLM), not AWS Backup, because you are getting snapshots; AWS Backup works with vaults, so you would not see snapshots.
If this is the case, DLM only works with volumes, so you only get backups of your volumes, not of instances.
With AWS Backup you can have both: backups of your volumes and/or backups of your instances.
They are stored inside a vault when the backup happens; when necessary, you restore from the vault, which gives you an AMI or a volume, depending on which kind of backup you did.
Thanks for your update!
I get your point. The instance option there is just a helper to make your life easier. Imagine you have an instance with 2 volumes and you want to create a snapshot of both: you could go to this screen and create one snapshot at a time (referring to a volume ID each time), or you can do it once, referring to the instance ID, and the console will find both volumes for you and create both snapshots.
It doesn't matter which option you choose there; it will just create snapshots from volumes, and it will not do anything with your instance itself. If you want, you can add a tag to your snapshot to refer to your instance, but that is just metadata.
So in your case you are just creating "backups" of your volumes!
If you lose your volume you can restore it, but if you lose your instance you will have to recreate it (with all of its settings) manually.
If you want to create a "backup" of your instance, you need to create an image, which will give you an AMI, not a snapshot.
The AMI will "back up" your instance details and will create a snapshot of all the instance's volumes (not the ephemeral ones).

Root Device and Block Devices difference

Can someone help me understand the difference between the Root device and Block devices for an EC2 instance? You can see a screenshot I posted below.
What I tried to achieve is:
I created a snapshot of the attached volume of the EC2 instance.
Detached the volume from the instance.
Deleted the volume.
Created a new volume from the snapshot.
Reattached the newly created volume to the instance.
But it only appears under Block devices and not as the Root device, and this results in a failure to start the instance.
My apologies if my question is wrong.
Awaiting your reply.
Thanks in advance.
The Root device is the EBS volume for the AMI that your instance is based on. It contains the operating system. If not configured, AWS will use the default values from the AMI.
You can optionally configure additional Block device entries to attach extra volumes to the instance besides the root volume. Each one can be empty or created from a snapshot.
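
To make the distinction concrete, here is a sketch of launching an instance with boto3 where the root device overrides the AMI's default root volume and one additional block device is added as an empty volume; the AMI ID, device names and sizes are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",    # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1, MaxCount=1,
        BlockDeviceMappings=[
            # Root device: must use the AMI's root device name; holds the OS.
            {"DeviceName": "/dev/xvda",
             "Ebs": {"VolumeSize": 16, "VolumeType": "gp3", "DeleteOnTermination": True}},
            # Additional block device: an empty data volume attached alongside the root.
            {"DeviceName": "/dev/sdf",
             "Ebs": {"VolumeSize": 100, "VolumeType": "gp3"}},
        ],
    )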

EBS Volume Being Read as Root Device

One of my instances was preventing me from logging in, so I stopped it, detached its volume, spun up a new instance, and attached the old volume. The problem is, the old volume is being treated as the root device. Because of this, I still cannot log in to the new instance (although I can do so if I don't attach the old volume).
Is there any way to fix this issue using the AWS Management Console?
It seems like you have attached your old volume at "/dev/sda1". Detach your old volume and attach it at "/dev/sdf" instead.
This is caused by the filesystems on each volume having the same label (the value returned by e.g. e2label /dev/xvda1). The server correctly starts booting from the first volume, and then the bootloader there sees the second volume carrying the label it expects for the root volume, and continues booting with the second volume as root. This is an OS-level setting, not visible to the AWS infrastructure.
Workaround: don't attach the second volume until after the instance has booted. EBS volumes can be attached to an instance at any time -- they don't have to be present when the instance is started. After unmounting, they can also be detached at any time, with the instance still running.
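
A sketch of that workaround with boto3 (instance and volume IDs are placeholders): start the instance, wait for it to be running, and only then attach the second volume at a non-root device name.

    import boto3

    ec2 = boto3.client("ec2")
    INSTANCE_ID = "i-0123456789abcdef0"    # placeholder
    OLD_VOLUME = "vol-0123456789abcdef0"   # placeholder: the volume with the duplicate label

    # Boot the instance first so the bootloader only sees its own root volume.
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

    # Now attach the old volume at a secondary device name and mount it from the OS.
    ec2.attach_volume(VolumeId=OLD_VOLUME, InstanceId=INSTANCE_ID, Device="/dev/sdf")
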
To resolve this, I had to make a snapshot of the old volume first. I then created a new AMI using that snapshot. I included the old volume as extra storage so that it is explicitly defined not to be treated as a root device.
I then created a new instance using that AMI. I was able to finally log in to the new instance. From there, I just mounted the volume.

Reattaching an EBS volume to a new instance that was previously accidentally deleted

Just want to preface this question by saying I've checked a few other similar questions but none really answered mine.
So the situation (hypothetically) is this: I have an EC2 instance running with one EBS volume as its root device. I forgot to turn on termination protection, oops, and I accidentally deleted my server. Luckily, I had set my EBS volume to persist after termination (side question: can you verify this setting without using the API?).
Now, I have an AMI that is a week old. So I want to create a new EC2 instance, but I want to attach the orphaned EBS volume to it, since that has the newest data, settings and whatnot. How can I achieve this?
Am I missing some information here? Is the EC2 instance just a shell, where the EBS volume is essentially my server? Should I just take a snapshot from my EBS volume, create an AMI from that and then launch a new instance that will be the same as the orphaned one?
And while I'm here asking questions, one for the road: you can create either a volume or an image from a snapshot, so why would you prefer one over the other?
Thank you in advance.
I've found an indirect answer to one of my many questions.
You can reattach a root device volume. On a Linux instance that has no EBS volumes attached, when attaching the EBS volume, name it /dev/xvda and it will register as the root device.
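
For reference, a boto3 sketch of both halves of this (IDs are placeholders): checking whether a running instance's root volume is set to persist after termination, and attaching the orphaned volume as the root device of a replacement instance that has no EBS volumes attached. The EC2 console also shows the Delete-on-termination flag in the instance's block device details if you prefer not to script it.

    import boto3

    ec2 = boto3.client("ec2")

    # Check whether the root volume will persist after termination.
    inst = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])["Reservations"][0]["Instances"][0]
    for m in inst["BlockDeviceMappings"]:
        if m["DeviceName"] == inst["RootDeviceName"]:
            print("DeleteOnTermination:", m["Ebs"]["DeleteOnTermination"])

    # Later: attach the orphaned volume to a stopped instance whose own root volume
    # has been detached, using the root device name so it registers as the root device.
    ec2.attach_volume(VolumeId="vol-0123456789abcdef0",   # placeholder: the orphaned volume
                      InstanceId="i-0fedcba9876543210",   # placeholder: the replacement instance
                      Device="/dev/xvda")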

Changing Root EBS volume from Volume created from EBS Snapshot

I created a snapshot from a volume (the root volume).
A day later, I needed to make that snapshot the root volume.
I followed these steps:
Created a volume from the snapshot.
Stopped my working instance.
Detached the root volume of my instance.
Attached the volume created from the snapshot as /dev/sda1.
Started the instance.
I have a problem while starting my instance: it does not start, and no errors are shown.
Can you please check the process and let me know if anything else is required?
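
When a start attempt silently goes back to stopped like this, the state transition reason and the console output usually say why. A minimal boto3 sketch for pulling both (the instance ID is a placeholder):

    import base64
    import boto3

    ec2 = boto3.client("ec2")
    INSTANCE_ID = "i-0123456789abcdef0"   # placeholder

    inst = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"][0]["Instances"][0]
    print("State:", inst["State"]["Name"])
    print("StateTransitionReason:", inst.get("StateTransitionReason", ""))
    print("StateReason:", inst.get("StateReason", {}).get("Message", ""))

    # The API returns the console output base64-encoded.
    out = ec2.get_console_output(InstanceId=INSTANCE_ID).get("Output", "")
    if out:
        print(base64.b64decode(out).decode("utf-8", errors="replace"))

As the earlier answers note, a likely cause is attaching the new volume under a device name (here /dev/sda1) that differs from the instance's registered root device name; the attachment must use the instance's own root device name.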