Can someone help me understand the difference between the Root device and Block devices for an EC2 instance? You can see the screenshot I posted below.
What I tried to achieve is:
I created a snapshot of the attached volume of the EC2 instance.
Detached the volume from the instance.
Deleted the volume.
Created a new volume from the snapshot.
Reattached the newly created volume to the instance.
But it only attaches as a Block Device, not as the Root device, and the instance then fails to launch.
My apologies if my question is wrong.
Awaiting your reply.
Thanks in advance.
The Root device is the EBS volume for the AMI on which your instance is based. It contains the operating system. If you don't configure it, AWS uses the default values from the AMI.
You can optionally configure additional Block device entries to attach extra volumes to the instance, besides the root volume. Each entry can be an empty volume or one created from a snapshot.
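For example, here is a minimal boto3 sketch of attaching a restored volume as the root device (the region, instance ID, and volume ID are placeholders, not values from the question): it looks up the device name the instance expects for its root volume and attaches under exactly that name.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance
VOLUME_ID = "vol-0123456789abcdef0"  # volume restored from the snapshot

# Find the device name the AMI registered as the root device
# (e.g. /dev/xvda or /dev/sda1).
reservation = ec2.describe_instances(InstanceIds=[INSTANCE_ID])
instance = reservation["Reservations"][0]["Instances"][0]
root_device = instance["RootDeviceName"]

# Attaching under the root device name makes it the root volume; any other
# device name (e.g. /dev/sdf) makes it an additional block device.
ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=INSTANCE_ID, Device=root_device)
```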
Related
I need to reduce my Windows EBS root volume.
We are able to increase the root volume using the AWS Console, so I followed a document to decrease my Windows EBS root volume:
I created a new 30GB volume and attached it to my existing instance
I then used xcopy to copy the data from C:\ to D:\, which is the newly attached volume
After completion of copying data, I then stopped the instance and detached both the EBS root volume and the newly added volume.
Then I reattached the newly created volume to the stopped EC2 instance as /dev/sda1
Now I am trying to start my Windows EC2 instance, but it fails to start. It keeps going back to the stopped state again and again.
Note: I received an error message like "Sharing Violation" after the data was copied.
One of my instances was preventing me from logging in, so I stopped it, detached its volume, spun up a new instance, and attached the old volume. The problem is, the old volume is being treated as the root device. Because of this, I still cannot log in to the new instance (although I can do so if I don't attach the old volume).
Is there any way to fix this issue using the AWS Management Console?
It seems like you have attached your old volume at "/dev/sda1". Detach your old volume and attach it at "/dev/sdf".
This is caused by the filesystems on each volume having the same label (the value returned by e.g. e2label /dev/xvda1). The server correctly starts booting from the first volume, and then the bootloader sees that the second volume has the label it expects for the root volume, and continues booting with the second volume as root. This is an OS-level setting, not visible to the AWS infrastructure.
Workaround: don't attach the second volume until after the instance has booted. EBS volumes can be attached to an instance at any time -- they don't have to be present when the instance is started. After unmounting, they can also be detached at any time, with the instance still running.
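A minimal boto3 sketch of that workaround (the instance and volume IDs are placeholders): start the instance with only its own root volume, wait until it is running, and only then attach the old volume at a non-root device name.

```python
import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"      # hypothetical new instance
OLD_VOLUME_ID = "vol-0123456789abcdef0"  # the old root volume

ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

# The bootloader has already chosen its root filesystem by now, so the
# duplicate label on the old volume can no longer hijack the boot.
ec2.attach_volume(VolumeId=OLD_VOLUME_ID, InstanceId=INSTANCE_ID, Device="/dev/sdf")
```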
To resolve this, I had to make a snapshot of the old volume first. I then created a new AMI using that snapshot, including the old volume as extra storage so that it is explicitly defined not to be treated as a root device.
I then created a new instance using that AMI. I was able to finally log in to the new instance. From there, I just mounted the volume.
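The post is a little ambiguous about which snapshot backs the AMI's root, but one way to express such an AMI with boto3 is a rough sketch like the following (all IDs and the AMI name are placeholders; it assumes a snapshot of a known-good root volume plus the old volume's snapshot as extra storage). The block device mapping pins the old volume's snapshot to a non-root device, so it can never be picked up as root.

```python
import boto3

ec2 = boto3.client("ec2")

ROOT_SNAPSHOT_ID = "snap-0aaaaaaaaaaaaaaa0"  # snapshot of a known-good root volume
OLD_SNAPSHOT_ID = "snap-0bbbbbbbbbbbbbbb0"   # snapshot of the old (problem) volume

image = ec2.register_image(
    Name="recovery-ami",          # hypothetical name
    RootDeviceName="/dev/xvda",   # explicit root device
    VirtualizationType="hvm",
    Architecture="x86_64",
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"SnapshotId": ROOT_SNAPSHOT_ID}},
        # The old volume is mapped to a non-root device name.
        {"DeviceName": "/dev/sdf", "Ebs": {"SnapshotId": OLD_SNAPSHOT_ID}},
    ],
)
print(image["ImageId"])  # launch the new instance from this AMI
```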
Just want to preface this question by saying I've checked a few other similar questions but none really answered mine.
So the situation (hypothetically) is this: I have an EC2 instance running with one EBS volume as its root device. I forgot to turn on termination protection, oops, and I accidentally deleted my server. Luckily, I had set my EBS volume to persist after termination (side question: can you verify this setting without using the API?).
Now, I have an AMI that is a week old. So I want to create a new EC2 instance, but I want to attach the orphaned EBS volume to it, since that has the newest data, settings and whatnot. How can I achieve this?
Am I missing some information here? Is the EC2 instance just a shell, where the EBS volume is essentially my server? Should I just take a snapshot from my EBS volume, create an AMI from that and then launch a new instance that will be the same as the orphaned one?
And while I'm here asking questions, one for the road: you can either create a volume or an image from a snapshot; why would you prefer one over the other?
Thank you in advance.
I've found an indirect answer to one of my many questions.
You can reattach a root device volume. On a Linux instance that has no EBS volumes attached, attach the EBS volume as /dev/xvda and it will register as the root device.
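As a minimal boto3 sketch (the instance and volume IDs are placeholders, and this assumes no volume is currently attached at the root device):

```python
import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"         # hypothetical replacement instance
ORPHAN_VOLUME_ID = "vol-0123456789abcdef0"  # the orphaned root volume

ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# If a root volume is already attached, detach it first; attaching at
# /dev/xvda registers this volume as the root device.
ec2.attach_volume(VolumeId=ORPHAN_VOLUME_ID, InstanceId=INSTANCE_ID, Device="/dev/xvda")
ec2.start_instances(InstanceIds=[INSTANCE_ID])
```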
I'm trying to launch an instance from backup snapshots.
I followed this procedure:
Go to the Snapshots section of the AWS console.
Create a volume from the snapshot.
Create an EC2 instance (make sure it's an EBS-backed instance; if it's the same kind as the original snapshot you'll be fine)
Stop the instance
Detach the existing EBS volume from the instance
Attach the volume you just created, making sure you give it the same device name as the volume that was previously attached.
Start the instance back up.
I'm not quite sure what an EBS-backed instance is.
Everything works fine, but after I reattach the volume, the instance I created can't start: when I press start, it stays pending for a while and then stops again.
What might be the problem?
Thanks in advance.
I have also done this before, and it works. When you reattach the volume, check the device name: it should be identical to the original root volume's device name. If the device name is different, the instance can't start.
This worked for me.
Basically, the auto-populated "Device" attachment information wasn't right.
When I tried starting the EC2 instance, the error clearly stated that the volume isn't attached at the expected root device (/dev/xvda).
Now, navigate back to Volumes and detach the previously attached volume.
Attach the volume again, providing the "Device" value given in the error message, /dev/xvda in this case.
Before detaching the volume, note down the volume ID and the device name it is currently attached as.
Then, when reattaching another volume, you need to use that same device name; otherwise the instance will not start.
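As a sketch of that bookkeeping in boto3 (the instance and volume IDs are placeholders): read the root device name and current root volume from the instance, detach it, and attach the replacement under exactly the same device name.

```python
import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"      # hypothetical instance
NEW_VOLUME_ID = "vol-0new0000000000000"  # volume created from the snapshot

# The instance must already be stopped before swapping its root volume.
instance = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"][0]["Instances"][0]
root_device = instance["RootDeviceName"]  # e.g. /dev/xvda or /dev/sda1

# Note down the volume currently attached at the root device.
old_volume_id = next(
    m["Ebs"]["VolumeId"]
    for m in instance["BlockDeviceMappings"]
    if m["DeviceName"] == root_device
)

ec2.detach_volume(VolumeId=old_volume_id)
ec2.get_waiter("volume_available").wait(VolumeIds=[old_volume_id])

# Reattach the new volume under exactly the same device name.
ec2.attach_volume(VolumeId=NEW_VOLUME_ID, InstanceId=INSTANCE_ID, Device=root_device)
```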
I am trying to stop an Amazon EC2 instance and get the warning message:
Warning: Please note that any data on the ephemeral storage of your instance will be lost when it is stopped.
My Question
What data is stored in ephemeral storage of an Amazon EC2 instance?
Basically, the root volume (your entire virtual system disk) is ephemeral, but only if you chose to create the instance from an AMI backed by the Amazon EC2 instance store.
If you chose an AMI backed by EBS, then your root volume is backed by EBS and everything on your root volume will be preserved between reboots.
If you are not sure what type of volume you have, look under EC2 -> Elastic Block Store -> Volumes in your AWS console; if your AMI root volume is listed there, you are safe. Also, if you go to EC2 -> Instances and look at the "Root device type" column for your instance, and it says "ebs", then you don't have to worry about data on your root device.
More details here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
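If you prefer to check programmatically, here is a small boto3 sketch (the instance ID is a placeholder) that reads the same attribute: "ebs" means the root volume survives a stop, "instance-store" means it does not.

```python
import boto3

ec2 = boto3.client("ec2")

# Look up the root device type of a single instance.
instance = ec2.describe_instances(
    InstanceIds=["i-0123456789abcdef0"]
)["Reservations"][0]["Instances"][0]

print(instance["RootDeviceType"])  # "ebs" or "instance-store"
```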
Anything that is not stored on an EBS volume that is mounted to the instance will be lost.
For example, if you mount your EBS volume at /mystuff, then anything not in /mystuff will be lost. If you don't mount an EBS volume and save your data on it, then I believe everything will be lost.
You can create an AMI from your current machine state, which will contain everything in your ephemeral storage. Then, when you launch a new instance based on that AMI it will contain everything as it is now.
Update: to clarify based on comments by mattgmg1990 and glenn bech:
Note that there is a difference between "stop" and "terminate". If you "stop" an instance that is backed by EBS then the information on the root volume will still be in the same state when you "start" the machine again. According to the documentation, "By default, the root device volume and the other Amazon EBS volumes attached when you launch an Amazon EBS-backed instance are automatically deleted when the instance terminates" but you can modify that via configuration.
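As an illustration, creating such an AMI from a running EBS-backed instance can be sketched with boto3 like this (the instance ID and image name are placeholders; an instance-store-backed instance would instead need the older bundle-and-register workflow):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an AMI capturing the instance's current EBS volumes.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # hypothetical instance
    Name="pre-stop-backup",            # hypothetical AMI name
    NoReboot=False,  # allow a reboot for filesystem consistency
)

# Wait until the AMI is ready to launch new instances from.
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
print(image["ImageId"])
```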
To be clear, and to answer @Dean's question: EBS-backed root storage is not ephemeral. Data is persistent across reboots, and it wouldn't make any sense for an EBS-backed root volume to be 'ephemeral' -- that would be no different from an instance-store-backed root volume.
For an EC2 instance:
Stop & Start != Reboot
So, for ephemeral storage (instance store):
Stop causes data loss
Reboot does not
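A tiny boto3 sketch of the difference (the instance ID is a placeholder; this assumes an EBS-backed instance with additional instance store volumes, since only EBS-backed instances can be stopped):

```python
import boto3

ec2 = boto3.client("ec2")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance

# Reboot: instance store contents survive.
ec2.reboot_instances(InstanceIds=[INSTANCE_ID])

# Stop/start cycle: instance store contents are lost.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])
ec2.start_instances(InstanceIds=[INSTANCE_ID])  # fresh (empty) instance store
```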
According to the AWS documentation (https://aws.amazon.com/premiumsupport/knowledge-center/instance-store-vs-ebs/), instance store volumes are not persistent through instance stops, terminations, or hardware failures.
An AMI created from an instance-store-backed disk doesn't contain the data present in the instance store, so instances launched from that AMI won't have that data either. The instance store can be used as a cache for applications running on the instance; for all persistent data you should use EBS.
'Ephemeral' is just another name for the root volume when you launch an instance from an AMI backed by the Amazon EC2 instance store, so everything will be stored on ephemeral storage.
If you launched your instance from an AMI backed by an EBS volume, then your instance does not have an ephemeral root volume.
Refer to: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes
The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost under any of the following circumstances:
- The underlying disk drive fails
- The instance stops
- The instance hibernates
- The instance terminates