attaching a previous EBS Volume to a new EC2 Linux Instance - amazon-web-services

I ran into a problem the other day while cloning a GitHub repo, and all of a sudden my EC2 Instance (EC2 A) became completely unusable. My question is: how can I re-attach an EBS Volume from an EC2 Instance that I terminated to a new EC2 Instance that I just created?
Step-by-Step of the problem:
0) broke my first EC2 Instance (EC2 A).
1) created a snapshot of the EBS Volume (EBS Volume A) attached to EC2 A.
2) stopped EC2 A.
3) detached EBS Volume A.
4) terminated EC2 A.
Then...
5) created a brand new EC2 Instance (EC2 B) with a new EBS Volume automatically created (EBS Volume B), which is currently attached to EC2 B.
6) set it all up (Apache, MySQL, PHP, other plugins, etc.)
7) Now I want to access my data from EBS Volume A. I do not care about anything in EBS Volume B. Please Advise...
Thank you so much for your time!

Yes, you can attach an existing EBS volume to an EC2 instance. There are a number of ways to do this depending on your tools of preference. I prefer the command line tools, so I tend to do something like:
ec2-attach-volume vol-VVVVVVVV --instance i-XXXXXXXX --device /dev/sdh
You could also do this in the AWS console:
https://console.aws.amazon.com/ec2/home?#s=Volumes
Right click on the volume, then select [Attach Volume]. Select the instance and enter the device (e.g., /dev/sdh).
After you have attached the volume to the instance, you will want to ssh to the instance and mount the volume with a command like:
sudo mkdir -m000 /vol2
sudo mount /dev/xvdh /vol2
You can then access your old data and configuration under /vol2
Note: The EBS volume and the EC2 instance must be in the same region and in the same availability zone to make the attachment.
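A side note on the /dev/sdh vs. /dev/xvdh mismatch above: the EC2 API takes device names of the form /dev/sdX, but on most modern Xen-based Linux kernels the attached disk shows up as /dev/xvdX. A minimal sketch of the translation (the helper name is mine, not an AWS tool, and some distributions shift the trailing letter as well, so confirm with lsblk after attaching):

```shell
#!/bin/sh
# Map the device name passed to the EC2 API (/dev/sdX) to the name the
# kernel typically exposes on Xen-based instances (/dev/xvdX).
api_to_kernel_device() {
    printf '%s\n' "$1" | sed 's|^/dev/sd|/dev/xvd|'
}

api_to_kernel_device /dev/sdh   # prints /dev/xvdh
```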

Related

How can I attach a persistent EBS volume to an EC2 Linux launch template that is used in an autoscaling group?

To clarify: my Auto Scaling group removes all instances and their root EBS volumes during inactive hours, then recreates them and installs all necessary base programs once active hours begin. However, I have a smaller persistent EBS volume that holds code and data I do not want wiped out during downtime. Currently, every time I work during active hours, I manually attach the volume via the console and mount it using the commands below.
sudo mkdir userVolume
sudo mount /dev/xvdf userVolume
How can I automatically attach and mount this volume to a folder? This is all for the sake of minimizing cost by limiting uptime to when I can actually be working on it.
Use this code:
#!/bin/bash
OUTPUT=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume --volume-id vol-xxxxxxxxxxxx --device /dev/xvdf --instance-id "$OUTPUT" --region ap-southeast-1
Set your volume ID and region.
Refer to this link for further details: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-spot-instance-attach-ebs-volume/
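To also mount the volume automatically, the attach script can be extended in user data. The sketch below assumes the AWS CLI is installed and the instance role allows ec2:AttachVolume; the volume ID, region, device, and mount point are placeholders you must set:

```shell
#!/bin/bash
# Sketch: attach a known persistent volume at boot and mount it once the
# device node appears (attachment is asynchronous). Runs as root in user data.

wait_for_device() {
    dev=$1 tries=${2:-30}
    while [ "$tries" -gt 0 ]; do
        [ -e "$dev" ] && return 0   # device node showed up
        tries=$((tries - 1))
        sleep 2
    done
    return 1                        # gave up waiting
}

attach_and_mount() {
    volume_id=$1 region=$2 dev=$3 mountpoint=$4
    instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    aws ec2 attach-volume --volume-id "$volume_id" --device "$dev" \
        --instance-id "$instance_id" --region "$region"
    wait_for_device "$dev" && mkdir -p "$mountpoint" && mount "$dev" "$mountpoint"
}

# Example (placeholder IDs):
# attach_and_mount vol-xxxxxxxxxxxx ap-southeast-1 /dev/xvdf /userVolume
```

Putting this in the launch template's user data means the volume reappears under the same folder every time the Auto Scaling group brings an instance back.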

Mount EBS volume to a running AWS instance with a script

I'd like to dynamically mount and unmount an EBS volume on a running AWS instance using a script, and was wondering whether this is achievable, on both Linux and Windows instances, and if so, what the expected duration of such an operation is.
Using AWS CLI and Bourne shell script.
attach-volume
Attaches an EBS volume to a running or stopped instance and exposes it
to the instance with the specified device name.
aws ec2 attach-volume --volume-id vol-1234567890abcdef0 --instance-id i-01474ef662b89480 --device /dev/sdf
detach-volume
Detaches an EBS volume from an instance. Make sure to unmount any file
systems on the device within your operating system before detaching
the volume.
aws ec2 detach-volume --volume-id vol-1234567890abcdef0
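Wrapped into a script, an attach/mount and unmount/detach cycle might look like the sketch below. It assumes a configured AWS CLI; the IDs, device, and mount point are placeholders. Attach and detach each usually complete within seconds, which the wait subcommands block on:

```shell
#!/bin/sh
# Sketch of a dynamic attach/detach cycle on a Linux instance.

attach_and_mount() {
    vol=$1 inst=$2 dev=$3 mnt=$4
    aws ec2 attach-volume --volume-id "$vol" --instance-id "$inst" --device "$dev"
    aws ec2 wait volume-in-use --volume-ids "$vol"   # block until attached
    sudo mkdir -p "$mnt"
    sudo mount "$dev" "$mnt"
}

unmount_and_detach() {
    vol=$1 mnt=$2
    sudo umount "$mnt"                               # unmount before detaching
    aws ec2 detach-volume --volume-id "$vol"
    aws ec2 wait volume-available --volume-ids "$vol"
}

# Example (placeholder IDs):
# attach_and_mount vol-1234567890abcdef0 i-01474ef662b89480 /dev/xvdf /data
# unmount_and_detach vol-1234567890abcdef0 /data
```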
--------------------------------------------------------------------------
Use Python and Boto3 which has APIs to attach and detach volumes.
attach_volume
Attaches an EBS volume to a running or stopped instance and exposes it
to the instance with the specified device name.
import boto3

client = boto3.client('ec2')

# Example IDs; substitute your own.
response = client.attach_volume(
    Device='/dev/sdf',
    InstanceId='i-01474ef662b89480',
    VolumeId='vol-1234567890abcdef0'
)
detach_volume
Detaches an EBS volume from an instance. Make sure to unmount any file
systems on the device within your operating system before detaching
the volume.
# Unmount any filesystem on the device inside the OS first, then detach.
response = client.detach_volume(
    Device='/dev/sdf',
    InstanceId='i-01474ef662b89480',
    VolumeId='vol-1234567890abcdef0',
    Force=False
)

why does my website stop loading on aws ec2 instance randomly once in a while?

I am running a t2.micro ec2 instance on us-west-2a and instance's state is all green.
When I access my website it stops loading once in a while. Even if I reboot it, the website still doesn't load. When I stop an instance and then relaunch it, it shows 1/2 status checks failed.
ALARM TYPE: awsec2-i-20aaa52c-High-Network-Out
I also faced the same type of issue.
EC2 instances were failing instance status checks after a stop/start. I was able to look at the system logs available to support, and I could confirm that the system was having a kernel panic and was unable to boot from the root volume.
So I launched a new temporary EC2 instance and attached the EBS root volume of each affected EC2 instance to it. There we modified the grub configuration file so it would load a previous kernel.
The following commands:
1. Mount the EBS volume as a secondary volume under /mnt: sudo mount /dev/xvdf1 /mnt
2. Back up the grub.cfg file: sudo cp /mnt/boot/grub2/grub.cfg grub.cfg_backup
3. Edit the grub.cfg file: sudo vim /mnt/boot/grub2/grub.cfg
4. Comment out (#) all the lines of the first entry, the one that loads the new kernel.
Then we attached the original EBS volumes back to the original EC2 instances, and they were able to boot successfully.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstances.html#FilesystemKernel

Migrate from t1.micro to t2.micro Amazon AWS

I know there is no way to migrate from t1.micro to t2.micro on Amazon directly.
So, will this work:
Detach EBS volume from current t1
Create new t2.micro instance
Attach EBS vol to the new t2
Is it safe for the data?
This method is actually easier than the one on the Amazon forums. It also provides a step-by-step procedure with images.
http://jsianes.blogspot.jp/2014/07/aws-convert-t1-instances-to-t2.html
The basic idea is to:
Shutdown the t1 instance (1) and detach the volume (A)
Launch a new t2 instance (2), shut it down and detach the volume (B)
Use a temporary t2 instance (3), attach both volumes to it
Copy B's boot module somewhere and erase all the contents of B
Copy all the contents from A to B
Copy the boot module back to B
Terminate 3 and enjoy 2!
Note: If you are rebinding an Elastic IP, you will need to remove the host key associated with the previous host from your known_hosts file
ssh-keygen -f "/home/user/.ssh/known_hosts" -R <IP>
ssh-keygen -R <IP>
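The copy steps above (save B's boot module, wipe B, copy A across, put the boot module back) can be sketched as a small shell function. The function name and the two mount-point arguments are mine; run it on the temporary instance after attaching and mounting both volumes:

```shell
#!/bin/sh
# Sketch: overwrite volume B's contents with volume A's, preserving B's
# /boot so the HVM t2 instance still boots. src/dst are mount points.
clone_keeping_boot() {
    src=$1 dst=$2
    tmp=$(mktemp -d)
    cp -a "$dst/boot" "$tmp/boot"                          # save B's boot module
    find "$dst" -mindepth 1 -maxdepth 1 -exec rm -rf {} +  # erase all of B
    cp -a "$src"/. "$dst"/                                 # copy everything from A
    rm -rf "$dst/boot"                                     # drop A's boot...
    mv "$tmp/boot" "$dst/boot"                             # ...and restore B's
    rmdir "$tmp"
}

# Example on the temporary instance (device names are assumptions):
# sudo mount /dev/xvdf1 /mnt/a && sudo mount /dev/xvdg1 /mnt/b
# clone_keeping_boot /mnt/a /mnt/b
```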
Detaching the EBS volume and attaching it to the new t2 instance won't work if it is a root volume: the volume still uses PV virtualization, while t2 instances require HVM. You can follow the steps ChrisC used in the link below, which have been verified to work.
https://forums.aws.amazon.com/thread.jspa?threadID=155526

Having issues adding ephemeral storage to an AWS EBS instance running Ubuntu

I am having problems adding ephemeral storage into my existing EBS backed instance. I have a small instance running on 8GB EBS root-device, and I would like to add ephemeral storage into this instance and run it as a medium instance.
The procedure I have tried which did not work for me:
1) Took a snapshot from the instance EBS volume.
2) Registered new AMI based on the snapshot using ec2-api-tools:
ec2-register -a x86_64 -n "My AMI with ephemeral storage" --kernel <AKI-ID> --root-device-name "/dev/sda1" -b "/dev/sda1=<SNAP-ID>:8:true:standard" -b "/dev/sdc=ephemeral1"
3) Launched new medium instance with the new AMI I just created:
ec2-run-instances <AMI-ID> -t m1.medium --kernel <AKI-ID> -k <MY_KEY_NAME> -g default -b "/dev/sdc=ephemeral1"
4) SSHed into my new instance after it started up, and the ephemeral storage is nowhere to be found (checked with fdisk -l, for example). The root device is fine and correct, but even if I try ephemeral0 instead of 1, nothing changes.
Apparently there is nothing in the API that tells you when you exceed your instance store mappings. A medium instance can only have 1 ephemeral drive. In fact, /dev/sdc may only be mappable on large instances and up:
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/InstanceStorage.html#StorageOnInstanceTypes
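Given that constraint, a version of the register command from step 2 that an m1.medium could actually honor would map its single instance-store volume as ephemeral0. This is a sketch only: the AKI and snapshot IDs remain placeholders, and /dev/sdb as the mapping point is an assumption based on the docs linked above, not something I have verified on this instance type:

```shell
ec2-register -a x86_64 -n "My AMI with ephemeral storage" --kernel <AKI-ID> --root-device-name "/dev/sda1" -b "/dev/sda1=<SNAP-ID>:8:true:standard" -b "/dev/sdb=ephemeral0"
```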