default instance storage for m1.small does not exist

I ran df -h and got:
/dev/xvde1 6.0G 1.9G 4.1G 32% /
none 828M 0 828M 0% /dev/shm
and cat /etc/fstab:
LABEL=_/ / ext4 defaults 1 1
/dev/xvdb /mnt ext3 defaults,context=system_u:object_r:usr_t:s0 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
/dev/sda3 none swap sw,comment=cloudconfig 0 0
output of lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvde1 202:65 0 6G 0 disk /
xvde3 202:67 0 896M 0 disk [SWAP]
I assume /dev/xvdb is my instance storage of around 160 GB. However, I do not see this device when I run ls -a in /dev.
Does anyone know how I can get this instance storage mounted?
Thanks so much.

Your lsblk output shows that your instance does not have an extra drive attached, so what you see is correct.
m1.small supports attaching an extra instance store drive of 160 GB, but that does not mean it will be attached automatically when you provision an m1.small instance.
While provisioning an instance, you have to manually select the option to attach the drive.
In your case, you seem to have skipped this step, so the instance got provisioned without the 160 GB drive.
On a side note, you cannot attach instance store drives after the instance is provisioned. In other words, you can either create and attach an EBS volume, or create a new instance and choose to attach the instance store drive while creating it.
Please check the screenshot below to understand what I am talking about:
As shown in the screenshot above, you have to click the "Add New Volume" button while provisioning the instance.
Once you click it, you will see your 160 GB drive being added as below:
Make sure you select "Instance Store 0" in the "Type" column, since the volume you are looking for is an "Instance Store" volume.
Once the instance is provisioned, run lsblk again and you will see your new volume listed there.
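If the ephemeral device then shows up (for example as /dev/xvdb, matching the entry already in your fstab) but is not mounted, a rough sketch of getting it usable would be the following; the filesystem type is your choice (your fstab entry uses ext3, ext4 here is just an example):
#check whether the device already has a filesystem (an unformatted device reports just "data")
sudo file -s /dev/xvdb
#if it is unformatted, create a filesystem first (this erases whatever is on the device)
sudo mkfs -t ext4 /dev/xvdb
#mount it at /mnt, matching the existing fstab entry
sudo mount /dev/xvdb /mnt
df -h /mnt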

If you upgraded from a micro instance (or any instance that only has EBS storage), the instance store won't be included by default.
In any case, you can add it by creating a snapshot of your instance and relaunching. You can add it on the Snapshot page, or in the Launch wizard when launching from an existing AMI.
Open EC2 Web Admin Console.
Click on Instances on the left, select your instance, click the "Actions" dropdown, then select "Create Image".
On the snapshot page enter a name and description. You can add the Instance Store here if you like or later when you launch the AMI. Click the Add Volume button. Select "Instance Store 0" as the type, and whatever device path you like. Then click "Create Image".
Go to AMIs under the Images section on the left side of the EC2 Admin Console.
Wait for the AMI status of your new image to be "available".
Select the new AMI, click Launch at the top and go through the launch wizard (go through the steps, so you can set it up with the same security group and key pair as your current server).
On step 4 you have a chance to modify the volumes again. Make sure "Instance Store 0" is listed as the second volume.
Finish the wizard and wait for the launch. When you log in, df -h will show the volume.
Some recommend stopping the instance before taking the snapshot, but I've never had a problem. In my experience (using an ELB in front) the server is usually unavailable for up to 10 minutes while the snapshot is made.
[ec2-user@ABCD ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 2.3G 5.6G 29% /
tmpfs 829M 0 829M 0% /dev/shm
/dev/xvdb 147G 188M 140G 1% /media/ephemeral0
More info here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#Using_AddingDefaultLocalInstanceStorageToAMI
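If you prefer the CLI to the console wizard, the instance store volume can also be requested at launch time with a block device mapping. A minimal sketch (the AMI ID and key name are placeholders):
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type m1.small \
    --key-name my-key \
    --block-device-mappings '[{"DeviceName":"/dev/sdb","VirtualName":"ephemeral0"}]'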

Related

how to make an attached EBS volume accessible?

I changed my instance type, and now previously attached volumes are not available at startup. How do I attach and mount volumes?
In the volume info in the AWS console:
Attachment information i-e85c62d0 (hongse):/dev/sdf (attached)
however there is nothing at /dev/sdf on the instance.
I tried to mount it following the instructions on the AWS site, for example:
ubuntu@hongse:~$ sudo mkdir /ebs1
ubuntu@hongse:~$ sudo mount /dev/sdf /ebs1
mount: special device /dev/sdf does not exist
but failed.
What other steps could I try to mount an existing volume?
OK, you're using Ubuntu in AWS with an EBS volume. Try this:
ubuntu@hostname1:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 128G 0 disk
└─xvda1 202:1 0 128G 0 part /
xvdf 202:80 0 1000G 0 disk
Note that the xvdf (1 TB) drive is not mounted in my example.
You will want to type the following to mount your disk:
ubuntu@hostname1:~$ sudo mount /dev/xvdf /ebs1
NOTE: the AWS console shows the attachment point as /dev/sdf, but the instance's block device driver renames the device, so on the host it shows up as /dev/xvdf.
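If the volume is brand new (never formatted), you also need to create a filesystem on it before mounting; if it already holds data, skip the mkfs step or the data will be wiped. A sketch reusing the /ebs1 mount point from the question:
# check whether the device already has a filesystem; an empty volume reports just "data"
sudo file -s /dev/xvdf
# only for an empty volume: create a filesystem (this destroys any existing contents)
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /ebs1
sudo mount /dev/xvdf /ebs1
# optional: mount it automatically at boot
echo '/dev/xvdf /ebs1 ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab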

Is the EBS volume mounted? And where?

My EC2 instance has a 100 GB EBS volume attached. I ran this command:
[ec2-user ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 100G 0 disk
└─xvda1 202:1 0 8G 0 part /
Here is the file /etc/fstab:
UUID=ue9a1ccd-a7dd-77f8-8be8-08573456abctkv / ext4 defaults 1 1
I want to understand: why does only the 8 GB partition have a mount point?
Also, does mounting a volume on root '/' mean that all the content of root is stored on the EBS volume?
I want to understand: why does only the 8 GB partition have a mount point?
Because additional volumes are not formatted/mounted by default: AWS does not know whether you'd like ext4 or NTFS or something else, nor which mount point you'd like to use.
Also, does mounting a volume on root '/' mean that all the content of root is stored on the EBS volume?
Yes, if you have an EBS-backed instance (as opposed to a so-called instance store-backed instance) and if you do not have other volumes mounted (not to be confused with 'attached').
P.S. As far as I can see, you initially created an 8 GB volume and then resized it via the AWS console to 100 GB. Please note that you resized the EBS volume (xvda) but did not resize the partition (xvda1). AWS will not resize it automatically for the same reason: it doesn't know how you're going to use the extra space.
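To actually use the extra 92 GB you would grow the partition and then the filesystem. A sketch for the layout shown above (growpart comes from the cloud-utils / cloud-guest-utils package and may need to be installed first; this assumes an ext4 root filesystem, as in your fstab):
# grow partition 1 of /dev/xvda to fill the 100 GB disk
sudo growpart /dev/xvda 1
# grow the ext4 filesystem to fill the enlarged partition
sudo resize2fs /dev/xvda1
df -h /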

AWS EBS volume attachment using snapshot

I am experimenting with AWS EBS volumes. I created an EC2 server using the AMI rancheros-v0.7.1-hvm-1, then attached a volume and mounted it at the /var/lib/docker folder. Then I ran a few Docker images on that server, and I am able to access those applications as well.
Later I created a snapshot of the volume, launched another server using the same AMI, attached an EBS volume created from that snapshot, and mounted it at the /var/lib/docker folder.
After that I ssh'd to the second server and ran docker ps, but no Docker containers are running there.
When I run the df -kh command on the first server, the output is:
Filesystem Size Used Available Use% Mounted on
/dev/xvdb 29.4G 1.2G 26.7G 4% /var/lib/docker
/dev/xvdb 29.4G 1.2G 26.7G 4% /var/lib/docker/overlay
overlay 29.4G 1.2G 26.7G 4% /var/lib/docker/overlay
.........
and so on, followed by entries for the running Docker containers.
But when I ran the same command on the second server, I got output like this:
Filesystem Size Used Available Use% Mounted on
/dev/xvdb 29.4G 44.1M 27.8G 0% /var/lib/docker
/dev/xvdb 29.4G 44.1M 27.8G 0% /var/lib/docker/overlay
No Docker containers are running there either.
The Use% values on the two servers are different.
Can anyone tell me how I can check that both are the same, i.e. that the snapshot contains all the data from the EBS volume? If the snapshot contained the same data as the volume, the second server should have the Docker images, but in my case that's not happening.
This is the user data I supplied while creating the EC2 server.
#!/bin/sh
sudo mkfs.ext4 /dev/xvdb
mkdir -p /var/lib/docker
echo "/dev/xvdb /var/lib/docker ext4 defaults 0 0" >> /etc/fstab
mount /dev/xvdb /var/lib/docker -t ext4
chown -R 1000 /var/lib/docker
Can anyone tell me a solution for this?
It works now.
For the server created from the snapshot, I should not create the filesystem; I had to remove this command
sudo mkfs.ext4 /dev/xvdb
from the user data. Just create the folder and mount it, and then it works.
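For reference, a sketch of the adjusted user data for the second server, the same script as above just without the mkfs step that would overwrite the snapshot's filesystem:
#!/bin/sh
# the volume created from the snapshot already contains an ext4 filesystem, so no mkfs here
mkdir -p /var/lib/docker
echo "/dev/xvdb /var/lib/docker ext4 defaults 0 0" >> /etc/fstab
mount /dev/xvdb /var/lib/docker -t ext4
chown -R 1000 /var/lib/docker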

EC2 machine abruptly terminating right after start

I was in the process of shrinking the EBS volume attached to my EC2 machine. There are many tutorials on how to do this (here is the one I used).
The gist is that I create a snapshot of the actual drive and a small empty volume, then copy the contents of the snapshot to the small drive via sudo rsync -aHAXxSP /mnt/snap/ /mnt/small.
When checking the contents of the small drive, it seems to contain all of the folders/files from the original drive.
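For context, the copy step in that kind of tutorial goes roughly like this (the device names, mount points, and the label flag here are illustrative assumptions, not an exact record of what I ran):
# create a filesystem on the new, smaller volume; labelling it "/" keeps the
# root=LABEL=/ in grub.conf and the LABEL=/ entry in /etc/fstab (see Update 3) resolvable
sudo mkfs -t ext4 -L / /dev/xvdg
sudo mkdir -p /mnt/snap /mnt/small
sudo mount /dev/xvdf1 /mnt/snap
sudo mount /dev/xvdg /mnt/small
sudo rsync -aHAXxSP /mnt/snap/ /mnt/small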
At the end, after detaching all volumes and attaching the smallest volume, when I start the EC2 instance it terminates right after initializing.
When I look up the reason for the termination, it says Instance initiated shutdown:
AI2s-MBP-7:~ i-danielk$ aws ec2 describe-instances --instance-ids i-e0ef0910
RESERVATIONS 645962089403 r-440c95a5
GROUPS sg-4cdf6427 zfei_profiler
INSTANCES 0 x86_64 tOGHd1424650611612 True xen ami-146e2a7c i-e0ef0910 r3.2xlarge zfei_profiler 2015-08-23T01:56:40.000Z /dev/xvda ebs hvm
BLOCKDEVICEMAPPINGS /dev/xvda
EBS 2015-08-23T01:55:56.000Z False attached vol-1d08fcf0
MONITORING disabled
PLACEMENT us-east-1c default
SECURITYGROUPS sg-4cdf6427 zfei_profiler
STATE 80 stopped
STATEREASON Client.InstanceInitiatedShutdown Client.InstanceInitiatedShutdown: Instance initiated shutdown
TAGS Name WIKI_TACL_MERGE
Any idea how I can figure out the real reason for my instance shutting down?
Update 1: this is really weird ... "Instance Settings > Get System Log" shows nothing at all.
Update 2: In both cases (the original volume and the small volume) the volume was attached as /dev/xvda (I tried /dev/sda but it wasn't accepted).
Update 3: Here is the content of /boot/grub/grub.conf. Based on what I see, there is nothing problematic (and nothing that needs to be changed).
[ec2-user@ip-10-167-76-117 target]$ cat /boot/grub/grub.conf
# created by imagebuilder
default=0
timeout=1
hiddenmenu
title Amazon Linux 2014.09 (3.14.33-26.47.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-3.14.33-26.47.amzn1.x86_64 root=LABEL=/ console=ttyS0 LANG=en_US.UTF-8 KEYTABLE=us
initrd /boot/initramfs-3.14.33-26.47.amzn1.x86_64.img
title Amazon Linux 2014.09 (3.14.27-25.47.amzn1.x86_64)
root (hd0,0)
kernel /boot/vmlinuz-3.14.27-25.47.amzn1.x86_64 root=LABEL=/ console=ttyS0
initrd /boot/initramfs-3.14.27-25.47.amzn1.x86_64.img
And this is /etc/fstab:
[ec2-user@ip-10-167-76-117 target]$ cat /etc/fstab
#
LABEL=/ / ext4 defaults,noatime 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0

How can I re-download the pem file in AWS EC2?

I made a key pair .pem file called "test.pem" and downloaded it to my PC.
I made a new instance with this .pem file.
Now I am on a different PC, I don't have this .pem file locally, and my previous PC is in the middle of the sea (shipping).
How can I download the "test.pem" file again?
No, you cannot download a .pem file again. You can download the .pem file ONLY once, and that is when you create a new key pair.
You cannot download such security key files more than once.
You can reuse them for multiple instances.
The best thing you can do is:
Download it
Store it in S3 (in a private-access bucket, of course); a sketch follows below.
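For example (the bucket name here is just a placeholder):
# upload the key to a private bucket with server-side encryption, and fetch it back later when needed
aws s3 cp test.pem s3://my-private-key-bucket/test.pem --sse AES256
aws s3 cp s3://my-private-key-bucket/test.pem ~/test.pem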
You can recover your machine even if you lost the .pem file. There is a way:
1. Create a new instance in the same region and VPC.
2. Stop the old machine (do not terminate it).
3. Go to EBS and detach the root volume of the old machine.
4. Now attach that volume to the new instance (as /dev/sdf). This newly attached volume will be secondary for the new instance, because the new instance already has its own default root volume.
5. Log in to the new machine and run the following:
# mount /dev/xvdf1 /mnt
# cp /root/.ssh/authorized_keys /mnt/root/.ssh/
# umount /mnt
6. Detach the secondary volume from the new instance.
7. Attach this volume back to the old instance.
8. Log back in to the old machine using the .pem file you got when you created the new (recovery) instance.
You cannot re-download the .pem (or .ppk) file again.
SOLUTION
Go to Network and Security --> Key Pairs
Create a new key pair, SAVE IT NOW (a CLI equivalent is sketched below)
Delete the original one
You are good to go
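The same thing can be done from the CLI; a sketch (the key pair names are placeholders):
# create a replacement key pair and save the private key immediately (this is the only chance)
aws ec2 create-key-pair --key-name test2 --query 'KeyMaterial' --output text > test2.pem
chmod 400 test2.pem
# delete the old key pair whose .pem file was lost
aws ec2 delete-key-pair --key-name test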
These are my notes from doing this recently.
There are a few traps for the unwary, and some of the tools might be unfamiliar.
Step 1) Detach your root volume from your machine using the AWS console.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html
Amazon EC2 console > dashboard > instances > select instance > copy instance id i-06d2680a4d94c4f59 (29-5-22_flask_gunicorn_nginx)
The instance must be in the stopped state (check the dashboard, as it takes a few seconds after telling it to stop).
Amazon EC2 console > dashboard > volumes > select volume with matching instance ID. vol-02e720595d57d3591
'actions' dropdown > detach
Step 2) Launch a fresh EC2 instance (not from your old machine's AMI)
(same region and Virtual private cloud)
take note of the new instance id : i-blah
Step 3) Attach your old volume to new EC2 machine
Amazon EC2 console > EC2 > Volumes > select the volume.
Refresh the page after detaching the volume so the 'actions' dropdown updates > 'attach volume'
select the new EC2 instance
wait & refresh page until "Attached Instances" shows "attached"
Volume ID vol-blah
Attached Instances i-blah : /dev/sdf (attached)
Step 4) Now log in to the new EC2 machine and mount the old EBS volume.
List the available disks:
lsblk
#------------------------
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 26.6M 1 loop /snap/amazon-ssm-agent/5163
loop1 7:1 0 55.5M 1 loop /snap/core18/2344
loop2 7:2 0 61.9M 1 loop /snap/core20/1405
loop3 7:3 0 79.9M 1 loop /snap/lxd/22923
loop4 7:4 0 43.6M 1 loop /snap/snapd/15177
xvda 202:0 0 8G 0 disk
├─xvda1 202:1 0 7.9G 0 part /
├─xvda14 202:14 0 4M 0 part
└─xvda15 202:15 0 106M 0 part /boot/efi
xvdf 202:80 0 8G 0 disk
├─xvdf1 202:81 0 7.9G 0 part
├─xvdf14 202:94 0 4M 0 part
└─xvdf15 202:95 0 106M 0 part
#------------------------
sudo file -s /dev/xvda
/dev/xvda: DOS/MBR boot sector, extended partition table (last)
sudo file -s /dev/xvdf
/dev/xvdf: DOS/MBR boot sector, extended partition table (last)
#check file type used on existing volume
mount | grep "^/dev"
/dev/xvda1 on / type ext4 (rw,relatime,discard,errors=remount-ro)
/dev/xvda15 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
#format the attached volume with ext4 (note: this creates a fresh, empty filesystem on it)
sudo mkfs -t ext4 /dev/xvdf
#create a mount point and mount the volume
sudo mkdir /newvolume
sudo mount /dev/xvdf /newvolume/
#confirm the new volume is now mounted
mount | grep "^/dev"
/dev/xvda1 on / type ext4 (rw,relatime,discard,errors=remount-ro)
/dev/xvda15 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/xvdf on /newvolume type ext4 (rw,relatime)
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 26.6M 1 loop /snap/amazon-ssm-agent/5163
loop1 7:1 0 55.5M 1 loop /snap/core18/2344
loop2 7:2 0 61.9M 1 loop /snap/core20/1405
loop3 7:3 0 79.9M 1 loop /snap/lxd/22923
loop4 7:4 0 43.6M 1 loop /snap/snapd/15177
xvda 202:0 0 8G 0 disk
├─xvda1 202:1 0 7.9G 0 part /
├─xvda14 202:14 0 4M 0 part
└─xvda15 202:15 0 106M 0 part /boot/efi
xvdf 202:80 0 8G 0 disk /newvolume
#check the disk space to validate the volume mount.
cd /newvolume
df -h .
Step 5) Now go to that partition, then visit the home directory inside that machine and go to the .ssh folder.
cd ~/.ssh
cat authorized_keys
Step 6) Now generate a new private/public key pair, then paste the public key into the authorized_keys file (a sketch follows below).
nb: since login was set up during this EC2 instance's creation, this step was already complete in my case.
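In the general case (i.e. when the old volume's original filesystem is still intact rather than freshly formatted), that step might look roughly like this; the key file name and the ubuntu home directory path are assumptions:
# generate a new key pair on the recovery instance (or locally)
ssh-keygen -t ed25519 -f ~/recovery_key
# append the new public key to the authorized_keys on the mounted old volume
cat ~/recovery_key.pub | sudo tee -a /newvolume/home/ubuntu/.ssh/authorized_keys
# keep the private key (~/recovery_key) safe; you will need it for the final login in step 9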
Step 7) Once you are done with the above steps, detach that volume from this EC2 machine.
Stop the new instance and wait for it to stop.
Detach the volume and wait for the detach to complete.
Step 8) Now attach this volume to your old machine as the root volume.
Attach the volume to the old instance as /dev/sda1 (the prompt notes that this attaches it as root), and wait for it to attach.
Start the old instance and wait for the start to complete.
nb: this error will occur if the volume is not attached as root:
Failed to start the instance i-06dblah
Invalid value 'i-06dblah' for instanceId. Instance does not have a volume attached at root (/dev/sda1)
Step 9) Now try to log in to your old machine with the newly generated key.
cd ~
sudo ssh-keygen -f "/root/.ssh/known_hosts" -R "xxx.xxx.xxx.xxx"
sudo ssh -i my_key_filename.pem ubuntu@xxx.xxx.xxx.xxx
You can then remount your original volume and copy files across to retrieve them.
It's worth noting here: all SSH access should be locked down to specific IPs and should use key rotation tools. It's likely people will find this post because their security has been breached.
Log in to the old instance via FTP/SFTP and download the key to your PC.