Restoring an AWS Snapshot and using it - amazon-web-services

I have a snapshot, and I can start a new EC2 instance and attach the volume, but I cannot access the snapshot's data on the EBS drive without formatting it, which destroys the data. Here is the command list I am using:
cat /proc/partitions
sudo mke2fs -F -t ext4 /dev/xvdf
sudo mount /dev/xvdf /data
This works, but the EBS volume is empty because of the mke2fs.
How can I just mount the EBS snapshot and hence use it under /data?

Don't use mke2fs. As you have observed, that reformats the volume. If, for some reason, you don't think you can otherwise mount it, you may be mounting the wrong logical entity.
Identify the device or partition with lsblk and then mount it, as above.
If there are partitions, you need to supply the partition, not the device, as an argument to mount.
/dev/xvde # device
/dev/xvde1 # first partition
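For instance, a minimal sketch of that flow, assuming the volume shows up as /dev/xvdf with one partition (the device name and the /data mount point are assumptions; check your own lsblk output):
lsblk                          # list block devices and any partitions on them
sudo mkdir -p /data
sudo mount /dev/xvdf1 /data    # mount the partition if one exists
# sudo mount /dev/xvdf /data   # or mount the bare device if there is no partition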

Related

EBS volume shows no mount point

UseCase: Launch an AWS Cloud9 environment that has an additional 500GB EBS volume. This environment will be used extensively by developers to build and publish Docker images.
So I started an m5.large instance-based environment and attached an EBS volume of 500GB.
Attachment information: i-00xxxxxxb53 (aws-cloud9-dev-6f2xxxx3bda8c):/dev/sdf
This is my total storage, and I do not see the 500GB volume.
On digging further, it looks like the EBS volume is attached but not at the correct mount point.
EC2 EBS configuration
Question: What should be the next step in order to use this EBS volume?
Question: What should be done in order to make use of the attached EBS volume for Docker building?
Question: What should be the most efficient instance type for docker building?
Type df -hT; in the output you will find the filesystem type of root (/), whether it is xfs or ext4.
If, say, root (/) is xfs, run the following command for the 500 GiB volume:
$ mkfs -t xfs /dev/nvme1n1
If root (/) is ext4,
$ mkfs -t ext4 /dev/nvme1n1
Create a directory in root, say one named /mount:
$ mkdir /mount
Now mount the 500 GiB Volume to /mount
$ mount /dev/nvme1n1 /mount
Now it will be mounted and can be viewed with df -hT.
Also make sure to update /etc/fstab so the mount persists across reboots.
To update it, first find the UUID of the 500 GiB EBS volume:
$ blkid /dev/nvme1n1
Note down the UUID from the output
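For example, the output looks something like this (the UUID value here is purely illustrative; use whatever blkid prints for your volume):
/dev/nvme1n1: UUID="0aa2f4e5-1c2b-4d3e-9f6a-7b8c9d0e1f2a" TYPE="xfs"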
Now go to /etc/fstab using editor of your choice
$ vi /etc/fstab
There must already be an entry for /; add a line for /mount and save the file (note: replace xfs with ext4 if the filesystem is ext4):
UUID=<Add_UUID_here_without_""> /mount xfs defaults 0 0
Now finally run the mount command:
$ mount -a
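To double-check both the live mount and the new fstab entry (using the /mount path assumed above):
$ df -hT /mount      # shows the device, filesystem type and size
$ findmnt /mount     # confirms the mount source and options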
Generally, if it's a new EBS volume you have to format it manually and then mount it, also manually. From the docs:
After you attach an Amazon EBS volume to your instance, it is exposed as a block device. You can format the volume with any file system and then mount it.
The instructions for doing this are in the link above.

New volume in ec2 instance not reflecting

I initially had a volume of 250 GB in one of my EC2 instances. I have increased the volume to 300 GB, and I am able to see the 300 GB using the below command.
But when I run the df -h command I cannot see the 300 GB volume. Please help, I am new to AWS.
New volumes should be formatted to be accessible. Resized existing volumes should also be modified (resized) from inside the operating system.
The general information on how to do this safely (e.g. with snapshots) is given in the following AWS documentation:
Making an Amazon EBS volume available for use on Linux
Extending a Linux file system after resizing a volume
Based on the discussion in comments, two commands were used to successfully solve the problem:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
Let's solve this step by step.
After increasing the EBS volume size, connect to your instance over SSH to check the EBS volume size.
ssh ubuntu@<Public IP> -i <Key Pair>
Now use the df command to list all the filesystems mounted on your disk.
sudo df -hT
I hope you are seeing that the root filesystem (/dev/xvda1) still shows the old size (X GB, where X is your original storage size), and that its type is ext4. Now use the lsblk command in the terminal to compare the disk size with its partition size.
sudo lsblk
The root volume (/dev/xvda) has a partition (/dev/xvda1). The size of the volume already reflects the new, larger size (300 GB in your case), but the size of the partition is still the old X GB. Now use the growpart command in the terminal to extend the partition size.
sudo growpart /dev/xvda 1
Again use the lsblk command in the terminal to verify that the partition size has been extended.
sudo lsblk
So far, the volume size and the partition size have been extended. Use the df command to check if the root filesystem has been extended or not.
sudo df -hT
The size of the root filesystem is still X GB, and it needs to be extended. To extend different types of filesystems, different commands are used.
In order to extend an ext4 filesystem, the resize2fs command is used.
sudo resize2fs /dev/xvda1
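For comparison, if the root filesystem were xfs rather than ext4 (as it is on Amazon Linux 2, for example), the filesystem would be grown by mount point instead:
sudo xfs_growfs -d /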
Now again, list all the filesystems on your EC2 instance using the df command.
sudo df -hT
I hope you can see the change after running the resize2fs command: the size of the filesystem has increased.

knife-ec2 not expanding volume at bootstrap

How can I create a larger than 8GB boot partition volume using knife-ec2 on an AWS HVM AMI at bootstrap?
With the old m1 instance types, I could just add --ebs-size 50 and then run resize2fs after the system bootstrapped.
When doing a new HVM AMI (a t2 instance):
knife ec2 server create scott-base -N scott-base -r "role[base]" -I ami-57cfc412 --ebs-size 50
it will create the 50GB volume, but I cannot expand it after I log in.
I see this during the build:
Warning: 50GB EBS volume size is larger than size set in AMI of 8GB.
Use file system tools to make use of the increased volume size.
And when I run resize2fs, this is what I get
[root@scott-base ~]# resize2fs /dev/xvda
resize2fs 1.41.12 (17-May-2010)
resize2fs: Device or resource busy while trying to open /dev/xvda
Couldn't find valid filesystem superblock
I know I can go through the whole process of unmounting, copying and bringing it back up. I also know I can just add a volume after the fact, but I have to believe there is an easier way at bootstrap to get a larger EBS volume than 8GB.
[root@scott-base ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 50G 0 disk
└─xvda1 202:1 0 8G 0 part /
You are trying to apply the resize2fs command to the device reference /dev/xvda, which is not a filesystem itself. Devices are divided into partitions, and that is where the filesystem (ext3, ext4, etc.) is created. You do have a partition with a filesystem at /dev/xvda1, and that is where resize2fs should be pointed. Please read the documentation about devices and partitions in Linux.
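As a sketch only, and assuming the partition is first grown to fill the disk (with growpart, as in the earlier answer on this page), the commands would target the partition rather than the disk:
growpart /dev/xvda 1     # grow partition 1 to fill the 50G device
resize2fs /dev/xvda1     # then grow the ext filesystem on that partition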
The solution was related to the AMI itself. It turns out that some AMIs are simply not equipped to expand the root partition online. Our solution:
Launch the AMI with a larger volume, knowing the root partition would still default to 8GB
Use the cloud-init and dracut module to increase the size during the next reboot
yum install -y cloud-init
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
yum install -y dracut-modules-growroot.noarch cloud-utils-growpart
dracut -f -v
Create a personal image from that instance
Use the personal image to launch a new instance. At boot, it will be the larger size
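After an instance boots from that personal image, a quick check should show the root partition filling the whole volume:
lsblk       # xvda1 should now be roughly 50G
df -h /     # and / should report the larger size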

Attach an EBS volume with boto

I was trying to add several EBS volumes to an EC2 instance; I use something like this:
from boto.ec2.blockdevicemapping import BlockDeviceMapping, EBSBlockDeviceType

block_map = BlockDeviceMapping()
xvdf = EBSBlockDeviceType()
xvdf.delete_on_termination = True
xvdf.size = opts.ebs_vol_size
block_map['/dev/xvdf'] = xvdf

req = conn.request_spot_instances(key_name=opts.key_pair,
                                  price=opts.price,
                                  image_id=ami,
                                  security_groups=[instance_group],
                                  instance_type=opts.instance_type,
                                  block_device_map=block_map,
                                  count=count)
The EBS volumes are created, as I can see them for the EC2 instance in the AWS console. Besides that, I'm 100% sure they are created because I can list them with the lsblk command once I log into the EC2 instance. I also added a couple of entries to my /etc/fstab so that the EBS volumes are mounted at boot time.
However, they are not mounted. If I run the command mount -a the following error shows up:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
So, it seems the EBS volumes are created but not formatted by EBSBlockDeviceType. After I format them, I can run mount -a again and they get mounted.
My question is whether it is possible to create and format a volume within the EBSBlockDeviceType() constructor, so that I can mount it directly.
Another option I think I might have is to attach an already formatted EBS snapshot using the snapshot_id field in the boto.ec2.blockdevicemapping.BlockDeviceType class.
Thank you!
The mount command fails for a newly allocated volume because there is no file system on it. BlockDeviceType (or EBSBlockDeviceType) does not have an option to choose a file system for the underlying EBS volume. Once the volume is allocated, the user can create a file system of their choice.
However, for a volume created from a formatted EBS snapshot (one taken from a volume that already had a file system), there is no need to create the file system again. You can use file -s <device name> to find out whether the device already has a file system.
More details at: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
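A minimal shell sketch of that check, with the device name and mount point as assumptions:
sudo file -s /dev/xvdf         # prints just "data" when there is no filesystem yet
sudo mkfs -t ext4 /dev/xvdf    # format only if the volume is new and empty
sudo mount /dev/xvdf /data     # then mount it (or rerun mount -a with the fstab entry)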

Add EBS to Ubuntu EC2 Instance

I'm having a problem connecting an EBS volume to my Ubuntu EC2 instance.
Here's what I did:
From the Amazon AWS Console, I created a 150GB EBS volume and attached it to an Ubuntu 11.10 EC2 instance. Under the EBS volume properties, "Attachment" shows: "[my Ubuntu instance id]:/dev/sdf (attached)"
Tried mounting the drive on the Ubuntu box, and it told me "mount: /dev/sdf is not a block device"
sudo mount /dev/sdf /vol
So I checked with fdisk and tried to mount from the new location and it told me it wasn't the right file system.
sudo fdisk -l
sudo mount -v -t ext4 /dev/xvdf /vol
the error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf, missing
codepage or helper program, or other error In some cases useful info
is found in syslog - try dmesg | tail or so
"dmesg | tail" told me it gave the following error:
EXT4-fs (sda1): VFS: Can't find ext4 filesystem
I also tried putting the configuration into the /etc/fstab file as instructed at http://www.webmastersessions.com/how-to-attach-ebs-volume-to-amazon-ec2-instance, but it still gave the same wrong-filesystem error.
Questions:
Q1: Based on point 1 (above), why was the volume mapped to '/dev/sdf' when it's really available at '/dev/xvdf'?
Q2: What else do I need to do to get the EBS volume loaded? I thought it would just take care of everything for me when I attach it to an instance.
Since this is a new volume, you need to format the EBS volume (block device) with a file system between step 1 and step 2. So the entire process with your sample mount point is:
Create EBS volume.
Attach EBS volume to /dev/sdf (EC2's external name for this particular device number).
Format file system /dev/xvdf (Ubuntu's internal name for this particular device number):
sudo mkfs.ext4 /dev/xvdf
Only format the file system if this is a new volume with no data on it. Formatting will make it difficult or impossible to retrieve any data that was on this volume previously.
Mount file system (with update to /etc/fstab so it stays mounted on reboot):
sudo mkdir -m 000 /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
sudo mount /vol
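As with the fstab advice earlier on this page, the device path in that fstab line can be replaced with the volume's UUID so the entry keeps working even if the device name changes (use whatever UUID blkid reports for your volume):
sudo blkid /dev/xvdf
echo "UUID=<UUID_from_blkid> /vol auto noatime 0 0" | sudo tee -a /etc/fstab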
Step 1: create the volume
Step 2: attach it to your instance root volume
Step 3: run sudo resize2fs -p /dev/xvde
Step 4: restart apache2: sudo service apache2 restart
Step 5: run df -h
You can see the total volume attached to your instance.