Add EBS to Ubuntu EC2 Instance

I'm having a problem connecting an EBS volume to my Ubuntu EC2 instance.
Here's what I did:
From the Amazon AWS Console, I created a 150GB EBS volume and attached it to an Ubuntu 11.10 EC2 instance. Under the EBS volume properties, "Attachment" shows: "[my Ubuntu instance id]:/dev/sdf (attached)"
I tried mounting the drive on the Ubuntu box, and it told me "mount: /dev/sdf is not a block device":
sudo mount /dev/sdf /vol
So I checked with fdisk and tried to mount from the new location, and it told me it wasn't the right file system:
sudo fdisk -l
sudo mount -v -t ext4 /dev/xvdf /vol
the error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so
"dmesg | tail" told me it gave the following error:
EXT4-fs (sda1): VFS: Can't find ext4 filesystem
I also tried putting the configuration into the /etc/fstab file as instructed on http://www.webmastersessions.com/how-to-attach-ebs-volume-to-amazon-ec2-instance, but it still gave the same wrong file system error.
Questions:
Q1: Based on point 1 (above), why was the volume mapped to '/dev/sdf' when it's really mapped to '/dev/xvdf'?
Q2: What else do I need to do to get the EBS volume loaded? I thought it would just take care of everything for me when I attached it to an instance.

Since this is a new volume, you need to format the EBS volume (block device) with a file system between attaching it and mounting it. So the entire process with your sample mount point is:
Create EBS volume.
Attach EBS volume to /dev/sdf (EC2's external name for this particular device number).
Format file system /dev/xvdf (Ubuntu's internal name for this particular device number):
sudo mkfs.ext4 /dev/xvdf
Only format the file system if this is a new volume with no data on it. Formatting will make it difficult or impossible to retrieve any data that was on this volume previously.
Mount file system (with update to /etc/fstab so it stays mounted on reboot):
sudo mkdir -m 000 /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
sudo mount /vol
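If you're not sure whether the volume already contains a file system (for example, if it was created from a snapshot), it's worth checking before running mkfs. A minimal check, assuming the device name /dev/xvdf from the steps above:
sudo file -s /dev/xvdf
# "/dev/xvdf: data" means there is no file system yet, so formatting is safe;
# output mentioning ext4 (or another fs) means the volume is already formatted.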

Step 1: create the volume.
Step 2: attach it to your instance as the root volume.
Step 3: run sudo resize2fs -p /dev/xvde
Step 4: restart Apache: sudo service apache2 restart
Step 5: run df -h
You will see the total volume attached to your instance.
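Note that on instances where the root volume is partitioned, the partition usually has to be grown before resize2fs can see the new space. A sketch under that assumption (growpart comes from the cloud-utils / cloud-guest-utils package; /dev/xvde and partition 1 are illustrative names):
sudo growpart /dev/xvde 1    # grow partition 1 to fill the enlarged volume
sudo resize2fs -p /dev/xvde1 # then grow the ext4 file system to fill the partition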

Related

EBS volume shows no mount point

UseCase: Launch an AWS Cloud9 environment that has an added 500GB EBS volume. This environment will be used extensively by developers to build and publish Docker images.
So I started an m5.large instance-based environment and attached an EBS volume of 500GB.
Attachment information: i-00xxxxxxb53 (aws-cloud9-dev-6f2xxxx3bda8c):/dev/sdf
This is my total storage, and I do not see the 500GB volume.
On digging further, it looks like the EBS volume is attached but not at the correct mount point.
[Screenshot: EC2 EBS configuration]
Question: What should be the next step in order to use this EBS volume?
Question: What should be done in order to make use of the attached EBS volume for Docker builds?
Question: What should be the most efficient instance type for docker building?
Type df -hT; here you will find the file system type of root, whether it is xfs or ext4.
If, say, root (/) is xfs, run the following command for the 500 GiB volume:
$ mkfs -t xfs /dev/nvme1n1
If root (/) is ext4,
$ mkfs -t ext4 /dev/nvme1n1
Create a directory in root, say one named mount:
$ mkdir /mount
Now mount the 500 GiB Volume to /mount
$ mount /dev/nvme1n1 /mount
Now it will be mounted and can be viewed in df -hT.
Also make sure to update /etc/fstab so the mount remains stable if there is a reboot.
To update it, first find the UUID of the 500 GiB EBS volume:
$ blkid /dev/nvme1n1
Note down the UUID from the output
Now go to /etc/fstab using an editor of your choice:
$ vi /etc/fstab
There must already be an entry for /; add one for /mount and save the file (note: replace xfs with ext4 if the file system is ext4):
UUID=<Add_UUID_here_without_""> /mount xfs defaults 0 0
Now finally run the mount command:
$ mount -a
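Since the use case is Docker builds, one way to actually put the new volume to work is to move Docker's storage onto it. This is a sketch, not part of the original answer; it assumes the /mount path from the steps above and uses Docker's standard data-root daemon setting:
$ sudo systemctl stop docker
$ sudo mkdir -p /mount/docker
$ echo '{ "data-root": "/mount/docker" }' | sudo tee /etc/docker/daemon.json
$ sudo systemctl start docker
$ docker info | grep 'Docker Root Dir'   # should now report /mount/docker
With that in place, images and build layers land on the 500 GiB volume instead of the root disk.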
Generally, if it's a new EBS volume, you have to format it manually and then mount it, also manually. From the docs:
After you attach an Amazon EBS volume to your instance, it is exposed as a block device. You can format the volume with any file system and then mount it.
The instructions for doing this are in the documentation quoted above.
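In short, it comes down to two steps. A minimal sketch, assuming the volume appears as /dev/xvdf and is brand new (mkfs destroys any existing data):
sudo mkfs -t ext4 /dev/xvdf   # create a file system on the empty volume
sudo mkdir /data              # choose a mount point
sudo mount /dev/xvdf /data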

Unable to mount a volume on an EC2 instance

I attached a volume to an EC2 instance, and now when I try to mount it I get this error:
sudo mount /dev/xvdf /mnt/tmp
mount: /mnt/tmp: wrong fs type, bad option, bad superblock on /dev/xvdf, missing codepage or helper program, or other error.
What's the problem?
I was using an Amazon Linux 2 image, so I had to mount the volume using this command:
sudo mount -o nouuid /dev/xvdf1 /mnt/tempvol
And now it works.
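Some context on why nouuid is needed (my addition, not from the answer above): XFS refuses to mount a file system whose UUID is already mounted, which happens when the volume is a copy or snapshot of the running root volume. -o nouuid bypasses the check for one mount; to fix it permanently you can give the copy a fresh UUID, assuming the same /dev/xvdf1 partition and a reasonably recent xfsprogs:
sudo umount /mnt/tempvol                 # the file system must be unmounted first
sudo xfs_admin -U generate /dev/xvdf1    # write a new random UUID
sudo mount /dev/xvdf1 /mnt/tempvol       # now mounts without -o nouuid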
You have to create a file system on this volume:
mkfs.ext4 /dev/xvdf
And then you can mount it.
Please remember that this will format the volume.
Please check the partition name: if the disk is partitioned, the device to mount will be /dev/xvdf1 rather than /dev/xvdf. Alternatively, you can try the command below:
sudo mount /dev/xvdf /vol -t ext4
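To see at a glance whether the disk is partitioned and what file system (if any) it carries, lsblk helps; hypothetical output for an instance with a partitioned secondary volume:
sudo lsblk -f
NAME    FSTYPE LABEL UUID     MOUNTPOINT
xvda
└─xvda1 ext4         1f2e...  /
xvdf
└─xvdf1 ext4         9a8b...
Here the correct mount argument would be /dev/xvdf1, not /dev/xvdf.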

Auto mounting EFS on EC2 instance

I created an EC2 instance and an EFS, and was able to mount EFS properly on the instance.
I need to auto mount in case the server is rebooted.
According to the documentation, I did the following in /etc/fstab:
fs-xxxxxxxx:/ /mnt/efs efs defaults,_netdev 0 0
Using the EFS file system ID in place of xxxxxxxx
But when I reboot the server, EFS does not get mounted, and I have to remount it again.
What should I do here?
I'm posting here a more detailed solution, since this thread seems to show up near the top for related queries from search engines.
There are two methods to mount an Amazon EFS: "Amazon EFS mount helper" (amazon-efs-utils) and "NFS client" (nfs-utils).
The examples below show how to mount manually and automatically with each method. Before using them, replace the bracketed placeholders (e.g. [fs-XXXXXXXX]) with your own values.
==============================
===============
Mounting with "Amazon EFS mount helper"
===============
To mount with "Amazon EFS mount helper" manually, you issue the following command format into CLI:
sudo mount -t efs [fs-XXXXXXXX]:/ /path/to/mount/dir
=====
To mount with "Amazon EFS mount helper" automatically, you insert the following line into /etc/fstab
[fs-XXXXXXXX]:/ /path/to/mount/dir efs defaults,_netdev 0 0
===============
Mounting with "NFS client"
===============
To mount with "NFS client" manually, you issue either of the following command format into CLI:
Use the command instruction given from "Amazon EC2 mount instructions (from local VPC)" when you click in to view the Elastic File System ID in question under EFS Web Console.
sudo mount -t nfs4 -o nfsvers=4.1,rsize=XXXXXXX,wsize=XXXXXXX,hard,timeo=XXX,retrans=X,noresvport [fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir
OR
sudo mount -t nfs4 -o defaults,_netdev [fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir
=====
To mount with "NFS client" automatically, you insert the following line into /etc/fstab
[fs-XXXXXXXX].efs.[REGION].amazonaws.com:/ /path/to/mount/dir nfs4 defaults,_netdev 0 0
==============================
Given the above example format, do you notice your problem?
You thought you had the "Amazon EFS mount helper" installed, but based on the manual mount command you posted in your first comment reply (not the opening post), you actually only have the "NFS client" installed on your system. You were using the "Amazon EFS mount helper" format inside /etc/fstab to auto mount, but the manual mount command that worked for you is in "NFS client" format. Since your system doesn't have the "Amazon EFS mount helper" installed, it doesn't understand the auto mount format inside /etc/fstab, so auto mounting doesn't work for you.
The manual mount command you posted above that worked for you is only for "NFS client", not for "Amazon EFS mount helper".
mount -t nfs4 -o nfsvers=4.1 ...
Notice the -t parameter above is nfs4, which is the format for "NFS client". If you were using "Amazon EFS mount helper", the -t parameter should be efs.
To solve the problem, you can use either Amazon EFS mount helper (amazon-efs-utils) or NFS client (nfs-utils), but the command format (in CLI or /etc/fstab) and the mount client being used should be consistent.
In other words:
"Amazon EFS mount helper" <=> efs in both CLI and /etc/fstab
"NFS client" <=> nfs4 in both CLI and /etc/fstab
==============================
Installation instructions for mount client software:
===============
If you want to use "Amazon EFS mount helper", use the following installation instructions for Amazon Linux and Other Distros:
https://docs.aws.amazon.com/efs/latest/ug/using-amazon-efs-utils.html
=====
If you want to use "NFS client", use the following installation instructions on your EC2 instance:
On a Red Hat Enterprise Linux or SUSE Linux instance, including Amazon Linux, use this command:
sudo yum install -y nfs-utils
On an Ubuntu instance, use this command:
sudo apt-get install nfs-common
==============================
Once you have the mount client software installed, use the corresponding mounting instructions posted above.
To solve this using NFS4, please follow the instructions below:
On your AWS account, notice the following:
1) Go to your EFS management screen; you should see your EFS_WHATEVER... and there is a small triangle next to it; click it to expand.
2) Notice there is a "DNS Name" right in the middle of the screen, it will say something like: "fs-1234567c.efs.us-west-1.amazonaws.com", note that down, this is your mounting point that we will use later on.
3) By default, if you have just created the new instance, you must allow it to be seen by your servers; trying to connect will freeze since the firewall is blocking your connection. To allow this, scroll down until you see your security group, which is something like sg-abcdef.
4) Go into your EC2 servers, select the server that you want to access your EFS, and then click on its "security groups"; it should take you into the security groups management screen. Note down its security group ID (this is something like sg-12345).
5) Now clear the filter of your VPC management screen to see all of the SGs.
6) Enter your EFS security group (i.e. sg-abcdef) and click the search button; this should bring up the EFS ACL.
7) Click on "Inbound Rules" -> EDIT.
8) Click "ADD", select "NFS" from the list, enter your server's SG (i.e. sg-12345), and describe it as "Server XXX access" if you like.
9) Now the server should be able to see the EFS mount.
10) Go into your server and install the necessary components by running as root:
apt-get install nfs-common
11) Now, to test the mount, create a new directory, something like: mkdir /mnt/heider
12) Mount the FS using the following command:
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-1234567c.efs.us-west-1.amazonaws.com:/ /mnt/heider
Please note that your fs-1234567c... is taken from your DNS name as mentioned above.
13) If this works then great, move to the next step; otherwise revise the above to see if you missed anything.
14) Now, to auto-mount this, you need to:
15) Edit /etc/fstab
16) Add the following:
fs-1234567c.efs.us-west-1.amazonaws.com:/ /mnt/heider nfs4 defaults,_netdev 0 0
17) Save the file and exit.
18) In the Linux command shell, type:
mount -a
This will test the mounting; if it's mounted then great, rebooting will auto-mount it.
19) This should auto-mount.
20) Reboot to test; all should be there.
I hope this helps.
In order to use the efs file system type, I believe you need to have the amazon-efs-utils package installed. This will install the additional dependencies.
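On Amazon Linux that is a single package install (on other distros, the AWS docs linked above describe building amazon-efs-utils from source):
sudo yum install -y amazon-efs-utils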
For anyone who has this issue:
instead of
fs-xxxxxxxx:/ /mnt/efs efs defaults,_netdev 0 0
use
{target_ip}:/ /mnt/efs nfs4 defaults,_netdev 0 0
This works fine for me, and it auto-mounts on newly created instances.
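If you need to look up the mount target IP for a file system, one way (my assumption, requires the AWS CLI; not part of the answer above) is:
aws efs describe-mount-targets --file-system-id fs-xxxxxxxx \
    --query 'MountTargets[].IpAddress' --output text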

Restoring an AWS Snapshot and using it

I have a snapshot, and I can start a new EC2 instance and attach the volume, but I cannot access the snapshot data on the EBS drive without formatting it, which destroys the data. Here is the command list I am using:
cat /proc/partitions
sudo mke2fs -F -t ext4 /dev/xvdf
sudo mount /dev/xvdf /data
This works, but the EBS volume is empty due to the mke2fs.
How can I just mount the EBS snapshot and use it under /data?
Don't use mke2fs. As you have observed, that reformats the volume. If, for some reason, you don't think you can otherwise mount it, you may be mounting the wrong logical entity.
Identify the device or partition with lsblk and then mount it, as above.
If there are partitions, you need to supply the partition, not the device, as an argument to mount.
/dev/xvde # device
/dev/xvde1 # first partition
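For example, hypothetical lsblk output on an instance where the restored volume is partitioned:
lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvde    202:64   0 100G  0 disk
└─xvde1 202:65   0 100G  0 part
Here you would run sudo mount /dev/xvde1 /data (no mke2fs) to get the snapshot's contents under /data.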

Amazon EC2 EBS CentOS 6.5 Storage Capacity not Added to Instance

I have created a custom CentOS 6.5 image and registered it to AWS as an EBS root device type. When I launch an instance, it works perfectly well, except that the storage capacity (the instance storage that should be included according to the instance type) is not added to the instance.
I tried booting an instance using the official CentOS 6.5 AMI from the AWS Marketplace, but I got the same result.
Does anyone know the reason, if it is a known issue, or whatever?
Thanks in advance.
First you have to make sure that the instance store is attached at launch time; in the AWS console, check the instance's block device mapping.
Once you boot the instance, you have to create a file system on the drive by running:
mkfs.ext4 /dev/sdb
Then you need to mount that drive somewhere in your root filesystem:
mkdir -p /mnt/myinstancestore
mount /dev/sdb /mnt/myinstancestore
You can run these commands to check that your drive is mounted:
df -h
mount
You can also add the mount entry to your /etc/fstab file so that it is mounted automatically after every reboot:
/dev/sdb /mnt/myinstancestore ext4 defaults 1 2
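To apply the fstab entry without rebooting, you can test it right away:
mount -a                       # mounts everything listed in /etc/fstab that isn't mounted yet
df -h /mnt/myinstancestore     # confirm the instance store is there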