UseCase: Launch an AWS Cloud9 environment that has an additional 500 GB EBS volume. This environment will be used extensively by developers to build and publish Docker images.
So I started an m5.large instance-based environment and attached an EBS volume of 500 GB.
Attachment information: i-00xxxxxxb53 (aws-cloud9-dev-6f2xxxx3bda8c):/dev/sdf
This is my total storage, and I do not see the 500 GB volume.
On digging further, it looks like the EBS volume is attached but not at the correct mount point.
EC2 EBS configuration
Question: What should be the next step in order to use this EBS volume?
Question: What should be done in order to make use of the attached EBS volume for Docker builds?
Question: What is the most efficient instance type for Docker builds?
Type df -hT; there you will find the filesystem type of root, whether it is xfs or ext4.
If, say, root (/) is xfs, run the following command on the 500 GiB volume:
$ mkfs -t xfs /dev/nvme1n1
If root (/) is ext4,
$ mkfs -t ext4 /dev/nvme1n1
Create a directory in root, say named /mount:
$ mkdir /mount
Now mount the 500 GiB Volume to /mount
$ mount /dev/nvme1n1 /mount
Now it will be mounted and can be viewed with df -hT.
Also make sure to update /etc/fstab so the mount persists across reboots.
To update it, first find the UUID of the 500 GiB EBS volume:
$ blkid /dev/nvme1n1
Note down the UUID from the output
Now go to /etc/fstab using editor of your choice
$ vi /etc/fstab
There must already be an entry for /; add a line for the new mount and save the file (replace xfs with ext4 if the filesystem is ext4):
UUID=<Add_UUID_here_without_""> /mount xfs defaults 0 0
Now finally run the mount command:
$ mount -a
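The format-then-mount steps above can be sketched end to end on a small file-backed image, so it needs no root access or real EBS volume; the image name and /mount path here stand in for /dev/nvme1n1 and your real mount point.

```shell
# Runnable sketch of the steps above, using a disk image instead of /dev/nvme1n1.
IMG=disk.img
truncate -s 64M "$IMG"                  # stand-in for the 500 GiB volume
mkfs -t ext4 -q -F "$IMG"               # use -t xfs instead if root (/) is xfs
UUID=$(blkid -s UUID -o value "$IMG")   # same UUID you would put in /etc/fstab
# The line you would append to /etc/fstab (with /mount created first):
echo "UUID=$UUID /mount ext4 defaults 0 0"
```

On the real instance you would run the same mkfs and blkid against the device node and append the echoed line to /etc/fstab before `mount -a`.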
Generally, if it's a new EBS volume, you have to format it manually and then mount it, also manually. From the docs:
After you attach an Amazon EBS volume to your instance, it is exposed as a block device. You can format the volume with any file system and then mount it.
The instructions for doing this are in the link above.
Related
I initially had a volume of 250 GB on one of my EC2 instances. I increased the volume to 300 GB, and I can see the 300 GB using the command below.
But when I run the df -h command, I cannot see the 300 GB volume. Please help, I am new to AWS.
New volumes must be formatted to be accessible. Resized existing volumes must also be grown from inside the operating system.
The general information on how to do this safely (e.g. with snapshots) is given in the following AWS documentation:
Making an Amazon EBS volume available for use on Linux
Extending a Linux file system after resizing a volume
Based on the discussion in comments, two commands were used to successfully solve the problem:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
Let's solve step by step
After increasing the EBS volume size, connect to your instance over SSH to check the EBS volume size.
ssh ubuntu@<Public IP> -i <Key Pair>
Now use the df command to list all the filesystems mounted on your disk.
sudo df -hT
You should see that the root filesystem (/dev/xvda1) is still X GB (where X is your original EC2 storage size) and that its type is ext4. Now use the lsblk command to check whether the partition reflects the new disk size.
sudo lsblk
The root volume (/dev/xvda) has a partition (/dev/xvda1). The volume shows the new, larger size, but the size of the partition is still X GB. Now use the growpart command to extend the partition:
sudo growpart /dev/xvda 1
Use the lsblk command again to verify that the partition size has been extended.
sudo lsblk
So far, the volume size and the partition size have been extended. Use the df command to check if the root filesystem has been extended or not.
sudo df -hT
The size of the root filesystem is still X GB, and it needs to be extended. To extend different types of filesystems, different commands are used.
In order to extend an ext4 filesystem, the resize2fs command is used.
sudo resize2fs /dev/xvda1
Now again, list all the filesystems on your EC2 instance using the df command.
sudo df -hT
You should now see the change: after running the resize2fs command, the size of the filesystem has increased.
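The grow-then-resize idea from the walkthrough above can be demonstrated on a file-backed image, which needs no root access or real EBS device; the image name is illustrative, and on the instance you would target /dev/xvda1 as shown above.

```shell
# Sketch: grow a filesystem after its underlying "disk" has been enlarged.
IMG=vol.img
truncate -s 64M "$IMG"
mkfs -t ext4 -q -F "$IMG"
truncate -s 128M "$IMG"    # analogous to increasing the EBS volume size
e2fsck -f -p "$IMG"        # resize2fs wants a recently checked filesystem
resize2fs "$IMG"           # analogous to: sudo resize2fs /dev/xvda1
```

After resize2fs, the filesystem fills the enlarged image, just as the root filesystem fills the enlarged partition on the instance.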
I attached a volume to an EC2 instance, and now when I try to mount it I get this error:
sudo mount /dev/xvdf /mnt/tmp
mount: /mnt/tmp: wrong fs type, bad option, bad superblock on /dev/xvdf, missing codepage or helper program, or other error.
What's the problem?
I was using an Amazon Linux 2 image, so I had to mount the volume using this command:
sudo mount -o nouuid /dev/xvdf1 /mnt/tempvol
And now it works.
You have to create a filesystem on this volume:
mkfs.ext4 /dev/xvdf
And then you can mount it.
Please remember that this will format the volume and erase any existing data on it.
Check the partition name: if the disk is partitioned, it should be /dev/xvdf1 rather than /dev/xvdf. Alternatively, try the command below:
sudo mount /dev/xvdf /vol -t ext4
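Before guessing at mount options, it can help to check what is actually on the device. A sketch of that probe, shown on a file-backed image so it runs without root (on the instance you would run `sudo file -s /dev/xvdf` or `lsblk -f`); if blkid prints nothing, there is no filesystem to mount yet.

```shell
# Sketch: probe a "device" for a filesystem signature before mounting it.
IMG=probe.img
truncate -s 64M "$IMG"
mkfs -t ext4 -q -F "$IMG"
blkid -s TYPE -o value "$IMG"   # prints the filesystem type, e.g. ext4
```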
I am experimenting with AWS EBS volumes. I created an EC2 server using the AMI rancheros-v0.7.1-hvm-1, then attached a volume and mounted it to the /var/lib/docker folder. I then ran a few Docker images on that server, and I am able to access those applications.
Later I created a snapshot of the volume and launched another server using the same AMI, attached an EBS volume created from that snapshot, and mounted it to the /var/lib/docker folder.
After that I SSHed to the second server and ran docker ps. But no containers are running there.
When I run the df -kh command on the first server, the output is:
Filesystem Size Used Available Use% Mounted on
/dev/xvdb 29.4G 1.2G 26.7G 4% /var/lib/docker
/dev/xvdb 29.4G 1.2G 26.7G 4% /var/lib/docker/overlay
overlay 29.4G 1.2G 26.7G 4% /var/lib/docker/overlay
.........
This is followed by the running Docker containers.
But when I ran the same command on the second server, I got output like this:
Filesystem Size Used Available Use% Mounted on
/dev/xvdb 29.4G 44.1M 27.8G 0% /var/lib/docker
/dev/xvdb 29.4G 44.1M 27.8G 0% /var/lib/docker/overlay
No Docker containers are running either.
The Use% values differ between the two servers.
Can anyone tell me how to verify that both are identical and that the snapshot contains all the data from the EBS volume? If the snapshot contains the same data as the volume, the second server should have the Docker images, but in my case that is not happening.
This is the user data I gave while creating the EC2 server.
#!/bin/sh
sudo mkfs.ext4 /dev/xvdb
mkdir -p /var/lib/docker
echo "/dev/xvdb /var/lib/docker ext4 defaults 0 0" >> /etc/fstab
mount /dev/xvdb /var/lib/docker -t ext4
chown -R 1000 /var/lib/docker
Can anyone tell me a solution for this?
It works now.
On the server created from the snapshot, I should not create the filesystem. I had to remove this command
sudo mkfs.ext4 /dev/xvdb
from the user data. Just create the folder and mount it; then it worked.
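One way to make a single user-data script work for both fresh and snapshot-restored volumes is to probe for an existing filesystem and only run mkfs when there is none. A hypothetical sketch, demonstrated on a file-backed image; on the instance DEVICE would be /dev/xvdb, followed by the mkdir/fstab/mount commands from the question.

```shell
# Hypothetical user-data sketch: format only when no filesystem exists, so
# a volume restored from a snapshot keeps its data.
DEVICE=data.img                 # would be /dev/xvdb on the instance
truncate -s 64M "$DEVICE"       # test scaffolding; not needed with a real device
if ! blkid "$DEVICE" >/dev/null 2>&1; then
    # blkid exits non-zero when no filesystem signature is found
    mkfs -t ext4 -q -F "$DEVICE"
fi
```

Running the script a second time leaves the existing filesystem, and its data, untouched.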
I have a snapshot and I can start a new EC2 and attach it but I cannot access the snapshot on the EBS drive without formatting it which destroys the data. Here is the command list I am using:
cat /proc/partitions
sudo mke2fs -F -t ext4 /dev/xvdf
sudo mount /dev/xvdf /data
This works, but the EBS volume is empty because of the mke2fs.
How can I just mount the EBS snapshot and hence use it under /data?
Don't use mke2fs. As you have observed, that reformats the volume. If, for some reason, you don't think you can otherwise mount it, you may be mounting the wrong logical entity.
Identify the device or partition with lsblk and then mount it, as above.
If there are partitions, you need to supply the partition, not the device, as an argument to mount.
/dev/xvde # device
/dev/xvde1 # first partition
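The device-versus-partition distinction can be sketched without a real block device: writing a partition table to a blank image makes blkid report a partition table (PTTYPE) rather than a filesystem, which is why the mountable filesystem lives at /dev/xvde1, not /dev/xvde. The image name here is illustrative.

```shell
# Sketch: a partitioned "device" carries a partition table, not a filesystem.
IMG=pt.img
truncate -s 64M "$IMG"
echo 'type=83' | sfdisk -q "$IMG"   # one Linux partition covering the disk
blkid -s PTTYPE -o value "$IMG"     # prints "dos" (an MBR partition table)
```

Trying to mount the whole device fails for the same reason blkid finds no TYPE on it; mount the partition instead.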
I'm having a problem connecting an EBS volume to my Ubuntu EC2 instance.
Here's what I did:
From the Amazon AWS Console, I created a 150 GB EBS volume and attached it to an Ubuntu 11.10 EC2 instance. Under the EBS volume properties, "Attachment" shows: "[my Ubuntu instance id]:/dev/sdf (attached)"
Tried mounting the drive on the Ubuntu box, and it told me "mount: /dev/sdf is not a block device"
sudo mount /dev/sdf /vol
So I checked with fdisk and tried to mount from the new device name, but it told me it wasn't the right filesystem.
sudo fdisk -l
sudo mount -v -t ext4 /dev/xvdf /vol
the error:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf, missing
codepage or helper program, or other error In some cases useful info
is found in syslog - try dmesg | tail or so
"dmesg | tail" told me it gave the following error:
EXT4-fs (sda1): VFS: Can't find ext4 filesystem
I also tried putting the configurations into /etc/fstab file as instructed on http://www.webmastersessions.com/how-to-attach-ebs-volume-to-amazon-ec2-instance, but still gave same not the right file system error.
Questions:
Q1: Based on point 1 (above), why was the volume mapped to /dev/sdf when it's really mapped to /dev/xvdf?
Q2: What else do I need to do to get the EBS volume loaded? I thought it would just take care of everything when I attached it to an instance.
Since this is a new volume, you need to format the EBS volume (block device) with a file system between step 1 and step 2. So the entire process with your sample mount point is:
Create EBS volume.
Attach EBS volume to /dev/sdf (EC2's external name for this particular device number).
Format file system /dev/xvdf (Ubuntu's internal name for this particular device number):
sudo mkfs.ext4 /dev/xvdf
Only format the file system if this is a new volume with no data on it. Formatting will make it difficult or impossible to retrieve any data that was on this volume previously.
Mount file system (with update to /etc/fstab so it stays mounted on reboot):
sudo mkdir -m 000 /vol
echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
sudo mount /vol
Step 1: Create the volume.
Step 2: Attach it to your instance.
Step 3: Run sudo resize2fs -p /dev/xvde
Step 4: Restart Apache: sudo service apache2 restart
Step 5: Run df -h
You will now see the total volume attached to your instance.