My current 500 GB Amazon EBS Cold HDD (sc1) volume at /dev/sdf is full. Following the tutorial here (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html#migrate-data-larger-volume), I successfully created a 1.5 TB sc1, mounted at /dev/xvda, and attached it to the instance. Please note that the 500 GB sc1 (/dev/sdf) is also attached to the instance.
Sadly, when I turn on the instance, I only see this new 1.5 TB sc1, but not the old 500 GB sc1 (/dev/sdf) and the corresponding data. When I do df -h:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdg1 1.5T 34G 1.5T 3% /
devtmpfs 7.9G 76K 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
If I turn off the instance, detach the 1.5 TB sc1 (/dev/xvda), keep the 500 GB sc1 (/dev/sdf) attached to the instance, and finally restart the instance, I will see the 500 GB sc1 (/dev/sdf) and its data again.
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdf 500G 492G 8G 99% /
devtmpfs 7.9G 76K 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
Is there any way to mount both of these volumes and see/transfer data between them on the same instance? Could any guru enlighten? Thanks.
Respond to the comment:
The following is the result of lsblk when both the 500 GB and the 1.5 TB sc1 are attached.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part
xvdf 202:80 0 500G 0 disk
└─xvdf1 202:81 0 500G 0 part
xvdg 202:96 0 1.5T 0 disk
└─xvdg1 202:97 0 1.5T 0 part /
The following is the content of /etc/fstab when both the 500 GB and the 1.5 TB sc1 are attached.
LABEL=/ / ext4 defaults,noatime 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
extra comments:
UUID results
ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 11 Oct 19 08:54 43c07df6-e944-4b25-8fd1-5ff848b584b2 -> ../../xvdg1
2016-10-21 update:
After trying the following
uuidgen
tune2fs /dev/xvdf1 -U <the uuid generated before>
making sure both volumes are attached to the instance, and restarting the instance, only the 500 GB volume shows up.
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdg1 493G 473G 20G 96% /
devtmpfs 7.9G 76K 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part
xvdf 202:80 0 500G 0 disk
└─xvdf1 202:81 0 500G 0 part /
xvdg 202:96 0 1.5T 0 disk
└─xvdg1 202:97 0 1.5T 0 part /
ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 11 Oct 20 20:48 43c07df6-e944-4b25-8fd1-5ff848b584b2 -> ../../xvdg1
lrwxrwxrwx 1 root root 11 Oct 20 20:48 a0161cdc-2c25-4d18-9f01-a75c6df54ccd -> ../../xvdf1
Also "sudo mount /dev/xvdg1" doesn't help as well. Could you enlighten? Thanks!
If the two disks are cloned and have the same UUID, they cannot be mounted at the same time; while booting, the system will mount the first partition it finds.
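To confirm the clash before changing anything, you can compare the two partitions' UUIDs directly; a minimal check, assuming the device names from your lsblk output:
sudo blkid /dev/xvdf1 /dev/xvdg1   # cloned partitions will report the same UUID (and LABEL)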
Generate a new UUID for your disk
uuidgen
Running this will give you a new UUID; as the name implies, it will be unique.
Apply the new UUID to your disk
In your case xvdf is not mounted, so you can change its UUID:
tune2fs /dev/xvdf1 -U <the uuid generated before>
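For reference, a sketch that generates and applies the UUID in one step, then verifies it, assuming the 500 GB partition is /dev/xvdf1 as in your lsblk output:
sudo tune2fs -U "$(uuidgen)" /dev/xvdf1   # assign a freshly generated UUID
ls -l /dev/disk/by-uuid                   # both partitions should now show distinct UUIDs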
Change your mount point
It's getting better; however, since they were cloned, both disks have the same mount point, and that is not possible. You need to update your file system table (/etc/fstab).
Create a new folder which will be your mount point
sudo mkdir /new_drive
Mount the drive at your new mount point
sudo mount /dev/xvdg1 /new_drive
Update /etc/fstab so it will be mounted correctly on the next reboot
Update the line for the /dev/xvdg1 drive; you have something like
/dev/xvdg1 / ext4 ........
Change the 2nd column
/dev/xvdg1 /new_drive ext4 ........
All your data from the 1.5 TB volume will be accessible in /new_drive. You can verify that the fstab file is correct by running mount -a.
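Since both clones also share the same LABEL=/ (which is what your current fstab matches on), a safer sketch is to reference the partition by UUID, taking the value for xvdg1 from your /dev/disk/by-uuid listing above; leave or adjust the root entry for whichever partition you want mounted at /:
UUID=43c07df6-e944-4b25-8fd1-5ff848b584b2 /new_drive ext4 defaults,noatime 0 2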
Please adapt this procedure if you want to change the folder name, or if you want to keep the 1.5 TB as the root mount point and change the 500 GB drive instead.
Related
I am running an AWS AMI on a t2.large instance in US East. I was trying to upload some data and I ran in the terminal:
df -h
and I got this result:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 799M 8.6M 790M 2% /run
/dev/xvda1 9.7G 9.6G 32M 100% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 799M 0 799M 0% /run/user/1000
I know I have not uploaded 9.7 GB of data to the instance, but I don't know what /dev/xvda1 is or how to access it.
I also assume that all the tmpfs entries are temporary files; how can I erase those?
Answering some of the questions in the comments, I ran:
sudo du -sh /*
And I got:
16M /bin
124M /boot
0 /dev
6.5M /etc
2.7G /home
0 /initrd.img
0 /initrd.img.old
4.0K /jupyterhub_cookie_secret
16K /jupyterhub.sqlite
268M /lib
4.0K /lib64
16K /lost+found
4.0K /media
4.0K /mnt
562M /opt
du: cannot access '/proc/15616/task/15616/fd/4': No such file or directory
du: cannot access '/proc/15616/task/15616/fdinfo/4': No such file or directory
du: cannot access '/proc/15616/fd/4': No such file or directory
du: cannot access '/proc/15616/fdinfo/4': No such file or directory
0 /proc
28K /root
8.6M /run
14M /sbin
8.0K /snap
8.0K /srv
0 /sys
64K /tmp
4.7G /usr
1.5G /var
0 /vmlinuz
0 /vmlinuz.old
When you run out of root filesystem space, and aren't doing anything that you know consumes space, then 99% of the time (+/- 98%) it's a logfile. Run this:
sudo du -s /var/log/* | sort -n
You'll see a listing of all of the sub-directories in /var/log (which is the standard logging destination for Linux systems), and at the end you'll probably see an entry with a very large number next to it. If you don't see anything there, then the next place to try is /tmp (which I'd do with du -sh /tmp since it prints a single number with "human" scaling). And if that doesn't work, then you need to run the original command on the root of the filesystem, /* (that may take some time).
Assuming that it is a logfile, then you should take a look at it to see if there's an error in the related application. If not, you may just need to learn about logrotate.
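If the application is healthy and its log simply grows without bound, a minimal logrotate drop-in is usually enough; a sketch, assuming a hypothetical log at /var/log/myapp.log:
sudo tee /etc/logrotate.d/myapp <<'EOF'
# rotate weekly, keep four compressed copies, tolerate a missing or empty log
/var/log/myapp.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
EOF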
/dev/xvda1 is your root volume. The AMI you listed has a default root volume size of 20GB as you can see here:
Describe the image and get its block device mappings:
aws ec2 describe-images --image-ids ami-3b0c205e --region us-east-2 | jq .Images[].BlockDeviceMappings[]
Look at the volume size
{
  "DeviceName": "/dev/sda1",
  "Ebs": {
    "Encrypted": false,
    "DeleteOnTermination": true,
    "VolumeType": "gp2",
    "VolumeSize": 20,
    "SnapshotId": "snap-03341b1ff8ee47eaa"
  }
}
{
  "DeviceName": "/dev/sdb",
  "VirtualName": "ephemeral0"
}
{
  "DeviceName": "/dev/sdc",
  "VirtualName": "ephemeral1"
}
When launched with the correct volume size of 20 GB, there is plenty of free space (about 10 GB):
root@ip-10-100-0-64:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 488M 0 488M 0% /dev
tmpfs 100M 3.1M 97M 4% /run
/dev/xvda1 20G 9.3G 11G 49% /
tmpfs 496M 0 496M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 496M 0 496M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user/1000
It appears the issue here is that the instance was launched with 10 GB of storage (somehow; I didn't think this was possible) instead of the default 20 GB.
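If the instance really did come up with a 10 GB root volume, it can be grown in place instead of relaunching; a hedged sketch with the AWS CLI plus standard tools, assuming an ext4 root filesystem (the volume ID below is a placeholder):
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 20   # grow the EBS volume
# then, on the instance, grow the partition and the filesystem:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1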
/dev/xvda1 is your disk-based storage on the Amazon storage system (EBS).
It is the only storage your system has, and it contains your operating system and all data, so I guess most of the space is used by the Ubuntu installation.
Remember: T instances at Amazon don't have any local disk at all.
I used a public snapshot.
I created one volume with 15G and another with 25G, both from the same snapshot. However, after mounting, df shows both at 8G and full. lsblk shows the block devices with 15G and 25G. Do I need to give an extra argument when mounting?
How can I mount read/write?
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
..
xvdf 202:80 0 25G 0 disk /data
xvdg 202:96 0 15G 0 disk /data2
df
Filesystem 1K-blocks Used Available Use% Mounted on
..
/dev/xvdf 8869442 8869442 0 100% /data
/dev/xvdg 8869442 8869442 0 100% /data2
mount
..
/dev/xvdf on /data type iso9660 (ro,relatime)
/dev/xvdg on /data2 type iso9660 (ro,relatime)
Probably your volume's raw capacity is larger than your filesystem size, so, as @avihoo-mamca suggested, you need to extend your filesystem to the volume size, using resize2fs.
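A minimal sketch of that, assuming the filesystem on the volume is ext2/3/4 (resize2fs only handles those) and using /dev/xvdf from the lsblk output above:
sudo umount /data          # resize offline to be safe
sudo e2fsck -f /dev/xvdf   # resize2fs wants a fresh filesystem check
sudo resize2fs /dev/xvdf   # grow the filesystem to fill the 25G volume
sudo mount /dev/xvdf /data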
I have an EC2 instance with 2 EBS volumes (each 1000 GB), and I want to shrink them to the proper size. The question is what this "proper size" is.
Here are the locations of the volumes shown to me on the AWS page:
When I log in to the machine and check the amount of space used, I get:
[ec2-user@ip-xx-xx-xxx-xxx ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 1008G 194G 814G 20% /
devtmpfs 30G 60K 30G 1% /dev
tmpfs 31G 0 31G 0% /dev/shm
If I go inside /dev and do ls, I see (among lots of other things):
lrwxrwxrwx 1 root root 5 Aug 20 00:50 root -> xvda1
lrwxrwxrwx 1 root root 4 Aug 20 00:50 sda -> xvda
lrwxrwxrwx 1 root root 5 Aug 20 00:50 sda1 -> xvda1
lrwxrwxrwx 1 root root 4 Aug 20 00:49 sdf -> xvdf
First, why do we have xvda, xvda1, and xvdf inside /dev (and why sda and sda1)? The console only talks about sdf and xvda.
Second, in the output of df -h, /dev/xvda1 seems to be only one of the EBS volumes (right?). If so, how can I get the disk usage for the other EBS volume?
Update: More info:
[ec2-user@ip-10-144-183-22 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 1T 0 disk
└─xvda1 202:1 0 1024G 0 part /
xvdf 202:80 0 1T 0 disk
[ec2-user@ip-10-144-183-22 ~]$ sudo fdisk -l | grep Disk
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/xvda: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Disk label type: gpt
Disk /dev/xvdf: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
These are virtual device names. The operating system is responsible for naming the drives. For example, the public CentOS AMI might show /dev/xvdc for sdb, which is the primary drive.
You attached sdf, and it shows as /dev/xvdf. It's not showing in df -h likely because it's not formatted and mounted. fdisk -l will likely show it. Format it and mount it, and df -h will show it. If it is already formatted, it's probably just not mounted.
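A sketch of that; note that mkfs wipes the volume, so skip that step if it already holds data you need:
sudo file -s /dev/xvdf        # prints just "data" if there is no filesystem yet
sudo mkfs -t ext4 /dev/xvdf   # only if the volume is empty
sudo mkdir -p /data
sudo mount /dev/xvdf /data
df -h                         # /dev/xvdf should now appear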
I am using the official Fedora EC2 cloud AMI for some development work. By default, the root device on these machines is only 2 GB, regardless of which instance type I use. After installing some stuff, it always runs out of space and yum starts to complain.
[fedora#ip-10-75-10-113 ~]$ df -ah
Filesystem Size Used Avail Use% Mounted on
rootfs 2,0G 1,6G 382M 81% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
devtmpfs 1,9G 0 1,9G 0% /dev
securityfs 0 0 0 - /sys/kernel/security
selinuxfs 0 0 0 - /sys/fs/selinux
tmpfs 1,9G 0 1,9G 0% /dev/shm
devpts 0 0 0 - /dev/pts
tmpfs 1,9G 172K 1,9G 1% /run
tmpfs 1,9G 0 1,9G 0% /sys/fs/cgroup
I don't have this problem with other EC2 images, such as CentOS or Ubuntu. Is there any way I can start the AMI with a single, unpartitioned disk? Or am I somehow using it in the wrong way?
When you create the instance, just give yourself a larger EBS volume. I think the default is 5 GB (could be very wrong); just increase it. You can always create an AMI from your current instance and launch from that AMI with a larger EBS volume if you don't want to duplicate any work you've already done.
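If you launch from the CLI, you can set the root volume size at launch time with a block device mapping; a hedged sketch in which the AMI ID, key name, and root device name (/dev/sda1 here) are placeholders to check against your AMI:
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --key-name my-key \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":20,"VolumeType":"gp2"}}]'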
I've started an instance with a -d at the end of the machine type (this should have scratch disks).
But on boot, the stated disk space is not visible.
It should be:
8 vCPUs, 52 GB RAM, 2 scratch disks (1770 GB, 1770 GB)
But df -h outputs:
Filesystem Size Used Avail Use% Mounted on
rootfs 10G 644M 8.9G 7% /
/dev/root 10G 644M 8.9G 7% /
none 1.9G 0 1.9G 0% /dev
tmpfs 377M 116K 377M 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 753M 0 753M 0% /run/shm
So how does one run an instance that boots from a persistent disk and also has the scratch disks available?
The thing is that I need high CPU and lots of scratch space.
df does not show scratch disks because they are not formatted and mounted. Issue the following command:
ls -l /dev/disk/by-id/
In the output there will be something like:
lrwxrwxrwx 1 root root ... scsi-0Google_EphemeralDisk_ephemeral-disk-0 -> ../../sdb
lrwxrwxrwx 1 root root ... scsi-0Google_EphemeralDisk_ephemeral-disk-1 -> ../../sdc
Then, you can use mkfs and mount the appropriate disks.
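A minimal sketch for the first scratch disk, reusing the by-id path from the listing above (the mount point name is just an example):
sudo mkfs.ext4 -F /dev/disk/by-id/scsi-0Google_EphemeralDisk_ephemeral-disk-0
sudo mkdir -p /mnt/scratch0
sudo mount /dev/disk/by-id/scsi-0Google_EphemeralDisk_ephemeral-disk-0 /mnt/scratch0
df -h /mnt/scratch0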
See documentation for more info.