I'm running an Amazon EBS-backed small instance.
This is what my file system looks like:
root@ip-10-49-37-195:~# df --all
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 8256952 1310196 6527328 17% /
proc 0 0 0 - /proc
none 0 0 0 - /sys
fusectl 0 0 0 - /sys/fs/fuse/connections
none 0 0 0 - /sys/kernel/debug
none 0 0 0 - /sys/kernel/security
none 847852 116 847736 1% /dev
none 0 0 0 - /dev/pts
none 852852 0 852852 0% /dev/shm
none 852852 60 852792 1% /var/run
none 852852 0 852852 0% /var/lock
/dev/sda2 153899044 192068 145889352 1% /mnt
I have the following questions:
Amazon says that a small instance gives you 160GB of disk. It looks like /mnt is exactly that declared space. Then why don't I see that disk in the AWS Management Console, but only a small (8GB) disk mounted at the root?
What will happen to my data in /mnt and in the root volume if I terminate/stop the instance?
Answering my own question:
1. The 160GB of disk is an instance store (ephemeral) disk, which is lost on termination or on any hardware failure. So you should consider using another EBS volume if you don't want to lose your data (see the sketch after this list).
Why not use the 8GB EBS root device (attached by default to every EBS-backed Amazon instance) for storing data (e.g. databases)? Because EBS volumes attached at launch are, by default, also deleted on termination. So everything you save in /mnt or on the root device will not survive termination or a hardware failure.
There is a trick: it looks like if you detach /mnt (aka /dev/sda2) and then attach it back, it will not be deleted on instance termination, because it will then be marked as attached after launch.
2. It will be removed.
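For reference, a minimal sketch of attaching and mounting an extra EBS volume for persistent data, assuming the AWS CLI is configured; the size, availability zone, IDs and device names below are placeholders:
aws ec2 create-volume --size 100 --availability-zone us-east-1a --volume-type gp2
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdh
# on the instance, once the device appears (often as /dev/xvdh):
sudo mkfs.ext4 /dev/xvdh                  # format the new volume (first time only)
sudo mkdir -p /data
sudo mount /dev/xvdh /data                # keep databases etc. here, not on the ephemeral /mnt
echo '/dev/xvdh /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab   # remount after reboot
A volume attached this way (after launch) keeps its data when the instance is terminated, unless you explicitly delete it.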
Related
I had to resize the boot disk of my Debian Linux VM from 10GB to 30GB because it was full. After doing just that and stopping/starting my instance, it has become unusable. I can't get in over SSH and I can't access my application. The last backups were from a month ago and we will lose A LOT of work if I don't get this working.
I have read pretty much everything on the internet about resizing disks and repartitioning tables, but nothing seems to work.
When running df -h I see:
Filesystem Size Used Avail Use% Mounted on
overlay 36G 30G 5.8G 84% /
tmpfs 64M 0 64M 0% /dev
tmpfs 848M 0 848M 0% /sys/fs/cgroup
/dev/sda1 36G 30G 5.8G 84% /root
/dev/sdb1 4.8G 11M 4.6G 1% /home
overlayfs 1.0M 128K 896K 13% /etc/ssh/keys
tmpfs 848M 744K 847M 1% /run/metrics
shm 64M 0 64M 0% /dev/shm
overlayfs 1.0M 128K 896K 13% /etc/ssh/ssh_host_dsa_key
tmpfs 848M 0 848M 0% /run/google/devshell
When running sudo lsblk I see:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 35.9G 0 part /var/lib/docker
├─sda2 8:2 0 16M 1 part
├─sda3 8:3 0 2G 1 part
├─sda4 8:4 0 16M 0 part
├─sda5 8:5 0 2G 0 part
├─sda6 8:6 0 512B 0 part
├─sda7 8:7 0 512B 0 part
├─sda8 8:8 0 16M 0 part
├─sda9 8:9 0 512B 0 part
├─sda10 8:10 0 512B 0 part
├─sda11 8:11 0 8M 0 part
└─sda12 8:12 0 32M 0 part
sdb 8:16 0 5G 0 disk
└─sdb1 8:17 0 5G 0 part /home
zram0 253:0 0 768M 0 disk [SWAP]
Before increasing the disk size I did try to add a second disk, and I even formatted and mounted it following the Google Cloud docs, then unmounted it again (so I edited the fstab and fstab.backup, etc.).
Nothing in the Google Cloud documentation about resizing disks / repartitioning tables worked for me. growpart, fdisk, resize2fs and many other Stack Overflow suggestions didn't work either.
When trying to connect through SSH I get the "Unable to connect on port 22" error described in the Google Cloud docs.
When creating a new Debian Linux instance with a new disk, everything works fine.
Anybody who can get this up and running for me without losing any data gets 100+ and a LOT OF LOVE.
I have tried to replicate your scenario, but I didn't run into any VM instance issues.
I created a VM instance with a 10 GB boot disk, then stopped it, increased the disk size to 30 GB and started the instance again. You mention that you can't SSH to the instance or access your application. After my test I could still SSH in and use the instance, so something in the procedure you followed must have corrupted the VM instance or the boot disk.
However, there is a workaround to recover the files from the instance that you can't SSH to. I have tested it and it worked for me (an equivalent gcloud sketch follows the steps):
Go to the Compute Engine page and then go to Images.
Click on [+] CREATE IMAGE.
Give that image a name and under Source select Disk.
Under Source disk, select the disk of the VM instance that you resized.
Click on Save. If the VM that owns the disk is running, you will get an error: either stop the VM instance first and repeat the same steps, or just tick the box Keep instance running (not recommended). (I would recommend stopping it first, as the error also suggests.)
After you save the newly created image, select it and click on + CREATE INSTANCE.
Give that instance a name and leave all of the settings as they are.
Under Boot disk, make sure that you see the 30 GB size you set earlier when increasing the disk size, and that the name is the name of the image you created.
Click Create and try to SSH to the newly created instance.
If all your files were preserved when you resized the disk, you should be able to access the latest versions you had before the VM got corrupted.
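The same recovery can also be scripted from Cloud Shell with the gcloud CLI. A rough sketch, assuming the broken VM is called broken-vm, its boot disk broken-disk, and everything lives in zone us-central1-a (all placeholder names):
gcloud compute instances stop broken-vm --zone=us-central1-a          # stop the VM that owns the disk
gcloud compute images create recovery-image --source-disk=broken-disk --source-disk-zone=us-central1-a
gcloud compute instances create recovered-vm --image=recovery-image --zone=us-central1-a
gcloud compute ssh recovered-vm --zone=us-central1-a                  # check that SSH works and the files are there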
UPDATE: 2nd WORKAROUND - ATTACH THE BOOT DISK AS A SECONDARY DISK TO ANOTHER VM INSTANCE
In order to attach the disk from the corrupted VM instance to a new GCE instance, follow these steps (a gcloud sketch of the console steps follows the list):
Go to Compute Engine > Snapshots and click + CREATE SNAPSHOT.
Under Source disk, select the disk of the corrupted VM. Create the snapshot.
Go to Compute Engine > Disks and click + CREATE DISK.
Under Source type go to Snapshot and under Source snapshot choose the snapshot you just created. Create the disk.
Go to Compute Engine > VM instances and click + CREATE INSTANCE.
Leave ALL the settings at their defaults. Under Firewall enable Allow HTTP traffic and Allow HTTPS traffic.
Click on Management, security, disks, networking, sole tenancy.
Click on the Disks tab.
Click on + Attach existing disk and under Disk choose your newly created disk. Create the new VM instance.
SSH into the VM and run $ sudo lsblk
Check the device name of the newly attached disk and its primary partition (it will likely be /dev/sdb1).
Create a directory to mount the disk to: $ sudo mkdir -p /mnt/disks/mount
Mount the disk on the newly created directory: $ sudo mount -o discard,defaults /dev/sdb1 /mnt/disks/mount
Then you should be able to access all the files from the disk. I have tested this myself and was able to recover the files from the old disk with this method.
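The console part of this workaround can be done from the command line as well. A rough gcloud sketch, again with placeholder names and zone:
gcloud compute disks snapshot broken-disk --snapshot-names=broken-disk-snap --zone=us-central1-a
gcloud compute disks create recovery-disk --source-snapshot=broken-disk-snap --zone=us-central1-a
gcloud compute instances create rescue-vm --zone=us-central1-a
gcloud compute instances attach-disk rescue-vm --disk=recovery-disk --zone=us-central1-a
# then run lsblk, mkdir and mount inside rescue-vm as described in the remaining steps above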
The df -h command returns:
[root@ip-SERVER_IP ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 5.5G 2.0G 74% /
tmpfs 32G 0 32G 0% /dev/shm
cm_processes 32G 0 32G 0% /var/run/cloudera-scm-agent/process
I have a volume with 500GB of disk space.
Now, I installed some stuff on /dev/xvda1 and it keeps saying:
The Cloudera Manager Agent's parcel directory is on a filesystem with less than 5.0 GiB of its space free. /opt/cloudera/parcels (free: 1.9 GiB (25.06%), capacity: 7.7 GiB)
Similarly:
The Cloudera Manager Agent's log directory is on a filesystem with less than 2.0 GiB of its space free. /var/log/cloudera-scm-agent (free: 1.9 GiB (25.06%), capacity: 7.7 GiB)
From the disk stats, I see that the filesystem the above stuff is installed on must be:
/dev/xvda1
I believe it needs to be resized to have more disk space, but I don't think I need to expand the volume itself; I have only installed a few things and just got started.
So I would like to know what exact steps I need to follow to expand the space in this partition, and where exactly my 500 GB is.
I would like step-by-step instructions, since I cannot afford to lose what I already have on the server. Please help.
lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 500G 0 disk
└─xvda1 202:1 0 8G 0 part /
This question should be asked on serverfault.com.
Don't "expand" the volume. It is not common practice in Linux to expand the root drive for a non-OS folder. Since your stuff is inside /opt/cloudera/parcels, what you need to do is allocate a new partition, mount it, then copy your data to it; an example is shown here: partitioning & moving (assume /home in that guide is your /opt/cloudera/parcels). A minimal sketch of the same approach follows the lsblk command below.
To check your disk volumes, try sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
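For illustration, a rough sketch of that partition-and-move approach, assuming the unallocated space on xvda becomes /dev/xvda2 (the start offset, device names and paths are assumptions - adapt them, stop the Cloudera services, and back up first):
sudo parted -s /dev/xvda mkpart primary ext4 9GiB 100%   # carve a new partition from the unused space, starting just past xvda1
sudo partprobe /dev/xvda                                 # make the kernel re-read the partition table
sudo mkfs.ext4 /dev/xvda2                                # create a filesystem on the new partition
sudo mkdir -p /mnt/parcels
sudo mount /dev/xvda2 /mnt/parcels
sudo rsync -a /opt/cloudera/parcels/ /mnt/parcels/       # copy the existing data, preserving permissions
sudo umount /mnt/parcels
sudo mount /dev/xvda2 /opt/cloudera/parcels              # remount the new partition over the original path
echo '/dev/xvda2 /opt/cloudera/parcels ext4 defaults 0 2' | sudo tee -a /etc/fstab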
(Update)
In fact, if you create your EC2 instance with an EBS volume, you don't need to allocate 500GB upfront. You can allocate a smaller volume and expand it later, following these steps: Expanding the Storage Space of an EBS Volume on Linux. You should create a snapshot of the EBS volume before performing those tasks. However, there is a catch with EBS disk IOPS: the less space you allocate, the fewer IOPS you get. So if you allocate 500GB, you will get 500 x 3 = 1500 IOPS max.
This AWS LVM link gives step-by-step instructions on how to increase the size of a Logical Volume on AWS EC2.
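As a rough illustration of that expand path with today's AWS CLI (the volume ID and target size are placeholders, and modify-volume only works on current-generation volumes; the partition and filesystem still have to be grown inside the instance afterwards):
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "backup before resize"
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 200   # grow the EBS volume to 200 GiB
# then, on the instance:
sudo growpart /dev/xvda 1    # extend partition 1 into the new space
sudo resize2fs /dev/xvda1    # grow the ext4 filesystem (use xfs_growfs / for XFS)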
My current 500GB Amazon EBS Cold HDD (sc1) volume at /dev/sdf is full. Following the tutorial here (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html#migrate-data-larger-volume), I successfully got a 1.5 TB sc1 volume, mounted at /dev/xvda, and attached it to the instance. Please note that the 500 GB sc1 (/dev/sdf) is also still attached to the instance.
Sadly, when I turned the instance on, I only see this new 1.5 TB sc1 at /dev/xvda, but not the old 500 GB sc1 at /dev/sdf and the corresponding data. When I do df -h:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdg1 1.5T 34G 1.5T 3% /
devtmpfs 7.9G 76K 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
If I turn off the instance, detach the 1.5 TB sc1 (/dev/xvda), keep the 500 GB sc1 (/dev/sdf) attached, and finally restart the instance, I see the 500 GB sc1 (/dev/sdf) and its data again.
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdf 500G 492G 8G 99% /
devtmpfs 7.9G 76K 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
Is there any way we can mount both of these volumes and see/transfer data between them on the same instance? Could any guru enlighten me? Thanks.
Responding to the comment:
The following is the result of lsblk when both the 500GB and the 1.5TB sc1 volumes are attached.
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part
xvdf 202:80 0 500G 0 disk
└─xvdf1 202:81 0 500G 0 part
xvdg 202:96 0 1.5T 0 disk
└─xvdg1 202:97 0 1.5T 0 part /
The following is the content of /etc/fstab when both the 500GB and the 1.5TB sc1 volumes are attached.
LABEL=/ / ext4 defaults,noatime 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
extra comments:
UUID results
ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 11 Oct 19 08:54 43c07df6-e944-4b25-8fd1-5ff848b584b2 -> ../../xvdg1
2016-10-21 update:
After trying the following
uuidgen
tune2fs /dev/xvdf1 -U <the uuid generated before>
making sure both volumes are attached to the instance, and restarting the instance, only the 500 GB volume shows up.
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdg1 493G 473G 20G 96% /
devtmpfs 7.9G 76K 7.9G 1% /dev
tmpfs 7.9G 0 7.9G 0% /dev/shm
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part
xvdf 202:80 0 500G 0 disk
└─xvdf1 202:81 0 500G 0 part /
xvdg 202:96 0 1.5T 0 disk
└─xvdg1 202:97 0 1.5T 0 part /
ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 11 Oct 20 20:48 43c07df6-e944-4b25-8fd1-5ff848b584b2 -> ../../xvdg1
lrwxrwxrwx 1 root root 11 Oct 20 20:48 a0161cdc-2c25-4d18-9f01-a75c6df54ccd -> ../../xvdf1
Also "sudo mount /dev/xvdg1" doesn't help as well. Could you enlighten? Thanks!
If the 2 disks are cloned and have the same UUID, they cannot be mounted at the same time, and while booting the system will mount the first partition it finds.
Generate a new UUID for your disk
uuidgen
Running this will give you a new UUID - as the name implies, it will be unique.
Apply the new UUID to your disk
In your case xvdf1 is not mounted, so you can change its UUID
tune2fs /dev/xvdf1 -U <the uuid generated before>
Change your mount point
It's getting better; however, as they were cloned, both disks have the same mount point, and that is not possible. You need to update your file system table (/etc/fstab).
Create a new folder which will be your mount point
mkdir /new_drive
Mount the drive at your new mount point
sudo mount /dev/xvdg1 /new_drive
Update /etc/fstab so it will be mounted correctly on next reboot
Update the line for the /dev/xvdg1 drive; you have something like
/dev/xvdg1 / ext4 ........
Change the 2nd column to
/dev/xvdg1 /new_drive ext4 ........
All your data from the 1.5 TB drive will be accessible in /new_drive. You can verify the entry is correct by running mount -a.
Please adapt this procedure if you want to change the folder name, or if you want to keep the 1.5 TB drive as the root mount point and change the 500 GB drive instead.
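Note that the fstab shown in the question mounts the root filesystem by LABEL=/, and a cloned filesystem usually carries the same label as well as the same UUID, so mounting both drives by UUID is the safer variant. A sketch of the relevant fstab lines, following the layout above (500 GB as root, 1.5 TB at /new_drive) and using the UUIDs from the ls -l /dev/disk/by-uuid output in the question:
# mount by UUID so the duplicated label cannot match both disks
UUID=a0161cdc-2c25-4d18-9f01-a75c6df54ccd  /           ext4  defaults,noatime  1 1   # xvdf1, the original 500 GB volume (new UUID)
UUID=43c07df6-e944-4b25-8fd1-5ff848b584b2  /new_drive  ext4  defaults,noatime  0 2   # xvdg1, the 1.5 TB clone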
I'm sure the problem arises from my misunderstanding of EC2 + EBS configuration, so the answer might be very simple.
I've created a RedHat EC2 instance on Amazon AWS with 30GB of EBS storage, but lsblk shows me that only 6GB of the total 30 is available to me:
xvda 202:0 0 30G 0 disk
└─xvda1 202:1 0 6G 0 part /
How can I make all of the remaining storage space available to my instance?
[UPDATE] Command output:
mount:
/dev/xvda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sudo fdisk -l /dev/xvda:
WARNING: GPT (GUID Partition Table) detected on '/dev/xvda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/xvda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/xvda1 1 1306 10485759+ ee GPT
resize2fs /dev/xvda1:
resize2fs 1.41.12 (17-May-2010)
The filesystem is already 1572864 blocks long. Nothing to do!
I believe you are experiencing an issue that seems specific to EC2 and RHEL*, where the partition won't extend using the standard tools.
If you follow the instructions in this previous answer you should be able to extend the partition to use the full space. Follow the instructions particularly carefully when expanding the root partition!
unable to resize root partition on EC2 centos
If you update your question with the output of fdisk -l /dev/xvda and mount, that should help provide extra information if the following isn't suitable:
I would assume that you could either re-partition xvda to provision the space for another mount point (/var or /home, for example) or grow your current root partition into the extra space available - you can follow this guide here to do this. A minimal sketch of the grow-in-place route is given below.
Obviously be sure to back up any data you have on there; this is potentially destructive!
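A rough sketch of that grow-in-place route for a GPT-labelled disk like this one (package names assume RHEL/CentOS; snapshot the EBS volume before touching the partition table):
sudo yum install -y cloud-utils-growpart gdisk   # growpart needs sgdisk to handle the GPT label
sudo growpart /dev/xvda 1                        # extend partition 1 to fill the 30G disk
sudo resize2fs /dev/xvda1                        # grow the ext4 filesystem online
df -h /                                          # confirm the new size (reboot if the kernel has not picked up the change)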
[Update - how to use parted]
The following link will talk you through using GNU Parted to create a partition. You will essentially just need to create a new partition, temporarily mount it to a directory such as /mnt/newhome, copy across all of the current contents of /home (recursively as root, keeping permissions, with cp -rp /home/* /mnt/newhome), then rename the current /home to /homeold and make sure you have the correct entry in fstab (assuming your new partition is /dev/xvda2); a compact sketch of the commands follows the fstab line below:
/dev/xvda2 /home ext4 noatime,errors=remount-ro 0 1
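A compact sketch of that sequence, assuming the new partition ends up as /dev/xvda2 and starts just past the existing 6G root partition (device names, the start offset and mount options are assumptions; back up first):
sudo parted -s /dev/xvda mkpart primary ext4 7GiB 100%   # create the new partition in the free space
sudo partprobe /dev/xvda                                 # re-read the partition table
sudo mkfs.ext4 /dev/xvda2
sudo mkdir /mnt/newhome
sudo mount /dev/xvda2 /mnt/newhome
sudo cp -rp /home/* /mnt/newhome/                        # copy, preserving permissions
sudo umount /mnt/newhome && sudo mv /home /homeold && sudo mkdir /home
echo '/dev/xvda2 /home ext4 noatime,errors=remount-ro 0 1' | sudo tee -a /etc/fstab
sudo mount /home                                         # mounts via the new fstab entry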
I am using the official Fedora EC2 cloud AMI for some development work. By default, the root device on these machines is only 2GB, regardless of which instance type I use. After installing a few things, it always runs out of space and yum starts to complain.
[fedora#ip-10-75-10-113 ~]$ df -ah
Filesystem Size Used Avail Use% Mounted on
rootfs 2,0G 1,6G 382M 81% /
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
devtmpfs 1,9G 0 1,9G 0% /dev
securityfs 0 0 0 - /sys/kernel/security
selinuxfs 0 0 0 - /sys/fs/selinux
tmpfs 1,9G 0 1,9G 0% /dev/shm
devpts 0 0 0 - /dev/pts
tmpfs 1,9G 172K 1,9G 1% /run
tmpfs 1,9G 0 1,9G 0% /sys/fs/cgroup
I don't have this problem with other EC2 images, such as CentOS or Ubuntu. Is there any way I can start the AMI with a single, unpartitioned disk? Or am I somehow using it in the wrong way?
When you create the instance, just give yourself a larger EBS volume. I think the default is 5GB (I could be very wrong). Just increase it. You can always create an AMI from your current instance and launch from that AMI with a larger EBS volume if you don't want to duplicate any work you've already done.
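For example, a sketch of launching with a larger root volume from the AWS CLI (the AMI ID, device name, instance type and size are placeholders - the storage step of the launch wizard does the same thing in the console):
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":20,"VolumeType":"gp2"}}]'
# if the root filesystem does not auto-grow on first boot:
sudo growpart /dev/xvda 1 && sudo resize2fs /dev/xvda1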