Growing Amazon EBS Volume sizes [closed] - amazon-web-services

I'm quite impressed with Amazon's EC2 and EBS services. I wanted to know if it is possible to grow an EBS Volume.
For example: If I have a 50 GB volume and I start to run out of space, can I bump it up to 100 GB when required?

You can grow the storage, but it can't be done on the fly. You'll need to take a snapshot of the current volume, create a new, larger volume from that snapshot, and attach it in place of the old one.
There's a simple walkthrough here based on using Amazon's EC2 command line tools
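A minimal sketch of that workflow with the classic EC2 command line tools might look like the following (all IDs, the availability zone and the device name are placeholders):
ec2-create-snapshot vol-xxxxxxxx                                # snapshot the existing volume
ec2-create-volume -s 100 --snapshot snap-xxxxxxxx -z us-east-1a # new, larger volume from the snapshot
ec2-detach-volume vol-xxxxxxxx                                  # detach the old volume
ec2-attach-volume vol-yyyyyyyy -i i-xxxxxxxx -d /dev/sdf        # attach the new one in its place
After that you still need to grow the partition and filesystem from inside the instance (see the answers below).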

You can't simply 'bump up' more space on the fly when you need it, but you can resize by going through a snapshot.
Steps to do this (a minimal command sketch follows the quoted AWS text below):
unmount the EBS volume
create an EBS snapshot
create a new volume with more space from the snapshot
recreate the partition table and resize the filesystem
mount the new EBS volume
Look at http://aws.amazon.com/ebs/ - EBS Snapshot:
Snapshots can also be used to instantiate multiple new volumes, expand the size of a volume or move volumes across Availability Zones. When a new volume is created, there is the option to create it based on an existing Amazon S3 snapshot. In that scenario, the new volume begins as an exact replica of the original volume. By optionally specifying a different volume size or a different Availability Zone, this functionality can be used as a way to increase the size of an existing volume or to create duplicate volumes in new Availability Zones. If you choose to use snapshots to resize your volume, you need to be sure your file system or application supports resizing a device.
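A minimal shell sketch of those steps, assuming the volume holds an ext4 filesystem on /dev/xvdf1 mounted at /mnt/data (device names and mount point are placeholders):
sudo umount /mnt/data              # unmount the EBS volume
# snapshot the volume, create a larger volume from the snapshot, and swap the
# attachment (via the console or the EC2 command line tools), then:
sudo fdisk /dev/xvdf               # interactively recreate the partition so it spans the larger device (detailed below)
sudo resize2fs /dev/xvdf1          # grow the ext filesystem to fill the partition
sudo mount /dev/xvdf1 /mnt/data    # mount the resized volume again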

I followed all the answers; with all respect, each has something missing.
If you follow these steps you can grow your EBS volume and keep your data (this is not for the root volume). For simplicity I suggest using the AWS console to create the snapshot, ...; you can do all of that with the AWS command line tools too.
We are not touching the root volume here.
Go to your AWS console:
Shut down your instance (it will only be down for a few minutes)
Detach the volume you are planning to grow (say /dev/xvdf)
Create a snapshot of the volume.
Make a new volume with a larger size using the snapshot you just created
Attach the new volume to your instance
Start your instance
SSH to your instance:
$ sudo fdisk -l
This gives you something like:
Disk /dev/xvdf: 21.5 GB, 21474836480 bytes
12 heads, 7 sectors/track, 499321 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd3a8abe4
Device Boot Start End Blocks Id System
/dev/xvdf1 2048 41943039 20970496 83 Linux
Write down Start and Id values. (in this case 2048 and 83)
Using fdisk, delete the partition xvdf1 and create a new one that starts at exactly the same block (2048). We will give it the same Id (83):
$ sudo fdisk /dev/xvdf
Command (m for help): d
Selected partition 1
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 83
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
This step is explained well here: http://litwol.com/content/fdisk-resizegrow-physical-partition-without-losing-data-linodecom
Almost done, we just have to mount the volume and run resize2fs:
Mount the ebs volume: (mine is at /mnt/ebs1)
$ sudo mount /dev/xvdf1 /mnt/ebs1
and resize it:
$ sudo resize2fs -p /dev/xvdf1
resize2fs 1.42 (29-Nov-2011)
Filesystem at /dev/xvdf1 is mounted on /mnt/ebs1; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/xvdf1 to 5242624 (4k) blocks.
The filesystem on /dev/xvdf1 is now 5242624 blocks long.
ubuntu@ip-xxxxxxx:~$
Done! Use df -h to verify the new size.

As long as you are okay with a few minutes of downtime, Eric Hammond has written a good article on resizing the root disk of a running EBS instance: http://alestic.com/2010/02/ec2-resize-running-ebs-root

All great recommendations, and I thought I'd add this article I found, which relates to expanding a Windows Amazon EC2 EBS volume using the Amazon web UI tools to perform the necessary changes. If you're not comfortable using the CLI, this will make your upgrade much easier.
http://www.tekgoblin.com/2012/08/27/aws-guides-how-to-resize-a-ec2-windows-ebs-volume/
Thanks to TekGoblin for posting this article.

You can now do this through the AWS Management Console. The process is the same as in the other answers but you no longer need to go to the command line.

BTW: As with physical disks, it can be handy to use LVM; for example:
http://www.davelachapelle.ca/guides/ubuntu-lvm-guide/
http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/
Big advantage: It allows adding (or removing) space dynamically.
It can also easily be moved between/among instances.
Caveats:
it must be configured ahead of time
a simple JBOD setup means you lose everything if you lose one "disk"
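A rough sketch of growing an LVM-backed filesystem by adding a freshly attached EBS volume (the device, volume group and logical volume names are placeholders):
sudo pvcreate /dev/xvdg                           # initialise the new EBS volume as a physical volume
sudo vgextend vg_data /dev/xvdg                   # add it to the existing volume group
sudo lvextend -l +100%FREE /dev/vg_data/lv_data   # grow the logical volume into the new space
sudo resize2fs /dev/vg_data/lv_data               # grow the ext filesystem, online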

My steps:
stop the instance
find the ebs volume attached to the instance and create a snapshot of it
create a new volume with more disk space from the above snapshot. Unfortunately the UI in the AWS console is almost unusable for this because it lists every snapshot on AWS; using the command line tool is a lot easier, like this:
ec2-create-volume -s 100 --snapshot snap-a31fage -z us-east-1c
detach the existing ebs (smaller) volume from the instance
attach the new (bigger) volume to the instance, making sure to attach it to the same device the instance is expecting (in my case /dev/sda1)
start the instance
You are done!
Other than step 3 above, you can do everything using the aws management console.
Also NOTE, as mentioned here:
https://serverfault.com/questions/365605/how-do-i-access-the-attached-volume-in-amazon-ec2
the device on your EC2 instance might be /dev/xv* while the AWS web console tells you it's /dev/s*.
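A quick, hypothetical check of the actual device naming from inside the instance:
lsblk     # a volume attached in the console as /dev/sdf may show up here as /dev/xvdf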

Use the "diskpart" command for Windows; have a look here: http://support.microsoft.com/kb/300415
Following are the steps I followed for a non-root disk (a basic, not dynamic, disk).
Once you have taken a snapshot, dismounted the old EBS volume (say 600 GB), created a larger EBS volume (say 1 TB) and mounted this new EBS volume, you have to let Windows know about the resizing (from 600 GB to 1 TB), so at a command prompt (run as administrator):
diskpart.exe
select disk=9
select volume=Z
extend
[My disk 9, volume labelled Z, was a 1 TB volume created from an EC2 snapshot of a 600 GB volume - I wanted to resize 600 GB to 1 TB and so could follow the above steps to do this.]

I highly recommend Logical Volume Manager (LVM) for all EBS volumes, if your operating system supports it. Linux distributions generally do. It's great for several reasons.
Resizing and moving of logical volumes can be done live, so instead of the whole offline snapshot procedure, which requires downtime, you could just create another, larger EBS volume, add it to the LVM pool as a physical volume (PV), move the logical volume (LV) onto it, remove the old physical volume from the pool, and delete the old EBS volume. Then you simply resize the logical volume and resize the filesystem on it. This requires no downtime at all!
It abstracts your storage from your 'physical' devices. Moving partitions across devices without needing downtime or changes to mountpoints/fstab is very handy.
It would be nice if Amazon would make it possible to resize EBS volumes on-the-fly, but with LVM it's not that necessary.
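A minimal sketch of that live migration, assuming a volume group vg0 with logical volume lv0 currently on /dev/xvdf, and a new, larger EBS volume attached as /dev/xvdg (all names are placeholders):
sudo pvcreate /dev/xvdg                   # prepare the new, larger EBS volume
sudo vgextend vg0 /dev/xvdg               # add it to the volume group
sudo pvmove /dev/xvdf /dev/xvdg           # move all extents off the old volume, live
sudo vgreduce vg0 /dev/xvdf               # drop the old volume from the group
sudo pvremove /dev/xvdf                   # wipe its LVM label; the old EBS volume can now be deleted
sudo lvextend -l +100%FREE /dev/vg0/lv0   # grow the logical volume into the new space
sudo resize2fs /dev/vg0/lv0               # grow the ext filesystem online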

If your root volume is an XFS file system, then run this command: xfs_growfs /

Related

How to find which EBS volume is storing the data

I have created an EC2 instance and attached 3 EBS volumes to it. I created an LVM on the 3 EBS volumes and mounted it at /var/lib/pgsql, then installed PostgreSQL 8.4 from source, configured it to use /var/lib/pgsql/data/data1 and created a sample table on it; the data is being stored in the LVM volume (/dev/mapper/vg_ebs-lv_ebs). I want to check on which EBS volumes the data is being stored, so that I can remove the remaining unused volumes in such a way that it does not affect the PostgreSQL database.
I have attached the photos of the work.
I tried to check the details with pvdisplay, lvdisplay and vgdisplay, and tried filling the volume to 100%, but still could not identify which EBS volumes the data is stored on.
I want to know whether 1) the data is stored across all the volumes, 2) in the root volume, or 3) in some specific volume.
Postgres data is in /var/lib/pgsql which falls under lv_ebs logical volume in vg_ebs volume group.
To provide space for this volume group, you have three physical volumes: /dev/sdf, /dev/sdg and /dev/sdh.
In general, LVM decides which data from a logical volume to put on which physical volume. Allocation is done by means of extents, which are non-overlapping, contiguous ranges of bytes, each having a starting offset and a length (similar to partitions). You can get the information about extent allocation of LVs with
lvs -o +seg_pe_ranges
In your case all the space in PVs has been allocated to your LV, so you cannot just remove PV from VG. What you need to do is:
Unmount /var/lib/pgsql
Use resize2fs to shrink your filesystem by at least the size of the PV that you want to remove
Use lvreduce to shrink the size of LV to the reduced size of filesystem
Use pvmove to move any active physical segments off of the PV being removed
Use vgreduce to remove now empty PV from VG
Don't forget to back up your data first.
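A rough command sketch of that shrink-and-remove sequence, assuming you want to free /dev/sdh and the filesystem is ext4 (the sizes and names are placeholders; back up first):
sudo umount /var/lib/pgsql                 # the filesystem must be offline to shrink it
sudo e2fsck -f /dev/vg_ebs/lv_ebs          # required before shrinking an ext filesystem
sudo resize2fs /dev/vg_ebs/lv_ebs 200G     # shrink the filesystem below the intended LV size
sudo lvreduce -L 210G /dev/vg_ebs/lv_ebs   # shrink the LV, leaving a safety margin above the filesystem
sudo pvmove /dev/sdh                       # move any remaining extents off the PV being removed
sudo vgreduce vg_ebs /dev/sdh              # remove the now-empty PV from the volume group
sudo mount /dev/vg_ebs/lv_ebs /var/lib/pgsql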

Google engine compute instance - size of instance disk vs snapshot

My Google Compute Engine instance disk size is dramatically different from the size of the snapshot created from that instance: my used disk space is under 40 GB, while the snapshot size is 123 GB. My instance runs Ubuntu 16.04.
Here is the output of the df command (screenshot omitted).
Here is the size of the snapshot created from this instance (screenshot omitted).
I expected them to be of approximately the same size. Do you need to clear the trash bin before creating a snapshot, or do something else?
Google Cloud snapshot size will change depending on the changes made to the data, since only the first snapshot is a full disk snapshot and all subsequent ones are differential.
This means that, depending on what has happened on the disk, one snapshot could contain only 1 GB of changes while the next could be 200 GB for the same 1 TB disk.
Google maintains consistency between snapshots; you can find more information here: https://cloud.google.com/compute/docs/disks/restore-and-delete-snapshots#deleting_snapshot

Launch Instance with smaller root volume storage

I want to launch an instance with a custom root volume size on Amazon EC2.
At step 4 - Add Storage, the root volume comes with a default snapshot and a default size of 10 GiB. I then lowered the size to 5 GiB.
But at the final step, it wouldn't allow me to launch the instance with only a 5 GiB root volume.
Any idea or solution so that I can lower the size when launching an instance?
You cannot create an Amazon EBS volume that is smaller than the snapshot that you want loaded.
You could attempt to make your own AMI by launching with 10 GB, attaching a 5 GB volume, copying files across, turning it into an AMI, etc., but frankly it isn't worth the bother.
If you are merely wishing to save money, then at 10¢ per GB per month it would only save 50¢ per month.

How to reduce root EBS volume size of an EC2 instance?

I have my website running on an EC2 instance with one 50 GB EBS volume attached as the root volume. I need to reduce the size of this EBS volume from 50 GB to 20 GB. I tried several approaches, such as creating an image and launching a new instance with a 20 GB EBS volume, but the launch fails with a message saying that the volume size can only be increased from the current size (i.e. 50 GB) and cannot be reduced.
Is there any workaround for this? I have data on the root volume, but I do not need the whole 50 GB of space. Please assist.
Thank you,
You would need to create a new volume and copy everything across, making sure that the volume remains bootable.
Then: Stop instance, detach volume, attach new volume, boot.
At 10c/GB/month, the reduction of 30GB will save you $3/month. It isn't worth your time.
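If you do want to attempt it anyway, a rough sketch of the copy step, assuming the old root volume is attached to a helper instance as /dev/xvdf1 and the new, smaller volume as /dev/xvdg (device names are placeholders; making the copy bootable - partition table, bootloader, fstab UUIDs - depends on the AMI and is not shown here):
sudo mkfs.ext4 /dev/xvdg              # create a filesystem on the new, smaller volume
sudo mkdir -p /mnt/old /mnt/new
sudo mount /dev/xvdf1 /mnt/old
sudo mount /dev/xvdg /mnt/new
sudo rsync -aAXv /mnt/old/ /mnt/new/  # copy everything across, preserving permissions and attributes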

Mounting a NVME disk on AWS EC2

So I created i3.large instances with an NVMe disk on each node. Here was my process:
lsblk -> nvme0n1 (check that the NVMe disk isn't mounted yet)
sudo mkfs.ext4 -E nodiscard /dev/nvme0n1
sudo mount -o discard /dev/nvme0n1 /mnt/my-data
/dev/nvme0n1 /mnt/my-data ext4 defaults,nofail,discard 0 2
sudo mount -a (check if everything is OK)
sudo reboot
So all of this works, and I can connect back to the instance. I have 500 GiB on my new partition.
But after I stop and restart the EC2 machines, some of them randomly become inaccessible (AWS warning: only 1/2 status checks passed).
When I look at the logs for why they are inaccessible, it points at the NVMe partition (but I did sudo mount -a to check that this was OK, so I don't understand).
I don't have the exact AWS logs, but I got some lines of them:
Bad magic number in super-block while trying to open
then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
/dev/fd/9: line 2: plymouth: command not found
I have been using "c5" type instances for almost a month, mostly "c5d.4xlarge" with NVMe drives. So here's what has worked for me on Ubuntu instances:
First, find where the NVMe drive is located:
lsblk
Mine always showed up as nvme1n1. Then check whether it is an empty volume that doesn't have any file system (it usually doesn't, unless you are remounting); for an empty drive the output should be /dev/nvme1n1: data:
sudo file -s /dev/nvme1n1
Then do this to format it (if the last step showed that your drive already has a file system and isn't an empty drive, skip this and go to the next step):
sudo mkfs -t xfs /dev/nvme1n1
Then create a folder in current directory and mount the nvme drive:
sudo mkdir /data
sudo mount /dev/nvme1n1 /data
You can now verify its existence by running:
df -h
Stopping and starting an instance erases the ephemeral disks, moves the instance to new host hardware, and gives you new empty disks... so the ephemeral disks will always be blank after stop/start. When an instance is stopped, it doesn't exist on any physical host -- the resources are freed.
So, the best approach, if you are going to be stopping and starting instances, is not to add them to /etc/fstab but rather to just format them on first boot and mount them after that. One way of testing whether a filesystem is already present is using the file utility and grepping its output; if grep doesn't find a match, it returns false.
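A small sketch of that check as a first-boot snippet; the device /dev/nvme1n1 and the /data mount point are assumptions:
#!/bin/bash
# format the ephemeral NVMe disk only if it doesn't already carry a filesystem
DEV=/dev/nvme1n1
if ! sudo file -s "$DEV" | grep -q filesystem; then
    sudo mkfs -t xfs "$DEV"      # blank disk: create a filesystem
fi
sudo mkdir -p /data
sudo mount "$DEV" /data          # mount it without listing it in /etc/fstab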
The NVMe SSD on the i3 instance class is an example of an Instance Store Volume, also known as an Ephemeral [ Disk | Volume | Drive ]. They are physically inside the instance and extremely fast, but not redundant and not intended for persistent data... hence, "ephemeral." Persistent data needs to be on an Elastic Block Store (EBS) volume or an Elastic File System (EFS), both of which survive instance stop/start, hardware failures, and maintenance.
It isn't clear why your instances are failing to boot, but nofail may not be doing what you expect when a volume is present but has no filesystem. My impression has been that eventually it should succeed.
But, you may need to apt-get install linux-aws if running Ubuntu 16.04. Ubuntu 14.04 NVMe support is not really stable and not recommended.
Each of these three storage solutions has its advantages and disadvantages.
The Instance Store is local, so it's quite fast... but, it's ephemeral. It survives hard and soft reboots, but not stop/start cycles. If your instance suffers a hardware failure, or is scheduled for retirement, as eventually happens to all hardware, you will have to stop and start the instance to move it to new hardware. Reserved and dedicated instances don't change ephemeral disk behavior.
EBS is persistent, redundant storage that can be detached from one instance and moved to another (and this happens automatically across a stop/start). EBS supports point-in-time snapshots, and these are incremental at the block level, so you don't pay for storing the data that didn't change across snapshots... but through some excellent witchcraft, you also don't have to keep track of "full" vs. "incremental" snapshots -- the snapshots are only logical containers of pointers to the backed-up data blocks, so they are, in essence, all "full" snapshots, but only billed as incremental. When you delete a snapshot, only the blocks no longer needed to restore either that snapshot or any other snapshot are purged from the back-end storage system (which, transparently to you, actually uses Amazon S3).
EBS volumes are available as both SSD and spinning platter magnetic volumes, again with tradeoffs in cost, performance, and appropriate applications. See EBS Volume Types. EBS volumes mimic ordinary hard drives, except that their capacity can be manually increased on demand (but not decreased), and can be converted from one volume type to another without shutting down the system. EBS does all of the data migration on the fly, with a reduction in performance but no disruption. This is a relatively recent innovation.
EFS uses NFS, so you can mount an EFS filesystem on as many instances as you like, even across availability zones within one region. The size limit for any one file in EFS is 52 terabytes, and your instance will actually report 8 exabytes of free space. The actual free space is for all practical purposes unlimited, but EFS is also the most expensive -- if you did have a 52 TiB file stored there for one month, that storage would cost over $15,000. The most I ever stored was about 20 TiB for 2 weeks, cost me about $5k but if you need the space, the space is there. It's billed hourly, so if you stored the 52 TiB file for just a couple of hours and then deleted it, you'd pay maybe $50. The "Elastic" in EFS refers to the capacity and the price. You don't pre-provision space on EFS. You use what you need and delete what you don't, and the billable size is calculated hourly.
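For reference, mounting an EFS filesystem over NFS looks roughly like this (the filesystem ID, region and mount point are placeholders):
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs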
A discussion of storage wouldn't be complete without S3. It's not a filesystem, it's an object store. At about 1/10 the price of EFS, S3 also has effectively infinite capacity, and a maximum object size of 5TB. Some applications would be better designed using S3 objects, instead of files.
S3 can also be easily used by systems outside of AWS, whether in your data center or in another cloud. The other storage technologies are intended for use inside EC2, though there is an undocumented workaround that allows EFS to be used externally or across regions, with proxies and tunnels.
I just had a similar experience! My c5.xlarge instance detects an EBS volume as nvme1n1. I added this line to fstab:
/dev/nvme1n1 /data ext4 discard,defaults,nofail 0 2
After a couple of reboots it looked like it was working, and it kept running for weeks. But today I got an alert that the instance could not be connected to. I tried rebooting it from the AWS console with no luck; the culprit looks to be the fstab - the disk mount is failing.
I raised a ticket with AWS support, no feedback yet. I had to start a new instance to recover my service.
On another test instance I am trying to use the UUID (obtained with the blkid command) instead of /dev/nvme1n1. So far it still looks to be working... we will see if it causes any issue.
I will update here if I get any feedback from AWS support.
================ EDIT with my fix ===========
AWS hasn't given me feedback yet, but I found the issue. Actually, whether you mount /dev/nvme1n1 or the UUID in fstab doesn't matter. My issue was that my EBS volume had some errors in its file system. I attached it to an instance and ran
fsck.ext4 /dev/nvme1n1
After it fixed a couple of file system errors, I put it back in fstab and rebooted - no problem anymore!
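A sketch of the UUID-based fstab approach mentioned above (the UUID shown is a placeholder; read the real one with blkid):
sudo blkid /dev/nvme1n1    # prints something like: /dev/nvme1n1: UUID="0123abcd-..." TYPE="ext4"
# then reference that UUID in /etc/fstab instead of the device name:
# UUID=0123abcd-...  /data  ext4  discard,defaults,nofail  0  2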
You may find the new EC2 instance family equipped with local NVMe storage useful: C5d.
See announcement blog post: https://aws.amazon.com/blogs/aws/ec2-instance-update-c5-instances-with-local-nvme-storage-c5d/
Some excerpts from the blog post:
You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.
Other than the addition of local storage, the C5 and C5d share the same specs.
You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe
Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key.
Local NVMe devices have the same lifetime as the instance they are attached to and do not stick around after the instance has been stopped or terminated.