What is the best way to move a system partition, i.e. /var or /opt, to a larger volume that is attached during EC2 instance creation? For example, I want to create a generic AMI, but if I use this AMI for a MySQL server then I need to attach a larger volume for /var. Is there a way during the creation process to tell it to provision the instance with /var on the additional volume?
The concern you mention doesn't really arise in practice. You don't need to specify a separate volume or size for the /var or /opt directories. The Linux directory tree lives on the root storage device, and any sub-directory (e.g. /var, /etc) can grow as long as there is free space left on the root volume. You don't need an extra partition for that.
In your case the solution is: while launching a new instance from your generic AMI, choose your desired storage size in Step 4: Add Storage.
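If you script your launches instead of using the console, the same thing can be expressed as a block device mapping on the CLI. This is only a rough sketch; the AMI ID and instance type are placeholders, and /dev/xvda is assumed to be the AMI's root device name (check your AMI's details):
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":100,"VolumeType":"gp3"}}]'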
I have a use case to maintain two servers, primary and secondary, and to use the secondary if the primary fails/stops. Only one server can be used at a time. The primary has a lot of activity going on, like installations and multiple read-writes. To replicate this to the secondary I would have to create timely snapshots and then restore them when required. To skip this, can't I use a shared volume for both servers, so that I can use the secondary without first having to restore a snapshot from the primary? I've read about the EBS volume type io2, which supports Multi-Attach, meaning it can, as the name suggests, be attached to multiple instances and used as a shared volume, which can't be done with other volume types. Will this create some kind of issue, like data corruption, if used as the root volume? I don't want to use NFS/EFS as it is not compatible with the application I use.
This is AWS-specific.
You can use an EBS volume for your multi-attach app data needs.
One of the reasons you can’t multi-attach a root volume is that these root volumes are disposed of when an EC2 instance is terminated. AWS recommends that app data is not stored on the root volume due to the ephemeral nature of the root volumes.
AWS documentation for root volumes can be found here https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
I recommend mounting a second volume for your app data.
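As a minimal sketch of that, assuming the extra EBS volume shows up as /dev/xvdf (check lsblk) and that an XFS filesystem is acceptable:
sudo mkfs -t xfs /dev/xvdf      # create a filesystem on the new, empty volume
sudo mkdir -p /app-data         # create a mount point for the app data
sudo mount /dev/xvdf /app-data  # mount the volume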
So I created i3.large instances with an NVMe disk on each node; here was my process:
lsblk -> nvme0n1 (check that the NVMe device isn't mounted yet)
sudo mkfs.ext4 -E nodiscard /dev/nvme0n1
sudo mount -o discard /dev/nvme0n1 /mnt/my-data
Then I added this line to /etc/fstab:
/dev/nvme0n1 /mnt/my-data ext4 defaults,nofail,discard 0 2
sudo mount -a (to check that everything is OK)
sudo reboot
All of this works, and I can connect back to the instance; I have 500 GiB on my new partition.
But after I stop and restart the EC2 machines, some of them randomly become inaccessible (AWS warns that only 1/2 status checks passed).
When I look at the logs for why an instance is inaccessible, it points to the NVMe partition (but I did run sudo mount -a to check that it was OK, so I don't understand).
I don't have the exact AWS logs, but here are some lines from them:
Bad magic number in super-block while trying to open
then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
/dev/fd/9: line 2: plymouth: command not found
I have been using "c5" type instances for almost a month, mostly "c5d.4xlarge" with NVMe drives. Here's what has worked for me on Ubuntu instances:
First, find where the NVMe drive is located:
lsblk
Mine always showed up as nvme1n1. Then check whether it is an empty volume without any file system (it mostly is, unless you are remounting); for empty drives the output should be /dev/nvme1n1: data:
sudo file -s /dev/nvme1n1
Then do this to format it (if the last step showed that your drive already has a file system and isn't empty, skip this and go to the next step):
sudo mkfs -t xfs /dev/nvme1n1
Then create a mount point and mount the NVMe drive:
sudo mkdir /data
sudo mount /dev/nvme1n1 /data
You can now verify that it is mounted by running:
df -h
Stopping and starting an instance erases the ephemeral disks, moves the instance to new host hardware, and gives you new empty disks... so the ephemeral disks will always be blank after stop/start. When an instance is stopped, it doesn't exist on any physical host -- the resources are freed.
So the best approach, if you are going to be stopping and starting instances, is not to add them to /etc/fstab but rather to just format them on first boot and mount them after that. One way of testing whether a filesystem is already present is to use the file utility and grep its output; if grep doesn't find a match, it returns false.
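A rough sketch of that first-boot check, reusing the device name from the question (/dev/nvme0n1 and the mount point are assumptions; adjust to your setup):
# format the ephemeral disk only if it doesn't already carry a filesystem
if ! sudo file -s /dev/nvme0n1 | grep -q filesystem; then
    sudo mkfs.ext4 -E nodiscard /dev/nvme0n1
fi
sudo mkdir -p /mnt/my-data
sudo mount -o discard /dev/nvme0n1 /mnt/my-data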
The NVMe SSD on the i3 instance class is an example of an Instance Store Volume, also known as an Ephemeral [ Disk | Volume | Drive ]. They are physically inside the instance and extremely fast, but not redundant and not intended for persistent data... hence, "ephemeral." Persistent data needs to be on an Elastic Block Store (EBS) volume or an Elastic File System (EFS), both of which survive instance stop/start, hardware failures, and maintenance.
It isn't clear why your instances are failing to boot, but nofail may not be doing what you expect when a volume is present but has no filesystem. My impression has been that eventually it should succeed.
But you may need to apt-get install linux-aws if running Ubuntu 16.04. NVMe support on Ubuntu 14.04 is not really stable and not recommended.
Each of these three storage solutions has its advantages and disadvantages.
The Instance Store is local, so it's quite fast... but, it's ephemeral. It survives hard and soft reboots, but not stop/start cycles. If your instance suffers a hardware failure, or is scheduled for retirement, as eventually happens to all hardware, you will have to stop and start the instance to move it to new hardware. Reserved and dedicated instances don't change ephemeral disk behavior.
EBS is persistent, redundant storage that can be detached from one instance and moved to another (and this happens automatically across a stop/start). EBS supports point-in-time snapshots, and these are incremental at the block level, so you don't pay for storing the data that didn't change across snapshots... but through some excellent witchcraft, you also don't have to keep track of "full" vs. "incremental" snapshots -- the snapshots are only logical containers of pointers to the backed-up data blocks, so they are, in essence, all "full" snapshots, but only billed as incremental. When you delete a snapshot, only the blocks no longer needed to restore either that snapshot or any other snapshot are purged from the back-end storage system (which, transparently to you, actually uses Amazon S3).
EBS volumes are available as both SSD and spinning platter magnetic volumes, again with tradeoffs in cost, performance, and appropriate applications. See EBS Volume Types. EBS volumes mimic ordinary hard drives, except that their capacity can be manually increased on demand (but not decreased), and can be converted from one volume type to another without shutting down the system. EBS does all of the data migration on the fly, with a reduction in performance but no disruption. This is a relatively recent innovation.
EFS uses NFS, so you can mount an EFS filesystem on as many instances as you like, even across availability zones within one region. The size limit for any one file in EFS is 52 terabytes, and your instance will actually report 8 exabytes of free space. The actual free space is for all practical purposes unlimited, but EFS is also the most expensive -- if you did have a 52 TiB file stored there for one month, that storage would cost over $15,000. The most I ever stored was about 20 TiB for 2 weeks, cost me about $5k but if you need the space, the space is there. It's billed hourly, so if you stored the 52 TiB file for just a couple of hours and then deleted it, you'd pay maybe $50. The "Elastic" in EFS refers to the capacity and the price. You don't pre-provision space on EFS. You use what you need and delete what you don't, and the billable size is calculated hourly.
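For illustration, mounting EFS from inside the VPC is just an NFS mount; in this sketch the filesystem ID, region, and mount point are placeholders:
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-01234567.efs.us-east-1.amazonaws.com:/ /mnt/efs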
A discussion of storage wouldn't be complete without S3. It's not a filesystem, it's an object store. At about 1/10 the price of EFS, S3 also has effectively infinite capacity, and a maximum object size of 5TB. Some applications would be better designed using S3 objects, instead of files.
S3 can also be easily used by systems outside of AWS, whether in your data center or in another cloud. The other storage technologies are intended for use inside EC2, though there is an undocumented workaround that allows EFS to be used externally or across regions, with proxies and tunnels.
I just had a similar experience! My c5.xlarge instance detects an EBS volume as nvme1n1. I added this line to fstab:
/dev/nvme1n1 /data ext4 discard,defaults,nofail 0 2
After a couple of reboots it looked like it was working, and it kept running for weeks. But today I got an alert that the instance could not be connected to. I tried rebooting it from the AWS console with no luck; it looks like the culprit is the fstab entry, since the disk mount fails.
I raised a ticket with AWS support, but there is no feedback yet. I had to start a new instance to recover my service.
On another test instance, I am trying to use the UUID (obtained with the blkid command) instead of /dev/nvme1n1. So far it still looks like it's working... we'll see if it causes any issues.
I will update here if I get any feedback from AWS support.
================ EDIT with my fix ===========
AWS hasn't given me feedback yet, but I found the issue. Actually, whether you mount /dev/nvme1n1 or the UUID in fstab doesn't matter. My issue was that my EBS volume had some file system errors. I attached it to an instance and then ran
fsck.ext4 /dev/nvme1n1
After it fixed a couple of file system errors, I put the entry back in fstab, rebooted, and there were no more problems!
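For reference, the UUID-based variant mentioned above looks roughly like this (the UUID below is a placeholder; use the one blkid actually reports):
sudo blkid /dev/nvme1n1
# /etc/fstab entry using the UUID instead of the device name
UUID=0aabbccd-1234-5678-9abc-def012345678 /data ext4 discard,defaults,nofail 0 2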
You may find the new EC2 instance family equipped with local NVMe storage useful: C5d.
See the announcement blog post: https://aws.amazon.com/blogs/aws/ec2-instance-update-c5-instances-with-local-nvme-storage-c5d/
Some excerpts from the blog post:
You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.
Other than the addition of local storage, the C5 and C5d share the same specs.
You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe
Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key.
Local NVMe devices have the same lifetime as the instance they are attached to and do not stick around after the instance has been stopped or terminated.
On AWS EC2, if we launch an AMI with an instance store, we can attach an EBS volume for persistent storage. Is the reverse possible?
i.e.,
Can we add an instance-store volume when launching an EBS HVM AMI? The reason behind this is to use it as swap.
I can't see the option to add an instance store under Storage Configuration while launching an EBS-backed instance.
Please let me know if there is a method to have the root volume on EBS and the swap volume on instance store.
many thanks,
Shan
If you are launching an instance class that includes instance store (ephemeral) disks, those should be accessible from Storage Configuration, as in this example, where the instance class provides two ephemeral disks.
See Instance Storage in the EC2 documentation to confirm whether the instance class you're launching includes instance store volumes. Some classes do not, and where that is the case, you can only select EBS as the Volume Type.
If you're launching from an AMI that already contains references to the ephemeral disks, you should see something like this screen shot. If it doesn't include references to the instance store volumes, you can use Add New Volume to include the desired instance store volumes with the new instance. Their sizes are fixed by the specs of the instance class, which is why Size says N/A. Since they are provided at no charge, you should always attach them at launch, even if you have no plan for them, because they can't be added after launching.
AMIs can't be edited, so if you want these included automatically on future launches, you'll need to build a new AMI, which you'll probably also want to be configured (within the OS boot sequence) to create and mount the desired swap space.
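As a rough sketch of what such a boot-time script would need to repeat on every cold start (the device name is an assumption; ephemeral disks show up as, e.g., /dev/xvdb or /dev/nvme1n1 depending on the instance class, so check lsblk):
sudo mkswap /dev/xvdb    # initialise the ephemeral disk as swap
sudo swapon /dev/xvdb    # enable it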
I have an EC2 instance running a CentOS AMI and the root device is EBS; however, it is not EBS-optimized.
I have installed a few packages on it and now I want to stop and start it again. Amazon documentation says that the EBS data would be preserved but the instance store data would be lost.
How do I know where (EBS or instance store) my packages are stored? I see that package files are in the /opt, /var, and /etc directories.
Will I lose my installed packages if I stop and start the Amazon EC2 instance?
Thanks.
When you create an EBS-backed instance (with or without ephemeral/instance store storage, and regardless of whether it is EBS-optimized), you don't lose data in your /opt, /var, or /etc directories, or any of the system data. So you are safe to stop and then restart it. Keep in mind that your internal and public IP addresses may change once you restart it.
The only data you lose is on ephemeral volumes, which are generally mounted with device names like /dev/sdb, /dev/xvdb, /dev/xvdc, etc.
If you create an instance-store-"only" instance, then you lose everything. However, you can tell whether your instance is of this type because it won't have the option to "stop" it, meaning you can only terminate it. These were the first type of instances EC2 offered, and until maybe 3-4 years ago they were the only ones, so they are not used that much AFAIK unless you need an ephemeral volume as your root volume.
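If you'd rather not rely on the console behaviour, a quick CLI check (the instance ID is a placeholder) prints either ebs or instance-store:
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].RootDeviceType' --output text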
[Edit]
This is what it's supposed to look like for an EBS backed instance (non-optimized):
You will not lose your data if the instance is setup as EBS.
EBS-optimised is a separate option that gives the instance dedicated bandwidth to EBS, which is useful for busy database applications, etc.
I'm quite impressed with Amazon's EC2 and EBS services. I wanted to know if it is possible to grow an EBS Volume.
For example: If I have a 50 GB volume and I start to run out of space, can I bump it up to 100 GB when required?
You can grow the storage, but it can't be done on the fly. You'll need to take a snapshot of the current volume, create a new, larger volume from that snapshot, and attach it in place of the old one.
There's a simple walkthrough here based on using Amazon's EC2 command line tools.
You can't simply 'bump up' more space on the fly when you need it, but you can resize the partition via a snapshot.
Steps to do this (a rough CLI sketch follows the list):
unmount the EBS volume
create an EBS snapshot
create a new volume with more space
recreate the partition table and resize the filesystem
mount the new EBS volume
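For reference, a rough sketch of the snapshot and new-volume steps with the current AWS CLI (the IDs, size, and Availability Zone are placeholders):
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "before resize"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 100 --availability-zone us-east-1a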
Look at http://aws.amazon.com/ebs/ - EBS Snapshot:
Snapshots can also be used to instantiate multiple new volumes, expand the size of a volume or move volumes across Availability Zones. When a new volume is created, there is the option to create it based on an existing Amazon S3 snapshot. In that scenario, the new volume begins as an exact replica of the original volume. By optionally specifying a different volume size or a different Availability Zone, this functionality can be used as a way to increase the size of an existing volume or to create duplicate volumes in new Availability Zones. If you choose to use snapshots to resize your volume, you need to be sure your file system or application supports resizing a device.
I followed all the answers, but with all respect, each has something missing.
If you follow these steps you can grow your EBS volume and keep your data (this is not for the root volume). For simplicity I suggest using the AWS console to create the snapshot, etc.; you can do that using the AWS command line tools too.
We are not touching the root volume here.
Goto your AWS console:
Shut down your instance (it will be down for a few minutes only)
Detach the volume you are planning to grow (say /dev/xvdf)
Create a snapshot of the volume.
Make a new volume with a larger size using the snapshot you just created
Attach the new volume to your instance
Start your instance
SSH to your instance:
$ sudo fdisk -l
This gives your something like:
Disk /dev/xvdf: 21.5 GB, 21474836480 bytes
12 heads, 7 sectors/track, 499321 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd3a8abe4
Device Boot Start End Blocks Id System
/dev/xvdf1 2048 41943039 20970496 83 Linux
Write down the Start and Id values (in this case 2048 and 83).
Using fdisk, delete the partition xvdf1 and create a new one that starts at exactly the same sector (2048). We will give it the same Id (83):
$ sudo fdisk /dev/xvdf
Command (m for help): d
Selected partition 1
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 83
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
This step is explained well here: http://litwol.com/content/fdisk-resizegrow-physical-partition-without-losing-data-linodecom
Almost done, we just have to mount the volume and run resize2fs:
Mount the ebs volume: (mine is at /mnt/ebs1)
$ sudo mount /dev/xvdf1 /mnt/ebs1
and resize it:
$ sudo resize2fs -p /dev/xvdf1
resize2fs 1.42 (29-Nov-2011)
Filesystem at /dev/xvdf1 is mounted on /mnt/ebs1; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/xvdf1 to 5242624 (4k) blocks.
The filesystem on /dev/xvdf1 is now 5242624 blocks long.
ubuntu@ip-xxxxxxx:~$
Done! Use df -h to verify the new size.
As long as you are okay with a few minutes of downtime, Eric Hammond has written a good article on resizing the root disk of a running EBS instance: http://alestic.com/2010/02/ec2-resize-running-ebs-root
All great recommendations, and I thought I'd add this article I found, which relates to expanding a Windows Amazon EC2 EBS instance using the Amazon web UI tools to perform the necessary changes. If you're not comfortable using the CLI, this will make your upgrade much easier.
http://www.tekgoblin.com/2012/08/27/aws-guides-how-to-resize-a-ec2-windows-ebs-volume/
Thanks to TekGoblin for posting this article.
You can now do this through the AWS Management Console. The process is the same as in the other answers but you no longer need to go to the command line.
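If you do prefer the command line, the modern online-resize flow looks roughly like this (the volume ID, device name, and size are placeholders; growpart comes from the cloud-utils package, and use xfs_growfs instead of resize2fs for XFS):
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100   # grow the EBS volume online
sudo growpart /dev/xvda 1                                            # grow the partition
sudo resize2fs /dev/xvda1                                            # grow an ext4 filesystem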
BTW: As with physical disks, it might be handy to use LVM; ex:
http://www.davelachapelle.ca/guides/ubuntu-lvm-guide/
http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/
Big advantage: It allows adding (or removing) space dynamically.
It can also easily be moved between/among instances.
Caveats:
it must be configured ahead of time
a simple JBOD setup means you lose everything if you lose one "disk"
My steps:
stop the instance
find the ebs volume attached to the instance and create a snapshot of it
create a new volume with more disk space using the above snapshot. Unfortunately the UI in the AWS console for this is almost unusable because it lists all the snapshots on AWS. Using the command line tool is a lot easier, like this:
ec2-create-volume -s 100 --snapshot snap-a31fage -z us-east-1c
detach the existing ebs (smaller) volume from the instance
attach the new (bigger) volume to the instance, and make sure to attach it to the same device the instance is expecting (in my case it is /dev/sda1)
start the instance
You are done!
Other than step 3 above, you can do everything using the AWS Management Console.
Also NOTE, as mentioned here: https://serverfault.com/questions/365605/how-do-i-access-the-attached-volume-in-amazon-ec2
the device on your EC2 instance might be /dev/xv* while the AWS web console tells you it's /dev/s*.
Use command "diskpart" for Windows OS, have a look here : Use http://support.microsoft.com/kb/300415
Following are the steps I followed for a non-root disk (basic, not dynamic, disk):
Once you have taken a snapshot, dismounted the old EBS volume (say 600 GB), created a larger EBS volume (say 1 TB), and mounted this new EBS volume, you have to let Windows know about the resize (from 600 GB to 1 TB). At a command prompt (run as administrator):
diskpart.exe
select disk=9
select volume=Z
extend
[My disk 9, volume labelled Z, was a 1 TB volume created from an EC2 snapshot of a 600 GB volume; I wanted to resize 600 GB to 1 TB and so could follow the above steps to do this.]
I highly recommend Logical Volume Manager (LVM) for all EBS volumes, if your operating system supports it. Linux distributions generally do. It's great for several reasons.
Resizing and moving of logical volumes can be done live, so instead of the whole offline snapshot thing, which requires downtime, you could just create another, larger EBS volume, add it to the LVM pool as a physical volume (PV), move the logical volume (LV) to it, remove the old physical volume from the pool, and delete the old EBS volume. Then you simply resize the logical volume and resize the filesystem on it. This requires no downtime at all!
It abstracts your storage from your 'physical' devices. Moving partitions across devices without needing downtime or changes to mountpoints/fstab is very handy.
It would be nice if Amazon would make it possible to resize EBS volumes on-the-fly, but with LVM it's not that necessary.
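A rough sketch of that live migration, assuming a volume group called data-vg with a logical volume data-lv, the old EBS volume at /dev/xvdf, and the new, larger one at /dev/xvdg (all of these names are placeholders):
sudo pvcreate /dev/xvdg                            # initialise the new EBS volume as a physical volume
sudo vgextend data-vg /dev/xvdg                    # add it to the volume group
sudo pvmove /dev/xvdf /dev/xvdg                    # migrate extents off the old volume, online
sudo vgreduce data-vg /dev/xvdf                    # drop the old volume from the group (then delete the EBS volume)
sudo lvextend -l +100%FREE /dev/data-vg/data-lv    # grow the logical volume into the new space
sudo resize2fs /dev/data-vg/data-lv                # grow the filesystem (ext4 assumed)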
If your root volume uses the XFS file system, then run this command: xfs_growfs /