How to find which EBS volume is storing the data - amazon-web-services

I have created an EC2 instance and attached 3 EBS volumes to it. I created an LVM setup across the 3 EBS volumes and mounted it at /var/lib/pgsql, then installed PostgreSQL 8.4 from source and configured its data directory at /var/lib/pgsql/data/data1. I created a sample table, and the data is being stored on the LVM volume (/dev/mapper/vg_ebs-lv_ebs). I want to check which EBS volumes the data is stored on, so that I can remove the unused volumes in a way that does not affect the PostgreSQL database.
I have attached photos of the work.
I tried checking the details with pvdisplay, lvdisplay and vgdisplay, and tried filling the volume to 100%, but I am still not able to identify which EBS volumes the data is stored on.
I want to know whether 1) the data is stored across all the volumes, 2) on the root volume, or 3) on some specific volume.

The Postgres data is in /var/lib/pgsql, which lives on the lv_ebs logical volume in the vg_ebs volume group.
To provide space for this volume group, you have three physical volumes: /dev/sdf, /dev/sdg and /dev/sdh.
In general, LVM decides which data from a logical volume to put on which physical volume. Allocation is done by means of extents, which are non-overlapping, contiguous ranges of bytes, each having a starting offset and a length (similar to partitions). You can get the extent allocation of your LVs with:
lvs -o +seg_pe_ranges
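On a setup like the one in the question, the output shows which PV and extent range backs each LV segment. The exact ranges below are illustrative, not real output:

```shell
# Show which physical extents back each logical volume segment
lvs -o +seg_pe_ranges
# Illustrative output (actual ranges depend on your volume sizes):
#   LV     VG     ... PE Ranges
#   lv_ebs vg_ebs ... /dev/sdf:0-2559
#   lv_ebs vg_ebs ... /dev/sdg:0-2559
#   lv_ebs vg_ebs ... /dev/sdh:0-2559
```

If all three PVs appear in the ranges, PostgreSQL's files may have blocks on any of the three volumes, so no volume can be detached safely without first migrating its extents.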
In your case all the space on the PVs has been allocated to your LV, so you cannot simply remove a PV from the VG. What you need to do is:
Unmount /var/lib/pgsql
Use resize2fs to shrink your filesystem by at least the size of the PV that you want to remove
Use lvreduce to shrink the size of LV to the reduced size of filesystem
Use pvmove to move any active physical segments off of the PV being removed
Use vgreduce to remove now empty PV from VG
Don't forget to back up your data first.
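The steps above might look like the following, assuming the LV is /dev/vg_ebs/lv_ebs (names from the question), /dev/sdh is the PV being removed, and 18G is an illustrative target size; your sizes will differ:

```shell
# Back up first, then unmount the filesystem
umount /var/lib/pgsql

# ext4 requires a forced check before an offline shrink
e2fsck -f /dev/vg_ebs/lv_ebs

# Shrink the filesystem to less than the remaining PV capacity
resize2fs /dev/vg_ebs/lv_ebs 18G

# Shrink the LV to match the reduced filesystem
lvreduce -L 18G /dev/vg_ebs/lv_ebs

# Move any extents still allocated on the PV being removed
pvmove /dev/sdh

# Remove the now-empty PV from the volume group, then remount
vgreduce vg_ebs /dev/sdh
mount /dev/vg_ebs/lv_ebs /var/lib/pgsql
```

Shrink the filesystem to slightly less than the final LV size; if the LV ends up smaller than the filesystem, data is destroyed.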


Delete EBS volumes not in use without losing data

I have created an EC2 instance and attached 4 EBS volumes (gp2 = 3 GB, gp3 = 3 GB, io1 = 4 GB, io2 = 4 GB) and mounted them.
I installed PostgreSQL from source on it and created a database with 2 million sample entries.
On checking, I see that the tables and data I am creating are stored on the root volume, xvda1.
I want to delete the 4 extra volumes attached to the instance in such a way that it won't affect the Postgres data. The root volume shows 36% use and the other 4 volumes show 2% use each; I think the 2% is filesystem overhead from formatting and mounting them (not sure). I want to know how I can delete them:
Should I unmount them first and then detach and delete the volumes?
As your data is on the root volume, you can remove all the other volumes.
For a graceful unmount and to preserve data integrity, unmount the drives one by one and then delete them. Once done, restart your EC2 instance and check that all your data is still there.
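Assuming the extra volumes are mounted at paths like /mnt/vol1 and you know their volume IDs (both illustrative here), the cleanup might look like:

```shell
# On the instance: unmount the unused volume, and remove its
# /etc/fstab entry so the next boot doesn't wait for it
sudo umount /mnt/vol1

# From anywhere the AWS CLI is configured: detach, then delete
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 delete-volume --volume-id vol-0123456789abcdef0
```

Delete only after confirming the detach succeeded and the database still works; a detached volume can be reattached, a deleted one cannot.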

Google Compute Engine instance - size of instance disk vs snapshot

My Google Compute Engine instance's disk size is dramatically different from the size of the snapshot created from this instance. My used disk space is under 40 GB, while the snapshot size is 123 GB. An illustration is below. My instance runs Ubuntu 16.04.
Here is the output of the df command:
Here is the size of the snapshot created from this instance:
I expected them to be approximately the same size. Do I need to clear the trash bin before creating a snapshot, or do something else?
Google Cloud snapshot size changes depending on the changes made to the data: only the first snapshot is a full disk snapshot, and all subsequent ones are differential.
This means that, depending on the activity on the disk, one snapshot could contain only 1 GB of changes and the next one 200 GB, for the same 1 TB disk.
Google maintains consistency between snapshots; you can find more information here: https://cloud.google.com/compute/docs/disks/restore-and-delete-snapshots#deleting_snapshot

AWS create a larger EBS volume from a Snapshot and specify the number of inodes

I have an EBS volume (ext4) mounted as root.
I would like to increase the size of the EBS volume, and more importantly to increase/set the number of inodes in it.
However, when creating a volume from a snapshot, there is an option to set the size, but nowhere to specify the desired number of inodes. My volume should have a large number of inodes, to support many small files.
How can this be done when creating a volume from an existing snapshot?

How to preserve data when stopping EC2 instance

I created an m3.large EC2 instance for data processing. I modified it to have a 40 GB root drive, and it comes with a 30 GB additional drive. I only need to use it for a few days a month, so the idea was to stop it when not in use and start it when required.
When I stopped it I was warned that
ephemeral storage data would be lost
On restarting, this turned out to be a cryptic way of saying that everything on the 30 GB volume would be wiped, which it has been, whereas the root volume that uses the 40 GB is untouched.
So why does it do this, and if I hadn't increased the size of my root volume, would all my changes on that be lost as well?
There are 2 types of volumes (disks) provided by AWS:
EBS Volumes
Ephemeral ( Instance Store )
EBS Volumes:
Your root volume is almost always an EBS volume (you can safely assume this; almost all AMIs are EBS-based these days). The data on an EBS volume persists even when the instance is stopped or terminated; this also means you can quickly remove an EBS volume from one instance and reattach it to another.
Ephemeral (Instance Store):
Ephemeral volumes are temporary drives provided with the instance; the size (e.g. 10 GB or 80 GB), the count (e.g. 2 disks or 5 disks) and the type (magnetic or SSD) vary with the instance size.
The data on these drives is wiped whenever the instance is stopped and started. Use these disks for non-critical activities like temporary file storage, scratch processing and application logs. If you need the contents of these drives to persist, move them to EBS-backed volumes or copy them to S3 from time to time to prevent data loss.
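One way to tell the two kinds of volume apart from inside the instance is the block-device-mapping section of the instance metadata, and anything worth keeping on an ephemeral drive can be synced to S3 periodically. The mount path and bucket name below are illustrative:

```shell
# Instance-store devices show up as "ephemeral0", "ephemeral1", ...;
# EBS devices show up under the root/ebs mappings
# (with IMDSv2 enforced you need to fetch a session token first)
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/

# Example periodic backup of an ephemeral scratch directory to S3
aws s3 sync /mnt/ephemeral0/app-logs s3://my-backup-bucket/app-logs
```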
There won't be any loss of data when you increase the size of an EBS volume: you take a snapshot of the volume and restore it to a new volume of the required (larger) size. I have done this a couple of times and haven't encountered any data loss in the process, but if you are at all concerned, upload the contents to S3 as a backup first.

Growing Amazon EBS Volume sizes [closed]

Closed. This question is off-topic and is not accepting answers. Closed 9 years ago.
I'm quite impressed with Amazon's EC2 and EBS services. I wanted to know if it is possible to grow an EBS Volume.
For example: If I have a 50 GB volume and I start to run out of space, can I bump it up to 100 GB when required?
You can grow the storage, but it can't be done on the fly. You'll need to take a snapshot of the current volume, create a new, larger volume from it, and re-attach that to your instance.
There's a simple walkthrough here based on using Amazon's EC2 command line tools
You can't simply add more space on the fly when you need it, but you can resize the partition via a snapshot.
Steps to do this:
unmount the EBS volume
create an EBS snapshot
create a new volume with more space
recreate the partition table and resize the filesystem
mount the new EBS volume
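With the modern AWS CLI the same snapshot-based resize might be scripted roughly as follows; every ID, the size, and the Availability Zone are placeholders:

```shell
# Snapshot the existing volume (unmount it first for consistency)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "pre-resize backup"

# Create a larger (100 GiB) volume from that snapshot,
# in the same Availability Zone as the instance
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
    --size 100 --availability-zone us-east-1a

# Attach it, then recreate the partition table and grow the
# filesystem on the instance as described in the steps above
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf
```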
Look at http://aws.amazon.com/ebs/ - EBS Snapshot:
Snapshots can also be used to instantiate multiple new volumes, expand the size of a volume or move volumes across Availability Zones. When a new volume is created, there is the option to create it based on an existing Amazon S3 snapshot. In that scenario, the new volume begins as an exact replica of the original volume. By optionally specifying a different volume size or a different Availability Zone, this functionality can be used as a way to increase the size of an existing volume or to create duplicate volumes in new Availability Zones. If you choose to use snapshots to resize your volume, you need to be sure your file system or application supports resizing a device.
I followed all the answers, but with all due respect, each had something missing.
If you follow these steps you can grow your EBS volume and keep your data (this is not for the root volume). For simplicity I suggest using the AWS console to create the snapshot and so on; you can also do all of this with the AWS command line tools.
We are not touching the root volume here.
Go to your AWS console:
Shut down your instance (it will only be down for a few minutes)
Detach the volume you are planning to grow (say /dev/xvdf)
Create a snapshot of the volume.
Make a new volume with a larger size using the snapshot you just created
Attach the new volume to your instance
Start your instance
SSH to your instance:
$ sudo fdisk -l
This gives you something like:
Disk /dev/xvdf: 21.5 GB, 21474836480 bytes
12 heads, 7 sectors/track, 499321 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd3a8abe4
Device Boot Start End Blocks Id System
/dev/xvdf1 2048 41943039 20970496 83 Linux
Write down the Start and Id values (in this case 2048 and 83).
Using fdisk, delete partition xvdf1 and create a new one that starts at exactly the same block (2048). We will give it the same Id (83):
$ sudo fdisk /dev/xvdf
Command (m for help): d
Selected partition 1
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 83
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
This step is explained well here: http://litwol.com/content/fdisk-resizegrow-physical-partition-without-losing-data-linodecom
Almost done; we just have to mount the volume and run resize2fs:
Mount the EBS volume (mine is at /mnt/ebs1):
$ sudo mount /dev/xvdf1 /mnt/ebs1
and resize it:
$ sudo resize2fs -p /dev/xvdf1
resize2fs 1.42 (29-Nov-2011)
Filesystem at /dev/xvdf1 is mounted on /mnt/ebs1; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/xvdf1 to 5242624 (4k) blocks.
The filesystem on /dev/xvdf1 is now 5242624 blocks long.
ubuntu#ip-xxxxxxx:~$
Done! Use df -h to verify the new size.
As long as you are okay with a few minutes of downtime, Eric Hammond has written a good article on resizing the root disk of a running EBS instance: http://alestic.com/2010/02/ec2-resize-running-ebs-root
All great recommendations, and I thought I'd add this article I found, which covers expanding a Windows Amazon EC2 EBS volume using the Amazon web UI tools to perform the necessary changes. If you're not comfortable using the CLI, this will make your upgrade much easier.
http://www.tekgoblin.com/2012/08/27/aws-guides-how-to-resize-a-ec2-windows-ebs-volume/
Thanks to TekGoblin for posting this article.
You can now do this through the AWS Management Console. The process is the same as in the other answers but you no longer need to go to the command line.
BTW: as with physical disks, it can be handy to use LVM; for example:
http://www.davelachapelle.ca/guides/ubuntu-lvm-guide/
http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/
Big advantage: It allows adding (or removing) space dynamically.
It can also easily be moved between/among instances.
Caveats:
it must be configured ahead of time
a simple JBOD setup means you lose everything if you lose one "disk"
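A minimal sketch of setting LVM up ahead of time on one EBS device and later growing it with a second one; the device names and the vg_data/lv_data names are examples, not anything AWS provides:

```shell
# Initial setup: one EBS device becomes the whole pool
pvcreate /dev/xvdf
vgcreate vg_data /dev/xvdf
lvcreate -l 100%FREE -n lv_data vg_data
mkfs.ext4 /dev/vg_data/lv_data

# Growing later: attach a second EBS volume and extend online
pvcreate /dev/xvdg
vgextend vg_data /dev/xvdg
lvextend -l +100%FREE /dev/vg_data/lv_data
resize2fs /dev/vg_data/lv_data   # ext4 supports online growth
```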
My steps:
stop the instance
find the EBS volume attached to the instance and create a snapshot of it
create a new volume with more disk space using the above snapshot. Unfortunately the UI in the AWS console for creating a volume from a snapshot is almost unusable, because it lists every snapshot on AWS. Using the command line tool is a lot easier, like this:
ec2-create-volume -s 100 --snapshot snap-a31fage -z us-east-1c
detach the existing (smaller) EBS volume from the instance
attach the new (bigger) volume to the instance, making sure to attach it to the same device the instance expects (in my case /dev/sda1)
start the instance
You are done!
Other than step 3 above, you can do everything from the AWS management console.
Also NOTE as mentioned here:
https://serverfault.com/questions/365605/how-do-i-access-the-attached-volume-in-amazon-ec2
the device on your EC2 instance might be /dev/xv* while the AWS web console tells you it's /dev/s*.
For Windows, use the "diskpart" command; have a look here: http://support.microsoft.com/kb/300415
Following are the steps I followed for a non-root disk (a basic, not dynamic, disk).
Once you have taken a snapshot, dismounted the old EBS volume (say 600 GB), created a larger EBS volume (say 1 TB) and mounted the new volume, you have to let Windows know about the resizing (from 600 GB to 1 TB), so at a command prompt (run as administrator):
diskpart.exe
select disk=9
select volume=Z
extend
[My disk 9, volume labelled Z, was a 1 TB volume created from an EC2 snapshot of a 600 GB volume; I wanted to grow 600 GB to 1 TB, so I followed the above steps.]
I highly recommend Logical Volume Manager (LVM) for all EBS volumes, if your operating system supports it. Linux distributions generally do. It's great for several reasons.
Resizing and moving logical volumes can be done live, so instead of the whole offline snapshot procedure, which requires downtime, you can create another, larger EBS volume, add it to the LVM pool as a physical volume (PV), move the logical volume (LV) onto it, remove the old physical volume from the pool, and delete the old EBS volume. Then you simply resize the logical volume and the filesystem on it. This requires no downtime at all!
It abstracts your storage from your 'physical' devices. Moving partitions across devices without needing downtime or changes to mountpoints/fstab is very handy.
It would be nice if Amazon would make it possible to resize EBS volumes on-the-fly, but with LVM it's not that necessary.
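The live migration described above might be sketched as follows, assuming the LV lives in a volume group named vg_data and the old and new EBS devices appear as /dev/xvdf and /dev/xvdg (all names illustrative):

```shell
# Add the new, larger EBS volume to the pool
pvcreate /dev/xvdg
vgextend vg_data /dev/xvdg

# Migrate all extents off the old device while the LV stays mounted
pvmove /dev/xvdf

# Drop the old device from the pool; its EBS volume can now be
# detached and deleted
vgreduce vg_data /dev/xvdf
pvremove /dev/xvdf

# Grow the LV into the new free space and resize the filesystem online
lvextend -l +100%FREE /dev/vg_data/lv_data
resize2fs /dev/vg_data/lv_data
```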
If your root volume uses the XFS filesystem, run xfs_growfs / after growing the volume.