I have created an EC2 instance, attached 4 EBS volumes to it (gp2: 3 GB, gp3: 3 GB, io1: 4 GB, io2: 4 GB) and mounted them.
I have installed PostgreSQL from source on it and created a database with 2 million sample entries.
On checking, I see that the table/data I am creating is being stored on the root volume, xvda1.
I want to delete the other 4 volumes that I have attached to the instance in such a way that it won't affect the PostgreSQL data. The root volume shows 36% usage and the other 4 volumes show 2% usage each; I think the 2% is just the space the filesystem uses when mounted (not sure). I want to know how I can delete them:
should I unmount them first, then detach the volumes and delete them?
Since your data is on the root volume, you can remove all the other volumes.
For a graceful removal that preserves data integrity, unmount the drives one by one, then detach and delete them. Once that's done, restart your EC2 instance and check that all your data is still there.
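A minimal sketch of that sequence, assuming the extra volumes are mounted at /mnt/gp2, /mnt/gp3, /mnt/io1 and /mnt/io2 and that the volume ID below is a placeholder you would replace for each volume:

# unmount each extra volume (mount points are assumptions)
sudo umount /mnt/gp2
sudo umount /mnt/gp3
sudo umount /mnt/io1
sudo umount /mnt/io2
# if you added these volumes to /etc/fstab, remove those entries so they are not re-mounted on boot

# detach and then delete each volume (vol-0abc1234567890def is a placeholder ID)
aws ec2 detach-volume --volume-id vol-0abc1234567890def
aws ec2 delete-volume --volume-id vol-0abc1234567890def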
I have created an EC2 instance and attached 3 EBS volumes to it. I created an LVM volume across the 3 EBS volumes and mounted it at /var/lib/pgsql, then installed PostgreSQL 8.4 from source and configured it to use /var/lib/pgsql/data/data1. I created a sample table on it, and the data is being stored on the LVM volume (/dev/mapper/vg_ebs-lv_ebs). I want to check on which EBS volumes the data is actually stored, so that I can remove the unused volumes in such a way that it does not affect the PostgreSQL database.
I have attached photos of the work.
I tried to check the details with pvdisplay, lvdisplay and vgdisplay, and I also tried filling the volume to 100%, but I am still not able to identify which EBS volumes the data is stored on.
I want to know whether 1) the data is stored across all the volumes, 2) on the root volume, or 3) on one specific volume.
Your Postgres data is in /var/lib/pgsql, which lives on the lv_ebs logical volume in the vg_ebs volume group.
To provide space for this volume group, you have three physical volumes: /dev/sdf, /dev/sdg and /dev/sdh.
In general, LVM decides which data from a logical volume to put on which physical volume. Allocation is done in extents, which are non-overlapping, contiguous ranges of bytes, each with a starting offset and a length (similar to partitions). You can get information about the extent allocation of your LVs with
lvs -o +seg_pe_ranges
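The output (hypothetical values, assuming the three PVs above) looks roughly like this; the PE Ranges column shows which physical volume backs each segment of the LV:

LV     VG     Attr       LSize  PE Ranges
lv_ebs vg_ebs -wi-ao---- 11.99g /dev/sdf:0-1023 /dev/sdg:0-1023 /dev/sdh:0-1021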
In your case all the space on the PVs has been allocated to your LV, so you cannot just remove a PV from the VG. What you need to do is:
Unmount /var/lib/pgsql
Use resize2fs to shrink your filesystem by at least the size of the PV that you want to remove
Use lvreduce to shrink the size of LV to the reduced size of filesystem
Use pvmove to move any active physical segments off of the PV being removed
Use vgreduce to remove now empty PV from VG
Don't forget to back up your data first.
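As a rough sketch of that sequence, assuming an ext4 filesystem, that /dev/sdh is the PV you want to free, and that the 8G target size is a placeholder you would adjust to your actual data:

umount /var/lib/pgsql
e2fsck -f /dev/vg_ebs/lv_ebs          # required before shrinking ext2/3/4
resize2fs /dev/vg_ebs/lv_ebs 8G       # shrink the filesystem first
lvreduce -L 8G /dev/vg_ebs/lv_ebs     # then shrink the LV to match
pvmove /dev/sdh                       # move any remaining extents off the PV
vgreduce vg_ebs /dev/sdh              # remove the now-empty PV from the VG
mount /var/lib/pgsql                  # re-mount (assumes an /etc/fstab entry)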
I'm writing because I'm very confused about the mechanism responsible for taking EBS snapshots.
First of all, as far as I understand the difference between a "backup" and a "snapshot": a backup is a full one-to-one copy of the volume's blocks, whereas a snapshot is a "delta" approach where only the changed blocks are copied. Right?
If that definition is right, then I would assume that taking an EBS snapshot should really be called a backup, since we typically do a full copy of all the blocks that the particular EBS volume is built on.
In almost every piece of documentation on the AWS website, I read that EBS snapshots are taken incrementally (the first one is full, then only the difference from the previous state). But after a small exercise in the AWS console I was not able to see that in action.
I took a snapshot of my EBS volume (50 GB) and the snapshot had a size of exactly 50 GB. Then I took another snapshot: again 50 GB. It made me incredibly confused.
All my experiments/tests were made using only the root volume (the first one attached to the EC2 instance). Now I'm wondering: if I have a database (PostgreSQL) installed on an EC2 instance that has only the root volume attached, is it safe to take a snapshot of the EBS volume (as a backup for my DB) while the machine is running? Or do I unfortunately have to periodically take the whole instance offline and only then back up my DB volume?
EBS Snapshots work like this:
On your initial snapshot, it creates a block-level copy of your volume on S3 in the background. On subsequent snapshots it only saves the blocks that have changed since the last snapshot to S3, and for the rest it keeps pointers to the original blocks. The third snapshot works like the second: it again stores the blocks that have changed since the second snapshot and adds pointers to the other blocks. Note that the size shown in the console is the size of the source volume, not the amount of data actually stored for that snapshot, which is why both of your snapshots appear as 50 GB.
If you restore the second snapshot, it creates a new volume, looks up in its metadata store which pointers belong to that snapshot, and then retrieves the blocks from S3 that they point to.
If you delete snapshot two, it removes the pointers to the blocks that belong to snapshot two. If any block on S3 has no pointer left, i.e. no longer belongs to any snapshot, it is deleted.
To you as the client this whole process is transparent - you can delete or restore any snapshot you like and EBS will take care of the specifics in the background.
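For reference, this is roughly how you would take and inspect snapshots from the AWS CLI (the volume ID is a placeholder):

# take a point-in-time snapshot of a volume
aws ec2 create-snapshot --volume-id vol-0abc1234567890def --description "postgres backup"

# list the snapshots for that volume; VolumeSize reports the source volume size,
# not the amount of changed data stored for each incremental snapshot
aws ec2 describe-snapshots --filters Name=volume-id,Values=vol-0abc1234567890def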
Should you be more interested in the details under the hood, I can recommend this article: The Jellyfish-inspired database under AWS Block Storage
I'm somewhat new to AWS EC2 instances, but reasonably comfortable with them.
It's an Ubuntu Linux 16.04 EC2 instance.
I attempted to attach an extra hard drive volume to an existing EC2 instance. The initial default volume of 8 GB was not enough for my needs.
I created the new 500 GB volume and attached it to my instance; however, it became the boot/root volume, so I detached it, modified the volume to a different device name, /dev/sdf, and reattached it, keeping the instance's /dev/sda1 drive that I had data on.
Upon restarting, the data on my initial /dev/sda1 had reverted back to its initial snapshot data from when the volume was created (the AMI image), instead of still holding the data that was on it, which is what I was expecting.
Is there anything I can do to get that data back? Where is it? No popup or warning said that any data was going to be erased back to the volume's initial snapshot, and my backups couldn't catch it beforehand. The /dev/sda1 volume was never detached or touched in this process, but the data is gone.
What happened? Where is my data?
Is AWS root storage ephemeral? The documentation says it is ephemeral, but after stopping the instance for some time and rebooting it, I can still view my files.
When you created your instance, I assume you chose an EBS-backed instance. This means that the root device volume is actually an EBS volume, so when the instance restarts, you still have all of your data. If you had chosen an instance store-backed instance, you would lose that data when the instance is stopped or terminated (though it survives a simple reboot).
See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html for more information.
What do you mean by root storage?
Instance Stores are ephemeral. EBS volumes are not.
When launching an instance, if you've selected Instance Store as the root volume, a restart will still retain the data. But a stop and start will delete all data.
If you've selected an EBS volume, the data will be retained.
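One way to check which case applies is to look at the instance's root device type from the AWS CLI (the instance ID is a placeholder):

aws ec2 describe-instances --instance-ids i-0abc1234567890def \
    --query "Reservations[].Instances[].RootDeviceType" --output text
# prints "ebs" for an EBS-backed instance, "instance-store" otherwise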
I have two Amazon EC2 instances: (1) one using SLES and running my current site, and (2) a new one using Amazon Linux. I am simply trying to move my site from the SLES instance and then retire it.
I have a 100 GB EBS volume attached to the SLES server, of which only about 20 GB is actually used. What's the best way to bring the data over to the new instance?
Create a new EBS volume of about 30 GB, attach it to the new instance, and use Unix cp?
Create a new EBS volume of about 30 GB, attach it to the new instance, temporarily also attach the original 100 GB volume to the new instance, and use Unix cp?
Something smarter/simpler, such as creating a snapshot(?) of the 100 GB EBS volume somehow, creating a new 30 GB EBS volume out of it, and then attaching that to the new instance? The extra benefit would be that I wouldn't have to take down my site.
Thanks a lot
It is easy to increase the size of a volume, but the only way you're going to be able to make it smaller is to create a blank volume of the size you want and copy the data over: either mount the new volume on the old instance and copy there, or detach the old volume, attach it to the new instance and copy there. Basically, of the options you listed, only the first and second would work.
The reason the snapshot approach won't work is that a snapshot inherits the size of the volume it was created from, and you would get an error trying to create a volume from a snapshot that is smaller than the original.
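A minimal sketch of the copy step, assuming the old 100 GB volume is attached to the new instance as /dev/xvdf, the new 30 GB volume as /dev/xvdg, and the mount points below are placeholders:

# format the new, smaller volume and mount both
sudo mkfs -t ext4 /dev/xvdg
sudo mkdir -p /mnt/old /mnt/new
sudo mount /dev/xvdf /mnt/old
sudo mount /dev/xvdg /mnt/new

# copy the data, preserving permissions, ownership, hard links, ACLs and xattrs
sudo rsync -aHAX /mnt/old/ /mnt/new/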