Can we start an EC2 instance while an attached EBS volume is in 'optimizing' status? - amazon-web-services

I stopped our instance and modified the size of one of its EBS volumes. Now that volume is stuck in the 'in-use - optimizing (60%)' state. I understand optimization can sometimes take a long time, up to 24 hours, but we need to start our EC2 instance as soon as possible.
I'm just wondering whether it is possible to start the EC2 instance while one of its EBS volumes is still in the optimizing state.
That volume is not the root volume, but it is an important one containing database files.
Any advice would be appreciated.

Yes, you can run the EC2 instance even while the EBS volume is optimizing.
While the optimization is occurring, you may find that performance varies between the source and target configurations; however, it will never be lower than the minimum of the two. From the documentation:
While the volume is in the optimizing state, your volume performance is in between the source and target configuration specifications. Transitional volume performance will be no less than the source volume performance. If you are downgrading IOPS, transitional volume performance is no less than the target volume performance.
More information available in the documentation.
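As a rough sketch, the go/no-go decision this answer describes can be written down explicitly. The state names come from the ModifyVolume workflow quoted in the documentation; treating only "optimizing" and "completed" as startable is a conservative choice for this sketch, not an AWS rule.

```python
# Sketch: is it OK to start the instance given the attached volume's
# modification state? Per the answer above, an "optimizing" (or "completed")
# volume is usable; this conservatively treats "modifying" and "failed"
# as states to wait on or investigate first.
STARTABLE_STATES = {"optimizing", "completed"}

def can_start_instance(modification_state: str) -> bool:
    """True if the volume's modification state allows starting the instance."""
    return modification_state in STARTABLE_STATES
```

In practice you would read the state from the console, or from the EC2 API's DescribeVolumesModifications call (e.g. via boto3).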

Related

Why is disk IO on my new AWS EC2 instance so much slower?

I have a regular EC2 instance A with a 200GB SSD filled with data. I used this disk to create an AMI and used that AMI to spin up another EC2 instance B with the same specs.
B started almost instantaneously, which surprised me, since I thought there would be a delay while AWS copied my 200GB EBS volume to the disk backing the new instance. However, I noticed that IO is extremely slow on B: it takes 3x as long to parse data on B.
Why is this, and how can I overcome this? It's too slow for my application which requires fast disk IO.
This happens because a newly-created EBS volume is built from S3 on demand: when EC2 first reads a block from that volume, it's retrieved from S3. You only get full EBS performance once all blocks have been loaded. Incidentally, this is a significant problem for large databases restored from snapshots.
One solution may be fast snapshot restore. Although the docs don't describe what's happening behind the scenes, my guess is that they do a parallel disk copy from an existing EBS image. However, you will pay $0.75 per hour per snapshot, and are limited to 10 restores per hour.
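A common workaround for the lazy loading itself is to "initialize" the volume by reading every block once; AWS's guidance uses dd or fio for this. The same idea in Python, as a minimal sketch (the device path is an assumption; check lsblk first and run as root):

```python
# Sketch: touch every block of a restored volume once so later reads come
# from EBS instead of S3. Equivalent in spirit to:
#   dd if=/dev/xvdf of=/dev/null bs=1M
def initialize_volume(device_path: str, chunk_size: int = 1024 * 1024) -> int:
    """Read the device end to end, discarding the data; returns bytes read."""
    total = 0
    with open(device_path, "rb") as dev:
        while True:
            chunk = dev.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    return total

# Real use (as root; the path is illustrative): initialize_volume("/dev/xvdf")
```

This trades one slow full read up front for consistent performance afterwards, which is usually the right trade for a database volume.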
Given the use-case that you described in another question, I think that the best solution is to keep an on-demand instance that you start and stop for your job. Assuming you're using Linux, you are charged per-second, so if you only run for 10-20 minutes out of the hour, you'll pay a pro-rated price. And unlike spot instances, you'll know that the machine will always be available and always be able to finish the job.
Another alternative is to just leave the spot instance running. If you're running for a significant portion of every hour, you're not really saving that much by shutting the instance down.
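The start/stop economics above are easy to sketch. With per-second Linux billing, a job that runs 15 minutes each hour pays a quarter of the always-on price; the hourly rate below is a made-up example, not a real AWS price.

```python
# Back-of-envelope cost comparison: start/stop on-demand vs. always-on.
hourly_rate = 0.40                 # hypothetical on-demand $/hour (not a real price)
minutes_used_per_hour = 15

prorated_cost = hourly_rate * (minutes_used_per_hour / 60)  # per-second billing
always_on_cost = hourly_rate                                # instance left running
```

As the answer notes, the gap shrinks as your duty cycle approaches a full hour, at which point leaving the instance running is simpler.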

Amazon EC2 ebs vs gp2 ami

It's probably a silly question, but I can't understand the difference between these 2 amazon linux 2 amis:
ami-7105540e amzn2-ami-hvm-2.0.20180622.1-x86_64-ebs
ami-b70554c8 amzn2-ami-hvm-2.0.20180622.1-x86_64-gp2
Judging by this article, isn't gp2 just another ebs instance type?
The question isn't silly at all. In fact, AWS's lack of documentation explaining the actual difference between the two is a bit surprising.
TL;DR:
If you're planning to switch to a faster SSD root volume at some point in the future but want to use Magnetic for now, it's better to use the gp2 version of the AMI and change the root volume to SSD later.
Some more explanation:
ami-b70554c8 amzn2-ami-hvm-2.0.20180622.1-x86_64-gp2 - The recommended root volume type is General Purpose SSD (gp2)
ami-7105540e amzn2-ami-hvm-2.0.20180622.1-x86_64-ebs - The recommended root volume type is Magnetic
However, this isn't set in stone, so you can still interchange between them (I've used the gp2 version with a Magnetic volume in the past without any issues).
I couldn't find any official documentation on the actual difference between the two AMI versions, but the gp2 version most likely has SSD-related optimizations already applied to the OS.
So if you envision switching to SSD at some point in the future but want to start with a Magnetic volume, it might be better to use the gp2-optimized AMI right from the start. It probably has some optimizations that aren't relevant to Magnetic volumes, but it may be more future-proof in case you want a faster root volume later.
The AMI is offered with two root volume types:
amzn2-ami-hvm-2.0.20180622.1-x86_64-ebs (ami-7105540e) - uses a Magnetic volume for its root device.
amzn2-ami-hvm-2.0.20180622.1-x86_64-gp2 (ami-b70554c8) - uses an SSD volume for its root device.
How can we identify this?
Go to EC2.
Launch Instance
Select the mentioned AMI
Select any instance Type.
Choose the VPC, subnet, etc.
Next, it'll show the boot disk details, where you can see whether it's SSD or Magnetic.
To learn about volume types: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
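The console steps above can also be done programmatically: the root volume type is listed in the AMI's block device mapping, which is what the EC2 DescribeImages call (e.g. boto3's ec2.describe_images) returns. A minimal sketch, with an illustrative mapping standing in for the real API response:

```python
# Sketch: find an AMI's root volume type from its block device mapping.
# The sample dict mirrors the shape of an EC2 DescribeImages entry;
# the values are illustrative, not a real API response.

def root_volume_type(image: dict) -> str:
    """Return the VolumeType of the AMI's root device (e.g. "gp2", "standard")."""
    root_device = image["RootDeviceName"]
    for mapping in image["BlockDeviceMappings"]:
        if mapping.get("DeviceName") == root_device:
            return mapping["Ebs"]["VolumeType"]
    raise KeyError("root device not found in block device mappings")

# Hypothetical DescribeImages entry for the gp2 AMI discussed above.
sample_image = {
    "RootDeviceName": "/dev/xvda",
    "BlockDeviceMappings": [
        {"DeviceName": "/dev/xvda", "Ebs": {"VolumeType": "gp2"}},
    ],
}
```

Note that "standard" is the API's name for Magnetic volumes.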

Issue with EBS burst balance AWS

I've got an EBS volume (16GB) attached to an EC2 instance that has full access to an RDS instance. The thing is, I've moved the DB to the RDS instance, so I no longer use the EC2 instance for storing the web application database. I did this because I was having a lot of problems with the EBS credits (they were being consumed very quickly). I thought that with the DB on a separate instance (RDS), EBS credit consumption would drop to almost zero, since I'm no longer reading from or writing to the EBS volume but to RDS. However, the EBS credits keep being consumed (dropping to 0) every time users access the web application, and I don't understand why. Perhaps it's because I still don't fully understand how EBS credit usage works... Can anyone enlighten me on this? Thanks a lot in advance.
You can review volume types including info on their burst credits here. You should also review I/O Characteristics and Monitoring. From that page:
If your I/O latency is higher than you require, check VolumeQueueLength to make sure your application is not trying to drive more IOPS than you have provisioned. If your application requires a greater number of IOPS than your volume can provide, you should consider using a larger gp2 volume with a higher base performance level or an io1 volume with more provisioned IOPS to achieve faster latencies.
You should review that metric, and the others the page mentions, if this is causing you performance problems. If your IOPS are constantly above your baseline and causing requests to queue, you will consume credits as fast as they accrue.
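The credit drain described above can be put in numbers. A back-of-envelope sketch using the gp2 figures from the EBS volume-types documentation (a baseline of 3 IOPS per GiB with a 100 IOPS floor, a 5.4 million I/O credit bucket, and a 3,000 IOPS burst ceiling), applied to the 16GB volume from the question:

```python
# Back-of-envelope gp2 burst math for the 16GB volume in the question,
# using figures from the EBS volume-types documentation.
size_gib = 16
baseline_iops = max(100, 3 * size_gib)   # 3 IOPS/GiB with a 100 IOPS floor -> 100
burst_iops = 3000                        # gp2 burst ceiling
credit_bucket = 5_400_000                # I/O credits when the bucket is full

# Driving I/O above baseline drains credits at (actual - baseline) per second.
seconds_at_full_burst = credit_bucket / (burst_iops - baseline_iops)
minutes_at_full_burst = seconds_at_full_burst / 60   # roughly half an hour
```

So a small volume bursting at full speed empties its bucket in about half an hour, which matches the "credits consumed very quickly" symptom: any sustained load above 100 IOPS on a 16GB gp2 volume will eventually exhaust it.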

AWS EBS Volume "in-use - optimizing"

I have an EBS volume that displays a state of "in-use - optimizing(%)". What does this mean? What are the optimizations that AWS is performing? This is on a 300gb encrypted gp2 volume attached to a Windows Server 2012 R2 EC2 instance.
The in-use - optimizing state relates to EBS volume resizing.
in-use indicates that this volume is attached to an EC2 instance.
optimizing is the volume's modification state.
According to the AWS documentation on volume modifications:
An EBS volume being modified goes through a sequence of states. After you issue a ModifyVolume directive, whether from the console, CLI, API, or SDK, the volume enters first the Modifying state, then the Optimizing state, and finally the Complete state.
...
While the volume is in the optimizing state, your volume performance is in between the source and target configuration specifications. Transitional volume performance will be no less than the source volume performance. If you are downgrading IOPS, transitional volume performance is no less than the target volume performance.
And finally, from the introductory blog post for Volume Modifications:
The volume’s state reflects the progress of the operation (modifying, optimizing, or complete):
If you modified the volume, it will most likely show this state. Performance can be degraded during this time, since the EBS backend needs to sync data.
EBS volumes are in this state after a modification (e.g. resizing). It can take some time (it can appear stuck at 99% for hours), but it will eventually complete.
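If you'd rather script around this state than watch the console, the loop below sketches polling until the modification finishes. The fetch function is injected so the logic is visible without AWS; in real use it could wrap boto3's ec2.describe_volumes_modifications(VolumeIds=[...]), which reports a ModificationState and Progress for each volume.

```python
import time

def wait_for_modification(fetch, poll_seconds=0.0, max_polls=1000):
    """Poll fetch() -> (state, progress) until the volume leaves the
    transitional states; returns the final (state, progress) pair."""
    for _ in range(max_polls):
        state, progress = fetch()
        if state not in ("modifying", "optimizing"):
            return state, progress          # "completed" or "failed"
        time.sleep(poll_seconds)
    raise TimeoutError("volume still optimizing after max_polls")
```

A sensible poll_seconds for a real volume is on the order of minutes, given that optimizing can run for hours.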

cloning an amazon machine instance

I have two Amazon machine instances running. Both of them are m3.xlarge instances. One of them has the right software and configuration that I want to use. I want to create a snapshot of the EBS volume for that machine and use it as the EBS volume to boot the second machine from. Can I do that, and expect it to work, without shutting down the first machine?
It is well described in the AWS documentation:
"You can take a snapshot of an attached volume that is in use. However, snapshots only capture data that has been written to your Amazon EBS volume at the time the snapshot command is issued. This might exclude any data that has been cached by any applications or the operating system. If you can pause any file writes to the volume long enough to take a snapshot, your snapshot should be complete. However, if you can't pause all file writes to the volume, you should unmount the volume from within the instance, issue the snapshot command, and then remount the volume to ensure a consistent and complete snapshot."
I use Amazon as well, with 3 different clusters. With one of my clusters, after setting up 25 instances I realized there was a small issue in the configuration, and since they had live traffic going to them I couldn't shut them down.
You can snapshot the first machine's volume while it's still running; I had to do this myself. It took a little while, but ultimately it worked out. Please note that Amazon cannot guarantee the consistency of the disk when doing this.
I did a snapshot of the entire thing, fixed what needed to be fixed, spooled up 25 new servers, and terminated the other 25 (easier than modifying volumes, etc.). But you can create a new volume from the new snapshot, attach it to an instance, and do what needs to be done to get it to boot off that volume without much of a headache.
Since I went the easy route of spooling up new instances after my snapshot was complete, I can't walk you through getting a running instance to boot off a new volume.
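For reference, the snapshot-then-new-volume flow described in these answers looks roughly like this in code. The EC2 client is injected so the sequence reads without live AWS calls; with boto3 you would pass ec2 = boto3.client("ec2"). Waiters and error handling are omitted, all IDs are illustrative, and the consistency caveat from the documentation still applies: pause writes or unmount if you can.

```python
# Sketch of the clone flow: snapshot a (possibly in-use) volume, then
# create a fresh volume from that snapshot. "ec2" is any object exposing
# the EC2 client's create_snapshot/create_volume calls.

def clone_volume(ec2, source_volume_id: str, availability_zone: str) -> str:
    """Snapshot source_volume_id and build a new volume from the snapshot.
    Returns the new volume's ID (attach and boot from it separately)."""
    snap = ec2.create_snapshot(
        VolumeId=source_volume_id,
        Description="clone of configured instance volume",
    )
    vol = ec2.create_volume(
        SnapshotId=snap["SnapshotId"],
        AvailabilityZone=availability_zone,
    )
    return vol["VolumeId"]
```

In real use you would wait for the snapshot to reach the "completed" state before creating the volume (boto3 provides a snapshot_completed waiter for this), and remember from the earlier answer that the new volume will lazy-load its blocks from S3.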