AWS RDS General SSD or Provisioned IOPS

Currently I have an RDS m4.2xlarge MySQL DB with 200 GB of disk space allocated using General Purpose SSD, which gives me a limit of about 600 IOPS.
Looking at the monitoring I can see that most of the time the write IOPS sit right at the 600 IOPS limit and can't go higher; then there are short lulls followed by bursts above 600 (I assume while the credits earned during the lull are burned). After the credits are used up it's back to the 600 limit.
My question is: is there a downside to running at the limit 99% of the time? My write queue depth is normally less than 2 and never really gets above 3.
Would I see any benefit from buying Provisioned IOPS (of, say, 1,500) to handle the peak requirements?
Currently I'm not really having any problems with the database, other than an occasional deadlock, but that seems to be expected for the number of transactions going through the DB.

From the AWS documentation:
Baseline I/O performance for General Purpose SSD storage is 3 IOPS for each GiB, with a minimum of 100 IOPS. This relationship means that larger volumes have better performance. For example, baseline performance for a 100-GiB volume is 300 IOPS. Baseline performance for a 1-TiB volume is 3,000 IOPS. And baseline performance for a 5.34-TiB volume is 16,000 IOPS.
In most cases the cheaper alternative is to simply increase the disk space, which gets you more IOPS allocated.
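If you go that route, storage can be grown without replacing the instance. A minimal boto3 sketch; the instance identifier and target size here are placeholders, not values from the question:

```python
import boto3

rds = boto3.client("rds")

# Growing gp2 storage raises the IOPS baseline (3 IOPS per GiB):
# e.g. 200 GiB -> ~600 IOPS baseline, 500 GiB -> ~1,500 IOPS baseline.
rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-db",  # placeholder identifier
    AllocatedStorage=500,                # new size in GiB
    ApplyImmediately=True,               # apply now, not at the next maintenance window
)
```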

Related

IOPS on aws machine decrease after sometime during performance test execution

When I start the AWS node and check average IOPS, it is usually between 3,000-4,000. But after running my test, when I check the IOPS on the AWS node, I see it keeps decreasing, impacting the throughput of my performance test.
Is it expected that IOPS will keep dropping after some time? Can anyone please guide me here?
Are you running GP2 or PIOPS?
If it's GP2, it is worth noting that these volumes use burstable credits allowing up to 3,000 IOPS. Once those credits are exhausted you'll be left with 3 × the EBS storage size (in GB) as your IOPS baseline.
So for example, 50 GB of storage would be 3 × 50 = 150 IOPS.
This article should help you understand further: https://aws.amazon.com/blogs/aws/new-burst-balance-metric-for-ec2s-general-purpose-ssd-gp2-volumes/
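To see whether credit exhaustion is what you're hitting, you can pull the BurstBalance metric the linked article describes. A minimal boto3 sketch, with a placeholder volume ID:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

# BurstBalance is a percentage; 0% means the credits are gone and the
# volume is held to its 3-IOPS-per-GiB baseline.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="BurstBalance",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder
    StartTime=now - timedelta(hours=6),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")
```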

What does IOPS (in Amazon EBS) mean in practice?

I have some images needed for an app. There are many images (50,000+) but the overall size is small (40 MB). Initially I thought I would simply use S3, but it is painfully slow to upload. As a temporary solution, I wanted to attach an EBS volume containing the images, and that would be fine. However, reading a bit about EBS General Purpose (gp2) volumes, I noticed the following description:
GP2 is the default EBS volume type for Amazon EC2 instances. These volumes are backed by solid-state drives (SSDs) and are suitable for a broad range of transactional workloads, including dev/test environments, low-latency interactive applications, and boot volumes. GP2 is designed to offer single-digit millisecond latencies, deliver a consistent baseline performance of 3 IOPS/GB to a maximum of 10,000 IOPS, and provide up to 160 MB/s of throughput per volume.
It is that 3 IOPS/GB figure that worries me. What does this mean in practical terms? Suppose you need an e-commerce site for a small number of users (e.g. < 10,000 requests per minute) and these images need to be retrieved. Amazon describes how IOPS are measured:
When small I/O operations are physically contiguous, Amazon EBS attempts to merge them into a single I/O up to the maximum size. For example, for SSD volumes, a single 1,024 KiB I/O operation would count as 4 operations, while 256 I/O operations at 4 KiB each would count as 256 operations.
Does this actually mean that if I want to retrieve 50 images of 10kB each in under a second, I would require 50 IOPS and easily exceed the baseline of 3 IOPS?
UPDATE:
Thanks to Mark B's suggestion, I was able to use S3 to upload my files. However, I'm still wondering about the number of IOPS needed to perform common tasks such as running a database or serving other files for a web application. I would be glad to hear some reference values for the minimum IOPS needed, based on your experience.
You are missing the "/GB" part of that statement. The baseline is 3 IOPS per GB. If your EBS volume is 100 GB, then you would have a baseline of 300 IOPS. For a GP2 EBS volume you have to multiply the size of the volume by 3 to get the IOPS.
Note that any GP2 volume under 1TB is also able to burst at up to 3,000 IOPS, so any limited increases in IO should still perform very well.
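The arithmetic is easy to sanity-check yourself. A small sketch of the gp2 baseline rule described above (the 16,000 IOPS cap is taken from the gp2 documentation quoted elsewhere on this page; the quote above gives 10,000 for an older generation):

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline: 3 IOPS per GiB, floor of 100, cap of 16,000 (10,000 in older docs)."""
    return max(100, min(3 * size_gib, 16_000))

for size_gib in (20, 100, 500, 1_000):
    baseline = gp2_baseline_iops(size_gib)
    burstable = "can burst to 3,000" if baseline < 3_000 else "at or above burst level"
    print(f"{size_gib:>5} GiB -> baseline {baseline:>5} IOPS ({burstable})")
```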
Also, I will add that S3 sounds like a better fit for your use case. If you are seeing slow upload speeds to S3, that is a problem that can be solved. You can use CloudFront to provide a nearby edge location that you can upload to.
In my experience uploads to S3 are never any slower than uploads to an EC2 instance that your EBS volume would be attached to.
Update:
To answer your additional question: the minimum IOPS needed will depend on many variables, such as the amount of RAM available, the type of application you are running, how well the application caches values in memory, the average size of your IO operations, etc. It's really difficult to pin down an exact number and state that you need exactly X IOPS for an application.
You also need to remember that any volume under 1TB in size can still burst up to 3,000 IOPS for several seconds. So even if your application needs high IOPS when it is in use, if it doesn't see much usage the IOPS burst feature might be all it ever needs.
In general I usually start with something like a 100GB volume with 300 IOPS and test the performance of my app against that. A web server that operates entirely within RAM might never need more than that. For something like a database you would probably start out with the amount of disk space you think you will need and then start performance testing. CloudWatch will show the amount of IOPS your application is using, and if you see it maxing out at the limits of your volume then you would know you need to increase the available IOPS. Rinse and repeat until you no longer max out the available IOPS during your performance tests.
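As a sketch of that measurement step: summing the VolumeReadOps and VolumeWriteOps CloudWatch metrics over a window and dividing by its length gives the average IOPS you're actually consuming, which you can compare against the volume's limit (the volume ID is a placeholder):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)
period = 300  # window length in seconds

total_ops = 0.0
for metric in ("VolumeReadOps", "VolumeWriteOps"):
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric,
        Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder
        StartTime=now - timedelta(seconds=period),
        EndTime=now,
        Period=period,
        Statistics=["Sum"],
    )
    total_ops += sum(p["Sum"] for p in resp["Datapoints"])

print(f"average IOPS over the last {period}s: {total_ops / period:.0f}")
```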
@Mark B's answer is probably correct, in that it points out your IOPS are based on the size of your EBS volume. For what you want, S3 is the best option.
But depending on your use case and requirements, EBS may be needed. This is especially true if you want to run a database. In that case, you have a couple of options.
You can get Provisioned IOPS: if you know you need 5,000 IOPS but only need, say, 100 GB of storage (which with gp2 would normally provide around 300 IOPS), you can use io1 volumes. There is an extra cost to this, and you'll want to make sure it's attached to an EBS-optimized instance, but you can get up to 20K IOPS if needed.
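A minimal boto3 sketch of that option; the size, IOPS figure, and Availability Zone are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# io1 lets you set IOPS independently of size: here 5,000 IOPS on 100 GiB,
# where gp2 would only give a 300 IOPS baseline.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=100,                       # GiB
    VolumeType="io1",
    Iops=5000,
)
print(volume["VolumeId"])
```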
If you're doing a lot of sequential reads (reading in a large data set?) then there's a newer type of EBS, st1. This is good for 500 MB/s, and is less than half the cost of gp2.
Finally, there's one other scenario you could consider (say, you're a bit of a madman and want to try doing strange things). If you can grab an archive from somewhere, and all you care about is serving the files up from a really fast file system, you could put them on an instance that has instance storage. This is a locally attached SSD, so it's very fast. The only drawback is that when your instance stops, your data is gone.
To address your update, "how many IOPS do you need for a database", the answer is "it depends". Every database engine has different requirements, and every database use has different usage patterns. Take a look at this if you want more information. But basically: test and monitor. If you're worried, over-provision at launch and scale down as needed. Or take a guess and increase if you run into problems. Is it more important to minimize costs, or to provide good performance to your end users?
As per your use case, S3 is a better option, but if you want to use an EBS volume and think you require more IOPS, you can choose the gp3 volume type instead of gp2. With a gp3 volume, you can increase IOPS up to 16,000 independently of throughput (and throughput can be increased up to 1,000 MiB/s independently of IOPS).
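A minimal boto3 sketch of converting an existing volume to gp3 in place; the volume ID and target figures are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# gp3 decouples IOPS and throughput from size: up to 16,000 IOPS
# and 1,000 MiB/s, each set independently.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder
    VolumeType="gp3",
    Iops=6000,        # independent of volume size
    Throughput=500,   # MiB/s, independent of IOPS
)
```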
General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 16,000 IOPS (at 5,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. AWS designs gp2 volumes to deliver 90% of the provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB.
Performance can also vary by instance type:
According to the AWS documentation, these instance types can support maximum performance for 30 minutes at least once every 24 hours. If you have a workload that requires sustained maximum performance for longer than 30 minutes, select an instance type according to its baseline performance.

AWS t2.medium performance issues after adding 32 gb volume

On AWS EC2 t2.medium instance we run a site http://www.pricesimply.com/
Our database is installed on the same machine.
By default we had 8 GB of storage and the site speed was lightning quick.
Then we added a 32 GB General Purpose (SSD) volume.
The only difference between these two volumes:
8 GB default volume - IOPS 24/3000
32 GB newly added volume - IOPS 96/3000
Volume type and availability zones are the same.
The site is MUCH SLOWER now compared to earlier.
Some random ideas:
1) Performance does vary between volumes. Do some benchmarks to see if it's really slower. (Unlikely, but possible.)
2) Perhaps the volume is a red herring: maybe your entire dataset was small enough to fit into RAM before, and now that you've expanded and grown, your data doesn't fit, creating constant I/O?
3) If the drive was created from a snapshot, it may be fetching your data from your snapshot in the background, slowing the drive.
Adding an additional disk shouldn't slow down your machine; you'll need to investigate a bit more to identify the bottleneck.
Poor performing infrastructure generally fits into one of three categories:
CPU: Check your CPU utilization to see if the t2.medium instance is a suitable size. Amazon CloudWatch can show you CPU history.
Memory (RAM): Your application may be short on memory, causing page swaps to disk. You'll need to monitor memory utilization from within your instance. (CloudWatch cannot see memory utilization.)
Disk IO: If you are reading and writing to disk a lot, then this could be your bottleneck. CloudWatch can give you some metrics, especially the Queue Length, which indicates that IO was waiting to be processed (a sketch for retrieving it follows at the end of this answer).
Once you've identified which of these three factors appears to be the bottleneck, try to improve them:
CPU: Use a larger instance type
Memory: Use an instance type with more RAM
Disk: Use a faster disk
You are using General Purpose (SSD) EBS volumes. These volumes have an IOPS (input/output operations per second) limit related to volume size. So, your "96/3000" volume gives a guaranteed 96 IOPS (about the speed of a magnetic hard disk) with the ability to burst up to 3,000 IOPS if you have enough IO 'credits'. If you are continually using more than 96 IOPS, you will run out of credits and be limited to 96 IOPS.
See: Amazon EBS Volume Types
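A minimal boto3 sketch for retrieving the queue-length metric mentioned above (the volume ID is a placeholder):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

# A consistently elevated VolumeQueueLength means I/O is waiting on the volume.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeQueueLength",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```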

AWS EC2 Volume Larger SSD or smaller volume with Provisioned IOPS

Just creating a couple of new small volumes for a new SQL Server's TempDB files. So I only need two 20 GB volumes, but noticed in the pricing that:
General Purpose (SSD) - 500 GB - 1500/3000 IOPS = $51.70
Provisioned IOPS (SSD) - 20 GB - 1500 IOPS = $110.76
But something deep down tells me that Provisioned IOPS must surely offer something more, such as a guarantee, whereas General Purpose gives you IOPS in the range of 1,500 but dependent on how much strain the rest of the volumes are under? Otherwise, what's the point of using the smaller volume with Provisioned IOPS?
Regards
Liam
For some applications, "predictable performance" is more important than "high performance", and PIOPS gives you both. One should always test, but for the use case you described, GP2 SSD really does seem more efficient.
You can get even more out of GP2 by combining volumes using RAID/LVM. For example, 1 x 1 TB => 3K IOPS max, while 2 x 500 GB => 6K IOPS max...
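A quick sketch of that arithmetic for an even RAID 0 stripe of gp2 volumes, using the baseline and burst rules discussed earlier on this page:

```python
def gp2_stripe_iops(volume_gib: int, count: int) -> tuple[int, int]:
    """Aggregate (baseline, burst) IOPS for `count` striped gp2 volumes."""
    baseline = max(100, min(3 * volume_gib, 16_000))
    # Volumes under 1 TB can burst to 3,000 IOPS each.
    burst = max(baseline, 3_000) if volume_gib < 1_000 else baseline
    return count * baseline, count * burst

print(gp2_stripe_iops(1_000, 1))  # (3000, 3000): one 1 TB volume
print(gp2_stripe_iops(500, 2))    # (3000, 6000): two 500 GB volumes striped
```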

Mysql replication & Amazon EC2 SSD vs. Magnetic Disk

We have a replication setup on Amazon EC2 that used magnetic disks (15 GB) that we swapped for SSD disks (15 GB) for each of the replication servers. We noticed that the slaves would fall behind the master and never catch up with these new SSD disks. This is something that never happened with the magnetic disks but happened on each and every SSD disk.
We decided to try moving the databases back to magnetic disks after the SSD disks fell more than 2 days behind. Within 2 hours the slave had completely caught up.
I thought SSD disks were more efficient and all-around better than magnetic disks, and that is why Amazon decided to make them standard.
Another bit of information: we are using micro instances, but the only change we made was the attached disk.
Anyone have any ideas?
I think you might be bumping up against the max IOPS for your 15 GB SSD drive. Amazon only allows an average of 45 IOPS for that disk size. On a magnetic drive, there is an extra charge when you use more IOPS; it does not seem to be throttled in the same way.
From Amazon's pop-up on IOPS:
The number of input-output operations per second. For Provisioned IOPS (SSD) volumes, you can specify the IOPS rate when you create the volume. The ratio of IOPS provisioned and the volume size requested can be a maximum of 30 (in other words, a volume with 3000 IOPS must be at least 100 GB). General Purpose (SSD) volume types have a baseline IOPS of volume size X 3 and can burst up to 3000 IOPS for 30 minutes.
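A quick sanity check of that 30:1 ratio; the constant comes from the documentation quoted above and has changed in newer volume generations, so treat it as an assumption:

```python
MAX_IOPS_PER_GB = 30  # ratio from the quoted docs; newer volume types differ

def min_piops_volume_gb(iops: int) -> int:
    """Smallest Provisioned IOPS volume size allowed for a requested IOPS figure."""
    return -(-iops // MAX_IOPS_PER_GB)  # ceiling division

print(min_piops_volume_gb(3000))  # 100 GB, matching the quoted example
# And the gp2 side of the comparison: a 15 GB volume's baseline is 3 * 15 = 45 IOPS.
```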