How is Amazon DynamoDB throughput calculated and limited? - amazon-web-services

Is it averaged per second? Per minute? Per hour?
For example: if I pay for 10 "read units", which allows for 10 highly consistent reads per second, will I be throttled if I try to do 20 reads in a single second, even if those were the only 20 reads that occurred in the last hour? The Amazon documentation and FAQ do not answer this critical question anywhere that I could find.
The only related response I could find in the FAQ completely ignores the issue of how usage is calculated and when throttling may happen:
Q: What happens if my application performs more reads or writes than
my provisioned capacity?
A: If your application performs more
reads/second or writes/second than your table’s provisioned throughput
capacity allows, requests above your provisioned capacity will be
throttled and you will receive 400 error codes. For instance, if you
had asked for 1,000 write capacity units and try to do 1,500
writes/second of 1 KB items, DynamoDB will only allow 1,000
writes/second to go through and you will receive error code 400 on
your extra requests. You should use CloudWatch to monitor your request
rate to ensure that you always have enough provisioned throughput to
achieve the request rate that you need.

It appears that they track writes in a five minute window and will throttle you when your average over the last five minutes exceeds your provisioned throughput.
I did some testing. I created a test table with throughput of 1 write/second. If I don't write to it for a while and then send a stream of requests, Amazon seems to accept about 300 before it starts throttling.
The caveat, of course, is that this is not stated in any official Amazon documentation and could change at any time.
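The ~300 accepted writes line up with a simple model: the table banks unused provisioned throughput for a limited window. This is a back-of-envelope sketch of that observation, assuming a 5-minute (300-second) window; the window size is not officially documented and could change.

```python
def burst_pool(provisioned_per_sec, idle_seconds, window_seconds=300):
    """Estimate capacity banked after an idle period, capped at the window."""
    return provisioned_per_sec * min(idle_seconds, window_seconds)

# A table provisioned at 1 write/sec, idle for 10 minutes, banks at most
# 300 writes -- matching the ~300 requests accepted before throttling.
print(burst_pool(1, 600))   # -> 300
```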

DynamoDB provides 'burst capacity', which allows for spikes in the amount of data read from a table. You can read more about it under: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html#GuidelinesForTables.Bursting
Basically it's what @abjennings noticed: DynamoDB uses a 5-minute window to average the number of reads from a table.

If I pay for 10 "read units" which allows for 10 highly consistent
reads per second, will I be throttled if I try to do 20 reads in a
single second, even if it was the only 20 reads that occurred in the
last hour?
Yes. This follows from the very concept of Amazon DynamoDB: fast, predictable performance with seamless scalability. The quoted FAQ actually addresses this correctly already (i.e. you have to take operations/second literally), though the calculation is better illustrated in Provisioned Throughput in Amazon DynamoDB:
A unit of Write Capacity enables you to perform one write per second
for items of up to 1KB in size. Similarly, a unit of Read Capacity
enables you to perform one strongly consistent read per second (or two
eventually consistent reads per second) of items of up to 1KB in size.
Larger items will require more capacity. You can calculate the number
of units of read and write capacity you need by estimating the number
of reads or writes you need to do per second and multiplying by the
size of your items (rounded up to the nearest KB).
Units of Capacity required for writes = Number of item writes per second x item size (rounded up to the nearest KB)
Units of Capacity required for reads* = Number of item reads per second x item size (rounded up to the nearest KB)
* If you use eventually consistent reads you’ll get twice the throughput in terms of reads per second.
[emphasis mine]
Getting these calculations right for real-world use cases can be complex though, so please make sure to check further details such as the Provisioned Throughput Guidelines in Amazon DynamoDB.
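The quoted formulas can be expressed directly. This is a sketch of the formulas as quoted above, which use 1 KB units for both reads and writes (note that current DynamoDB documentation uses 4 KB units for reads); the function names are illustrative.

```python
import math

def write_units(writes_per_sec, item_size_kb):
    """Write capacity units: rate x item size rounded up to the nearest KB."""
    return writes_per_sec * math.ceil(item_size_kb)

def read_units(reads_per_sec, item_size_kb, eventually_consistent=False):
    """Read capacity units; eventually consistent reads give double throughput."""
    units = reads_per_sec * math.ceil(item_size_kb)
    return units / 2 if eventually_consistent else units

print(write_units(10, 1.5))      # 10 writes/sec of 1.5 KB items -> 20
print(read_units(10, 1, True))   # eventually consistent -> 5.0
```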

My guess would be that they don't state it explicitly on purpose. It's probably liable to change, have regional differences, depend on the position of the moon and stars, or releasing the information would encourage abuse. I would do my calculations on a worst-case basis.

From AWS :
DynamoDB currently retains up to five minutes (300 seconds) of unused read and write capacity
DynamoDB provides some flexibility in the per-partition throughput provisioning. When you are not fully utilizing a partition's throughput, DynamoDB retains a portion of your unused capacity for later bursts of throughput usage. DynamoDB currently retains up to five minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed very quickly—even faster than the per-second provisioned throughput capacity that you've defined for your table. However, do not design your application so that it depends on burst capacity being available at all times: DynamoDB can and does use burst capacity for background maintenance and other tasks without prior notice.

We set our write limit to 10 units/sec for one of the tables. The CloudWatch graph (see image) shows we exceeded this by one unit (11 writes/sec). I'm assuming there's a small amount of wiggle room (<= 10%). Again, I'm just assuming...

https://aws.amazon.com/blogs/developer/rate-limited-scans-in-amazon-dynamodb/
Using the RateLimiter class from the Google Guava library to limit your consumed capacity is possible.
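The linked AWS post uses Guava's RateLimiter in Java. As a rough Python analogue, a minimal token bucket can serve the same purpose; the class and method names here are illustrative sketches, not any AWS or Guava API. Note that `acquire` assumes `permits` never exceeds `permits_per_sec`.

```python
import time

class RateLimiter:
    """Minimal token-bucket limiter: hand out `permits_per_sec` units per second."""

    def __init__(self, permits_per_sec):
        self.rate = permits_per_sec
        self.tokens = permits_per_sec   # start with a full bucket
        self.last = time.monotonic()

    def acquire(self, permits=1.0):
        """Block until `permits` capacity units are available, then consume them."""
        while True:
            now = time.monotonic()
            # refill, capped at one second's worth of capacity
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= permits:
                self.tokens -= permits
                return
            time.sleep((permits - self.tokens) / self.rate)

# After each page of a Scan, feed back the reported ConsumedCapacity:
# limiter = RateLimiter(25)                    # stay under 25 RCU/sec
# limiter.acquire(page_consumed_capacity_units)
```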

Related

Do multiple smaller SQL queries result in more IOPs & more cost using Amazon Aurora?

Amazon Aurora pricing page mentions that:
For I/O charges, let’s assume the same database reads 100 data pages
from storage per second to satisfy the queries running on it. This
would result in 262.8 million read I/Os per month (100 pages per
second x 730 hours x 60 minutes x 60 seconds).
What is meant by "data pages" here?
Similarly, let’s assume your application makes changes to the database affecting an average of 10 data pages per second. Aurora will charge one I/O operation for up to 4 KB of changes on each data page. If the volume of data changed per page is less than 4 KB, this would result in 10 write I/Os per second.
Do multiple smaller SQL queries result in more IOPs than a single large SQL query?
What is meant by "data pages" here?
Each database page is 16 KB for MySQL-compatible Aurora & 8 KB for PostgreSQL-compatible Aurora (source: Amazon Aurora FAQs).
Do multiple smaller SQL queries result in more IOPs than a single large SQL query?
Not necessarily but it is possible.
The key here is to optimise your queries to read/write only as much as you need, and not to split them unnecessarily.
Too many small writes of less than 4KB each will mean that you pay more in the long run for no reason; you'll be better off making changes of at least 4KB or more to get the most 'bang for your buck'.
Example
Let's say we want to write 22KB of data to the database.
If done in one query, you would be charged for 6 I/O operations.
-> 22KB / 4KB = 5, remainder 2
-> 5 I/O operations with an extra 1 I/O op. (to account for the remaining 2KB left over)
If done in 5 different queries, split by you, you would also be charged for only 6 I/O operations (hence why I said not necessarily).
However, if done in queries split so that each query is less than 4KB, you would then be paying more than needed as you would be consuming more I/O operations.
e.g. if your queries each write only 2KB to the database, you would theoretically¹ be charged for 11 I/O operations, which would be an extra 5 I/O operations.
-> 22KB / 2KB = 11 I/O operations
¹ As the documentation mentions, the number of write operations can potentially be less, but that is subject to internal Aurora write I/O optimizations that can combine write operations of less than 4 KB in size under certain circumstances. In other words, Aurora may combine operations for you, but it is not guaranteed.
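The 22KB example above can be checked with a one-line model of the billing rule described (one write I/O per 4 KB of changes per query, ignoring Aurora's non-guaranteed merging of small writes); the function name is illustrative.

```python
import math

def write_ios(change_sizes_kb):
    """Billed write I/Os: each change is rounded up to a multiple of 4 KB."""
    return sum(math.ceil(kb / 4) for kb in change_sizes_kb)

print(write_ios([22]))       # one 22 KB change     -> 6
print(write_ios([2] * 11))   # eleven 2 KB changes  -> 11
```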

AWS CloudWatch interpreting insights graph -- how many read/write IOs will be billed?

Introduction
We are trying to "measure" the cost of usage of a specific use case on one of our Aurora DBs that is not used very often (we use it for staging).
Yesterday at 18:18 hrs. UTC we issued some representative queries to it and today we were examining the resulting graphs via Amazon CloudWatch Insights.
Since we are being billed USD 0.22 per million read/write IOs, we need to know how many of those there were during our little experiment yesterday.
A complicating factor is that in Cost Explorer it is not possible to group the final billed costs for read/write IOs per DB instance! Therefore, the only thing we can think of to estimate the cost is the read/write volume IO graphs in CloudWatch Insights.
So we went to CloudWatch Insights and selected the graphs for read/write IOs. Then we selected the period of time in which we did our experiment. Finally, we examined the graphs with different options: "Number" and "Lines".
Graph with "number"
This shows us the picture below, suggesting a total billable IO count of 266 + 510 = 776. Since we have chosen the "Sum" metric, we assume this would indicate a cost of about USD 0.00017 in total.
Graph with "lines"
However, if we choose the "Lines" option, we see another picture, with 5 points on the line: the first and last around 500 (for read IOs) and the last one at approx. 750, suggesting a total of 5000 read/write IOs.
Our question
We are not really sure which interpretation to go with and the difference is significant.
So our question is now: How much did our little experiment cost us and, equivalently, how to interpret these graphs?
Edit:
Using 5 minute intervals (as suggested in the comments) we get (see below) a horizontal line with points at 255 (read IOs) for a whole hour around the time we did our experiment. But the experiment took less than 1 minute at 19:18 (UTC).
Will the (read) billing be for 12 * 255 IOs, or 255 ... (or something else altogether)?
Note: This question triggered another follow-up question created here: AWS CloudWatch insights graph — read volume IOs are up much longer than actual reading
From Aurora RDS documentation
VolumeReadIOPs
The number of billed read I/O operations from a cluster volume within
a 5-minute interval.
Billed read operations are calculated at the cluster volume level,
aggregated from all instances in the Aurora DB cluster, and then
reported at 5-minute intervals. The value is calculated by taking the
value of the Read operations metric over a 5-minute period. You can
determine the amount of billed read operations per second by taking
the value of the Billed read operations metric and dividing by 300
seconds. For example, if the Billed read operations returns 13,686,
then the billed read operations per second is 45 (13,686 / 300 =
45.62).
You accrue billed read operations for queries that request database
pages that aren't in the buffer cache and must be loaded from storage.
You might see spikes in billed read operations as query results are
read from storage and then loaded into the buffer cache.
Imagine AWS reports these data points every 5 minutes:
[100, 150, 200, 70, 140, 10]
and you used the Sum statistic over 15-minute periods, like what you had in the image.
Edit: First, the "number" visualization represents the whole selected duration, aggregated; in this case that would be the total of (100+150+200+70+140+10) = 670.
The "line" visualization will represent all the aggregated groups, which would in this case be 2 points: (100+150+200) = 450 and (70+140+10) = 220.
It can be a little hard to understand at first if you are not used to data points and aggregations. So I suggest that you set your "line" chart to Sum of 5 minutes, take the value of each point and divide by 300 as suggested by the doc, then sum them all.
Added images for easier visualization
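The two visualizations above can be reproduced on the example data points; this sketch models the aggregation behaviour described, it is not a CloudWatch API call.

```python
# Hypothetical 5-minute data points, as in the example above.
points = [100, 150, 200, 70, 140, 10]

# "Number" with Sum over the full selection: one grand total.
number_view = sum(points)

# "Lines" with Sum of 15 minutes: one point per group of three 5-minute points.
line_view = [sum(points[i:i + 3]) for i in range(0, len(points), 3)]

print(number_view)   # -> 670
print(line_view)     # -> [450, 220]
```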

Why is my DynamoDB scan so fast with only 1 provisioned read capacity unit?

I made a table with 1346 items, each item being less than 4KB in size. I provisioned 1 read capacity unit, so I'd expect on average 1 item read per second. However, a simple scan of all 1346 items returns almost immediately.
What am I missing here?
This is likely down to burst capacity, in which you accrue capacity over a 300-second period to use for burstable actions (such as scanning an entire table).
This would mean that if you used all of these credits, other interactions would suffer, as they would not have enough capacity available to them.
You can see the amount of consumed WCU/RCU via either CloudWatch metrics or within the DynamoDB interface itself (via the Metrics tab).
You don't give a size for your entries except to say "each item being less than 4KB". How much less?
1 RCU will support 2 eventually consistent reads per second of items up to 4KB.
To put that another way, with 1 RCU and eventually consistent reads, you can read 8KB of data per second.
If your records are 4KB, then you get 2 records/sec
1KB, 8/sec
512B, 16/sec
256B, 32/sec
So the "burst" capability already mentioned allowed you to use 55 RCU.
But the small size of your records allowed that 55 RCU to return the data "almost immediately"
There are two things working in your favor here - one is that a Scan operation takes significantly fewer RCUs than you thought it did for small items. The other thing is the "burst capacity". I'll try to explain both:
The DynamoDB pricing page says that "For items up to 4 KB in size, one RCU can perform two eventually consistent read requests per second.". This suggests that even if the item is 10 bytes in size, it costs half an RCU to read it with eventual consistency. However, although they don't state this anywhere, this cost is only true for a GetItem operation to retrieve a single item. In a Scan or Query, it turns out that you don't pay separately for each individual item. Instead, these operations scan data stored on disk sequentially, and you pay for the amount of data thus read. If you 1000 tiny items and the total size that DynamoDB had to read from disk was 80KB, you will pay 80KB/4KB/2, or 10 RCUs, not 500 RCUs.
This explains why you read 1346 items, and measured only 55 RCUs, not 1346/2 = 673.
The second thing working in your favor is that DynamoDB has the "burst capacity" capability, described here:
DynamoDB currently retains up to 5 minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed quickly—even faster than the per-second provisioned throughput capacity that you've defined for your table.
So if your database existed for 5 minutes prior to your request, DynamoDB saved 300 RCUs for you, which you can use up very quickly. Since 300 RCUs is much more than you needed for your scan (55), your scan happened very quickly, without throttling.
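The Scan billing rule described above is easy to model: you pay per 4 KB of data read from disk, halved for eventually consistent reads. The function name is illustrative, and the 440 KB figure below is inferred by working backwards from the measured 55 RCUs (55 × 2 × 4 KB), not reported by AWS.

```python
import math

def scan_rcus(total_kb_read, eventually_consistent=True):
    """RCUs consumed by a Scan: 4 KB per RCU, halved for eventual consistency."""
    rcus = math.ceil(total_kb_read / 4)
    return rcus / 2 if eventually_consistent else rcus

print(scan_rcus(80))    # the 80 KB example above -> 10.0 RCUs, not 500
print(scan_rcus(440))   # ~440 KB of small items  -> 55.0, vs 1346/2 = 673
```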
When you do a query, the RCU count applies to the quantity of data read without considering the number of items read. So if your items are small, say a few bytes each, they can easily be queried inside a single 4KB RCU.
This is especially useful when reading many items from DynamoDB as well. It's not immediately obvious that querying many small items is far cheaper and more efficient than BatchGetting them.

Number of WCU equal to number of items to write in DynamoDB?

I have been struggling to understand the meaning of WCU in AWS DynamoDB Documentation. What I understood from AWS documentation is that
If your application needs to write 1000 items, where each item is of
size 0.2KB, then you need to provision 1000 WCU (i.e. 0.2KB rounds up
to the nearest 1KB, so 1000 items (to write) × 1 WCU = 1000 WCU)
If my above understanding is correct, then I am wondering: for those applications that need to write millions of records to DynamoDB per second, do those applications need to provision that many millions of WCU?
I'd appreciate it if you could clarify.
I've used DynamoDB in the past (and experienced scaling out the RCU and WCU for my application), and according to the AWS docs:
One write capacity unit represents one write per second for an item up
to 1 KB in size. If you need to write an item that is larger than 1
KB, DynamoDB will need to consume additional write capacity units. The
total number of write capacity units required depends on the item
size.
So it means that if you write a document of size 4.5 KB, it will consume 5 WCU; DynamoDB rounds it up to the next integer.
Also, your understanding
here each item is of size 0.2KB then you need to provision 1000 WCU
(i.e. 0.2KB rounds up to the nearest 1KB, so 1000 items (to write) ×
1 WCU = 1000 WCU)
is correct.
To save WCU, you need to design your system so that your document sizes sit close to (just under) a whole number of KB, since anything over the boundary is rounded up.
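The rounding behaviour described above can be written out directly; this is a sketch of the rule as quoted, and the function name is illustrative.

```python
import math

def wcus_needed(items_per_sec, item_size_kb):
    """WCUs: item size rounds up to the nearest 1 KB, then multiply by rate."""
    return items_per_sec * math.ceil(item_size_kb)

print(wcus_needed(1000, 0.2))   # 0.2 KB rounds up to 1 KB -> 1000
print(wcus_needed(1, 4.5))      # 4.5 KB rounds up to 5 KB -> 5
```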
Note: To avoid the large costs associated with DynamoDB, if you have lots of reads you can use caching on top of DynamoDB, which is also suggested by AWS and was implemented by us as well. (If your application is write-heavy, this approach will not work and you should consider some other alternative such as Elasticsearch.)
According to the http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html doc:
A caching solution can mitigate the skewed read activity for popular
items. In addition, since it reduces the amount of read activity
against the table, caching can help reduce your overall costs for
using DynamoDB.

does read(write) capacity unit define minimum execution time for read(write) operation

From aws docs:
One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for items up to 4 KB in size. If you need to read an item that is larger than 4 KB, DynamoDB will need to consume additional read capacity units.
Confused by the bold part: does this explicitly mean that reading something over 4KB is not possible if you have just 1 read capacity unit (probably not), or are they suggesting it will be terribly slow (probably)?
For example, having 1 read capacity unit defined on a table and needing to read (strongly consistent read) a 50KB item, does that mean DynamoDB will need 50/4 = 12.5 => so more than 12 seconds for a single read operation?
Basically yes; however, DynamoDB supports bursting. It 'saves' up to 300 seconds of unused capacity in a pool. If you have 1 read capacity unit provisioned and read an item of 9 KB (needing 3 read capacity units), you can still do this quickly because you have up to 300 units of burst capacity available. You can do this about 100 times until the burst capacity is depleted, and then you need to wait a while until the pool fills again.
See also the docs on burst capacity: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html#GuidelinesForTables.Bursting
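The back-of-envelope timing from the question can be checked like this. It is only a sketch of the sustained-rate maths (the function name is illustrative, and in practice burst capacity means the request usually completes immediately, with later requests throttled instead).

```python
import math

def seconds_to_read(item_kb, provisioned_rcus):
    """Strongly consistent read: 1 RCU per 4 KB, drained at the provisioned rate."""
    rcus = math.ceil(item_kb / 4)
    return rcus / provisioned_rcus

print(seconds_to_read(50, 1))   # 13 RCUs at 1 RCU/sec -> 13.0 seconds sustained
print(seconds_to_read(9, 1))    # 3 RCUs -> easily covered by the burst pool
```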