Why is it possible to go beyond DynamoDB burst capacity? - amazon-web-services

I have created a DynamoDB table with 1 RCU (manual provisioned capacity).
I have inserted some items to read in that table.
I can launch a scan on my table (which consumes 82 RCUs according to the response).
I understand this is possible because of the burst capacity.
What I don't understand, though, is why I am able to keep consuming huge numbers of RCUs for long periods of time.
As you can see in this screenshot, despite the provisioned capacity being 1 RCU, I have been
consuming around 150 to 200 RCUs per minute for more than an hour (the 1 RCU red line is barely visible at the bottom).
Why is that? (Some of the requests are throttled, of course, but why so few?)

How much data do you have in that table?
When you run a Scan from the console, it reads items from the table, and that read consumes RCUs.
There are options to configure baseline read/write capacity units and to enable auto scaling if you expect variable read/write traffic. If the load starts to increase, the DynamoDB service will gradually scale to fulfil those requests instead of throttling them; a minimal example of wiring this up is sketched after the link below.
https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/
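A minimal sketch of setting that up through the Application Auto Scaling API with boto3 (the table name, capacity bounds, and target utilization are illustrative assumptions, not values from the question):

    import boto3

    # Application Auto Scaling is the service that manages DynamoDB auto scaling.
    autoscaling = boto3.client("application-autoscaling")

    resource_id = "table/MyTable"   # hypothetical table name

    # Register the table's read capacity as a scalable target.
    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId=resource_id,
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=1,
        MaxCapacity=100,
    )

    # Scale so that consumed read capacity stays around 70% of provisioned.
    autoscaling.put_scaling_policy(
        PolicyName="my-table-read-scaling",
        ServiceNamespace="dynamodb",
        ResourceId=resource_id,
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )

The same pair of calls with the dynamodb:table:WriteCapacityUnits dimension (and the corresponding write utilization metric) covers the write side.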

Related

Do I have to wait for 30 minutes when an on-demand DynamoDB table is throttled?

I am using an on-demand DynamoDB table and I have read the doc https://aws.amazon.com/premiumsupport/knowledge-center/on-demand-table-throttling-dynamodb/. It says "You might experience throttling if you exceed double your previous traffic peak within 30 minutes." That suggests DynamoDB adjusts the RCU/WCU based on the last 30 minutes.
Let's say my table is throttled: do I have to wait up to 30 minutes until the table adjusts its RCU/WCU? Or does the table update its capacity immediately, or within a few minutes?
The reason I am asking is that I'd like to add a retry in my application code for DB actions whenever there is a throttle. How should I choose the sleep interval between retries?
An on-demand table always manages capacity to support double any previous peak throughput, but if you grow faster than that, the table will add physical capacity (physical partitions).
When DynamoDB adds partitions, it can take between 5 and 30 minutes for that capacity to become available for use.
It has nothing to do with RCUs/WCUs, because on-demand tables don't have provisioned capacity units.
Note: you may stay throttled if you've designed a hot partition key in either the base table or a GSI.
During the throttle period, requests are still being handled (and handled at a good rate). It's just like seeing a line at the grocery store checkout: you get in line. Don't design the code to come back in 30 minutes hoping there's no line once more checkers have been added. The grocery store will be "adding checkers" when it notices the load is high, but it keeps processing the existing work in the meantime. A simple retry with exponential backoff, as sketched below, is usually all you need.
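For the retry itself, a minimal sketch with exponential backoff and jitter, assuming boto3 and a hypothetical put_item call (the table name, error handling, and backoff parameters are illustrative):

    import random
    import time

    import boto3
    from botocore.exceptions import ClientError

    dynamodb = boto3.client("dynamodb")

    def put_with_backoff(item, max_attempts=8):
        """Retry a throttled PutItem with exponential backoff plus jitter."""
        for attempt in range(max_attempts):
            try:
                return dynamodb.put_item(TableName="MyTable", Item=item)
            except ClientError as err:
                code = err.response["Error"]["Code"]
                if code not in ("ProvisionedThroughputExceededException",
                                "ThrottlingException"):
                    raise
                # Sleep roughly 0.1s, 0.2s, 0.4s, ... capped near 20s, with full jitter.
                time.sleep(min(20, 0.1 * 2 ** attempt) * random.random())
        raise RuntimeError(f"still throttled after {max_attempts} attempts")

Note that the AWS SDKs already retry throttled requests a few times on their own; this wrapper just extends the waiting instead of parking the work for 30 minutes.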

Is it possible to write to DynamoDB only when spare capacity is available?

I am working on an application which receives very predictable, heavy traffic during working hours. Users typically interact with the app for about 40 minutes at a time. DynamoDB table A receives a steady stream of writes throughout user sessions and handles things without difficulty. We attempt to write a large amount of data to table B at the end of each session, however, and early in the day this can result in throttling. Our tables are billed on-demand (no, this is not something I am able to change), but the sudden spike in writes still causes throttling, which is expected.
The data being written to table A is both critical and time-sensitive. The data going to table B is critical and must not be lost, but delays of a few hours in its availability are acceptable, though not ideal. So I'm looking for a way to say "please write this to the table ASAP, but only as long as it won't cause throttling". Provisioning for the expected capacity is not an option (don't ask). An SQS queue with a long message delay doesn't really fit the bill because (a) 15 minutes may not be long enough and (b) it doesn't meet the "ASAP" part of the story. I've considered pre-warming the table, but that just feels kludgy.
So... you take all the expected ways to handle this that AWS designed and provides, and say you can't use them. That... doesn't leave you many options.
You're pretty much left with designing some custom architecture. Throttling, provisioning, burst capacity, and on-demand mode are all part of the package for handling these kinds of bursts. If you can't use them, then you'll have to do something like write each entry as JSON to an S3 bucket and have a scheduled event pick them up later (an hour or so afterwards, a batch at a time) and batch-write them to the table. A rough sketch of that drain job is shown after this answer.
You may also want to take a look at how your table is arranged. If you have to make a lot of writes all at once (i.e. because you have to duplicate data across multiple PK/SK combinations in order to be able to recall it with a single query), then an RDS database may be better suited for the task at hand. DynamoDB is more for quick and snappy queries, and not really for extended data logging or storage.
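A rough sketch of that buffer-and-drain idea, assuming the session data has already been written to S3 as individual JSON objects under a hypothetical pending/ prefix and that a scheduled Lambda or cron job calls this periodically (the bucket name, table name, and prefix are all illustrative):

    import json
    from decimal import Decimal

    import boto3

    s3 = boto3.resource("s3")
    table = boto3.resource("dynamodb").Table("TableB")   # hypothetical table name
    bucket = s3.Bucket("my-pending-writes")              # hypothetical bucket name

    def drain_pending_writes():
        """Move buffered JSON records from S3 into DynamoDB in batches."""
        pending = list(bucket.objects.filter(Prefix="pending/"))
        # batch_writer groups puts into BatchWriteItem calls of up to 25 items
        # and resends any unprocessed items for us.
        with table.batch_writer() as batch:
            for obj in pending:
                # The DynamoDB resource API expects Decimal rather than float.
                record = json.loads(obj.get()["Body"].read(), parse_float=Decimal)
                batch.put_item(Item=record)
        # Only delete the buffered records once the batch has been flushed.
        for obj in pending:
            obj.delete()

Spreading the schedule out, or draining only a slice of the prefix per run, is what keeps the write rate below whatever the table will tolerate at that moment.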
Here's the secret to DDB on-demand...
From the page you linked to
For new on-demand tables, you can immediately drive up to 4,000 write request units or 12,000 read request units, or any linear combination of the two. For an existing table that you switched to on-demand capacity mode, the previous peak is half the previous provisioned throughput for the table, or the settings for a newly created table with on-demand capacity mode, whichever is higher. For more information, see Initial throughput for on-demand capacity mode.
And the Initial throughput for on-demand capacity mode page says:
Initial Throughput for On-Demand Capacity Mode
If you recently switched an existing table to on-demand capacity mode for the first time, or if you created a new table with on-demand capacity mode enabled, the table has the following previous peak settings, even though the table has not served traffic previously using on-demand capacity mode:
Newly created table with on-demand capacity mode: The previous peak is
2,000 write request units or 6,000 read request units. You can drive
up to double the previous peak immediately, which enables newly
created on-demand tables to serve up to 4,000 write request units or
12,000 read request units, or any linear combination of the two.
Existing table switched to on-demand capacity mode: The previous peak
is half the maximum write capacity units and read capacity units
provisioned since the table was created, or the settings for a newly
created table with on-demand capacity mode, whichever is higher. In
other words, your table will deliver at least as much throughput as it
did prior to switching to on-demand capacity mode.
The key thing to realize is that DDB on-demand "peaks" are never lowered.
So if you have a table that at some point peaked at 20K WCU, you can scale cleanly from 1-20K without throttling.
In other words, you shouldn't continue to see throttling in an app unless you hit a new peak.
You can also artificially set the peak by switching the table to provisioned mode at double the expected peak. Then when you convert it back to on-demand, you'll have a "peak" set at half that provisioned capacity, as the sketch below illustrates.
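A minimal sketch of that pre-warming trick with boto3 (the table name and capacity numbers are illustrative; note that DynamoDB limits how often a table can switch billing modes, typically once per 24 hours, so the switch back cannot happen immediately):

    import boto3

    dynamodb = boto3.client("dynamodb")
    table_name = "MyTable"   # hypothetical table name

    # Step 1: switch to provisioned mode at double the peak you want "remembered".
    dynamodb.update_table(
        TableName=table_name,
        BillingMode="PROVISIONED",
        ProvisionedThroughput={
            "ReadCapacityUnits": 12000,
            "WriteCapacityUnits": 4000,
        },
    )
    dynamodb.get_waiter("table_exists").wait(TableName=table_name)

    # Step 2 (later, once a billing-mode change is allowed again): switch back
    # to on-demand. The table's "previous peak" is now half the provisioned
    # capacity above, so it can immediately serve double that peak.
    dynamodb.update_table(
        TableName=table_name,
        BillingMode="PAY_PER_REQUEST",
    )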

DynamoDB write is too slow

I use Lambda to read from a JSON API over HTTP and write the results to DynamoDB. The JSON API is very big (it has 200k objects) and my function is extremely slow at writing to DynamoDB. Using the regular write function, after 10 minutes I had only populated 5k rows in my DynamoDB table. I was thinking about using BatchWriteItem, but since it can only do 25 puts in one batch, it would still take too much time to write all 200k rows. Is there any better solution?
This will be because you're being throttled.
For Lambda
There is a maximum number of concurrent Lambda invocations that can run at a time; the default limit is 1,000 concurrent executions.
If you need more than 1,000 concurrent executions at the same time, you will have to ask AWS Support to increase the limit, and you will also need to provide a business use case for why you need it.
For DynamoDB
Whether you use batch or single PutItem calls, your DynamoDB table is configured with a number of WCUs (Write Capacity Units) and RCUs (Read Capacity Units).
A single write capacity unit covers one write of an item of 1 KB or less (each additional KB consumes another unit). If you exceed this, your write requests will start to be throttled; if you're using the SDK, it may also use exponential backoff to keep attempting the write.
As a solution, you should do one of the following:
If this is a one-time process, raise the WCU to a fixed number, wait about 5 minutes for the increase to take effect, load your data, and then scale back down.
If this is a natural flow in your app, enable DynamoDB auto scaling to increase and decrease capacity naturally throughout the day.
In addition, look at your data modelling, as this can lead to throttling too.
In extreme cases, throttling can occur if a single partition receives more than 3,000 RCUs or 1,000 WCUs. A sketch of a batched bulk load is shown below.
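For the bulk load itself, a rough sketch using boto3's batch_writer, which groups puts into 25-item BatchWriteItem calls and resends unprocessed items for you (the table name and the shape of the source data are assumptions):

    import boto3

    table = boto3.resource("dynamodb").Table("MyTable")   # hypothetical table name

    def bulk_load(items):
        """Write a large iterable of dict items using automatic batching."""
        # batch_writer buffers puts, sends them as BatchWriteItem requests of up
        # to 25 items, and retries any unprocessed items returned by DynamoDB.
        # Note: with the resource API, numeric values must be Decimal, not float.
        with table.batch_writer() as batch:
            for item in items:
                batch.put_item(Item=item)

    # Example: items parsed from the JSON API elsewhere in the Lambda.
    # bulk_load(api_objects)

Batching reduces per-request overhead, but the table's write capacity (or on-demand scaling) still has to cover the total write rate, so pair this with one of the capacity options above.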

Query on DynamoDB Provisioned Throughput WCU/RCU operation

I am trying to understand how DynamoDB provisioned throughput (RCU/WCU) works.
I tried 2 scenarios where I changed the WCU (1,000 and 10,000), but the consumed WCU figure I get is the same, i.e. 809.63.
In a nutshell, I have 123 records distributed across 5 files, each record being 400 KB (the DynamoDB item size limit). When executing these cases there was no throttling, and the strange thing is that the script execution time is the same, i.e. 6 seconds, even though I changed the WCU count to 1k and 10k respectively.
My question is why it behaves like this. I would like to hear your comments on this.
My assumption was that if I decrease/increase the WCU count, I should see a change in script execution time, which is not the case.
DynamoDB scenario tests:
WCU/RCU settings do not increase the speed of a DynamoDB response; they only set an upper limit on capacity usage.
Read and write capacity units are, as the name suggests, capacity units. They indicate the upper limit of how much read/write capacity your table can handle. What this means is that, in your case, since you are consuming 809.63 WCUs, you won't get any throttled requests as long as your WCU is set above 810. However, if you lower your WCU to 800, you will start seeing your requests being throttled.
If you have consistent TPS and know how many capacity units you will be using, then set just the amount that you will require. In your case, 1k WCU seems sufficient and will not make any difference compared to 10k in terms of performance, unless you use more than 1k WCU, in which case you can provision more capacity or implement auto scaling to handle it.
See here for more information: Documentation
Edit: As discussed in the comments below, if you use more capacity than is provisioned, DynamoDB will temporarily allow a burst of capacity to support it for up to 5 minutes, which can lead to varying results in terms of throttling.
Before answering, many thanks to Deiv & Stu for finding this evidence.
DynamoDB can consume up to 300 seconds of unused throughput in burst capacity.
The maximum item size in DynamoDB is 400KB and 1 RCU gives you a read of up to 4KB.
Let's say you want to read an item that is 400 KB in size and you have 1 RCU on your table. You could retrieve that item once every 100 seconds (a strongly consistent read of 400 KB costs 100 RCUs).
Because of burst capacity there will always be a time when you can read that item, because in fact you can use up to 300 RCUs in one go, not just 1.
Imagine starting the table with that 400 KB item. You need to wait 100 seconds without spending any RCUs so that you've earned enough burst capacity to read the item. After 101 seconds you make the request, spend 100 RCUs and get the item. After another 5 seconds you make the request again, but get denied with a throttling exception.
So no, DynamoDB will not increase request latency to meet your RCU provision. It either returns your results as fast as possible, or throws an exception.
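A tiny worked example of that arithmetic, just restating the scenario above in code:

    # Burst capacity retains up to 300 seconds of unused throughput.
    provisioned_rcu = 1                # RCUs per second on the table
    burst_window_s = 300               # seconds of unused throughput retained
    item_size_kb = 400                 # maximum DynamoDB item size
    rcu_per_read = item_size_kb / 4    # strongly consistent read: 1 RCU per 4 KB

    max_burst_rcu = provisioned_rcu * burst_window_s        # 300 RCUs at most
    seconds_between_reads = rcu_per_read / provisioned_rcu  # 100 s to earn one read

    print(max_burst_rcu, seconds_between_reads)   # prints: 300 100.0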

aws dynamo db throughput

There's something which I cant understand about AWS DynamoDb throughput.
Lets consider strongly consistent reads.
Now, I understand that in this case, 1 unit of capacity means I can read up to 4 KB per second.
It's the "per second" bit that slightly confuses me. If you know exactly how quickly you want to read data then you can set the units appropriately. But what if you're not too fussy about the read time?
Say I have only 1 read unit assigned to my table and I try to read an item which is larger than 4 KB. Surely that just means my read is going to take more than 1 second? That would be fine, but the documentation talks about requests failing. How can AWS determine that I used too many units when I didn't ask for the data to be read within a particular time?
Maybe I am missing something obvious. Can someone help clear this up?
DynamoDB can consume up to 300 seconds of unused throughput in burst capacity.
The maximum item size in DynamoDB is 400KB and 1 RCU gives you a read of up to 4KB.
Let's say you want to read an item that is 400 KB in size and you have 1 RCU on your table. You could retrieve that item once every 100 seconds (a strongly consistent read of 400 KB costs 100 RCUs).
Because of burst capacity there will always be a time when you can read that item, because in fact you can use up to 300 RCUs in one go, not just 1.
Imagine starting the table with that 400 KB item. You need to wait 100 seconds without spending any RCUs so that you've earned enough burst capacity to read the item. After 101 seconds you make the request, spend 100 RCUs and get the item. After another 5 seconds you make the request again, but get denied with a throttling exception.
So no, DynamoDB will not increase request latency to meet your RCU provision. It either returns your results as fast as possible, or throws an exception.
EDIT: By the way, I should mention that all AWS DynamoDB SDKs handle throttling exceptions for you. If you try to read an item but get denied because you don't have enough throughput available, the SDK backs off and tries again. So unless your table really is under-provisioned, you shouldn't have to worry about handling throttling exceptions yourself.
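For example, with boto3 that built-in behaviour can be tuned through the client retry configuration (a sketch; the retry mode, attempt count, table name, and key are illustrative):

    import boto3
    from botocore.config import Config

    # The SDK already retries throttled calls; this just tunes how it does so.
    retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

    dynamodb = boto3.resource("dynamodb", config=retry_config)
    table = dynamodb.Table("MyTable")   # hypothetical table name

    # A throttled GetItem is retried with backoff before any error surfaces.
    response = table.get_item(Key={"pk": "some-id"})
    print(response.get("Item"))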