DynamoDB throttling write requests even though DB is under provisioned capacity - amazon-web-services

In the DynamoDB table, we are getting a lot of throttled write requests, at a rate of ~2,500/min. I'm confused about a few things here:
The consumed write capacity is much lower than the provisioned write capacity, e.g. consumed write capacity = 500 and provisioned write capacity = 800. Why is throttling still happening?
There is another metric called throttled write events, which is around 100/min. What does this mean, and how is it different from throttled write requests?
The capacity parameters I can change are target utilization, minimum provisioned capacity, and maximum provisioned capacity for writes. They all look fine to me. I am using auto scaling, so I'm not sure what I should increase to fix this issue.
What happens to the throttled write events? Will they result in exceptions in my code?

How much do the partition keys vary when you write your data?
I previously ran into a similar issue when inserting a lot of data for the same partition key across many consecutive BatchWriteItem calls. The problem there was that too many write requests hit the same partition key, overwhelming that partition, which caused the throttling.
The solution is to structure the data in a BatchWriteItem call so that the partition keys do not repeat as much, as in the sketch below.
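For illustration, here is a minimal sketch (assuming boto3 and a hypothetical table whose partition key attribute is named pk) that interleaves items across partition keys before writing, so consecutive writes do not keep hitting the same key:

    import itertools
    from collections import defaultdict

    import boto3

    table = boto3.resource("dynamodb").Table("my-table")  # hypothetical table, partition key "pk"

    def interleave_by_partition_key(items):
        # Round-robin items across partition keys so consecutive writes hit different keys.
        groups = defaultdict(list)
        for item in items:
            groups[item["pk"]].append(item)
        # zip_longest yields one item per partition key at a time; skip the None padding
        for batch in itertools.zip_longest(*groups.values()):
            yield from (item for item in batch if item is not None)

    def write_interleaved(items):
        # batch_writer groups puts into BatchWriteItem calls of up to 25 items
        # and automatically retries unprocessed items
        with table.batch_writer() as writer:
            for item in interleave_by_partition_key(items):
                writer.put_item(Item=item)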

Related

How does partition capacity limit relate to table's total capacity in DynamoDB?

In a DynamoDB table, each partition is subject to a hard limit of 1,000 write capacity units and 3,000 read capacity units. What I don't understand is how these limits relate to the table's total RCU/WCU.
For example, if I configure a table's RCU to 6,000 and WCU to 3,000, is this capacity used evenly by all partitions in the table? Or do all partitions compete for the total capacity?
I can't find a way to know how many partitions the DynamoDB table is using. Is there a metric to tell me that?
The single-partition limit will only matter if your workload is so badly imbalanced that a significant percentage of requests go to the same partition. In a well-designed data model you have a large number of different partition keys, which allows DynamoDB to use a large number of different partitions, so you never see a significant percentage of your requests going to the same partition.
That does not mean, however, that the load on all partitions is equal. It might very well be that one partition sees twice the number of requests as another. A few years ago this meant your performance suffered: DynamoDB split the provisioned capacity (RCU/WCU) equally between partitions, so the busier partition got throttled sooner and the total capacity you got from DynamoDB was less than what you paid for. However, they fixed this a few years ago with what they call adaptive capacity: DynamoDB now detects when your workload's total capacity is below what you paid for, and increases the capacity limits on individual partitions.
For example, if you provision 10,000 RCU and DynamoDB divides your data into 10 partitions, each of those starts out with 1,000 RCU. However, if one partition gets double the requests of the others, the workload only achieves 1,000 + 9*500 = 5,500 RCU, significantly less than the 10,000 you are paying for. DynamoDB quickly recognizes this and increases the busy partition's limit from 1,000 to 1,818 RCU, so the total performance is now 1,818 + 9*909 = 9,999 RCU. DynamoDB does this automatically for you; you don't need to do anything special. All you need to do is make sure that your workload has enough different partition keys, and that no significant percentage of requests goes to one specific partition key. Otherwise DynamoDB will not be able to achieve a high total RCU; it will always be limited by that single-partition limit of 3,000.
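To make the arithmetic concrete, here is a small Python calculation that reproduces those numbers (the partition count and the 2:1 request skew are the hypothetical values from the example, not anything DynamoDB exposes):

    provisioned_rcu = 10_000
    partitions = 10
    per_partition = provisioned_rcu / partitions   # each partition starts with 1,000 RCU

    # One hot partition gets twice the traffic of each of the other nine.
    # Without adaptive capacity the hot partition caps at 1,000 RCU, so the nine
    # colder partitions only run at half that rate: 500 RCU each.
    without_adaptive = per_partition + 9 * (per_partition / 2)   # 5,500 RCU total

    # With adaptive capacity DynamoDB raises the hot partition's limit until the
    # table-level total is reached: hot = 2x, cold = x, so 2x + 9x = 10,000.
    x = provisioned_rcu / 11                       # ~909 RCU per cold partition
    with_adaptive = round(2 * x) + 9 * round(x)    # 1,818 + 9 * 909 = 9,999 RCU

    print(without_adaptive, with_adaptive)         # 5500.0 9999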
Regarding your last question, I don't know if there is such a metric (maybe another responder will know), but the important thing to check is that you have a lot of partition keys. If that's the case, and your workload doesn't access one specific key for a large percentage of the requests, you should be safe.

AWS DynamoDB: What does the graph imply? What needs to be done? A few of my BatchWrite (delete) requests failed

Can somebody tell me what needs to be done?
I'm facing a few issues when I have 1,000+ events.
A few of them are not getting deleted after my process runs.
I'm doing a batch delete through BatchWriteItem.
Each partition on a DynamoDB table is subject to a hard limit of 1,000 write capacity units and 3,000 read capacity units. If your workload is unevenly distributed across partitions, or if the workload relies on short periods of time with high usage (a burst of read or write activity), the table might be throttled.
It seems you are relying on DynamoDB adaptive capacity. Adaptive capacity automatically boosts throughput capacity for high-traffic partitions, but each partition is still subject to the hard limit. This means that adaptive capacity can't solve larger issues with your table or partition design. To avoid hot partitions and throttling, optimize your table and partition structure.
https://aws.amazon.com/premiumsupport/knowledge-center/dynamodb-table-throttled/
One way to better distribute writes across a partition key space in Amazon DynamoDB is to expand the space. You can do this in several different ways. You can add a random number to the partition key values to distribute the items among partitions. Or you can use a number that is calculated based on something that you're querying on.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-sharding.html
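A minimal sketch of the random-suffix sharding idea from that guide, assuming boto3 and a hypothetical table whose partition key attribute is pk (the table name, key names, and shard count are illustrative):

    import random

    import boto3

    NUM_SHARDS = 10  # illustrative; choose based on your write volume
    table = boto3.resource("dynamodb").Table("events")  # hypothetical table

    def put_sharded(base_key, item):
        # Spread writes for one logical key across NUM_SHARDS physical partition keys.
        shard = random.randint(0, NUM_SHARDS - 1)
        item["pk"] = f"{base_key}#{shard}"
        table.put_item(Item=item)

Reading a logical key back later means querying all of its shards (for example pk = "order-123#0" through "order-123#9") and merging the results, so sharding trades read-side fan-out for a better write distribution.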

DynamoDB write is too slow

I use Lambda to read from a JSON API and write to DynamoDB via HTTP requests. The JSON API is very big (it has 200k objects) and my function is extremely slow at writing to DynamoDB. Using the regular write function, after 10 minutes I had only populated 5k rows in my DynamoDB table. I was thinking about using BatchWriteItem, but since it can only do 25 puts per batch, it would still take too much time to write all 200k rows. Is there any better solution?
This will be because you're being throttled.
For Lambda
There is a maximum number of concurrent Lambda invocations that can be running at a time; the default limit is 1,000 concurrent executions.
If you need more than 1,000 concurrent executions at the same time, you will have to reach out to AWS Support to increase this, and you will also need to provide a business use case for why it is required.
For DynamoDB
Whether you use batch or single PutItem requests, your DynamoDB table is configured with a number of WCUs (write capacity units) and RCUs (read capacity units).
A single write capacity unit covers one write of an item of 1 KB or less (every extra KB consumes another unit). If you exceed this, you will start to be throttled for write requests; if you're using the SDK, it may also use exponential backoff to keep attempting the write.
As a solution, you should do one of the following:
If this is a one-time process, you can set the WCU to a fixed number, wait 5 minutes for it to increase, do the load, and then scale back down, as sketched after this list.
If this is a natural flow in your app, enable DynamoDB auto scaling to increase and decrease capacity naturally throughout the day.
In addition, look at your data modelling, as this can lead to throttling too.
In extreme cases, throttling can occur if a single partition receives more than 3,000 RCUs or 1,000 WCUs
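For the one-time-load case, here is a hedged boto3 sketch of bumping the provisioned WCU before the load and scaling back down afterwards (the table name and numbers are made up, and this only applies to tables in provisioned billing mode; if auto scaling manages the table, adjust the scaling policy instead):

    import boto3

    client = boto3.client("dynamodb")
    TABLE = "my-table"  # hypothetical

    def set_capacity(read_units, write_units):
        # Update the provisioned throughput and wait for the table to become ACTIVE again.
        client.update_table(
            TableName=TABLE,
            ProvisionedThroughput={
                "ReadCapacityUnits": read_units,
                "WriteCapacityUnits": write_units,
            },
        )
        client.get_waiter("table_exists").wait(TableName=TABLE)

    set_capacity(100, 2000)   # scale up before the bulk load
    # ... run the bulk load ...
    set_capacity(100, 100)    # scale back down afterwards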

AWS DynamoDB throughput

There's something I can't understand about AWS DynamoDB throughput.
Let's consider strongly consistent reads.
Now, I understand that in this case, 1 unit of capacity means I can read up to 4 KB per second.
It's the "per second" bit that slightly confuses me. If you know exactly how quickly you want to read data then you can set the units appropriately. But what if you're not too fussy about the read time?
Say I have only 1 read unit assigned to my table and I try to read an item which is more than 4 KB. Surely that just means that my read is going to take more than 1 second? That would be fine, but the documentation talks about requests failing. How can AWS determine that I used too many units when I didn't request that the data be read within a particular time?
Maybe I am missing something obvious. Can someone help clear this up?
DynamoDB can consume up to 300 seconds of unused throughput in burst capacity.
The maximum item size in DynamoDB is 400KB and 1 RCU gives you a read of up to 4KB.
Let's say you want to read an item that is 400 KB in size and you have 1 RCU on your table. You could retrieve that item once every 100 seconds.
Because of burst capacity there will always be a time you can read that item, because in fact you can use up to 300 RCUs in one go, not just 1.
Imagine starting the table with that 400KB item. You need to wait 100 seconds without spending any RCUs so that you've earned enough burst capacity to get the item. After 101 seconds you make the request, spend 100 RCUs and get the item. After another 5 seconds you make the request again, but get denied with a Throttling Exception.
So no, DynamoDB will not increase request latency to meet your RCU provision. It either returns your results as fast as possible, or throws an exception.
EDIT: By the way, I should mention that all AWS DynamoDB SDKs handle throttling exceptions for you. If you try to read an item but get denied because you don't have enough throughput available, the SDK backs off and tries again. So unless your table really is under-provisioned, you shouldn't have to worry about handling throttling exceptions yourself.
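As an example of how that SDK-side retry behaviour can be tuned, here is a small boto3 sketch (the retry settings and table name are illustrative; other SDKs expose similar knobs):

    import boto3
    from botocore.config import Config

    # "adaptive" retry mode adds client-side rate limiting on top of the exponential
    # backoff that already retries ProvisionedThroughputExceededException errors.
    retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

    dynamodb = boto3.resource("dynamodb", config=retry_config)
    table = dynamodb.Table("my-table")  # hypothetical table name

    item = table.get_item(Key={"pk": "some-key"}).get("Item")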

Improving DynamoDB Write Operation

I am trying to call the DynamoDB write operation to write around 60k records.
I have tried setting 1,000 write capacity units for provisioned write capacity, but my write operation still takes a lot of time. Also, when I check the metrics, I can see the consumed write capacity units are only around 10 per second.
My record size is definitely less than 1 KB.
Is there a way to speed up the write operation for DynamoDB?
So here is what I figured out.
I changed my call to use batchWrite and my consumed write capacity units increased significantly, up to 286 write capacity units.
The complete write operation also finished within a couple of minutes.
As mentioned in all the answers above, using putItem to load a large number of records has latency issues and it affects your consumed capacity. It is always better to batchWrite.
DynamoDB performance, like most databases is highly dependent on how it is used.
From your question, it is likely that you are using only a single DynamoDB partition. Each partition can support up to 1000 write capacity units and up to 10GB of data.
However, you also mention that your metrics show only 10 write units consumed per second. This is very low. Check all the metrics visible for the table in the AWS console (there is a tab per table under the DynamoDB pages). Check for throttling and any errors, and check on the charts that the consumed capacity is below the provisioned capacity.
It is possible that there is some other bottleneck in your process.
It looks like you can send more requests per second. You can perform more requests, but if you send them in a loop like this:

    for item in items:
        table.put_item(Item=item)

you need to mind the round-trip latency of each request.
You can use two tricks:
First, upload the data from multiple threads/machines.
Second, you can use the BatchWriteItem method, which allows you to write up to 25 items in one request:
The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400 KB.
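Here is a hedged sketch that combines both tricks with boto3 (the thread count, chunk size, table name, and key names are illustrative; batch_writer groups puts into BatchWriteItem calls of up to 25 items and resends unprocessed items for you):

    from concurrent.futures import ThreadPoolExecutor

    import boto3

    def write_chunk(chunk):
        # boto3 resources aren't guaranteed to be thread-safe, so each worker builds its own.
        table = boto3.resource("dynamodb").Table("my-table")  # hypothetical table
        with table.batch_writer() as writer:
            for item in chunk:
                writer.put_item(Item=item)

    def parallel_load(items, workers=8, chunk_size=1000):
        chunks = [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # map() processes every chunk and surfaces any exceptions from the workers
            list(pool.map(write_chunk, chunks))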