Query Regarding DynamoDB Pricing - amazon-web-services

I have a use case where I need to read data from DynamoDB every 30 seconds.
The number of items the client reads will always be 200, and each item is known to be less than 4KB, so this batch read will consume 200 Read Capacity Units. But those 200 Read Capacity Units are only used once per 30-second interval, so how much would I be paying for this per hour?
Will it be the same as 200 * (0.00725 / 10) USD per hour, or something else?

Yes, you will be paying for the full hour.
DynamoDB pricing is based on provisioned throughput and the size of the stored data.
The provisioned throughput is preconfigured and charged whether or not you actually consume it.
You have the option of changing the throughput configuration through the API, but there are limits on how many times you can do that in a day.
Since you will be querying every 30 seconds, increasing and decreasing throughput is probably not an option, because those changes are not instantaneous.
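Sketching the arithmetic behind the question's guess (the per-RCU-hour rate below is taken straight from the question and is purely illustrative; check the current DynamoDB pricing page for your region):

```python
import math

ITEM_SIZE_KB = 4          # each item is at most 4 KB -> 1 RCU per strongly consistent read
ITEMS_PER_READ = 200      # batch size from the question
HYPOTHETICAL_RATE = 0.00725 / 10  # assumed USD per RCU-hour, from the question

# Provisioned throughput is billed per hour whether or not it is consumed,
# so reading only once every 30 seconds does not reduce the charge.
provisioned_rcu = ITEMS_PER_READ * math.ceil(ITEM_SIZE_KB / 4)
hourly_cost = provisioned_rcu * HYPOTHETICAL_RATE
print(provisioned_rcu, round(hourly_cost, 4))  # 200 0.145
```

The point is that the cost depends on the provisioned 200 RCU, not on the fact that they are only consumed once every 30 seconds.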

Related

Do I have to wait for 30 minutes when an on-demand DynamoDB table is throttled?

I am using an on-demand DynamoDB table and I have read the doc https://aws.amazon.com/premiumsupport/knowledge-center/on-demand-table-throttling-dynamodb/. It says you might experience throttling if you exceed double your previous traffic peak within 30 minutes, which suggests DynamoDB adjusts capacity based on the last 30 minutes of traffic.
Let's say my table is throttled. Do I have to wait up to 30 minutes until the table adjusts its capacity? Or does it adjust immediately, or within a few minutes?
The reason I am asking is that I'd like to add a retry to my application code whenever a request is throttled. What sleep interval should I put between retries?
An On Demand table always maintains enough capacity to support double any previous peak throughput, but if you grow faster than that, the table has to add physical capacity (physical partitions).
When DynamoDB adds partitions it can take between 5 minutes and 30 minutes for that capacity to be available for use.
It has nothing to do with RCUs/WCUs because On Demand tables don't have capacity units.
Note: You may stay throttled if you've designed a hot partition key in either the base table or a GSI.
During the throttle period requests are still getting handled (and handled at a good rate). Just like when you see a line at the grocery store checkout, you get in line. Don't design the code to come back in 30 minutes hoping there's no line after more checkers have been added. The grocery store will be "adding checkers" when it notices the load is high, but it also keeps processing the existing work.
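To implement the retry the asker mentions, the usual pattern is capped exponential backoff with jitter: stay "in line" with short, randomized waits rather than one long sleep. A hand-rolled sketch (function names and delay values are made up for illustration; the AWS SDKs already do something like this internally):

```python
import random
import time

def with_backoff(action, max_attempts=8, base_delay=0.05, max_delay=5.0):
    """Retry `action` with capped exponential backoff and full jitter.

    `action` should raise an exception (e.g. a throttling error) on
    failure and return a result on success.
    """
    for attempt in range(max_attempts):
        try:
            return action()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))  # full jitter

# Simulated table call that throttles the first two attempts.
calls = {"n": 0}
def flaky_read():
    calls["n"] += 1
    if calls["n"] <= 2:
        raise RuntimeError("ProvisionedThroughputExceededException")
    return {"Item": {"pk": "demo"}}

result = with_backoff(flaky_read)
```

The jitter matters: if many throttled clients all retry on the same fixed schedule, they arrive back at the table in synchronized waves and get throttled again.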

Why is it possible to go beyond DynamoDB burst capacity?

I have created a DynamoDB table with 1 RCU (manually provisioned capacity).
I have inserted some items into that table.
I can run a Scan on my table (which consumes 82 RCUs according to the response).
I understand this is possible because of burst capacity.
What I don't understand, though, is why I am able to keep consuming huge numbers of RCUs for long periods of time.
As you can see in this screenshot, despite the provisioned capacity being 1 RCU, I have been consuming around 150 to 200 RCUs per minute for more than an hour (the 1 RCU red line is barely visible at the bottom).
Why is that? (Some of the requests are of course throttled, but why so few?)
How much data do you have in that table?
When you run a Scan from the console, it reads items from the table, and that consumes RCUs. Also note that the CloudWatch chart is per minute: 1 provisioned RCU corresponds to 60 RCUs per minute, and burst capacity lets the table bank up to 300 seconds of unused throughput, so short spikes can land well above the provisioned line.
There are options to configure baseline read/write capacity units and enable auto scaling if you expect variable read/write traffic. If the load starts to increase, the DynamoDB service will gradually scale to fulfil those requests instead of throttling.
https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/
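A rough sketch of the burst arithmetic behind the 82-RCU scan, assuming the documented 300-second burst window (this is back-of-the-envelope only; it does not model DynamoDB's actual internal accounting):

```python
# How a single 82-RCU scan can succeed on a 1-RCU table.
BURST_WINDOW_SECONDS = 300   # DynamoDB retains up to 300 s of unused throughput
provisioned_rcu = 1

burst_pool = provisioned_rcu * BURST_WINDOW_SECONDS   # up to 300 RCUs banked
scan_cost = 82                                        # RCUs reported by the scan response

# With a full burst pool, the 82-RCU scan fits easily:
assert scan_cost <= burst_pool

# The CloudWatch chart compares consumption against the provisioned rate
# per minute, so anything sustained above this is drawing on burst
# capacity (or being partially throttled):
per_minute_provisioned = provisioned_rcu * 60
print(burst_pool, per_minute_provisioned)  # 300 60
```

This explains the spikes; sustained consumption far above 60 RCUs per minute for an hour would eventually drain the pool, which is why some of the requests in the screenshot are throttled.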

Query on DynamoDB Provisioned Throughput WCU/RCU operation

I am trying to understand how DynamoDB provisioned throughput (RCU/WCU) works.
I tried two scenarios in which I changed the WCU setting (1,000 and 10,000), but the consumed WCU figure I get is the same in both cases: 809.63.
In a nutshell, I have 123 records distributed across 5 files, each record being 400 KB (the DynamoDB item size limit). When executing these cases there was no throttling, and the strange thing is that the script execution time is the same (6 seconds) even though I changed the WCU count to 1k and 10k respectively.
My question is why it behaves like this. I would like to hear your thoughts.
My assumption was that if I decrease or increase the WCU count, I should see a change in script execution time, which is not the case.
[Screenshot: DynamoDB scenario test results]
WCU/RCU do not increase the speed of a DynamoDB response; they only set an upper limit on capacity usage.
Read and Write Capacity Units are, as the name suggests, capacity units. They set the upper limit of how much read/write traffic your table can handle. In your case, since you are consuming 809.63 WCU, if your provisioned WCU is set above 810 you won't get any throttled requests. However, if you lower your WCU to 800, you will start seeing your requests being throttled.
If you have consistent TPS and know how many capacity units you will be using, then set just the amount that you will require. In your case, 1k WCU seems sufficient and will not make any difference compared to 10k in terms of performance, unless you use more than 1k WCU, in which case you can provision more capacity or implement auto-scaling to handle it.
See here for more information: Documentation
Edit: As discussed in the comments below, if you use more capacity than is provisioned, DynamoDB will temporarily allow a burst of capacity to cover it for up to 5 minutes, which can lead to varying throttling results.
Before answering, many thanks to Deiv & Stu for finding this evidence.
DynamoDB can consume up to 300 seconds of unused throughput in burst capacity.
The maximum item size in DynamoDB is 400KB and 1 RCU gives you a read of up to 4KB.
Let's say you want to read an item that is 400KB in size and you have 1 RCU on your table. You could retrieve that item once every 100 seconds.
Because of burst capacity there will always be a time you can read that item, because in fact you can use up to 300 RCUs in one go, not just 1.
Imagine starting the table with that 400KB item. You need to wait 100 seconds without spending any RCUs so that you've earned enough burst capacity to get the item. After 101 seconds you make the request, spend 100 RCUs and get the item. After another 5 seconds you make the request again, but get denied with a Throttling Exception.
So no, DynamoDB will not increase request latency to meet your RCU provision. It either returns your results as fast as possible, or throws an exception.
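The walkthrough above can be reproduced with a toy token-bucket model. DynamoDB's real accounting is more involved; this sketch only mirrors the timeline described (class and method names are invented for illustration):

```python
class BurstBucket:
    """Toy model: provisioned RCUs accrue into a burst pool capped at
    300 seconds' worth of unused throughput."""

    def __init__(self, provisioned_rcu, window_seconds=300):
        self.rate = provisioned_rcu
        self.capacity = provisioned_rcu * window_seconds
        self.tokens = 0.0

    def tick(self, seconds):
        """Accrue unused throughput, capped at the burst window."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def read(self, cost_rcu):
        """Return True if the read succeeds, False if throttled."""
        if self.tokens >= cost_rcu:
            self.tokens -= cost_rcu
            return True
        return False

bucket = BurstBucket(provisioned_rcu=1)
item_cost = 400 // 4             # 400 KB item -> 100 RCUs per read

bucket.tick(101)                 # wait 101 s without spending any RCUs
first = bucket.read(item_cost)   # succeeds: 101 RCUs banked, 100 spent
bucket.tick(5)                   # 5 more seconds pass
second = bucket.read(item_cost)  # throttled: only ~6 RCUs available
```

Note that the failed read in the model returns immediately rather than waiting for tokens, matching the observation that DynamoDB throws an exception instead of stretching out the request latency.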

AWS DynamoDB. Am I overusing my write capacity?

Input data
DynamoDB free tier provides:
25 GB of Storage
25 Units of Read Capacity
25 Units of Write Capacity
Capacity units (SC/EC - strongly/eventually consistent):
1 RCU = 1 SC read of 4kB item/second
1 RCU = 2 EC reads of 4kB item/second
1 WCU = 1 write of 1kB item/second
My application:
one DynamoDB table 5 RCU, 5 WCU
one lambda
runs each 1 minute
writes 3 items of ~8kB each to DynamoDB
lambda execution takes <1 second
The application works ok, no throttling so far.
CloudWatch
In my CloudWatch there are some charts (ignore the part after 7:00):
the value on this chart is 22 WCU
on this chart it is 110 WCU. Actually, I figured it out: this chart's resolution is 5 minutes, and 5 * 22 = 110 (leaving this here in case my future self gets confused)
Questions
We have 3 writes of ~8kB items each minute (all landing within one second, since the lambda takes under a second), which is ~24 WCU. That is consistent with what we see in CloudWatch (22 WCU). But the table is configured with only 5 WCU. I've read some other questions and, as far as I understand, I'm safe from paying extra as long as the sum of configured WCUs across my tables is below 25.
Am I overusing the write capacity for my table?
Should I expect throttling or extra charges?
As far as I can tell my usage is still within the free tier limits, but it is close (22 of 25). Am I to be charged extra if my usage gets over 25 on those charts?
The configured provisioned capacity is per second, while the data you see in CloudWatch is per minute. So your configured 5 WCU per second translate to 300 WCU per minute (5 WCU * 60 seconds), which is well above the consumed 22 WCU per minute.
That should already answer your question, but to elaborate a bit on some details:
A single write of a 7KB item with only 5 WCU configured would in theory never succeed and would cause throttling, as 7KB requires 7 WCU to write while you only have 5 WCU configured (and we can safely assume the write occurs within one second). Fortunately the DynamoDB engineers thought about that and implemented burst capacity: while you're not using your provisioned capacity, unused units are saved up (up to 5 minutes' worth) to be spent when you need more than the provisioned capacity. That's something to keep in mind as the utilization of your capacity increases.
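The per-second vs. per-minute arithmetic from this answer, spelled out (standard WCU rounding: writes are charged in 1 KB increments, so an ~8 kB item costs 8 WCU):

```python
import math

# Per-write cost: WCUs are charged in 1 KB increments.
item_kb = 8
wcu_per_write = math.ceil(item_kb / 1)    # 8 WCU per ~8 kB item
writes_per_minute = 3

consumed_per_minute = wcu_per_write * writes_per_minute   # 24 WCU/min
provisioned_per_minute = 5 * 60                           # 5 WCU/s -> 300 WCU/min

# Averaged over a minute, usage is well under the provisioned rate:
assert consumed_per_minute < provisioned_per_minute

# But all 3 writes land within one second, so the instantaneous demand
# is 24 WCU against 5 provisioned; burst capacity absorbs the difference.
print(consumed_per_minute, provisioned_per_minute)  # 24 300
```

So the CloudWatch number (22 to 24 WCU per minute) is nowhere near the 300 WCU per minute the table can sustain; only the one-second spike relies on burst.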

aws dynamo db throughput

There's something I can't understand about AWS DynamoDB throughput.
Let's consider strongly consistent reads.
Now, I understand that in this case 1 unit of capacity means I can read up to 4KB per second.
It's the "per second" bit that slightly confuses me. If you know exactly how quickly you want to read data, then you can set the units appropriately. But what if you're not too fussy about the read time?
Say I have only 1 read unit assigned to my table and I try to read an item that is larger than 4KB. Surely that just means my read is going to take more than 1 second? That would be fine, but the documentation talks about requests failing. How can AWS determine that I used too many units when I didn't ask for the data to be read within a particular time?
Maybe I am missing something obvious. Can someone help clear this up?
DynamoDB can consume up to 300 seconds of unused throughput in burst capacity.
The maximum item size in DynamoDB is 400KB and 1 RCU gives you a read of up to 4KB.
Let's say you want to read an item that is 400KB in size and you have 1 RCU on your table. You could retrieve that item once every 100 seconds.
Because of burst capacity there will always be a time you can read that item, because in fact you can use up to 300 RCUs in one go, not just 1.
Imagine starting the table with that 400KB item. You need to wait 100 seconds without spending any RCUs so that you've earned enough burst capacity to get the item. After 101 seconds you make the request, spend 100 RCUs and get the item. After another 5 seconds you make the request again, but get denied with a Throttling Exception.
So no, DynamoDB will not increase request latency to meet your RCU provision. It either returns your results as fast as possible, or throws an exception.
EDIT: By the way, I should mention that all AWS DynamoDB SDKs handle throttling exceptions for you. If you try to read an item but get denied because you don't have enough throughput available, the SDK backs off and tries again. So unless your table really is under-provisioned, you shouldn't have to worry about handling throttling exceptions yourself.
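If you want to tune the SDK's built-in retry behavior rather than handling throttling yourself, botocore exposes a retry configuration (shown here for Python's boto3; the region and the value of max_attempts are illustrative):

```python
import boto3
from botocore.config import Config

# "adaptive" mode adds client-side rate limiting on top of the standard
# exponential backoff; max_attempts caps the total tries per request.
cfg = Config(retries={"max_attempts": 10, "mode": "adaptive"})
dynamodb = boto3.client("dynamodb", region_name="us-east-1", config=cfg)
```

With this in place, throttled requests are retried automatically and you only see an exception once the attempt budget is exhausted.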