We know the minimum and maximum provisioned capacity for a certain table.
For example, our minimum capacity is 200 reads per second and our maximum is 1000 reads per second, so what should the target utilization percentage be?
Some background for a complete answer: DynamoDB provides an Autoscaling option for managing throughput capacity. With Autoscaling you define a minimum, maximum and target utilization.
DynamoDB Autoscaling will then vary the provisioned throughput between the maximum and minimum bounds you set. It will aim to keep this provisioned throughput at the target utilization.
Target utilization is the ratio of consumed capacity units to
provisioned capacity units, expressed as a percentage
A good starting point is to ask: why not set the target utilization to 100%? That sounds efficient, because you will only be paying for the throughput you use. But there is a problem with this:
DynamoDB auto scaling modifies provisioned throughput settings only
when the actual workload stays elevated (or depressed) for a sustained
period of several minutes
So, imagine your target utilization is 100% and you have increased demand on your table for 15 minutes. For the first 5 minutes you might be saved by burst capacity, in the second lot of 5 minutes you are likely to see database read/write failures as your throughput is exceeded, and then after around 10 minutes Autoscaling should kick in and increase your throughput.
This is the problem you are trying to avoid by setting a target utilization below 100% (i.e. an increase in demand causing throttling). You need to consider two things:
1) What is the biggest change in throughput capacity usage you see over a time period of 15 minutes expressed as a percentage? Leave this amount of room in your target utilization.
2) How much do you care if you have some database throttling? (i.e. some database read/writes fail?) Adjust your target utilization higher or lower depending on your appetite for cost saving versus throttling.
Let's say you look over one week of data and find that, in any 15 minute period, the largest increase in throughput you see is 20%. That gives you a target utilization of 80% (because then your increased demand is absorbed by Autoscaling)*. However, let's say you are cautious and you really aren't OK with database throttling; to be on the safe side, you might go with 70%.
Hope that helps make some decisions. In summary, your target utilization should be a function of how quickly your throughput capacity changes, and how averse you are to throttling.
EDIT: *The maths isn't perfect here, but you get the idea, and it's probably a close enough approximation.
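For reference, wiring this up programmatically goes through Application Auto Scaling. A minimal boto3 sketch, using the 200/1000 bounds from the question and the 80% target from the example above (the table name is a placeholder):

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Register the table's read capacity as a scalable target,
    # bounded by the minimum and maximum from the question.
    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/MyTable",  # placeholder table name
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=200,
        MaxCapacity=1000,
    )

    # Target-tracking policy: keep consumed/provisioned reads at ~80%.
    autoscaling.put_scaling_policy(
        PolicyName="MyTableReadUtilization",
        ServiceNamespace="dynamodb",
        ResourceId="table/MyTable",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 80.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )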
According to what I know about gp2 from the AWS docs (link), gp2 disks have burst capability when they are smaller than 1000GB.
Once a disk is bigger than 1000GB, its baseline performance exceeds the 3000 IOPS burst performance, so the "burst" term no longer applies.
However, as I can see on my current prod database with 2TB of gp2 storage, burst balance still somehow applies to me, and the storage is considerably faster while the burst balance is above 0.
Apparently, the AWS burst terms have changed. Does anybody know the current terms, so I can plan my hardware accordingly?
I made a request to AWS support about this.
It was a lengthy thread where I got to know several important facts.
I have saved my conversation at this link, so it's not lost for the community.
Answer: burst balance may still apply for storage bigger than 1TB, because the storage space may be served by several volumes. If a volume is smaller than 1TB, burst balance gets utilized for that volume.
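To put rough numbers on that: gp2 gives a baseline of 3 IOPS per GiB (minimum 100, maximum 16,000), and burst credits only matter while the baseline is below the 3,000 IOPS burst ceiling. A small sketch of why striping can keep burst relevant; the stripe count is purely an assumption based on the support answer above:

    # Rough gp2 arithmetic. The volume split is an assumption based on the
    # support answer above (RDS may serve the storage from several volumes).
    def gp2_baseline_iops(size_gib):
        # gp2 baseline: 3 IOPS per GiB, floored at 100, capped at 16,000.
        return min(max(3 * size_gib, 100), 16000)

    def burst_applies(size_gib):
        # Burst credits only matter while the baseline is below 3,000 IOPS.
        return gp2_baseline_iops(size_gib) < 3000

    total_gib = 2048              # ~2TB of storage, as in the question
    stripe_count = 4              # hypothetical number of underlying volumes
    per_volume_gib = total_gib / stripe_count   # 512 GiB per volume

    print(gp2_baseline_iops(per_volume_gib))    # 1536 IOPS baseline
    print(burst_applies(per_volume_gib))        # True - burst still applies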
Other facts that were obscure to me:
the database may look like it's capped by IOPS limits (due to an internal IOPS submerge operation), but in reality it may be capped by network throughput.
network throughput is guaranteed by EBS-Optimized. In the RDS docs you won't find explicit tables of how instances relate to throughput, but it is there in the EBS docs.
For some of the Nitro-based instances, EBS-Optimized allows the instance to run at the maximum throughput for its class for 30 minutes in every 24 hours. For smaller instances this means that for 30 minutes the database may deliver skyrocketing performance compared to its poor baseline.
I've run into that issue with EFS: provisioning enough capacity (storage and throughput) is one thing, provisioning burst capacity is something else. In this case it appears that you are running into the same issue: exceeding your burst capacity. If you have a read-heavy application, consider using a replica or a caching scheme. Alternatively you can increase your 2TB disk to 4TB, or look into a Provisioned IOPS solution.
From the screen capture, I can see that AWS is already delivering the performance they promised for your instance (6K IOPS, consistently).
So the remaining question is why there is still burst performance that lets you burst to more than 11K IOPS (the 7:00 - 9:00 timeframe) for a limited time.
My guess is that the 3K IOPS burst limit only applies to storage smaller than 1TB. For bigger storage, you can burst up to "baseline performance + 3K IOPS" (around 9K in your case) until the I/O credit runs out. I have not seen any documentation on this though.
Right now I am using on-demand mode for my DynamoDB tables, as I didn't know how much data to expect. But now that the application has run a while, I can see the metrics for ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits for my tables in CloudWatch.
In on-demand mode I pay per request, whereas in provisioned capacity mode I have to pay for the provisioned capacity. If I simply take the metrics for (max) consumed capacity units and compare the prices of those in provisioned capacity mode to my current costs, I believe provisioned capacity mode would be a lot cheaper for me.
My question is, can I simply take the metrics and take the max (plus some buffer) of the consumed capacity units and configure them as provisioned capacity, or is that an error in reasoning on my part?
There are two other things you need to consider:
How 'bursty' is your throughput?
Are you using SDKs to connect to your database?
Setting your provision to the maximum throughput you ever see will ensure you don't get throttled requests, but you will probably be setting the provision too high. DynamoDB can actually consume more provision than you have set by using burst capacity. This will accommodate short bursts of high throughput over the space of 5 minutes. If you see sustained peaks, for example your database is busy in the day but not at night, you might consider setting your tables to autoscale. In this case you can set the provisioned throughput lower, and DynamoDB will automatically scale up the provision as required. Note that autoscaling is good for workloads that vary over the course of hours (e.g. for handling daily peak hours). It's not good for reacting to events that occur in less than about 30 minutes.
If you are using the official SDKs, they will handle throttle responses and retry any failed requests. This gives DynamoDB some time to scale without your application failing requests.
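To put a number on how bursty the throughput is, the consumed-capacity metrics mentioned in the question can be pulled out of CloudWatch along these lines (table name and lookback window are placeholders; note the metric is reported as a per-period Sum, so divide by the period to get a per-second rate):

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch")

    # Peak write consumption over the last 3 days, in 5-minute buckets.
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName="ConsumedWriteCapacityUnits",
        Dimensions=[{"Name": "TableName", "Value": "MyTable"}],  # placeholder
        StartTime=datetime.utcnow() - timedelta(days=3),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Sum"],
    )

    # Divide the per-period Sum by the period to get average WCU per second.
    peak_wcu = max(p["Sum"] for p in resp["Datapoints"]) / 300
    print(peak_wcu)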
Our table has bursty writes, expected once a week. We have auto-scaling enabled, with a provisioned capacity of 5 WCUs and a 70% target utilization. This suffices for our off-peak (non-bursty) traffic. However, during the bursty writes the WCUs reach around 1.5-2k, which leads to a lot of throttled writes and ultimately failures to write as well.
1) Is auto-scaling suitable for such a use case?
2) If yes, what should our initial provisioned capacity be?
This answer will tell you why auto-scaling is not working for you:
https://stackoverflow.com/a/53005089/4985580
This answer will tell you how you can configure your SDK to retry operations over a much longer period (and therefore stop your operation failures during peak requests).
What should be done when the provisioned throughput is exceeded?
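For example, with boto3 the retry budget can be raised when the client is created; a sketch (the numbers and table name are placeholders, tune them to your own tolerance for latency):

    import boto3
    from botocore.config import Config

    # More retry attempts (with backoff) gives auto-scaling time to react
    # before the application gives up on a throttled request.
    retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

    dynamodb = boto3.resource("dynamodb", config=retry_config)
    table = dynamodb.Table("MyTable")  # placeholder table name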
Ultimately you should probably move your tables to on-demand.
For tables using on-demand mode, DynamoDB instantly accommodates
customers’ workloads as they ramp up or down to any previously
observed traffic level. If the level of traffic hits a new peak,
DynamoDB adapts rapidly to accommodate the workload.
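Switching an existing table over is a one-call change; a sketch (table name is a placeholder, and note DynamoDB only allows switching billing modes once per 24 hours per table):

    import boto3

    # Move the table to on-demand (pay-per-request) billing.
    boto3.client("dynamodb").update_table(
        TableName="MyTable",           # placeholder table name
        BillingMode="PAY_PER_REQUEST",
    )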
No, auto-scaling is not suitable for your needs. It takes a few minutes to scale up, and it does that by increasing your current capacity by a fixed percentage each time. There's also a limited number of times it scales up or down per day, so you can't get from 5 to 2,000 in a matter of minutes. You may not even get that in a matter of hours.
I'd suggest trying on-demand mode, or manually setting the capacity to 2,000 some time before you actually need it (it doesn't really scale instantly).
I strongly advise reading the ENTIRE DynamoDB documentation on best practices for primary keys, GSIs, and data architecture. Depending on the size of your table (larger than 10 GB), the 2,000 units may get spread across partitions and you could potentially still have throttled requests.
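If you do stay on provisioned mode and the weekly burst is predictable, a scheduled job (cron, EventBridge, etc.) could bump the write capacity shortly before it, roughly like this (table name is a placeholder; the 2,000 figure is the peak from the question):

    import boto3

    # Run shortly before the expected weekly burst. If auto-scaling is
    # attached to the table, raise MinCapacity on the scalable target
    # instead, otherwise auto-scaling may simply scale this back down.
    boto3.client("dynamodb").update_table(
        TableName="MyTable",  # placeholder table name
        ProvisionedThroughput={
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 2000,
        },
    )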
My primary requirement is as follows:
When CPU consumption on an instance exceeds 50%, adjust the capacity of the autoscaling group to 5 instances; when CPU consumption exceeds 80%, adjust the capacity to 10 instances.
However if I use cloudwatch alarms to set capacity I can imagine the following race condition:
5 instances exist
CPU consumption exceeds 80%
Alarm is triggered
Capacity is changed to 10 instances
CPU consumption drops below 50%
Eventually CPU consumption again exceeds 50%, but now capacity will be changed to 5 instances (which is something I don't want to happen)
So what I would ideally like is that, in response to alarm triggers, the capacity is set to at least the value corresponding to that threshold.
I am aware that this can be done by manually setting the capacity through the AWS SDK - triggered in response to lifecycle events monitored by a supervisor - but is there a better approach, preferably one that does not require setting up additional supervisors or webhooks for alarms?
A general approach is to use finer-grained scaling actions rather than jumping that big (see the sketch after this list):
if the ASG avg CPU is over 70% > Add an instance
if the ASG avg CPU is over 90% > Add "n" instances
if the ASG avg CPU is under 40% > remove an instance
if the ASG avg CPU is under 10% > remove "n" instances
All of these values are averages over the last 5 minutes. So if you have a really fast spike, you need more aggressive scaling. Even so, in half an hour you can easily add 6 servers or even more.
Also, scaling works better with higher numbers. So if your system needs only 1-3 instances, it may make sense to decrease the instance size so you can run 2-6 instances instead. That gives your system some extra flexibility.
But again, the question is: what is your expected load? Big spikes, or an expected up and down during the day?
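As a sketch, thresholds like those above map onto a step scaling policy; the ASG name is a placeholder and the alarm (e.g. on average CPU above 70%) has to be created separately and pointed at the policy:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Step scaling: the intervals are offsets from the alarm threshold
    # (here assumed to be 70% average CPU), so 0-20 means 70-90% CPU.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",      # placeholder ASG name
        PolicyName="cpu-step-scale-out",
        PolicyType="StepScaling",
        AdjustmentType="ChangeInCapacity",
        StepAdjustments=[
            # 70% <= CPU < 90%: add one instance.
            {"MetricIntervalLowerBound": 0.0,
             "MetricIntervalUpperBound": 20.0,
             "ScalingAdjustment": 1},
            # CPU >= 90%: add three instances.
            {"MetricIntervalLowerBound": 20.0,
             "ScalingAdjustment": 3},
        ],
    )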
I would suggest looking into an AWS Lambda function, triggered by an SNS message from CloudWatch - it should give you free rein to put as much logic into the scaling decision as you want.
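A minimal sketch of that Lambda, assuming each alarm name maps to the floor it implies (alarm names, ASG name and floor values are all placeholders); it only ever raises the desired capacity, never lowers it, which avoids the race described in the question:

    import json
    import boto3

    # Placeholder mapping: the minimum capacity each alarm implies.
    ALARM_FLOOR = {"cpu-over-50": 5, "cpu-over-80": 10}
    ASG_NAME = "my-asg"  # placeholder ASG name

    def handler(event, context):
        autoscaling = boto3.client("autoscaling")
        for record in event["Records"]:
            # CloudWatch alarm notifications arrive as JSON in the SNS message.
            alarm_name = json.loads(record["Sns"]["Message"])["AlarmName"]
            floor = ALARM_FLOOR.get(alarm_name)
            if floor is None:
                continue
            group = autoscaling.describe_auto_scaling_groups(
                AutoScalingGroupNames=[ASG_NAME]
            )["AutoScalingGroups"][0]
            # Only ever raise capacity to the floor, never lower it.
            if group["DesiredCapacity"] < floor:
                autoscaling.set_desired_capacity(
                    AutoScalingGroupName=ASG_NAME,
                    DesiredCapacity=floor,
                )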
Good Luck!
In the answer to "How is Amazon DynamoDB throughput calculated and limited?" it's been suggested that DynamoDB throttles requests whenever you exceed the provisioned throughput on a per-second basis. However, this contradicts my experience.
I've got a table where I post multiple rows, often with the number of rows way exceeding the provisioned write capacity. This happens in short bursts. At one point I even got a 5 minute average above the provisioned capacity. OTOH, the 15 minute average is below capacity. I didn't get any throttled requests in that period.
5 minutes average peaks at 8.053 with provisioned capacity of 6:
15 minutes average peaks well below provisioned capacity:
So when does DynamoDB throttle requests? What kind of average does it take in account? How high above provisioned capacity can the burst be before it gets throttled?
DynamoDB is designed to ensure that your provisioned capacity is available on a per-second basis. If you provision a table for ten 1kB reads per second then DynamoDB will give you enough capacity to handle that throughput rate. In addition, DynamoDB will sometimes allow you to achieve limited bursting above your provisioned throughput for a short period of time. This is intended to absorb natural variations in customer workloads. This bursting is not guaranteed and it is not always available (and the nature of the available bursting may change over time).

As is currently described in the best practices documentation, in order to get the best performance you should have an evenly distributed workload that does not exceed your provisioned capacity and distributes the load evenly over the key space. However, if the reality of production behavior for your application deviates from an evenly distributed workload then DynamoDB may absorb some of the bursts.
As for how much to provision your table, it depends a lot on your workload. You could start with provisioning to something like 80% of your peaks and then adjust your table capacity depending on how many throttles you receive (which you can see in your CloudWatch graphs) and your application’s tolerance for latency induced by retries. Keep in mind that DynamoDB does not allow unlimited bursts above your provisioned capacity. You may be able to absorb short bursts but you cannot sustain a throughput rate above your provisioned capacity level for an extended period of time. The general guidance we can give is to provision for something close to your peaks and then dial down while watching for throttles.
This answer was posted in AWS forums
Disclaimer: I work for Amazon, DynamoDB team.
There's a hint in the DynamoDB documentation that explains how bursting works:
When you are not fully utilizing a partition's throughput, DynamoDB retains a portion of your unused capacity for later bursts of throughput usage. DynamoDB currently retains up to five minutes (300 seconds) of unused read and write capacity.
But it also says that you cannot rely on this behavior:
However, do not design your application so that it depends on burst capacity being available at all times: DynamoDB can and does use burst capacity for background maintenance and other tasks without prior notice.
At least that would explain why it was possible to have a 5 minute average above the provisioned capacity. With the explanation above, it would even be possible for 15 minute averages (or longer timespans) to be above the provisioned capacity, if you have a spike at the very beginning of the interval and less usage within the 300 seconds before the start of the interval.
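A back-of-the-envelope check against the graphs above (the 300 second retention figure is the one from the documentation quoted earlier; the rest is just arithmetic):

    # Back-of-the-envelope check of the 5-minute peak from the question.
    provisioned = 6          # provisioned write units per second
    window = 300             # seconds in the 5-minute average
    peak_average = 8.053     # observed 5-minute average

    consumed = peak_average * window        # ~2416 units in the window
    from_provision = provisioned * window   # 1800 units covered by provision
    from_burst = consumed - from_provision  # ~616 units drawn from burst

    # The burst bucket retains up to 300 seconds of unused capacity,
    # i.e. up to 6 * 300 = 1800 units, so ~616 units fits comfortably.
    print(from_burst, provisioned * 300)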
DynamoDB provides some flexibility in your per-partition throughput provisioning by providing burst capacity. Whenever you're not fully using a partition's throughput, DynamoDB reserves a portion of that unused capacity for later bursts of throughput to handle usage spikes.
DynamoDB currently retains up to 5 minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed quickly—even faster than the per-second provisioned throughput capacity that you've defined for your table.
DynamoDB can also consume burst capacity for background maintenance and other tasks without prior notice.