AWS - How is EBS charged?

How is EBS charged? Is it per stored data and bandwidth, per allocated storage and bandwidth, per stored data and IO operations, or per allocated data and IO operations?

It's stated pretty clearly in the documentation:
https://aws.amazon.com/ebs/pricing/
Generally, it's based on the disk type, provisioned size, and provisioned IOPS (where applicable), multiplied by the amount of time the volume exists until you delete it.
Pricing also varies a bit from region to region. Whether or not you actually use the provisioned resources has no effect on your billed amount.
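As a rough illustration of that formula, here is a minimal sketch in Python; the per-GB and per-IOPS rates are placeholder assumptions, not real prices, so look up the actual rates for your region and volume type on the pricing page above:

    # Rough monthly cost estimate for a provisioned EBS volume.
    # Rates below are placeholders; see https://aws.amazon.com/ebs/pricing/
    gb_month_rate = 0.10        # assumed $ per GB-month for the volume type
    piops_month_rate = 0.065    # assumed $ per provisioned IOPS-month (where applicable)

    provisioned_gb = 500        # you pay for what you provision...
    provisioned_iops = 3000     # ...not for what you actually read or write
    fraction_of_month = 1.0     # how long the volume existed this month

    cost = (provisioned_gb * gb_month_rate
            + provisioned_iops * piops_month_rate) * fraction_of_month
    print(f"Estimated monthly cost: ${cost:.2f}")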

Related

Why does AWS RDS still show burst balance 0 with a 2TB gp2 disk?

According to what I know about gp2 from the AWS docs (link), gp2 disks have burst capability when they are smaller than 1000GB.
Once a disk is bigger than 1000GB, baseline performance exceeds the 3000 IOPS burst performance, so the "burst" term should no longer apply.
However, as I see on my current prod database with 2TB gp2 storage, burst balance still somehow applies to me, and storage is considerably faster while the burst balance is above 0.
Apparently, something has changed in how AWS bursting works. Does anybody know the current behavior, so I can plan my hardware accordingly?
I made a request to AWS Support about this.
It was a lengthy thread where I learned several important facts.
I have saved my conversation at this link, so it's not lost to the community.
Answer: burst balance may still apply for storage bigger than 1TB, because the storage space may be served by several volumes. If an individual volume is smaller than 1TB, burst balance is utilized for that volume.
Other facts that were not obvious to me:
the database may look like it's capped by IOPS limits (due to an internal IOPS merge operation), but in reality it may be capped by network throughput (see the sketch after this list).
network throughput is guaranteed by EBS-Optimized. In the RDS docs you won't find explicit tables showing how instance classes relate to throughput, but it's there in the EBS docs.
for some Nitro-based instances, EBS-Optimized only allows working at the class's maximum throughput for 30 minutes every 24 hours. For smaller instances, this means the database may show skyrocketing performance for 30 minutes, compared to a poor baseline.
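As a back-of-the-envelope illustration of how throughput, rather than IOPS, can be the real cap, here is a small sketch; the 250 MB/s EBS-optimized limit is an assumed value for illustration, not your instance's actual figure:

    # Large I/Os can hit the instance's EBS throughput cap long before the
    # volume's baseline or provisioned IOPS are reached.
    ebs_throughput_mb_s = 250   # assumed EBS-optimized limit for the instance class
    for io_size_kb in (16, 64, 256):
        max_iops = ebs_throughput_mb_s * 1024 / io_size_kb
        print(f"{io_size_kb:>3} KiB I/Os -> at most ~{max_iops:,.0f} IOPS "
              f"before the throughput cap kicks in")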
I've run into that issue with EFS: provisioning enough capacity (storage and throughput) is one thing, provisioning burst capacity is something else. It appears you are running into the same issue here: exceeding your burst capacity. If you have a read-heavy application, consider using a replica or a caching scheme. Alternatively, you can increase your 2TB disk to 4TB or look into a provisioned IOPS solution.
From the screen capture, I can see that AWS is already delivering the performance they promised for your instance (6K IOPS, consistently).
So the remaining question is why there is still burst capacity that lets you burst up to >11K IOPS (the 7:00-9:00 timeframe) for a limited time.
My guess is that the 3K IOPS burst limit only applies to volumes smaller than 1TB. For bigger volumes, you can burst up to "baseline performance + 3K IOPS" (around 9K in your case) until the I/O credits run out. I have not seen any documentation around this, though.
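For reference, the documented gp2 model ties baseline IOPS to volume size (3 IOPS per GiB, floored at 100), which is why a 2TB volume shows roughly 6K baseline IOPS. A minimal sketch, assuming a single gp2 volume; whether RDS actually stripes the storage across several smaller volumes, as the support answer above suggests, determines whether burst still applies:

    # gp2: baseline of 3 IOPS per GiB (min 100, capped; the cap was raised
    # from 10,000 to 16,000 IOPS over time). Volumes with a baseline below
    # 3,000 IOPS can burst to 3,000 while burst credits last.
    def gp2_baseline_iops(size_gib, cap=16000):
        return min(max(3 * size_gib, 100), cap)

    for size_gib in (100, 500, 1000, 2048):
        baseline = gp2_baseline_iops(size_gib)
        burst = "can burst to 3,000" if baseline < 3000 else "no burst (baseline >= 3,000)"
        print(f"{size_gib:>5} GiB -> baseline {baseline:>6,} IOPS, {burst}")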

How to compute initial auto-scaling limits for a DynamoDB table

Our table has bursty writes, expected once a week. We have auto-scaling enabled, with provisioned capacity of 5 WCUs and a 70% target utilization. This suffices for our off-peak (non-bursty) traffic. However, during the bursty writes, the WCUs reach around 1.5-2k, which leads to a lot of throttled writes and ultimately write failures as well.
1) Is auto-scaling suitable for such a use case?
2) If yes, what should our initial provisioned capacity be?
This answer will tell you why auto-scaling is not working for you:
https://stackoverflow.com/a/53005089/4985580
This answer will tell you how you can configure your SDK to retry operations over a much longer period (and therefore stop your operation failures during peak requests).
What should be done when the provisioned throughput is exceeded?
Ultimately you should probably move your tables to on-demand.
For tables using on-demand mode, DynamoDB instantly accommodates customers' workloads as they ramp up or down to any previously observed traffic level. If the level of traffic hits a new peak, DynamoDB adapts rapidly to accommodate the workload.
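If you go that route, switching an existing table to on-demand is a single UpdateTable call; a minimal boto3 sketch (the table name is a placeholder):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Switch an existing table from provisioned capacity to on-demand billing.
    # "my-bursty-table" is a placeholder; note that you can only switch billing
    # modes a limited number of times per 24-hour period.
    dynamodb.update_table(
        TableName="my-bursty-table",
        BillingMode="PAY_PER_REQUEST",
    )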
No, auto-scaling is not suitable for your needs. It takes a few minutes to scale up, and it does so by increasing your current capacity by a fixed percentage each time. There's also a limited number of times it scales up or down per day, so you can't get from 5 to 2,000 in a matter of minutes. You may not even get there in a matter of hours.
I'd suggest trying on-demand mode, or manually setting capacity to 2,000 some time before you actually need it (it doesn't really scale instantly).
I strongly advise reading the ENTIRE DynamoDB documentation with regard to best practices for primary keys, GSIs, and data architecture. Depending on the size of your table (larger than 10 GB), the 2,000 units may get spread across partitions and you could potentially still have throttled requests.
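If you stay on provisioned capacity and pre-scale before the weekly burst, as suggested above, something along these lines could be scheduled ahead of the job. This is a sketch only, with placeholder names and numbers; note that an active auto-scaling policy may override manual values:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Raise write capacity ahead of the weekly burst, then lower it afterwards.
    # Table name and capacity values are placeholders; capacity decreases are
    # rate-limited per table per day.
    def set_capacity(table_name, read_units, write_units):
        dynamodb.update_table(
            TableName=table_name,
            ProvisionedThroughput={
                "ReadCapacityUnits": read_units,
                "WriteCapacityUnits": write_units,
            },
        )

    set_capacity("my-bursty-table", read_units=5, write_units=2000)  # before the burst
    # ... run the weekly job ...
    set_capacity("my-bursty-table", read_units=5, write_units=5)     # back to baseline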

GCP Dataflow vCPU usage and pricing question

I submitted a GCP Dataflow pipeline to receive my data from GCP Pub/Sub, parse it, and store it to GCP Datastore. It seems to work perfectly.
Over 21 days, I found the cost was $144.54 and the worked time was 2,094.72 hours. That means that after I submitted it, it is charged every second, even when it doesn't receive (process) any data from Pub/Sub.
Is this behavior normal? Or did I set the wrong parameters?
I thought CPU time would only be counted when data is received.
Is there any way to reduce the cost with the same working model (receive from Pub/Sub and store to Datastore)?
Cloud Dataflow service usage is billed in per-second increments, on a per-job basis. I guess your job used 4 n1-standard-1 workers, which used 4 vCPUs, giving an estimated 2,000 vCPU-hr of resource usage. Therefore, this behavior is normal. To reduce the cost, you can either use autoscaling to specify the maximum number of workers, or use pipeline options to override the resource settings allocated to each worker. Depending on your needs, you could also consider Cloud Functions, which costs less, but keep its limits in mind.
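A quick back-of-the-envelope check using the numbers from the question (the worker count and machine type are my assumptions):

    # A streaming Dataflow job bills for its workers' vCPUs (plus memory and
    # disk) for as long as the job runs, regardless of Pub/Sub traffic.
    workers = 4               # assumed n1-standard-1 workers, 1 vCPU each
    vcpus_per_worker = 1
    hours_running = 21 * 24   # the job ran continuously for ~21 days

    vcpu_hours = workers * vcpus_per_worker * hours_running   # ~2,016 vCPU-hr
    implied_rate = 144.54 / 2094.72   # $/vCPU-hr implied by the bill in the question

    print(f"~{vcpu_hours} vCPU-hr at ~${implied_rate:.3f}/vCPU-hr "
          f"~= ${vcpu_hours * implied_rate:.2f}")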
Hope it helps.

AWS snapshot frequency and its effect on cost

Does the frequency of AWS snapshots have any effect on price because of network consumption or any other parameter, say a snapshot every 30 minutes versus a single snapshot at the end of the day?
There isn't any cost associated with the creation of a snapshot, such as for network bandwidth.
The cost is in storing the snapshots, so the cost is related to how many you keep, not how many you make... as well as how different they all are from each other (and, of course, volume size, to some extent). If you were to snapshot a volume every few minutes and nothing on that volume were changing, then the incremental cost for each additional snapshot being stored would approach $0, because EBS snapshots are automatically deduplicated.
For snapshots, pricing is calculated based on the total size of your initial snapshot plus the incremental additions in size.
For example, if you have a 100GB volume, initial pricing applies to a 100GB snapshot. Let's say the 2nd snapshot is incremental and the volume is now 101GB (only 1GB was added); you will be charged for 100 + 1 GB. Likewise, you will keep being charged for the accumulated size.
However, if you copy your snapshots cross-region, there will be data transfer charges as well.
More Info: https://aws.amazon.com/ebs/pricing/
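As a rough sketch of how that accumulates, using the $0.05 per GB-month rate quoted in the answer below (the rate is region-dependent, so check the pricing page):

    # Snapshot storage bills for the full initial snapshot plus the changed
    # blocks captured by each incremental snapshot, per GB-month.
    rate_per_gb_month = 0.05             # region-dependent; see the EBS pricing page

    initial_snapshot_gb = 100            # first snapshot: full copy of the data
    incremental_changes_gb = [1, 1, 2]   # changed blocks in later snapshots

    stored_gb = initial_snapshot_gb + sum(incremental_changes_gb)
    print(f"~{stored_gb} GB stored -> ~${stored_gb * rate_per_gb_month:.2f} per month")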
Just in case it helps someone, I am adding this answer: neither the snapshot frequency nor keeping or deleting snapshots necessarily changes the cost. To support this, I am quoting these lines from the AWS user guide:
Deleting a snapshot might not reduce your organization's data storage costs. Other snapshots might reference that snapshot's data, and referenced data is always preserved. If you delete a snapshot containing data being used by a later snapshot, costs associated with the referenced data are allocated to the later snapshot.
Reference: Deleting an Amazon EBS Snapshot
Yes, you're paying for the snapshot storage. Per EBS Pricing:
$0.05 per GB-month of data stored
However:
Snapshot storage is based on the amount of space your data consumes in Amazon S3. Because Amazon EBS does not save empty blocks, it is likely that the snapshot size will be considerably less than your volume size. For the first snapshot of a volume, Amazon EBS saves a full copy of your data to Amazon S3. For each incremental snapshot, only the changed part of your Amazon EBS volume is saved.
So while you will pay more if you take snapshots frequently, it's hard to determine how much more. You may want to consider a different backup solution, as EBS snapshots are not the best one.

AWS S3 reduced redundancy costs the same as standard?

I typed the same parameters into the S3 calculator for RRS and standard, and the results seem the same. I put in 10M PUT requests and 10M GET requests for both, and they both came out to $5.38 in the us-east region.
Am i missing something here?
Thank you in advance.
The requests cost exactly the same for both types of storage.
The pricing difference comes down to cost per GB. For example, storing 100GB in standard S3 storage will cost you $3. Storing the same amount of data in reduced redundancy will cost you $2.40.
Pricing for standard can be found here: https://aws.amazon.com/s3/pricing/.
While pricing for Reduced can be found here: https://aws.amazon.com/s3/reduced-redundancy/.
Update: As the comment below points out, the price for Standard storage is now $0.023 per GB, so 100GB would cost $2.30, while Reduced Redundancy stayed at the same cost.
Now there's another option called Standard - Infrequent Access storage, which has the same benefits as S3 Standard and costs $0.0125 per GB, so storing 100GB on that tier would cost $1.25, but there are some caveats to watch for:
Minimum object size of 128KB.
Minimum storage duration of 30 days (if you delete the object earlier, you'll still be billed for 30 days).
A per-GB retrieval fee ($0.01 per GB), similar to the one in AWS Glacier.
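Putting the per-GB rates mentioned above side by side for the 100GB example (rates change over time and vary by region, so treat these as illustrative):

    # Storage-only cost comparison for 100GB at the per-GB rates quoted above.
    rates_per_gb_month = {
        "Standard": 0.023,
        "Reduced Redundancy": 0.024,
        "Standard - Infrequent Access": 0.0125,  # plus $0.01/GB retrieval, 30-day minimum
    }

    stored_gb = 100
    for tier, rate in rates_per_gb_month.items():
        print(f"{tier:<30} ~${stored_gb * rate:.2f}/month")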
Refer to the document below.
https://aws.amazon.com/s3/pricing/
Anyway, pricing differs from region to region and depends on the number of PUT/GET requests and on data transfer.