AWS S3 Reduced Redundancy costs the same as Standard? - amazon-web-services

I typed the same parameters into the S3 pricing calculator for RRS and for Standard, and the results seem to be the same. I put in 10M PUT requests and 10M GET requests for both, and both came out to $5.38 in the us-east region.
Am I missing something here?
Thank you in advance.

Requests cost exactly the same for both types of storage.
The pricing difference comes down to the cost per GB. For example, storing 100GB in Standard S3 storage will cost you $3, while storing the same amount of data in Reduced Redundancy will cost you $2.40.
Pricing for standard can be found here: https://aws.amazon.com/s3/pricing/.
While pricing for Reduced can be found here: https://aws.amazon.com/s3/reduced-redundancy/.
Update: As the comment below points out, the price for Standard storage is now $0.023 per GB, so 100GB would cost $2.30, while Reduced Redundancy stayed at its old price of $2.40.
There is now another option called Standard - Infrequent Access, which offers the same durability as S3 Standard at a cost of $0.0125 per GB, so storing 100GB on this tier would cost $1.25. There are some caveats to watch for (a quick cost sketch follows this list):
A minimum billable object size of 128KB.
A minimum storage duration of 30 days (if you delete an object earlier, you are still billed for the full 30 days).
A per-GB retrieval fee ($0.01 per GB), similar to the one in AWS Glacier.
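To make the comparison concrete, here is a minimal Python sketch using only the per-GB prices quoted in this answer (check the pricing pages for current rates); it assumes the full 100GB is read back once during the month:

    # Cost sketch using the per-GB prices quoted above (subject to change).
    GB = 100

    prices = {                        # $ per GB-month
        "Standard": 0.023,
        "Reduced Redundancy": 0.024,
        "Standard-IA": 0.0125,
    }
    IA_RETRIEVAL = 0.01               # $ per GB retrieved (Standard-IA only)

    for cls, per_gb in prices.items():
        storage = GB * per_gb
        # assume the whole 100GB is retrieved once during the month
        retrieval = GB * IA_RETRIEVAL if cls == "Standard-IA" else 0.0
        print(f"{cls}: ${storage:.2f} storage + ${retrieval:.2f} retrieval"
              f" = ${storage + retrieval:.2f}")

With a single full retrieval, Standard-IA ($2.25) only barely beats Standard ($2.30); with no retrievals it costs $1.25.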

Refer to the document below:
https://aws.amazon.com/s3/pricing/
In any case, pricing differs by region, by the number of PUT/GET requests, and by data transfer.

Related

How can I pinpoint the source of AWS S3 costs?

I'm currently surprised by a pretty high AWS S3 cost of over 31 USD per day (I expected 9-12 USD per month):
I'm using eu-central-1
All buckets combined are less than 400 GB
No replication
The best explanation I have is that the number of requests was way higher than expected. But I don't know how I can confirm this. How can I narrow down the source of AWS S3 cost?
Is it possible to see the costs by bucket?
Is it possible to see a breakdown by storage / requests / transfers / other features like replication?
First, pay attention to the factors on which AWS S3 charges: storage used, the number of requests S3 receives, data transfer, and data retrieval.
Some ways to cut and keep track of the cost (see the sketch after this list):
Delete previous object versions if you don't need them.
Move data to a different S3 storage class based on how frequently it is retrieved.
Activate cost allocation tags on your buckets so that you can review the cost of each individual bucket.
Create an S3 Storage Lens dashboard for all the buckets in your account.
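For the request/storage/transfer breakdown, a hedged sketch using the Cost Explorer API through boto3 (it assumes Cost Explorer is enabled for the account) might look like this:

    import boto3
    from datetime import date, timedelta

    # Break down S3 spend by usage type (storage, requests, transfer)
    # over the last 7 days.
    ce = boto3.client("ce")
    end = date.today()
    start = end - timedelta(days=7)

    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
        Filter={"Dimensions": {"Key": "SERVICE",
                               "Values": ["Amazon Simple Storage Service"]}},
        GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
    )

    for day in resp["ResultsByTime"]:
        print(day["TimePeriod"]["Start"])
        for group in day["Groups"]:
            cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
            if cost >= 0.01:
                print(f"  {group['Keys'][0]}: ${cost:.2f}")

The USAGE_TYPE dimension is what separates TimedStorage (storage) from Requests-Tier1/Tier2 (PUT/GET) and DataTransfer line items, which answers the "breakdown by storage / requests / transfers" part of the question.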

S3 Standard to Glacier - Lifecycle Transition Cost

I wanted to confirm that my understanding of the cost of lifecycle-policy-based transitions of files from Standard to Glacier is correct, as illustrated by the example below.
Per 1,000 files transitioned, we are charged $0.06 (ap-south-1 region) to transfer to Glacier.
For example:
Bucket A: has 1 million files (3TB total size). If we move all the objects to Glacier, we will be charged 1,000,000 × $0.06 / 1,000 = $60.
Bucket B: has 300 files (3TB total size). If we move all the objects to Glacier, we will be charged $0.06 or less (as it has fewer than 1,000 files).
Yes, the transition costs are indeed driven by the number of files being moved. A transition is similar to performing a new PUT operation to S3: you pay based on the number of requests being made. Once the files are part of the new storage class, you are charged for storage at that class's rate.
As you may note, a transition to Glacier (or a PUT to Glacier) is around 10 times costlier than a corresponding PUT to S3 Standard. In ap-south-1, an S3 PUT is charged at $0.005 per 1,000 requests, while a Glacier transition (or Glacier PUT) is charged at $0.06 per 1,000 requests (as of May 2020).
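To see how quickly the cheaper Glacier storage pays back the transition fee, here is a rough break-even sketch; the transition fee is the $0.06 per 1,000 requests quoted above, but the per-GB storage prices are illustrative assumptions, so substitute current ap-south-1 rates before relying on it:

    # Break-even estimate for S3 Standard -> Glacier transitions.
    def months_to_break_even(num_files, total_gb,
                             transition_per_1000=0.06,  # quoted above
                             standard_per_gb=0.025,     # assumed price
                             glacier_per_gb=0.005):     # assumed price
        transition_cost = num_files / 1000 * transition_per_1000
        monthly_saving = total_gb * (standard_per_gb - glacier_per_gb)
        return transition_cost / monthly_saving

    # Bucket A: 1 million files, 3TB -> $60 fee, pays back in ~1 month
    print(f"{months_to_break_even(1_000_000, 3072):.2f} months")
    # Bucket B: 300 files, 3TB -> $0.018 fee, pays back almost immediately
    print(f"{months_to_break_even(300, 3072):.5f} months")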
Also, there are additional costs that need to be considered when moving data from S3 to Glacier. Hence it is always a good idea to do a cost analysis of whether the move makes sense and to determine when, if at all, you would see any savings.
I have covered such a cost analysis, with the various costs involved, in great detail in a blog post, in case you are interested.
http://pragmaticnotes.com/2020/04/22/s3-to-glacier-lifecycle-transition-see-if-its-worth-it
Hope this helps!

Is it a good practice to use AWS S3 Infrequent Access (IA) with CloudFront for static website resources?

I have a static site hosted in S3 that I need to front with CloudFront; in other words, I have no option but to put CloudFront in front of it. I would like to reduce my S3 costs by changing the objects' storage class to S3 Infrequent Access (IA), which would cut my S3 costs by about 45%. That would be nice, since I now have to spend money on CloudFront as well. Is this a good practice, given that the resources will be cached by CloudFront anyway? S3 IA has a 99.9% availability SLA, which means it can have as much as 8.75 hours of downtime per year.
First, don't worry about the downtime. Unless you are using Reduced Redundancy or One-Zone Storage, all data on S3 has pretty much the same redundancy and therefore very high availability.
S3 Standard-IA is pretty much half price for storage ($0.0125 per GB) compared to S3 Standard ($0.023 per GB). However, the data retrieval cost for Standard-IA is $0.01 per GB, so if the data is retrieved more than once per month, Standard-IA works out more expensive.
While putting Amazon CloudFront in front of S3 reduces how often the data is accessed, it's worth noting that CloudFront caches separately in each region. If users in Singapore, Sydney and Tokyo all requested the data, it would be fetched three times from S3, so data stored as Standard-IA would incur 3 × $0.01 per GB in retrieval charges, making it much more expensive.
See: Announcing Regional Edge Caches for Amazon CloudFront
Bottom line: If the data is going to be accessed at least once per month, it is cheaper to use Standard Storage instead of Standard-Infrequent Access.
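A small sketch of that bottom line, using the prices quoted above; fetches_per_month models how many times CloudFront's regional caches pull each GB from S3 (for example, 3 when viewers in three regions each cause a cache miss):

    # Standard vs Standard-IA as a CloudFront origin (prices quoted above).
    def monthly_cost(gb, fetches_per_month, storage_per_gb, retrieval_per_gb=0.0):
        return gb * (storage_per_gb + fetches_per_month * retrieval_per_gb)

    gb = 100
    for fetches in (0, 1, 3):
        std = monthly_cost(gb, fetches, 0.023)
        ia = monthly_cost(gb, fetches, 0.0125, retrieval_per_gb=0.01)
        print(f"{fetches} fetches/month: Standard ${std:.2f}, Standard-IA ${ia:.2f}")

At zero fetches IA wins ($1.25 vs $2.30), at one fetch per month it is roughly a wash ($2.25 vs $2.30), and at three it loses badly ($4.25 vs $2.30).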

AWS - How is EBS charged?

How is EBS charged? Is it per stored data and bandwidth, per allocated storage and bandwidth, per stored data and IO operations, or per allocated storage and IO operations?
It's stated pretty clearly in the documentation:
https://aws.amazon.com/ebs/pricing/
Generally, it's based on the disk type, provisioned size, and provisioned IOPS (where applicable), multiplied by the amount of time until you delete the resource.
Prices also vary a bit from region to region. Whether or not you actually use the provisioned resources has no effect on your billed amount.
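As an illustration of "provisioned size multiplied by time", here is a minimal sketch; the $0.10 per GB-month figure is an assumption in the ballpark of a gp2 volume, not current pricing:

    # EBS bills on what you provision, prorated by how long the volume
    # exists, regardless of how full it is or how much you read/write.
    def ebs_monthly_cost(provisioned_gb, hours_allocated,
                         per_gb_month=0.10,      # assumed gp2-style price
                         hours_per_month=730):
        return provisioned_gb * per_gb_month * hours_allocated / hours_per_month

    print(f"${ebs_monthly_cost(100, 730):.2f}")  # kept all month -> $10.00
    print(f"${ebs_monthly_cost(100, 24):.2f}")   # deleted after a day -> $0.33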
Also: if you need help setting up AWS resources, ask those questions on Server Fault.

Are S3 storage costs based on total current usage or on total volume ingested?

Suppose I have a script which uploads a 100GB object every day to my S3 bucket. This same script will delete any file older than 1 week from the bucket. How much will I be charged at the end of the month?
Let's use pricing from the us-west-2 region. Suppose this is a 30-day month and I start with no data in the bucket at the beginning of the month.
If charged for the maximum bucket volume in the month, I would have 700GB at the end of the month and be charged $0.023 × 7 × 100 = $16.10, plus a little for my PUT requests ($0.005 per 1,000 requests, so effectively $0).
If charged for the total amount of data that transited through the bucket over the course of the month, I would be charged $0.023 × 30 × 100 = $69 (again, plus effectively $0 for PUT requests).
I'm not clear on which of these two cases Amazon bills. This becomes very important for me, since I expect to have a high amount of churn in my bucket.
Both of your calculations are incorrect, although the first one comes close to the right answer, for the wrong reason. It is neither peak nor end-of-month that matters.
The charge for storage is calculated hourly. For all practical purposes, this is the same as saying that you are billed for your average storage over the course of a month -- not your maximum, and not the amount you uploaded.
Storing 30 GB for 30 days or storing 900 GB for 1 day would cost the same amount, $0.69.
The volume of storage billed in a month is based on the average storage used throughout the month. This includes all object data and metadata stored in buckets that you created under your AWS account. We measure your storage usage in “TimedStorage-ByteHrs,” which are added up at the end of the month to generate your monthly charges.
https://aws.amazon.com/s3/faqs/#billing
This is true for STANDARD storage.
STANDARD_IA and GLACIER are also billed hourly, but there is a notable penalty for early deletion: Each object stored in these classes has a minimum billable lifetime of 30 days in IA or 90 days in Glacier, no matter when you delete it. Both of these alternate storage classes are only appropriate for data you do not intend to delete soon or retrieve often, by design.
REDUCED_REDUNDANCY storage follows the same rules as STANDARD (hourly billing, no early delete penalty) but after the most recent round of price decreases, it is now only less expensive than STANDARD in regions with higher costs. It is an older offering that is no longer competitively priced in regions where STANDARD pricing is lowest.
Your bill for storage will be closer to your #1 example. It may be slightly higher, because for brief periods (while uploading on the 8th day, before the oldest object is deleted) you still have a full 7 days of storage accruing charges, but it will be nowhere near your #2 example.
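Here is a day-granularity sketch of that calculation for the scenario in the question (100 GB uploaded per day, anything 7 or more days old deleted, us-west-2 Standard at $0.023 per GB-month); real billing is hourly, so treat this as an approximation:

    # Approximate average-storage billing for the upload/expire scenario.
    PER_GB_MONTH = 0.023
    DAYS = 30

    ages, gb_days = [], 0            # ages (days) of each 100GB object
    for day in range(DAYS):
        ages = [a + 1 for a in ages if a + 1 < 7]  # drop objects 7+ days old
        ages.append(0)               # today's 100GB upload
        gb_days += len(ages) * 100

    avg_gb = gb_days / DAYS
    print(f"average {avg_gb:.0f} GB -> ${avg_gb * PER_GB_MONTH:.2f}/month")

This prints an average of 630 GB, about $14.49, near the $16.10 of example #1 (slightly lower here because of the empty-bucket ramp-up in the first week) and nowhere near the $69 of example #2.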
Firstly, you don't need a script to delete files older than 1 week. You can set a lifecycle rule on the bucket that does this automatically, or that transitions the contents to Glacier (at roughly a tenth of the storage cost) if you might need them later.
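A minimal boto3 sketch of such a lifecycle rule (the bucket name is a placeholder) could be:

    import boto3

    # Expire objects automatically after 7 days instead of scripting deletes.
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-churn-bucket",                # placeholder name
        LifecycleConfiguration={
            "Rules": [{
                "ID": "expire-after-one-week",
                "Filter": {"Prefix": ""},        # apply to all objects
                "Status": "Enabled",
                "Expiration": {"Days": 7},
            }]
        },
    )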
Secondly, the storage cost might not be huge. If you do keep the script, a better approach would be for it to delete the old data from S3 first and then add the new data, so that the bucket never holds more than it needs to and you are charged on a consistent storage basis.
Thirdly, your main charges could be bandwidth (data transfer) charges, which can be really large if not handled well, given how much data you are transferring. If all this data is generated internally on your compute grid, make sure you create a VPC endpoint for S3 so that you don't pay bandwidth charges; the transfer is then treated as staying on the AWS internal network.
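A sketch of creating such a Gateway endpoint with boto3 (the VPC ID, route table ID and region are placeholders):

    import boto3

    # Route in-VPC S3 traffic through a Gateway endpoint so it stays on
    # the AWS network and avoids data-transfer charges.
    ec2 = boto3.client("ec2", region_name="us-west-2")
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",                  # placeholder
        ServiceName="com.amazonaws.us-west-2.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],        # placeholder
    )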