How can I pin-point the source of AWS S3 costs? - amazon-web-services

I'm currently surprised by a pretty high AWS S3 cost of over 31 USD per day (I expected 9–12 USD per month):
I'm using eu-central-1
All buckets combined are less than 400 GB
No replication
The best explanation I have is that the number of requests was way higher than expected. But I don't know how I can confirm this. How can I narrow down the source of AWS S3 cost?
Is it possible to see the costs by bucket?
Is it possible to see a breakdown by storage / requests / transfers / other features like replication?

First, pay attention to the factors on which Amazon S3 charges: storage, the number of requests S3 receives, data transfer, and data retrieval.
Some ways to cut and keep track of the cost:
Delete previous object versions in your buckets if you don't need them.
Move data to a different S3 storage class based on how frequently it is retrieved.
Activate cost allocation tags on your buckets so that you can review the cost of each bucket individually.
Create an S3 Storage Lens dashboard for all the buckets in your account.
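As a sketch of the storage-class suggestion above, here is a hypothetical S3 lifecycle configuration (the rule ID and day counts are assumptions, not a recommendation) that transitions objects to Standard-IA after 30 days, to Glacier after 90 days, and expires noncurrent versions:

```json
{
  "Rules": [
    {
      "ID": "tier-down-and-clean-up",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"}
      ],
      "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
    }
  ]
}
```

Saved as lifecycle.json, this can be applied with aws s3api put-bucket-lifecycle-configuration --bucket YOUR_BUCKET --lifecycle-configuration file://lifecycle.json (bucket name is a placeholder).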

Related

Need details on how aws data transfer costs are calculated using S3

We're making a social media app using Amplify and are newbies to AWS. The services we're using include S3, Auth, Hosting, Analytics, API and Lambda functions. We've already accrued significant data transfer usage, and I'm guessing it's from repeatedly grabbing images from S3.
Does Storage.get(), which generates a presigned URL, count as "data transfer out"?
Or does it only count when we actually view the file from the URL?
Is there a difference in data transfer between generating a URL and downloading the actual file with Storage.get()?
Major costs associated with S3:
Storage cost: charged per GB per month, ~$0.03/GB/month, billed hourly.
API cost for operations on files: ~$0.005 per 10,000 read requests; write requests are roughly 10 times more expensive.
Data transfer outside of the AWS region: ~$0.02/GB to a different AWS region, ~$0.06/GB to the internet.
The actual prices differ a bit based on volume and region, but the optimization techniques stay the same. I will use the above prices in the following cost estimates.
Note: leverage the AWS Pricing Calculator as much as you can to save cost. It lets you estimate the cost of your architecture.
Generating a URL does not cost anything because it's just a computational operation.
The price of S3 downloads can be computed using the AWS Pricing Calculator. Consider putting a CDN such as CloudFront in front of S3; there are many benefits to using a CDN, one of which is lower data transfer pricing.
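Putting the approximate prices above together, a minimal back-of-the-envelope estimator (all rates are the illustrative figures quoted above, not current AWS pricing; the example workload numbers are assumptions):

```python
# Rough monthly S3 cost estimate using the approximate prices quoted above.
# All rates are illustrative assumptions, not current AWS pricing.

STORAGE_PER_GB_MONTH = 0.03   # ~$0.03 / GB / month
READ_PER_10K = 0.005          # ~$0.005 / 10,000 read requests
WRITE_PER_10K = 0.05          # writes ~10x more expensive than reads
TRANSFER_OUT_PER_GB = 0.06    # ~$0.06 / GB to the internet

def estimate_monthly_cost(storage_gb, reads, writes, transfer_out_gb):
    """Sum the four major S3 cost components for one month."""
    return (storage_gb * STORAGE_PER_GB_MONTH
            + reads / 10_000 * READ_PER_10K
            + writes / 10_000 * WRITE_PER_10K
            + transfer_out_gb * TRANSFER_OUT_PER_GB)

# Example: 400 GB stored, 1M reads, 100k writes, 50 GB served to the internet
cost = estimate_monthly_cost(400, 1_000_000, 100_000, 50)
print(f"${cost:.2f}")  # $16.00
```

At these rates, storage dominates ($12 of the $16); heavy request or transfer volumes change that balance quickly, which is why a per-bucket breakdown matters.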

Is it a good practice to use AWS S3 Infrequent Access (IA) with CloudFront for static website resources?

I am in a position where I have a static site hosted in S3 that I need to front with CloudFront; in other words, I have no option but to put CloudFront in front of it. I would like to reduce my S3 costs by changing the objects' storage class to S3 Infrequent Access (IA), which would cut my S3 costs by about 45%, which is nice since I now have to spend money on CloudFront as well. Is this good practice, given that the resources will be cached by CloudFront anyway? S3 IA has 99.9% availability, which means it can have as much as 8.75 hours of downtime per year.
First, don't worry about the downtime. Unless you are using Reduced Redundancy or One-Zone Storage, all data on S3 has pretty much the same redundancy and therefore very high availability.
S3 Standard-IA is pretty much half-price for storage ($0.0125 per GB) compared to S3 Standard ($0.023 per GB). However, data retrieval for Standard-IA costs $0.01 per GB. Thus, if the data is retrieved more than once per month, Standard-IA is more expensive.
While using Amazon CloudFront in front of S3 would reduce data access frequency, it's worth noting that CloudFront caches separately in each region. So, if users in Singapore, Sydney and Tokyo all requested the data, it would be fetched three times from S3. So, data stored as Standard-IA would incur 3 x $0.01 per GB charges, making it much more expensive.
See: Announcing Regional Edge Caches for Amazon CloudFront
Bottom line: If the data is going to be accessed at least once per month, it is cheaper to use Standard Storage instead of Standard-Infrequent Access.
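The break-even reasoning above can be checked with a few lines of arithmetic, using the per-GB prices quoted in this answer (illustrative; check current pricing for your region):

```python
# Per-GB monthly cost of S3 Standard vs Standard-IA as a function of how
# often the data is retrieved, using the prices quoted above (illustrative).

STANDARD_STORAGE = 0.023   # $/GB-month, S3 Standard (no retrieval fee)
IA_STORAGE = 0.0125        # $/GB-month, S3 Standard-IA
IA_RETRIEVAL = 0.01        # $/GB per retrieval, Standard-IA only

def standard_cost(retrievals_per_month):
    # Standard charges for storage only
    return STANDARD_STORAGE

def ia_cost(retrievals_per_month):
    return IA_STORAGE + retrievals_per_month * IA_RETRIEVAL

for n in range(4):
    winner = "Standard-IA" if ia_cost(n) < standard_cost(n) else "Standard"
    print(f"{n} retrievals/month: IA=${ia_cost(n):.4f} "
          f"vs Std=${standard_cost(n):.4f} -> {winner}")
```

At exactly one retrieval per month the two are nearly equal ($0.0225 vs $0.023); from two retrievals onward Standard is clearly cheaper, and the CloudFront regional-cache multiplier makes that threshold easy to cross.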

Pricing Structure of Amazon S3 AWS

I'm a newb at this and just want a clear understanding of how price is calculated.
I want to use S3 cloud storage so that people can upload video, let's say 5 GB per day; we download the video, make changes, and upload it again for the person to retrieve.
That is 4× 5 GB of upload/download. Would that be 20 GB of transfer total? On the high end, AWS allows 100 GB of transfer per month (https://www.hostingadvice.com/how-to/aws-s3-pricing/). If our transfer is 20 GB each day, that will add up pretty fast.
Is there a more cost effective way to do cloud storage?
Thanks!
Pricing for Amazon S3 can be found at: S3 Pricing by Region | Amazon Simple Storage Service
The data transfer costs incurred would be:
Upload from Internet to S3: Zero
Download from S3 to an Amazon EC2 instance in the same region (for processing): Zero
Upload from EC2 to S3 in same region: Zero
Download from Amazon S3 to Internet: 9c/GB (in USA, but varies elsewhere!)
For a 5GB file, total data transfer costs: 5 x 9c = 45c
There would also be storage costs (2.3c/GB per month) and request pricing (way under 1c).
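The per-item breakdown above can be totalled for the 5 GB/day workflow (rates are the quoted US-region figures; the 30-day month is an assumption):

```python
# Data transfer cost for the 5 GB/day edit-and-return workflow described
# above, using the quoted US-region rates (illustrative). Only the final
# download from S3 to the internet is charged; the upload to S3 and the
# same-region traffic to/from EC2 are free.

TRANSFER_OUT_PER_GB = 0.09   # $/GB, S3 -> internet

daily_out_gb = 5             # edited video sent back to the user
daily_cost = daily_out_gb * TRANSFER_OUT_PER_GB
monthly_cost = daily_cost * 30
print(f"per day: ${daily_cost:.2f}, per month: ${monthly_cost:.2f}")
# per day: $0.45, per month: $13.50
```

So the intra-AWS legs of the workflow cost nothing; only the final delivery to the person adds up.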

How to monitor daily costs of Amazon S3

I was reading up on the dos and don'ts of Amazon S3 and looking for options to monitor usage daily. I have a couple of questions:
I see on the Amazon S3 pricing page that PUT, COPY, POST, or LIST Requests are $0.005 per 1,000 requests
I have 5 buckets in S3, and each bucket has subfolders containing 100k files.
If I run aws s3 ls --recursive, will I be charged:
5 × 100k = 500,000 objects / 1,000 = 500 × $0.005 = $2.50?
Any suggestions on tools that can be used to email daily usage rate of my S3 bucket?
A LIST request can return up to 1,000 objects. Therefore, obtaining the full list of 100k objects requires 100 LIST requests; at $0.005 per 1,000 requests, that costs a tenth of $0.005 ($0.0005).
Repeating this for 5 buckets requires 500 LIST requests, which costs half of $0.005 ($0.0025).
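The correction above can be sketched in a few lines (the per-request price and 1,000-keys-per-LIST page size are the figures from this thread):

```python
# Cost of the LIST requests needed to enumerate all objects, following the
# reasoning above: each LIST returns at most 1,000 keys, billed at
# $0.005 per 1,000 requests (illustrative price).
import math

PRICE_PER_1000_REQUESTS = 0.005
KEYS_PER_LIST = 1000

def list_cost(objects_per_bucket, buckets=1):
    requests = buckets * math.ceil(objects_per_bucket / KEYS_PER_LIST)
    return requests / 1000 * PRICE_PER_1000_REQUESTS

print(list_cost(100_000))      # 100 requests -> $0.0005
print(list_cost(100_000, 5))   # 500 requests -> $0.0025
```

In other words, the question's $2.50 estimate multiplied the per-1,000-requests price by each request individually; the real cost is a thousandth of that.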
Alternatively you could use Amazon S3 Storage Inventory, which can provide a daily/weekly CSV file that contains a list of all your objects. It is charged at $0.0025 per million objects listed (per bucket per run).
I sense that you are worried about your Amazon S3 costs. You can configure billing alerts and notifications to remain aware of when your costs rise above expected levels. This works for all AWS services.
See:
Monitoring Charges with Alerts and Notifications
Create a Billing Alarm to Monitor Your Estimated AWS Charges

Number of Buckets in Amazon Free Tier

I am trying out Amazon S3 for my file uploads and would like separate buckets for the development, test, and production environments. The Amazon documentation mentions the following:
As part of the AWS Free Usage Tier, you can get started with Amazon S3 for free. Upon sign-up, new AWS customers receive 5 GB of Amazon S3 storage, 20,000 Get Requests, 2,000 Put Requests, 15 GB of data transfer in, and 15 GB of data transfer out each month for one year.
Is there any limitation on the number of buckets? I mean, if I have three buckets and I stay within the overall storage limit, will I be charged?
Each AWS account is limited to 100 buckets by default -- even if you are paying the normal usage rates.
Buckets are not billable items in S3.
If the limit of 100 is not enough, you can create virtual folders within your buckets and structure your environment that way.