Need details on how AWS data transfer costs are calculated using S3

We're building a social media app with Amplify and are newbies to AWS. The services we're using include S3, Auth, Hosting, Analytics, API, and Lambda functions. We've already accrued significant data transfer usage, and I'm guessing it's from repeatedly fetching images from S3.
Does Storage.get(), which generates a presigned URL, count as "data transfer out"?
Or does it only count when we actually view the file from the URL?
Is there a difference in data transfer between generating a URL and downloading the actual file with Storage.get()?

Major costs associated with S3:
Storage cost: charged per GB per month, roughly $0.03/GB/month, accrued hourly.
API cost for operations on files: roughly $0.005 per 10,000 read requests; write requests are about 10 times more expensive.
Data transfer out of the AWS region: roughly $0.02/GB to a different AWS region, roughly $0.06/GB to the internet.
The actual prices differ a bit based on volume and region, but the optimization techniques stay the same. I will use the above prices in the cost estimate below.
Note: leverage the AWS Pricing Calculator as much as you can; it lets you estimate the cost of your architecture solution before you build it.
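As a quick illustration of how these three line items combine, here is a back-of-the-envelope estimate; the workload numbers (100 GB stored, 2 million reads, 50 GB out to the internet) are made up for the example:

    // Hypothetical monthly S3 bill using the approximate rates above.
    const storageCost  = 100 * 0.03;                   // 100 GB stored         -> $3.00
    const requestCost  = (2_000_000 / 10_000) * 0.005; // 2M GET requests       -> $1.00
    const transferCost = 50 * 0.06;                    // 50 GB to the internet -> $3.00

    console.log(`Total: $${(storageCost + requestCost + transferCost).toFixed(2)}`); // Total: $7.00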

Generating a presigned URL does not cost anything: it is a purely local computation in the SDK, so no request is sent to S3 and no data is transferred. You pay for one GET request plus "data transfer out" of the object's size only when the file is actually downloaded from that URL.
The price of S3 downloads can be computed using the AWS Pricing Calculator. Also consider putting a CDN such as CloudFront in front of S3. There are many benefits to using a CDN, one of which is pricing: transfer from S3 to CloudFront is free, and you pay CloudFront's generally lower per-GB rates to the internet instead.
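To make the Storage.get() distinction concrete, here is a minimal sketch assuming the Amplify JS v5 Storage category and a hypothetical object key; only the second call incurs data transfer:

    import { Storage } from 'aws-amplify';

    async function example() {
      // Returns a presigned URL. The URL is signed locally by the SDK:
      // no request reaches S3 and no data transfer is charged.
      const url = await Storage.get('photos/avatar.jpg');

      // Actually downloads the object: one GET request plus
      // "data transfer out" for the object's full size.
      const result = await Storage.get('photos/avatar.jpg', { download: true });
    }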

Related

How can I pinpoint the source of AWS S3 costs?

I'm currently surprised by a pretty high AWS S3 cost of over 31 USD per day (I expected 9-12 USD per month):
I'm using eu-central-1
All buckets combined are less than 400 GB
No replication
The best explanation I have is that the number of requests was way higher than expected. But I don't know how I can confirm this. How can I narrow down the source of AWS S3 cost?
Is it possible to see the costs by bucket?
Is it possible to see a breakdown by storage / requests / transfers / other features like replication?
First, pay attention to the factors on which Amazon S3 charges, i.e. storage used, number of requests S3 receives, data transfer, and data retrieval.
Some ways to cut the cost and keep track of it:
Delete previous object versions if you don't need them.
Move data to a different S3 storage class based on how frequently it is retrieved (see the lifecycle sketch below).
Activate cost allocation tags on your buckets so that you can review the cost of each bucket individually.
Create an S3 Storage Lens dashboard for all the buckets in your account.
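The first two suggestions can be automated with a lifecycle configuration. A minimal sketch using the AWS SDK for JavaScript v3, where the bucket name and the 30-day thresholds are assumptions for illustration:

    import { S3Client, PutBucketLifecycleConfigurationCommand } from '@aws-sdk/client-s3';

    const s3 = new S3Client({ region: 'eu-central-1' });

    async function applyLifecycleRules() {
      // One rule moves objects to Standard-IA after 30 days; the other
      // expires noncurrent versions so old copies stop accruing storage cost.
      await s3.send(new PutBucketLifecycleConfigurationCommand({
        Bucket: 'my-app-media',
        LifecycleConfiguration: {
          Rules: [
            { ID: 'to-infrequent-access', Status: 'Enabled', Filter: { Prefix: '' },
              Transitions: [{ Days: 30, StorageClass: 'STANDARD_IA' }] },
            { ID: 'drop-old-versions', Status: 'Enabled', Filter: { Prefix: '' },
              NoncurrentVersionExpiration: { NoncurrentDays: 30 } },
          ],
        },
      }));
    }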

I am using DigitalOcean cloud storage, and I want to migrate to AWS S3

I am using DigitalOcean Spaces as cloud storage for user data, and it charges me both for hosting the data and for data transfer. So I want to migrate to Amazon S3 (frequent access). I went through the official AWS S3 docs and understood that it charges only for the data hosted in its storage, regardless of the number of retrievals. I am new to the AWS ecosystem and not sure about AWS's pricing model. Please give me a pricing estimate for the following scenario:
=> any user can upload data in my mobile application
=> I store around 100 GB of data in AWS S3
=> I retrieve that 100 GB around 50 to 100 times a day in my mobile app
=> how much do I need to pay per month?
=> current pricing to store 1 GB is around $0.02 ($0.02/GB)
Not sure what documentation you were reading, but the official S3 pricing page is pretty clear that you are charged for:
Data storage, which depends on region but is somewhere between 2 and 5 US cents per gigabyte, per month.
Number of requests, which again depends on region, but is on the order of a few US cents per 1,000 requests (retrieving a file is a GET request; uploading a file is a PUT request).
Data transfer, which again depends on region, but ranges from a low of $0.09/GB in the US regions to a high (I think) of $0.154/GB in the Cape Town region.
So, if you're retrieving 100 GB of data 100 times a day, you will be paying data transfer costs of anywhere from $900 to $1,540 per day.
In my experience, DigitalOcean tends to be cheaper than AWS for most things (but you get fewer features). However, if you're really transferring 10 TB of data per day (I think that's unlikely, but it's what you asked), you should look for a hosting service that offers unlimited bandwidth.
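To make the arithmetic explicit, here is a quick sketch of the scenario at the US-region rates quoted above, with the retrieval count taken at the upper end of "50 to 100":

    // Scenario: 100 GB stored, downloaded in full 100 times a day.
    const storedGb = 100;
    const downloadsPerDay = 100;

    const storagePerMonth  = storedGb * 0.023;                  // ~$2.30
    const transferPerDay   = storedGb * downloadsPerDay * 0.09; // 10 TB/day -> $900
    const transferPerMonth = transferPerDay * 30;               // ~$27,000

    console.log({ storagePerMonth, transferPerDay, transferPerMonth });

Data transfer, not storage, completely dominates this workload, which is why the $0.02/GB storage price alone is misleading.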

What AWS charges are incurred if I upload images through API Gateway RestAPI to S3 bucket?

I'm looking to understand what charges exactly would be incurred if I were to, for example, create a private API Gateway REST API (and perhaps an ASP.NET Core web API) which streams images/documents into an S3 bucket.
The reason I am considering this is to reuse the existing authentication mechanism that is in place for the private REST API, and to avoid the complexity of allowing direct S3 uploads using things like Direct Connect.
I was told by someone that doing this would cause the bill to rise, and there were concerns about costs.
I'm just looking to understand all the costs involved. Again, all I want here is an API endpoint that clients can upload images to, while avoiding the complexity of creating a private connection between on-prem clients and S3 (which looks complex).
Is anyone doing something similar to this?
As per the AWS documentation, the maximum single payload size API Gateway can handle is 10 MB. Assuming all POST requests stay under that limit, the costs to consider come from API Gateway, S3, and (assuming you want to process the files first) Lambda. Since no region was given, pricing is based on US East 2 (Ohio), and the free tier is treated as non-existent (maximum charges).
Breaking the pricing into those three parts, you can expect the following:
Total ~ $6.72 USD/month
API Gateway ~ $0.88 USD
An HTTP API used for uploading data, called 100,000 times a month with documents averaging 5 MB in size; HTTP API requests are metered in 512 KB increments: [100,000 × (5 MB / 512 KB)] × [$0.90 per million] ≈ $0.88
S3 ~ $1.65 USD
50 GB of files stored for a month in Standard storage, with 100,000 PUT requests and no GET requests: [50 × $0.023] + [100,000 × $0.000005] = $1.65
Lambda ~ $4.19 USD
A function with 512 MB of memory, executed 100,000 times in the month and running 5 seconds per invocation: [(100,000 × 5 s) × (512/1024 GB) × $0.00001667] + [$0.20 × 0.1] ≈ $4.19
If you want more specific information on a service, I suggest you look at the AWS pricing pages for each service, which have a very extensive breakdown of costs.
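For reference, a minimal sketch of the Lambda in this architecture: API Gateway passes the request body through (base64-encoded for binary media types) and the function writes it to S3. The bucket name and key scheme are hypothetical.

    import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
    import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

    const s3 = new S3Client({});

    export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
      // API Gateway base64-encodes binary payloads; decode accordingly.
      const body = Buffer.from(event.body ?? '', event.isBase64Encoded ? 'base64' : 'utf8');
      const key = `uploads/${Date.now()}.bin`;

      // One PUT request per upload, at ~$0.005 per 1,000 requests.
      await s3.send(new PutObjectCommand({ Bucket: 'my-upload-bucket', Key: key, Body: body }));
      return { statusCode: 201, body: JSON.stringify({ key }) };
    };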

Reducing AWS Data Transfer Cost for AWS S3 files

AWS S3 has a standard public bucket and folder (Asia Pacific region) which hosts ~30 GB of images/media. The website and app access these images using direct S3 object URLs. Unknowingly, we ran into high data transfer costs, and they are significantly disproportionate:
Amazon Simple Storage Service: USD 30
AWS Data Transfer: USD 110
I have also read that if EC2 and S3 are in the same region the cost is significantly lower, but the problem is that the S3 objects are fetched directly by client machines anywhere in the world; no EC2 is involved in between.
Can someone please suggest how data transfer costs can be reduced?
The Data Transfer charge is directly related to the amount of information that goes from AWS to the Internet. Depending on your region, it is typically charged at 9c/GB.
If you are concerned about the Data Transfer charge, there are a few things you could do:
Activate Amazon S3 Server Access Logging, which will create a log file for each web request. You can then see how many requests are coming in and possibly detect strange access behaviour (e.g. bots, search engines, abuse). A sketch of enabling it is shown below.
Try reducing the size of the most frequently accessed files, such as making images smaller. Look at the access logs to determine which objects are accessed the most and are therefore causing the most cost.
Use fewer large files on your website (e.g. videos). Again, look at the access logs to determine where the data is being used.
At roughly 9c/GB, a charge of $110 suggests about 1.2 TB of data transferred.
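Here is a sketch of turning on server access logging with the AWS SDK for JavaScript v3. The bucket names and region are placeholders, and the target bucket must already allow the S3 logging service to write to it:

    import { S3Client, PutBucketLoggingCommand } from '@aws-sdk/client-s3';

    const s3 = new S3Client({ region: 'ap-south-1' });

    async function enableAccessLogging() {
      // Write one log record per request on the media bucket into a
      // separate logging bucket, under the "s3-access/" prefix.
      await s3.send(new PutBucketLoggingCommand({
        Bucket: 'my-media-bucket',
        BucketLoggingStatus: {
          LoggingEnabled: {
            TargetBucket: 'my-logging-bucket',
            TargetPrefix: 's3-access/',
          },
        },
      }));
    }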

S3 and large files backup

I have large files (database dumps) and I am looking for a place to store them. Is AWS S3 good for backups? I have already exceeded all the free tier limits.
I have a few questions:
I am using the API and the CLI. Which is cheaper for sending files: "aws s3api put-object" or "aws s3 cp"?
"2,000 Put, Copy, Post or List Requests of Amazon S3" — how is this consumption calculated, in HTTP requests or in bytes? Currently, uploading 20 files per day, my consumption shows 2,000.00/2,000 requests.
Are there any paid plans?
Everything you need to know is in the Request Pricing section of the S3 Pricing page:
Amazon S3 request costs are based on the request type, and are charged on the quantity of requests or the volume of data retrieved as listed in the table below. When you use the Amazon S3 console to browse your storage, you incur charges for GET, LIST, and other requests that are made to facilitate browsing. Charges are accrued at the same rate as requests that are made using the API/SDK.
Specific pricing is available at that page (not included here because it will change over time).
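On the first sub-question: both "aws s3api put-object" and "aws s3 cp" issue PUT requests, so they cost the same per request; the difference is that "aws s3 cp" switches to multipart upload for large files, which issues one PUT per part. As a rough sketch of the request cost for the scenario in the question, assuming single-part uploads and the ~$0.005 per 1,000 PUT requests rate quoted earlier on this page:

    // 20 uploads per day, one PUT request each.
    const putsPerMonth = 20 * 30;                       // 600 PUT requests
    const costPerMonth = (putsPerMonth / 1000) * 0.005; // ~$0.003 per month

    console.log(costPerMonth.toFixed(4));

Request charges for a workload this small are negligible, and there is no separate paid plan: you simply pay the per-request rates once the free tier is exhausted.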