Am I charged for creating and keeping AWS S3 buckets?

I created two buckets back in 2018 but never removed them. They are empty, with no objects at all.
I haven't seen any fee charged, so I presume AWS doesn't charge for simply creating S3 buckets?

S3 pricing is based on the objects you store, not on buckets. You can read more on the AWS S3 pricing page.

Well, even though you're not charged for the bucket itself, you can still incur some charges related to them.

There are six Amazon S3 cost components to consider when storing and managing your data: storage pricing, request and data retrieval pricing, data transfer and transfer acceleration pricing, data management and analytics pricing, replication pricing, and the price to process your data with S3 Object Lambda. For more details about the pricing model, see the AWS S3 pricing page.

Related

Is there a way to check which S3 bucket is costing me the most for outgoing data?

I have 50+ S3 buckets in my company AWS account. About 40% of my total monthly bill is for outgoing data transfer. Is there a way to get information on how much data is going out from which S3 bucket? (note: I am not looking for how much data the buckets have in storage).
Follow these steps (a scripted sketch of the tagging step follows below):
Add a common tag to each bucket.
Activate the tag as a cost allocation tag.
Use AWS Cost Explorer to create a cost report for the tag.
Wait a couple of days for AWS to catch up.
Source: How do I find the cost of my Amazon S3 buckets?
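Tagging many buckets by hand is tedious; here is a minimal boto3 sketch of the first step, assuming you want each bucket tagged with its own name. Note that put_bucket_tagging replaces a bucket's existing tag set, so merge in any existing tags first if your buckets already carry them.

    import boto3

    s3 = boto3.client("s3")

    # Tag every bucket with its own name so the tag can later be
    # activated as a cost allocation tag in the Billing console.
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        # Caution: this call overwrites any existing tags on the bucket.
        s3.put_bucket_tagging(
            Bucket=name,
            Tagging={"TagSet": [{"Key": "bucket-name", "Value": name}]},
        )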
If you don't want to go through all the buckets tagging them by name, you get this out of the box from Amazon as the resource ID (which is the bucket name). If you have enabled resource-level data in Cost Explorer, or you're using the CUR file, just filter the product to "AmazonS3", aggregate by resource ID, and check the sum of the unblended costs.
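If you go the CUR route, a rough pandas sketch of that aggregation could look like this (the file path is a placeholder; the column names are from the standard CUR schema):

    import pandas as pd

    # Load a Cost and Usage Report export (path is a placeholder).
    cur = pd.read_csv("cur-export.csv")

    # Keep only S3 line items, then sum the unblended cost per bucket.
    s3_rows = cur[cur["lineItem/ProductCode"] == "AmazonS3"]
    per_bucket = s3_rows.groupby("lineItem/ResourceId")["lineItem/UnblendedCost"].sum()

    # The buckets costing you the most, largest first.
    print(per_bucket.sort_values(ascending=False).head(10))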
Good luck!

How much will it cost to read from an S3 bucket in the same region but on different account

I read that it doesn't cost anything to read data from an S3 bucket in the same region within the same account.
However, I'm interested in how much it will cost to read a GB from a different account's bucket in the same region.
There will be no Data Transfer cost, since the Amazon EC2 instance and Amazon S3 bucket are in the same region.
The owner of the bucket will be charged for the GET Request ($0.0004 per 1,000 requests). Apart from that, the fact that the bucket belongs to a different AWS Account will not impact cost.
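For illustration, a cross-account read is the same API call as a same-account read once the other account's bucket policy grants you s3:GetObject; the bucket and key names below are placeholders:

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    # Same-region read from another account's bucket: no data transfer
    # charge; the bucket owner pays the per-request GET fee.
    response = s3.get_object(Bucket="other-accounts-bucket", Key="data/file.bin")
    data = response["Body"].read()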

How to monitor daily costs of Amazon S3

I was looking around to learn the dos and don'ts of Amazon S3 and at options for monitoring usage daily. I have a couple of questions:
I see on the Amazon S3 pricing page that PUT, COPY, POST, and LIST requests cost $0.005 per 1,000 requests.
I have 5 buckets in S3, and each of these buckets has subfolders containing 100k files.
If I run aws s3 ls --recursive, will I be charged
5 * 100k = 500,000 requests; 500,000 / 1,000 = 500; 500 * $0.005 = $2.50?
Any suggestions on tools that can email me the daily usage of my S3 buckets?
A LIST request can return up to 1,000 objects, so listing the full 100k objects in one bucket requires 100 LIST requests. That costs a tenth of the $0.005 per-1,000-requests price, i.e. $0.0005.
Repeating this for all 5 buckets takes 500 LIST requests, which costs half of the $0.005 price, i.e. $0.0025.
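As a back-of-the-envelope check of that arithmetic (the $0.005 per 1,000 requests price is quoted above; 1,000 keys per LIST call is the API's page size):

    # Cost of recursively listing 5 buckets of 100k objects each.
    objects_per_bucket = 100_000
    buckets = 5
    price_per_1000_requests = 0.005

    list_calls = buckets * ((objects_per_bucket + 999) // 1000)  # ceiling division
    cost = (list_calls / 1000) * price_per_1000_requests
    print(f"{list_calls} LIST requests cost about ${cost:.4f}")  # 500 requests -> $0.0025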
Alternatively, you could use Amazon S3 Inventory, which can provide a daily or weekly CSV file listing all of your objects. It is charged at $0.0025 per million objects listed (per bucket, per run).
I sense that you are worried about your Amazon S3 costs. You can configure billing alerts and notifications to remain aware of when your costs rise above expected levels. This works for all AWS services.
See:
Monitoring Charges with Alerts and Notifications
Create a Billing Alarm to Monitor Your Estimated AWS Charges
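As a sketch of the second link, one way to create such a billing alarm with boto3 (billing metrics live in us-east-1 and require "Receive Billing Alerts" to be enabled in the Billing console first; the alarm name, threshold, and SNS topic ARN are placeholders):

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Fire when the month-to-date estimated bill exceeds $10.
    cloudwatch.put_metric_alarm(
        AlarmName="estimated-charges-over-10-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,  # billing metrics only update every few hours
        EvaluationPeriods=1,
        Threshold=10.0,
        ComparisonOperator="GreaterThanThreshold",
        # Placeholder SNS topic that notifies you when the alarm fires.
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    )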

How to make automated S3 Backups

I am working on an app which uses S3 to store important documents. These documents need to be backed up on a daily/weekly rotation, much like database backups are.
Does S3 support a feature where a bucket can be backed up into multiple buckets periodically, or perhaps into Amazon Glacier? I want to avoid using an external service as much as possible, and was hoping S3 had some mechanism for this, as it's a common use case.
Any help would be appreciated.
Quote from Amazon S3 FAQ about durability:
Amazon S3 is designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years
These numbers are, first of all, almost unbeatable. In other words, your data is safe in Amazon S3.
Thus, the only reason you would need to back up your data is to protect against accidental loss caused by your own mistake. To solve this problem, Amazon S3 supports versioning of objects. Enable this feature on your S3 bucket and you're safe.
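Enabling versioning is a single call; a minimal boto3 sketch, assuming an existing bucket (the name is a placeholder):

    import boto3

    s3 = boto3.client("s3")

    # With versioning on, overwrites and deletes keep prior versions,
    # so an object can be recovered after an accidental change.
    s3.put_bucket_versioning(
        Bucket="my-important-documents",  # placeholder bucket name
        VersioningConfiguration={"Status": "Enabled"},
    )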
P.S. Actually, there is one more possible reason: cost optimization. Amazon Glacier is cheaper than S3. I would recommend using AWS Data Pipeline to move S3 data to Glacier routinely.
Regarding Glacier: you can configure your bucket to move (old) S3 data to Glacier once it is older than a specified duration. This can save you money if you want infrequently accessed data to be archived.
S3 buckets have lifecycle rules with which data can be moved from S3 to Glacier automatically; a sketch follows below.
But if you want to access these important documents frequently from the backup, you can also use another S3 bucket to back up your data. This backup can be scheduled daily, weekly, etc. with AWS Data Pipeline.
*Glacier is cheaper than S3, as data is stored in a compressed format in Glacier.
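A minimal boto3 sketch of such a lifecycle rule; the bucket name and the 90-day cutoff are placeholders, so adjust them to your rotation:

    import boto3

    s3 = boto3.client("s3")

    # Transition every object in the bucket to Glacier 90 days after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-important-documents",  # placeholder bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-old-documents",
                    "Filter": {"Prefix": ""},  # empty prefix = whole bucket
                    "Status": "Enabled",
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )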
I created a Windows application that will allow you to schedule S3 bucket backups. You can create three kinds of backups: Cumulative, Synchronized and Snapshots. You can also include or exclude root level folders and files from your backups. You can try it free with no registration at https://www.bucketbacker.com

Number of Buckets in Amazon Free Tier

I am trying out Amazon S3 for my file uploads and would like separate buckets for development, test, and production environments. The Amazon documentation mentions the following:
As part of the AWS Free Usage Tier, you can get started with Amazon S3 for free. Upon sign-up, new AWS customers receive 5 GB of Amazon S3 storage, 20,000 GET requests, 2,000 PUT requests, 15 GB of data transfer in, and 15 GB of data transfer out each month for one year.
Is there any limitation on the number of buckets? I mean, if I have three buckets but stay within the overall storage limit, will I be charged?
Each AWS account is limited to 100 buckets, even if you are paying the normal usage rates.
Buckets themselves are not billable items in S3.
If the limit of 100 is not enough, you can create virtual folders in your buckets and structure your environments that way.
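For illustration, "virtual folders" are just key prefixes, so a single bucket can hold all three environments; a small boto3 sketch with placeholder names:

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-app-uploads"  # placeholder bucket name

    # One bucket, three environments, separated only by key prefix.
    for env in ("development", "test", "production"):
        s3.put_object(Bucket=bucket, Key=f"{env}/example.txt", Body=b"hello")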