Why am I still charged after removing all S3 resources? - amazon-web-services

I deleted my Amazon S3 resources, but I still see charges for S3. I have only one
bucket, it is empty, and for some reason I am not able to delete it.
It does not have logging or anything else enabled; all of its properties are shown in the picture below.

A likely cause of an "empty" bucket that isn't actually empty is abandoned multipart uploads that were never completed or aborted.
Use aws s3api list-multipart-uploads to verify this.
If any show up, you can use aws s3api abort-multipart-upload to remove each one, after which you should be able to delete the bucket.
Alternatively, create a lifecycle rule to purge them automatically; see https://aws.amazon.com/blogs/apn/automating-lifecycle-rules-for-multipart-uploads-in-amazon-s3/.
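For example, a rough CLI sketch (the bucket name, key, and upload ID below are placeholders) to find and then abort a stray upload:
aws s3api list-multipart-uploads --bucket my-empty-looking-bucket
# for each Key / UploadId pair returned above:
aws s3api abort-multipart-upload --bucket my-empty-looking-bucket --key path/to/partial-object --upload-id EXAMPLE_UPLOAD_ID
Once list-multipart-uploads comes back empty, deleting the bucket should succeed.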

Related

AWS Backup for S3 buckets - what is the size limit?

I am using AWS Backup to back up S3 buckets. One of the buckets is about 190GB (the biggest of the buckets I am trying to back up) and it is the only bucket that the backup job fails on, with the error message:
Bucket [Bucket name] is too large, please contact AWS for support
The backup job failed to create a recovery point for your resource [Bucket ARN] due to missing permissions on role [role ARN]
As you can see, these are two error messages concatenated together (probably an AWS bug), but I think the second message is incorrect, because all the other buckets were backed up successfully with the same permissions and are configured the same way. Thus, I think the first message is the issue.
I was wondering what the size limit is for AWS Backup for S3. I took a look at the AWS Backup quotas page and there was no mention of a size limit. How do I fix this error?
Here is the information you're looking for:
https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html#S3-backup-limitations
Backup size limitations: AWS Backup for Amazon S3 allows you to automatically backup and restore S3 buckets up to 1 PB in size and containing fewer than 24 million objects.
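If you want to check a bucket against those limits before contacting support, one rough approach (the bucket name is a placeholder, and listing a large bucket can take a while) is to let the CLI summarize the object count and total size:
aws s3 ls s3://my-backup-source-bucket --recursive --summarize | tail -n 2
# prints the "Total Objects:" and "Total Size:" summary lines
190 GB is well under 1 PB, so if size isn't the problem, an object count approaching 24 million would be the next thing to rule out.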

Delete AWS S3 Bucket with millions of objects

I want to delete an S3 bucket in AWS with millions of objects. Is there a quick way of doing it through an AWS CLI command or a script, without going into the console and doing it manually?
The easiest way I have found is to first edit the bucket's lifecycle policy to expire all objects. Then wait a day or two for the lifecycle rule to remove all the objects from the bucket.
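For example, a minimal rule along these lines (the bucket name is a placeholder; a versioned bucket would also need a NoncurrentVersionExpiration action) can be applied from the CLI:
cat > expire-all.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-everything",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 1},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket my-huge-bucket --lifecycle-configuration file://expire-all.json
Once the objects have expired, removing the now-empty bucket is quick.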
You can use the following to delete the bucket along with its objects, but I think that can take a lot of time if there are many objects. I don't think there is a really fast way to do this.
aws s3 rb --force s3://your_bucket_name
But perhaps someone knows a better way.

Can't delete S3 buckets - Error Data not found

I can't get rid of five buckets in S3. Every screen in the AWS console (Overview, Properties, Permissions, Management, Access points) says "Error Data not found".
I can't set lifecycle rules to delete objects, but the buckets never had anything in them and versioning was never enabled anyway.
I've also tried forcing it in my terminal...
aws s3 rb s3://bucketblah --force
...but it fails and I get remove_bucket failed: Unable to delete all objects in the bucket, bucket will not be deleted.
Help me Obi Wan...
Amazon S3 is what gives a developer their power. It's an energy field created by objects stored in the cloud. It surrounds us and penetrates us. It binds the Internet together.
Some may mock Amazon S3 because they cannot sense invisible objects in their bucket. But the wise Jedi amongst us will check whether the bucket has Versioning enabled. When attempting to rid the galaxy of their bucket, they might see messages such as:
$ aws s3 rb s3://rebel-base --force
remove_bucket failed: An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The rebel base you tried to destroy is not empty. You must delete all versions in the bucket.
If such resistance is met, sneak into the Amazon S3 management console, select the bucket, choose Versions: Show and reach out with your mind. If any deleted versions of objects are displayed, delete them within this view until all objects cry out in terror and are suddenly silenced.
If this does not lead to the resolution you seek, then check that your Master has allocated sufficient access privileges for you to access the central computer and this particular bucket. It is also possible that these buckets have bucket policies that override the central computer via Deny policies. If so, attempt to bypass security by deleting the bucket policy first, then destroy the rebel bucket. You know better than to trust a strange computer!
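For those who prefer to strike from the command line, a rough sketch of the same battle plan (bucket name, key, and version ID are placeholders):
aws s3api list-object-versions --bucket rebel-base
# for each Key / VersionId pair listed under Versions and DeleteMarkers:
aws s3api delete-object --bucket rebel-base --key some/object --version-id EXAMPLE_VERSION_ID
# if a Deny bucket policy stands in your way, remove it first:
aws s3api delete-bucket-policy --bucket rebel-base
aws s3 rb s3://rebel-base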

How does a lifecycle policy for moving S3 objects to Glacier work?

I have created a lifecycle policy for one of my buckets as below:
Name and scope
Name: MoveToGlacierAndDeleteAfterSixMonths
Scope: Whole bucket
Transitions
For previous versions of objects: Transition to Amazon Glacier after 1 day
Expiration
Permanently delete after 360 days
Clean up incomplete multipart uploads after 7 days
I would like to get answers to the following questions:
When would the data be deleted from S3 as per this policy?
Do I have to do anything on the Glacier end in order to move my S3 bucket to Glacier?
My S3 bucket is 6 years old and all the versions in the bucket are even older, but I am not able to see any data in the Glacier console even though my transition policy is set to move data to Glacier 1 day after its creation. Please explain this behavior.
Does this policy affect only new files added to the bucket after the lifecycle policy's creation, or does it affect all the files in the S3 bucket?
Please answer these questions.
When would the data be deleted from S3 as per this policy?
Never, for current versions. A lifecycle policy that transitions objects to Glacier doesn't delete the data from S3 -- it migrates it out of S3 primary storage and into Glacier storage -- but it technically remains an S3 object.
Think of it as S3 having its own Glacier account and storing data in that separate account on your behalf. You will not see these objects in the Glacier console -- they will remain in the S3 console, but if you examine an object that has transitioned, its storage class will have changed from whatever it was (e.g. STANDARD) to GLACIER.
Do I have to do anything on the Glacier end in order to move my S3 bucket to Glacier?
No, you don't. As mentioned above, it isn't "your" Glacier account that will store the objects. On your AWS bill, the charges will appear under S3, but labeled as Glacier, and the price will be the same as the published pricing for Glacier.
My S3 bucket is 6 years old and all the versions in the bucket are even older, but I am not able to see any data in the Glacier console even though my transition policy is set to move data to Glacier 1 day after its creation. Please explain this behavior.
Two parts: first, check the object storage class displayed in the console or with aws s3api list-objects --output=text, and see whether some objects are already in the GLACIER storage class. Second, transitioning is a background process; it won't happen immediately, but you should see things changing within 24 to 48 hours of creating the policy. If you have logging enabled on your bucket, I believe the transition events will also be logged.
Does this policy affect only new files added to the bucket after the lifecycle policy's creation, or does it affect all the files in the S3 bucket?
This affects all objects in the bucket.
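A quick way to see the effect (the bucket name is a placeholder) is to list objects together with their storage class; transitioned objects will show GLACIER:
aws s3api list-objects-v2 --bucket my-archived-bucket --query 'Contents[].[Key,StorageClass]' --output table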

How to check if the S3 service is available or not in AWS via CLI?

We have options to:
1. Copy file/object to another S3 location or local path (cp)
2. List S3 objects (ls)
3. Create bucket (mb) and move objects to bucket (mv)
4. Remove a bucket (rb) and remove an object (rm)
5. Sync objects and S3 prefixes (sync)
and many more.
But before using these commands, we need to check whether the S3 service is available in the first place. How do we do that?
Is there a command like:
aws S3 -isavailable
and get a response like:
0 - S3 is available; I can go ahead and upload objects, create buckets, etc.
1 - S3 is not available; you can't upload objects, etc.?
You should assume that Amazon S3 is available. If there is a problem with S3, you will receive an error when making a call with the AWS CLI.
If you are particularly concerned, then run a simple CLI command first, e.g. aws s3 ls, and throw away the results. But that's really the same concept. Or you could use the --dryrun option available on several s3 commands, which displays the operations that would be performed without actually running them.
It is more likely that you will have an error in your configuration (e.g. wrong region, invalid credentials) than that S3 is down.
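If you really do want a yes/no exit code in a script, here is a minimal sketch; it treats any successful ListBuckets call as "S3 is reachable", so a failure can just as easily mean bad credentials or a network problem as an S3 outage:
if aws s3 ls > /dev/null 2>&1; then
    echo "S3 is reachable with the current credentials"    # exit status 0
else
    echo "S3 call failed" >&2
    exit 1
fi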