Can't delete S3 buckets - Error Data not found

I can't get rid of five buckets in S3. Every screen in the AWS console (Overview, Properties, Permissions, Management, Access points) says "Error Data not found".
I can't set lifecycle rules to delete objects, but the buckets never had anything in them and versioning was never enabled anyway.
I've also tried forcing it in my terminal...
aws s3 rb s3://bucketblah --force
...but it fails and I get remove_bucket failed: Unable to delete all objects in the bucket, bucket will not be deleted.
Help me Obi Wan...

Amazon S3 is what gives a developer their power. It's an energy field created by objects stored in the cloud. It surrounds us and penetrates us. It binds the Internet together.
Some may mock Amazon S3 because they cannot sense invisible objects in their bucket. But the wise Jedi amongst us will check whether the bucket has Versioning enabled. When attempting to rid the galaxy of their bucket, they might see messages such as:
$ aws s3 rb s3://rebel-base --force
remove_bucket failed: An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The rebel base you tried to destroy is not empty. You must delete all versions in the bucket.
If such resistance is met, sneak into the Amazon S3 management console, select the bucket, choose Versions: Show and reach out with your mind. If any deleted versions of objects are displayed, delete them within this view until all objects cry out in terror and are suddenly silenced.
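For those who prefer the command line, here is a rough equivalent of the console purge above, using the hypothetical rebel-base bucket from the error message (each delete-objects call accepts up to 1000 entries, and a call should be skipped if its list comes back empty):
$ aws s3api list-object-versions --bucket rebel-base \
    --query '{Objects: Versions[].{Key: Key, VersionId: VersionId}}' --output json > versions.json
$ aws s3api delete-objects --bucket rebel-base --delete file://versions.json
$ aws s3api list-object-versions --bucket rebel-base \
    --query '{Objects: DeleteMarkers[].{Key: Key, VersionId: VersionId}}' --output json > markers.json
$ aws s3api delete-objects --bucket rebel-base --delete file://markers.json
$ aws s3 rb s3://rebel-base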
If this does not lead to the resolution you seek, then check that your Master has allocated sufficient access privileges for you to access the central computer and this particular bucket. It is also possible that these buckets have bucket policies that override the central computer via Deny policies. If so, attempt to bypass security by deleting the bucket policy first, then destroy the rebel bucket. You know better than to trust a strange computer!
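If a Deny bucket policy does turn out to be the culprit, a minimal sketch of that order of operations (using the bucketblah name from the question) looks like this:
$ aws s3api get-bucket-policy --bucket bucketblah
$ aws s3api delete-bucket-policy --bucket bucketblah
$ aws s3 rb s3://bucketblah --force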

Related

Google Cloud storage bucket not listing deleted objects

Two days after having manually deleted all the objects in a multi-region Cloud Storage bucket (e.g. us.artifacts.XXX.com) without Object Versioning, I noticed that the bucket size hadn't decreased at all. Only when trying to delete the bucket did I discover that it actually still contained the objects that I had presumably deleted.
Why aren't those objects displayed in the bucket list view, even when enabling Show deleted data?
When deploying a Function for the first time, two buckets are created automatically:
gcf-sources-XXXXXX-us-central1
us.artifacts.project-ID.appspot.com
You can observe these two buckets from the GCP Console by clicking on Cloud Storage from the left panel.
The files you're seeing in bucket us.artifacts.project-ID.appspot.com are related to a recent change in how the runtime (for Node 10 and up) is built as this post explains.
I also found out that this bucket doesn't have object versioning, a retention policy, or any lifecycle rule. Even if you delete this bucket, it will be created again when you deploy the related function. So, if you are seeing unexpected amounts of Cloud Storage used, this is likely caused by a known issue with the cleanup of artifacts created in the function deployment process, as indicated here.
Until the issue is resolved, you can avoid hitting storage limits by creating an auto-deletion rule in the Cloud Console:
In the Cloud Console, select your project > Storage > Browser to open the storage browser.
Select the "artifacts" bucket from the list.
Under the Lifecycle tab, add a rule to auto-delete old images. Choose a deletion interval that works within your normal rate of deployments.
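If you prefer the command line, a roughly equivalent rule can be set with gsutil. This is only a sketch: the 7-day age is a placeholder, and the bucket name is the artifacts bucket mentioned above; pick values that suit your deployment cadence.
$ cat > lifecycle.json <<'EOF'
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 7}}]}
EOF
$ gsutil lifecycle set lifecycle.json gs://us.artifacts.project-ID.appspot.com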
If possible, try to reproduce this scenario with a new function. In the meantime, take into account that if you delete many objects at once, you can track deletion progress by clicking the Notifications icon in the Cloud Console.
In addition, the Google Cloud Status Dashboard provides information about regional or global incidents affecting Google Cloud services such as Cloud Storage.
Nevermind! Eventually (at some point between 2 and 7 days after the deletion) the bucket size decreased and the objects were no longer displayed in the "Delete bucket" dialog.

AWS S3 Cross Region Replication - When there is outage

We have a primary bucket that stores a list of files and a replication bucket in a different region. What will happen when the region that has the replication bucket is down (has an outage)?
Will the replication fail, or will it stay in a pending state until the region is back?
Let me quote the documentation:
If object replication fails after you upload an object, you can't retry replication. You must upload the object again. Objects transition to a FAILED state for issues such as missing replication role permissions, AWS KMS permissions, or bucket permissions. For temporary failures, such as if a bucket or Region is unavailable, replication status will not transition to FAILED, but will remain PENDING. After the resource is back online, S3 will resume replicating those objects.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-status.html
Usually objects replicate within 15 minutes, but it can sometimes take hours depending on object size, the source and destination Region pair, and outages like the one you mentioned.
So if your configuration in the source bucket is correct, the objects should stay in a PENDING state; if replication fails outright, you might need to check your source configuration.
FAILED is a terminal state that occurs only due to permission failures or misconfiguration (such as recreation of the destination bucket without versioning). It will not occur for transient issues (source).
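To see which state a given object is in, you can ask the source bucket directly. A sketch with placeholder bucket and key names (the ReplicationStatus field only appears on objects covered by a replication rule):
$ aws s3api head-object --bucket primary-bucket --key reports/2023-01.csv --query ReplicationStatus
"PENDING"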

Why I am still charged after removing all S3 resources?

I deleted my Amazon S3 resources, but I still see charges for S3. I have only one
bucket; it is empty, and for some reason I am not able to delete it.
It does not have logging or anything similar enabled.
A likely cause of an "empty" bucket that isn't actually empty is abandoned multipart uploads that were never completed or aborted.
Use aws s3api list-multipart-uploads to verify this.
If any are there, you can use aws s3api abort-multipart-upload to delete each one, after which you should be able to delete the bucket.
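A quick sketch of that check and cleanup, with a placeholder bucket name, key, and upload ID (list-multipart-uploads reports the Key and UploadId you need to pass to the abort call):
$ aws s3api list-multipart-uploads --bucket my-empty-bucket
$ aws s3api abort-multipart-upload --bucket my-empty-bucket \
    --key "path/to/large-file.bin" --upload-id "EXAMPLEUPLOADID"
$ aws s3 rb s3://my-empty-bucket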
Or, create a lifecycle policy to purge them, see https://aws.amazon.com/blogs/apn/automating-lifecycle-rules-for-multipart-uploads-in-amazon-s3/.

How to check if the S3 service is available or not in AWS via the CLI?

We have options to :
1. Copy file/object to another S3 location or local path (cp)
2. List S3 objects (ls)
3. Create bucket (mb) and move objects to bucket (mv)
4. Remove a bucket (rb) and remove an object (rm)
5. Sync objects and S3 prefixes
and many more.
But before using the commands, we need to check whether the S3 service is available in the first place. How do we do that?
Is there a command like :
aws S3 -isavailable
and we get response like
0 - S3 is available, I can go ahead upload object/create bucket etc.
1 - S3 is not available, you can't upload objects etc.?
You should assume that Amazon S3 is available. If there is a problem with S3, you will receive an error when making a call with the Amazon CLI.
If you are particularly concerned, then run a simple CLI command first, eg aws s3 ls, and throw away the results. But that's really the same concept. Or, you could use the --dryrun option available on many aws s3 commands, which shows what operations would be performed without actually running the request.
It is more likely that you will have an error in your configuration (eg wrong region, credentials not valid) than S3 being down.
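If you really want the 0/1 style signal from the question, the CLI's own exit code already provides it. A minimal sketch (any cheap read-only call will do):
$ aws s3 ls > /dev/null 2>&1
$ echo $?
0
A non-zero exit code means the call failed, whether because of an S3 problem, bad credentials, or a network issue.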

Lifecycle policy on S3 bucket

I have an S3 bucket on which I've configured a lifecycle policy that archives all objects in the bucket after 1 day (I want to keep the files temporarily, but if there are no issues it is fine to archive them and not pay for S3 storage).
However, I have noticed there are some files in that bucket that were created in February...
So... am I right in thinking that if you select 'Archive' as the lifecycle option, that means "copy-to-glacier-and-then-delete-from-S3"? In which case the files left over from February would be a fault, since they haven't been deleted?
Only, I saw there is another option - 'Archive and then Delete' - but I assume that means "copy-to-glacier-and-then-delete-from-glacier" - which I don't want.
Has anyone else had issues with S3 -> Glacier?
What you describe sounds normal. Check the storage class of the objects.
The correct way to understand the S3/Glacier integration is that S3 is the "customer" of Glacier -- not you -- and Glacier is a back-end storage provider for S3. Your relationship is still with S3 (if you go into Glacier in the console, objects that S3 put there are not visible).
When S3 archives an object to Glacier, the object is still logically "in" the bucket and is still an S3 object, and visible in the S3 console, but can't be downloaded from S3 because S3 has migrated it to a different backing store.
The difference you should see in the console is that objects will have a storage class of Glacier instead of the usual Standard or Reduced Redundancy. They don't disappear from there.
To access the object later, you ask S3 to initiate a restore from Glacier, which S3 does... but the object is still in Glacier at that point, with S3 holding a temporary copy, which it will again purge after some number of days.
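As a sketch, with placeholder bucket and key names: you can confirm the storage class with head-object and then request the temporary copy with restore-object (here kept for 7 days via the Standard retrieval tier):
$ aws s3api head-object --bucket my-archive-bucket --key backups/feb-dump.tar --query StorageClass
"GLACIER"
$ aws s3api restore-object --bucket my-archive-bucket --key backups/feb-dump.tar \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'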
Note that your attempt at saving may be a little off target if you do not intend to keep these files for 3 months: any time you delete an object from Glacier that has been stored there for less than three months, you are billed for the remainder of those three months.