What Glacier vault does S3 default to?

Relating to (or following on from) the question: How does AWS transfer S3 objects to Glacier archives when you use lifecycle archive rules? - where it is explained that S3 is essentially just another multi-tenanted 'customer' of Glacier - does that mean I'm completely unable to configure notification events on the 'default' Glacier vault?
In S3, I'm unable to specify a vault to archive to, so are Glacier notifications only of use for bespoke apps that use the APIs to handle their storage needs?
I'd really like to enable notifications on archive retrieval (and 'restored item expiry' if possible) - and I've already vaulted a huge stack of things via S3 - into 'the default vault of concealment'.
Is it a turd or am I just doing it wrong? (go on, be honest)?
(EDIT: sorry, forgot question mark)

It is concealed.
When archiving from Amazon S3 to Amazon Glacier, there is no ability to configure the back-end vault or its notifications. S3 also provides no notifications for restorations or deletions.
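Since the vault is never exposed, about the only way to react to a restore under this model is to poll the object itself. A minimal sketch using the AWS CLI, with hypothetical bucket and key names:

# Kick off a restore of an archived object for 7 days
aws s3api restore-object \
  --bucket my-archived-bucket \
  --key backups/2015-01-01.tar.gz \
  --restore-request '{"Days": 7}'

# The Restore field flips from ongoing-request="true" to
# ongoing-request="false" (plus an expiry-date) once the temporary copy is ready
aws s3api head-object \
  --bucket my-archived-bucket \
  --key backups/2015-01-01.tar.gz \
  --query Restore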

Related

How can you find the most retrieved object in your s3 bucket?

I want to know how many times an object has been retrieved from an AWS S3 bucket. When we use Amazon CloudFront, we can use the CloudFront Popular Objects report. How can I get a similar report for S3?
Download metrics for individual S3 objects are not readily available, as far as I know.
You could derive them from one of:
Amazon S3 access logs
CloudTrail logs
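For example, if server access logging is already enabled on the bucket and the log files have been synced locally, a rough count of object GETs per key can be pulled out with standard shell tools. A minimal sketch, assuming hypothetical bucket names and the standard access-log field layout:

# Pull the delivered access logs down locally
aws s3 sync s3://my-log-bucket/access-logs/ ./s3-logs/

# In each log record the operation (e.g. REST.GET.OBJECT) is field 8 and the
# object key is field 9; count GETs per key and show the most retrieved objects
grep -h 'REST.GET.OBJECT' ./s3-logs/* | awk '{print $9}' | sort | uniq -c | sort -rn | head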

When is a file available to download from Amazon S3?

I can't find some information about Amazon S3; I hope you can help me. When is a file available for a user to download after a POST upload? I mean a small JSON file that doesn't require much processing. Is it available to download immediately after uploading, or does Amazon S3 work in some kind of sessions so that it always takes a few hours?
According to the documentation:
Amazon S3 provides strong read-after-write consistency for PUTs and DELETEs of objects in your Amazon S3 bucket in all AWS Regions.
This means that your objects are available to download immediately after they are uploaded.
An object that is uploaded to an Amazon S3 bucket is available right away. There is no time period that you have to wait. That means if you are writing a client app that uses these objects, you can access them as soon as they are uploaded.
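A quick way to see this with the AWS CLI (bucket and key names are hypothetical):

# Upload a small JSON file, then read it back immediately
aws s3 cp data.json s3://my-bucket/data.json
aws s3 cp s3://my-bucket/data.json -   # streams the object to stdout right away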
In case anyone is wondering how to programmatically interact with objects located in an Amazon S3 bucket through code, here is an example of uploading and reading objects in an Amazon S3 bucket from a client web app:
Creating an example AWS photo analyzer application using the AWS SDK for Java

Who has deleted files in S3 bucket?

What is the best way to find out who deleted files in an AWS S3 bucket?
I am working with an AWS S3 bucket. I have been going through the AWS docs and haven't found the best way to monitor S3 buckets, so I thought I'd check whether anyone can help me here.
For monitoring S3 object operations, such as DeleteObject, you have to enable CloudTrail with S3 data events:
How do I enable object-level logging for an S3 bucket with AWS CloudTrail data events?
Examples: Logging Data Events for Amazon S3 Objects
However, trails don't work retrospectively, so you have to check whether you already have such a trail enabled in the CloudTrail console. If not, you can create one to monitor any future S3 object-level activities for all, or selected, buckets; a minimal sketch follows.
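A hedged sketch of enabling S3 data events for one bucket on an existing trail via the AWS CLI (trail and bucket names are hypothetical):

# Record object-level (data) events such as DeleteObject for my-bucket
aws cloudtrail put-event-selectors \
  --trail-name my-trail \
  --event-selectors '[{
    "ReadWriteType": "All",
    "IncludeManagementEvents": true,
    "DataResources": [{
      "Type": "AWS::S3::Object",
      "Values": ["arn:aws:s3:::my-bucket/"]
    }]
  }]'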
To reduce the impact of accidental deletions, you can enable object versioning. And to fully protect important objects, you can use MFA delete.
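Both can be set through the AWS CLI; a minimal sketch with a hypothetical bucket name and MFA device ARN (MFA delete has to be enabled with the root user's credentials):

# Versioning: deletes create delete markers instead of removing data
aws s3api put-bucket-versioning \
  --bucket my-bucket \
  --versioning-configuration Status=Enabled

# MFA delete: permanent deletes then require a current MFA code
aws s3api put-bucket-versioning \
  --bucket my-bucket \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::111111111111:mfa/root-account-mfa-device 123456"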
You can check S3 server access logs or CloudTrail to see who deleted files from your S3 bucket. More information here: https://aws.amazon.com/premiumsupport/knowledge-center/s3-audit-deleted-missing-objects/

Copy files from s3 bucket to another AWS account

Is it possible to send/sync files from a source AWS S3 bucket into a destination S3 bucket in a different AWS account, in a different location?
I found this: https://aws.amazon.com/premiumsupport/knowledge-center/copy-s3-objects-account/
But if I understand it correctly, that describes how to sync files from the destination account.
Is there a way to do it the other way around, i.e. accessing the destination bucket from the source account (using the source account's IAM user credentials)?
AWS finally came up with a solution for this: S3 batch operations.
S3 Batch Operations is an Amazon S3 data management feature that lets you manage billions of objects at scale with just a few clicks in the Amazon S3 Management Console or a single API request. With this feature, you can make changes to object metadata and properties, or perform other storage management tasks, such as copying objects between buckets, replacing object tag sets, modifying access controls, and restoring archived objects from S3 Glacier — instead of taking months to develop custom applications to perform these tasks.
It allows you to replicate data at bucket, prefix, or object level, from any region to any region, between any storage classes (e.g. S3 <-> Glacier), and across AWS accounts, no matter whether it's thousands, millions, or billions of objects.
This introduction video has an overview of the options (my apologies if I almost sound like a salesperson; I'm just very excited about it, as I have a couple of million objects to copy ;-)): https://aws.amazon.com/s3/s3batchoperations-videos/
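For reference, a cross-bucket copy job can also be created from the CLI. This is only a rough, hedged sketch: the account ID, bucket names, role ARN, and manifest ETag below are hypothetical placeholders, and the manifest CSV (Bucket,Key rows) must already exist in S3:

# Create an S3 Batch Operations job that copies every object listed in the
# manifest into the destination bucket
aws s3control create-job \
  --account-id 111111111111 \
  --priority 10 \
  --role-arn arn:aws:iam::111111111111:role/batch-ops-copy-role \
  --operation '{"S3PutObjectCopy": {"TargetResource": "arn:aws:s3:::destinationbucket"}}' \
  --manifest '{
    "Spec": {"Format": "S3BatchOperations_CSV_20180820", "Fields": ["Bucket", "Key"]},
    "Location": {"ObjectArn": "arn:aws:s3:::manifest-bucket/manifest.csv", "ETag": "example-manifest-etag"}
  }' \
  --report '{
    "Bucket": "arn:aws:s3:::report-bucket",
    "Format": "Report_CSV_20180820",
    "Enabled": true,
    "Prefix": "batch-reports",
    "ReportScope": "AllTasks"
  }'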
That needs the right IAM and bucket policy settings.
A detailed configuration for cross-account access is discussed here.
Once you have it configured, you can perform the sync (aws s3 sync is recursive by default, so no --recursive flag is needed):
aws s3 sync s3://sourcebucket s3://destinationbucket
Hope it helps.
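For completeness, a minimal sketch of the destination-side bucket policy that lets the source account's IAM user write into the destination bucket (account ID, user name, and bucket names are hypothetical; the source user additionally needs an IAM policy allowing s3:ListBucket/s3:GetObject on the source bucket and s3:PutObject on the destination):

aws s3api put-bucket-policy --bucket destinationbucket --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSourceAccountUserToWrite",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111111111111:user/source-user"},
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::destinationbucket/*"
    },
    {
      "Sid": "AllowSourceAccountUserToList",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111111111111:user/source-user"},
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::destinationbucket"
    }
  ]
}'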

Is there a way to log files that are copied from an S3 bucket?

I'm looking for a way to log when data is copied from my S3 bucket. Most importantly, which file(s) were copied and when. If I had my way, I'd also like to know by whom and from where, but I don't want to get ahead of myself.
A couple of options:
Server Access Logging provides detailed records for the requests that are made to an S3 bucket.
AWS CloudTrail captures a subset of API calls for Amazon S3 as events, including calls from the Amazon S3 console and from code calls to the Amazon S3 APIs.
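If server access logging is the route you take, it can be switched on with one CLI call; a minimal sketch with hypothetical bucket names (the target log bucket must allow the S3 logging service to write to it):

# Deliver access logs for my-bucket into a separate log bucket
aws s3api put-bucket-logging \
  --bucket my-bucket \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "my-log-bucket",
      "TargetPrefix": "access-logs/"
    }
  }'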