How to get AWS S3 usage metrics by IAM user? - amazon-web-services

How can I get the usage metrics of S3?
Currently, IAM users are uploading/downloading files from the S3 bucket, and each IAM user has a separate folder. How can I find out how many GB of data were uploaded/downloaded from S3 by each user?

You can't. There is no such metric provided by AWS. You have to develop a custom solution for that. If you, for example, have a CloudTrail trail enabled for S3 operations, you can parse the past logs and build up a report from them on who downloaded/uploaded what. Once you know which objects were uploaded/downloaded by a given IAM user, you can add up their sizes.
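For example, if your trail delivers S3 data events to a logging bucket, a rough sketch like the one below could add up the bytes downloaded by one user. The bucket name, account id, and user name are placeholders, the additionalEventData.bytesTransferredOut field should be verified against your own log records, and jq is assumed to be installed.
# Pull the delivered CloudTrail logs locally and decompress them
aws s3 sync s3://my-trail-logs/AWSLogs/123456789012/CloudTrail/ ./trail-logs
gunzip -r ./trail-logs
# Sum the bytes sent out by GetObject calls made by the IAM user "alice"
find ./trail-logs -name '*.json' -exec cat {} + \
  | jq '.Records[]
        | select(.eventName == "GetObject" and .userIdentity.userName == "alice")
        | .additionalEventData.bytesTransferredOut // 0' \
  | awk '{sum += $1} END {print sum, "bytes downloaded"}'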

Related

How to get a list of users who are accessing the objects in S3 buckets?

Scenario:
My client has 80+ S3 buckets, and 1000+ applications are running in their AWS account. I want to get the list of IAM users/roles who are accessing the objects in all the S3 buckets.
Method 1: Initially I tried to fetch it from CloudTrail Event History, but no luck. CloudTrail Event History does not show the object-level (data) events at all.
Method 2: I created a CloudTrail trail to log the activities. But it captures all management-level activities happening throughout the account, which makes it hard for me to find the S3 logs alone (as mentioned, there are 80+ buckets and 1000+ applications in the account).
Method 3: S3 server access logging: If I enable this option, it creates a log entry for every action happening to the objects (that is, when I attempt to read a log file, that read creates yet another log entry, so the count of logs keeps multiplying).
If anyone has an effective way to find the list of IAM users/roles who are accessing the S3 bucket objects, please help me.
Thanks in advance.
For each bucket, configure object-level logging.
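A minimal sketch for one bucket, assuming an existing trail (the trail name and bucket ARN below are placeholders):
aws cloudtrail put-event-selectors \
  --trail-name my-trail \
  --event-selectors '[{
    "ReadWriteType": "All",
    "IncludeManagementEvents": false,
    "DataResources": [{
      "Type": "AWS::S3::Object",
      "Values": ["arn:aws:s3:::my-bucket/"]
    }]
  }]'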
Once that is complete, you can use the CloudTrail API to filter events and extract IAM identities making the requests.
aws cloudtrail lookup-events --lookup-attributes AttributeKey=ResourceType,AttributeValue=AWS::S3::Object --query 'Events[*].Username'

Securing the resources in Amazon S3

We have an Amazon S3 account and a number of important documents are saved there in a bucket.
Is there a way we can secure those resources so that they are not deleted from the S3 account by any team member other than the primary account holder?
Also, how can we back up all the S3 resources to Google Drive?
Thanks in advance.
The highest level of protection against objects being deleted is MFA Delete, which can only be enabled by the root user.
MFA Delete also will not allow versioning to be suspended on your bucket without the MFA device.
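As a rough sketch, enabling it looks something like this; it has to be run with root credentials, and the bucket name, MFA device ARN, and token code below are placeholders:
aws s3api put-bucket-versioning \
  --bucket my-bucket \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"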
Regarding Google Drive, I'm not aware of any built-in AWS tool for that. I think you would have to look at some third-party tools, or develop your own.
For backing up all S3 resources to Google Drive (or vice versa), rclone running on a schedule is probably one of the simplest solutions that can achieve this nicely for you.
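A minimal sketch, assuming remotes named "s3" and "gdrive" have already been set up with rclone config (the bucket and folder names are placeholders):
rclone sync s3:my-bucket gdrive:s3-backup/my-bucket --progress
Put that command in a cron job (or a scheduled task) to run the backup on a schedule.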
Confidential documents
Some organizations keep confidential documents in a separate AWS Account, so that normal users do not have access to the documents.
Cross-account read permissions can be granted to appropriate users (eg for confidential HR documents).
Critical documents
If you just wish to "backup" to avoid accidental deletion, one option is to use Amazon S3 Same Region Replication to copy documents to another bucket. The destination bucket can be in a different account, so normal users do not have the ability to delete the copies.
In both cases, credentials to the secondary accounts would only be given sparingly, and they can also be protected by using Multi-Factor Authentication.
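As a rough sketch, the replication rule for that backup approach could be set up like this, assuming versioning is already enabled on both buckets and a suitable replication role exists (all names and ARNs below are placeholders):
aws s3api put-bucket-replication \
  --bucket source-bucket \
  --replication-configuration '{
    "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
    "Rules": [{
      "ID": "backup-all-objects",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": {"Status": "Disabled"},
      "Destination": {"Bucket": "arn:aws:s3:::backup-bucket"}
    }]
  }'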

Is cross-tenant blob access possible in Azure?

I am having a hard time understanding the Azure docs and terminology. The problem is this: my customer has an Azure blob storage container (the equivalent of a bucket) and we need to read/write to this container. They won't be sharing their storage account credentials either.
This can be achieved in AWS by following this:
https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/
I have just created an IAM user and asked my customers to allow the necessary permissions in the bucket policy. Thus, with one IAM user and one set of credentials, I can write to multiple buckets belonging to multiple AWS accounts.
Is something like above also possible in Azure?
They can create a Shared Access Signature (SAS), which lets them control exactly what kind of access you have and when it expires.
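For example, the customer (who owns the storage account) could generate a container-scoped SAS with the Azure CLI along these lines; the account name, account key, container name, and expiry are placeholders:
az storage container generate-sas \
  --account-name customeraccount \
  --account-key "<their-account-key>" \
  --name shared-container \
  --permissions rwl \
  --expiry 2026-12-31T23:59Z \
  --output tsv
You would then append the returned token to the container URL (or pass it to the Azure SDK) to read and write blobs without ever seeing their account credentials.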

Copy files from S3 bucket to another AWS account

Is it possible to send/sync files from a source AWS S3 bucket into a destination S3 bucket in a different AWS account, in a different location?
I found this: https://aws.amazon.com/premiumsupport/knowledge-center/copy-s3-objects-account/
But if I understand it correctly, this describes how to sync files from the destination account (pulling from the source).
Is there a way to do it the other way around, accessing the destination bucket from the source account (using the source IAM user credentials)?
AWS finally came up with a solution for this: S3 batch operations.
S3 Batch Operations is an Amazon S3 data management feature that lets you manage billions of objects at scale with just a few clicks in the Amazon S3 Management Console or a single API request. With this feature, you can make changes to object metadata and properties, or perform other storage management tasks, such as copying objects between buckets, replacing object tag sets, modifying access controls, and restoring archived objects from S3 Glacier — instead of taking months to develop custom applications to perform these tasks.
It allows you to replicate data at bucket, prefix or object level, from any region to any region, between any storage class (e.g. S3 <> Glacier) and across AWS accounts! No matter if it's thousands, millions or billions of objects.
This introduction video has an overview of the options: https://aws.amazon.com/s3/s3batchoperations-videos/ (my apologies if I almost sound like a salesperson; I'm just very excited about it, as I have a couple of million objects to copy ;-))
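If you prefer the CLI, a rough sketch of creating a cross-bucket copy job could look like the following. All account ids, bucket names, the role ARN, and the manifest ETag are placeholders; the manifest is a CSV of bucket,key lines that you upload to S3 beforehand, and its ETag is required when creating the job.
aws s3control create-job \
  --account-id 111111111111 \
  --operation '{"S3PutObjectCopy": {"TargetResource": "arn:aws:s3:::destination-bucket"}}' \
  --manifest '{"Spec": {"Format": "S3BatchOperations_CSV_20180820", "Fields": ["Bucket", "Key"]}, "Location": {"ObjectArn": "arn:aws:s3:::manifest-bucket/manifest.csv", "ETag": "<etag-of-manifest.csv>"}}' \
  --report '{"Bucket": "arn:aws:s3:::report-bucket", "Format": "Report_CSV_20180820", "Enabled": true, "Prefix": "batch-reports", "ReportScope": "AllTasks"}' \
  --priority 10 \
  --role-arn arn:aws:iam::111111111111:role/batch-operations-role \
  --no-confirmation-required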
That needs the right IAM and bucket policy settings.
A detailed configuration for cross-account access is discussed here.
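For reference, a minimal sketch of the destination-side bucket policy that lets the source account's IAM user write into the destination bucket (the account id, user name, and bucket name are placeholders):
aws s3api put-bucket-policy \
  --bucket destinationbucket \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "AllowCrossAccountSync",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111111111111:user/source-sync-user"},
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::destinationbucket", "arn:aws:s3:::destinationbucket/*"]
    }]
  }'
You may also want to add --acl bucket-owner-full-control to the sync command below so the destination account owns the copied objects.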
Once you have it configured, you can perform the sync (which is recursive by default):
aws s3 sync s3://sourcebucket s3://destinationbucket
Hope it helps.

Can you get the AWS usage report for a subdirectory of a bucket?

Can you get the AWS usage report for a subdirectory of a bucket? I want to know the amount of traffic from all 'GetObject' requests for each subdirectory of an S3 bucket.
First, remember that there are no "subdirectories" in S3. Everything within a bucket is in a flat namespace and identified by an object key. However, in the AWS console, objects that share a common prefix are represented together in a "folder" named after the shared prefix.
With that in mind, it should be easier to understand why you cannot get an AWS usage report for a specific "subdirectory". The AWS usage report is meant to be an overview of your AWS services and is not meant to be used for more detailed analytics.
Instead, there is another AWS service that gives you insight into more detailed analytics for your other AWS services: AWS CloudWatch. With AWS CloudWatch you can:
Set up daily storage metrics
Set up request (GET) metrics on a bucket
And, for your specific case, you can set up request metrics for specific prefixes (subdirectories) within a bucket.
Using request metrics from AWS CloudWatch is a paid service (and another reason why you cannot get detailed request metrics in the AWS usage report).
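A minimal sketch of creating such a prefix-scoped request-metrics configuration (the bucket name, configuration id, and prefix are placeholders); once it is active, request metrics such as GetRequests and BytesDownloaded appear in CloudWatch with that configuration's FilterId dimension:
aws s3api put-bucket-metrics-configuration \
  --bucket my-bucket \
  --id reports-prefix-metrics \
  --metrics-configuration '{"Id": "reports-prefix-metrics", "Filter": {"Prefix": "reports/"}}'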