Is it possible to log when an upload or deletion of a file happens in S3 via the management console? From what I can tell, CloudTrail allows object-level logging of events via API calls, as well as a few management console actions, like signing in to the console. But I can't figure out how to log uploads/deletes done via the console. Thanks!
To enable S3 Access Logs:
Go to Amazon S3 console.
Select your bucket.
Click on the Properties tab.
Click on Server access logging.
Enter the name of the bucket to store the logs in. This must be a different bucket from the one you are tracking. Optionally enter a target prefix.
Server Access Logging
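If you prefer to turn this on with a script rather than clicking through the console, the same setting can be applied with the PutBucketLogging API. A minimal sketch using boto3, with placeholder bucket names (the target bucket must already allow the S3 log delivery service to write to it):

```python
import boto3

s3 = boto3.client("s3")

# Placeholder names: "my-content-bucket" is the bucket being tracked,
# "my-log-bucket" is a separate bucket that receives the access logs.
s3.put_bucket_logging(
    Bucket="my-content-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",
            "TargetPrefix": "access-logs/",  # optional prefix for log objects
        }
    },
)
```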
I'd like to send Amplify monitoring data (access logs, metrics) to Splunk - this would be the best-case scenario. But to begin with, it would be fine if I could at least store them in another service like S3, or even better, link them with CloudWatch, as I haven't found whether those logs are taken from CloudWatch log groups.
My question is whether there's a way to get those metrics out of the Amplify service?
There's a way you can send CloudWatch logs to your third-party apps.
Two major steps:
Export the CloudWatch logs to S3.
Configure a Lambda trigger on the S3 bucket and write your logic to read and send the logs to the third-party app every time a file is written to the bucket.
CloudWatch allows you to export logs to S3.
From AWS docs:
To export data to Amazon S3 using the CloudWatch console
Sign in as the IAM user that you created in Step 2: Create an IAM user with full access to Amazon S3 and CloudWatch Logs.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation pane, choose Log groups.
On the Log Groups screen, choose the name of the log group.
Choose Actions, Export data to Amazon S3.
On the Export data to Amazon S3 screen, under Define data export, set the time range for the data to export using From and To.
If your log group has multiple log streams, you can provide a log stream prefix to limit the log group data to a specific stream. Choose Advanced, and then for Stream prefix, enter the log stream prefix.
Under Choose S3 bucket, choose the account associated with the Amazon S3 bucket.
For S3 bucket name, choose an Amazon S3 bucket.
For S3 Bucket prefix, enter the randomly generated string that you specified in the bucket policy.
Choose Export to export your log data to Amazon S3.
To view the status of the log data that you exported to Amazon S3, choose Actions and then View all exports to Amazon S3.
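The same export can also be started programmatically with the CloudWatch Logs CreateExportTask API. A minimal sketch using boto3; the log group name, bucket name, and time window are placeholders, and the destination bucket must already have a policy that allows CloudWatch Logs to write to it:

```python
import time
import boto3

logs = boto3.client("logs")

# Export the last 24 hours of a log group to S3.
now_ms = int(time.time() * 1000)

response = logs.create_export_task(
    taskName="amplify-logs-export",         # arbitrary task name
    logGroupName="/aws/amplify/my-app",     # placeholder log group
    fromTime=now_ms - 24 * 60 * 60 * 1000,  # start of the export window (ms)
    to=now_ms,                              # end of the export window (ms)
    destination="my-log-archive-bucket",    # placeholder S3 bucket
    destinationPrefix="amplify-exports",    # optional key prefix
)
print(response["taskId"])
```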
Once you have exported the logs to S3, you can set up a simple S3 Lambda trigger to read these logs and send them to the third-party application (Splunk in this case) using its API, as sketched below.
This way you also keep a copy of the logs in S3 for future use.
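The Lambda part could look roughly like this sketch. It assumes the function is subscribed to ObjectCreated events on the export bucket; the Splunk HEC URL and token are placeholders:

```python
import gzip
import json
import urllib.request
import boto3

s3 = boto3.client("s3")

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector"  # placeholder
SPLUNK_HEC_TOKEN = "REPLACE_ME"                                        # placeholder

def handler(event, context):
    """Triggered by S3 ObjectCreated events on the log export bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # CloudWatch Logs export objects are gzip-compressed.
        if key.endswith(".gz"):
            body = gzip.decompress(body)

        for line in body.decode("utf-8").splitlines():
            req = urllib.request.Request(
                SPLUNK_HEC_URL,
                data=json.dumps({"event": line}).encode("utf-8"),
                headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
            )
            urllib.request.urlopen(req)
```

For real volumes you would batch the events per request instead of posting line by line, but the shape of the integration is the same.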
Is there a way to customize AWS management console so that it shows only the allowed services per user?
The AWS management console is identical for every user.
However, if a user does not have permissions for a particular service, the console may not be able to display some information (e.g. a list of Amazon S3 buckets or the state of Amazon EC2 instances). Users might also receive error messages explaining that they do not have permission to view some data.
It is not possible to customize the AWS management console.
I have created an S3 bucket and I'm not sure what I am missing with lifecycle policies.
Files in the S3 bucket are automatically moving to a tombstone folder after a few days. How do I stop this?
I have enabled only "Server access logging" in the Properties tab, and there are no lifecycle rules attached.
You can enable Amazon S3 Server Access Logging by following these instructions.
Server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits.
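You can also confirm programmatically that no lifecycle rules are attached to the bucket before digging into the access logs. A small sketch with boto3, using a placeholder bucket name:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder

try:
    config = s3.get_bucket_lifecycle_configuration(Bucket=bucket)
    for rule in config["Rules"]:
        print(rule["ID"], rule["Status"], rule.get("Transitions"), rule.get("Expiration"))
except ClientError as e:
    if e.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
        print("No lifecycle rules are attached to this bucket.")
    else:
        raise
```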
I am using AWS S3 to serve assets for my website. Even though I have added a Cache-Control metadata header to all my assets, my daily overall bandwidth usage almost doubled in the past month.
I am sure that traffic on my website has not increased dramatically enough to account for the increase in S3 bandwidth usage.
Is there a way to find out how much a file is contributing to the total bill in terms of bandwidth or cost?
I am routing all my traffic through Cloudflare, so it should be protected against DDoS attacks.
I expect the bandwidth of my S3 bucket to go down, or to get a valid explanation of why bandwidth almost doubled when there is no increase in daily traffic.
You need to enable Server Access Logging on your content bucket. Once you do this, all bucket accesses will be written to logfiles that are stored in a (different) S3 bucket.
You can analyze these logfiles with a custom program (you'll find examples on the web) or AWS Athena, which lets you write SQL queries against structured data.
I would focus on the remote IP address of the requester, to understand what proportion of requests are served via Cloudflare versus people going directly to your bucket.
If you find that CloudFlare is constantly reloading content from the bucket, you'll need to give some thought to cache-control headers, either as metadata on the object in S3, or in your CloudFlare configuration.
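If you'd rather not set up Athena, a short script can do a first pass over the access logs. A sketch, assuming the log files have been downloaded and concatenated into a local file (the filename is a placeholder; the field positions follow the documented server access log format):

```python
import re
from collections import Counter

# Fields are space-delimited, but the timestamp is in [brackets]
# and the request URI / user agent are "quoted".
FIELD = re.compile(r'\[[^\]]*\]|"[^"]*"|\S+')

bytes_by_ip = Counter()
bytes_by_key = Counter()

with open("access-logs.txt") as f:  # placeholder: concatenated log files
    for line in f:
        fields = FIELD.findall(line)
        if len(fields) < 13:
            continue
        remote_ip = fields[3]
        key = fields[7]
        bytes_sent = fields[11]  # "-" when no body was returned
        sent = int(bytes_sent) if bytes_sent.isdigit() else 0
        bytes_by_ip[remote_ip] += sent
        bytes_by_key[key] += sent

print("Top requester IPs by bytes sent:", bytes_by_ip.most_common(10))
print("Top objects by bytes sent:", bytes_by_key.most_common(10))
```

That should quickly show whether the extra bandwidth is a handful of objects being fetched over and over, or traffic bypassing Cloudflare entirely.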
From: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html
To enable CloudTrail data events logging for objects in an S3 bucket:
Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
In the Bucket name list, choose the name of the bucket that you want.
Choose Properties.
Choose Object-level logging.
Choose an existing CloudTrail trail in the drop-down menu. The trail you select must be in the same AWS Region as your bucket, so the drop-down list contains only trails that are in the same Region as the bucket or trails that were created for all Regions.
If you need to create a trail, choose the CloudTrail console link to go to the CloudTrail console. For information about how to create trails in the CloudTrail console, see Creating a Trail with the Console in the AWS CloudTrail User Guide.
Under Events, select Read to specify that you want CloudTrail to log Amazon S3 read APIs such as GetObject. Select Write to log Amazon S3 write APIs such as PutObject. Select both Read and Write to log both read and write object APIs. For a list of supported data events that CloudTrail logs for Amazon S3 objects, see Amazon S3 Object-Level Actions Tracked by CloudTrail Logging in the Amazon Simple Storage Service Developer Guide.
Choose Create to enable object-level logging for the bucket.
To disable object-level logging for the bucket, you must go to the CloudTrail console and remove the bucket name from the trail's Data events.
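Equivalently, object-level (data event) logging can be switched on from the CloudTrail side via the PutEventSelectors API. A minimal boto3 sketch, assuming an existing trail and a placeholder bucket name:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Log both read and write data events for every object in one bucket.
cloudtrail.put_event_selectors(
    TrailName="my-trail",                  # placeholder: an existing trail
    EventSelectors=[
        {
            "ReadWriteType": "All",        # Read, Write, or All
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # The trailing slash scopes this to all objects in the bucket.
                    "Values": ["arn:aws:s3:::my-bucket/"],
                }
            ],
        }
    ],
)
```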
What is the easiest way to get the user/role used to update/upload an object to S3?
The object is still in the bucket. I just want to know who did it.
I tried the CLI but didn't find anything. CloudTrail could be an option as well, I guess.
The easiest way would be to enable S3 server access logging:
AWS Console -> S3 -> Choose your bucket -> Properties -> Server access logging -> Choose target bucket (where you want your logs to be stored) -> Save
Each request is saved as one row in logs. It's not just for get requests, it's for all types of requests.
In logs, you would look for Requester:
The canonical user ID of the requester, or a - for unauthenticated requests. If the requester was an IAM user, this field returns the requester's IAM user name along with the AWS root account that the IAM user belongs to. This identifier is the same one used for access control purposes.
You can see more details in official documentation:
https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
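Once some access logs have accumulated, a short script can pull the Requester field for write operations on the object in question. A sketch with a placeholder filename and key, relying on the documented field order of the server access log format:

```python
import re

# Fields are space-delimited; the timestamp is bracketed, some fields are quoted.
FIELD = re.compile(r'\[[^\]]*\]|"[^"]*"|\S+')
TARGET_KEY = "path/to/my-object.txt"  # placeholder: the object you care about

with open("access-logs.txt") as f:    # placeholder: concatenated log files
    for line in f:
        fields = FIELD.findall(line)
        if len(fields) < 9:
            continue
        when, requester, operation, key = fields[2], fields[4], fields[6], fields[7]
        # REST.PUT.* operations show who wrote the object.
        if key == TARGET_KEY and operation.startswith("REST.PUT"):
            print(when, requester, operation)
```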
From Logging Amazon S3 API Calls by Using AWS CloudTrail - Amazon Simple Storage Service:
Amazon S3 is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Amazon S3. CloudTrail captures a subset of API calls for Amazon S3 as events, including calls from the Amazon S3 console and from code calls to the Amazon S3 APIs.