Protection against deletion of S3 server access logs - amazon-web-services

The server access logs of a bucket, say A, are required for auditing purposes. These logs are stored in another bucket, say B. How can we make sure that no tampering with or deletion of the logs in bucket B has happened? It would have been easier to send these logs to CloudWatch for better retention, but I'm not sure whether that is possible for S3 server access logs.

Versioning, MFA Delete, and Object Lock are all S3 features that will help you achieve this.
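For instance, a minimal boto3 sketch for turning on versioning on the log bucket (the bucket name is a placeholder), so deletes only add delete markers instead of destroying data; MFA Delete and Object Lock are covered in more detail in the answers below:

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning on the log bucket so a DeleteObject call only adds
# a delete marker; the underlying log object versions survive.
s3.put_bucket_versioning(
    Bucket="my-access-log-bucket",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled"},
)
```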

Related

AWS S3 Access Logging for All Buckets

Is it possible to have server access logging enabled for all S3 buckets in an account? While you can configure an S3 bucket to log to itself, according to this document that will result in a cascading loop: https://aws.amazon.com/premiumsupport/knowledge-center/s3-server-access-logs-same-bucket/.
Is there some way to tell it not to log actions caused by the access logging action?
Should the access log bucket not have access logging enabled for itself? Seems like that'd be leaving a security hole.
I set my S3 access log bucket to send access logs to itself and, as expected, an infinite storm of logs occurred as the access logging logged its own actions. I had hoped that maybe AWS had improved this over the years, but I guess not.
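The usual way to avoid the loop is to have every source bucket log to a dedicated log bucket, and to leave access logging off on the log bucket itself. A rough boto3 sketch, with placeholder bucket names (the target bucket must separately grant the S3 logging service permission to write to it):

```python
import boto3

s3 = boto3.client("s3")

# Point the source bucket's access logs at a *different* bucket.
# If the log bucket logged to itself, S3 would also log each log
# delivery, producing the infinite cascade described above.
s3.put_bucket_logging(
    Bucket="my-source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-central-access-log-bucket",
            "TargetPrefix": "access-logs/my-source-bucket/",
        }
    },
)
```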

Centralized multi-account logging for AWS CloudFront

Let me preface this by saying that I have discussed this problem in detail with advanced AWS support engineers and they have no reasonable solution, so this may be an impossible problem. However, I thought I'd give it a shot here and see if someone has a creative solution.
The problem:
We have an AWS organization with multiple accounts, and multiple different accounts use CloudFront distributions.
In accordance with AWS recommendations / best practices, we have a centralized logs archive account that collects all of our logs from all organization accounts.
We need to collect CloudFront logs from each CloudFront distribution in each organization account, and store them in our logs archive bucket in the centralized logging account.
CloudFront supports two logging options: (a) logging directly to S3, and (b) "Real-Time Logging" to a Kinesis data stream.
Option (a) would be ideal, but CloudFront uses the archaic S3 ACL system and requires the "Full Control" ACL on the bucket being logged to. To configure this, the user/role configuring CloudFront must itself have either the "Full Control" ACL or the "s3:PutBucketAcl" IAM permission (so it can grant the ACL to CloudFront).
If we grant the "Full Control" ACL or the "s3:PutBucketAcl" permission to the org account, that account can then grant full access to any other entity it pleases, and there's no way to restrict who it grants permissions to (there's no IAM condition key that supports restricting who ACLs are granted to). Additionally, it could write objects other than CloudFront logs to the bucket. This is unacceptable for multiple reasons, since we don't necessarily have a high level of trust in this account.
Furthermore, we would have no way to control which path in the bucket they configure CloudFront to log to, which is unacceptable because we need CloudFront logs to go to a specific path in the bucket. We could use the bucket policy to deny "s3:PutObject" to anything other than the path we want, but then if the account specifies an incorrect path, the logs just won't get written and will be lost to the ether.
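For reference, the deny-everything-outside-the-prefix policy mentioned above might look roughly like the sketch below (account ID, bucket name, and prefix are made up); note it still has the lost-logs failure mode just described:

```python
import json

import boto3

s3 = boto3.client("s3")

# Deny PutObject anywhere in the log bucket except the agreed CloudFront
# prefix. Writes to any other path are rejected outright, so a
# misconfigured distribution silently loses its logs.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyWritesOutsideCloudFrontPrefix",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "NotResource": "arn:aws:s3:::central-log-archive/cloudfront/*",
        }
    ],
}
s3.put_bucket_policy(Bucket="central-log-archive", Policy=json.dumps(policy))
```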
Option (b) could work, but because Kinesis data streams don't support resource policies and AWS doesn't support cross-account role passing, there's no way to have CloudFront send logs to a central Kinesis stream in the logging account. We would need a Kinesis data stream in each org account that uses CloudFront, which would become quite expensive.
So, I'm pretty much out of ideas. AWS's suggestion, after weeks of discussion and debate, was to:
1. Send CloudFront logs to an S3 bucket in the org account.
2. Use a Lambda that triggers on object puts to read the object and send the data to CloudWatch.
3. Put a CloudWatch subscription on the log group to send the logs to a Kinesis Firehose in the log archives account, which would then store them in the S3 bucket.
This is also an unacceptable solution though, because it creates a ton of extra resources/cost and has a lot of points of failure.
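For completeness, here is a rough sketch of what the Lambda in step 2 might look like (the log group name is made up; CloudFront standard logs arrive gzipped; batching limits and retries are ignored). It mostly illustrates how many moving parts this pipeline has:

```python
import gzip
import time

import boto3

s3 = boto3.client("s3")
logs = boto3.client("logs")

LOG_GROUP = "/cloudfront/access-logs"  # hypothetical, must already exist


def handler(event, context):
    """Triggered on s3:ObjectCreated; forwards each CloudFront log line
    to CloudWatch Logs so a subscription filter can ship it onward."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        lines = gzip.decompress(body).decode("utf-8").splitlines()

        stream = key.replace("/", "-")
        try:
            logs.create_log_stream(logGroupName=LOG_GROUP, logStreamName=stream)
        except logs.exceptions.ResourceAlreadyExistsException:
            pass

        now = int(time.time() * 1000)
        logs.put_log_events(
            logGroupName=LOG_GROUP,
            logStreamName=stream,
            logEvents=[
                {"timestamp": now, "message": line}
                for line in lines
                if line and not line.startswith("#")  # skip header lines
            ],
        )
```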
So, if anyone has any bright ideas or details on how they centralized CloudFront logs for their organization, please let me know!

Trigger alarm based on a rate-limit on S3 GetObject and DeleteObject requests

Recently, one of my AWS accounts was compromised; fortunately, we were able to change all sensitive information in time. To avoid a recurrence, the first thing to do is to put a process in place for managing secrets.
That said, I would also want to trigger a CloudWatch alarm whenever multiple downloads or deletes are taking place from inside my AWS account.
I have come across solutions like:
1. AWS WAF
2. Have a CDN in place
3. Trigger a Lambda function on an S3 event
Solutions #1 and #2 don't meet my requirement, as they throttle only requests coming from outside AWS. A limit implemented at the S3 level would automatically cover both inside and outside requests.
With solution #3, I could not get hold, inside my Lambda function, of the full set of objects requested by a single IP, so I couldn't tell when a threshold number of files had been crossed within a threshold time window.
Is raising an alarm by rate-limiting at the S3 level a possibility?
There is no rate limit provided by AWS on S3 directly, but you can implement alarms over SNS topics with CloudTrail.
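Assuming you already have a trail that delivers S3 data events to a CloudWatch Logs group, the alarm side might look roughly like this boto3 sketch (the log group name, thresholds, and SNS topic ARN are all placeholders):

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count GetObject/DeleteObject data events delivered by CloudTrail.
logs.put_metric_filter(
    logGroupName="cloudtrail-log-group",  # hypothetical log group
    filterName="s3-get-delete-count",
    filterPattern='{ ($.eventName = "GetObject") || ($.eventName = "DeleteObject") }',
    metricTransformations=[{
        "metricName": "S3GetDeleteCount",
        "metricNamespace": "Custom/S3",
        "metricValue": "1",
    }],
)

# Alarm via SNS if more than 100 such calls land in any 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="s3-mass-download-or-delete",
    Namespace="Custom/S3",
    MetricName="S3GetDeleteCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],
    TreatMissingData="notBreaching",
)
```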
Unless you explicitly require someone on your team to remove objects from your S3 bucket, you shouldn't grant that access. Here are some ideas you can follow:
Implement least-privilege access
You can block access to remove objects at the IAM user level, so no one will be able to remove any items. You can also modify the bucket policy to grant DeleteObject access only to specific users/roles, using conditions.
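A rough sketch of such a bucket policy, applied with boto3 (the bucket name, account ID, and role ARN are placeholders): an explicit Deny overrides any Allow granted elsewhere, so only the approved role can delete.

```python
import json

import boto3

s3 = boto3.client("s3")

# Deny DeleteObject to every principal except one approved role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteExceptApprovedRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:DeleteObject",
        "Resource": "arn:aws:s3:::my-audit-log-bucket/*",
        "Condition": {
            "StringNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::111122223333:role/log-admin"
            }
        },
    }],
}
s3.put_bucket_policy(Bucket="my-audit-log-bucket", Policy=json.dumps(policy))
```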
Enable multi-factor authentication (MFA) Delete
MFA Delete can help prevent accidental bucket deletions. If MFA Delete is not enabled, any user with the password of a sufficiently privileged root or IAM user could permanently delete an Amazon S3 object.
MFA Delete requires additional authentication for either of the following operations:
Changing the versioning state of your bucket
Permanently deleting an object version
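MFA Delete is enabled through the versioning API and, notably, must be called with the root account's credentials and MFA device. A hedged boto3 sketch (the bucket name, device ARN, and token are placeholders):

```python
import boto3

# Must run with root-account credentials; MFA Delete cannot be enabled
# with ordinary IAM credentials.
s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="my-audit-log-bucket",
    # Format: "<mfa-device-arn> <current-6-digit-code>", space separated.
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```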
S3 Object Lock
S3 Object Lock enables you to store objects using a "Write Once Read Many" (WORM) model. S3 Object Lock can help prevent accidental or inappropriate deletion of data. For example, you could use S3 Object Lock to help protect your AWS CloudTrail logs.
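Object Lock can only be enabled when the bucket is created; after that you can set a default retention rule. A minimal sketch (the bucket name and retention period are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Object Lock must be turned on at bucket creation time.
s3.create_bucket(
    Bucket="my-worm-log-bucket",
    ObjectLockEnabledForBucket=True,
)

# In COMPLIANCE mode no one, including the root user, can delete or
# overwrite a locked object version until the retention period expires.
s3.put_object_lock_configuration(
    Bucket="my-worm-log-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)
```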
Amazon Macie with Amazon S3
Macie uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property. It provides you with dashboards and alerts that give visibility into how this data is being accessed or moved.
You can learn more about security best practices for S3 here: https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/

Who has deleted files in S3 bucket?

What is the best way to find out who deleted files in an AWS S3 bucket?
I am working with an S3 bucket. I've gone through the AWS docs and haven't found the best way to monitor S3 buckets, so I thought I'd check whether anyone can help me here.
For monitoring S3 object operations, such as DeleteObject, you have to enable CloudTrail with S3 data events:
How do I enable object-level logging for an S3 bucket with AWS CloudTrail data events?
Examples: Logging Data Events for Amazon S3 Objects
However, trails don't work retrospectively. Thus, you have to check whether you already have such a trail enabled in the CloudTrail console. If not, you can create one to monitor any future S3 object-level activities for all, or selected, buckets.
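A sketch of adding S3 data events to an existing trail with boto3 (the trail and bucket names are placeholders):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Record object-level S3 events (GetObject, DeleteObject, ...) for one
# bucket on an existing trail. The trailing "/" on the ARN scopes the
# selector to every object in the bucket.
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::my-bucket/"],
        }],
    }],
)
```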
To reduce the impact of accidental deletions you can enable object versioning. And to fully protect important objects against deletion, you can use MFA Delete.
You can look at the S3 access logs or CloudTrail to see who deleted files from your S3 bucket. More information here - https://aws.amazon.com/premiumsupport/knowledge-center/s3-audit-deleted-missing-objects/

AWS S3 bucket logs vs AWS cloudtrail

What's the difference between AWS S3 logs and AWS CloudTrail?
On the doc of CloudTrail I saw this:
CloudTrail adds another dimension to the monitoring capabilities already offered by AWS. It does not change or replace logging features you might already be using.
CloudTrail tracks API access for infrastructure-changing events; in S3 this means creating, deleting, and modifying buckets (S3 CloudTrail docs). It is very focused on API methods that modify buckets.
S3 server access logging provides web-server-style logging of access to the objects in an S3 bucket. This logging is granular to the object, includes read-only operations, and includes non-API access like static website browsing.
AWS has added one more functionality since this question was asked, namely CloudTrail Data events
Currently there are three features available:
1. CloudTrail: logs almost all API calls at the bucket level (Ref)
2. CloudTrail data events: logs almost all API calls at the object level (Ref)
3. S3 server access logs: logs almost all access calls to S3 objects, on a best-effort delivery basis (Ref)
Now, 2 and 3 seem like similar functionalities, but they have some differences which may prompt users to use one or the other, or both (in our case)! Below are the differences I could find:
The two work at different levels of granularity. For example, CloudTrail data events can be set for all the S3 buckets in an AWS account or for just some folder in an S3 bucket, whereas S3 server access logs are configured at the individual-bucket level.
The S3 server access logs give more comprehensive information about each request, such as BucketOwner, HTTPStatus, ErrorCode, etc. (full list; a small parsing sketch follows at the end of this answer).
Information which is not available in CloudTrail logs but is available in server access logs (reference):
Fields for Object Size, Total Time, Turn-Around Time, and HTTP Referer for log records
Life cycle transitions, expiration, restores
Logging of keys in a batch delete operation
Authentication failures
CloudTrail does not deliver logs for requests that fail authentication (in which the provided credentials are not valid). However, it does include logs for requests in which authorization fails (AccessDenied) and requests that are made by anonymous users.
If a request is made by a different AWS account, you will see the CloudTrail log in your account only if the bucket owner owns or has full access to the object in the request. If that is not the case, the logs will only be seen in the requester's account. The same request will, however, appear in the server access logs of your account without any additional requirements.
AWS Support recommends basing decisions on CloudTrail logs and, if you also need the extra information that is not available there, using server access logs as well.
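As a quick illustration of the extra per-request fields mentioned above, here is a rough Python sketch that extracts the leading fields of one server access log line (simplified; the real format has more trailing fields such as bytes sent, object size, total time, and referer):

```python
import re

# Server access log lines start with: bucket owner, bucket, [time],
# remote IP, requester, request ID, operation, key, "request-URI",
# HTTP status, error code, ... (space-delimited).
LOG_RE = re.compile(
    r'(\S+) (\S+) \[(.*?)\] (\S+) (\S+) (\S+) (\S+) (\S+) "(.*?)" (\S+) (\S+)'
)

FIELDS = ["owner", "bucket", "time", "remote_ip", "requester",
          "request_id", "operation", "key", "request_uri",
          "http_status", "error_code"]


def parse(line: str) -> dict:
    """Return the leading fields of one S3 server access log line."""
    m = LOG_RE.match(line)
    if not m:
        raise ValueError("unrecognized log line")
    return dict(zip(FIELDS, m.groups()))
```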
There are two reasons to use CloudTrail Logs over S3 Server Access Logs:
You are interested in bucket-level activity logging. CloudTrail has that; S3 logs do not.
You have a log analysis setup that involves CloudWatch log streams. The basic S3 logs just store log events as files in some S3 bucket, and from there it's up to you to process them (though most log analytics services can do this for you).
Bottom line: use CloudTrail, which costs extra, if you have a specific scenario that requires it. Otherwise, the "standard" S3 Server Access Logs are good enough.
From the CloudTrail developer guide (https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudtrail-logging.html):
Using CloudTrail Logs with Amazon S3 Server Access Logs and CloudWatch Logs
You can use AWS CloudTrail logs together with server access logs for Amazon S3. CloudTrail logs provide you with detailed API tracking for Amazon S3 bucket-level and object-level operations, while server access logs for Amazon S3 provide you visibility into object-level operations on your data in Amazon S3. For more information about server access logs, see Amazon S3 Server Access Logging.
You can also use CloudTrail logs together with CloudWatch for Amazon S3. CloudTrail integration with CloudWatch logs delivers S3 bucket-level API activity captured by CloudTrail to a CloudWatch log stream in the CloudWatch log group you specify. You can create CloudWatch alarms for monitoring specific API activity and receive email notifications when the specific API activity occurs. For more information about CloudWatch alarms for monitoring specific API activity, see the AWS CloudTrail User Guide. For more information about using CloudWatch with Amazon S3, see Monitoring Metrics with Amazon CloudWatch.
AWS CloudTrail is an AWS service for logging all account activity across different AWS resources. It also tracks things like IAM console logins. Once the CloudTrail service is enabled, you can just go to the CloudTrail console, see all the activity, and apply filters. Also, while enabling it, you can choose to log these activities and send the data to Amazon CloudWatch. In CloudWatch you can apply filters and also create alarms that notify you when a certain kind of activity happens.
S3 logging enables logging of basic activity on your S3 buckets/objects.
CloudTrail logs API calls made to your AWS account. These CloudTrail logs are stored in an Amazon S3 bucket.
The two offer different services.
The definition you shared from the CloudTrail doc:
CloudTrail adds another dimension to the monitoring capabilities already offered by AWS. It does not change or replace logging features you might already be using.
means you might already have activated some of the logging features offered by other AWS services, like ELB logging. When you enable CloudTrail monitoring, you need not worry about your existing logging functionality, as it will still be active; you will receive logs from all the services.
So, enabling CloudTrail logging does not change or replace logging features you might already be using.
Hope it helps! :)