Let me preface this by saying that I have discussed this problem in detail with advanced AWS support engineers and they have no reasonable solution, so this may be an impossible problem. However, I thought I'd give it a shot here and see if someone has a creative solution.
The problem:
We have an AWS organization with multiple accounts, and multiple different accounts use CloudFront distributions.
In accordance with AWS recommendations / best practices, we have a centralized logs archive account that collects all of our logs from all organization accounts.
We need to collect CloudFront logs from each CloudFront distribution in each organization account, and store them in our logs archive bucket in the centralized logging account.
CloudFront supports two logging options: (a) logging directly to S3, and (b) "Real-Time Logging" to a Kinesis data stream.
Option (a) would be ideal, but CloudFront uses the archaic S3 ACL system and requires the "Full Control" ACL on the bucket being logged to. To configure this, the user/role configuring CloudFront must itself have either the "Full Control" ACL or the "s3:PutBucketAcl" IAM permission (so it can grant the ACL to CloudFront).
If we grant "Full Control" ACL or the "s3:PutBucketAcl" permission to the org account, that account can then grant full access to any other entity they please, and there's no way to restrict who they grant permissions to (there's no IAM condition key that supports restricting who ACLs are granted to). Additionally, they could write objects to the bucket other than CloudFront logs. This is unacceptable for multiple reasons since we don't necessarily have a high level of trust for this account.
Furthermore, we would have no way to control which path in the bucket they configure CloudFront to log to, which is unacceptable because we need CloudFront logs to go to a specific path in the bucket. We could use the bucket policy to deny "s3:PutObject" to anything other than the path we want, but then if the account specifies an incorrect path, the logs just won't get written and will be lost to the ether.
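For reference, this is roughly the deny statement we had in mind (bucket name and log prefix are placeholders), and it still has the lost-logs failure mode described above:

```python
# Sketch of the bucket-policy half-measure we considered. Bucket name and
# prefix are hypothetical. Writes outside the expected CloudFront log prefix
# are denied, but a misconfigured distribution's logs are silently rejected
# rather than redirected.
import json
import boto3

s3 = boto3.client("s3")

LOG_BUCKET = "central-logs-archive"   # hypothetical central bucket
ALLOWED_PREFIX = "cloudfront/"        # hypothetical required log prefix

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyWritesOutsideCloudFrontPrefix",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "NotResource": f"arn:aws:s3:::{LOG_BUCKET}/{ALLOWED_PREFIX}*",
        }
    ],
}

s3.put_bucket_policy(Bucket=LOG_BUCKET, Policy=json.dumps(policy))
```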
Option (b) could work, but because Kinesis data streams don't support resource policies and AWS doesn't support cross-account role passing, there's no way to have CloudFront send logs to a central Kinesis stream in the logging account. We would need a Kinesis data stream in each org account that uses CloudFront, which would become quite expensive.
So, I'm pretty much out of ideas. AWS's suggestion, after weeks of discussion and debate, was to:
Send CloudFront logs to an S3 bucket in the org account.
Use a Lambda that triggers on object puts to read the object and send the data to CloudWatch Logs.
Put a CloudWatch subscription on the log group to send the logs to a Kinesis Firehose in the log archives account, which would then store them in the S3 bucket.
This is also an unacceptable solution though, because it creates a ton of extra resources/cost and has a lot of points of failure.
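For completeness, here's roughly what the Lambda glue in that suggestion would look like (the log group name, and the assumption that standard gzipped CloudFront logs land in the bucket, are mine), which hopefully shows how much machinery the workaround adds:

```python
# Rough sketch of the Lambda AWS suggested, just to illustrate the moving
# parts involved. Log group / stream names are hypothetical.
import gzip
import time
import boto3

s3 = boto3.client("s3")
logs = boto3.client("logs")

LOG_GROUP = "/org/cloudfront/raw"  # hypothetical log group

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # Standard CloudFront access logs are gzip-compressed.
        lines = gzip.decompress(body).decode("utf-8").splitlines()

        stream = key.replace("/", "-")
        try:
            logs.create_log_stream(logGroupName=LOG_GROUP, logStreamName=stream)
        except logs.exceptions.ResourceAlreadyExistsException:
            pass

        # Note: put_log_events has batch size limits, ignored here for brevity.
        now = int(time.time() * 1000)
        logs.put_log_events(
            logGroupName=LOG_GROUP,
            logStreamName=stream,
            logEvents=[{"timestamp": now, "message": line} for line in lines if line],
        )
```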
So, if anyone has any bright ideas or details on how they centralized CloudFront logs for their organization, please let me know!
Related
Recently, one of my AWS accounts was compromised; fortunately, we were able to change all sensitive information in time. To avoid a recurrence, the first thing to do is put a proper process in place for managing secrets.
That said, I would also like to trigger a CloudWatch alarm when multiple downloads or deletes take place from inside my AWS account.
I have come across solutions like:
1. AWS WAF
2. Have a CDN in place
3. Trigger a Lambda function on an S3 event
Solutions #1 & #2 are not serving to my requirement as they throttle requests coming from outside of AWS. Once it is implemented at S3 level, it will automatically throttle both inside and outside requests.
In solution #3 I could not get a hold of multiple objects requested by an IP in my lambda function, when a threshold time limit and threshold number of file is crossed.
Is raising an alarm by rate-limiting at S3 level a possibility?
There is no rate limit provided by AWS on S3 directly, but you can implement alarms via SNS topics and CloudTrail.
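Here's a minimal sketch of that idea, assuming you already have a trail delivering S3 data events to a CloudWatch Logs group (the group name, topic ARN, and threshold below are placeholders):

```python
# Minimal sketch: assumes a CloudTrail trail already delivers S3 data events
# to the CloudWatch Logs group below. All names/ARNs/thresholds are placeholders.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "CloudTrail/S3DataEvents"                            # hypothetical
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:s3-alerts"   # hypothetical

# Count DeleteObject calls recorded by CloudTrail.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="S3DeleteObjectCount",
    filterPattern='{ ($.eventSource = "s3.amazonaws.com") && ($.eventName = "DeleteObject") }',
    metricTransformations=[{
        "metricName": "S3DeleteObjectCount",
        "metricNamespace": "Custom/S3",
        "metricValue": "1",
    }],
)

# Alarm (which emails via the SNS topic) when deletes exceed a threshold
# within a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="ExcessiveS3Deletes",
    Namespace="Custom/S3",
    MetricName="S3DeleteObjectCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)
```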
Unless you explicitly need someone on your team to be able to remove objects from your S3 bucket, you shouldn't grant that access to anyone. The following are some ideas you can follow:
Implement least-privilege access
You can block object deletion at the IAM user level, so no one will be able to remove any items.
You can modify the bucket policy to grant DeleteObject access only to specific users/roles, using conditions, as in the sketch below.
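For example, a bucket policy along these lines denies deletes to everyone except one designated role (bucket name, account ID, and role ARN are placeholders):

```python
# Sketch of a bucket policy that denies s3:DeleteObject to everyone except a
# designated admin role. Bucket name, account ID, and role name are placeholders.
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "my-important-docs"                                   # hypothetical
ADMIN_ROLE_ARN = "arn:aws:iam::111122223333:role/docs-admin"   # hypothetical

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyDeleteExceptAdmin",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotLike": {"aws:PrincipalArn": ADMIN_ROLE_ARN}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```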
Enable multi-factor authentication (MFA) Delete
MFA Delete can help prevent accidental bucket deletions. If MFA Delete is not enabled, any user with the password of a sufficiently privileged root or IAM user could permanently delete an Amazon S3 object.
MFA Delete requires additional authentication for either of the following operations:
Changing the versioning state of your bucket
Permanently deleting an object version
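Roughly, enabling it looks like this (it has to be done with root-account credentials, and the MFA device ARN and code below are placeholders):

```python
# Sketch of enabling MFA Delete. This call must be made with root-account
# credentials; the bucket name, MFA device ARN, and token are placeholders.
import boto3

s3 = boto3.client("s3")  # must be using root credentials for MFA Delete

s3.put_bucket_versioning(
    Bucket="my-important-docs",  # hypothetical bucket
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",  # device ARN + current code
    VersioningConfiguration={
        "Status": "Enabled",
        "MFADelete": "Enabled",
    },
)
```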
S3 Object Lock
S3 Object Lock enables you to store objects using a "Write Once Read Many" (WORM) model. S3 Object Lock can help prevent accidental or inappropriate deletion of data. For example, you could use S3 Object Lock to help protect your AWS CloudTrail logs.
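A rough sketch of setting it up (Object Lock has to be enabled when the bucket is created; the bucket name, region handling, and retention period below are placeholders):

```python
# Sketch: Object Lock must be enabled at bucket creation time. Names and the
# retention settings are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.create_bucket(
    Bucket="worm-protected-logs",            # hypothetical bucket
    ObjectLockEnabledForBucket=True,
    # Outside us-east-1 you also need:
    # CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Apply a default retention rule so new objects cannot be deleted for 30 days.
s3.put_object_lock_configuration(
    Bucket="worm-protected-logs",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```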
Amazon Macie with Amazon S3
Macie uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property. It provides you with dashboards and alerts that give visibility into how this data is being accessed or moved.
You can learn more about S3 security best practices here:
https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/
We have an Amazon S3 account, and a number of important documents are saved in a bucket there.
Is there a way we can secure those resources so that they are not deleted from the S3 account by any team member other than the primary account holder?
Also, how can we back up all the S3 resources to Google Drive?
Thanks in advance.
The highest level of protection against objects being deleted is MFA Delete, which can only be enabled by the root user.
MFA Delete also prevents versioning from being disabled on your bucket.
Regarding Google Drive, I'm not aware of any built-in AWS tool for that. I think you would have to look at third-party tools, or develop your own.
For backing up all S3 resources to Google Drive (or vice versa), rclone running on a schedule is probably one of the simplest solutions that can achieve this nicely for you.
Confidential documents
Some organizations keep confidential documents in a separate AWS Account, so that normal users do not have access to the documents.
Cross-account read permissions can be granted to appropriate users (eg for confidential HR documents).
Critical documents
If you just wish to "back up" to avoid accidental deletion, one option is to use Amazon S3 Same-Region Replication to copy documents to another bucket. The destination bucket can be in a different account, so normal users do not have the ability to delete the copies.
In both cases, credentials to the secondary accounts would only be given sparingly, and they can also be protected by using Multi-Factor Authentication.
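As a rough sketch of the replication option (all names, ARNs, and account IDs are placeholders, and both buckets need versioning enabled):

```python
# Sketch of Same-Region Replication into a bucket owned by a second account.
# Both buckets must have versioning enabled, and the IAM role must allow
# replication; every name/ARN/account ID below is a placeholder.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="primary-docs",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # hypothetical
        "Rules": [{
            "ID": "backup-to-secondary-account",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # replicate everything
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::secondary-docs-backup",  # hypothetical
                "Account": "444455556666",                       # hypothetical destination account
                # Hand object ownership to the destination account so source
                # users cannot touch the copies.
                "AccessControlTranslation": {"Owner": "Destination"},
            },
        }],
    },
)
```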
I would like to get an email notification to a specific mail address if anyone updates permissions (list, write, read, make public) on objects inside a specific S3 bucket.
We have a situation where multiple people in our organization are allowed to access the S3 buckets. Each can upload/download their own team-related files. While doing this, some people make the mistake of making the whole bucket public, or enabling the write and list permissions. We are unable to identify the problem when this happens and cannot take immediate action to revoke the permission. To avoid this, we need an email notification whenever someone changes the permissions on a particular S3 bucket.
Please advise on how to handle this situation.
Write better permissions first: attach explicit deny policies for low-level users on the specific buckets, which will override the existing bad policies. Even if you get a notification when someone changes a bucket policy, it might take you some time to act on it. CloudTrail API detection and email notification can be done, but it will still leave a window of exposure. I believe efforts should be focused primarily on getting the access permissions right, and then on an event-based email trigger.
#Amit shared the correct article for that: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events | AWS Security Blog
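If you do want the notification as well, a minimal sketch looks like this, assuming CloudTrail is recording these S3 API calls and you already have an SNS topic with an email subscription (the ARN is a placeholder):

```python
# Sketch of a CloudWatch Events / EventBridge rule that notifies an SNS topic
# (with an email subscription) whenever someone changes bucket or object
# permissions. Assumes CloudTrail is logging these API calls; the SNS topic
# must also allow events.amazonaws.com to publish to it. ARNs are placeholders.
import json
import boto3

events = boto3.client("events")

SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:s3-permission-alerts"  # hypothetical

events.put_rule(
    Name="s3-permission-change",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": [
                "PutBucketAcl",
                "PutBucketPolicy",
                "PutObjectAcl",
                "DeleteBucketPolicy",
            ],
        },
    }),
)

events.put_targets(
    Rule="s3-permission-change",
    Targets=[{"Id": "notify-sns", "Arn": SNS_TOPIC_ARN}],
)
```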
I am using AWS S3 to serve assets for my website. Even though I have added a Cache-Control metadata header to all my assets, my daily overall bandwidth usage almost doubled in the past month.
I am sure that traffic to my website has not increased dramatically enough to account for the increase in S3 bandwidth usage.
Is there a way to find out how much each file is contributing to the total bill in terms of bandwidth or cost?
I am routing all my traffic through Cloudflare, so it should be protected against DDoS attacks.
I expect my S3 bucket's bandwidth usage to go down, or at least to find a valid reason that explains why it almost doubled when there has been no increase in daily traffic.
You need to enable Server Access Logging on your content bucket. Once you do this, all bucket accesses will be written to logfiles that are stored in a (different) S3 bucket.
You can analyze these logfiles with a custom program (you'll find examples on the web) or AWS Athena, which lets you write SQL queries against structured data.
I would focus on the remote IP address of the requestor, to understand what proportion of requests are served via CloudFlare versus people going directly to your bucket.
If you find that CloudFlare is constantly reloading content from the bucket, you'll need to give some thought to cache-control headers, either as metadata on the object in S3, or in your CloudFlare configuration.
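Enabling the access logging itself is a one-off call along these lines (bucket names and prefix are placeholders; the target bucket must already allow the S3 log delivery service to write to it):

```python
# Sketch of enabling server access logging on the content bucket. Bucket names
# and prefix are hypothetical; the target bucket must already grant the S3 log
# delivery service permission to write, and should be in the same region.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_logging(
    Bucket="my-website-assets",  # hypothetical content bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-access-logs",  # hypothetical log bucket
            "TargetPrefix": "assets-bucket/",
        }
    },
)
```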
From: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html
To enable CloudTrail data events logging for objects in an S3 bucket:
Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
In the Bucket name list, choose the name of the bucket that you want.
Choose Properties.
Choose Object-level logging.
Choose an existing CloudTrail trail in the drop-down menu. The trail you select must be in the same AWS Region as your bucket, so the drop-down list contains only trails that are in the same Region as the bucket or trails that were created for all Regions.
If you need to create a trail, choose the CloudTrail console link to go to the CloudTrail console. For information about how to create trails in the CloudTrail console, see Creating a Trail with the Console in the AWS CloudTrail User Guide.
Under Events, select Read to specify that you want CloudTrail to log Amazon S3 read APIs such as GetObject. Select Write to log Amazon S3 write APIs such as PutObject. Select both Read and Write to log both read and write object APIs. For a list of supported data events that CloudTrail logs for Amazon S3 objects, see Amazon S3 Object-Level Actions Tracked by CloudTrail Logging in the Amazon Simple Storage Service Developer Guide.
Choose Create to enable object-level logging for the bucket.
To disable object-level logging for the bucket, you must go to the CloudTrail console and remove the bucket name from the trail's Data events.
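If you prefer to do this outside the console, the same thing can be configured on an existing trail roughly like this (trail and bucket names are placeholders):

```python
# Sketch of enabling S3 object-level (data event) logging via the API instead
# of the console. Trail and bucket names are hypothetical.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="my-org-trail",  # hypothetical existing trail in the bucket's region
    EventSelectors=[{
        "ReadWriteType": "All",            # log both read and write object APIs
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            # Trailing slash = all objects in the bucket.
            "Values": ["arn:aws:s3:::my-bucket/"],
        }],
    }],
)
```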
I want to grant a certain AWS account read permission to one of my S3 objects (a file) via a URL.
Is it possible to grant permissions to another AWS account using its AWS account ID (the account ID is the only information I have about that account)?
Yes, you can do this. You want to use the Principal element.
You can find examples here.
(I know links are generally frowned upon, but AWS technologies change at such a rapid pace that actual examples may be obsolete within days or months)
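As a rough example of the Principal-based approach (bucket, key, and account ID are placeholders):

```python
# Sketch of a bucket policy granting another account read access to a single
# object, keyed only by its account ID. Bucket, key, and account ID are placeholders.
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "my-shared-bucket"        # hypothetical
KEY = "reports/q1.pdf"             # hypothetical
OTHER_ACCOUNT_ID = "444455556666"  # the only thing you know about the other account

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CrossAccountReadSingleObject",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{OTHER_ACCOUNT_ID}:root"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/{KEY}",
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

Note that the other account still has to grant its own IAM users/roles s3:GetObject on that object ARN before they can actually fetch it.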