Email notification when S3 bucket permission changed - amazon-web-services

I would like to get an email notification sent to a specific mail ID if anyone updates permissions (list, write, read, make public) on objects inside a specific S3 bucket.
We have a situation where multiple people in our organization are allowed to access the S3 buckets. Each team can upload/download its own files. While doing this, some people make mistakes such as making the whole bucket public or enabling write and list permissions. We are unable to identify the problem when such a permission is enabled, so we cannot take immediate action to revoke it. To avoid this, we need an email notification whenever someone changes the permissions on a particular S3 bucket.
Please help me handle this situation.

Write better permissions: attach explicit deny policies for low-level users on specific buckets, which will override existing bad policies. Even if you get a notification when someone changes a bucket policy, it may take you some time to act on it. CloudTrail API detection with email notification can be done, but it still leaves a window of exposure. I believe the effort should go into getting the access permissions right first, and then into an event-based email trigger.
@Amit shared the right article for that: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events | AWS Security Blog
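For the "CloudTrail + email" route, here is a minimal sketch of the notification piece. It assumes CloudTrail is enabled, that a CloudWatch Events / EventBridge rule matches S3 permission-change API calls (e.g. PutBucketAcl, PutBucketPolicy, PutObjectAcl) and invokes this Lambda, and that the ALERT_TOPIC_ARN environment variable points at an SNS topic with an email subscription; all of those names are placeholders, not part of the original question.

```python
import json
import os

import boto3

sns = boto3.client("sns")


def handler(event, context):
    """Publish a short email via SNS for a CloudTrail-sourced permission-change event."""
    # EventBridge delivers CloudTrail API calls under the "detail" key
    detail = event.get("detail", {})
    summary = {
        "eventName": detail.get("eventName"),
        "bucket": detail.get("requestParameters", {}).get("bucketName"),
        "who": detail.get("userIdentity", {}).get("arn"),
        "when": detail.get("eventTime"),
    }
    sns.publish(
        TopicArn=os.environ["ALERT_TOPIC_ARN"],  # placeholder: your SNS topic ARN
        Subject="S3 permission change detected",
        Message=json.dumps(summary, indent=2),
    )
```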

Related

Fully evaluate S3 access

I am kind of new to S3, and I am aware that access to my bucket/objects can be granted either through a bucket policy or through ACLs. The thing is that ACL access can be granted per object, so it is not clear to me how to fully review who has been given access, even to a single object in the bucket. My question is: how can I easily and accurately evaluate this, either from the AWS web management console or from boto3, in order to ensure that I am the only one who has access to my bucket and all of its objects?
It's not easy.
First, let's review the ways that permission might be granted:
Access Control Lists (ACLs) are object-level permissions that can grant public access, or access to a specific user. They are a remnant of the early way that Amazon S3 worked and can be annoying to manage. In fact, a new feature was recently made available that allows ACLs to be disabled -- and that's what AWS recommends!
Bucket Policies allow permissions to be assigned to a bucket, or a path within a bucket. This is a great way to make a bucket public, and the most common way to provide cross-account access to a bucket.
IAM Policies can be applied to an IAM User, IAM Group or IAM Role. These policies can grant permission to access Amazon S3 resources within the same account. This is a great way to assign permissions to specific IAM Users rather than doing it via a Bucket Policy.
The Amazon S3 console does allow you to Review bucket access using Access Analyzer for S3:
Access Analyzer for S3 alerts you to S3 buckets that are configured to allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization. For each public or shared bucket, you receive findings into the source and level of public or shared access. For example, Access Analyzer for S3 might show that a bucket has read or write access provided through a bucket access control list (ACL), a bucket policy, a Multi-Region Access Point policy, or an access point policy. Armed with this knowledge, you can take immediate and precise corrective action to restore your bucket access to what you intended.
However, it won't give you a full list of who can access which buckets.
You want to "ensure that you are the only one who has access to the bucket" -- this would require checking the Bucket Policy and the permissions of all IAM Users. There's no short-cut for doing this.
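If you want to at least enumerate everything attached to the bucket itself from boto3, here is a rough sketch (the bucket name is a placeholder). It covers the bucket policy, the bucket ACL and every object ACL, but not IAM policies in the account, which you still have to review separately:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder

# 1. Bucket policy (the call raises NoSuchBucketPolicy if none is attached)
try:
    print("Bucket policy:", s3.get_bucket_policy(Bucket=bucket)["Policy"])
except ClientError as err:
    if err.response["Error"]["Code"] != "NoSuchBucketPolicy":
        raise
    print("No bucket policy attached")

# 2. Bucket ACL
print("Bucket ACL grants:", s3.get_bucket_acl(Bucket=bucket)["Grants"])

# 3. Object ACLs: one API call per object, so this is slow on large buckets
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        grants = s3.get_object_acl(Bucket=bucket, Key=obj["Key"])["Grants"]
        print(obj["Key"], grants)
```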
I think the other answer covers most of the options in very good detail.
But usually each bucket contains either public data, non-public data, or sensitive data. For any bucket that should not contain public data, just disable public access. The CloudFormation documentation
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3-bucket.html
mentions the PublicAccessBlockConfiguration property (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-publicaccessblockconfiguration.html) for restricting public access.
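If you set it outside of CloudFormation, a minimal sketch of the equivalent API call with boto3 (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Block all forms of public access for one bucket (the same switches as the
# PublicAccessBlockConfiguration property in CloudFormation)
s3.put_public_access_block(
    Bucket="my-example-bucket",  # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```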
Additionally, the bucket supports encryption; when you use KMS encryption you can also control access to the data via the KMS key. That is something worth considering for sensitive data.
Otherwise - yes, it is really hard to make sure there is no policy in the account that would allow any user in your account to get access to something they should not have access to. One way to do it may be to control who can modify the IAM policies (the iam:* permissions). There are also automated tools that check policies and find vulnerabilities, and for a single purpose it is not that hard to create one yourself.
Even if the bucket is completely private, the objects can be made public by other means - typically via CloudFront.
From petrch's answer
Otherwise - yes, it is really hard to make sure there is no policy in the account that would allow any user in your account to get access to something they should not have access to
At least, it will be simpler now.
From "Controlling ownership of objects and disabling ACLs for your bucket":
Starting in April 2023, Amazon S3 will change the default settings for S3 Block Public Access and Object Ownership (ACLs disabled) for all new S3 buckets.
For new buckets created after this update,
all S3 Block Public Access settings will be enabled, and
S3 access control lists (ACLs) will be disabled.
These defaults are the recommended best practices for securing data in Amazon S3.
You can adjust these settings after creating your bucket.
For more information, see Default settings for new S3 buckets FAQ and Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023 in the AWS News Blog.

Allowing S3 bucket to access itself

I just want my S3 bucket to be able to access itself. For example, in my index.html there is a reference to a favicon, which resides in the same S3 bucket. When I call index.html, I get a 403 HTTP ACCESS DENIED error.
If I turn "block all public access" off and add a policy, it works, but I do not want the bucket to be public.
How am I able to invoke my website with my AWS user, for example, without making the site public (that is, with all internet access blocked)?
I just want my S3 bucket to be able to access itself.
No, the request always comes from the client (the browser); the bucket never makes requests to itself.
How am i able to invoke my website with my AWS user
For site-level access control there is CloudFront with signed cookies. You will still need some logic (API Gateway + Lambda? Lambda@Edge? another server?) to authenticate the user and sign the cookie.
You mention that "the websites in the bucket should be only be able to see by a few dedicated users, which i will create with IAM."
However, accessing Amazon S3 content with IAM credentials is not compatible with accessing objects via URLs in a web browser. IAM credentials can be used when making AWS API calls, but a different authentication method is required when accessing content via URLs. Authentication normally requires a back-end to perform the authentication steps, or you could use Amazon Cognito.
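To illustrate the back-end approach, here is a minimal sketch (bucket and object names are placeholders): after the back-end has authenticated the user by whatever means you choose, it hands the browser a time-limited pre-signed URL instead of exposing the bucket publicly.

```python
import boto3

s3 = boto3.client("s3")


def get_download_url(bucket: str, key: str) -> str:
    """Return a time-limited URL the browser can use without AWS credentials."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=900,  # 15 minutes
    )


# e.g. called by your back-end after it has authenticated the user
url = get_download_url("my-private-site-bucket", "index.html")
print(url)
```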
Without knowing how your bucket is set up and what permissions / access controls you have already deployed it is hard to give a definite answer.
Having said that, it sounds like you simply need to walk through the proper steps for building an appropriate permission model. You have already explored part of this with Block Public Access and a policy, but there are also ACLs and permission specifics based on object ownership that need to be considered.
Ultimately AWS's documentation is going to do a better job than most to illustrate what to do and where to start:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteAccessPermissionsReqd.html
NOTE: if you share more information about how the bucket is configured and how your client side is accessing the website, I can edit the answer to give a more prescriptive solution (assuming the AWS docs don't get you all the way there)
UPDATE: After re-reading your question and your comment on my answer, I think gusto2's and John's answers are pointing you in the right direction. What you want to do is authenticate users before they access the contents of the S3 bucket (which, if I understand you right, is an S3-hosted static website). This means you need an authentication layer between the client and the bucket, which can be accomplished in a number of ways (Lambda + CloudFront, or using an IdP like Cognito, are certainly viable options). It would be a moot point for me to regurgitate exactly how to pull off something like this when there are a ton of accessible blog posts on the topic (search "Authenticate s3 static website").
HOWEVER, I also want to point out that what you want to accomplish is not possible the way you are hoping to accomplish it (using IAM permission modelling to authenticate users against an S3-hosted static website). You can either authenticate users to your S3 website OR use IAM + S3 permissions and ACLs to set up AWS user- and role-specific access to the contents of a bucket, but you can't use IAM users/roles as a method for authenticating client access to an S3 static website (not in any way I would imagine is simple or recommended, at least...).

Centralized multi-account logging for AWS CloudFront

Let me preface this by saying that I have discussed this problem in detail with advanced AWS support engineers and they have no reasonable solution, so this may be an impossible problem. However, I thought I'd give it a shot here and see if someone has a creative solution.
The problem:
We have an AWS organization with multiple accounts, and multiple different accounts use CloudFront distributions.
In accordance with AWS recommendations / best practices, we have a centralized logs archive account that collects all of our logs from all organization accounts.
We need to collect CloudFront logs from each CloudFront distribution in each organization account, and store them in our logs archive bucket in the centralized logging account.
CloudFront supports two logging options: (a) logging directly to S3, and (b) "Real-Time Logging" to a Kinesis data stream.
Option (a) would be ideal, but CloudFront uses the archaic S3 ACL system and requires the "Full Control" ACL on the bucket being logged to. In order to configure this, the user/role that is configuring CloudFront must have either the "Full Control" ACL as well or the "s3:PutBucketAcl" IAM permission (so it can grant an ACL to CloudFront).
If we grant "Full Control" ACL or the "s3:PutBucketAcl" permission to the org account, that account can then grant full access to any other entity they please, and there's no way to restrict who they grant permissions to (there's no IAM condition key that supports restricting who ACLs are granted to). Additionally, they could write objects to the bucket other than CloudFront logs. This is unacceptable for multiple reasons since we don't necessarily have a high level of trust for this account.
Furthermore, we would have no way to control which path in the bucket they configure CloudFront to log to, which is unacceptable because we need CloudFront logs to go to a specific path in the bucket. We could use the bucket policy to deny "s3:PutObject" to anything other than the path we want, but then if the account specifies an incorrect path, the logs just won't get written and will be lost to the ether.
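For reference, one possible shape of that deny statement, applied with boto3 (bucket name and prefix are placeholders; this is the approach we considered, not something we ended up deploying):

```python
import json

import boto3

s3 = boto3.client("s3")

bucket = "central-log-archive"    # placeholder bucket name
log_prefix = "cloudfront-logs/*"  # placeholder key prefix for CloudFront logs

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPutObjectOutsideCloudFrontPrefix",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            # Deny puts to any object key that is NOT under the expected prefix
            "NotResource": f"arn:aws:s3:::{bucket}/{log_prefix}",
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```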
Option (b) could work, but due to the fact that Kinesis data streams don't support resource policies and AWS doesn't support cross-account role passing, there's no way to have CloudFront send logs to a central Kinesis stream in the logging account. We would need a Kinesis data stream in each org account that uses CloudFront. This would become quite expensive.
So, I'm pretty much out of ideas. AWS's suggestion, after weeks of discussion and debate, was to:
Send CloudFront logs to an S3 bucket in the org account.
Use a Lambda that triggers on object puts to read the object and send the data to CloudWatch.
Put a CloudWatch subscription on the log group to send the logs to a Kinesis Firehose in the log archives account, which would then store them in the S3 bucket.
This is also an unacceptable solution though, because it creates a ton of extra resources/cost and has a lot of points of failure.
So, if anyone has any bright ideas or details on how they centralized CloudFront logs for their organization, please let me know!

Trigger alarm based on a rate-limit on S3 GetObject and DeleteObject requests

Recently, one of my AWS accounts got compromised; fortunately, we were able to change all sensitive information in time. To avoid a recurrence of such a situation, the first thing to do would be to have a process in place for secret info management.
That said, I would also want to trigger a CloudWatch alarm in a case where multiple downloads or deletes are taking place from inside my AWS account.
I have come across solutions like
AWS WAF
Have a CDN in place
Trigger a lambda function on an event in S3
Solutions #1 and #2 do not serve my requirement, as they only throttle requests coming from outside of AWS. If it were implemented at the S3 level, it would automatically throttle both inside and outside requests.
With solution #3, I could not get hold of the multiple objects requested by an IP in my Lambda function once a threshold time limit and a threshold number of files are crossed.
Is raising an alarm by rate-limiting at the S3 level a possibility?
There is no rate limit provided by AWS on S3 directly, but you can implement alarms over SNS topics with CloudTrail.
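As one concrete way to do this without CloudTrail, here is a sketch using S3 request metrics plus a CloudWatch alarm (bucket name, SNS topic ARN, and the threshold of 100 deletes per 5 minutes are all placeholders; note that S3 request metrics are billed like custom CloudWatch metrics):

```python
import boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

bucket = "my-example-bucket"                                # placeholder
topic_arn = "arn:aws:sns:us-east-1:111122223333:s3-alerts"  # placeholder

# Enable request metrics for the whole bucket under the filter id "EntireBucket"
s3.put_bucket_metrics_configuration(
    Bucket=bucket,
    Id="EntireBucket",
    MetricsConfiguration={"Id": "EntireBucket"},
)

# Alarm when more than 100 DeleteRequests happen in a 5-minute window
cloudwatch.put_metric_alarm(
    AlarmName=f"{bucket}-delete-spike",
    Namespace="AWS/S3",
    MetricName="DeleteRequests",
    Dimensions=[
        {"Name": "BucketName", "Value": bucket},
        {"Name": "FilterId", "Value": "EntireBucket"},
    ],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],  # SNS topic with an email subscription
)
```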
Unless you explicitly require someone on your team to be able to remove objects from your S3 bucket, you shouldn't provide that access to anyone. The following are some ideas you can follow:
Implement least-privilege access
You can block the ability to remove objects at the IAM user level, so no one will be able to remove any items. You can also modify the bucket policy to grant DeleteObject access only to specific users/roles via policy conditions (see the sketch at the end of this answer).
Enable multi-factor authentication (MFA) Delete
MFA Delete can help prevent accidental bucket deletions. If MFA Delete is not enabled, any user with the password of a sufficiently privileged root or IAM user could permanently delete an Amazon S3 object.
MFA Delete requires additional authentication for either of the following operations:
Changing the versioning state of your bucket
Permanently deleting an object version.
S3 Object Lock
S3 Object Lock enables you to store objects using a "Write Once Read Many" (WORM) model. S3 Object Lock can help prevent accidental or inappropriate deletion of data. For example, you could use S3 Object Lock to help protect your AWS CloudTrail logs.
Amazon Macie with Amazon S3
Macie uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property. It provides you with dashboards and alerts that give visibility into how this data is being accessed or moved.
You can learn more about security best practices for S3 here:
https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/
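For the bucket-policy idea under "Implement least-privilege access", here is a minimal sketch (bucket name and role ARN are placeholders; adjust the condition to whichever principals you actually trust):

```python
import json

import boto3

s3 = boto3.client("s3")

bucket = "my-example-bucket"                                # placeholder
allowed_role = "arn:aws:iam::111122223333:role/data-admin"  # placeholder

# Deny DeleteObject for every principal except the allowed role
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyDeleteExceptDataAdmin",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotLike": {"aws:PrincipalArn": allowed_role}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```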

Can someone hack into my S3 with an "AWS-cognito-identity-poolID" that is hard-coded?

First I hard-coded my AWS "accessKey" and "securityKey" in a client-side JS file, but that was very insecure, so I read about "aws-cognito" and implemented new JS in the following manner:
Still, I am confused about one thing: can someone hack into my S3 with the "AWS-cognito-identity-poolID" that is hard-coded? Or are there any other security steps I should take?
Thanks,
Jaikey
Definition of Hack
I am not sure what hacking means in the context of your question.
I assume that you actually mean "that anyone can do something other than uploading a file", which includes deleting or accessing objects inside your bucket.
Your solution
As Ninad already mentioned above, you can use your current approach by enabling "Enable access to unauthenticated identities" [1]. You will then need to create two roles of which one is for "unauthenticated users". You could grant that role PutObject permissions to the S3 bucket. This would allow everyone who visits your page to upload objects to the S3 bucket. I think that is what you intend and it is fine from a security point of view since the IdentityPoolId is a public value (i.e. not confidential).
Another solution
I guess, you do not need to use Amazon Cognito to achieve what you want. It is probably sufficient to add a bucket policy to S3 which grants permission for PutObject to everyone.
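A minimal sketch of such a bucket policy applied with boto3 (bucket name and key prefix are placeholders; note the caveats in the next section, and that S3 Block Public Access must allow public policies for this call to succeed):

```python
import json

import boto3

s3 = boto3.client("s3")
bucket = "my-upload-bucket"  # placeholder

# Allow anyone to upload into one prefix of the bucket (write-only, no read/list)
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicPutOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/uploads/*",  # hypothetical prefix
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```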
Is this secure?
However, I would not recommend enabling direct public write access to your S3 bucket.
If someone were to abuse your website by spamming your upload form, you would incur S3 charges for PUT operations and data storage.
It would be a better approach to send the data through Amazon CloudFront and apply a WAF with rate-based rules [2], or to implement a custom rate-limiting service in front of your S3 upload. This would ensure that you can react appropriately to malicious activity.
References
[1] https://docs.aws.amazon.com/cognito/latest/developerguide/identity-pools.html
[2] https://aws.amazon.com/about-aws/whats-new/2019/08/lower-threshold-for-aws-waf-rate-based-rules/
Yes, your S3 bucket is secure if you access it through the "AWS-Cognito-Identity-Pool" on the client side; also enable CORS, which allows actions only from a specific domain, so that if someone tries a direct upload or tries to list the bucket, they will get "access denied".
Also make sure that you have set the file read/write permissions on the hard-coded credentials so that they can only be read by the local node and nobody else. By the way, the answer is always "yes"; it is only a matter of how much effort someone is willing to put into "hacking". Follow what people have said here, and you are safe.