Fully evaluate S3 access - amazon-web-services

I am fairly new to S3 and I am aware that access to my bucket/objects can be granted either through a bucket policy or an ACL. The thing is that ACL access can be granted per object, so it is not clear to me how to fully review who has been given access, even to a single object in the bucket. My question is: how can I easily and accurately evaluate that, either from the AWS web management console or from boto3, in order to ensure that I am the only one who has access to my bucket and all of its objects?

It's not easy.
First, let's review the ways that permission might be granted:
Access Control Lists (ACLs) are object-level permissions that can grant public access, or access to a specific user. They are a remnant of the early way that Amazon S3 worked and can be annoying to manage. In fact, a new feature was recently made available that allows ACLs to be disabled -- and that's what AWS recommends!
Bucket Policies allow permissions to be assigned to a bucket, or a path within a bucket. This is a great way to make a bucket public, and the simplest way to provide cross-account access to a bucket.
IAM Policies can be applied to an IAM User, IAM Group or IAM Role. These policies can grant permission to access Amazon S3 resources within the same account. This is a great way to assign permissions to specific IAM Users rather than doing it via a Bucket Policy.
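For a single known bucket, the first two of these can be inspected directly with boto3. Below is a minimal sketch (the helper names are mine, and it assumes credentials with s3:GetBucketAcl and s3:GetBucketPolicy permissions):

```python
import json

def flatten_grants(grants):
    """Reduce raw ACL grant dicts to (grantee, permission) pairs."""
    return [(g["Grantee"].get("URI") or g["Grantee"].get("ID"), g["Permission"])
            for g in grants]

def bucket_grants(bucket_name):
    """Fetch the ACL grants and the bucket policy (if any) for one bucket."""
    import boto3  # imported here so flatten_grants stays usable without AWS access
    from botocore.exceptions import ClientError
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    try:
        policy = json.loads(s3.get_bucket_policy(Bucket=bucket_name)["Policy"])
    except ClientError as e:
        if e.response["Error"]["Code"] != "NoSuchBucketPolicy":
            raise
        policy = None  # bucket has no policy attached
    return flatten_grants(acl["Grants"]), policy
```

Note that this only covers the bucket-level ACL; per-object ACLs would require a get_object_acl call for every key, which is exactly why they are painful to audit.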
The Amazon S3 console does allow you to Review bucket access using Access Analyzer for S3:
Access Analyzer for S3 alerts you to S3 buckets that are configured to allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization. For each public or shared bucket, you receive findings into the source and level of public or shared access. For example, Access Analyzer for S3 might show that a bucket has read or write access provided through a bucket access control list (ACL), a bucket policy, a Multi-Region Access Point policy, or an access point policy. Armed with this knowledge, you can take immediate and precise corrective action to restore your bucket access to what you intended.
However, it won't give you a full list of who can access which buckets.
You want to "ensure that you are the only one who has access to the bucket" -- this would require checking the Bucket Policy and the permissions of all IAM Users. There is no shortcut for doing this.
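The IAM-user part of that check can at least be made mechanical. A sketch of an inventory pass with boto3 (function names are mine; it assumes iam:List* permissions):

```python
def statement_allows_s3(statement):
    """True if a policy statement allows any s3:* action (or everything)."""
    actions = statement.get("Action", [])
    if isinstance(actions, str):
        actions = [actions]
    return statement.get("Effect") == "Allow" and any(
        a == "*" or a.lower().startswith("s3:") for a in actions
    )

def user_policy_inventory():
    """Map every IAM user name to the names of their attached and inline policies."""
    import boto3  # needs credentials with iam:List* permissions
    iam = boto3.client("iam")
    inventory = {}
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            managed = [p["PolicyName"] for p in
                       iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]]
            inline = iam.list_user_policies(UserName=name)["PolicyNames"]
            inventory[name] = managed + inline
    return inventory
```

You would still have to fetch each policy document and run its statements through something like statement_allows_s3, and IAM groups and roles need the same treatment.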

I think the other answer covers most of the options in very good detail.
But usually each bucket contains either public data, non-public data or sensitive data. For any bucket that should not contain public data, just block public access; the CloudFormation documentation
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3-bucket.html
mentions https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-publicaccessblockconfiguration.html for restricting public access.
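The same setting can also be flipped directly with boto3 instead of CloudFormation (a sketch; the function name is mine):

```python
# All four Block Public Access toggles, in the shape put_public_access_block expects
FULL_BLOCK = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

def block_public_access(bucket_name):
    """Enable every S3 Block Public Access setting on one bucket."""
    import boto3  # needs s3:PutBucketPublicAccessBlock permission
    boto3.client("s3").put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=FULL_BLOCK,
    )
```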
Additionally, buckets support encryption; when you enable KMS encryption, you can also control access to the data via the KMS key. That is worth considering for sensitive data.
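Setting SSE-KMS as the bucket default looks roughly like this in boto3 (a sketch; the function names are mine and the key ARN is a placeholder):

```python
def kms_encryption_rule(kms_key_arn):
    """Default-encryption rule requiring SSE-KMS with a specific key."""
    return {"ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": kms_key_arn,
    }}

def require_kms_encryption(bucket_name, kms_key_arn):
    """Make SSE-KMS with the given key the bucket's default encryption."""
    import boto3  # needs s3:PutEncryptionConfiguration permission
    boto3.client("s3").put_bucket_encryption(
        Bucket=bucket_name,
        ServerSideEncryptionConfiguration={"Rules": [kms_encryption_rule(kms_key_arn)]},
    )
```

The point for sensitive data: anyone who lacks kms:Decrypt on that key cannot read the objects, even with full s3:* permissions.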
Otherwise - yes, it is really hard to make sure there is no policy in the account that would allow a user in your account to access something they should not have access to. One way to do it is to control who can modify the IAM policies (the iam:* permissions). There are also automated tools that audit policies and find vulnerabilities. For just one purpose, it is not that hard to build one yourself, either.
Even if the bucket is completely private, the objects can be made public by other means - typically via CloudFront.

From petrch's answer
Otherwise - yes, it is really hard to make sure there is no policy in the account that would allow a user in your account to access something they should not have access to
At least, it will be simpler now.
From "Controlling ownership of objects and disabling ACLs for your bucket":
Starting in April 2023, Amazon S3 will change the default settings for S3 Block Public Access and Object Ownership (ACLs disabled) for all new S3 buckets.
For new buckets created after this update,
all S3 Block Public Access settings will be enabled, and
S3 access control lists (ACLs) will be disabled.
These defaults are the recommended best practices for securing data in Amazon S3.
You can adjust these settings after creating your bucket.
For more information, see Default settings for new S3 buckets FAQ and Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023 in the AWS News Blog.
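You can verify both of those defaults on an existing bucket with boto3 (a sketch; function names are mine):

```python
def acls_disabled(object_ownership):
    """ACLs are fully disabled only under the BucketOwnerEnforced setting."""
    return object_ownership == "BucketOwnerEnforced"

def check_bucket_defaults(bucket_name):
    """Return (block_public_access_fully_on, acls_are_disabled) for one bucket."""
    import boto3  # needs s3:GetBucketPublicAccessBlock and s3:GetBucketOwnershipControls
    s3 = boto3.client("s3")
    pab = s3.get_public_access_block(Bucket=bucket_name)["PublicAccessBlockConfiguration"]
    ownership = s3.get_bucket_ownership_controls(
        Bucket=bucket_name)["OwnershipControls"]["Rules"][0]["ObjectOwnership"]
    return all(pab.values()), acls_disabled(ownership)
```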

Related

AWS s3 bucket effective permissions

Is there an easy way to see what the effective access permissions for a specific bucket are? To be more specific about the environment: access to buckets is granted through identity policies, there are more than 170 IAM roles and users and 1000+ policies (not all of them attached to an IAM role or user). I need to see who has the s3:GetObject, s3:PutObject and s3:DeleteObject permission on a specific bucket. Is there some tool that can give me that kind of report? I can write a script that goes through all roles and the policies attached to them, pulls out statements that contain the specific bucket, and then cross-references allows and denies, but I'm sure there is some smarter way of doing this.
I am not aware of any better way than the one you described. You can export your IAM settings (unless you already have them in CloudFormation or CDK scripts) as described at https://aws.amazon.com/blogs/security/a-simple-way-to-export-your-iam-settings/.
Then you can scan (manually or programmatically) for policies of interest and check which users or roles they are attached to.
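Instead of cross-referencing allows and denies by hand, you can also let the IAM policy simulator compute the effective decision per principal. A boto3 sketch (names are mine; it assumes iam:SimulatePrincipalPolicy permission, and you still supply the principal ARNs yourself, e.g. from list_users and list_roles):

```python
def object_arn(bucket_name):
    """ARN pattern matching every object in the bucket."""
    return f"arn:aws:s3:::{bucket_name}/*"

def who_can_touch(bucket_name, principal_arns):
    """Simulate the three object actions for each principal against the bucket."""
    import boto3
    iam = boto3.client("iam")
    actions = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
    report = {}
    for arn in principal_arns:
        results = iam.simulate_principal_policy(
            PolicySourceArn=arn,
            ActionNames=actions,
            ResourceArns=[object_arn(bucket_name)],
        )["EvaluationResults"]
        # EvalDecision is "allowed", "explicitDeny" or "implicitDeny"
        report[arn] = {r["EvalActionName"]: r["EvalDecision"] for r in results}
    return report
```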
From Using Access Analyzer for S3 - Amazon Simple Storage Service:
Access Analyzer for S3 alerts you to S3 buckets that are configured to allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization. For each public or shared bucket, you receive findings into the source and level of public or shared access. For example, Access Analyzer for S3 might show that a bucket has read or write access provided through a bucket access control list (ACL), a bucket policy, or an access point policy. Armed with this knowledge, you can take immediate and precise corrective action to restore your bucket access to what you intended.

Restrict AWS S3 bucket access, even from account owner

I want to restrict access to AWS S3 buckets only to the applications running code on them. All other users should not be able to view or access the bucket, even the AWS account owner.
From the solutions mentioned on amazon, the account owner has access to all the services.
One solution can be to encrypt the data in the buckets and the application code holds the logic to decrypt it. But I am looking for a better solution, if there is any.
Any solution appreciated!
With the exception of the root login, all content in Amazon S3 is private by default. No users can access it unless permissions are granted via IAM or a Bucket Policy.
So, the simple answer is to only grant access to the specific entities that should have access. This is made slightly more complicated where organizations wish to grant "administrator" access, with a policy that might grant access to all Amazon S3 buckets. This can be corrected by applying a Deny policy that overrides the Allow policy. For example, Admins might have Allow access to all S3 buckets, but a Deny policy might then remove their access to specific buckets.
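Such a carve-out is just an extra statement in the admin policy; since an explicit Deny always overrides an Allow, it works even alongside a broad s3:* grant. A sketch (the builder name and Sid are mine):

```python
def deny_bucket_statement(bucket_name):
    """Explicitly deny all S3 actions on one bucket and its objects."""
    return {
        "Sid": "DenySensitiveBucket",
        "Effect": "Deny",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{bucket_name}",    # the bucket itself (list, config)
            f"arn:aws:s3:::{bucket_name}/*",  # every object in it
        ],
    }
```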
Where super-sensitive information is kept in Amazon S3, a common practice is to put the sensitive data in an Amazon S3 bucket in a different AWS account. This way, even Admins and the root login cannot have access to the bucket. Only the root login of the other account can obtain access.
Another common practice is to protect the root login with Multi-Factor Authentication (MFA), such as a physical device, then lock that physical device in a safe. Or, split the password into two halves, with different people having each half. This prevents the root login from being used, but allows its use in critical situations.
Encrypting information in the app is good, but if the app is able to decrypt it, then somebody with access to the app source code can figure out how to decrypt it too. Even if encryption keys are kept in the AWS Secrets Manager, the root login would have access to those secrets.
Bottom line: Secure the root login using a physical MFA that is locked away, then use IAM policies to grant access as desired.
Also, be careful how access is granted to an application. If it is done via an IAM Role, then make sure that other people can't assume that role and get the same access themselves.

IAM S3 replication of data to only accounts that are part of the Organisation

Is there a way to restrict the replication of S3 data to only AWS Accounts that are part of the Organisation ?
I have looked at using IAM Policies, Bucket Policies and Boundaries, but I cannot see how to restrict or allow based on the destination AWS Account information.
Amazon S3 objects are private by default. Nobody inside or outside the AWS account can access the data unless permissions are granted.
It is easy to prevent access to external people (who do not have credentials for the AWS account) because the only way they can access data is if there are S3 Bucket Policies that permit access, or if there are IAM Roles that they are allowed to assume.
Internal, however, will pose more challenges. You say that you want to prevent the data from being copied to a user's personal AWS account. This can be easily accomplished by not granting access to the source S3 bucket. However, as soon as they have access for a normal business purpose, they could download/copy the objects however they wish. A copy command is simply reading data from one location and copying to somewhere else, which is indistinguishable from reading data for a normal business purpose.
Some companies go to extreme lengths, such as only granting permission when the data is accessed via a particular IP address or network connection, which maps to a room of computers that have their USB ports and floppy disk drives disabled.
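The network-restriction part of that is typically a bucket-policy statement with an IP condition, roughly like the following (a sketch; the function name, Sid and CIDR are placeholders):

```python
def office_only_statement(bucket_name, office_cidr):
    """Deny S3 access to the bucket for requests originating outside the given CIDR."""
    return {
        "Sid": "DenyOutsideOffice",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{bucket_name}",
            f"arn:aws:s3:::{bucket_name}/*",
        ],
        # NotIpAddress combined with Deny: any request NOT from this range is refused
        "Condition": {"NotIpAddress": {"aws:SourceIp": office_cidr}},
    }
```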
The best method is to not grant access to production data. Only allow production apps to access the production data.
Bottom line: If you don't want people to have access to data, then don't grant them access. However, if you do grant access, it would be difficult to limit what they then do with the data.

If an account owner of an S3 bucket makes it read only, can this be undone by the same account owner?

I have one AWS account and multiple IAM users. I have a bucket and at certain times, I want that restricted to read only (I would like to have other users to have to purposely reactivate read & write access).
If I set my bucket to read only, can this be undone again?
Yes. You can change bucket and object permissions using a variety of methods including using an ACL, Bucket Policy or individual object permissions.
You can read how to change the ACL of the bucket here.
Yes, you can change it later on, and the best way to manage this is using an IAM policy for the user group. If it needs to be managed at the ACL level, that can be done, but it is not the preferred way.
Amazon S3 buckets are private by default. Nobody has access to the content.
You can grant access in several ways:
Access Control Lists (ACLs) on specific objects, which is good if you want only specific objects to be accessible
A Bucket Policy on a bucket or a portion of a bucket, which is good if you want to grant public access for anything in the bucket or path
IAM Policies that can grant access to specific IAM Users and IAM Groups
Generating Pre-Signed URLs that grant time-limited access to a specific object, which is used by applications to generate authenticated access for specific users
So, there's no actual concept of "setting a bucket to read-only". Instead, you would create a Bucket Policy or an IAM Policy that grants GetObject permission to everyone or a specific set of users.
If you wish to change access permissions for a particular time period, you should modify the policy that actually grants the permissions. You will need to do this at the start and at the end of that time period.
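One way to script that toggle is a Deny-writes bucket policy that you attach at the start of the period and remove at the end (a sketch with boto3; names are mine, and it assumes the bucket otherwise has no bucket policy, since delete_bucket_policy removes the whole document):

```python
import json

def read_only_policy(bucket_name):
    """Bucket policy that blocks object writes and deletes for everyone."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "TemporaryReadOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }],
    }

def set_read_only(bucket_name):
    import boto3  # needs s3:PutBucketPolicy permission
    boto3.client("s3").put_bucket_policy(
        Bucket=bucket_name, Policy=json.dumps(read_only_policy(bucket_name)))

def clear_read_only(bucket_name):
    import boto3  # needs s3:DeleteBucketPolicy permission
    boto3.client("s3").delete_bucket_policy(Bucket=bucket_name)
```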

Accessing a s3 bucket with access key id and secret

Is it possible to access an S3 bucket from another account using the access key ID and secret access key?
I know that the keys are typically for API/CLI access, but I was wondering if I could use it from my account using those two alone.
A workaround would be to run a CLI on AWS and repeatedly sync two folders.
Edit: If I don't have access to the original account, how would I proceed then?
I have the keys and want to add it to a second account - but I can't make any changes to the first.
Is it possible to access an S3 bucket from another account using the access key ID and secret access key?
Yes, if it is configured. Access to an S3 bucket is determined by who you are, your IAM policy, what action you want to perform, and what the bucket's configuration is (policy, permissions, block public access, ...).
You can read the documentation to see the different factors affecting access for a given request: https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-s3-evaluates-access-control.html
To configure cross account access, you have three choices:
Configure bucket policy that allows certain or all API actions from another account or principals in that account. This is very flexible and allows almost all S3 bucket actions while staying secure.
Configure an ACL that allows another account. This predates bucket policies; however, it may be simpler to use in some cases.
Configure cross-account IAM roles. This is basically granting permissions to another AWS account. It is more flexible than the other options, as it can include any action.
The above 3 ways are documented here: https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/
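The bucket-policy route (the first option) boils down to a statement naming the other account as the principal, roughly like this (a sketch; the builder name, Sid and account ID are placeholders):

```python
def cross_account_statement(bucket_name, trusted_account_id):
    """Allow read and list access to principals in one other AWS account."""
    return {
        "Sid": "CrossAccountRead",
        "Effect": "Allow",
        # The account-root principal means any IAM identity in that account
        # whose own IAM policy also allows the action.
        "Principal": {"AWS": f"arn:aws:iam::{trusted_account_id}:root"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            f"arn:aws:s3:::{bucket_name}",
            f"arn:aws:s3:::{bucket_name}/*",
        ],
    }
```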
Now, while the document says that using a bucket policy is for programmatic access only, as of right now you can also use it in the console, although this is not a documented feature. If you have access via the bucket policy, you can simply open the bucket in the console by typing the bucket name into the URL (replace BUCKET-NAME with your bucket name): https://s3.console.aws.amazon.com/s3/buckets/BUCKET-NAME
Another question would be how to access the console if you only have an IAM access key id and secret. To access the console your IAM user needs a password, and you can't use the console without one; however, if you have enough permissions you can set a password for yourself. Alternatively, if there is an IAM role you can assume (and if you have enough permissions you can create your own), you can use a tool that generates a console link via the federation API. Here are a few I'm aware of:
https://github.com/trek10inc/awsume-console-plugin
https://github.com/jnawk/aws-electron
https://github.com/NetSPI/aws_consoler
The short answer is "yes it is possible".
As for the "how?", there are numerous options. You can use boto3 or the AWS SDK in the language of your choice, running it in a Lambda function, on EC2, in an ECS container, etc.
You could even go as far as implementing SigV4 yourself to sign your requests (that's what the AWS SDK does internally).
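The minimal boto3 version, with the key pair passed in explicitly, might look like this (a sketch; the helper names are mine, and the keys must of course belong to an identity that the bucket's policy or the owning account trusts):

```python
def redact(access_key_id):
    """Mask an access key id for safe logging (keep only the 4-char prefix)."""
    return access_key_id[:4] + "*" * (len(access_key_id) - 4)

def list_with_keys(bucket_name, access_key_id, secret_access_key):
    """List object keys in a bucket using only an access key pair."""
    import boto3  # the explicit key pair overrides any locally configured credentials
    s3 = boto3.client(
        "s3",
        aws_access_key_id=access_key_id,
        aws_secret_access_key=secret_access_key,
    )
    resp = s3.list_objects_v2(Bucket=bucket_name)
    return [obj["Key"] for obj in resp.get("Contents", [])]
```

Never commit the raw keys anywhere; if you must log which key was used, log something like redact(access_key_id) instead.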