I want to restrict access to AWS S3 buckets to only the applications whose code runs against them. No other users should be able to view or access the bucket, not even the AWS account owner.
From the solutions mentioned in the Amazon documentation, the account owner has access to all the services.
One solution would be to encrypt the data in the buckets, with the application code holding the logic to decrypt it. But I am looking for a better solution, if there is one.
Any solution appreciated!
With the exception of the root login, all content in Amazon S3 is private by default. No users can access it unless permissions are granted via IAM or a Bucket Policy.
So, the simple answer is to only grant access to the specific entities that should have access. This is made slightly more complicated where organizations wish to grant "administrator" access, with a policy that might grant access to all Amazon S3 buckets. This can be corrected by applying a Deny policy that overrides the Allow policy. For example, Admins might have Allow access to all S3 buckets, but a Deny policy might then remove their access to specific buckets.
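As a sketch of that Deny-overrides-Allow pattern (the bucket name and policy structure here are illustrative placeholders, not taken from any real account):

```python
import json

# Hypothetical admin policy: a broad Allow on all of S3, plus an
# explicit Deny on one sensitive bucket. In IAM policy evaluation,
# an explicit Deny always overrides an Allow.
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAllS3",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
        },
        {
            "Sid": "DenySensitiveBucket",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::secret-bucket",    # the bucket itself
                "arn:aws:s3:::secret-bucket/*",  # all objects in it
            ],
        },
    ],
}

print(json.dumps(admin_policy, indent=2))
```

Admins keep their broad S3 access, but any request against `secret-bucket` matches the Deny statement and is refused.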
Where super-sensitive information is kept in Amazon S3, a common practice is to put the sensitive data in an Amazon S3 bucket in a different AWS account. This way, even Admins and the root login cannot have access to the bucket. Only the root login of the other account can obtain access.
Another common practice is to protect the root login with Multi-Factor Authentication (MFA), such as a physical device, then lock that physical device in a safe. Or, split the password into two halves, with different people having each half. This prevents the root login from being used, but allows its use in critical situations.
Encrypting information in the app is good, but if the app is able to decrypt it, then somebody with access to the app source code can figure out how to decrypt it too. Even if encryption keys are kept in the AWS Secrets Manager, the root login would have access to those secrets.
Bottom line: Secure the root login using a physical MFA that is locked away, then use IAM policies to grant access as desired.
Also, be careful how access is granted to an application. If it is done via an IAM Role, then make sure that other people can't assume that role and get the same access themselves.
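Who can assume a role is controlled by the role's trust policy. A minimal sketch, assuming the application runs on EC2 (the service principal and structure are the standard form, but treat the details as illustrative):

```python
import json

# Hypothetical trust policy for an application role: only the EC2
# service (i.e. instances launched with this role's instance profile)
# may assume it -- IAM users in the account cannot call sts:AssumeRole
# on it unless the trust policy is widened.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```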
I am kinda new to S3 and I am aware that access to my bucket/objects can be given either through a bucket policy or an ACL. The thing is that ACL access can be given per object, so it is not clear to me how to fully review who has been given access, even to a single object in the bucket. My question is: how can I easily and accurately evaluate that, either from the AWS web management console or from boto3, in order to ensure that I am the only one who has access to my bucket and all of its objects?
It's not easy.
First, let's review the ways that permission might be granted:
Access Control Lists (ACLs) are object-level permissions that can grant public access, or access to a specific user. They are a remnant of the early way that Amazon S3 worked and can be annoying to manage. In fact, a new feature was recently made available that allows ACLs to be disabled -- and that's what AWS recommends!
Bucket Policies allow permissions to be assigned to a bucket, or a path within a bucket. This is a great way to make a bucket public and the only way to provide cross-account access to a bucket.
IAM Policies can be applied to an IAM User, IAM Group or IAM Role. These policies can grant permission to access Amazon S3 resources within the same account. This is a great way to assign permissions to specific IAM Users rather than doing it via a Bucket Policy.
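As an illustration of the third mechanism, here is a hypothetical IAM policy you might attach to a user, group, or role to grant access to a single same-account bucket (bucket name and action list are placeholders):

```python
import json

# Hypothetical IAM policy: list the bucket, and read/write its objects.
# Note that bucket-level actions (ListBucket) target the bucket ARN,
# while object-level actions target the "/*" object ARN.
user_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::my-app-bucket",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-app-bucket/*",
        },
    ],
}

print(json.dumps(user_policy, indent=2))
```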
The Amazon S3 console does allow you to Review bucket access using Access Analyzer for S3:
Access Analyzer for S3 alerts you to S3 buckets that are configured to allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization. For each public or shared bucket, you receive findings into the source and level of public or shared access. For example, Access Analyzer for S3 might show that a bucket has read or write access provided through a bucket access control list (ACL), a bucket policy, a Multi-Region Access Point policy, or an access point policy. Armed with this knowledge, you can take immediate and precise corrective action to restore your bucket access to what you intended.
However, it won't give you a full list of who can access which buckets.
You want to "ensure that you are the only one who has access to the bucket" -- this would require checking the Bucket Policy and the permissions of all IAM Users. There's no shortcut for doing this.
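One small piece you can automate is listing the principals named in the bucket policy itself. A sketch (the policy JSON would typically come from boto3's `get_bucket_policy`, but the parsing below needs no AWS access; it deliberately does not cover IAM-user policies or ACL grants):

```python
import json

def allowed_principals(policy_json: str) -> set:
    """Collect principals named in a bucket policy's Allow statements.

    The document would typically come from
    s3.get_bucket_policy(Bucket=...)["Policy"] (boto3); here we only
    parse it. This ignores IAM-user policies and ACLs, so it is a
    partial view at best.
    """
    principals = set()
    for stmt in json.loads(policy_json).get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        p = stmt.get("Principal", {})
        if p == "*":
            principals.add("*")  # public access
        else:
            for value in p.values():
                values = value if isinstance(value, list) else [value]
                principals.update(values)
    return principals

# Illustrative policy document for demonstration purposes.
sample = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket/*",
    }],
})
print(allowed_principals(sample))
```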
I think the other answer covers most of the options in very good detail.
But usually each bucket contains either public data, non-public data or sensitive data. For any bucket that should not contain public data, simply block public access. The CloudFormation documentation
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3-bucket.html
mentions the PublicAccessBlockConfiguration property https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-publicaccessblockconfiguration.html for restricting public access.
Additionally, the bucket supports encryption; if you enable KMS encryption, you can also control access to the data via the KMS key policy. That is something worth considering for sensitive data.
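A minimal sketch of the CloudFormation fragment from the linked docs, in its JSON form ("PrivateBucket" is a placeholder logical resource name):

```python
import json

# Minimal CloudFormation resource fragment for a non-public bucket:
# all four S3 Block Public Access settings enabled.
bucket_resource = {
    "PrivateBucket": {
        "Type": "AWS::S3::Bucket",
        "Properties": {
            "PublicAccessBlockConfiguration": {
                "BlockPublicAcls": True,
                "BlockPublicPolicy": True,
                "IgnorePublicAcls": True,
                "RestrictPublicBuckets": True,
            }
        },
    }
}

print(json.dumps({"Resources": bucket_resource}, indent=2))
```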
Otherwise - yes, it is really hard to make sure there is no policy in the account that would allow a user to access something they should not have access to. One way to handle it is to control who can modify the IAM policies (the iam:* permissions). There are also automated tools that audit policies and find vulnerabilities. For just one purpose, it is not that hard to create one yourself, either.
Even if the bucket is completely private, the objects can be made public by other means - typically via CloudFront.
From petrch's answer
Otherwise - yes, it is really hard to make sure there is no policy in the account that would allow a user to access something they should not have access to
At least, it will be simpler now.
From "Controlling ownership of objects and disabling ACLs for your bucket":
Starting in April 2023, Amazon S3 will change the default settings for S3 Block Public Access and Object Ownership (ACLs disabled) for all new S3 buckets.
For new buckets created after this update,
all S3 Block Public Access settings will be enabled, and
S3 access control lists (ACLs) will be disabled.
These defaults are the recommended best practices for securing data in Amazon S3.
You can adjust these settings after creating your bucket.
For more information, see Default settings for new S3 buckets FAQ and Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023 in the AWS News Blog.
Is there a way to restrict the replication of S3 data to only AWS Accounts that are part of the Organisation ?
I have looked at using IAM Policies, Bucket Policies and Permissions Boundaries, but I cannot see how to restrict or allow based on the destination AWS Account information.
Amazon S3 objects are private by default. Nobody inside or outside the AWS account can access the data unless permissions are granted.
It is easy to prevent access to external people (who do not have credentials for the AWS account) because the only way they can access data is if there are S3 Bucket Policies that permit access, or if there are IAM Roles that they are allowed to assume.
Internal users, however, pose more challenges. You say that you want to prevent the data from being copied to a user's personal AWS account. This can easily be accomplished by not granting access to the source S3 bucket. However, as soon as they have access for a normal business purpose, they can download/copy the objects however they wish. A copy command simply reads data from one location and copies it to somewhere else, which is indistinguishable from reading data for a normal business purpose.
Some companies go to extreme lengths, such as only granting permission when the data is access via a particular IP address or network connection, which maps to a room of computers that have their USB ports and floppy disk drives disabled.
The best method is to not grant access to production data. Only allow production apps to access the production data.
Bottom line: If you don't want people to have access to data, then don't grant them access. However, if you do grant access, it would be difficult to limit what they then do with the data.
I have an AWS S3 bucket and want to share and sync data with my team and some other individuals (and later access this data in the cloud). This is easy with the AWS CLI (aws s3 sync ...), but we are now in a situation where multiple other individuals from outside are involved, and they don't have an AWS account.
What is the preferred strategy here? Is there a way to get something like a read/write access-token, which then could get passed to the aws-cli?
You probably want to set up IAM users and give the access either through a bucket policy or on the user level.
With bucket policies you can easily define what paths users are able to edit and access.
When you create an IAM user you also have the option of creating one for programmatic (CLI) access only, which will give you a set of credentials for that user only. Just use aws configure and set the access key ID and secret access key.
You also probably want to make sure you are using an IAM user yourself as it's generally recommended for security.
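The per-path idea above can be sketched as a bucket policy that confines one IAM user to a single prefix (all names, the account ID, and the prefix are placeholders):

```python
import json

# Hypothetical bucket policy: the user may read/write objects only
# under "team-a/", and may list the bucket only for that prefix.
path_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/team-a-member"},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::shared-bucket/team-a/*",
        },
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/team-a-member"},
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::shared-bucket",
            "Condition": {"StringLike": {"s3:prefix": "team-a/*"}},
        },
    ],
}

print(json.dumps(path_policy, indent=2))
```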
Is it possible to access an S3 bucket from another account using the access key ID and secret access key?
I know that the keys are typically for API/CLI access, but I was wondering if I could use it from my account using those two alone.
A workaround would be to run a CLI on AWS and repeatedly sync two folders.
Edit: If I don't have access to the original account, how would I proceed then?
I have the keys, and want to add them to a second account - but can't make any changes to the first.
Is it possible to access an S3 bucket from another account using the access key ID and secret access key?
Yes, if it is configured. Access to an S3 bucket is determined by who you are, your IAM policy, what action you want to perform, and what the bucket configuration is (policy, permissions, Block Public Access, ...).
You can read the documentation to see the different factors affecting access for a certain request: https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-s3-evaluates-access-control.html
To configure cross account access, you have three choices:
Configure a bucket policy that allows certain or all API actions from another account or principals in that account. This is very flexible and allows almost all S3 bucket actions while staying secure.
Configure an ACL that allows another account. This dates from before bucket policies existed; however, it may be simpler to use in some cases.
Configure cross-account IAM roles. This basically provides permissions to another AWS account. This is more flexible than the other options, as it can include any action.
The above 3 ways are documented here: https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/
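The first option can be sketched as follows (account IDs and the bucket name are placeholders; granting to the other account's root lets that account delegate further to its own IAM principals):

```python
import json

# Hypothetical cross-account bucket policy: account 111111111111 owns
# the bucket and grants read access to account 222222222222.
cross_account_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::shared-bucket",    # for ListBucket
                "arn:aws:s3:::shared-bucket/*",  # for GetObject
            ],
        }
    ],
}

print(json.dumps(cross_account_policy, indent=2))
```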
Now, while the document says using a bucket policy is for programmatic access only, as of right now you can also use it in the console, although this is not a documented feature. If you have access via a bucket policy, you can simply open the bucket in the console by typing the bucket name in the URL (replace BUCKET-NAME with your bucket name): https://s3.console.aws.amazon.com/s3/buckets/BUCKET-NAME
Another question would be how to access the console if you only have an IAM access key ID and secret. To access the console your IAM user needs a password, and you can't use the console without it; however, if you have enough permissions, you can set a password for yourself. Another option: if there's an IAM role you can assume into (if you have enough permissions, you can create your own), then you can simply use a tool that generates a console link using the federation API. Here are a few I'm aware of:
https://github.com/trek10inc/awsume-console-plugin
https://github.com/jnawk/aws-electron
https://github.com/NetSPI/aws_consoler
The short answer is "yes it is possible".
As of "how?" there are numerous options. You can use boto3 or the aws sdk in the language of your choice, running it in a lambda, EC2 or ECS container, etc.
You could even go as far as implementing SigV4 yourself to sign your requests (that's what the AWS SDK does internally).
We have an app similar to Dropbox where we store user's files in S3. The only way for the user to do so is via the app (similar to dropbox).
Due to valid privacy concerns, we want to restrict access to that S3 bucket so the contents of the bucket can only be accessed via the app - for which we've created an API token and use that to access the contents.
We don't even want the root account to be able to traverse the contents of that specific S3 bucket.
However, in the event that some administrative intervention is needed, we want to be able to grant a specific user account the ability to physically traverse into the S3 bucket and do whatever is required at that point of time. Once said administrative task is completed, the access should be revoked.
Have you folks encountered this kind of scenario? How would one go about implementing something like this?
Thanks for your thoughts
The root credentials associated with an AWS account have full permissions. And even if they don't, they have the ability to remove blocking permissions. Companies typically secure their root login by adding a multi-factor authentication (MFA) device and locking it in a safe.
You can easily write a Bucket Policy that denies all access except to a specific IAM User, and your application can use that IAM User to either:
Download content and then serve it directly to clients, or
Create pre-signed URLs that grant time-limited access to private objects in Amazon S3, then provide those URLs to clients so that they can download the object from Amazon S3
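A sketch of such a "deny everyone except one IAM user" bucket policy, using the NotPrincipal element (ARNs and the bucket name are placeholders; the account root is excluded from the Deny as well so the policy itself can still be edited):

```python
import json

# Hypothetical lockdown policy: deny every S3 action on the bucket
# to everyone EXCEPT the app's IAM user and the account root.
lockdown_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": [
                    "arn:aws:iam::123456789012:user/app-user",
                    "arn:aws:iam::123456789012:root",
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::private-bucket",
                "arn:aws:s3:::private-bucket/*",
            ],
        }
    ],
}

print(json.dumps(lockdown_policy, indent=2))
```

Temporary administrative access could then be granted by adding an admin's ARN to the NotPrincipal list, and revoked by removing it again.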
Temporary access can be granted by modifying the Bucket Policy. You should, of course, restrict the ability for your normal users to modify Bucket Policies.
For this extra-sensitive information, you should probably create a separate AWS Account. That way, only a select few people would have any access to that account, which prevents accidentally granting access. For example, if a Systems Administrator is granted access to all S3 buckets in Account A, they would have no access to buckets in Account B.