S3 bucket policy vs access control list

The AWS website suggests using the following bucket policy to make an S3 bucket public:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
What's the difference between that and just setting it through the Access Control List?

Bottom line: 1) Access Control Lists (ACLs) are legacy (but not deprecated), 2) bucket/IAM policies are recommended by AWS, and 3) ACLs can control access to buckets AND individual objects, while bucket policies are attached only at the bucket level (though their Resource ARNs can match object paths).
Decide which to use by considering the following. (As John Hanley notes below, more than one type can apply to a request, and the most restrictive, least-privilege result will apply.)
Use S3 bucket policies if you want to:
Control access in S3 environment
Know who can access a bucket
Stay under the 20 KB bucket policy size limit
Use IAM policies if you want to:
Control access in IAM environment, for potentially more than just buckets
Manage very large numbers of buckets
Know what a user can do in AWS
Stay under the 2-10 KB policy size limit (the exact maximum depends on whether the policy is attached to a user, group, or role; a minimal IAM policy sketch follows the link below)
Use ACLs if you want to:
Control access to buckets and objects
Grant permissions that would exceed the 20 KB bucket policy size limit
Keep using ACLs because you already do, and they work for you
https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/
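For comparison, a minimal IAM identity policy granting the same read access as the bucket policy at the top might look like this. This is a sketch: it attaches to a user or group rather than to the bucket, so no Principal element is needed, and example-bucket is the same placeholder name.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}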

If you want fine-grained control over individual objects in your bucket, use ACLs. If you want global control, such as making an entire bucket public, use policies.
ACLs were the first authorization mechanism in S3. Bucket policies are the newer method, and the mechanism used by almost all other AWS services. Policies can implement very complex rules and permissions, while ACLs are simplistic (they have ALLOW but no DENY). To manage S3 well, you need a solid understanding of both.
The real complication happens when you implement both ACLs and policies. The end permission set will be the least-privilege combination of both: an explicit Deny in a policy overrides anything an ACL allows.

AWS has outlined the specific use cases for the different access policy options here
They lay out...
When to Use an Object ACL
When the objects are not owned by the bucket owner
When permissions vary by object
When to Use a Bucket ACL
To grant write permission to the Amazon S3 Log Delivery group so it can write access-log objects to your bucket (see the CLI sketch after this list)
When to Use a Bucket Policy
To manage cross-account permissions for all Amazon S3 actions (ACLs can only grant read, write, read ACL, write ACL, and "full control", i.e. all of the previous permissions combined)
When to Use a User Policy
If you want to manage permissions individually, by attaching policies to users (or user groups) rather than at the bucket level using a bucket policy
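As a rough illustration of the bucket ACL case above, a sketch with a placeholder bucket name. Note that put-bucket-acl replaces the entire existing ACL, so real usage should also re-grant the owner's own permissions:
# Grant the S3 Log Delivery group write access for server access logs
aws s3api put-bucket-acl --bucket example-bucket \
    --grant-write URI=http://acs.amazonaws.com/groups/s3/LogDelivery \
    --grant-read-acp URI=http://acs.amazonaws.com/groups/s3/LogDelivery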

Related

How to give access of s3 bucket residing in Account A to different iam users from multiple aws accounts?

I am working on an AWS SAM project and I have a requirement to give multiple IAM users from unknown AWS accounts access to my S3 bucket, but I can't make the bucket publicly accessible. I want to keep my bucket secure while still letting any IAM user from any AWS account access its contents. Is this possible?
Below is the policy I tried, and it worked perfectly.
{
  "Version": "2012-10-17",
  "Id": "Policy1616828964582",
  "Statement": [
    {
      "Sid": "Stmt1616828940658",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/STS_Role_demo"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::new-demo-bkt/*"
    }
  ]
}
The above policy is for one user, but I want any user from another AWS account to be able to access my contents without making the bucket and objects public. How can I achieve this?
This might be possible using a set of Conditions on the incoming requests.
I can think of two options:
You create an IAM role that your SAM application uses even when running in other accounts
You create S3 bucket policies that allow unknown users access
If you decide to look into S3 bucket policies, I suggest using an S3 Access Point to better manage access policies.
Access points are named network endpoints that are attached to buckets
that you can use to perform S3 object operations, such as GetObject
and PutObject. Each access point has distinct permissions and network
controls that S3 applies for any request that is made through that
access point. Each access point enforces a customized access point
policy that works in conjunction with the bucket policy that is
attached to the underlying bucket.
You can use a combination of S3 Conditions to restrict access. For example, your SAM application could include specific condition keys when making S3 requests, and the bucket policy then allows access based on those conditions.
You can also apply global IAM conditions to S3 policies.
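As a sketch of that idea (with the caveat that follows): a bucket policy that allows reads only when the request carries an agreed-upon User-Agent value. The bucket name reuses the one from the question; the aws:UserAgent string is a made-up placeholder.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadWithSharedSecretHeader",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::new-demo-bkt/*",
      "Condition": {
        "StringEquals": {
          "aws:UserAgent": "MySamApp/1.0"
        }
      }
    }
  ]
}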
This isn't great security, though; malicious actors might be able to figure out the headers and spoof requests to your bucket. As the documentation notes for conditions such as aws:UserAgent:
This key should be used carefully. Since the aws:UserAgent value is
provided by the caller in an HTTP header, unauthorized parties can use
modified or custom browsers to provide any aws:UserAgent value that
they choose. As a result, aws:UserAgent should not be used to
prevent unauthorized parties from making direct AWS requests. You can
use it to allow only specific client applications, and only after
testing your policy.

Disable AWS S3 Management Console

Is it possible to disable AWS S3 management console for the security reasons?
We don't want anyone, including root/admin users, to access customer files directly from S3. We should have only programmatic access to the files stored in S3.
If this is not possible, is it possible to stop listing the directories inside the bucket for all users?
This is a tricky one to implement, however the following should be able to fulfill the requirements.
Programmatic Access Only
You need to define exactly which actions should be denied; you would not want to block access completely, otherwise you might lose the ability to manage the bucket at all.
If you're in AWS, you should use IAM roles and a VPC endpoint to connect to the S3 service. Both of these can be referenced in your S3 bucket's bucket policy to control access.
You would use this to deny List* actions where the source is not the VPC endpoint; you could also deny requests where the principal is not one of a specific subset of roles. A sketch of the VPC endpoint variant follows.
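A minimal sketch of that deny, assuming placeholder bucket and VPC endpoint IDs (aws:SourceVpce is the global condition key that carries the endpoint ID of the request):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyListUnlessThroughVpcEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:List*",
      "Resource": "arn:aws:s3:::example-bucket",
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}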
This works for all programmatic use cases and for people who log in as an IAM user via the console; however, it does not deny access to the root user.
Also bear in mind that any IAM user or IAM role has no access at all unless you explicitly grant it via an IAM policy.
Denying Access To The Root User
There is currently only one way to deny access to the root user of an AWS account (and you should not share those credentials with anyone, even within your company), and that is by using a Service Control Policy (SCP).
To do this, the account would need to be part of an AWS organisation (as an organisational unit). If/once it is, you would create an SCP that denies the specific actions you want for the root principal.
An example of such a policy would be:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RestrictS3ForRoot",
      "Effect": "Deny",
      "Action": [
        "s3:List*"
      ],
      "Resource": [
        "*"
      ],
      "Condition": {
        "StringLike": {
          "aws:PrincipalArn": [
            "arn:aws:iam::*:root"
          ]
        }
      }
    }
  ]
}
Yes, it is possible to disable the Management Console: Don't give users a password.
When creating IAM Users, there are two ways to provide credentials:
Sign-in Credentials (for the Console)
Access Key (for API calls)
Only give users an Access Key and they won't be able to login to the console.
However, please note that when using the Management Console, users have exactly the same permissions as when using an Access Key. Thus, if they can do something in the console, they can do it via an API call (if they have an Access Key).
If your goal is to prevent anyone from accessing customer files, then you can add a Bucket Policy with a Deny on s3:* for the bucket, where the Principal is not the customer.
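A sketch of such a policy, with placeholder account and bucket names; be careful, since a Deny with NotPrincipal locks out every principal except those listed, including your own admin roles:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyEveryoneExceptCustomer",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::111122223333:user/customer-user"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::customer-bucket",
        "arn:aws:s3:::customer-bucket/*"
      ]
    }
  ]
}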
Please note, however, that the Root login can remove such a policy.
If the customers really want to keep their own data private, then they would need to create their own AWS account and keep their files within it, without granting you access.

Allow access to S3 Bucket from all EC2 instances of specific Account

Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
I would like to provide data that should be very simple for clients to download to their instances. Ideally, automatically via the post_install script option of AWS ParallelCluster.
However, it seems like this requires a lot of setup, as is described in this tutorial by AWS:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-instance-access-bucket/
This is not feasible for me. Clients should not have to create IAM roles.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
export AWS_ACCESS_KEY_ID=<key-id>
export AWS_SECRET_ACCESS_KEY=<secret-key>
aws s3 cp s3://<bucket> . --recursive
Unfortunately, this is also not ideal as I would like to provide ready-to-use AWS Parallelcluster post_install scripts. These scripts should automatically download the required data on cluster startup.
Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
Yes. It's a two-step process. In summary:
1) On your side, the bucket must trust the account id of the other accounts that will access it, and you must decide which calls you will allow.
2) On the other accounts that will access the bucket, the instances must be authorised to run AWS API calls on your bucket using IAM policies.
In more detail:
Step 1: let's work through this and break it down.
On your bucket, you'll need to configure a bucket policy like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "111",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID_TO_TRUST:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
    }
  ]
}
You can find more examples of bucket policies in the AWS documentation here.
WARNING 1: "arn:aws:iam::ACCOUNT_ID:root" will trust everything that has permissions to connect to your bucket on the other AWS account. This shouldn't be a problem for what you're trying to do, but it's best you completely understand how this policy works to prevent any accidents.
WARNING 2: Do not grant s3:* - you will need to scope the permissions down to actions such as s3:GetObject (a scoped-down variant is sketched below). There is a website to help you generate these policies here. s3:* includes delete permissions, which if used incorrectly could result in nasty surprises.
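For instance, a possible read-only variant of the policy above, with the same placeholders; note that s3:ListBucket applies to the bucket ARN itself, not the /* object path:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountReadOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID_TO_TRUST:root"
      },
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME_HERE",
        "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
      ]
    }
  ]
}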
Now, once that's done, great work - that's things on your end covered.
Step 2: The other accounts that want to read the data will have to assign an instance role to the EC2 instances they launch, and that role will need a policy attached to it granting access to your bucket. Those instances can then run AWS CLI commands against your bucket, provided your bucket policy authorises the call on your side and the instance policy authorises it on theirs.
The policy that needs to be attached to the instance role should look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
    }
  ]
}
Keep in mind, just because this policy grants s3:* it doesn't mean they can do anything on your bucket, not unless you have s3:* in your bucket policy. Actions of this policy will be limited to whatever you've scoped the permissions to in your bucket policy.
This is not feasible for me. Clients should not have to create IAM roles.
If they have an AWS account, it's up to them how they choose to access the bucket. As long as you define a bucket policy that trusts their account, the rest is on them: they can create an EC2 instance role and grant it permissions to your bucket, or an IAM user and grant that access instead. It doesn't matter.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
If the code will run on an ec2 instance, it's bad practice to use access keys and instead should use an ec2 instance role.
Ideally, automatically via CloudFormation on instance startup.
I think you mean via instance userdata, which you can define through CloudFormation.
You say "Clients should not have to create IAM roles". This is perfectly correct.
I presume that you are creating the instances for use by the clients. If so, then you should create an IAM Role that has access to the desired bucket.
Then, when you create an Amazon EC2 instance for your clients, associate the IAM Role to the instance. Your clients will then be able to use the AWS Command-Line Interface (CLI) to access the S3 bucket (list, upload, download, or whatever permissions you put into the IAM Role).
If you want the data to be automatically downloaded when you first create their instance, then you can add User Data script that will execute when the instance starts. This can download the files from S3 to the instance.
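A minimal sketch of such a User Data script, assuming the instance role already grants read access and using placeholder bucket and path names:
#!/bin/bash
# Runs at first boot; credentials come from the instance role, not access keys.
aws s3 cp s3://example-data-bucket/shared-data/ /opt/data/ --recursive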

S3 - Revoking "full_control" permission from owned object

While writing an S3 server implementation, I ran into a question I can't really find an answer to anywhere.
For example: I'm the bucket owner, and also the owner of an uploaded object.
If I revoke the "full_control" permission from the object owner (myself), will I still be able to access and modify that object?
What's the expected behaviour in the following example:
s3cmd setacl --acl-grant full_control:ownerID s3://bucket/object
s3cmd setacl --acl-revoke full_control:ownerID s3://bucket/object
s3cmd setacl --acl-grant read:ownerID s3://bucket/object
Thanks
Here's the official answer from AWS support:
The short answer for that question would be yes, the bucket/object
owner has permission to read and update the bucket/object ACL,
provided that there is no bucket policy attached that explicitly
removes these permissions from the owner. For example, the following
policy would prevent the owner from doing anything on the bucket,
including changing the bucket's ACL:
{
  "Id": "Policy1531126735810",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example bucket policy",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::<bucket>",
      "Principal": "*"
    }
  ]
}
However, as root (bucket owner) you'd still have permission to delete
that policy, which would then restore your permissions as bucket owner
to update the ACL.
By default, all S3 resources, buckets, objects and subresources, are
private; only the resource owner, which is the AWS account that
created it, can access the resource[1]. As the resource owner (AWS
account), you can optionally grant permission to other users by
attaching an access policy to the users.
Example: let's say you created an IAM user called -S3User1-, and gave it permission to create buckets in S3 and update their ACLs. The user in question then goes ahead and creates a bucket, naming it "s3user1-bucket". After that, he goes further and removes the List objects, Write objects, Read bucket permissions and Write bucket permissions from the root account in the ACL section. At this point, if you log in as root and attempt to read the objects in that bucket, an "Access Denied" error will be thrown. However, as root you'll be able to go to the "Permissions" section of the bucket and add these permissions back.
These days it is recommended to use the official AWS Command-Line Interface (CLI) rather than s3cmd.
You should typically avoid using object-level permissions to control access. It is best to make them all "bucket-owner full control" and then use Bucket Policies to grant access to the bucket or a path.
If you wish to provide per-object access, it is recommended to use Amazon S3 pre-signed URLs, which give time-limited access to a private object. Once the time expires, the URL no longer works. Your application would be responsible for determining whether a user is permitted to access an object, and then generates the pre-signed URL (eg as a link or href on an HTML page).
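For example, assuming placeholder bucket and key names, both of these are standard AWS CLI calls:
# Hand an object back to bucket-owner control via a canned ACL:
aws s3api put-object-acl --bucket example-bucket --key path/to/object --acl bucket-owner-full-control

# Generate a pre-signed URL that expires after one hour:
aws s3 presign s3://example-bucket/path/to/object --expires-in 3600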

How should I set up my bucket policy so I can deploy to S3?

I've been working on this a long time and I am getting nowhere.
I created a user and it gave me
AWSAccessKeyId
AWSSecretKey
I created a bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObjectAcl",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::abc9876/*"
    }
  ]
}
Now when I use a gulp program to upload to the bucket I see this:
[20:53:58] Starting 'deploy'...
[20:53:58] Finished 'deploy' after 25 ms
[20:53:58] [cache] app.js
Process terminated with code 0.
To me it looks like it should have worked but when I go to the console I cannot see anything in my bucket.
Can someone tell me if my bucket policy looks correct and give me some suggestions on how to test the upload? Could I, for example, test this from the command line?
There are multiple ways to manage access control on S3. These different mechanisms can be used simultaneously, and the authorization of a request will be the result of the interaction of all the rules in all these mechanisms. Things can get confusing!
Let's try to make things easier to understand. You have:
IAM policies - these are policies you define for specific Users or Groups (or Roles, but let's not get into that...).
S3 bucket policies - these are policies that you define at the bucket level.
S3 ACLs (access control lists) - these are rules that you define both at the bucket level and the object level. This is that permissions area mentioned on a comment to another answer.
Whenever you send a request to S3, e.g. downloading an object, the request will be processed by an authorization system. This system will calculate the union of all the policies/rules described above, and then will follow a process that can be simplified as follows:
If there is any rule explicitly denying the request, it's denied. Period.
Otherwise, if there is any rule explicitly allowing the request, it's allowed.
Otherwise, the request is denied.
Let's say you have all the mechanisms in place. For the request to be accepted, you must not have any rules Denying that request, and need to have at least one rule allowing that request.
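To make that concrete, here is a contrived sketch reusing the question's bucket name (secret.txt is a made-up key): the explicit Deny on that single object wins over the blanket Allow, per the first rule above.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEveryoneToReadEverything",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::abc9876/*"
    },
    {
      "Sid": "ExplicitDenyTrumpsTheAllowAbove",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::abc9876/secret.txt"
    }
  ]
}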
Making your policies easier to understand...
My suggestion to you is to simplify your policies. Choose one access control mechanism and stick to it.
In your specific situation, from your very brief description, I feel that using IAM policies could be a good idea. You can use either an IAM User Policy (that you define and attach specifically to your IAM User) or an IAM Group Policy (that you define and attach to a group your IAM User belongs to). Let's forget about IAM Roles, that is a whole different story.
Then delete your ACLs and Bucket Policies. Your requests should be allowed then.
As an additional hint, make sure the software you are using to upload objects to S3 is actually using those two API calls: PutObject and PutObjectAcl. Keep in mind that S3 supports multi-part upload through a different set of API calls; if your tool does multi-part uploads under the hood, be sure to allow those calls as well (many tools do, including the AWS CLI, and many SDKs have a higher-level S3 API that handles this too). A sketch of such a policy follows.
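A minimal sketch of such an IAM user policy, reusing the question's bucket name. The multipart actions are included on the assumption that your tool may use them; s3:PutObject itself covers the multipart upload calls, while aborting an upload and listing its parts are separate actions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDeployUploads",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::abc9876/*"
    }
  ]
}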
For more information on this matter, I'd suggest the following post from the AWS Security Blog:
IAM policies and Bucket Policies and ACLs! Oh My! (Controlling Access to S3 Resources)
You don't need to define "Principal": "*", since you have already created an IAM user.
The Bucket Policy looks fine; if there were a problem with access, it would have given you an appropriate error.
Just make sure your key name is correct when calling the AWS APIs; the key name is what uniquely identifies an object within a bucket.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html