AWS S3 Bucket Policy - Principal Syntax - amazon-web-services

Right now I have my policy defined on my S3 bucket, but it seems the principals I have defined are the account roots, and a user under one of those accounts who isn't root doesn't fall into the Allow part of the policy:
"Principal": {
"AWS": [
"arn:aws:iam::123:root",
"arn:aws:iam::456:root",
"arn:aws:iam::789:root",
"arn:aws:iam::101:root"
]
},
I tried to specify it as
"arn:aws:iam::123:*"
but that doesn't work.
I also tried arn:aws:iam::123:user/sample#yahoo.com, but that doesn't seem to be correct either; it fails with "Invalid principal in policy".

When granting cross-account permissions, you need both of the following:
A bucket policy on Bucket-A in Account-A (as above)
Permissions on the users in their own account to access Bucket-A (which can include wide permissions such as s3:*, but that's rarely a good idea)
Not only does the bucket need to permit access, but the users in the originating account must be granted permission to use S3 for the desired actions (eg s3:GetObject) on Bucket-A (or all buckets).
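For illustration, here is a minimal sketch of both pieces using boto3. The account ID, bucket name, user name, and policy name below are placeholders, not values from the question:

import json
import boto3

# Piece 1: bucket policy on Bucket-A in Account-A, trusting the other account's root.
# Granting the account root delegates access; users in that account still need IAM permissions.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CrossAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::bucket-a", "arn:aws:s3:::bucket-a/*"],
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="bucket-a", Policy=json.dumps(bucket_policy))

# Piece 2: IAM policy attached to the user inside the other account.
# Without this, the user is still denied even though the bucket policy allows the account.
user_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::bucket-a", "arn:aws:s3:::bucket-a/*"],
    }],
}
boto3.client("iam").put_user_policy(
    UserName="sample-user",
    PolicyName="AllowBucketA",
    PolicyDocument=json.dumps(user_policy),
)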
See: Bucket Owner Granting Cross-Account Bucket Permissions - Amazon Simple Storage Service

Related

Access denied when principal set to aws lambda.amazonaws.com

I access a bucket with a Lambda function via boto3 and I get an "Access Denied" error. If I set the principal to "*" in the bucket policy, it all works fine. What is the issue?
"Sid": "DenyIncorrectEncryptionHeader",
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
...
In the bucket policy, the principal that should be given Allow permission is the Lambda function's execution role, not the Lambda service (lambda.amazonaws.com).
To add more detail: if the Lambda execution role is in the same AWS account as the bucket, then an Allow permission in the role should suffice, as long as there is no explicit Deny in the bucket policy.
However, if the role and bucket are in separate accounts, the role has to have an Allow permission and the bucket policy also has to give Allow permission to the role.
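As a rough sketch of that cross-account case (the role name, account ID, and bucket name below are placeholders, not values from the question), the bucket policy names the execution role's ARN as the principal:

import json
import boto3

# Name the Lambda *execution role* as the principal, not lambda.amazonaws.com.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowLambdaExecutionRole",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/my-lambda-exec-role"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-bucket/*",
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))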
The steps would be (a boto3 sketch follows the list):
Create an IAM Role with a use-case of Lambda (this creates a Trust Policy that allows the AWS Lambda service to assume the role)
Add a Policy to the IAM Role that grants the required Amazon S3 permissions
Configure the AWS Lambda function to use this IAM Role
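Here is a minimal boto3 sketch of steps 1 and 2, with placeholder role, policy, and bucket names and an assumed read/write S3 use case:

import json
import boto3

iam = boto3.client("iam")

# Step 1: the trust policy is where lambda.amazonaws.com belongs -- it lets the
# AWS Lambda service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="my-lambda-role", AssumeRolePolicyDocument=json.dumps(trust_policy))

# Step 2: grant only the S3 permissions the function needs.
s3_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-bucket/*",
    }],
}
iam.put_role_policy(RoleName="my-lambda-role", PolicyName="S3Access", PolicyDocument=json.dumps(s3_policy))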
There is no need to use an Amazon S3 Bucket Policy for your stated requirements.
As a general rule, Bucket Policies are used when granting permission to everyone, and IAM policies should be used when granting access to specific Users or Groups. (However, there can be other situations for using Bucket Policies, such as granting cross-account access.)

AWS Lambda not able to access cross-account S3 bucket object, but from the CLI it's working

I am getting a Forbidden error while accessing cross-account S3 buckets, but I am able to access the bucket using the AWS S3 CLI.
I have checked the following things:
I tested the code in June and it was working; it has not been changed in the last 4 months.
Lambda role (not changed in the last 4 months):
{
"Action": "s3:*",
"Resource": [
"*"
],
"Effect": "Allow"
},
The code works with an S3 bucket in the same account.
In account 2, the List objects, Write objects, Read bucket permissions, and Write bucket permissions grants have all been given.
I am able to list the bucket contents from the AWS CLI, but it is not working from Lambda.
Found the issue: it was happening because I didn't apply an object-level ACL to read the object.
But there is still one issue: there can be multiple files for which I want to call HeadObject to determine the file size, and asking the customer to put an object ACL on each object one by one is not user friendly. So is there a way to put a read-object ACL at the bucket level?
Scenario:
Lambda-A in Account-A
Bucket-B in Account-B
Lambda-A wants to access objects in Bucket-B
To do this, two things are required:
Lambda-A must have an IAM Role with Amazon S3 permissions to access the remote bucket (eg similar to the Role you show in your question). However, be careful: the role you show grants TOTAL S3 permissions, including deleting objects and deleting buckets! You should always scope the permissions down to only what the Lambda function requires.
Also, Account-B must permit access to the Lambda function, since it owns the bucket. This can be accomplished in two ways:
Add a Bucket Policy to Bucket-B that grants access to the IAM Role being used by the Lambda function, or
The Lambda function can assume an IAM Role in Account-B that has been granted access to Bucket-B
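A minimal boto3 sketch of the second option, with a placeholder account ID, role name, bucket, and key (none taken from the question):

import boto3

# Inside Lambda-A: assume a role in Account-B that already has access to Bucket-B,
# then use the temporary credentials for the S3 calls.
sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/bucket-b-access",
    RoleSessionName="lambda-a-cross-account",
)
creds = assumed["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# For example, a HeadObject call to determine an object's size.
response = s3.head_object(Bucket="bucket-b", Key="path/to/object.txt")
print(response["ContentLength"])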
Your method of granting access by making individual objects public is not a good way to manage access.

S3 - Revoking "full_control" permission from owned object

While writing an S3 server implementation, I ran into a question I can't really find an answer to anywhere.
For example, I'm the bucket owner, as well as the owner of the uploaded object.
If I revoke the "full_control" permission from the object owner (myself), will I still be able to access and modify that object?
What's the expected behaviour in the following example:
s3cmd setacl --acl-grant full_control:ownerID s3://bucket/object
s3cmd setacl --acl-revoke full_control:ownerID s3://bucket/object
s3cmd setacl --acl-grant read:ownerID s3://bucket/object
Thanks
So there's the official answer from AWS support:
The short answer for that question would be yes, the bucket/object
owner has permission to read and update the bucket/object ACL,
provided that there is no bucket policy attached that explicitly
removes these permissions from the owner. For example, the following
policy would prevent the owner from doing anything on the bucket,
including changing the bucket's ACL:
{
"Id": "Policy1531126735810",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Example bucket policy",
"Action": "s3:*",
"Effect": "Deny",
"Resource": "arn:aws:s3:::<bucket>",
"Principal": "*"
}
]
}
However, as root (bucket owner) you'd still have permission to delete
that policy, which would then restore your permissions as bucket owner
to update the ACL.
By default, all S3 resources, buckets, objects and subresources, are
private; only the resource owner, which is the AWS account that
created it, can access the resource[1]. As the resource owner (AWS
account), you can optionally grant permission to other users by
attaching an access policy to the users.
Example: let's say you created an IAM user called -S3User1-, and gave
it permission to create buckets in S3 and update its ACLs. The user in
question then goes ahead and creates a bucket and names it
"s3user1-bucket". After that, he goes further and removes List objects,
Write objects, Read bucket permissions and Write bucket permissions
from the root account in the ACL section. At this point, if you log in
as root and attempt to read the objects in that bucket, an "Access
Denied" error will be thrown. However, as root you'll be able to go to
the "Permissions" section of the bucket and add these permissions
back.
These days it is recommended to use the official AWS Command-Line Interface (CLI) rather than s3cmd.
You should typically avoid using object-level permissions to control access. It is best to make them all "bucket-owner full control" and then use Bucket Policies to grant access to the bucket or a path.
If you wish to provide per-object access, it is recommended to use Amazon S3 pre-signed URLs, which give time-limited access to a private object. Once the time expires, the URL no longer works. Your application would be responsible for determining whether a user is permitted to access an object, and then generating the pre-signed URL (eg as a link or href on an HTML page).
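For example, a minimal boto3 sketch of generating such a URL (the bucket and key names are illustrative):

import boto3

s3 = boto3.client("s3")

# Generate a time-limited URL for a private object; it stops working after
# ExpiresIn seconds have elapsed.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-private-bucket", "Key": "reports/summary.pdf"},
    ExpiresIn=3600,  # one hour
)
print(url)  # eg embed this as an href on an HTML page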

S3 bucket policy vs access control list

On the AWS website, it suggests using the following bucket policy to make the S3 bucket public:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::example-bucket/*"
]
}
]
}
What's the difference between that and just setting it through the Access Control List?
Bottom line: 1) Access Control Lists (ACLs) are legacy (but not deprecated), 2) bucket/IAM policies are recommended by AWS, and 3) ACLs can be attached to both buckets AND individual objects, while policies are attached only at the bucket level.
Decide which to use by considering the following: (As noted below by John Hanley, more than one type could apply and the most restrictive/least privilege permission will apply.)
Use S3 bucket policies if you want to:
Control access in S3 environment
Know who can access a bucket
Stay under 20kb policy size max
Use IAM policies if you want to:
Control access in IAM environment, for potentially more than just buckets
Manage very large numbers of buckets
Know what a user can do in AWS
Stay under the 2-10kb policy size max (it varies depending on whether the policy is attached to a user, group, or role)
Use ACLs if you want to:
Control access to buckets and objects
Exceed 20kb policy size max
Keep using ACLs that you're already happy with
https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/
If you want to implement fine-grained control over individual objects in your bucket, use ACLs. If you want to implement global control, such as making an entire bucket public, use policies.
ACLs were the first authorization mechanism in S3. Bucket policies are the newer method, and the method used for almost all AWS services. Policies can implement very complex rules and permissions, while ACLs are simplistic (they have ALLOW but no DENY). To manage S3 you need a solid understanding of both.
The real complication happens when you implement both ACLs and policies. The effective permission set will be the least-privilege combination of both.
AWS has outlined the specific use cases for the different access policy options here
They lay out the following (a short boto3 sketch follows these lists)...
When to Use an Object ACL
when objects are not owned by the bucket owner
permissions vary by object
When to Use a Bucket ACL
to grant write permission to the Amazon S3 Log Delivery group to write access log objects to your bucket
When to Use a Bucket Policy
to manage cross-account permissions for all Amazon S3 permissions (ACLs can only grant read, write, read ACL, write ACL, and "full control", which is all of the previous permissions combined)
When to Use a User Policy
if you want to manage permissions individually by attaching policies to users (or user groups) rather than at the bucket level using a Bucket Policy
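To make the mechanics concrete, here is a minimal boto3 sketch of the two approaches (bucket and key names are illustrative, and the object ACL call assumes the bucket still has ACLs enabled):

import json
import boto3

s3 = boto3.client("s3")

# Object ACL: a per-object grant, eg making a single object publicly readable.
s3.put_object_acl(Bucket="example-bucket", Key="docs/readme.txt", ACL="public-read")

# Bucket policy: one bucket-level statement covering every object, eg the
# PublicReadGetObject policy from the question, applied programmatically.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))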

How to secure an S3 bucket to an Instance's Role?

Using CloudFormation, I have launched an EC2 instance with a role that has an S3 policy which looks like the following:
{"Statement":[{"Action":"s3:*","Resource":"*","Effect":"Allow"}]}
In S3 the bucket policy is like so
{
"Version": "2012-10-17",
"Id": "MyPolicy",
"Statement": [
{
"Sid": "ReadAccess",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456678:role/Production-WebRole-1G48DN4VC8840"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::web-deploy/*"
}
]
}
When I log in to the instance and attempt to curl any object I upload into the bucket (without ACL modifications), I receive an Unauthorized 403 error.
Is this the correct way to restrict access to a bucket to only instances launched with a specific role?
The EC2 instance role is more than sufficient to put/read objects in any of your S3 buckets, but you need to actually use the instance role's credentials, which is not done automatically by curl.
You should use, for example, aws s3 cp <local source> s3://<bucket>/<key>, which will automatically use the instance role.
There are three ways to grant access to an object in Amazon S3:
Object ACL: Specific objects can be marked as "Public", so anyone can access them.
Bucket Policy: A policy placed on a bucket to determine what access to Allow/Deny, either publicly or to specific Users.
IAM Policy: A policy placed on a User, Group or Role, granting them access to an AWS resource such as an Amazon S3 bucket.
If any of these policies grant access, the user can access the object(s) in Amazon S3. One exception is if there is a Deny policy, which overrides an Allow policy.
Role on the Amazon EC2 instance
You have granted this role to the Amazon EC2 instance:
{"Statement":[{"Action":"s3:*","Resource":"*","Effect":"Allow"}]}
This will provide credentials to the instance that can be accessed by the AWS Command-Line Interface (CLI) or any application using the AWS SDK. They will have unlimited access to Amazon S3 unless there is also a Deny policy that otherwise restricts access.
If anything, that policy is granting too much permission. It is allowing an application on that instance to do anything it wants to your Amazon S3 storage, including deleting it all! It is better to assign least privilege, only giving permission for what the applications need to do.
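For instance, a least-privilege policy for this use case might look like the following sketch (the role name is taken from the question; the policy name and the read-only action set are assumptions):

import json
import boto3

# Scope the instance role down to read-only access on the one bucket it needs.
least_privilege = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::web-deploy", "arn:aws:s3:::web-deploy/*"],
    }],
}
boto3.client("iam").put_role_policy(
    RoleName="Production-WebRole-1G48DN4VC8840",
    PolicyName="WebDeployReadOnly",
    PolicyDocument=json.dumps(least_privilege),
)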
Amazon S3 Bucket Policy
You have also created a Bucket Policy, which allows anything that has assumed the Production-WebRole-1G48DN4VC8840 role to retrieve the contents of the web-deploy bucket.
It doesn't matter what specific permissions the role itself has -- this policy means that merely using the role to access the web-deploy bucket will allow it to read all files. Therefore, this policy alone would be sufficient for your requirement of granting bucket access to instances using the Role -- you do not also require the policy within the role itself.
So, why can't you access the content? It is because a plain curl request does not identify your role/user. Amazon S3 receives the request and treats it as anonymous, thereby not granting access.
Try accessing the data via the CLI or programmatically via an SDK call. For example, this CLI command would download an object:
aws s3 cp s3://web-deploy/foo.txt foo.txt
The CLI will automatically grab credentials related to your role, allowing access to the objects.
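Or, equivalently, a minimal boto3 sketch run on the instance (the SDK, like the CLI, picks up the instance role's credentials automatically from the instance metadata):

import boto3

# boto3 discovers the instance-profile credentials on its own, so the request is
# signed as the role rather than sent anonymously like a plain curl request.
s3 = boto3.client("s3")
s3.download_file("web-deploy", "foo.txt", "foo.txt")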