Access Denied while Applying S3 Bucket Policy - amazon-web-services

I am new to AWS. I have created a bucket for static hosting and generated my bucket policy as shown below.
But when I try to save it, I get an Access Denied error.
For your information, I am the owner of the bucket and the root user.

First, try this command:
aws s3 ls
This will tell you whether you are able to access the S3 service at all.
If it works, then try adding that specific bucket's ARN to the Resource section of your policy, like below:
"Resource": [
"arn:aws:s3:::BUCKET_NAME",
"arn:aws:s3:::BUCKET_NAME/*"
]
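For context, a complete bucket policy for public static-website reads might look like the following sketch (BUCKET_NAME is a placeholder for your own bucket name; note that the Resource entries must be full ARNs, not bare bucket names):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}
```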
I hope this helps!

Related

How do you allow granting public read access to objects uploaded to AWS S3?

I have created a policy that allows access to a single S3 bucket in my account. I then created a group that has only this policy and a user that is part of that group.
The user can view, delete and upload files to the bucket, as expected. However, the user does not seem to be able to grant public read access to uploaded files.
When the Grant public read access to this object(s) option is selected, the upload fails.
The bucket is hosting a static website and I want to allow the frontend developer to upload files and make them public.
The policy for the user role is below:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::my-bucket"
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::my-bucket/*"
}
]
}
When the IAM user tries to grant public access to the uploaded file, the upload gets stuck and nothing happens (a proxy error also appears, but it seems unrelated). If they don't select the Grant public access option, the upload goes through immediately (despite the proxy error showing up as well).
To reproduce your situation, I did the following:
Created a new Amazon S3 bucket with default settings (Block Public Access = On)
Created an IAM User (with no policies attached)
Created an IAM Group (with no policies attached)
Added the IAM User to the IAM Group
Attached your policy (from the Question) to the IAM Group (updating the bucket name) as an inline policy
Logged into the Amazon S3 management console as the new IAM User
At this point, the user received an Access Denied error because they were not permitted to list all Amazon S3 buckets. Thus, the console was not usable.
Instead, I ran this AWS CLI command:
aws s3 cp foo.txt s3://new-bucket/ --acl public-read
The result was:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
However, the operation succeeded with:
aws s3 cp foo.txt s3://new-bucket/
This means that the --acl is the component that was denied.
I then went to Block Public Access for the bucket and turned OFF the option called "Block public access to buckets and objects granted through new access control lists (ACLs)", leaving the other settings unchanged.
I then ran this command again:
aws s3 cp foo.txt s3://new-bucket/ --acl public-read
It worked!
To verify this, I went back into Block Public Access and turned ON all options (via the top checkbox). I re-ran the command and it was Access Denied again, confirming that the cause was the Block Public Access setting.
Bottom line: Turn off the first Block Public Access setting.
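As a sketch, the same Block Public Access change can be made from the CLI (assuming the bucket from the example above is named new-bucket; this relaxes both ACL-related settings so a public-read ACL can be both saved and honored, while the policy-related blocks stay on):

```shell
# Relax only the ACL-related Block Public Access settings.
# BlockPublicAcls blocks *setting* public ACLs; IgnorePublicAcls makes
# S3 ignore public ACLs at request time. The policy blocks remain true.
aws s3api put-public-access-block \
    --bucket new-bucket \
    --public-access-block-configuration \
    "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```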
You can also update an object's ACL through the AWS CLI.
Option 1:
For an object that's already stored on Amazon S3, you can run this command to update the ACL for public read access:
aws s3api put-object-acl --bucket <<S3 Bucket Name>> --key <<object>> --acl public-read
Option 2:
Run this command to grant full control of the object to the AWS account owner and read access to everyone else:
aws s3api put-object-acl --bucket <<S3 Bucket Name>> --key <<object>> --grant-full-control emailaddress=<<Accountowneremail@emaildomain.com>> --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers
I found that certain actions (like renaming an object) will fail when executed from the console (but will succeed from the CLI!) when ListAllMyBuckets is not granted for all s3 resources. Adding the following to the IAM policy resolved the issue:
{
"Sid": "AccessS3Console",
"Action": [
"s3:ListAllMyBuckets"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::*"
}
Some of the actions I tested that failed from the console but succeeded from CLI:
Renaming an object. The console displays "Error - Failed to rename the file to ". Workaround: deleting and re-uploading the object with a new name.
Uploading an object with "Grant public read access to this object(s)". The console's status bar shows that the operation is stuck in "in progress". Workaround: Uploading the object without granting public read access, and then right clicking on it and selecting "Make public".
I experienced these issues after following the instructions at https://aws.amazon.com/premiumsupport/knowledge-center/s3-console-access-certain-bucket/, which describe how to restrict access to a single bucket (while preventing the user from seeing the full list of buckets in the account). The post doesn't mention these caveats.
To limit a user's Amazon S3 console access to only a certain bucket or
folder (prefix), change the following in the user's AWS Identity and
Access Management (IAM) permissions:
Remove permission to the s3:ListAllMyBuckets action.
Add permission to s3:ListBucket only for the bucket or folder that you want the user to access.
Note: To allow the user to upload and download objects from the bucket or folder, you must also include s3:PutObject and s3:GetObject.
Warning: After you change these permissions, the user gets an Access
Denied error when they access the main Amazon S3 console. The user
must access the bucket using a direct console link to the bucket or
folder.
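Putting those steps together, a restricted-console IAM policy might look like the following sketch (DOC-EXAMPLE-BUCKET is a placeholder bucket name; note that s3:ListBucket applies to the bucket ARN while the object actions apply to the /* object ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListOnlyThisBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET"
    },
    {
      "Sid": "ReadWriteObjectsInThisBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
    }
  ]
}
```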

aws s3 access denied when changing bucket policy for root user

I am logged in as the root user for my AWS account. A bucket has been created for static website hosting. I have also unchecked all the options under Public Access Settings.
After that, I tried to update the bucket policy to this one from the docs:
{
"Version":"2012-10-17",
"Statement":[{
"Sid":"PublicReadGetObject",
"Effect":"Allow",
"Principal": "*",
"Action":["s3:GetObject"],
"Resource":["arn:aws:s3:::example-bucket/*"
]
}
]
}
But I keep getting an Access Denied error. I don't get what I have missed. I have tried the following things:
I have found other SO posts (like this one) which say to uncheck the "block new public bucket policies" option, which I have already done, so why does it not work for me?
I destroyed the bucket and redid everything from scratch, but the same issue persists.
I also created a new IAM user with roles to access everything. This also didn't solve the issue.
I can, however, manually change the S3 objects to public through the Make Public option in the S3 menu, under the S3 Overview tab. This has been solving my problem temporarily, but I have to keep doing it every time I re-upload the files.
So my Question is. Why do I keep getting access denied even for root user?
I followed your steps and it worked fine for me:
I created a new bucket
I clicked on the bucket, went into the Permissions tab and edited the Public access settings
I turned off the two settings under Manage public bucket policies for this bucket
I added a Bucket Policy by taking your policy (above) and changed the bucket name to match my bucket
I successfully saved the Bucket Policy
It then gave me a warning note:
This bucket has public access
You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket.
I cannot see why you would be receiving Access Denied. My only suggestion is to try it with a different browser just in case there's some strange issue.
You could also add the Bucket Policy via the AWS CLI: put-bucket-policy — AWS CLI Command Reference
This also successfully added a Bucket Policy onto my bucket.
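As a sketch of the CLI route (assuming the policy above is saved locally as policy.json and the bucket is example-bucket):

```shell
# Apply the bucket policy from a local JSON file.
aws s3api put-bucket-policy \
    --bucket example-bucket \
    --policy file://policy.json

# Confirm what is now attached to the bucket.
aws s3api get-bucket-policy --bucket example-bucket
```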

S3 - Revoking "full_control" permission from owned object

While writing an S3 server implementation, I ran into a question I can't really find an answer to anywhere.
Say, for example, that I'm the bucket owner, as well as the owner of the uploaded object.
If I revoke the "full_control" permission from the object owner (myself), will I still be able to access and modify that object?
What's the expected behaviour in the following example:
s3cmd setacl --acl-grant full_control:ownerID s3://bucket/object
s3cmd setacl --acl-revoke full_control:ownerID s3://bucket/object
s3cmd setacl --acl-grant read:ownerID s3://bucket/object
Thanks
Here is the official answer from AWS support:
The short answer for that question would be yes, the bucket/object
owner has permission to read and update the bucket/object ACL,
provided that there is no bucket policy attached that explicitly
removes these permissions from the owner. For example, the following
policy would prevent the owner from doing anything on the bucket,
including changing the bucket's ACL:
{
"Id": "Policy1531126735810",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Example bucket policy",
"Action": "s3:*",
"Effect": "Deny",
"Resource": "arn:aws:s3:::<bucket>",
"Principal": "*"
}
]
}
However, as root (bucket owner) you'd still have permission to delete
that policy, which would then restore your permissions as bucket owner
to update the ACL.
By default, all S3 resources, buckets, objects and subresources, are
private; only the resource owner, which is the AWS account that
created it, can access the resource[1]. As the resource owner (AWS
account), you can optionally grant permission to other users by
attaching an access policy to the users.
Example: let's say you created an IAM user called -S3User1-, and gave
it permission to create buckets in S3 and update their ACLs. The user in
question then goes ahead and creates a bucket, naming it
"s3user1-bucket". After that, he goes further and removes the List
objects, Write objects, Read bucket and Write bucket permissions
from the root account in the ACL section. At this point, if you log in
as root and attempt to read the objects in that bucket, an "Access
Denied" error will be thrown. However, as root you'll be able to go to
the "Permissions" section of the bucket and add these permissions
back.
These days it is recommended to use the official AWS Command-Line Interface (CLI) rather than s3cmd.
You should typically avoid using object-level permissions to control access. It is best to make them all "bucket-owner full control" and then use Bucket Policies to grant access to the bucket or a path.
If you wish to provide per-object access, it is recommended to use Amazon S3 pre-signed URLs, which give time-limited access to a private object. Once the time expires, the URL no longer works. Your application would be responsible for determining whether a user is permitted to access an object, and then generates the pre-signed URL (eg as a link or href on an HTML page).
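For example, a pre-signed URL can be generated from the CLI (a sketch; the bucket and key names are placeholders, and the URL is signed with whatever credentials the CLI is currently using):

```shell
# Generate a URL that grants access to the private object for one hour
# (3600 seconds). After that, requests to the URL are denied.
aws s3 presign s3://my-bucket/private/report.pdf --expires-in 3600
```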

How can AWS CloudFormation Lambda resource access code file in S3 if it is KMS encrypted?

My Lambda function deployment via CloudFormation works OK when the Lambda's code file in the S3 bucket is not encrypted, but fails when I use a KMS-encrypted code file.
I have an AWS CloudFormation stack that contains Lambda resources. My Python code ZIP file is in an S3 bucket. The Lambda resources in my CFN template contain a "Code" property that points to the S3Bucket and S3Key where the ZIP is located. The bucket policy allows my role the actions s3:GetObject, s3:PutObject, and s3:ListBucket. The stack build works fine when the code ZIP file is unencrypted. But when I use a KMS-encrypted ZIP file in the bucket, I get the error:
"Your access has been denied by S3, please make sure your request credentials have permission to GetObject for my-bucket/my-folder/sample.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied"
Do I need to enhance my S3 bucket policy to support accessing KMS encrypted files? How is that done? (The error message seems misleading, since my bucket policy already does allow my role GetObject access.) Thanks.
Since you are almost certain that the request is failing for encrypted objects, you have to give the role you are referring to permission to use the KMS CMK, and this must be done via the KMS key policy (and/or an IAM policy).
If you are using a customer managed CMK, you can add the IAM role as a Key User in the key policy. If you are using an AWS managed CMK (identifiable by the AWS icon), you can add a permission policy to the IAM role as follows:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": [
"kms:*"
],
"Resource": [
"arn:aws:kms:*:account_id:key/key_id"
]
}
}
Note:
The above policy allows all KMS API actions for the specific key, but you can tweak it to grant only the minimum required permissions.
For customer managed CMKs, it is also possible to manage access to the CMK via an IAM policy (along with the key policy). Since I don't know your key policy, I only included the option of managing it via the key policy itself.
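As a sketch of the minimum-permission version for this use case: downloading an SSE-KMS-encrypted object only requires kms:Decrypt on the key (account_id and key_id below are placeholders for your own values):

```json
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "kms:Decrypt",
    "Resource": "arn:aws:kms:*:account_id:key/key_id"
  }
}
```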

How to secure an S3 bucket to an Instance's Role?

Using cloudformation I have launched an EC2 instance with a role that has an S3 policy which looks like the following
{"Statement":[{"Action":"s3:*","Resource":"*","Effect":"Allow"}]}
In S3 the bucket policy is like so
{
"Version": "2012-10-17",
"Id": "MyPolicy",
"Statement": [
{
"Sid": "ReadAccess",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456678:role/Production-WebRole-1G48DN4VC8840"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::web-deploy/*"
}
]
}
When I log in to the instance and attempt to curl any object I upload into the bucket (without ACL modifications), I receive an Unauthorized 403 error.
Is this the correct way to restrict access to a bucket to only instances launched with a specific role?
The EC2 instance role is more than sufficient to put/read any of your S3 buckets, but you need to actually use the instance role, which is not done automatically by curl.
You should use, for example, aws s3 cp <local source> s3://<bucket>/<key>, which will automatically use the instance role.
There are three ways to grant access to an object in Amazon S3:
Object ACL: Specific objects can be marked as "Public", so anyone can access them.
Bucket Policy: A policy placed on a bucket to determine what access to Allow/Deny, either publicly or to specific Users.
IAM Policy: A policy placed on a User, Group or Role, granting them access to an AWS resource such as an Amazon S3 bucket.
If any of these policies grant access, the user can access the object(s) in Amazon S3. One exception is if there is a Deny policy, which overrides an Allow policy.
Role on the Amazon EC2 instance
You have granted this role to the Amazon EC2 instance:
{"Statement":[{"Action":"s3:*","Resource":"*","Effect":"Allow"}]}
This will provide credentials to the instance that can be accessed by the AWS Command-Line Interface (CLI) or any application using the AWS SDK. They will have unlimited access to Amazon S3 unless there is also a Deny policy that otherwise restricts access.
If anything, that policy is granting too much permission. It is allowing an application on that instance to do anything it wants to your Amazon S3 storage, including deleting it all! It is better to assign least privilege, only giving permission for what the applications need to do.
Amazon S3 Bucket Policy
You have also created a Bucket Policy, which allows anything that has assumed the Production-WebRole-1G48DN4VC8840 role to retrieve the contents of the web-deploy bucket.
It doesn't matter what specific permissions the role itself has -- this policy means that merely using the role to access the web-deploy bucket will allow it to read all files. Therefore, this policy alone would be sufficient for your requirement of granting bucket access to instances using the Role -- you do not also require the policy within the role itself.
So, why can't you access the content? Because a plain curl request does not identify your role/user. Amazon S3 receives the request and treats it as anonymous, thereby not granting access.
Try accessing the data via the CLI or programmatically via an SDK call. For example, this CLI command would download an object:
aws s3 cp s3://web-deploy/foo.txt foo.txt
The CLI will automatically grab credentials related to your role, allowing access to the objects.
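To confirm that the CLI on the instance is actually picking up the role's credentials, you can run the following (a sketch; the output fields will reflect your own account and role name):

```shell
# Shows which identity the CLI is using. On an instance with a role,
# the Arn should look like arn:aws:sts::<account>:assumed-role/<role>/<instance-id>
aws sts get-caller-identity
```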