AWS S3 Bucket Permissions - Access Denied - amazon-web-services

I am trying to give myself permission to download existing files in an S3 bucket. I've modified the Bucket Policy, as follows:
{
  "Sid": "someSID",
  "Action": "s3:*",
  "Effect": "Allow",
  "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
  "Principal": {
    "AWS": [
      "arn:aws:iam::123123123123:user/myuid"
    ]
  }
}
My understanding is that this addition to the policy should give me full rights to "bucketname" for my account "myuid", including all files that are already in that bucket. However, I'm still getting Access Denied errors when I try to download any of those files via the link that comes up in the console.
Any thoughts?

Step 1
Click on your bucket name, and under the permissions tab, make sure that Block new public bucket policies is unchecked
Step 2
Then you can apply your bucket policy
Hope that helps

David, You are right but I found that, in addition to what bennie said below, you also have to grant view (or whatever access you want) to 'Authenticated Users'.
But a better solution might be to edit the user's policy to just grant access to the bucket:
{
  "Statement": [
    {
      "Sid": "Stmt1350703615347",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": ["arn:aws:s3:::mybucket"],
      "Condition": {}
    }
  ]
}
The first block grants all S3 permissions to all elements within the bucket. The second block grants list permission on the bucket itself.
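The two-statement shape above can also be generated programmatically. This is a minimal sketch (the bucket name and Sid are placeholders, not values from the question) that builds the same document with Python's json module:

```python
import json

def user_bucket_policy(bucket: str) -> str:
    """Build the two-statement user policy described above:
    object-level permissions on the bucket contents plus
    s3:ListBucket on the bucket itself."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowObjectActions",  # placeholder Sid
                "Effect": "Allow",
                "Action": ["s3:*"],
                # Object-level actions match the /* resource.
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                # ListBucket matches the bucket ARN itself.
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
        ],
    }
    return json.dumps(policy, indent=2)

doc = user_bucket_policy("mybucket")
```

Keeping the two resources in separate statements mirrors how S3 evaluates them: object actions against `bucket/*`, bucket actions against the bare bucket ARN.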

Change the resource from arn:aws:s3:::bucketname/AWSLogs/123123123123/* to arn:aws:s3:::bucketname/* to get full rights to everything in bucketname.

To serve a static website from S3, use this bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::example-bucket/*"]
    }
  ]
}

Use the method below to upload any file as publicly readable using TransferUtility on Android.
transferUtility.upload(String bucketName, String key, File file, CannedAccessControlList cannedAcl)
Example
transferUtility.upload("MY_BUCKET_NAME", "FileName", your_file, CannedAccessControlList.PublicRead);

To clarify: it is really not documented well, but you need two access statements.
In addition to your statement that allows actions on resource "arn:aws:s3:::bucketname/AWSLogs/123123123123/*", you also need a second statement that allows s3:ListBucket on "arn:aws:s3:::bucketname", because internally the AWS client will try to list the bucket to determine it exists before doing its action.
With the second statement, it should look like:
"Statement": [
  {
    "Sid": "someSID",
    "Action": "ActionThatYouMeantToAllow",
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
    "Principal": {
      "AWS": [
        "arn:aws:iam::123123123123:user/myuid"
      ]
    }
  },
  {
    "Sid": "someOtherSID",
    "Action": "s3:ListBucket",
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::bucketname",
    "Principal": {
      "AWS": [
        "arn:aws:iam::123123123123:user/myuid"
      ]
    }
  }
]
Note: If you're using IAM, skip the "Principal" part.
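To see why both statements are needed, here is a hypothetical helper (the matcher is deliberately simplified; real IAM wildcard matching is richer) that mimics the evaluation: a download is checked against the object ARN, while the existence check is checked against the bucket ARN, which the object-level statement does not cover.

```python
def arn_matches(resource_arn: str, request_arn: str) -> bool:
    """Very simplified IAM-style matcher: supports only a
    trailing '*' wildcard on the resource ARN."""
    if resource_arn.endswith("*"):
        return request_arn.startswith(resource_arn[:-1])
    return request_arn == resource_arn

# The two resources from the policy above.
object_resource = "arn:aws:s3:::bucketname/AWSLogs/123123123123/*"
bucket_resource = "arn:aws:s3:::bucketname"

# GetObject is evaluated against the object ARN...
get_ok = arn_matches(object_resource,
                     "arn:aws:s3:::bucketname/AWSLogs/123123123123/log.gz")
# ...but ListBucket is evaluated against the bucket ARN, which the
# object-level statement does NOT match.
list_ok_via_object_stmt = arn_matches(object_resource,
                                      "arn:aws:s3:::bucketname")
list_ok_via_bucket_stmt = arn_matches(bucket_resource,
                                      "arn:aws:s3:::bucketname")
```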

If you have an encrypted bucket, you will also need KMS permissions (e.g. kms:Decrypt on the key) allowed.

Possible reason: if the files were put/copied by a user from another AWS account, you cannot access them, because that user, not you, is still the object owner. The AWS account user who placed the files in your bucket has to grant access during the put or copy operation.
For a put operation, the object owner can run this command:
aws s3api put-object --bucket destination_awsexamplebucket --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2 --acl bucket-owner-full-control
For a copy operation of a single object, the object owner can run this command (copy-object requires --copy-source):
aws s3api copy-object --bucket destination_awsexamplebucket --copy-source source_awsexamplebucket/myobject --key myobject --acl bucket-owner-full-control
ref : AWS Link
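The put side of this is easy to script. A sketch (bucket, key, and file names are the placeholders from the answer above) that assembles the CLI invocation so the bucket owner always gets full control:

```python
def put_object_cmd(bucket: str, key: str, body: str) -> list:
    """Assemble the `aws s3api put-object` argv with the
    bucket-owner-full-control canned ACL applied."""
    return [
        "aws", "s3api", "put-object",
        "--bucket", bucket,
        "--key", key,
        "--body", body,
        "--acl", "bucket-owner-full-control",
    ]

cmd = put_object_cmd("destination_awsexamplebucket",
                     "dir-1/my_images.tar.bz2",
                     "my_images.tar.bz2")
```

The argv could then be passed to subprocess.run; building it in one place keeps the ACL from being forgotten on some uploads.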

Making the bucket public just to apply a policy is NOT the right way.
It exposes your bucket to the public, even if only for a short amount of time.
You can face this error even with admin access (the root user will not face it).
According to the AWS documentation, you have to grant "s3:PutBucketPolicy" to your IAM user.
So simply attach an S3 policy to your IAM user, scoped to your bucket ARN to make it safer, and you won't have to make your bucket public again.

No one mentioned MFA. For AWS users who have MFA enabled, please use this:
aws s3 ls s3://bucket-name --profile mfa
And prepare the mfa profile first by running
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user-name --token-code 928371 --duration 129600 (replace 123456789012, user-name, and 928371).
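get-session-token prints the temporary credentials as JSON; a sketch (the mfa profile name follows the answer above; the credential values below are obviously fake) that turns that output into an ~/.aws/credentials profile section:

```python
import json

def mfa_profile_section(sts_output: str, profile: str = "mfa") -> str:
    """Convert `aws sts get-session-token` JSON output into a
    credentials-file section for the given profile name."""
    creds = json.loads(sts_output)["Credentials"]
    return (
        f"[{profile}]\n"
        f"aws_access_key_id = {creds['AccessKeyId']}\n"
        f"aws_secret_access_key = {creds['SecretAccessKey']}\n"
        f"aws_session_token = {creds['SessionToken']}\n"
    )

# Fake sample matching the shape of the real STS response.
sample = json.dumps({"Credentials": {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "secret",
    "SessionToken": "token",
    "Expiration": "2024-01-01T00:00:00Z",
}})
section = mfa_profile_section(sample)
```

Append the section to ~/.aws/credentials and the --profile mfa flag will pick it up.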

This can also happen if the encryption algorithm is missing from the S3 request parameters. If the bucket's default encryption is enabled, e.g. Amazon S3-managed keys (SSE-S3), you need to pass ServerSideEncryption: "AES256"|"aws:kms"|string in your request params.
const params = {
  Bucket: BUCKET_NAME,
  Body: content,
  Key: fileKey,
  ContentType: "audio/m4a",
  ServerSideEncryption: "AES256" // Here ..
}
await S3.putObject(params).promise()

Go to this link and generate a policy.
In the Principal field, enter *.
In Actions, select GetObject.
Give the ARN as arn:aws:s3:::<bucket_name>/*
Then add the statement and generate the policy; you will get a JSON document. Copy it and paste it into the Bucket Policy.
For more details go here.

Related

Can't copy from an S3 bucket in another account

Added an update (an EDIT) at the bottom
Info
I have two AWS accounts. One with an S3 bucket and a second one that needs access to it.
On the account with the S3 bucket, the bucket policy looks like this:
{
  "Sid": "DelegateS3ToSecAcc",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::Second-AWS-ACC-ID:root"
  },
  "Action": [
    "s3:List*",
    "s3:Get*"
  ],
  "Resource": [
    "arn:aws:s3:::BUCKET-NAME/*",
    "arn:aws:s3:::BUCKET-NAME"
  ]
},
In the second account, that tries to get the file from S3, I've attached the following IAM Policy (There are other policies too but this should give it access):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3-object-lambda:Get*",
        "s3-object-lambda:List*"
      ],
      "Resource": "*"
    }
  ]
}
Problem
Despite everything, when I run the following command:
aws s3 cp s3://BUCKET-NAME/path/to/file/copied/from/URI.txt .
I get:
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Did I do something wrong? What did I miss? All the web results suggested making sure the bucket policy has /* and that the IAM policy allows S3 access, but both are already there.
EDIT: aws s3 ls works on the file! It means it just relates to permissions somehow. It works from another AWS that may have uploaded the file. Just need to figure out how to open it up.
The aws s3 cp command does lots of extra work, including (it seems) calling head-object.
Try calling the plain S3 API instead (note that get-object takes an output filename, not a directory):
aws s3api get-object --bucket BUCKET-NAME --key path/to/file/copied/from/URI.txt URI.txt
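The translation from the s3:// URI that aws s3 cp accepts to the --bucket/--key pair that s3api wants can be sketched with the standard library:

```python
from urllib.parse import urlparse

def split_s3_uri(uri: str):
    """Split an s3://bucket/key/path URI into a (bucket, key) pair,
    as needed for `aws s3api get-object`."""
    parsed = urlparse(uri)
    if parsed.scheme != "s3":
        raise ValueError(f"not an S3 URI: {uri}")
    # netloc is the bucket; the path (minus leading slash) is the key.
    return parsed.netloc, parsed.path.lstrip("/")

bucket, key = split_s3_uri("s3://BUCKET-NAME/path/to/file/copied/from/URI.txt")
```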

AWS upload file to S3 Access Denied with user who has full s3 access - React Native

I'm trying to use the react-native-s3-upload package to upload files to an S3 bucket in my React Native App. This only works if I set "Block public access" to 'off' in S3. Otherwise I get <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>. The access key and secret key provided with put requests are for an IAM user that belongs to a group with AmazonS3FullAccess. I also have this policy attached to the bucket:
{
  "Version": "2012-10-17",
  "Id": "Policyxxxxxxx",
  "Statement": [
    {
      "Sid": "Stmtxxxxxxx",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxx:user/<user name>"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::<bucket name>"
    }
  ]
}
I've tried all sorts of solutions but nothing seems to work. If I replace the secret key and access keys with dummy text then it returns <Error><Code>InvalidAccessKeyId</Code><Message> so it's definitely signing me in with the keys but seems to be ignoring the permissions.
If your IAM user has AmazonS3FullAccess policy, it should connect to bucket just fine.
I think the problem is that the default object ACL is public:
https://www.npmjs.com/package/react-native-s3-upload
acl - The Access Control List of this object. Defaults to public-read
You need to set it to private.

AWS Lambda put data to cross account s3 bucket

Here is what I am trying to do:
I have access logs in account A which are encrypted by default by AWS, and I have a lambda and an s3 bucket in account B. I want to trigger the lambda when a new object lands in the account A s3 bucket, and have the lambda in account B download the data and write it to the account B s3 bucket. Below are the roadblocks I am facing.
First approach:
I was able to get the trigger from an account A s3 new object to the lambda in account B; however, the lambda in account B is not able to download the object (Access Denied error). After looking into it for a couple of days, I figured out that it is because the access logs are encrypted by default, and there is no way I can add the lambda role to the encryption key policy so that it can decrypt the log files. So I moved on to the second approach.
Second approach:
I have moved my lambda to account A. Now the source s3 bucket and lambda are in account A and the destination s3 bucket is in account B. Now I can process the access logs in account A via the lambda in account A, but when it writes the file to the account B s3 bucket, I get an Access Denied error when downloading/reading the file.
Lambda role policy:
In addition to full s3 access and full lambda access.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1574387531641",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    },
    {
      "Sid": "Stmt1574387531642",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::Account-B-bucket",
        "arn:aws:s3:::Account-B-bucket/*"
      ]
    }
  ]
}
Trust relationship
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com",
        "AWS": "arn:aws:iam::Account-B-ID:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Destination - Account B s3 bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::Account-A-ID:role/service-role/lambda-role"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::Account-B-Bucket",
        "arn:aws:s3:::Account-B-Bucket/*"
      ]
    },
    {
      "Sid": "Stmt11111111111111",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::Account-A-ID:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::Account-B-Bucket",
        "arn:aws:s3:::Account-B-Bucket/*"
      ]
    }
  ]
}
I am stuck here. I want lambda to be able to decrypt the access logs and read/process the data and write it to different account s3 bucket. Am I missing something? Help is much appreciated!
Adding file metadata:
File property screenshot
Lambda Code:
import io

import boto3

s3 = boto3.client('s3')

# Reading access logs from account A. Lambda is also running in account A.
response = s3.get_object(Bucket=access_log_bucket, Key=access_log_key)
body = response['Body']
content = io.BytesIO(body.read())

# Processing access logs
processed_content = process_it(content)

# Writing to account B s3 bucket
s3.put_object(Body=processed_content,
              Bucket=processed_bucket,
              Key=processed_key)
Rather than downloading and then uploading the object, I would recommend that you use the copy_object() command.
The benefit of using copy_object() is that the object will be copied directly by Amazon S3, without the need to first download the object.
When doing so, the credentials you use must have read permissions on the source bucket and write permissions on the destination bucket. (However, if you are 'processing' the data, this of course won't apply.)
As part of this command, you can specify an ACL:
ACL='bucket-owner-full-control'
This is required because the object is being written from credentials in Account A to a bucket owned by Account B. Using bucket-owner-full-control will pass control of the object to Account B. (It is not required if using credentials from Account B and 'pulling' an object from Account A.)
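A sketch of the copy_object() call described above, written as a kwargs builder so the ACL cannot be forgotten (bucket and key names are placeholders; nothing here actually talks to AWS):

```python
def build_copy_kwargs(src_bucket: str, src_key: str,
                      dest_bucket: str, dest_key: str) -> dict:
    """Build keyword arguments for boto3's s3.copy_object so the
    destination bucket's owner receives full control of the copy."""
    return {
        "CopySource": {"Bucket": src_bucket, "Key": src_key},
        "Bucket": dest_bucket,
        "Key": dest_key,
        # Passes ownership/control of the object to Account B.
        "ACL": "bucket-owner-full-control",
    }

kwargs = build_copy_kwargs("account-a-logs", "AWSLogs/log.gz",
                           "account-b-bucket", "processed/log.gz")
# Then, with an s3 client: s3.copy_object(**kwargs)
```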
Thanks John Rotenstein for the direction. I found the solution: I only needed to add ACL='bucket-owner-full-control' in put_object. Below is the complete boto3 call.
s3.put_object(
    ACL='bucket-owner-full-control',
    Body=processed_content,
    Bucket=processed_bucket,
    Key=processed_key)

Granting write access for the Authenticated Users to S3 bucket

I want to give read access to all AWS authenticated users to a bucket. Note I don't want my bucket to be publicly available. Old amazon console seems to give that provision which I no longer see -
Old S3 bucket ACL -
New bucket Acl -
How can I achieve the old behavior? Can I do it using bucket policies?
Again, I don't want this (a public policy):
{
  "Id": "Policy1510826508027",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1510826503866",
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::athakur",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
That support is removed in the new S3 console and has to be set via ACL.
You can use the put-bucket-acl API to set Any Authenticated AWS User as grantee.
The grantee for this is:
<Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>http://acs.amazonaws.com/groups/global/AuthenticatedUsers</URI></Grantee>
Refer to http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTacl.html for more info.
We can give entire ACL string in the aws cli command as ExploringApple explained or just do -
aws s3api put-bucket-acl --bucket bucketname --grant-full-control uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers
Docs - http://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-acl.html

Amazon s3 – 403 Forbidden with Correct Bucket Policy

I'm trying to make all of the images I've stored in my s3 bucket publicly readable, using the following bucket policy.
{
  "Id": "Policy1380877762691",
  "Statement": [
    {
      "Sid": "Stmt1380877761162",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::<bucket-name>/*",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
I have 4 other similar S3 buckets with the same bucket policy, but I keep getting 403 errors.
The images in this bucket were transferred using s3cmd sync, as I'm trying to migrate the contents of the bucket to a new account.
The only differences that I can see are that:
I'm using an IAM user with admin access, instead of the root user
the files don't have a "grantee: everyone open/download file" permission on each of the files, something the files had in the old bucket
If you want everyone to access your S3 objects in the bucket, the principal should be "*", i.e., like this:
{
  "Id": "Policy1380877762691",
  "Statement": [
    {
      "Sid": "Stmt1380877761162",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::<bucket-name>/*",
      "Principal": "*"
    }
  ]
}
Source: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html#Principal
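As a sketch, the fix can be applied by generating the policy with Principal as a plain string (the bucket name below is a placeholder):

```python
import json

def public_read_policy(bucket: str) -> str:
    """Build a bucket policy that lets anyone read objects. Note that
    Principal is the string "*", not a nested {"AWS": ["*"]} block
    (both are accepted, but "*" is the simplest correct form)."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        }],
    }, indent=2)

doc = public_read_policy("examplebucket")
```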
I've managed to solve it by running the s3cmd command again, but adding --acl-public to the end of it. That seems to have fixed my issue.
I know this is an old question, but for whoever is having this issue while working from the AWS Console: go to the bucket in the AWS S3 console:
Open the Permissions tab.
Open Public access settings.
Click Edit.
Then on the editing page:
Uncheck Block new public bucket policies (Recommended)
Uncheck Block public and cross-account access if bucket has public policies (Recommended)
Click Save.
CAUTION
PLEASE NOTE THAT THIS WILL MAKE YOUR BUCKET ACCESSIBLE TO ANYONE ON THE INTERNET. EVEN IF THEY DO NOT HAVE AN AWS ACCOUNT, THEY CAN STILL ACCESS THE BUCKET AND ITS CONTENTS. PLEASE HANDLE WITH CAUTION!
From AWS Documentation
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}
Not sure if the order of the attributes matters here. I would give this one a try.