I have two AWS accounts. Account A has an S3 bucket ('BUCKET') into which I put files using the Java API, and I've configured the bucket policy to allow cross-account file publishing.
But when I try to open a file from Account A, I get AccessDenied (Access Denied), along with a HostId and RequestId.
The file is published through Account B using the Java API, and the upload itself succeeds: the object shows the expected size, and when I changed the file size the new size appeared in the AWS S3 console.
Here is my bucket policy:
{
  "Version": "2008-10-17",
  "Id": "Policy1357935677554",
  "Statement": [
    {
      "Sid": "Stmt1357935647218",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTB:user/accountb-user"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucket-name"
    },
    {
      "Sid": "Stmt1357935676138",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTB:user/accountb-user"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    },
    {
      "Sid": "Stmt1357935676138",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTB:user/accountb-user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket-name/*"
    },
    {
      "Sid": "Stmt1357935647218",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTA:user/accounta-user"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucket-name"
    },
    {
      "Sid": "Stmt1357935676138",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTA:user/accounta-user"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket-name/*"
    },
    {
      "Sid": "Stmt1357935676138",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTA:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
The problem is that when I try to download or open this file from Account A, I am not able to open it.
The problem is that, by default, when the AWS CLI or an SDK uploads an object, the S3 object ACL grants access only to the uploading account.
In that case, to allow the bucket owner to read the uploaded file, the uploader has to explicitly grant access to the owner of the bucket during the upload. For example:
with the AWS CLI (documentation here): aws s3api put-object --bucket <bucketname> --key <filename> --acl bucket-owner-full-control
with the Node.js API (documentation here): set the params.ACL property passed to the AWS.S3.upload method to "bucket-owner-full-control"
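The question's uploads go through the Java API, and the same idea applies there. As a minimal sketch in boto3 (the SDK used elsewhere on this page), with placeholder file, bucket, and key names:

import boto3

s3 = boto3.client("s3")

# Upload from Account B, granting the bucket owner (Account A) full
# control via the canned ACL so the owner can read the object afterwards.
s3.upload_file(
    "/tmp/report.csv",      # placeholder local file
    "bucket-name",          # Account A's bucket (placeholder)
    "reports/report.csv",   # placeholder key
    ExtraArgs={"ACL": "bucket-owner-full-control"},
)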
In parallel, you can also use the bucket policy to ensure that uploads grant the bucket owner full control (additional documentation here):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Grant Owner Full control dev",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTB:root"
      },
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
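For completeness, a minimal boto3 sketch of how the bucket owner might attach the policy above (the local filename is hypothetical):

import boto3

s3 = boto3.client("s3")

# Attach the policy document shown above, saved locally as
# require-owner-full-control.json (a hypothetical filename).
with open("require-owner-full-control.json") as f:
    policy_json = f.read()

s3.put_bucket_policy(Bucket="bucket-name", Policy=policy_json)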
I would like to grant AWS Config access to an Amazon S3 bucket. The policy below is what I wrote, following https://docs.aws.amazon.com/config/latest/developerguide/s3-bucket-policy.html:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSConfigBucketPermissionsCheck",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "config.amazonaws.com"
        ]
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::targetBucketName"
    },
    {
      "Sid": "AWSConfigBucketExistenceCheck",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "config.amazonaws.com"
        ]
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::targetBucketName"
    },
    {
      "Sid": "AWSConfigBucketDelivery",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "config.amazonaws.com"
        ]
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::targetBucketName/[optional] prefix/AWSLogs/sourceAccountID-WithoutHyphens/Config/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
I want to know about the part shown below:
"Resource": "arn:aws:s3:::targetBucketName/[optional] prefix/AWSLogs/sourceAccountID-WithoutHyphens/Config/*",
What is the meaning of the prefix, and how do I fill in the prefix/AWSLogs/sourceAccountID-WithoutHyphens/Config/* part of the policy?
Thanks.
The prefix is whatever you define when you configure Config delivery to S3, and it is optional. Config writes its logs to the S3 bucket using a standard key format: prefix/AWSLogs/sourceAccountID-WithoutHyphens/Config/*.
If you configure Config delivery to S3 from the console, you won't have to worry about the bucket policy, as it is created automatically; you simply give the bucket name and an optional prefix.
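If you set up the delivery channel yourself with the SDK or CLI, the prefix is simply the s3KeyPrefix you pass in. A minimal boto3 sketch, with placeholder names:

import boto3

config = boto3.client("config")

# The optional prefix in the bucket policy corresponds to s3KeyPrefix here.
# With these placeholder values, Config delivers objects under keys like
# my/prefix/AWSLogs/<sourceAccountID>/Config/...
config.put_delivery_channel(
    DeliveryChannel={
        "name": "default",
        "s3BucketName": "targetBucketName",
        "s3KeyPrefix": "my/prefix",
    }
)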
I am the owner of AWS AccountC and need List and Get permissions on BucketName, which is owned by another person/team.
The bucket policy that was created is attached below. The statements for AccountA and AccountB already existed; I added the statements for AccountC as given below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AccessA",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::AccountA:root",
          "arn:aws:iam::AccountA:user/ABC-Prod"
        ]
      },
      "Action": [
        "s3:GetObject",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::BucketName/*",
        "arn:aws:s3:::BucketName"
      ]
    },
    {
      "Sid": "AccessB",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::AccountB:user/service-user",
          "arn:aws:iam::AccountB:role/BatchUserRole"
        ]
      },
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::BucketName/*",
        "arn:aws:s3:::BucketName"
      ]
    },
    {
      "Sid": "AccessC",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AccountC:root"
      },
      "Action": "s3:List*",
      "Resource": "arn:aws:s3:::BucketName"
    },
    {
      "Sid": "AccessD",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AccountC:root"
      },
      "Action": "s3:Get*",
      "Resource": "arn:aws:s3:::BucketName/*"
    }
  ]
}
I am able to list the contents of BucketName using
aws s3 ls s3://BucketName
However, when I try
aws s3 cp --recursive s3://BucketName/folderName/ .
it gives me an Access Denied error:
An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
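For reference, the same behaviour reproduced with boto3 (a minimal sketch; the object key is a placeholder): the list call succeeds, while the get call is denied.

import boto3

s3 = boto3.client("s3")

# aws s3 ls -> ListObjectsV2 (bucket-level s3:List*): this succeeds
resp = s3.list_objects_v2(Bucket="BucketName", Prefix="folderName/")
print([obj["Key"] for obj in resp.get("Contents", [])])

# aws s3 cp -> GetObject (object-level s3:Get*): this is denied
obj = s3.get_object(Bucket="BucketName", Key="folderName/some-file.txt")  # placeholder key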
Block Public Access is enabled on the bucket, but I believe it should not have an effect here, since access is granted through the bucket policy.
I've tried multiple ways of writing the policy, but the error persists. Can someone please help me understand what I might be missing here? I would be really grateful.
I have a lambda function using a role with the following policy excerpt
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::ipwl-lambda-config/*",
    "arn:aws:s3:::ipwl-lambda-config"
  ]
}
My bucket policy looks like the following
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::ipwl-lambda-config/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    },
    {
      "Sid": "AllowLambda",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::accountid:role/iam_for_lambda"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::ipwl-lambda-config/*",
        "arn:aws:s3:::ipwl-lambda-config"
      ]
    }
  ]
}
I've allowed GetObject and ListBucket on both the role and the bucket policy. However when my function runs
s3_obj = s3_res.Object(s3_bucket, s3_object)
I get
[ERROR] ClientError: An error occurred (AccessDenied) when calling the
GetObject operation: Access Denied
What more permissions do I have to add? The object is there, I can get it when I run the code locally using an admin role.
Update
I've checked to make sure the bucket and object names are correct dozens of times. The exception is actually coming from the second line here according to the stacktrace
s3_res = boto3.resource('s3')
s3_obj = s3_res.Object(s3_bucket, s3_object)
data = s3_obj.get()['Body'].read()
KMS should only be a factor for PutObject. We have a support account so I may check with them and update with their findings.
To download a KMS-encrypted object from S3, you not only need to be able to get the object; you also need permission to use the AWS KMS key that encrypted it (kms:Decrypt).
Here's an example of an IAM policy that your Lambda function's role should have:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "s3get",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::ipwl-lambda-config/*"
    },
    {
      "Sid": "kmsdecrypt",
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:example-region-1:123456789012:key/example-key-id"
    }
  ]
}
The KMS key policy also needs to allow the IAM role to decrypt with the key, something like this:
{
  "Sid": "kmsdecrypt",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::123456789012:role/xyz"
  },
  "Action": "kms:Decrypt",
  "Resource": "*"
}
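Once both permissions are in place, your existing code should work unchanged, because S3 decrypts SSE-KMS objects transparently on GetObject. A minimal sketch (the key name is hypothetical):

import boto3

s3 = boto3.client("s3")

# No decryption code is needed: with s3:GetObject plus kms:Decrypt on the
# key that encrypted the object, S3 returns the plaintext body.
obj = s3.get_object(Bucket="ipwl-lambda-config", Key="some-config.json")  # hypothetical key
data = obj["Body"].read()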
Here is my policy, which grants read/write access, yet I am still not able to write to the S3 bucket.
Problem
I am still getting the error below:
Failed to upload /tmp/test.txt to bucketname/Automation_Result_2019-07-09 04:20:32_.csv: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ConsoleAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetAccountPublicAccessBlock",
        "s3:GetBucketAcl",
        "s3:GetBucketLocation",
        "s3:GetBucketPolicyStatus",
        "s3:GetBucketPublicAccessBlock",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": [
        "arn:aws:s3:::bucketname"
      ]
    },
    {
      "Sid": "AllObjectActions",
      "Effect": "Allow",
      "Action": "s3:*Object",
      "Resource": [
        "arn:aws:s3:::bucketname/*"
      ]
    }
  ]
}
Bucket policy
{
  "Version": "2012-10-17",
  "Id": "MYBUCKETPOLICY",
  "Statement": [
    {
      "Sid": "DenyIncorrectEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket-name/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    },
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket-name/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    }
  ]
}
Python code (within Lambda function)
Relevant part of code
import boto3
from botocore.client import Config

# EST is a timestamp string defined earlier in the function (not shown)
s3 = boto3.resource('s3', config=Config(signature_version='s3v4'))
target_bucket = 'bucket-name'
target_file = "Output/Automation_Result_" + EST + "_.txt"
s3.meta.client.upload_file('/tmp/test.txt', target_bucket, target_file,
                           ExtraArgs={"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": "XXXXXXX-XXXX-XXXX"})
This is what my bucket's public access settings look like!
It works fine for me!
I took your policy, renamed the bucket and attached it to a user as their only policy.
I was then able to successfully copy an object to and from the bucket.
If it is not working for you, then either you are not using the credentials that are associated with this policy, or there is another policy that is preventing the access, such as a Deny policy or a scope-limiting policy.
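One quick check for the first case (a minimal sketch): print the identity your code or CLI is actually using and confirm it is the user this policy is attached to.

import boto3

# Equivalent to `aws sts get-caller-identity` on the CLI.
sts = boto3.client("sts")
identity = sts.get_caller_identity()
print(identity["Arn"])      # the user/role actually making the calls
print(identity["Account"])  # the account those credentials belong to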
Is it possible to create an S3 bucket policy that allows read and write access from Amazon Machine Learning? I tried the bucket policy below, but it does not work.
Bucket policy:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "machinelearning.amazonaws.com"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::cxra/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "machinelearning.amazonaws.com"
      },
      "Action": "s3:PutObjectAcl",
      "Resource": "arn:aws:s3:::cxra/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "machinelearning.amazonaws.com"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::cxra"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "machinelearning.amazonaws.com"
      },
      "Action": [
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::cxra"
    }
  ]
}
Error screen
Double-check your policy against Granting Amazon ML Permissions to Output Predictions to Amazon S3 and Granting Amazon ML Permissions to Read Your Data from Amazon S3.
As they are documented as two separate policies, I suggest you follow that rather than joining the two into a single policy document. It will be easier to troubleshoot.
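A quick way to compare what is actually attached to the bucket against those two documented policies (a minimal boto3 sketch, using the bucket name from the question):

import json

import boto3

s3 = boto3.client("s3")

# Fetch and pretty-print the current bucket policy so it can be compared
# statement by statement with the documented Amazon ML policies.
current = json.loads(s3.get_bucket_policy(Bucket="cxra")["Policy"])
print(json.dumps(current, indent=2))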