How to resolve access denied after saving a bad bucket policy?

Using Terraform I've set up my stack. I just altered the bucket policy and applied it, but now I've found that the policy denies all actions, including managing the bucket and altering the policy itself.
How might I update the policy so I can delete the bucket?
I can no longer access the bucket policy, but what was applied is still in my Terraform state. If I attempt a destroy on the bucket, it reveals the following (I've masked IDs and the account).
The following is just a sample, as there are 5 action blocks and each contains a dozen user IDs.
      - Statement = [
          - {
              - Action = [
                  - "s3:ListBucketVersions",
                  - "s3:ListBucketMultipartUploads",
                  - "s3:ListBucket",
                ]
              - Condition = {
                  - StringLike = {
                      - aws:userid = [
                          - "AROAXXXXXXXXXXXXXXXXA:*",
                          - "AROAXXXXXXXXXXXXXXXXB:*",
                        ]
                    }
                  - StringNotLike = {
                      - aws:userid = [
                          - "*:AROAAXXXXXXXXXXXXXXXA:user1",
                          - "*:AROAAXXXXXXXXXXXXXXXA:user2",
                          - "*:AROAAXXXXXXXXXXXXXXXA:*",
                        ]
                    }
                }
              - Effect = "Deny"
              - Principal = "*"
              - Resource = "arn:aws:s3:::my-account-bucket-name"
              - Sid = "Deny bucket-level read operations except for authorised users"
            },

Based on the comments: it seems the new policy resulted in denying access to everyone. In such cases, AWS explains what to do in a post titled:
I accidentally denied everyone access to my Amazon S3 bucket. How do I regain access?
The process involves signing in to the account as the root user and deleting the bucket policy.
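For reference, once signed in as the root user, the offending policy can be removed from the S3 console or with the AWS CLI. A minimal sketch, assuming root credentials are configured for the CLI and using the masked bucket name from the plan output above:
aws s3api delete-bucket-policy --bucket my-account-bucket-name
Once the policy is gone, the bucket (and the stale aws_s3_bucket_policy entry in the Terraform state) can be cleaned up again with your normal credentials.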

Related

AWS DMS replication task from Postgres RDS to Redshift getting AccessDenied on S3 bucket

We have deployed a DMS replication task to replicate our entire Postgres database to Redshift. The tables are getting created with the correct schemas, but the data isn't coming through to Redshift; it's getting held up in the S3 bucket DMS uses as an intermediate step. This is all deployed via Terraform.
We've configured the IAM roles as described in the replication instance Terraform docs, with all three of the dms-access-for-endpoint, dms-cloudwatch-logs-role, and dms-vpc-role IAM roles created. The IAM roles are deployed via a different stack from the one DMS is deployed from, as the roles are also used by another, successfully deployed, DMS instance running a different task.
data "aws_iam_policy_document" "dms_assume_role_document" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      identifiers = [
        "s3.amazonaws.com",
        "iam.amazonaws.com",
        "redshift.amazonaws.com",
        "dms.amazonaws.com",
        "redshift-serverless.amazonaws.com"
      ]
      type = "Service"
    }
  }
}
# Database Migration Service requires the below IAM Roles to be created before
# replication instances can be created. See the DMS Documentation for
# additional information: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.APIRole
# * dms-vpc-role
# * dms-cloudwatch-logs-role
# * dms-access-for-endpoint
resource "aws_iam_role" "dms_access_for_endpoint" {
  name                  = "dms-access-for-endpoint"
  assume_role_policy    = data.aws_iam_policy_document.dms_assume_role_document.json
  managed_policy_arns   = ["arn:aws:iam::aws:policy/service-role/AmazonDMSRedshiftS3Role"]
  force_detach_policies = true
}

resource "aws_iam_role" "dms_cloudwatch_logs_role" {
  name                  = "dms-cloudwatch-logs-role"
  description           = "Allow DMS to manage CloudWatch logs."
  assume_role_policy    = data.aws_iam_policy_document.dms_assume_role_document.json
  managed_policy_arns   = ["arn:aws:iam::aws:policy/service-role/AmazonDMSCloudWatchLogsRole"]
  force_detach_policies = true
}

resource "aws_iam_role" "dms_vpc_role" {
  name                  = "dms-vpc-role"
  description           = "DMS IAM role for VPC permissions"
  assume_role_policy    = data.aws_iam_policy_document.dms_assume_role_document.json
  managed_policy_arns   = ["arn:aws:iam::aws:policy/service-role/AmazonDMSVPCManagementRole"]
  force_detach_policies = true
}
However, at runtime, we're seeing the following logs in CloudWatch:
2022-09-01T16:51:38 [SOURCE_UNLOAD ]E: Not retriable error: <AccessDenied> Access Denied [1001705] (anw_retry_strategy.cpp:118)
2022-09-01T16:51:38 [SOURCE_UNLOAD ]E: Failed to list bucket 'dms-sandbox-redshift-intermediate-storage': error code <AccessDenied>: Access Denied [1001713] (s3_dir_actions.cpp:105)
2022-09-01T16:51:38 [SOURCE_UNLOAD ]E: Failed to list bucket 'dms-sandbox-redshift-intermediate-storage' [1001713] (s3_dir_actions.cpp:209)
We also enabled S3 server access logs on the bucket itself to see whether this would give us more information. This is what we're seeing (anonymised):
<id> dms-sandbox-redshift-intermediate-storage [01/Sep/2022:15:43:32 +0000] 10.128.69.80 arn:aws:sts::<account>:assumed-role/dms-access-for-endpoint/dms-session-for-replication-engine <code> REST.GET.BUCKET - "GET /dms-sandbox-redshift-intermediate-storage?delimiter=%2F&max-keys=1000 HTTP/1.1" 403 AccessDenied 243 - 30 - "-" "aws-sdk-cpp/1.8.80/S3/Linux/4.14.276-211.499.amzn2.x86_64 x86_64 GCC/4.9.3" - <code> SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader s3.eu-west-2.amazonaws.com TLSv1.2 -
The above suggests that the assumed-role session dms-session-for-replication-engine (under the dms-access-for-endpoint role) is what is receiving the AccessDenied responses, but we're unable to pinpoint what this is or how to fix it.
We attempted to add a bucket policy to the S3 bucket itself but this did not work (this also includes the S3 server access logs bucket):
resource "aws_s3_bucket" "dms_redshift_intermediate" {
  # Prefixed with `dms-` as that's what the AmazonDMSRedshiftS3Role policy filters on
  bucket = "dms-sandbox-redshift-intermediate-storage"
}

resource "aws_s3_bucket_logging" "log_bucket" {
  bucket        = aws_s3_bucket.dms_redshift_intermediate.id
  target_bucket = aws_s3_bucket.log_bucket.id
  target_prefix = "log/"
}

resource "aws_s3_bucket" "log_bucket" {
  bucket = "${aws_s3_bucket.dms_redshift_intermediate.id}-logs"
}

resource "aws_s3_bucket_acl" "log_bucket" {
  bucket = aws_s3_bucket.log_bucket.id
  acl    = "log-delivery-write"
}

resource "aws_s3_bucket_policy" "dms_redshift_intermediate_policy" {
  bucket = aws_s3_bucket.dms_redshift_intermediate.id
  policy = data.aws_iam_policy_document.dms_redshift_intermediate_policy_document.json
}

data "aws_iam_policy_document" "dms_redshift_intermediate_policy_document" {
  statement {
    actions = [
      "s3:*"
    ]
    principals {
      identifiers = [
        "dms.amazonaws.com",
        "redshift.amazonaws.com"
      ]
      type = "Service"
    }
    resources = [
      aws_s3_bucket.dms_redshift_intermediate.arn,
      "${aws_s3_bucket.dms_redshift_intermediate.arn}/*"
    ]
  }
}
How do we fix the <AccessDenied> issues that we're seeing in CloudWatch and enable data loading into Redshift? DMS is able to PUT items in the S3 bucket, as we're seeing encrypted CSVs appearing in there (the server access logs also confirm this), but DMS is unable to then GET the files back out of it for Redshift. The AccessDenied responses also suggest that it's an IAM role issue rather than a security group issue, but our IAM roles are configured as per the docs, so we're confused as to what could be causing this.
What we thought was an IAM issue was actually a security group issue. The Redshift COPY command was struggling to access S3. By adding a port 443 (HTTPS) egress rule to the Redshift security group, we were able to pull data through again:
resource "aws_security_group_rule" "https_443_egress" {
  type              = "egress"
  description       = "Allow HTTPS egress from the Redshift SG"
  protocol          = "tcp"
  to_port           = 443
  from_port         = 443
  security_group_id = aws_security_group.redshift.id
  cidr_blocks       = ["0.0.0.0/0"]
}
So if you're experiencing the same issue as in the question, check whether Redshift has access to S3 via HTTPS.
You are right that this is an IAM role issue. Make sure the role in question has the following statements added to its policy document:
{
    "Effect": "Allow",
    "Action": [
        "s3:*"
    ],
    "Resource": "arn:aws:s3:::<yourbucketnamehere>/*"
},
{
    "Effect": "Allow",
    "Action": [
        "s3:ListBucket"
    ],
    "Resource": "arn:aws:s3:::<yourbucketnamehere>"
},
{
    "Effect": "Allow",
    "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
    ],
    "Resource": "arn:aws:s3:::*"
}
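Since the roles in the question are managed in Terraform, one way to wire these statements in is an inline role policy on dms-access-for-endpoint. A minimal sketch reusing the resource names from the question (note that the roles and the bucket live in different stacks there, so you may need data lookups instead of direct references):
data "aws_iam_policy_document" "dms_s3_access" {
  statement {
    actions   = ["s3:*"]
    resources = ["${aws_s3_bucket.dms_redshift_intermediate.arn}/*"]
  }

  statement {
    actions   = ["s3:ListBucket"]
    resources = [aws_s3_bucket.dms_redshift_intermediate.arn]
  }

  statement {
    actions   = ["s3:ListAllMyBuckets", "s3:GetBucketLocation"]
    resources = ["arn:aws:s3:::*"]
  }
}

resource "aws_iam_role_policy" "dms_s3_access" {
  name   = "dms-s3-access"
  role   = aws_iam_role.dms_access_for_endpoint.id
  policy = data.aws_iam_policy_document.dms_s3_access.json
}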

How to allow or prevent AWS IAM users from running SendCommand?

I have created an aws_iam_policy_document with SSM and EC2 actions:
data "aws_iam_policy_document" "AdminEc2Actions" {
  statement {
    effect = "Allow"
    sid    = "sid1"
    actions = [
      "ssm:SendCommand",
      "ec2:Start*",
      "ssm:StartSession"
    ]
    resources = [
      "*"
    ]
  }
}
# Create IAM policy for ec2
resource "aws_iam_policy" "AdminEc2Actions" {
  name        = "test-AdminEc2Actions-policies"
  description = "ec2 policies"
  path        = "/"
  policy      = data.aws_iam_policy_document.AdminEc2Actions.json
}

# Attaches the customer-managed IAM policy to an IAM admins group
resource "aws_iam_group_policy_attachment" "AdminEc2Actions" {
  group      = aws_iam_group.groups[0].name
  policy_arn = aws_iam_policy.AdminEc2Actions.arn
}
But, for some reason, changes to the actions are sometimes taken into account and sometimes ignored. E.g. when I run this command, I get:
[ssm-user@ip-xxx.yy.zz.1]$ aws s3 mv /tmp/file* s3://test-dent-backup1/daily/
move failed: ../../tmp/file1.txt to s3://test-dent-backup1/daily/file1.txt [Errno 1] Operation not permitted: '/tmp/file1.txt'
Any suggestions?

Append principal to bucket policy document instead of overwriting it

I'm developing an SPA with several envs: dev, preprod, prod.
Each env has a corresponding CloudFront distribution and website bucket.
We also have a static website with the user manual, served on the /documentation/* behavior.
This static website is stored in a separate bucket.
All environments share the same documentation, so there is only one bucket for all envs.
The project is a company portal, so the user documentation should not be publicly accessible.
To achieve that, we are using an OAI, so the bucket is accessible only through CloudFront (a Lambda@Edge function ensures the user has a valid token and redirects them otherwise, so the documentation is private).
Everything is fine when I deploy on dev using
terraform workspace select dev
terraform apply -var-file=dev.tfvars
But when I try to deploy on preprod
terraform workspace select preprod
terraform apply -var-file=preprod.tfvars
Terraform changes the OAI ID this way:
  # module.s3.aws_s3_bucket_policy.documentation_policy will be updated in-place
  ~ resource "aws_s3_bucket_policy" "documentation_policy" {
        bucket = "my-bucket"
      ~ policy = jsonencode(
          ~ {
              ~ Statement = [
                  ~ {
                        Action    = "s3:GetObject"
                        Effect    = "Allow"
                      ~ Principal = {
                          ~ AWS = "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3U64NEVQ9IQHH" -> "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3ORU58OAALJAP"
                        }
                        Resource  = "arn:aws:s3:::my-bucket/*"
                        Sid       = ""
                    },
                ]
                Version = "2012-10-17"
            }
        )
    }
Whereas I would like the principal to be added this way:
  # module.s3.aws_s3_bucket_policy.documentation_policy will be updated in-place
  ~ resource "aws_s3_bucket_policy" "documentation_policy" {
        bucket = "my-bucket"
      ~ policy = jsonencode(
          ~ {
                Statement = [
                    {
                        Action    = "s3:GetObject"
                        Effect    = "Allow"
                        Principal = {
                            AWS = "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3U64NEVQ9IQHH"
                        }
                        Resource  = "arn:aws:s3:::my-bucket/*"
                        Sid       = ""
                    },
                  + {
                      + Action    = "s3:GetObject"
                      + Effect    = "Allow"
                      + Principal = {
                          + AWS = "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3ORU58OAALJAP"
                        }
                      + Resource  = "arn:aws:s3:::my-bucket/*"
                      + Sid       = ""
                    },
                ]
                Version = "2012-10-17"
            }
        )
    }
Is there any way to achieve this using Terraform 0.13.5?
For information, here is my documentation-bucket.tf, which I import into each workspace once created:
resource "aws_s3_bucket" "documentation" {
  bucket = var.documentation_bucket

  tags = {
    BillingProject = var.billing_project
    Environment    = var.env
    Terraform      = "Yes"
  }

  logging {
    target_bucket = var.website_logs_bucket
    target_prefix = "s3-access-logs/${var.documentation_bucket}/"
  }

  lifecycle {
    prevent_destroy = true
  }
}

data "aws_iam_policy_document" "documentation" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.documentation.arn}/*"]
    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "documentation_policy" {
  bucket = aws_s3_bucket.documentation.id
  policy = data.aws_iam_policy_document.documentation.json
}
Best regards
Assumptions:
Based on what you said, it seems you manage the same resource in different state files (an assumption based on "[...] which I import in each workspace once created").
You basically created a split-brain situation by doing so.
Assumption number two: you are deploying a single S3 bucket and multiple CloudFront distributions accessing this single bucket, all in the same AWS account.
Answer:
While Terraform will let you do this, it is not how it is supposed to be set up: a single resource should only be managed by a single Terraform state (workspace), or you will see this expected but unwanted behaviour of an unstable state.
I would suggest managing the S3 bucket in a single workspace configuration, or even creating a new workspace called 'shared'.
In this workspace, you can use the terraform_remote_state data source to read the state of the other workspaces and build a policy that includes all the OAIs extracted from those states. Of course, you can do this without creating a new shared workspace; a sketch follows below.
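As an illustration of that idea only: the following assumes an S3 backend and that each environment workspace exposes its OAI ARN as an output named documentation_oai_iam_arn (both are assumptions, not something from the question):
data "terraform_remote_state" "dev" {
  backend   = "s3"                       # assumption: state is stored in an S3 backend
  workspace = "dev"
  config = {
    bucket = "my-terraform-state"        # hypothetical state bucket
    key    = "portal/terraform.tfstate"  # hypothetical state key
    region = "eu-west-1"
  }
}

data "aws_iam_policy_document" "documentation" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.documentation.arn}/*"]
    principals {
      type = "AWS"
      identifiers = [
        # hypothetical output exposed by the dev workspace; repeat for preprod/prod
        data.terraform_remote_state.dev.outputs.documentation_oai_iam_arn,
      ]
    }
  }
}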
I hope this helps, while it might not be the expected solution - and maybe my assumptions are wrong.
Last words:
It's not considered good practice to share resources between environments, as data will most likely stay when you decommission environments, and managing access can get complex and insecure.
Better to keep the environments as close as possible, as in the Dev/Prod Parity factor of the Twelve-Factor App, but try not to share resources. If you feel you need to share resources, take some time and challenge your architecture again.

Why does an S3 bucket created in Terraform need a bucket policy to grant access to Lambda?

We use a combination of CloudFormation and Terraform, where some common resources like DynamoDB and S3 are created using Terraform, and others like API Gateway are created using Serverless and CloudFormation. All resources are in the same AWS account.
I have an S3 bucket in Terraform:
resource "aws_s3_bucket" "payment_bucket" {
  bucket = "payment-bucket-${var.env_name}"
  acl    = "private"

  tags = merge(
    module.tags.base_tags,
    {
      "Name" = "payment-bucket-${var.env_name}"
    }
  )

  lifecycle {
    ignore_changes = [tags]
  }
}
This creates a private bucket payment-bucket-dev in my AWS account when I run the Terraform apply.
We have an API Gateway in the same AWS account, created using Serverless, and one of the Lambdas needs access to this bucket, so I have created an IAM role for the Lambda function to grant it permission to access the bucket.
makePayment:
  name: makePayment-${self:provider.stage}
  handler: src/handler/makePayment.default
  events:
    - http:
        path: /payment
        method: post
        private: true
        cors: true
  iamRoleStatementsName: ${self:service}-${self:provider.stage}-makePayment-role
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
      Resource:
        - arn:aws:s3:::#{AWS::Region}:#{AWS::AccountId}:payment-bucket-${self:provider.stage}/capture/batch/*
But when I run this Lambda, make-payment-dev, it throws an AccessDenied error unless I add a bucket policy granting access to the Lambda role:
resource "aws_s3_bucket_policy" "payment_service_s3_bucket_policy" {
..
..
}
Why do I need to add an S3 bucket policy when the S3 bucket, the Lambda function, and the role are all in the same account? Am I missing something?
Also, if I create the bucket using AWS::S3::Bucket as part of the CloudFormation stack the API Gateway is in (we are using Serverless), I don't need to add a bucket policy and it all works fine.
I think the problem is simply that the S3 bucket ARN is incorrect.
S3 bucket ARNs do not have account IDs or regions in them. Use arn:aws:s3:::mybucket/myprefix/*.
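Applied to the serverless config in the question, the corrected statement would then look something like this:
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:PutObject
    Resource:
      - arn:aws:s3:::payment-bucket-${self:provider.stage}/capture/batch/*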
The answer depends on which AWS IAM role is applying the Terraform plan, because the S3 canned ACL "private" restricts bucket access so that the owner gets FULL_CONTROL and no one else has access rights (the default). See the documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
You have to be fairly explicit at this point about who can access the bucket. Often, if I'm going with a private ACL but want every other role in my AWS account to have access to the bucket, I attach a bucket policy to the Terraform aws_s3_bucket resource to first allow access to the bucket. Then I explicitly grant the Lambda's role access to said bucket via another inline policy.
In your case it would look something like this:
// Allow access to the bucket
data "aws_iam_policy_document" "bucket_policy" {
  statement {
    sid = "S3 bucket policy for account access"
    actions = [
      "s3:ListBucket",
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject"
    ]
    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::{your_account_id_here}:root",
      ]
    }
    resources = [
      "arn:aws:s3:::test_bucket_name",
      "arn:aws:s3:::test_bucket_name/*",
    ]
    condition {
      test     = "StringEquals"
      variable = "aws:PrincipalArn"
      values   = ["arn:aws:iam::{your_account_id_here}:role/*"]
    }
  }
}

resource "aws_s3_bucket" "this" {
  bucket = "test_bucket_name"
  acl    = "private"
  policy = data.aws_iam_policy_document.bucket_policy.json
}
// Grant the lambda IAM role permissions to the bucket
data "aws_iam_policy_document" "grant_bucket_access" {
  statement {
    sid = "AccessToTheAppAuxFilesBucket"
    actions = [
      "s3:ListBucket",
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject"
    ]
    resources = [
      "arn:aws:s3:::test_bucket_name/*",
      "arn:aws:s3:::test_bucket_name"
    ]
  }
}
// Data call to look up the lambda's IAM Role
data "aws_iam_role" "cloudformation_provisioned_role" {
  name = "the_name_of_the_lambdas_iam_role"
}

resource "aws_iam_role_policy" "iam_role_inline_policy" {
  name = "s3_bucket_access"
  // aws_iam_role_policy expects the role name, not its ARN
  role   = data.aws_iam_role.cloudformation_provisioned_role.name
  policy = data.aws_iam_policy_document.grant_bucket_access.json
}
It's an open bug: acl and force_destroy aren't imported correctly with terraform import: https://github.com/hashicorp/terraform-provider-aws/issues/6193

AWS Lambda function getting access denied when calling getObject from S3

I am getting an access denied error from the S3 AWS service in my Lambda function.
This is the code:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.

exports.handler = function(event, context) {
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    /*
    {
        originalFilename: <string>,
        versions: [
            {
                size: <number>,
                crop: [x,y],
                max: [x, y],
                rotate: <number>
            }
        ]
    }*/
    var fileInfo;
    var dstBucket = "xmovo.transformedimages.develop";
    try {
        //TODO: Decompress and decode the returned value
        fileInfo = JSON.parse(key);
        //download s3File

        // get reference to S3 client
        var s3 = new AWS.S3();

        // Download the image from S3 into a buffer.
        s3.getObject({
                Bucket: srcBucket,
                Key: key
            },
            function (err, response) {
                if (err) {
                    console.log("Error getting from s3: >>> " + err + "::: Bucket-Key >>>" + srcBucket + "-" + key + ":::Principal>>>" + event.Records[0].userIdentity.principalId, err.stack);
                    return;
                }

                // Infer the image type.
                var img = gm(response.Body);
                var imageType = null;
                img.identify(function (err, data) {
                    if (err) {
                        console.log("Error image type: >>> " + err);
                        deleteFromS3(srcBucket, key);
                        return;
                    }
                    imageType = data.format;

                    //foreach of the versions requested
                    async.each(fileInfo.versions, function (currentVersion, callback) {
                        //apply transform
                        async.waterfall([async.apply(transform, response, currentVersion), uploadToS3, callback]);
                    }, function (err) {
                        if (err) console.log("Error on excecution of watefall: >>> " + err);
                        else {
                            //when all done then delete the original image from srcBucket
                            deleteFromS3(srcBucket, key);
                        }
                    });
                });
            });
    }
    catch (ex){
        context.fail("exception through: " + ex);
        deleteFromS3(srcBucket, key);
        return;
    }

    function transform(response, version, callback){
        var imageProcess = gm(response.Body);
        if (version.rotate!=0) imageProcess = imageProcess.rotate("black",version.rotate);
        if(version.size!=null) {
            if (version.crop != null) {
                //crop the image from the coordinates
                imageProcess=imageProcess.crop(version.size[0], version.size[1], version.crop[0], version.crop[1]);
            }
            else {
                //find the bigger and resize proportioned the other dimension
                var widthIsMax = version.size[0]>version.size[1];
                var maxValue = Math.max(version.size[0],version.size[1]);
                imageProcess=(widthIsMax)?imageProcess.resize(maxValue):imageProcess.resize(null, maxValue);
            }
        }

        //finally convert the image to jpg 90%
        imageProcess.toBuffer("jpg",{quality:90}, function(err, buffer){
            if (err) callback(err);
            callback(null, version, "image/jpeg", buffer);
        });
    }

    function deleteFromS3(bucket, filename){
        s3.deleteObject({
            Bucket: bucket,
            Key: filename
        });
    }

    function uploadToS3(version, contentType, data, callback) {
        // Stream the transformed image to a different S3 bucket.
        var dstKey = fileInfo.originalFilename + "_" + version.size + ".jpg";
        s3.putObject({
            Bucket: dstBucket,
            Key: dstKey,
            Body: data,
            ContentType: contentType
        }, callback);
    }
};
This is the error in CloudWatch:
AccessDenied: Access Denied
This is the stack error:
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:329:35)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:596:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:21:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:37:9)
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:598:12)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
Without any other description or info.
The S3 bucket permissions allow everyone to put, list, and delete.
What can I do to access the S3 bucket?
PS: in the Lambda event properties, the principal is correct and has administrative privileges.
Interestingly enough, AWS returns 403 (access denied) when the file does not exist. Be sure the target file is in the S3 bucket.
If you are specifying the Resource, don't forget to add the subfolder specification as well, like this:
"Resource": [
    "arn:aws:s3:::BUCKET-NAME",
    "arn:aws:s3:::BUCKET-NAME/*"
]
Your Lambda does not have privileges (s3:GetObject).
Go to the IAM dashboard and check the role associated with your Lambda execution. If you used the AWS wizard, it automatically creates a role called oneClick_lambda_s3_exec_role. Click on Show Policy. It should show something similar to the attached image. Make sure s3:GetObject is listed.
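For reference, a minimal statement you could expect to see (or add) on the execution role for the source bucket looks something like this (the bucket name is a placeholder):
{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject"
    ],
    "Resource": "arn:aws:s3:::SOURCE-BUCKET-NAME/*"
}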
I ran into this issue and after hours of IAM policy madness, the solution was to:
Go to S3 console
Click bucket you are interested in.
Click 'Properties'
Unfold 'Permissions'
Click 'Add more permissions'
Choose 'Any Authenticated AWS User' from dropdown. Select 'Upload/Delete' and 'List' (or whatever you need for your lambda).
Click 'Save'
Done.
Your carefully written IAM role policies don't matter, neither do specific bucket policies (I've written those too to make it work). Or they just don't work on my account, who knows.
[EDIT]
After a lot of tinkering the above approach is not the best. Try this:
Keep your role policy as in the helloV post.
Go to S3. Select your bucket. Click Permissions. Click Bucket Policy.
Try something like this:
{
    "Version": "2012-10-17",
    "Id": "Lambda access bucket policy",
    "Statement": [
        {
            "Sid": "All on objects in bucket lambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWSACCOUNTID:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET-NAME/*"
        },
        {
            "Sid": "All on bucket by lambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWSACCOUNTID:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET-NAME"
        }
    ]
}
Worked for me and does not require for you to share with all authenticated AWS users (which most of the time is not ideal).
If you have encryption set on your S3 bucket (such as AWS KMS), you may need to make sure the IAM role applied to your Lambda function is added to the list of IAM > Encryption keys > region > key > Key Users for the corresponding key that you used to encrypt your S3 bucket at rest.
In my screenshot, for example, I added the CyclopsApplicationLambdaRole role that I have applied to my Lambda function as a Key User in IAM for the same AWS KMS key that I used to encrypt my S3 bucket. Don't forget to select the correct region for your key when you open up the Encryption keys UI.
Find the execution role you've applied to your Lambda function:
Find the key you used to add encryption to your S3 bucket:
In IAM > Encryption keys, choose your region and click on the key name:
Add the role as a Key User in IAM Encryption keys for the key specified in S3:
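Equivalently, instead of using the Key Users list in the console, you can grant the Lambda execution role permission to decrypt with that key via an IAM statement (this works when the key policy delegates permissions to IAM, which is the default for customer managed keys). A sketch with a placeholder key ARN:
{
    "Effect": "Allow",
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": "arn:aws:kms:REGION:ACCOUNT-ID:key/KEY-ID"
}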
If all the other policy ducks are in a row, S3 will still return an Access Denied message if the object doesn't exist AND the requester doesn't have ListBucket permission on the bucket.
From https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html:
...If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission. If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error. If you don't have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.
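In practice that means granting s3:ListBucket on the bucket ARN itself (not on BUCKET-NAME/*), for example:
{
    "Effect": "Allow",
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::BUCKET-NAME"
}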
I too ran into this issue; I fixed it by providing s3:GetObject* in the policy, as it is attempting to obtain a version of that object.
I tried to execute a basic blueprint Python Lambda function [example code] and I had the same issue. My execution role was lambda_basic_execution.
I went to S3 > (my bucket name here) > Permissions.
Because I'm a beginner, I used the Policy Generator provided by Amazon rather than writing JSON myself: http://awspolicygen.s3.amazonaws.com/policygen.html
My JSON looks like this:
{
    "Id": "Policy153536723xxxx",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt153536722xxxx",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::tokabucket/*",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::82557712xxxx:role/lambda_basic_execution"
                ]
            }
        }
    ]
}
And then the code executed nicely:
I solved my problem by following all the instructions from the AWS article "How do I allow my Lambda execution role to access my Amazon S3 bucket?":
Create an AWS Identity and Access Management (IAM) role for the Lambda function that grants access to the S3 bucket.
Modify the IAM role's trust policy.
Set the IAM role as the Lambda function's execution role.
Verify that the bucket policy grants access to the Lambda function's execution role.
I was trying to read a file from S3 and create a new file by changing the content of the file read (Lambda + Node). Reading the file from S3 was not a problem. As soon as I tried writing to the S3 bucket, I got an 'Access Denied' error.
I tried everything listed above but couldn't get rid of 'Access Denied'. Finally I was able to get it working by giving 'List Object' permission to everyone on my bucket.
Obviously this not the best approach but nothing else worked.
After searching for a long time, I saw that my bucket policy only allowed read access and not put access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicListGet",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:List*",
                "s3:Get*",
                "s3:Put*"
            ],
            "Resource": [
                "arn:aws:s3:::bucketName",
                "arn:aws:s3:::bucketName/*"
            ]
        }
    ]
}
Another issue might be that, in order to fetch objects from another region, you need to initialise a new S3 client with the other region's name, like:
const getS3Client = (region) => new S3({ region })
I used this function to get s3 client based on region.
I was struggling with this issue for hours. I was using AmazonS3EncryptionClient and nothing I did helped. Then I noticed that the client is actually deprecated, so I thought I'd try switching to the builder model they have:
var builder = AmazonS3EncryptionClientBuilder.standard()
  .withEncryptionMaterials(new StaticEncryptionMaterialsProvider(encryptionMaterials))
if (accessKey.nonEmpty && secretKey.nonEmpty)
  builder = builder.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey.get, secretKey.get)))
builder.build()
And... that solved it. Looks like Lambda has trouble injecting the credentials in the old model, but works well in the new one.
I was getting the same error, "AccessDenied: Access Denied", while cropping S3 images using a Lambda function. I updated the S3 bucket policy and the IAM role inline policy as per the document link given below.
But still, I was getting the same error. Then I realised I was trying to give "public-read" access on a private bucket. After removing ACL: 'public-read' from S3.putObject, the problem was resolved.
https://aws.amazon.com/premiumsupport/knowledge-center/access-denied-lambda-s3-bucket/
I had this error message in aws lambda environment when using boto3 with python:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
It turns out I needed an extra permission because I was using object tags. If your objects have tags, you will need both s3:GetObject AND s3:GetObjectTagging to get the object.
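So the relevant statement ends up looking something like this (the bucket name is a placeholder):
{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject",
        "s3:GetObjectTagging"
    ],
    "Resource": "arn:aws:s3:::your-bucket-name/*"
}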
I faced the same problem when creating a Lambda function that needed to read S3 bucket content. I created the Lambda function and the S3 bucket using AWS CDK. To solve this within AWS CDK, I used magic from the docs:
Resources that use execution roles, such as lambda.Function, also
implement IGrantable, so you can grant them access directly instead of
granting access to their role. For example, if bucket is an Amazon S3
bucket, and function is a Lambda function, the code below grants the
function read access to the bucket.
bucket.grantRead(function);
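For context, a minimal TypeScript CDK sketch of that pattern (the construct names and asset path are illustrative, not from the original answer):
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as s3 from 'aws-cdk-lib/aws-s3';

class ReaderStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const bucket = new s3.Bucket(this, 'ContentBucket'); // hypothetical bucket

    const fn = new lambda.Function(this, 'ReaderFn', {   // hypothetical function
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
    });

    // grantRead adds read permissions (s3:GetObject*, s3:GetBucket*, s3:List*)
    // on this bucket to the function's execution role.
    bucket.grantRead(fn);
  }
}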