Setting up AWS S3 bucket permissions

I have an S3 bucket (e.g. mybucket) which currently has permissions set as follows:
Block all public access: On
  - Block public access to buckets and objects granted through new access control lists (ACLs): On
  - Block public access to buckets and objects granted through any access control lists (ACLs): On
  - Block public access to buckets and objects granted through new public bucket or access point policies: On
  - Block public and cross-account access to buckets and objects through any public bucket or access point policies: On
Inside this bucket I have two folders:
images (e.g. https://mybucket.s3.us-west-2.amazonaws.com/images/puppy.jpg)
private (e.g. https://mybucket.s3.us-west-2.amazonaws.com/private/mydoc.doc)
I want the images folder to be publicly accessible so that I can display images on my site.
I want the private folder to be restricted so that it can only be accessed programmatically with IAM credentials.
How do I set these permissions? I've tried switching the above settings off, and I've also selected the images and, under Actions, clicked 'Make public'. I then upload with the following:
$image = Image::make($file->getRealPath())->resize(360, 180);
Storage::disk('s3')->put($filePath, $image->stream());
The file gets uploaded, but when I try to display the image as follows I get a 403 error:
<img src="{{ Storage::disk('s3')->url($file->path) }}" />
And to download a private document I have the following:
$response = [
'Content-Type' => $file->mime_type,
'Content-Length' => $file->size,
'Content-Description' => 'File Transfer',
'Content-Disposition' => "attachment; filename={$file->name}",
'Content-Transfer-Encoding' => 'binary',
];
return Response::make(Storage::disk('s3')->get($file->path), 200, $response);
What's the correct way to set up these permissions?
I'm new to S3 storage.

Amazon S3 Block Public Access is a bucket-level configuration that prevents you from making any of the objects in that bucket public. So, if you want to make one or more objects public, e.g. images/*, then you need to disable S3 Block Public Access for this bucket.
That, in and of itself, will not make any of your objects public, of course. S3 buckets are private by default. To make the objects in images/ public, you will need to configure an S3 bucket policy, for example:
{
    "Id": "id101",
    "Statement": [
        {
            "Sid": "publicimages",
            "Action": "s3:GetObject",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::mybucket/images/*",
            "Principal": "*"
        }
    ]
}
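If you would rather apply this from code than from the console, here is a minimal boto3 sketch, assuming the bucket name mybucket and credentials that are allowed to call PutPublicAccessBlock and PutBucketPolicy:

import json
import boto3

s3 = boto3.client("s3")
bucket = "mybucket"  # assumed bucket name from the question

# Relax S3 Block Public Access so that a public bucket policy is allowed.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": False,      # must be off to attach a public policy
        "RestrictPublicBuckets": False,  # must be off for the policy to take effect
    },
)

# Public read for images/ only; nothing grants access to private/.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "publicimages",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/images/*",
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

Since the policy never mentions private/*, those objects stay private and your Laravel code can keep reading them with the disk's IAM credentials.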

Related

Allow cognito users to view images in s3

Using a Cognito Identity Pool, I have granted the Cognito user access to the S3 objects matching that user's attribute.
How can I view images in the bucket from a URL in a browser while logged in as the Cognito user?
What I want to do
To split the S3 bucket into tenants and only allow image files from the tenant to which the Cognito user belongs to be viewed.
To display images with an img tag (<img src="">).
What I did
S3 object arn: arn:aws:s3:::sample-tenant-bucket/public/1/photos/test.jpg
Create Cognito User with custom attributes
aws cognito-idp admin-create-user \
--user-pool-id "ap-northeast-1_xxxx" \
--username "USER_NAME" \
--user-attributes Name=custom:tenant_id,Value=1
Cognito Identity Pool
Mapping of user attributes to principal tags
"tenantId": "custom:tenant_id"
Added sts:TagSession permission to the authenticated role
Attach policy to the authenticated role
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "*",
            "Resource": "arn:aws:s3:::sample-tenant-bucket/public/${aws:PrincipalTag/tenantId}/*",
            "Effect": "Allow"
        }
    ]
}
Hypothesis
Typing the object URL (https://sample-tenant-bucket.s3.ap-northeast-1.amazonaws.com/public/1/1670773819-1.jpg) into the address bar results in a 403, even when logged in with an AWS account as an administrator, not as a Cognito User. Therefore,
I thought that Object URLs could not be used for non-public objects.
A presigned URL seems to work, but I was wondering if there is another way to do it, since the presigned-URL approach does not depend on whether the user is already logged in to an AWS account.
Maybe this question is the same as asking how to display an image in a bucket by URL in a browser that is already logged in with the object owner's AWS account.
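For what it's worth, the presigned-URL route mentioned above can be sketched with boto3 roughly as follows; the bucket and key come from the example above, while the credential placeholders stand in for the temporary credentials you would obtain from the Cognito Identity Pool (e.g. via GetCredentialsForIdentity), so the role's tenantId tag applies to the signature:

import boto3

# Assumed: temporary credentials issued by the Cognito Identity Pool for the
# logged-in user; replace the placeholders with the real values.
session = boto3.Session(
    aws_access_key_id="<AccessKeyId>",
    aws_secret_access_key="<SecretAccessKey>",
    aws_session_token="<SessionToken>",
    region_name="ap-northeast-1",
)
s3 = session.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "sample-tenant-bucket", "Key": "public/1/photos/test.jpg"},
    ExpiresIn=300,  # seconds the link stays valid
)
# The resulting URL can be used directly in <img src="...">.
print(url)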

Amplify S3 Image - 404 Not Found

I am trying to access an image that I have uploaded to my S3 bucket. I created my bucket using the Amplify CLI (amplify add storage) and granted access to all of my Cognito groups. I have also granted my AuthRole AmazonS3FullAccess. My bucket is set to allow all public access as well.
I have tried all the different ways I can find online to access this image and the only way that works so far is to leave it open to the public and use the image url directly. But even if I use the public method of accessing the image using Amplify's tools, I get the 404 error. Below is my code, am I doing something wrong with the url generation?
resources:
https://docs.amplify.aws/ui/storage/s3-image/q/framework/react
https://docs.amplify.aws/lib/storage/getting-started/q/platform/js#using-amazon-s3
import React, { Component} from 'react'
import Amplify, { Auth, Storage } from 'aws-amplify';
import { AmplifyS3Image} from "#aws-amplify/ui-react";
import { Card } from 'reactstrap';
// FYI, this all matches my aws-exports and matches what I see online in the console
Amplify.configure({
Auth: {
identityPoolId: 'us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX', //REQUIRED - Amazon Cognito Identity Pool ID
region: 'us-east-1', // REQUIRED - Amazon Cognito Region
userPoolId: 'us-east-1_XXXXXXXXX', //OPTIONAL - Amazon Cognito User Pool ID
userPoolWebClientId: 'XXXXXXXXXXXXXXXXX', //OPTIONAL - Amazon Cognito Web Client ID
},
Storage: {
AWSS3: {
bucket: 'xxxxxxxxx-storage123456-prod', //REQUIRED - Amazon S3 bucket name
region: 'us-east-1', //OPTIONAL - Amazon service region
}
}
});
class TestPage extends Component {
constructor(props) {
super(props);
this.state = { }
};
async componentDidMount() {
const user = await Auth.currentAuthenticatedUser();
const deviceKey = user.signInUserSession.accessToken.payload['device_key']
console.log( deviceKey, user );
const storageGetPicUrl = await Storage.get('test_public.png', {
level: 'protected',
bucket: 'xxxxxxxxx-storage123456-prod',
region: 'us-east-1',
});
console.log(storageGetPicUrl);
this.setState({
user,
deviceKey,
profilePicImg: <img height="40px" src={'https://xxxxxxxxx-storage123456-prod.s3.amazonaws.com/test_public.png'} />,
profilePicPrivate: <AmplifyS3Image imgKey={"test_default.png"} />,
profilePicPublic: <AmplifyS3Image imgKey={"test_public.png"} />,
profilePicPrivate2: <AmplifyS3Image imgKey={"test_default.png"} level="protected" identityId={deviceKey} />,
profilePicPublic2: <AmplifyS3Image imgKey={"test_public.png"} level="protected" identityId={deviceKey} />,
profilePicStorage: <img src={storageGetPicUrl} />,
});
};
render() {
return (
<table>
<tbody>
<tr><td><Card>{this.state.profilePicImg}</Card></td></tr>
<tr><td><Card>{this.state.profilePicPrivate}</Card></td></tr>
<tr><td><Card>{this.state.profilePicPublic}</Card></td></tr>
<tr><td><Card>{this.state.profilePicPrivate2}</Card></td></tr>
<tr><td><Card>{this.state.profilePicPublic2}</Card></td></tr>
<tr><td><Card>{this.state.profilePicStorage}</Card></td></tr>
</tbody>
</table>
);
};
};
export default TestPage;
Okay, I've got it figured out! There were two problems. One, AWS storage requires you to organize your folder structure in the bucket a certain way for access. Two, I had to update my bucket policy to point at my AuthRole.
When you configure your storage bucket, the Amplify CLI sets up your S3 bucket with access permissions such that contents in the 'public' folder can be accessed by everyone who is logged into your app, 'private' is for user-specific contents, and 'protected' is for user-specific contents that can also be accessed by other users on the platform. SOURCE
The bucket policy itself needs to be updated to grant access to the AuthRole you are using with your webpage login. For me this was the AuthRole that my Cognito users are linked to. This link helped me set the Actions in my policy, but I think it uses an old policy format. This link helped me get the policy right.
My image is located at public/test.png within my bucket. The folder name 'public' is necessary to match the level specified in the Storage call below. I tested this by turning on all the Block Public Access settings: I ran my code without the change to my policy and the images would not load, so they were definitely blocked. After updating the policy, the images loaded perfectly.
Simplified version of the parts of my code that matter:
import { Storage } from 'aws-amplify';
// image name should be relative to the public folder
// example: public/images/test.png => Storage.get('images/test.png' ...
const picUrl= await Storage.get('test.png', {
level: 'public',
bucket: 'bucket-name',
region: 'us-east-1',
});
const img = <img width="40px" name="test" src={picUrl} alt="testImg" />
My bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1234",
    "Statement": [
        {
            "Sid": "AllowReadWriteObjects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::the-rest-of-my-authrole-arn"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name/*"
            ]
        }
    ]
}

AWS CloudFront with multiple S3 origins

I would like to configure an AWS CloudFront CDN to serve HTML static content from two AWS S3 buckets. One bucket should host the objects in the root, the second one should host objects in a specific subpath.
S3 config
The first bucket, myapp.home, should host the home page and all other objects directly under "/".
The second bucket, myapp.subpage, should be used for the same purpose but for a specific set of URLs starting with "/subpage/".
Both buckets have been configured with "static website hosting" option enabled and with a default document "index.html", which has been uploaded to both.
Both buckets have been made public using the following policy (in the case of myapp.subpage the Resource has been adapted accordingly)
{
    "Version": "2012-10-17",
    "Id": "Policy1529690634746",
    "Statement": [
        {
            "Sid": "Stmt1529690623267",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::myapp.home/*"
        }
    ]
}
CloudFront config
The CDN is configured to respond to a name "host.domain.tld".
The CDN is configured having 2 origins:
the bucket myapp.home, having these properties:
Origin Domain Name: myapp.home.s3.amazonaws.com
Origin Path: empty
Origin Type: S3 Origin
the bucket myapp.subpage, having these properties:
Origin Domain Name: myapp.subpage.s3.amazonaws.com
Origin Path: empty
Origin Type: S3 Origin
These origins are linked to 2 Cache Behaviors:
First Behavior
Origin: the bucket myapp.subpage:
Precedence: 0
Path Pattern: subpage/*
Second Behavior
Origin: the bucket myapp.home:
Precedence: 1
Path Pattern: Default (*)
The problem
The myapp.home origin seems to work correctly, but myapp.subpage instead always returns an AccessDenied error for all of the following URIs:
host.domain.tld/subpage
host.domain.tld/subpage/
host.domain.tld/subpage/index.html
Update: I also tried substituting the origins with the S3 website domains, e.g. myapp.subpage.s3-website-eu-west-1.amazonaws.com, instead of the plain bucket domains: the homepage still works anyway, but the subpage this time returns a 404 with Message: "The specified key does not exist" for all of the URIs above.
What am I doing wrong?
Thanks in advance
The "subpage/*" in first behaviors is the directory in myapp.subpage.
Make a directory named subpage in the bucket, then put index.html into this bucket.
Like below:
* myapp.subpage <bucket name>
* subpage <directory>
* index.html
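If you want to create that object from code rather than the console, a small boto3 sketch (assuming the bucket name above and a local index.html):

import boto3

s3 = boto3.client("s3")

# CloudFront does not strip the "subpage/*" path pattern; it forwards the full
# request path to the origin, so the object must live under the subpage/ prefix.
s3.upload_file(
    "index.html",          # local file
    "myapp.subpage",       # bucket from the question
    "subpage/index.html",  # key including the path-pattern prefix
    ExtraArgs={"ContentType": "text/html"},
)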

S3 public read access restricted by IP range for object uploaded by third-party

I am trying to accomplish the following scenario:
1) Account A uploads a file to an S3 bucket owned by account B. At upload I specify full control for Account owner B
s3_client.upload_file(
    local_file,
    bucket,
    remote_file_name,
    ExtraArgs={'GrantFullControl': 'id=<AccountB_CanonicalID>'}
)
2) Account B defines a bucket policy that limits the access to the objects by IP (see below)
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowIPs",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucketB/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        <CIDR1>,
                        <CIDR2>
                    ]
                }
            }
        }
    ]
}
I get access denied if I try to download the file as an anonymous user, even from the specified IP range. If at upload time I add public read permission for everyone, then I can download the file from any IP.
s3_client.upload_file(
    local_file, bucket,
    remote_file_name,
    ExtraArgs={
        'GrantFullControl': 'id=<AccountB_CanonicalID>',
        'GrantRead': 'uri="http://acs.amazonaws.com/groups/global/AllUsers"'
    }
)
Question: is it possible to upload the file from Account A to Account B but still restrict public access by an IP range?
This is not possible. According to the documentation:
Bucket Policy – For your bucket, you can add a bucket policy to grant other AWS accounts or IAM users permissions for the bucket and the objects in it. Any object permissions apply only to the objects that the bucket owner creates. Bucket policies supplement, and in many cases, replace ACL-based access policies.
However, there is a workaround for this scenario. The problem is that the owner of the uploaded file is Account A. We need to upload the file in such a way that the owner of the file is Account B. To accomplish this we need to:
In Account B, create a role for a trusted entity (select "Another AWS account" and specify Account A). Add upload permission for the bucket.
In Account A, create a policy that allows the AssumeRole action and, as the resource, specify the ARN of the role created in step 1.
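A rough boto3 sketch of step 1, run with Account B credentials (the role name, the policy name and the Account A ID placeholder are only illustrative):

import json
import boto3

iam = boto3.client("iam")  # Account B credentials

# Trust policy: let Account A assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::<ACCOUNT_A_ID>:root"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="cross-account-uploader",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permission policy: allow uploads into the bucket owned by Account B.
upload_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::bucketB/*",
    }],
}
iam.put_role_policy(
    RoleName="cross-account-uploader",
    PolicyName="upload-to-bucketB",  # hypothetical policy name
    PolicyDocument=json.dumps(upload_policy),
)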
To upload the file from boto3 you can use the following code. Note the use of cachetools to deal with the limited TTL of the temporary credentials.
import logging
import sys

import boto3
from cachetools import cached, TTLCache

CREDENTIALS_TTL = 1800
credentials_cache = TTLCache(1, CREDENTIALS_TTL - 60)

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s')
logger = logging.getLogger()


def main():
    local_file = sys.argv[1]
    bucket = '<bucket_from_account_B>'
    client = _get_s3_client_for_another_account()
    client.upload_file(local_file, bucket, local_file)
    logger.info('Uploaded %s to %s' % (local_file, bucket))


@cached(credentials_cache)
def _get_s3_client_for_another_account():
    sts = boto3.client('sts')
    response = sts.assume_role(
        RoleArn='<arn_of_role_created_in_step_1>',
        RoleSessionName='cross-account-upload',  # required by assume_role
        DurationSeconds=CREDENTIALS_TTL
    )
    credentials = response['Credentials']
    credentials = {
        'aws_access_key_id': credentials['AccessKeyId'],
        'aws_secret_access_key': credentials['SecretAccessKey'],
        'aws_session_token': credentials['SessionToken'],
    }
    return boto3.client('s3', 'eu-central-1', **credentials)


if __name__ == '__main__':
    main()

aws lambda function getting access denied when getObject from s3

I am getting an access denied error from the S3 AWS service in my Lambda function.
This is the code:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.
exports.handler = function(event, context) {
var srcBucket = event.Records[0].s3.bucket.name;
// Object key may have spaces or unicode non-ASCII characters.
var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
/*
{
originalFilename: <string>,
versions: [
{
size: <number>,
crop: [x,y],
max: [x, y],
rotate: <number>
}
]
}*/
var fileInfo;
var dstBucket = "xmovo.transformedimages.develop";
try {
//TODO: Decompress and decode the returned value
fileInfo = JSON.parse(key);
//download s3File
// get reference to S3 client
var s3 = new AWS.S3();
// Download the image from S3 into a buffer.
s3.getObject({
Bucket: srcBucket,
Key: key
},
function (err, response) {
if (err) {
console.log("Error getting from s3: >>> " + err + "::: Bucket-Key >>>" + srcBucket + "-" + key + ":::Principal>>>" + event.Records[0].userIdentity.principalId, err.stack);
return;
}
// Infer the image type.
var img = gm(response.Body);
var imageType = null;
img.identify(function (err, data) {
if (err) {
console.log("Error image type: >>> " + err);
deleteFromS3(srcBucket, key);
return;
}
imageType = data.format;
//foreach of the versions requested
async.each(fileInfo.versions, function (currentVersion, callback) {
//apply transform
async.waterfall([async.apply(transform, response, currentVersion), uploadToS3, callback]);
}, function (err) {
if (err) console.log("Error on excecution of watefall: >>> " + err);
else {
//when all done then delete the original image from srcBucket
deleteFromS3(srcBucket, key);
}
});
});
});
}
catch (ex){
context.fail("exception through: " + ex);
deleteFromS3(srcBucket, key);
return;
}
function transform(response, version, callback){
var imageProcess = gm(response.Body);
if (version.rotate!=0) imageProcess = imageProcess.rotate("black",version.rotate);
if(version.size!=null) {
if (version.crop != null) {
//crop the image from the coordinates
imageProcess=imageProcess.crop(version.size[0], version.size[1], version.crop[0], version.crop[1]);
}
else {
//find the bigger and resize proportioned the other dimension
var widthIsMax = version.size[0]>version.size[1];
var maxValue = Math.max(version.size[0],version.size[1]);
imageProcess=(widthIsMax)?imageProcess.resize(maxValue):imageProcess.resize(null, maxValue);
}
}
//finally convert the image to jpg 90%
imageProcess.toBuffer("jpg",{quality:90}, function(err, buffer){
if (err) callback(err);
callback(null, version, "image/jpeg", buffer);
});
}
function deleteFromS3(bucket, filename){
s3.deleteObject({
Bucket: bucket,
Key: filename
});
}
function uploadToS3(version, contentType, data, callback) {
// Stream the transformed image to a different S3 bucket.
var dstKey = fileInfo.originalFilename + "_" + version.size + ".jpg";
s3.putObject({
Bucket: dstBucket,
Key: dstKey,
Body: data,
ContentType: contentType
}, callback);
}
};
This is the error on Cloudwatch:
AccessDenied: Access Denied
This is the stack error:
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:329:35)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:596:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:21:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:37:9)
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:598:12)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
There is no other description or info.
The S3 bucket permissions allow everyone to put, list and delete.
What can I do to access the S3 bucket?
PS: in the Lambda event properties the principal is correct and has administrative privileges.
Interestingly enough, AWS returns 403 (access denied) when the file does not exist. Be sure the target file is in the S3 bucket.
If you are specifying the Resource, don't forget to add the sub-folder specification as well, like this:
"Resource": [
"arn:aws:s3:::BUCKET-NAME",
"arn:aws:s3:::BUCKET-NAME/*"
]
Your Lambda does not have the privilege (s3:GetObject).
Go to the IAM dashboard and check the role associated with your Lambda execution. If you used the AWS wizard, it automatically creates a role called oneClick_lambda_s3_exec_role. Click on Show Policy. It should show something similar to the attached image. Make sure s3:GetObject is listed.
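If you prefer to fix this from code rather than the console, a rough boto3 sketch of attaching such a permission to the execution role (the policy name is made up; the role name is the wizard default mentioned above):

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::BUCKET-NAME/*",
    }],
}

# Inline policy on the Lambda execution role so getObject succeeds.
iam.put_role_policy(
    RoleName="oneClick_lambda_s3_exec_role",  # your Lambda's execution role
    PolicyName="allow-s3-getobject",          # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)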
I ran into this issue and after hours of IAM policy madness, the solution was to:
Go to S3 console
Click bucket you are interested in.
Click 'Properties'
Unfold 'Permissions'
Click 'Add more permissions'
Choose 'Any Authenticated AWS User' from dropdown. Select 'Upload/Delete' and 'List' (or whatever you need for your lambda).
Click 'Save'
Done.
Your carefully written IAM role policies don't matter, neither do specific bucket policies (I've written those too to make it work). Or they just don't work on my account, who knows.
[EDIT]
After a lot of tinkering the above approach is not the best. Try this:
Keep your role policy as in the helloV post.
Go to S3. Select your bucket. Click Permissions. Click Bucket Policy.
Try something like this:
{
    "Version": "2012-10-17",
    "Id": "Lambda access bucket policy",
    "Statement": [
        {
            "Sid": "All on objects in bucket lambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWSACCOUNTID:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET-NAME/*"
        },
        {
            "Sid": "All on bucket by lambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWSACCOUNTID:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET-NAME"
        }
    ]
}
Worked for me and does not require you to share with all authenticated AWS users (which most of the time is not ideal).
If you have encryption set on your S3 bucket (such as AWS KMS), you may need to make sure the IAM role applied to your Lambda function is added to the list of IAM > Encryption keys > region > key > Key Users for the corresponding key that you used to encrypt your S3 bucket at rest.
In my screenshot, for example, I added the CyclopsApplicationLambdaRole role that I have applied to my Lambda function as a Key User in IAM for the same AWS KMS key that I used to encrypt my S3 bucket. Don't forget to select the correct region for your key when you open up the Encryption keys UI.
Find the execution role you've applied to your Lambda function:
Find the key you used to add encryption to your S3 bucket:
In IAM > Encryption keys, choose your region and click on the key name:
Add the role as a Key User in IAM Encryption keys for the key specified in S3:
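If you script your setup, one hedged alternative to adding the role under Key Users in the console is a KMS grant for the Lambda role, for example with boto3 (the key ARN, account ID and role name below are placeholders based on the description above):

import boto3

kms = boto3.client("kms")

# Allow the Lambda execution role to decrypt objects encrypted with this CMK.
kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:<ACCOUNT_ID>:key/<KEY_ID>",
    GranteePrincipal="arn:aws:iam::<ACCOUNT_ID>:role/CyclopsApplicationLambdaRole",
    Operations=["Decrypt", "DescribeKey"],
)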
If all the other policy ducks are in a row, S3 will still return an Access Denied message if the object doesn't exist AND the requester doesn't have ListBucket permission on the bucket.
From https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html:
...If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission. If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error. If you don't have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.
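To tell the two cases apart quickly, you can inspect the error code on a read attempt; a minimal boto3 sketch with placeholder names:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    s3.get_object(Bucket="BUCKET-NAME", Key="some/missing/key")
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "NoSuchKey":
        # You have s3:ListBucket, so a missing object surfaces as 404 / NoSuchKey.
        print("Object really does not exist")
    elif code == "AccessDenied":
        # Without s3:ListBucket a missing object is reported as 403 / AccessDenied,
        # which looks identical to a genuine permission problem.
        print("Missing object or missing ListBucket permission")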
I too ran into this issue; I fixed it by providing s3:GetObject* in the ACL, as it is attempting to obtain a version of that object.
I tried to execute a basic blueprint Python Lambda function [example code] and I had the same issue. My execution role was lambda_basic_execution.
I went to S3 > (my bucket name here) > permissions .
Because I'm a beginner, I used the Policy Generator provided by Amazon rather than writing JSON myself: http://awspolicygen.s3.amazonaws.com/policygen.html
my JSON looks like this:
{
    "Id": "Policy153536723xxxx",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt153536722xxxx",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::tokabucket/*",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::82557712xxxx:role/lambda_basic_execution"
                ]
            }
        }
    ]
}
And then the code executed nicely.
I solved my problem by following all the instructions from the AWS article "How do I allow my Lambda execution role to access my Amazon S3 bucket?":
Create an AWS Identity and Access Management (IAM) role for the Lambda function that grants access to the S3 bucket.
Modify the IAM role's trust policy.
Set the IAM role as the Lambda function's execution role.
Verify that the bucket policy grants access to the Lambda function's execution role.
I was trying to read a file from S3 and create a new file by changing the content of the file I read (Lambda + Node). Reading the file from S3 did not cause any problem. As soon as I tried writing to the S3 bucket, I got an 'Access Denied' error.
I tried everything listed above but couldn't get rid of 'Access Denied'. Finally I was able to get it working by giving 'List Object' permission to everyone on my bucket.
Obviously this is not the best approach, but nothing else worked.
After searching for a long time I saw that my bucket policy only allowed read access and not put access:
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicListGet",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:List*",
"s3:Get*",
"s3:Put*"
],
"Resource": [
"arn:aws:s3:::bucketName",
"arn:aws:s3:::bucketName/*"
]
}
]
}
Another issue might be that in order to fetch objects from another region you need to initialize a new S3 client with that region's name, like:
const getS3Client = (region) => new S3({ region })
I used this function to get an S3 client based on the region.
I was struggling with this issue for hours. I was using AmazonS3EncryptionClient and nothing I did helped. Then I noticed that the client is actually deprecated, so I thought I'd try switching to the builder model they have:
var builder = AmazonS3EncryptionClientBuilder.standard()
.withEncryptionMaterials(new StaticEncryptionMaterialsProvider(encryptionMaterials))
if (accessKey.nonEmpty && secretKey.nonEmpty) builder = builder.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey.get, secretKey.get)))
builder.build()
And... that solved it. Looks like Lambda has trouble injecting the credentials in the old model, but works well in the new one.
I was getting the same "AccessDenied: Access Denied" error while cropping S3 images using a Lambda function. I updated the S3 bucket policy and the IAM role inline policy as per the document linked below.
But still, I was getting the same error. Then I realised I was trying to give "public-read" access in a private bucket. After removing ACL: 'public-read' from S3.putObject, the problem was resolved.
https://aws.amazon.com/premiumsupport/knowledge-center/access-denied-lambda-s3-bucket/
I had this error message in the AWS Lambda environment when using boto3 with Python:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
It turns out I needed an extra permission because I was using object tags. If your objects have tags, you will need both s3:GetObject and s3:GetObjectTagging to get the object.
I faced the same problem when creating a Lambda function that should read the S3 bucket content. I created the Lambda function and the S3 bucket using the AWS CDK. To solve this within the AWS CDK, I used this bit of magic from the docs:
Resources that use execution roles, such as lambda.Function, also implement IGrantable, so you can grant them access directly instead of granting access to their role. For example, if bucket is an Amazon S3 bucket, and function is a Lambda function, the code below grants the function read access to the bucket.
bucket.grantRead(function);
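bucket.grantRead(function) is the TypeScript/JavaScript form; a rough equivalent sketch with the Python CDK bindings (the construct names and the lambda/ asset folder are illustrative) could look like:

from aws_cdk import Stack, aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct


class ImageStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "ImagesBucket")

        fn = _lambda.Function(
            self,
            "ImageReader",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # assumes a lambda/ folder
        )

        # Adds s3:GetObject* and related read actions on the bucket
        # to the function's execution role.
        bucket.grant_read(fn)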