Athena reports "Insufficient permissions to execute the query. Caller does not have full access to table" - amazon-athena

I have Lake Formation permissions in place and my Athena query runs fine.
I would now like to limit an IAM user to only certain records, so I added a Lake Formation data filter. Once I do that, Athena reports Insufficient permissions to execute the query. Caller does not have full access to table.
Why is that?

The reason is buried in the documentation:
To run query operations against tables that use row- and cell-level
filtering, you must use a special workgroup called
AmazonAthenaLakeFormation.
You just need to create a workgroup with that special name(!).
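For reference, a minimal boto3 sketch of creating that workgroup (the description text is my own wording; any further workgroup configuration, such as a query result location, is up to you):

import boto3

athena = boto3.client("athena")

# The name must be exactly "AmazonAthenaLakeFormation".
athena.create_work_group(
    Name="AmazonAthenaLakeFormation",
    Description="Workgroup required for Lake Formation row- and cell-level filtering",
)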
But you're not done yet!
Once you switch to using that workgroup, you'll get a different error: Insufficient permissions to execute the query. Encountered an exception executed in context[planning query] with message[User: XXXXXXXXX is not authorized to perform: lakeformation:StartQueryPlanning on resource
To fix this, follow the instructions to grant the IAM permission lakeformation:StartQueryPlanning to the user.
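For illustration, a minimal boto3 sketch of granting just that action; the user name and inline policy name are placeholders:

import json
import boto3

iam = boto3.client("iam")

iam.put_user_policy(
    UserName="athena-user",                  # placeholder IAM user
    PolicyName="StartQueryPlanning",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "lakeformation:StartQueryPlanning",
            "Resource": "*"
        }]
    }),
)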
But you're not done yet!
Once you add that IAM permission, you'll discover that StartQueryPlanning depends on other IAM permissions; each grant surfaces yet another missing-permission error, and so on.
I ended up creating a policy called GlueReadOnly, which solves the problem:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"glue:SearchTables",
"lakeformation:SearchDatabasesByLFTags",
"glue:GetCrawler",
"glue:GetDataCatalogEncryptionSettings",
"glue:BatchGetDevEndpoints",
"glue:CheckSchemaVersionValidity",
"glue:GetTableVersions",
"glue:GetPartitions",
"glue:GetMLTransform",
"lakeformation:GetWorkUnits",
"glue:GetWorkflowRunProperties",
"glue:GetSchema",
"glue:GetDevEndpoint",
"glue:GetSecurityConfiguration",
"glue:GetResourcePolicy",
"glue:GetTrigger",
"glue:GetUserDefinedFunction",
"glue:GetJobRun",
"glue:GetResourcePolicies",
"glue:GetUserDefinedFunctions",
"glue:GetClassifier",
"glue:GetSchemaByDefinition",
"glue:ListWorkflows",
"glue:GetJobs",
"glue:GetTables",
"glue:GetSchemaVersionsDiff",
"lakeformation:SearchTablesByLFTags",
"glue:GetTriggers",
"glue:GetWorkflowRun",
"lakeformation:GetLFTag",
"lakeformation:GetResourceLFTags",
"glue:GetMapping",
"glue:GetPartition",
"glue:GetDevEndpoints",
"lakeformation:GetQueryStatistics",
"glue:BatchGetWorkflows",
"lakeformation:GetDataLakeSettings",
"glue:ListDevEndpoints",
"glue:BatchGetJobs",
"glue:ListRegistries",
"glue:GetJob",
"glue:GetWorkflow",
"glue:ListSchemaVersions",
"lakeformation:StartQueryPlanning",
"glue:GetConnections",
"glue:GetCrawlers",
"glue:GetClassifiers",
"glue:GetCatalogImportStatus",
"glue:GetTableVersion",
"glue:GetConnection",
"glue:ListMLTransforms",
"glue:ListSchemas",
"glue:GetJobBookmark",
"glue:GetMLTransforms",
"glue:GetRegistry",
"lakeformation:GetEffectivePermissionsForPath",
"lakeformation:ListLFTags",
"lakeformation:GetWorkUnitResults",
"glue:BatchGetPartition",
"glue:GetMLTaskRuns",
"glue:GetSecurityConfigurations",
"glue:ListTriggers",
"glue:GetDatabases",
"lakeformation:GetQueryState",
"glue:ListJobs",
"glue:GetTags",
"glue:GetTable",
"glue:GetDatabase",
"glue:GetMLTaskRun",
"lakeformation:DescribeResource",
"glue:GetDataflowGraph",
"glue:BatchGetCrawlers",
"glue:GetSchemaVersion",
"glue:QuerySchemaVersionMetadata",
"glue:BatchGetTriggers",
"lakeformation:GetTableObjects",
"glue:GetWorkflowRuns",
"lakeformation:DescribeTransaction",
"glue:GetPlan",
"glue:ListCrawlers",
"glue:GetCrawlerMetrics",
"glue:GetJobRuns"
],
"Resource": "*"
}
]
}
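If it helps, here is a minimal boto3 sketch of creating that managed policy and attaching it to the user; the user name and the file path holding the JSON above are placeholders:

import boto3

iam = boto3.client("iam")

# Load the policy document shown above (file path is an assumption).
with open("GlueReadOnly.json") as f:
    policy_document = f.read()

resp = iam.create_policy(PolicyName="GlueReadOnly", PolicyDocument=policy_document)
iam.attach_user_policy(
    UserName="athena-user",              # placeholder IAM user
    PolicyArn=resp["Policy"]["Arn"],
)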

Related

How can I get a Kafka Streams app with exactly_once_v2 guarantee to work with AWS MSK

I'm having issues getting a Spring Boot Kafka Streams app to run when I have it configured with processing.guarantee=exactly_once_v2 enabled. When I start my app up, it ultimately crashes (all my stream threads shut down) and I get the following exception logged:
[Producer clientId=my-application-c170d5b4-ffe4-4734-a3e4-41b068b21060-StreamThread-2-producer, transactionalId=my-application-c170d5b4-ffe4-4734-a3e4-41b068b21060-2] Transiting to fatal error state due to org.apache.kafka.common.errors.TransactionalIdAuthorizationException: Transactional Id authorization failed.
org.apache.kafka.streams.errors.StreamsException: Error encountered trying to initialize transactions [stream-thread [main]]
at org.apache.kafka.streams.processor.internals.StreamsProducer.initTransaction(StreamsProducer.java:169)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.initialize(RecordCollectorImpl.java:93)
at org.apache.kafka.streams.processor.internals.StreamTask.initializeIfNeeded(StreamTask.java:229)
at org.apache.kafka.streams.processor.internals.TaskManager.tryToCompleteRestoration(TaskManager.java:436)
at org.apache.kafka.streams.processor.internals.StreamThread.initializeAndRestorePhase(StreamThread.java:849)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:731)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:583)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:555)
Caused by: org.apache.kafka.common.errors.TransactionalIdAuthorizationException: Transactional Id authorization failed.
My configuration is as follows:
spring:
  kafka:
    producer:
      bootstrap-servers: <urls>
      properties:
        security.protocol: SASL_SSL
        sasl.mechanism: AWS_MSK_IAM
        sasl.jaas.config: software.amazon.msk.auth.iam.IAMLoginModule required;
        sasl.client.callback.handler.class: software.amazon.msk.auth.iam.IAMClientCallbackHandler
    streams:
      application-id: my-application
      bootstrap-servers: <urls>
      replicationFactor: 3
      properties:
        acks: all
        retries: 3
        processing:
          guarantee: exactly_once_v2
        num:
          stream:
            threads: 3
When I remove the exactly_once_v2 guarantee, it works correctly.
In AWS I'm using the following policy statement:
{
    "Effect": "Allow",
    "Action": [
        "kafka-cluster:Connect",
        "kafka-cluster:AlterCluster",
        "kafka-cluster:DescribeCluster",
        "kafka-cluster:DescribeClusterDynamicConfiguration",
        "kafka-cluster:ReadData",
        "kafka-cluster:WriteData",
        "kafka-cluster:*Topic*",
        "kafka-cluster:WriteDataIdempotently",
        "kafka-cluster:DescribeTransactionalId",
        "kafka-cluster:AlterTransactionalId",
        "kafka-cluster:AlterGroup",
        "kafka-cluster:DescribeGroup"
    ],
    "Resource": [
        "arn:aws:kafka:<arn>:cluster/my-cluster/*",
        "arn:aws:kafka:<arn>:topic/my-cluster/*",
        "arn:aws:kafka:<arn>:transactional-id/my-cluster/*",
        "arn:aws:kafka:<arn>:group/my-cluster/*"
    ]
}
and I've also tried:
{
    "Effect": "Allow",
    "Action": [
        "kafka-cluster:*"
    ],
    "Resource": [
        "arn:aws:kafka:<arn>:*/my-cluster/*"
    ]
}
and that didn't work either. Based on the RBAC role bindings defined in the Confluent documentation, I thought I had the correct permissions set on my cluster, but that doesn't seem to be the case.
Does anyone have any insights on potential IAM policy statements or other configurations I may be missing to get EOS working for my Kafka Streams app?

AWS Glue gives AccessDeniedException

When I try to create a Glue Crawler, I get this error, even though I have full administrator access in IAM:
{"service":"AWSGlue","statusCode":400,"errorCode":"AccessDeniedException","requestId":"c1a564e7-d012-4e96-946f-a32be287e8ba","errorMessage":"Account 1234567890 is denied access.","type":"AwsServiceError"}
Open IAM and inspect the customer inline policy attached to your role (here it is named GlueActions). Its statement must include the Data Catalog in the Resource list, for example:
"Statement": [
    {
        "Effect": "Allow",
        "Resource": [
            ...
            "arn:aws:glue:*xxx:catalog"
            ...
        ]
    }
]
Ensure the "catalog" resource above is present; otherwise create the whole customer inline policy JSON.
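A minimal boto3 sketch of adding such an inline policy; the role name is a placeholder and the action list is only an assumption, the point being the catalog resource:

import json
import boto3

iam = boto3.client("iam")

iam.put_role_policy(
    RoleName="my-glue-role",                 # placeholder role
    PolicyName="GlueActions",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["glue:*"],            # assumption; narrow as needed
            "Resource": [
                "arn:aws:glue:<region>:<account-id>:catalog",
                "arn:aws:glue:<region>:<account-id>:database/*",
                "arn:aws:glue:<region>:<account-id>:table/*"
            ]
        }]
    }),
)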

Cannot give aws-lambda access to aws-appsync API

I am working on a project where users can upload files into an S3 bucket; these uploaded files are mapped to a GraphQL key (which was generated by the Amplify CLI), and an aws-lambda function is triggered. All of this is working, but the next step I want is for this aws-lambda function to create a second file with the same ownership attributes and POST the location of the saved second file to the GraphQL API.
I figured this shouldn't be too difficult, but I'm having a lot of trouble and can't work out where the problem lies.
BACKGROUND/DETAILS
I want the owner of the data (the uploader) to be the only user who is able to access the data, with the aws-lambda function operating in an admin role and able to POST/GET to API of any owner.
The GraphQL schema looks like this:
type FileUpload @model
  @auth(rules: [
    { allow: owner }]) {
  id: ID!
  foo: String
  bar: String
}
And I also found this seemingly promising AWS guide, which I thought would give an IAM role admin access (https://docs.amplify.aws/cli/graphql/authorization-rules/#configure-custom-identity-and-group-claims). I followed it by creating the file amplify/backend/api/<your-api-name>/custom-roles.json and saving it with:
{
"adminRoleNames": ["<YOUR_IAM_ROLE_NAME>"]
}
I replaced "<YOUR_IAM_ROLE_NAME>" with an IAM role to which I have given broad access, including this appsync access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "appsync:*"
            ],
            "Resource": "*"
        }
    ]
}
That policy is attached to the role given to my aws-lambda function.
When I attempt to run a simple API query in my aws-lambda function with the above settings, I get this error:
response string:
{
    "data": {
        "getFileUpload": null
    },
    "errors": [
        {
            "path": [
                "getFileUpload"
            ],
            "data": null,
            "errorType": "Unauthorized",
            "errorInfo": null,
            "locations": [
                {
                    "line": 3,
                    "column": 11,
                    "sourceName": null
                }
            ],
            "message": "Not Authorized to access getFileUpload on type Query"
        }
    ]
}
My actual Python Lambda script is:
import http.client   # the script uses http.client.HTTPSConnection
import json          # needed for json.dumps below

API_URL = '<MY_API_URL>'
API_KEY = '<MY_API_KEY>'
HOST = API_URL.replace('https://', '').replace('/graphql', '')

def queryAPI():
    conn = http.client.HTTPSConnection(HOST, 443)
    headers = {
        'Content-type': 'application/graphql',
        'x-api-key': API_KEY,
        'host': HOST
    }
    print('conn: ', conn)
    query = '''
    {
        getFileUpload(id: "<ID_HERE>") {
            description
            createdAt
            baseFilePath
        }
    }
    '''
    graphql_query = {
        'query': query
    }
    query_data = json.dumps(graphql_query)
    print('query data: ', query_data)
    conn.request('POST', '/graphql', query_data, headers)
    response = conn.getresponse()
    response_string = response.read().decode('utf-8')
    print('response string: ', response_string)
I pass in the API key and API URL above in addition to giving aws-lambda the IAM role. I understand that only one is probably needed, but I am trying to get the process working first and then pare it back.
QUESTION(s)
As far as I understand, I am:
1. providing the appropriate @auth rules to my GraphQL schema based on my goals, and
2. giving my aws-lambda function sufficient IAM authorization (via both the IAM role and the API key) to override any potentially restrictive @auth rules in my GraphQL schema.
But clearly something is not working. Can anyone point me towards a problem that I am overlooking?
I had a similar problem just yesterday.
It was not 1:1 what you're trying to do, but maybe it's still helpful.
I was trying to give Lambda functions permission to access data based on my GraphQL schema. The schema had different @auth directives, which caused the Lambda functions to lose access to the data, even though I had given them permissions via the CLI and IAM roles. Although the documentation says this should work, it didn't:
if you grant a Lambda function in your Amplify project access to the GraphQL API via amplify update function, then the Lambda function's IAM execution role is allow-listed to honor the permissions granted on the Query, Mutation, and Subscription types.
Therefore, these functions have special access privileges that are scoped based on their IAM policy instead of any particular @auth rule.
So I ended up adding @auth(rules: [{ allow: custom }]) to all parts of my schema that I want to access via Lambda functions.
When doing this, make sure to add "lambda" as an auth mode to your API via amplify update api.
In the authorization Lambda function, you can then check whether the user who is invoking the function has access to the requested query/S3 data.
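For illustration, a minimal sketch of such an AppSync Lambda authorizer in Python; the token check is a placeholder, and the response fields follow the shape AppSync expects from a Lambda authorizer:

def handler(event, context):
    # AppSync passes the caller's token and the request context to the authorizer.
    token = event.get("authorizationToken", "")

    # Placeholder decision: verify a JWT, look the user up, inspect the
    # requested query, etc. -- whatever fits your app.
    is_allowed = token == "custom-authorized"   # assumption for this sketch

    return {
        "isAuthorized": is_allowed,
        "resolverContext": {"caller": "lambda-authorizer"},  # optional
        "deniedFields": [],                                  # optional
        "ttlOverride": 300,                                  # optional, seconds
    }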

How to resolve access denied after saving a bad bucket policy?

Using Terraform I've set up my stack. I just altered the bucket policy and applied it, but now I've found the bucket policy is denying all actions, including management operations such as altering the policy.
How might I update the policy so I can delete the bucket?
I am not able to access the bucket policy any more, but what was applied is still in my Terraform state. If I attempt a destroy on the bucket, it reveals the following (I've masked IDs and the account).
The following is just a sample, as there are 5 action blocks and each contains a dozen user IDs.
- Statement = [
    - {
        - Action = [
            - "s3:ListBucketVersions",
            - "s3:ListBucketMultipartUploads",
            - "s3:ListBucket",
          ]
        - Condition = {
            - StringLike = {
                - aws:userid = [
                    - "AROAXXXXXXXXXXXXXXXXA:*",
                    - "AROAXXXXXXXXXXXXXXXXB:*",
                  ]
              }
            - StringNotLike = {
                - aws:userid = [
                    - "*:AROAAXXXXXXXXXXXXXXXA:user1",
                    - "*:AROAAXXXXXXXXXXXXXXXA:user2",
                    - "*:AROAAXXXXXXXXXXXXXXXA:*",
                  ]
              }
          }
        - Effect = "Deny"
        - Principal = "*"
        - Resource = "arn:aws:s3:::my-account-bucket-name"
        - Sid = "Deny bucket-level read operations except for authorised users"
      },
Based on the comments.
It seems that the new policy resulted in denying access to everyone. In such cases, AWS explains what to do in a blog post titled:
I accidentally denied everyone access to my Amazon S3 bucket. How do I regain access?
The process involves accessing the account as the root user and deleting the bucket policy.
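A minimal boto3 sketch of that deletion step; it only works when run with the account root user's credentials, since the policy denies everyone else (the bucket name is taken from the masked output above):

import boto3

# These credentials must belong to the account root user.
s3 = boto3.client("s3")

s3.delete_bucket_policy(Bucket="my-account-bucket-name")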

aws lambda function getting access denied when getObject from s3

I am getting an access denied error from the S3 AWS service on my Lambda function.
This is the code:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.

exports.handler = function (event, context) {
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    /*
    {
        originalFilename: <string>,
        versions: [
            {
                size: <number>,
                crop: [x,y],
                max: [x, y],
                rotate: <number>
            }
        ]
    }*/
    var fileInfo;
    var dstBucket = "xmovo.transformedimages.develop";
    try {
        //TODO: Decompress and decode the returned value
        fileInfo = JSON.parse(key);
        //download s3File

        // get reference to S3 client
        var s3 = new AWS.S3();

        // Download the image from S3 into a buffer.
        s3.getObject({
            Bucket: srcBucket,
            Key: key
        },
        function (err, response) {
            if (err) {
                console.log("Error getting from s3: >>> " + err + "::: Bucket-Key >>>" + srcBucket + "-" + key + ":::Principal>>>" + event.Records[0].userIdentity.principalId, err.stack);
                return;
            }

            // Infer the image type.
            var img = gm(response.Body);
            var imageType = null;
            img.identify(function (err, data) {
                if (err) {
                    console.log("Error image type: >>> " + err);
                    deleteFromS3(srcBucket, key);
                    return;
                }
                imageType = data.format;

                //foreach of the versions requested
                async.each(fileInfo.versions, function (currentVersion, callback) {
                    //apply transform
                    async.waterfall([async.apply(transform, response, currentVersion), uploadToS3, callback]);
                }, function (err) {
                    if (err) console.log("Error on execution of waterfall: >>> " + err);
                    else {
                        //when all done then delete the original image from srcBucket
                        deleteFromS3(srcBucket, key);
                    }
                });
            });
        });
    }
    catch (ex) {
        context.fail("exception through: " + ex);
        deleteFromS3(srcBucket, key);
        return;
    }

    function transform(response, version, callback) {
        var imageProcess = gm(response.Body);
        if (version.rotate != 0) imageProcess = imageProcess.rotate("black", version.rotate);
        if (version.size != null) {
            if (version.crop != null) {
                //crop the image from the coordinates
                imageProcess = imageProcess.crop(version.size[0], version.size[1], version.crop[0], version.crop[1]);
            }
            else {
                //find the bigger and resize proportioned the other dimension
                var widthIsMax = version.size[0] > version.size[1];
                var maxValue = Math.max(version.size[0], version.size[1]);
                imageProcess = (widthIsMax) ? imageProcess.resize(maxValue) : imageProcess.resize(null, maxValue);
            }
        }
        //finally convert the image to jpg 90%
        imageProcess.toBuffer("jpg", {quality: 90}, function (err, buffer) {
            if (err) return callback(err);
            callback(null, version, "image/jpeg", buffer);
        });
    }

    function deleteFromS3(bucket, filename) {
        // Note: the aws-sdk only sends the request when a callback is supplied (or .send() is called).
        s3.deleteObject({
            Bucket: bucket,
            Key: filename
        }, function (err) {
            if (err) console.log("Error deleting from s3: >>> " + err);
        });
    }

    function uploadToS3(version, contentType, data, callback) {
        // Stream the transformed image to a different S3 bucket.
        var dstKey = fileInfo.originalFilename + "_" + version.size + ".jpg";
        s3.putObject({
            Bucket: dstBucket,
            Key: dstKey,
            Body: data,
            ContentType: contentType
        }, callback);
    }
};
This is the error on Cloudwatch:
AccessDenied: Access Denied
This is the stack error:
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:329:35)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:596:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:21:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:37:9)
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:598:12)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
Without any other description or info.
On the S3 bucket, the permissions allow everyone to put, list and delete.
What can I do to access the S3 bucket?
PS: in the Lambda event properties, the principal is correct and has administrative privileges.
Interestingly enough, AWS returns 403 (access denied) when the file does not exist. Be sure the target file is in the S3 bucket.
If you are specifying the Resource, don't forget to add the subfolder specification as well, like this:
"Resource": [
    "arn:aws:s3:::BUCKET-NAME",
    "arn:aws:s3:::BUCKET-NAME/*"
]
Your Lambda does not have the privilege (s3:GetObject).
Go to the IAM dashboard and check the role associated with your Lambda execution. If you used the AWS wizard, it automatically creates a role called oneClick_lambda_s3_exec_role. Click on Show Policy. It should show something similar to the attached image. Make sure s3:GetObject is listed.
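For comparison, a minimal boto3 sketch of attaching such a read permission to the execution role; the policy name and bucket name are placeholders:

import json
import boto3

iam = boto3.client("iam")

iam.put_role_policy(
    RoleName="oneClick_lambda_s3_exec_role",   # your Lambda execution role
    PolicyName="AllowGetObject",               # placeholder name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::BUCKET-NAME/*"
        }]
    }),
)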
I ran into this issue and after hours of IAM policy madness, the solution was to:
Go to S3 console
Click bucket you are interested in.
Click 'Properties'
Unfold 'Permissions'
Click 'Add more permissions'
Choose 'Any Authenticated AWS User' from dropdown. Select 'Upload/Delete' and 'List' (or whatever you need for your lambda).
Click 'Save'
Done.
Your carefully written IAM role policies don't matter, neither do specific bucket policies (I've written those too to make it work). Or they just don't work on my account, who knows.
[EDIT]
After a lot of tinkering the above approach is not the best. Try this:
Keep your role policy as in the helloV post.
Go to S3. Select your bucket. Click Permissions. Click Bucket Policy.
Try something like this:
{
    "Version": "2012-10-17",
    "Id": "Lambda access bucket policy",
    "Statement": [
        {
            "Sid": "All on objects in bucket lambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWSACCOUNTID:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET-NAME/*"
        },
        {
            "Sid": "All on bucket by lambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWSACCOUNTID:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET-NAME"
        }
    ]
}
Worked for me and does not require you to share with all authenticated AWS users (which most of the time is not ideal).
If you have encryption set on your S3 bucket (such as AWS KMS), you may need to make sure the IAM role applied to your Lambda function is added to the list of IAM > Encryption keys > region > key > Key Users for the corresponding key that you used to encrypt your S3 bucket at rest.
In my screenshot, for example, I added the CyclopsApplicationLambdaRole role that I have applied to my Lambda function as a Key User in IAM for the same AWS KMS key that I used to encrypt my S3 bucket. Don't forget to select the correct region for your key when you open up the Encryption keys UI.
Find the execution role you've applied to your Lambda function:
Find the key you used to add encryption to your S3 bucket:
In IAM > Encryption keys, choose your region and click on the key name:
Add the role as a Key User in IAM Encryption keys for the key specified in S3:
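As an alternative to clicking through the console, here is a hedged boto3 sketch that grants the Lambda role use of the key via a KMS grant; the key ARN and account ID are placeholders, and editing the key policy (as described above) is equally valid:

import boto3

kms = boto3.client("kms")

kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:111122223333:key/<key-id>",   # placeholder key ARN
    GranteePrincipal="arn:aws:iam::111122223333:role/CyclopsApplicationLambdaRole",
    Operations=["Decrypt", "DescribeKey"],   # add "GenerateDataKey" if the function also writes
)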
If all the other policy ducks are in a row, S3 will still return an Access Denied message if the object doesn't exist AND the requester doesn't have ListBucket permission on the bucket.
From https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html:
...If the object you request does not exist, the error Amazon S3
returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3 will
return an HTTP status code 404 ("no such key") error. if you don’t
have the s3:ListBucket permission, Amazon S3 will return an HTTP
status code 403 ("access denied") error.
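In practice that means granting s3:ListBucket on the bucket itself alongside s3:GetObject on its objects, so a missing key comes back as 404 instead of 403. A sketch of the two statements as Python dicts, with a placeholder bucket name:

list_and_get_statements = [
    {
        "Effect": "Allow",
        "Action": ["s3:ListBucket"],
        "Resource": "arn:aws:s3:::BUCKET-NAME"      # bucket-level action
    },
    {
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::BUCKET-NAME/*"    # object-level action
    },
]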
I too ran into this issue. I fixed it by providing s3:GetObject* in the ACL, as it is attempting to obtain a version of that object.
I tried to execute a basic blueprint Python Lambda function [example code] and I had the same issue. My execution role was lambda_basic_execution.
I went to S3 > (my bucket name here) > Permissions.
Because I'm a beginner, I used the Policy Generator provided by Amazon rather than writing JSON myself: http://awspolicygen.s3.amazonaws.com/policygen.html
My JSON looks like this:
{
    "Id": "Policy153536723xxxx",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt153536722xxxx",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::tokabucket/*",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::82557712xxxx:role/lambda_basic_execution"
                ]
            }
        }
    ]
}
And then the code executed nicely.
I solved my problem by following all the instructions from the AWS article How do I allow my Lambda execution role to access my Amazon S3 bucket? (a boto3 sketch of these steps follows the list below):
Create an AWS Identity and Access Management (IAM) role for the Lambda function that grants access to the S3 bucket.
Modify the IAM role's trust policy.
Set the IAM role as the Lambda function's execution role.
Verify that the bucket policy grants access to the Lambda function's execution role.
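A minimal boto3 sketch of the trust-policy and permission pieces of those steps, with the role and bucket names as placeholders:

import json
import boto3

iam = boto3.client("iam")
ROLE = "my-lambda-s3-role"   # placeholder execution role

# Trust policy so the Lambda service can assume the role (step 2).
iam.update_assume_role_policy(
    RoleName=ROLE,
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole"
        }]
    }),
)

# Permissions so the role can read the bucket (complements step 4).
iam.put_role_policy(
    RoleName=ROLE,
    PolicyName="AllowBucketRead",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::BUCKET-NAME", "arn:aws:s3:::BUCKET-NAME/*"]
        }]
    }),
)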
I was trying to read a file from S3 and create a new file by changing the content of the file read (Lambda + Node). Reading the file from S3 did not have any problem, but as soon as I tried writing to the S3 bucket I got an 'Access Denied' error.
I tried everything listed above but couldn't get rid of 'Access Denied'. Finally I was able to get it working by giving 'List Object' permission to everyone on my bucket.
Obviously this is not the best approach, but nothing else worked.
After searching for a long time I saw that my bucket policy only allowed read access and not put access:
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicListGet",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:List*",
"s3:Get*",
"s3:Put*"
],
"Resource": [
"arn:aws:s3:::bucketName",
"arn:aws:s3:::bucketName/*"
]
}
]
}
Another issue might be that, in order to fetch objects from another region, you need to initialize a new S3 client with that region's name, like:
const getS3Client = (region) => new S3({ region })
I used this function to get an S3 client based on the region.
I was struggling with this issue for hours. I was using AmazonS3EncryptionClient and nothing I did helped. Then I noticed that the client is actually deprecated, so I thought I'd try switching to the builder model they have:
var builder = AmazonS3EncryptionClientBuilder.standard()
.withEncryptionMaterials(new StaticEncryptionMaterialsProvider(encryptionMaterials))
if (accessKey.nonEmpty && secretKey.nonEmpty) builder = builder.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey.get, secretKey.get)))
builder.build()
And... that solved it. Looks like Lambda has trouble injecting the credentials in the old model, but works well in the new one.
I was getting the same error "AccessDenied: Access Denied" while cropping S3 images with a Lambda function. I updated the S3 bucket policy and the IAM role's inline policy as per the document link given below.
But still I was getting the same error. Then I realised I was trying to grant "public-read" access on a private bucket. After removing ACL: 'public-read' from S3.putObject, the problem was resolved.
https://aws.amazon.com/premiumsupport/knowledge-center/access-denied-lambda-s3-bucket/
I had this error message in the AWS Lambda environment when using boto3 with Python:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
It turns out I needed an extra permission because I was using object tags. If your objects have tags, you will need both s3:GetObject and s3:GetObjectTagging to get the object.
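A sketch of the corresponding statement as a Python dict, with a placeholder bucket name:

# Both actions are needed when the objects carry tags.
get_with_tags_statement = {
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:GetObjectTagging"],
    "Resource": "arn:aws:s3:::BUCKET-NAME/*"
}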
I faced the same problem when creating a Lambda function that should read S3 bucket content. I created the Lambda function and the S3 bucket using AWS CDK. To solve this within AWS CDK, I used magic from the docs:
Resources that use execution roles, such as lambda.Function, also
implement IGrantable, so you can grant them access directly instead of
granting access to their role. For example, if bucket is an Amazon S3
bucket, and function is a Lambda function, the code below grants the
function read access to the bucket.
bucket.grantRead(function);
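Since the rest of this post uses Python, here is a hedged CDK-for-Python sketch of the same idea; the construct IDs, runtime, and asset path are assumptions:

from aws_cdk import Stack, aws_lambda as lambda_, aws_s3 as s3
from constructs import Construct

class UploadsStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "UploadsBucket")

        fn = lambda_.Function(
            self, "ReaderFunction",
            runtime=lambda_.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=lambda_.Code.from_asset("lambda"),   # assumed asset directory
        )

        # Grants the function's role read access (GetObject/List) on the bucket.
        bucket.grant_read(fn)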