I'm trying to get or list files from an S3 bucket. The bucket is set up with no private access and has no specific permissions added.
I'm trying to access it from EC2, configured with a role that has full S3 access; this worked before.
I'm also trying to access it from Lambda, configured with a role that has full S3 access; this is new and has never worked.
According to the IAM policy simulator this should be allowed.
This is an excerpt from my Lambda (Python):
import json
import boto3
from datetime import datetime
def lambda_handler(event, context):
    bucket = 'mybucketname'  # this is the bucket name itself, no URL or ARN or anything
    # check if the file exists
    s3client = boto3.client('s3')
    key = 'mypath/' + 'anotherbitofpath' + '/' + 'index.html'
    print(f"key = {key}")
    objs = s3client.list_objects_v2(
        Bucket=bucket,
        Prefix=key
    )
    print(f"objs = {objs}")
    # list_objects_v2 returns a dict; matching objects are listed under 'Contents'
    if any(obj['Key'] == key for obj in objs.get('Contents', [])):
        print("Exists!")
    else:
        print("Doesn't exist")
many thanks
I implemented this exact use case. I can access S3 objects from a Lambda function; the only difference is that my code is in Java. This method, which tags objects, works perfectly in a Lambda function.
private void tagExistingObject(S3Client s3, String bucketName, String key, String label, String labelValue) {
    try {
        GetObjectTaggingRequest getObjectTaggingRequest = GetObjectTaggingRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();
        GetObjectTaggingResponse response = s3.getObjectTagging(getObjectTaggingRequest);

        // Get the existing immutable list - cannot modify this list.
        List<Tag> existingList = response.tagSet();
        ArrayList<Tag> newTagList = new ArrayList<>(existingList);

        // Create a new tag.
        Tag myTag = Tag.builder()
                .key(label)
                .value(labelValue)
                .build();

        // Push the new tag to the list.
        newTagList.add(myTag);
        Tagging tagging = Tagging.builder()
                .tagSet(newTagList)
                .build();

        PutObjectTaggingRequest taggingRequest = PutObjectTaggingRequest.builder()
                .key(key)
                .bucket(bucketName)
                .tagging(tagging)
                .build();
        s3.putObjectTagging(taggingRequest);
        System.out.println(key + " was tagged with " + label);
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
The role I use has full access to S3, and there are no issues performing S3 operations from a Lambda function.
Update the bucket policy so that it specifies the ARN of the Lambda function's IAM role (execution role) as a Principal that has access to the action s3:GetObject. You can use a bucket policy similar to the following:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::YourAWSAccount:role/AccountARole"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::YourBucketName/*"
            ]
        }
    ]
}
The reason I ask is that I don't see any official documentation on performing PutRecord from an AWS Lambda function to Firehose. I want to perform PutRecord on Kinesis Firehose from an AWS Lambda function, and I have given the Lambda function an appropriate PutRecord policy. I get the following error when the PutRecord action is performed from AWS Lambda using .NET Core 2.2:
User: arn:aws:sts::accountnumber:assumed-role/listener-role/lambda is not authorized to perform: kinesis:PutRecord on resource: arn:aws:kinesis:us-west-1:accountnumber:assumed:stream/firehose-stream
I have a policy as follows:
{
    "permissionsBoundary": {},
    "roleName": "listener-role",
    "policies": [
        {
            "document": {
                "Version": "2012-10-17",
                "Statement": [
                    {....},
                    {
                        "Effect": "Allow",
                        "Action": [
                            "firehose:PutRecord",
                            "firehose:PutRecordBatch"
                        ],
                        "Resource": [
                            "*"
                        ]
                    }
                ]
            },
            "name": "policy",
            "type": "inline"
        }
    ],
    "trustedEntities": [
        "lambda.amazonaws.com"
    ]
}
.NET snippet for putting a record on Kinesis Firehose, where _kinesisClient is an AmazonKinesisClient:
MemoryStream recordStream = new MemoryStream();
IFormatter formatter = new BinaryFormatter();
formatter.Serialize(recordStream, data);
var request = new PutRecordRequest
{
    PartitionKey = Guid.NewGuid().ToString(),
    Data = recordStream,
    StreamName = Environment.GetEnvironmentVariable("KinesisStream")
};
await _kinesisClient.PutRecordAsync(request);
You are trying to put data into a Kinesis Data Stream, but your policy only allows you to put data into a Kinesis Firehose delivery stream. This can be somewhat confusing because of the different flavours of Kinesis. If you are indeed trying to put data into a Kinesis Data Stream, you should change your policy action to kinesis:Put*.
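For reference, a minimal sketch of the statement for the Data Stream case might look like this (the account, region and stream name are placeholders, not taken from your setup):
{
    "Effect": "Allow",
    "Action": [
        "kinesis:PutRecord",
        "kinesis:PutRecords"
    ],
    "Resource": "arn:aws:kinesis:us-west-1:accountnumber:stream/your-stream-name"
}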
On the other hand, if you want to put data into a Kinesis Firehose delivery stream, change your .NET code to something like this (I am not a .NET expert):
// Use the Firehose client (AmazonKinesisFirehoseClient) and its own PutRecordRequest,
// not the Kinesis Data Streams client.
var putRecordRequest = new Amazon.KinesisFirehose.Model.PutRecordRequest
{
    DeliveryStreamName = Environment.GetEnvironmentVariable("KinesisStream"),
    Record = new Amazon.KinesisFirehose.Model.Record
    {
        Data = new MemoryStream(Encoding.UTF8.GetBytes(data))
    }
};
// Put the record into the delivery stream
await firehoseClient.PutRecordAsync(putRecordRequest);
I was using the wrong client to PutRecord on Kinesis Firehose.
The Kinesis Firehose client looks something like this.
NuGet package: AWSSDK.KinesisFirehose, version 3.3.103.28
serviceCollection.AddScoped<IAmazonKinesisFirehose, AmazonKinesisFirehoseClient>();
Use the dependency-injected IAmazonKinesisFirehose:
var data = "{\"casenumber\": \"" + 123 + "\"}";
// convert string to stream
var byteArray = Encoding.UTF8.GetBytes(data);
var putRecordRequest = new PutRecordRequest {
DeliveryStreamName = Environment.GetEnvironmentVariable("KinesisFirehose"), // AWS console -> Data FIrehose -> "Firehose delivery streams"
Record = new Record {
Data = new MemoryStream(byteArray)
}
};
// Put record into the DeliveryStream
Console.WriteLine($ "PutRecordAsync: {data}");
Console.WriteLine("Writing EmitScanDataToKinesisAsync");
await _fireHoseClient.PutRecordAsync(putRecordRequest);
Console.WriteLine("End EmitScanDataToKinesisAsync");
I am a data analyst and new to AWS Lambda functions. I have an S3 bucket where I store inventory data from our data lake, generated using the Inventory feature under the S3 Management tab.
So let's say the inventory data (reports) looks like this:
s3://my-bucket/allobjects/data/report-1.csv.gz
s3://my-bucket/allobjects/data/report-2.csv.gz
s3://my-bucket/allobjects/data/report-3.csv.gz
Regardless of the file contents, I have an event set up for s3://my-bucket/allobjects/data/ which notifies an SNS topic on any event such as GET or PUT. (I can't change this workflow due to strict governance.)
Now I am trying to create a Lambda function with this SNS topic as a trigger, and simply move the inventory report files generated by the S3 Inventory feature under
s3://my-bucket/allobjects/data/
and repartition it as follows:
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-1.csv.gz
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-2.csv.gz
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-3.csv.gz
How can I achieve this with a Lambda function (Node.js or Python is fine) reading from the SNS topic? Any help is appreciated.
I tried something like this based on some sample code I found online, but it didn't help.
console.log('Loading function');
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';

exports.handler = function(event, context) {
    console.log("\n\nLoading handler\n\n");
    var sns = new AWS.SNS();
    sns.publish({
        Message: 'File(s) uploaded successfully',
        TopicArn: 'arn:aws:sns:_my_ARN'
    }, function(err, data) {
        if (err) {
            console.log(err.stack);
            return;
        }
        console.log('push sent');
        console.log(data);
        context.done(null, 'Function Finished!');
    });
};
The preferred method would be for the Amazon S3 event to trigger the AWS Lambda function directly, but since you cannot alter this part of the workflow, the flow would be:
The Amazon S3 event sends a message to an Amazon SNS topic.
The AWS Lambda function is subscribed to the SNS topic, so it is triggered and receives the message from S3.
The Lambda function extracts the bucket and key, then calls copy_object() to copy the object to another location (a sketch of this follows the sample handler code below). There is no move command; you will need to copy the object to the new bucket/key.
The content of the event field is something like:
{
    "Records": [
        {
            "EventSource": "aws:sns",
            "EventVersion": "1.0",
            "EventSubscriptionArn": "...",
            "Sns": {
                "Type": "Notification",
                "MessageId": "1c3189f0-ffd3-53fb-b60b-dd3beeecf151",
                "TopicArn": "...",
                "Subject": "Amazon S3 Notification",
                "Message": "{\"Records\":[{\"eventVersion\":\"2.1\",\"eventSource\":\"aws:s3\",\"awsRegion\":\"ap-southeast-2\",\"eventTime\":\"2019-01-30T02:42:07.129Z\",\"eventName\":\"ObjectCreated:Put\",\"userIdentity\":{\"principalId\":\"AWS:AIDAIZCFQCOMZZZDASS6Q\"},\"requestParameters\":{\"sourceIPAddress\":\"54.1.1.1\"},\"responseElements\":{\"x-amz-request-id\":\"...\",\"x-amz-id-2\":\"...\"},\"s3\":{\"s3SchemaVersion\":\"1.0\",\"configurationId\":\"...\",\"bucket\":{\"name\":\"stack-lake\",\"ownerIdentity\":{\"principalId\":\"...\"},\"arn\":\"arn:aws:s3:::stack-lake\"},\"object\":{\"key\":\"index.html\",\"size\":4378,\"eTag\":\"...\",\"sequencer\":\"...\"}}}]}",
                "Timestamp": "2019-01-30T02:42:07.212Z",
                "SignatureVersion": "1",
                "Signature": "...",
                "SigningCertUrl": "...",
                "UnsubscribeUrl": "...",
                "MessageAttributes": {}
            }
        }
    ]
}
Thus, the name of the uploaded Object needs to be extracted from the Message.
You could use code like this:
import json

def lambda_handler(event, context):
    for record1 in event['Records']:
        message = json.loads(record1['Sns']['Message'])
        for record2 in message['Records']:
            bucket = record2['s3']['bucket']['name']
            key = record2['s3']['object']['key']
            # Do something here with bucket and key
    return {
        'statusCode': 200,
        'body': json.dumps(event)
    }
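For the "do something" step, a rough sketch of the copy itself might look like the following. The partitiondata prefix comes from your question; deriving the partition date from the record's eventTime, and the helper name repartition_report, are my own assumptions:
import boto3
from datetime import datetime

s3 = boto3.client('s3')

def repartition_report(bucket, key, event_time):
    # event_time is the eventTime string from the S3 record, e.g. "2019-01-30T02:42:07.129Z"
    dt = datetime.strptime(event_time, '%Y-%m-%dT%H:%M:%S.%fZ')
    filename = key.split('/')[-1]
    new_key = (f"allobjects/partitiondata/"
               f"year={dt:%Y}/month={dt:%m}/day={dt:%d}/{filename}")

    # There is no "move" in S3: copy to the new key, then delete the original.
    s3.copy_object(
        Bucket=bucket,
        Key=new_key,
        CopySource={'Bucket': bucket, 'Key': key}
    )
    s3.delete_object(Bucket=bucket, Key=key)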
I am getting an access denied error from the S3 AWS service in my Lambda function.
This is the code:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.

exports.handler = function(event, context) {
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    /*
    {
        originalFilename: <string>,
        versions: [
            {
                size: <number>,
                crop: [x, y],
                max: [x, y],
                rotate: <number>
            }
        ]
    }
    */
    var fileInfo;
    var dstBucket = "xmovo.transformedimages.develop";
    try {
        //TODO: Decompress and decode the returned value
        fileInfo = JSON.parse(key);

        // download s3 file
        // get reference to S3 client
        var s3 = new AWS.S3();

        // Download the image from S3 into a buffer.
        s3.getObject({
            Bucket: srcBucket,
            Key: key
        },
        function (err, response) {
            if (err) {
                console.log("Error getting from s3: >>> " + err + "::: Bucket-Key >>>" + srcBucket + "-" + key + ":::Principal>>>" + event.Records[0].userIdentity.principalId, err.stack);
                return;
            }

            // Infer the image type.
            var img = gm(response.Body);
            var imageType = null;
            img.identify(function (err, data) {
                if (err) {
                    console.log("Error image type: >>> " + err);
                    deleteFromS3(srcBucket, key);
                    return;
                }
                imageType = data.format;

                // for each of the versions requested
                async.each(fileInfo.versions, function (currentVersion, callback) {
                    // apply transform
                    async.waterfall([async.apply(transform, response, currentVersion), uploadToS3, callback]);
                }, function (err) {
                    if (err) console.log("Error on execution of waterfall: >>> " + err);
                    else {
                        // when all done, delete the original image from srcBucket
                        deleteFromS3(srcBucket, key);
                    }
                });
            });
        });
    }
    catch (ex) {
        context.fail("exception through: " + ex);
        deleteFromS3(srcBucket, key);
        return;
    }

    function transform(response, version, callback) {
        var imageProcess = gm(response.Body);
        if (version.rotate != 0) imageProcess = imageProcess.rotate("black", version.rotate);
        if (version.size != null) {
            if (version.crop != null) {
                // crop the image from the coordinates
                imageProcess = imageProcess.crop(version.size[0], version.size[1], version.crop[0], version.crop[1]);
            }
            else {
                // find the bigger dimension and resize the other one proportionally
                var widthIsMax = version.size[0] > version.size[1];
                var maxValue = Math.max(version.size[0], version.size[1]);
                imageProcess = (widthIsMax) ? imageProcess.resize(maxValue) : imageProcess.resize(null, maxValue);
            }
        }
        // finally convert the image to jpg 90%
        imageProcess.toBuffer("jpg", { quality: 90 }, function (err, buffer) {
            if (err) return callback(err);
            callback(null, version, "image/jpeg", buffer);
        });
    }

    function deleteFromS3(bucket, filename) {
        s3.deleteObject({
            Bucket: bucket,
            Key: filename
        }, function (err) {
            // Note: without a callback (or .send()), the v2 SDK never sends this request.
            if (err) console.log("Error deleting from s3: >>> " + err);
        });
    }

    function uploadToS3(version, contentType, data, callback) {
        // Stream the transformed image to a different S3 bucket.
        var dstKey = fileInfo.originalFilename + "_" + version.size + ".jpg";
        s3.putObject({
            Bucket: dstBucket,
            Key: dstKey,
            Body: data,
            ContentType: contentType
        }, callback);
    }
};
This is the error on Cloudwatch:
AccessDenied: Access Denied
This is the stack error:
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:329:35)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:596:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:21:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:37:9)
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:598:12)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
There is no other description or info.
The S3 bucket permissions allow everyone to put, list and delete.
What can I do to access the S3 bucket?
PS: in the Lambda event properties the principal is correct and has administrative privileges.
Interestingly enough, AWS returns 403 (access denied) when the file does not exist. Be sure the target file is in the S3 bucket.
If you are specifying the Resource, don't forget to add the sub-folder specification as well, like this:
"Resource": [
    "arn:aws:s3:::BUCKET-NAME",
    "arn:aws:s3:::BUCKET-NAME/*"
]
Your Lambda does not have the required privileges (s3:GetObject).
Go to the IAM dashboard and check the role associated with your Lambda's execution. If you used the AWS wizard, it automatically creates a role called oneClick_lambda_s3_exec_role. Click on Show Policy and make sure s3:GetObject is listed.
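For reference, a minimal sketch of what that role policy needs to contain might look like this (the bucket name is a placeholder):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET-NAME",
                "arn:aws:s3:::BUCKET-NAME/*"
            ]
        }
    ]
}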
I ran into this issue and after hours of IAM policy madness, the solution was to:
Go to S3 console
Click bucket you are interested in.
Click 'Properties'
Unfold 'Permissions'
Click 'Add more permissions'
Choose 'Any Authenticated AWS User' from the dropdown. Select 'Upload/Delete' and 'List' (or whatever you need for your Lambda).
Click 'Save'
Done.
Your carefully written IAM role policies don't matter, and neither do specific bucket policies (I've written those too, trying to make it work). Or they just don't work on my account; who knows.
[EDIT]
After a lot of tinkering, I've found the above approach is not the best. Try this instead:
Keep your role policy as in the helloV post.
Go to S3. Select your bucket. Click Permissions. Click Bucket Policy.
Try something like this:
{
    "Version": "2012-10-17",
    "Id": "Lambda access bucket policy",
    "Statement": [
        {
            "Sid": "All on objects in bucket lambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWSACCOUNTID:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET-NAME/*"
        },
        {
            "Sid": "All on bucket by lambda",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWSACCOUNTID:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKET-NAME"
        }
    ]
}
This worked for me and does not require you to share with all authenticated AWS users (which most of the time is not ideal).
If you have encryption set on your S3 bucket (such as AWS KMS), you may need to make sure the IAM role applied to your Lambda function is added to the list of IAM > Encryption keys > region > key > Key Users for the corresponding key that you used to encrypt your S3 bucket at rest.
For example, I added the CyclopsApplicationLambdaRole role that I had applied to my Lambda function as a Key User in IAM for the same AWS KMS key that I used to encrypt my S3 bucket. Don't forget to select the correct region for your key when you open the Encryption keys UI.
Find the execution role you've applied to your Lambda function:
Find the key you used to add encryption to your S3 bucket:
In IAM > Encryption keys, choose your region and click on the key name:
Add the role as a Key User in IAM Encryption keys for the key specified in S3:
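Equivalently, if you edit the key policy directly, a sketch of the statement that adding a Key User produces might look like the one below; for reading encrypted objects, kms:Decrypt is the essential action (the account ID and role name are placeholders):
{
    "Sid": "Allow use of the key",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT-ID:role/CyclopsApplicationLambdaRole"
    },
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:Encrypt",
        "kms:GenerateDataKey*",
        "kms:ReEncrypt*"
    ],
    "Resource": "*"
}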
If all the other policy ducks are in a row, S3 will still return an Access Denied message if the object doesn't exist AND the requester doesn't have ListBucket permission on the bucket.
From https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html:
...If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission. If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error. If you don't have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.
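A quick way to tell which case you are hitting is to call head_object and inspect the error code; a minimal boto3 sketch (bucket and key are whatever you are testing):
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def check_object(bucket, key):
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return "exists"
    except ClientError as e:
        code = e.response['Error']['Code']
        if code == '404':
            # You have s3:ListBucket and the object simply isn't there.
            return "missing"
        if code == '403':
            # Either no permission, or the object is missing and you lack s3:ListBucket.
            return "access denied (or missing without ListBucket)"
        raise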
I too ran into this issue. I fixed it by allowing s3:GetObject* in the policy, as the function was attempting to obtain a version of the object.
I tried to execute a basic blueprint Python Lambda function [example code] and I had the same issue. My execution role was lambda_basic_execution.
I went to S3 > (my bucket name here) > Permissions.
Because I'm a beginner, I used the Policy Generator provided by Amazon rather than writing the JSON myself: http://awspolicygen.s3.amazonaws.com/policygen.html
My JSON looks like this:
{
    "Id": "Policy153536723xxxx",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt153536722xxxx",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::tokabucket/*",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::82557712xxxx:role/lambda_basic_execution"
                ]
            }
        }
    ]
}
And then the code executed nicely.
I solved my problem by following all the instructions from AWS - How do I allow my Lambda execution role to access my Amazon S3 bucket?:
Create an AWS Identity and Access Management (IAM) role for the Lambda function that grants access to the S3 bucket.
Modify the IAM role's trust policy.
Set the IAM role as the Lambda function's execution role.
Verify that the bucket policy grants access to the Lambda function's execution role.
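For step 2, the execution role's trust policy has to let the Lambda service assume the role; a minimal sketch of that trust policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}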
I was trying to read a file from S3 and create a new file by changing the content of the file I read (Lambda + Node). Reading the file from S3 did not have any problem, but as soon as I tried writing to the S3 bucket I got an 'Access Denied' error.
I tried everything listed above but couldn't get rid of 'Access Denied'. Finally I was able to get it working by giving 'List Object' permission to everyone on my bucket.
Obviously this is not the best approach, but nothing else worked.
After searching for a long time I saw that my bucket policy only allowed read access and not put access; after adding s3:Put* it looks like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicListGet",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:List*",
                "s3:Get*",
                "s3:Put*"
            ],
            "Resource": [
                "arn:aws:s3:::bucketName",
                "arn:aws:s3:::bucketName/*"
            ]
        }
    ]
}
Another issue might be that in order to fetch objects from a different region you need to initialize a new S3 client with that region name, like:
const getS3Client = (region) => new S3({ region })
I used this function to get an S3 client based on the region.
I was struggling with this issue for hours. I was using AmazonS3EncryptionClient and nothing I did helped. Then I noticed that the client is actually deprecated, so I thought I'd try switching to the builder model they have:
var builder = AmazonS3EncryptionClientBuilder.standard()
  .withEncryptionMaterials(new StaticEncryptionMaterialsProvider(encryptionMaterials))
if (accessKey.nonEmpty && secretKey.nonEmpty)
  builder = builder.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey.get, secretKey.get)))
builder.build()
And... that solved it. Looks like Lambda has trouble injecting the credentials in the old model, but works well in the new one.
I was getting the same "AccessDenied: Access Denied" error while cropping S3 images using a Lambda function. I updated the S3 bucket policy and the IAM role's inline policy as per the document linked below.
But I was still getting the same error. Then I realised I was trying to give "public-read" access on a private bucket. After removing ACL: 'public-read' from S3.putObject the problem was resolved.
https://aws.amazon.com/premiumsupport/knowledge-center/access-denied-lambda-s3-bucket/
I had this error message in the AWS Lambda environment when using boto3 with Python:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
It turns out I needed an extra permission because I was using object tags. If your objects have tags, you will need both s3:GetObject AND s3:GetObjectTagging to get the object.
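A sketch of the corresponding policy statement might look like this (the bucket name is a placeholder):
{
    "Effect": "Allow",
    "Action": [
        "s3:GetObject",
        "s3:GetObjectTagging"
    ],
    "Resource": "arn:aws:s3:::BUCKET-NAME/*"
}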
I faced the same problem when creating a Lambda function that needed to read S3 bucket content. I created the Lambda function and the S3 bucket using the AWS CDK. To solve this within the AWS CDK, I used the magic from the docs:
Resources that use execution roles, such as lambda.Function, also
implement IGrantable, so you can grant them access directly instead of
granting access to their role. For example, if bucket is an Amazon S3
bucket, and function is a Lambda function, the code below grants the
function read access to the bucket.
bucket.grantRead(function);
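For example, a minimal sketch of this in a Python CDK (v2) stack; the construct IDs, runtime and asset path are placeholders of my own, not from the original setup:
from aws_cdk import Stack, aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct

class InventoryStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "ReportsBucket")

        fn = _lambda.Function(
            self, "ReaderFunction",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
        )

        # grant_read adds the necessary s3:Get*/s3:List* permissions
        # to the function's execution role.
        bucket.grant_read(fn)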
Can you publish a message to an SNS topic using an AWS Lambda function backed by node.js?
Yes, you can write a Lambda function that publishes to an SNS topic. The code running in Lambda has access to the full AWS SDK for Java or JavaScript, whichever your function is using. You just need to make sure you give the IAM role executing the function permission to publish to your topic. In JavaScript:
console.log("Loading function");
var AWS = require("aws-sdk");
exports.handler = function(event, context) {
var eventText = JSON.stringify(event, null, 2);
console.log("Received event:", eventText);
var sns = new AWS.SNS();
var params = {
Message: eventText,
Subject: "Test SNS From Lambda",
TopicArn: "arn:aws:sns:us-west-2:123456789012:test-topic1"
};
sns.publish(params, context.done);
};
It is also possible to handle SNS messages using Lambda functions. You might take a look at the sns-message function blueprint, offered through the Create a Lambda function button on the Lambda console.
First, you need to grant your Lambda's IAM role permission to publish to your SNS topic, using a proper IAM policy:
{
    "Action": [
        "sns:Publish",
        "sns:Subscribe"
    ],
    "Effect": "Allow",
    "Resource": [
        { "Ref": "<your SNS topic ARN>" }
    ]
}
Then you can use the following code to publish to your SNS topic from your other Lambda or Node.js code:
var message = {};
var sns = new AWS.SNS();
sns.publish({
    TopicArn: "<your SNS topic ARN>",
    Message: JSON.stringify(message)
}, function(err, data) {
    if (err) {
        console.error('error publishing to SNS');
        context.fail(err);
    } else {
        console.info('message published to SNS');
        context.succeed(data);
    }
});
To publish to SNS from a Lambda using async/await, use this:
await sns.publish({
    Message: snsPayload,
    MessageStructure: 'json',
    TargetArn: endPointArn
}).promise();