What is wrong with my AWS Lambda function?

I have followed this tutorial to create thumbnails of images in another bucket with AWS Lambda: http://docs.aws.amazon.com/lambda/latest/dg/walkthrough-s3-events-adminuser-create-test-function-upload-zip-test.html
I have completed all the earlier steps in the tutorial, but when I run the test event below (taken from the link above) in the Lambda console,
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AIDAJDPLRKLG7UEXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "C3D13FE58DE4C810",
        "x-amz-id-2": "FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "sourcebucket",
          "ownerIdentity": {
            "principalId": "A3NL1KOZZKExample"
          },
          "arn": "arn:aws:s3:::sourcebucket"
        },
        "object": {
          "key": "HappyFace.jpg",
          "size": 1024,
          "eTag": "d41d8cd98f00b204e9800998ecf8427e",
          "versionId": "096fKKXTRTtl3on89fVO.nfljtsv6qko"
        }
      }
    }
  ]
}
I get the following error message:
Unable to resize sourcebucket/HappyFace.jpg and upload to
sourcebucketresized/resized-HappyFace.jpg due to an error:
PermanentRedirect: The bucket you are attempting to access must be
addressed using the specified endpoint. Please send all future
requests to this endpoint. END RequestId: 345345...
I have changed the bucket name, eTag, and image name. Do I need to change something else? My region is correct. Do I need to edit "principalId"? Where can I find it?
What is wrong?

In my case, the problem was the bucket region. The example uses "us-east-1", but my bucket is in "eu-west-1", so I had to change two things:
1. "awsRegion": "eu-west-1" in the Lambda test event
2. the region in my Node.js Lambda code: AWS.config.update({"region": "eu-west-1"})
And of course you still need to set the following values in the Lambda test event:
name: 'your_bucket_name_here',
arn: 'arn:aws:s3:::your_bucket_name_here'
After these modifications it worked as expected.
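For comparison, here is a minimal sketch in Python/boto3 (the tutorial's actual code is Node.js) of the same idea: create the S3 client for the region reported in the event record so requests go to the correct regional endpoint and avoid the PermanentRedirect error. The handler and variable names are illustrative, not the tutorial's.
import boto3

def handler(event, context):
    record = event["Records"][0]
    region = record["awsRegion"]              # e.g. "eu-west-1"
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Pin the client to the bucket's region so the request is not redirected.
    s3 = boto3.client("s3", region_name=region)
    obj = s3.get_object(Bucket=bucket, Key=key)
    # ... resize obj["Body"] and upload the result to the resized target bucket ...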

Your problem is about the "endpoint". You must change "arn":"arn:aws:s3:::sourcebucket" to "arn":"arn:aws:s3:::(name_of_your_bucket)", and likewise "name":"sourcebucket" to "name":"(name_of_your_bucket)".
To avoid further problems, you must either upload a JPG called HappyFace.jpg to your bucket or change the key in the S3 Put test event.
Regards

Try using this updated format (carefully configure the key, bucket name, arn, and awsRegion to match your own settings):
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "s3": {
        "configurationId": "testConfigRule",
        "object": {
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901",
          "key": "HappyFace.jpg",
          "size": 1024
        },
        "bucket": {
          "arn": "arn:aws:s3:::myS3bucket",
          "name": "myS3bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          }
        },
        "s3SchemaVersion": "1.0"
      },
      "responseElements": {
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH",
        "x-amz-request-id": "EXAMPLE123456789"
      },
      "awsRegion": "us-east-1",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "eventSource": "aws:s3"
    }
  ]
}

Related

AWS S3 triggers Lambda with partial object key

I get the following trigger event for some of the files that are uploaded to the location s3://bucket/folder1:
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "eu-west-3",
      "eventTime": "2022-07-26T08:30:03.280Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "principalId"
      },
      "requestParameters": {
        "sourceIPAddress": "sourceIPAddress"
      },
      "responseElements": {
        "x-amz-request-id": "JZJMKZAYX3HMQTY8",
        "x-amz-id-2": "iXfsgUn5v1SQuR+YAacurX2qP+B7f39StWcWEyebkDbJzZzazygE9tABlKpg5hcW6lNOqZgEQ2jupDb26T9dww8fTG1O2Q0l"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "NjViZmRiNmQtMTM0NS00NGZmLThlYjgtYjc4YWE4MWE2ZGU3",
        "bucket": {
          "name": "bucket",
          "ownerIdentity": {
            "principalId": "principalId"
          },
          "arn": "arn:aws:s3:::bucket"
        },
        "object": {
          "key": "folder1/",
          "size": 0,
          "eTag": "d41d8cd98f00b204e9800998ecf8427e",
          "versionId": "qsENjqz.CmAFvvRg4z0.8ug4K0rZmegS",
          "sequencer": "0062DFA60B3BF1F737"
        }
      }
    }
  ]
}
Note that the key contains only the prefix, without a file name.
Partial keys lead to 404 errors in the Lambda function:
event["Records"][0]["s3"]["object"]["key"]
[ERROR] ClientError: An error occurred (404) when calling the HeadObject operation: Not Found
For the files I upload via the S3 CLI, the object key is correct: folder1/file1.txt. How does it happen that I receive some object keys without file names?
The event
"key": "folder1/",
"size": 0,
means that someone literally created an object called "folder1/" with size 0. One way this happens is by using the Create folder button in the S3 console.
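If you want the Lambda function to simply ignore these folder markers, a minimal sketch (Python/boto3; the handler structure is an assumption, not your code) could skip zero-size keys ending in a slash before calling HeadObject:
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Keys arrive URL-encoded in the event, so decode them first.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        size = record["s3"]["object"].get("size", 0)
        if size == 0 and key.endswith("/"):
            continue  # folder marker, e.g. created via the console's "Create folder" button
        head = s3.head_object(Bucket=bucket, Key=key)
        # ... process the real object ...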

AWS Lambda S3 trigger on multiple uploads

I'm looking to implement an AWS Lambda function that is triggered when an audio file is uploaded to my S3 bucket, concatenates this file with the previously uploaded file (already stored in the bucket), and outputs the concatenated file back to the bucket. I'm quite new to Lambda and I'm wondering: is it possible to pass a list of file names into a Lambda function to transcode, or does Lambda only accept one object at a time?
When Lambda gets invoked by S3 directly, it will get an event that looks similar to this one from the docs:
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-2",
      "eventTime": "2019-09-03T19:37:27.192Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AWS:AIDAINPONIXQXHT3IKHL2"
      },
      "requestParameters": {
        "sourceIPAddress": "205.255.255.255"
      },
      "responseElements": {
        "x-amz-request-id": "D82B88E5F771F645",
        "x-amz-id-2": "vlR7PnpV2Ce81l0PRw6jlUpck7Jo5ZsQ"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "828aa6fc-f7b5-4305-8584-487c791949c1",
        "bucket": {
          "name": "lambda-artifacts-deafc19498e3f2df",
          "ownerIdentity": {
            "principalId": "A3I5XTEXAMAI3E"
          },
          "arn": "arn:aws:s3:::lambda-artifacts-deafc19498e3f2df"
        },
        "object": {
          "key": "b21b84d653bb07b05b1e6b33684dc11b",
          "size": 1305107,
          "eTag": "b21b84d653bb07b05b1e6b33684dc11b",
          "sequencer": "0C0F6F405D6ED209E1"
        }
      }
    }
  ]
}
It gives you basic meta information about the object that was uploaded.
There's no way to customize this event, but a Lambda function can query external resources for more information.
You could also look at S3 Batch Operations, but that's probably not designed for your use case.
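As a rough sketch of that "query external resources" approach (Python/boto3, with a hypothetical audio/ prefix; this is not code from the question), the function triggered by one upload can list the other objects already in the bucket and then fetch whichever ones it needs to concatenate:
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    new_key = record["s3"]["object"]["key"]

    # The event only describes the newly created object, so look up the rest
    # of the bucket (here a hypothetical "audio/" prefix) via the S3 API.
    listing = s3.list_objects_v2(Bucket=bucket, Prefix="audio/")
    other_keys = [o["Key"] for o in listing.get("Contents", []) if o["Key"] != new_key]
    # ... download new_key plus other_keys, concatenate, and upload the result ...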

AWS S3 automatic object key name normalization to lower case

AFAIK, object names on AWS S3 are always case sensitive and it is impossible to configure AWS S3 to be case insensitive.
So, is it possible to configure something like AWS Lambda in order to normalize uploaded file names to lower case? Or what is the best practice to perform this task with AWS S3?
Yes, this is easily done by having a Lambda function subscribe to your S3 PUT Event.
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "s3": {
        "configurationId": "testConfigRule",
        "object": {
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901",
          "key": "HappyFace.jpg",
          "size": 1024
        },
        "bucket": {
          "arn": "arn:aws:s3:::sourcebucket",
          "name": "sourcebucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          }
        },
        "s3SchemaVersion": "1.0"
      },
      "responseElements": {
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH",
        "x-amz-request-id": "EXAMPLE123456789"
      },
      "awsRegion": "us-east-1",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "eventSource": "aws:s3"
    }
  ]
}
You can then grab event.Records[0].s3.bucket.name and event.Records[0].s3.object.key to make a copyObject request to S3 with the lower-cased key.
Once your file has been copied successfully, you can then delete the original file.
Just make sure your Lambda is configured for PUT events only, because if you set it to ALL events, COPY and DELETE will also trigger your function and you will end up in infinite recursion.
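A minimal sketch of that copy-then-delete flow, assuming a Python 3 Lambda with boto3 (the copyObject call mentioned above works the same way in Node.js):
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        lower_key = key.lower()
        if lower_key == key:
            continue  # already lower case, nothing to do
        # Copy the object to its lower-case key, then remove the original.
        s3.copy_object(Bucket=bucket, Key=lower_key,
                       CopySource={"Bucket": bucket, "Key": key})
        s3.delete_object(Bucket=bucket, Key=key)
Because the trigger is restricted to PUT events, the copy and delete calls above will not re-invoke the function.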

How to get file size from AWS S3 "ObjectRemoved:Delete" event

Background
I'm storing some files (objects) in an S3 bucket.
Whenever a file gets deleted from the S3 bucket, my Lambda function gets triggered.
The Lambda function needs to subtract the object's size from a value stored in DynamoDB.
Problem:
The S3 deleteObject event does not include the object size.
Sample S3 deleteObject event:
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "s3": {
        "configurationId": "testConfigRule",
        "object": {
          "sequencer": "0A1B2C3D4E5F678901",
          "key": "HappyFace.jpg"
        },
        "bucket": {
          "arn": "arn:aws:s3:::mybucket",
          "name": "sourcebucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          }
        },
        "s3SchemaVersion": "1.0"
      },
      "responseElements": {
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH",
        "x-amz-request-id": "EXAMPLE123456789"
      },
      "awsRegion": "us-east-1",
      "eventName": "ObjectRemoved:Delete",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "eventSource": "aws:s3"
    }
  ]
}
Please help me find a solution for my use case.
One way to do it is to enable versioning and check the metadata of the last version.
To avoid keeping the deleted version forever, you could set up an expiration policy or explicitly delete the version. I'd probably use both, to catch cases where the event processor (the Lambda function) fails and cannot delete the file.
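A sketch of that versioning approach in Python/boto3 (assuming versioning is already enabled on the bucket; names are illustrative): list the versions for the deleted key and read the size of the most recent one, which is now noncurrent.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # The delete event carries no size, but the (now noncurrent) versions still do.
    versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
    sizes = [v["Size"] for v in versions.get("Versions", []) if v["Key"] == key]
    if sizes:
        deleted_size = sizes[0]  # the most recent version of this key is listed first
        # ... subtract deleted_size from the value stored in DynamoDB ...
        # ... and optionally delete that version explicitly, as suggested above ...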

How to construct SNS filter policy to match partial S3 object key

I have created a subscription to an SNS topic where all of the events will be from S3 ObjectCreated:Put actions. I only want to receive notifications where the S3 object key contains the string 'KLWX'. What should that filter policy look like? The notification data is below. Note that the 'Message' attribute value is actually delivered as a string, not a JSON object; I expanded it here for easier reading.
{
  "SignatureVersion": "1",
  "Type": "Notification",
  "TopicArn": "xxx",
  "Message": {
    "Records": [{
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "2018-01-18T20:16:27.590Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "xxx"
      },
      "requestParameters": {
        "sourceIPAddress": "xxx"
      },
      "responseElements": {
        "x-amz-request-id": "6CF3314E6D6B7671",
        "x-amz-id-2": "tJdr3KDcAsp1tuGdo6y4jBLkYXsEDEeVPcvQ1SWQoLXWsZL81WUzbloDe1HxbhGes4u0tY9Jh+g="
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "NewNEXRADLevel2Object",
        "bucket": {
          "name": "xxx",
          "ownerIdentity": {
            "principalId": "xxx"
          },
          "arn": "xxx"
        },
        "object": {
          "key": "KCBW/881/20180118-201329-015-I",
          "size": 16063,
          "eTag": "772cd2d2e82b22448792308755891350",
          "sequencer": "005A61009B8EC82991"
        }
      }
    }]
  },
  "UnsubscribeURL": "xxx",
  "Signature": "xxx",
  "Timestamp": "2018-01-18T20:16:27.626Z",
  "SigningCertURL": "xxx",
  "Subject": "Amazon S3 Notification",
  "MessageId": "ed6a0365-4af2-5497-9be0-51be4829cdee"
}
You have to do this on the S3 side. When you create the event notification, you can use a combination of prefix/suffix filters to control which objects send a notification to your SNS topic.
Assuming the bucket name is YourBucket and your object keys follow the pattern in your example (KCBW/881/20180118-201329-015-I, with the station code as the first path element), you would configure the S3 event on YourBucket with prefix = KLWX/.
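For reference, a hedged sketch (Python/boto3; the bucket name, topic ARN, and account ID are placeholders) of configuring that prefix-filtered S3 notification programmatically rather than through the console:
import boto3

s3 = boto3.client("s3")

# Send ObjectCreated:Put notifications to the SNS topic only for keys under KLWX/.
# Note: this call replaces the bucket's existing notification configuration.
s3.put_bucket_notification_configuration(
    Bucket="YourBucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:your-topic",
                "Events": ["s3:ObjectCreated:Put"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "KLWX/"}
                        ]
                    }
                },
            }
        ]
    },
)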
You can also subscribe an AWS Lambda function as a target and code up your filtering logic in that function. If needed, you could send out another SNS message from there, or store and process the data as needed.
A filter policy similar to the one below worked for me. This requires the subscription's FilterPolicyScope to be set to MessageBody.
"Records" : {
"s3" : {
"object" : {
"key" : [
{ "prefix" : "KLWX/" }
]
}
}
}
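If you prefer to apply it with code, here is a sketch in Python/boto3 (the subscription ARN is a placeholder) that sets both the policy scope and the policy on the subscription:
import json
import boto3

sns = boto3.client("sns")
subscription_arn = "arn:aws:sns:us-east-1:123456789012:your-topic:your-subscription-id"

# Match on the message body (the embedded S3 event) rather than message attributes.
sns.set_subscription_attributes(
    SubscriptionArn=subscription_arn,
    AttributeName="FilterPolicyScope",
    AttributeValue="MessageBody",
)

sns.set_subscription_attributes(
    SubscriptionArn=subscription_arn,
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({
        "Records": {"s3": {"object": {"key": [{"prefix": "KLWX/"}]}}}
    }),
)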