How to handle AWS Lambda timeout?

I have a Lambda function that generates thumbnails on an S3 Put event. It works fine, but I want to handle the case where it occasionally takes longer than the configured timeout (3 sec).
This matters because I fetch the Lambda-generated thumbnails by suffixing '-small.jpg' or '-medium.jpg'. If the timeout happens and the thumbnails are not generated, I need an alternative image in my bucket.

If you want to increase the function timeout, you can edit it in the General configuration of your function. The steps below explain how to do it.
Click on the Lambda function's hyperlink and open General configuration.
Click Edit (top right pane) and increase the function timeout.
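If you would rather script this than click through the console, here is a minimal boto3 sketch (the function name is just a placeholder) that bumps the timeout:

```python
import boto3

# Hypothetical function name; replace with your thumbnail Lambda's name.
FUNCTION_NAME = "my-thumbnail-function"

lambda_client = boto3.client("lambda")

# Raise the timeout from the 3-second default to e.g. 30 seconds
# (Lambda allows up to 900 seconds).
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    Timeout=30,
)
```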

I suspect that creating thumbnails will not take more than 15 minutes at maximum Lambda RAM (and correspondingly max CPU), but if you need to handle that possibility then configure your Lambda function with a DLQ and trigger subsequent processing of the failed invocations.
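A rough sketch of attaching such a DLQ with boto3, assuming an existing SQS queue (the function name and queue ARN are placeholders, and the function's execution role needs permission to send to the queue):

```python
import boto3

lambda_client = boto3.client("lambda")

# Failed asynchronous invocations (after Lambda's retries) will be sent to
# this queue, where a second function or a poller can create a fallback image.
lambda_client.update_function_configuration(
    FunctionName="my-thumbnail-function",  # placeholder
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:thumbnail-dlq"  # placeholder
    },
)
```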

Related

Is there a way to check if AWS Lambda is running from Java code?

There is a Lambda, say its name is XYZ, which has an S3 file upload trigger. Now let's say I upload multiple files, around 100, and the Lambda executions start.
Is there any way I can track whether the Lambda is still running or has processed all the files?
The reason for this is that once the Lambda has completed processing all the files, I want to trigger a Step Function, so I want to do that only after all the files have been processed by my Lambda (XYZ).
FYI: There is no current way to track how many files have been uploaded
I think it's not a good design to run a Step Functions state machine after the Lambda completes the job without a clear-cut completion event.
For example, say the Lambda has completed a batch of files (say 100), and you fire the Step Function once a custom CloudWatch metric reaches 100 (via an alarm), or once a value in DynamoDB or the number of objects in a folder hits 100; if another file arrives 5 seconds later and makes it 101, you may miss that file.
If you don't have a definite event or condition that marks the completion of all the files, you can work with time-based scheduling instead: trigger the check from a CloudWatch scheduled event, say every 15 minutes, see if there is still work pending, and exit if there isn't (see the sketch below).
Otherwise, either include the Lambda (file processing) as a step inside your Step Function, or change your design.
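Here is a rough sketch of such a scheduled check, assuming a DynamoDB table that tracks how many files the Lambda has processed; the table name, key layout, expected count, and state machine ARN are all made-up placeholders:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
sfn = boto3.client("stepfunctions")

# Hypothetical names/ARNs for illustration only.
TABLE_NAME = "file-processing-status"
BATCH_ID = "batch-2024-01-01"
EXPECTED_FILES = 100
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:post-processing"


def handler(event, context):
    """Runs on a CloudWatch scheduled event and checks whether all files are done."""
    table = dynamodb.Table(TABLE_NAME)
    item = table.get_item(Key={"batch_id": BATCH_ID}).get("Item", {})
    processed = int(item.get("processed_count", 0))

    if processed < EXPECTED_FILES:
        # Not all files processed yet; exit and let the next scheduled run check again.
        return {"status": "waiting", "processed": processed}

    # All files accounted for: kick off the Step Functions state machine.
    # Reusing the execution name makes this idempotent per batch.
    sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        name=f"batch-{BATCH_ID}",
    )
    return {"status": "started", "processed": processed}
```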

AWS CloudWatch Log Trigger for Lambda

I have a problem in AWS regarding CloudWatch Log Triggers.
I have two Lambda functions. One (business-lambda) gets triggered when I upload a file to an S3 bucket. The other Lambda function (log-lambda) is triggered whenever business-lambda encounters an invalid file, which results in an ERROR log entry. I implemented this with a CloudWatch Log trigger with the filter "?ERROR" and with log-lambda subscribed to the log group of business-lambda.
Everything works fine as long as I upload one file at a time or at a maximum of ~3 files at a time.
But when I upload, say, 10 invalid files at a time, the log-lambda doesn't get triggered for all of the files; it only gets triggered for 4-5 of them.
Is there some kind of CloudWatch-log-triggers-per-second limit?
I found a solution: luk2302 made the correct suggestion in their comment.
In the log-lambda code I was only processing the first entry of an incoming log event, but a single log-lambda invocation can be triggered with multiple error-log entries from the business-lambda. I did not take this into account in the log-lambda code.
Thanks to everybody for their time!
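For anyone hitting the same thing: a CloudWatch Logs subscription delivers its payload as base64-encoded, gzipped JSON, and one invocation can carry several log events. A minimal sketch of a log-lambda handler that walks all of them (the processing function is a placeholder for your own logic):

```python
import base64
import gzip
import json


def handler(event, context):
    # The subscription payload arrives base64-encoded and gzip-compressed.
    compressed = base64.b64decode(event["awslogs"]["data"])
    payload = json.loads(gzip.decompress(compressed))

    # One invocation can contain many log events; process every one of them,
    # not just the first entry.
    for log_event in payload["logEvents"]:
        process_error_entry(log_event["message"])


def process_error_entry(message):
    # Placeholder: handle a single ERROR log line from business-lambda.
    print("ERROR log line:", message)
```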

Configure S3 event for alternate PUT operation

I have a Lambda function that gets triggered whenever an object is created in an S3 bucket.
Now, I need to trigger the Lambda only for every other object creation.
The Lambda should not be triggered when an object is created for the first, third, fifth (and so on) time, but it should be triggered for the second, fourth, sixth (and so on) time.
For this, I created an S3 event for the 'PUT' operation.
The first time, I used the PUT API. The second time, I uploaded the file using
s3_res.meta.client.upload_file
I thought that this would not trigger the Lambda since it was an upload and not a PUT, but it triggered the Lambda as well.
Is there any way to do this?
The reason that meta.client.upload_file triggers your PUT-event Lambda is that it actually uses PUT.
upload_file (docs) uses the TransferManager client, which uses PUT under the hood (you can see this in the code: https://github.com/boto/s3transfer/blob/develop/s3transfer/upload.py)
Looking at the AWS SDK you'll see that POSTing to S3 is pretty much limited to the case where you want to give a browser/client a pre-signed URL for uploading a file. (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html)
If you want to count the number of times PUT has been called, in order to take action on every even call, then the easiest thing to do is to use something like DynamoDB to keep a table of 'file-name' vs 'put-count', which you update on every PUT and act on accordingly.
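One way to sketch that counter (the table name and key attribute are assumptions, and the table would need 'object_key' as its partition key): bump an atomic counter on every invocation and only do the real work when the count is even.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("s3-put-counts")  # hypothetical table


def handler(event, context):
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]

        # Atomically increment the per-object counter and read the new value.
        resp = table.update_item(
            Key={"object_key": key},
            UpdateExpression="ADD put_count :one",
            ExpressionAttributeValues={":one": 1},
            ReturnValues="UPDATED_NEW",
        )
        put_count = int(resp["Attributes"]["put_count"])

        # Only act on every second, fourth, ... upload of the same object.
        if put_count % 2 == 0:
            process_object(record)


def process_object(record):
    # Placeholder for the real work done on alternate uploads.
    print("Processing", record["s3"]["object"]["key"])
```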
Alternatively you could enable bucket file versioning. Then you could use list_object_versions to see how many times the file has been updated. Although you should be aware that S3 is eventually consistent, so this may not be accurate if the file is being rapidly updated.

DynamoDB not triggering lambda

I'm experimenting with dynamo db and lambda and am having trouble with the following flow:
Lambda A is triggered by a put to S3 event. It takes the object, an audio file, calculates its duration and writes a record in dynamoDB for each 30 second segment.
Lambda B is triggered by dynamoDB, downloads the file from S3 and operates on the 30 second record defined in the dynamo row.
My trouble is that when I run this flow, function A writes all of the required rows to Dynamo, but function B:
- does not seem to be triggered for each row in Dynamo
- times out after 5 minutes
Configuration
Function B is set with the highest memory and a 5 minute timeout
The trigger is set with a batch size of 1 and starting position latest
Things I've confirmed
When function B is triggered, the download from S3 happens fast. This does not seem to be the blocker
When I trigger function B with a test event it executes perfectly.
When I look at the CloudWatch metrics, function B has a nearly 100% error rate on invocation. I can't tell if this means the function was invoked and had an error or could not be invoked at all.
Has anyone had similar issues? Any idea what to check next?
Thanks
I had the same problem. The solution was to create a VERSION of the Lambda and NOT use the $LATEST version, but a 'fixed' one.
It is not possible to build a trigger on the ever-changing latest version.
Place to do that:
Lambda / Functions / YourLambdaName / Qualifiers dropdown on the page / Switch versions/aliases / Versions tab -> check that you have a version
If not -> Actions / Publish new version
Also check that the DynamoDB stream is enabled on the table.
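If you prefer to do this from a script rather than the console, a rough boto3 sketch of publishing a version and pointing the stream trigger at it (the function name and stream ARN are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical names/ARNs for illustration.
FUNCTION_NAME = "function-b"
STREAM_ARN = (
    "arn:aws:dynamodb:us-east-1:123456789012:"
    "table/segments/stream/2024-01-01T00:00:00.000"
)

# Publish a fixed version instead of relying on $LATEST.
version = lambda_client.publish_version(FunctionName=FUNCTION_NAME)["Version"]

# Point the DynamoDB stream trigger at that published version.
lambda_client.create_event_source_mapping(
    EventSourceArn=STREAM_ARN,
    FunctionName=f"{FUNCTION_NAME}:{version}",
    StartingPosition="LATEST",
    BatchSize=1,
)
```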
Also check the timeout: 5 minutes was the maximum Lambda timeout at the time, as mentioned in the forums.

AWS Lambda: error creating the event source mapping: Configuration is ambiguously defined

There was an error creating the event source mapping: Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type.
I created an event earlier from the GUI console 6-7 days ago and it was working fine. The next day the event was simply missing; I couldn't see it anymore in the Lambda console GUI, although every S3 object still seemed to trigger the Lambda function without a problem. If I can't see it, that's not good, so I deleted the Lambda function, waited 5-10 seconds, and then created a new function. Now I receive the error above when I try to create the event sources like this:
When I click "Submit", the event sources tab says "You do not have any event sources for this function" and the Lambda does not get triggered; it means the entire application flow is now broken :(
The problem is almost the same as this one: https://forums.aws.amazon.com/thread.jspa?messageID=670712#670712 But somehow I can't reply to that thread, so I created a new thread here instead. Has anyone encountered this issue?
In fact, I tried to respond to the existing AWS forum thread: https://forums.aws.amazon.com/thread.jspa?messageID=670712#670712
but I keep getting this odd error: "Your message quota has been reached. Please try again later." I wasn't even posting anything, so how can I have used up my quota?
What I suspect is that your S3 bucket may still be "linked" to the Lambda function.
Check your S3 bucket for events and remove them there, then try creating the Lambda events again.
i.e. S3 bucket -> Properties -> Events
After 6 years, it's nice to see some people still benefiting from this answer.
Here is a shameless plug for a YouTube video I uploaded on 2022-12-13:
https://www.youtube.com/watch?v=rjpOU7jbgEs
The issue must be that the S3 bucket is already linked with the suffix/prefix you are trying to link. Remove the link in S3 and try again.
When you set up a Lambda function and configure a trigger related to S3, the notification gets added to the Properties section of that S3 bucket.
The mentioned error occurs when the earlier Lambda function is deleted and you try to set up the same kind of trigger again. The thing to note is that the S3 notification is not deleted when you delete the Lambda function.
Go to S3 bucket > Properties > Event notifications,
delete the old setting, and then set up the trigger again on the new Lambda function.
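If you would rather do that cleanup with boto3 than through the console, here is a hedged sketch (the bucket name is a placeholder). Note that putting an empty configuration removes ALL notification settings on the bucket, so only do this if the stale Lambda trigger is the only one configured:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-upload-bucket"  # hypothetical bucket name

# Inspect what is still attached to the bucket.
current = s3.get_bucket_notification_configuration(Bucket=BUCKET)
print(current.get("LambdaFunctionConfigurations", []))

# Clear the stale configuration (this removes every notification setting on
# the bucket, so re-add any you still need afterwards).
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={},
)
```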
Here is a link to a YouTube video profiling this issue and demonstrating the solution:
https://www.youtube.com/watch?v=1Tfmc9nEtbU
As Ridwaan Manuel said, you must remove the events by going to S3 bucket -> Properties -> Events, as the video shows.
Steps to reproduce this issue:
Create a bucket and create a folder called “example/”
Create Lambda Function
Add an S3 trigger to the Lambda using the bucket from (1) with default settings
Save the trigger
Click Save and notice error
Refresh the page and notice that the triggers disappeared
Add the same bucket again and notice the ambiguous reference error