AWS Lambda Task timed out

I have created a function that sends mail to users, but it times out:
END RequestId: 3508fc6c-29cb-442b-95dd-c963018ca5f3
REPORT RequestId: 3508fc6c-29cb-442b-95dd-c963018ca5f3 Duration: 900083.68 ms Billed Duration: 900000 ms Memory Size: 256 MB Max Memory Used: 43 MB
2019-06-25T05:48:54.532Z 3508fc6c-29cb-442b-95dd-c963018ca5f3 Task timed out after 900.08 seconds
I have set the timeout (TimeoutInfo) to 15 minutes.

Another potential issue might be that you have an open connection that keeps the Node.js event loop busy, so the Lambda does not return until the connection terminates or the function times out.
callbackWaitsForEmptyEventLoop – Set to false to send the response
right away when the callback executes, instead of waiting for the
Node.js event loop to be empty. If this is false, any outstanding
events continue to run during the next invocation.
If this is the case, you can set callbackWaitsForEmptyEventLoop = false so the callback responds immediately:
context.callbackWaitsForEmptyEventLoop = false;
See the official documentation for details.
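For context, here is a minimal Node.js sketch of that pattern (the handler shape is standard; sendMailToUsers is a hypothetical helper):

exports.handler = (event, context, callback) => {
    // Return as soon as the callback fires, even if an open connection
    // is still keeping the event loop non-empty.
    context.callbackWaitsForEmptyEventLoop = false;

    sendMailToUsers(event) // hypothetical mail-sending logic returning a promise
        .then(() => callback(null, 'Mail sent'))
        .catch((err) => callback(err));
};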

Check the timeout set in your configuration file.
Also check whether the function runs inside a VPC: a Lambda in a VPC without a NAT gateway has no internet access, so outbound calls (such as sending mail) hang until the function times out.
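One way to check the deployed timeout value from the CLI (the function name is a placeholder):

aws lambda get-function-configuration --function-name my-function --query Timeout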
"Error handling for a given event source depends on how Lambda is invoked. Amazon CloudWatch Events is configured to invoke a Lambda function asynchronously."
"Asynchronous invocation – Asynchronous events are queued before being used to invoke the Lambda function. If AWS Lambda is unable to fully process the event, it will automatically retry the invocation twice, with delays between retries."
So the retry should happen in this case. I'm not sure what was wrong with your Lambda function; deleting it and creating it again may work.

Related

AWS SQS and Lambda Function : How to fail a lambda function and force message back on to queue

We have a Lambda event source that polls SQS for messages and passes them on to a Lambda function.
The SQS queue has a visibility timeout that dictates how long after a consumer (Lambda) picks up a message it will reappear in the queue if not successfully processed.
Is there a way from the Lambda function to force this to happen as soon as we detect an error?
For example, say Lambda has a timeout of 10 mins. The SQS Visibility Timeout will need to be longer than that.
But if the Lambda function detects a problem early on, and throws an error - this will fail the message, but we have to wait for the SQS Visibility Timeout before the message is available for other consumers to try again.
Is there a way for the Lambda function to tell SQS that the message has failed and that it should go back on the queue immediately?
If you're using the built-in AWS Lambda SQS integration, then simply throwing an exception (returning a non-success status from the Lambda function) will result in all the SQS messages in the event of that invocation being placed back in the queue immediately.
From the documentation:
When Lambda reads a batch, the messages stay in the queue but become
hidden for the length of the queue's visibility timeout. If your
function successfully processes the batch, Lambda deletes the messages
from the queue. If your function is throttled, returns an error, or
doesn't respond, the message becomes visible again. All messages in a
failed batch return to the queue, so your function code must be able
to process the same message multiple times without side effects.
Call the ChangeMessageVisibility API and set the visibility timeout of each failed message to 0 seconds. Then throw an exception.
If you received a batch of SQS messages and were able to successfully process some of them, explicitly delete those (successful) messages from the underlying SQS queue, set the visibility timeout of each remaining failed message to 0 seconds, and then throw an exception.
Setting a message's visibility timeout to 0 terminates the visibility timeout for that message and makes the message immediately visible to other consumers.
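A minimal Node.js sketch of that approach (AWS SDK for JavaScript v2, the QUEUE_URL env var, and the per-record logic are assumptions):

const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

async function processRecord(record) {
    JSON.parse(record.body); // hypothetical business logic
}

exports.handler = async (event) => {
    const failed = [];

    for (const record of event.Records) {
        try {
            await processRecord(record);
            // Explicitly delete successes so they are not retried with the batch.
            await sqs.deleteMessage({
                QueueUrl: process.env.QUEUE_URL,
                ReceiptHandle: record.receiptHandle,
            }).promise();
        } catch (err) {
            failed.push(record);
        }
    }

    if (failed.length > 0) {
        // Terminate the visibility timeout so failed messages are retryable immediately.
        await Promise.all(failed.map((r) =>
            sqs.changeMessageVisibility({
                QueueUrl: process.env.QUEUE_URL,
                ReceiptHandle: r.receiptHandle,
                VisibilityTimeout: 0,
            }).promise()));
        throw new Error(`${failed.length} message(s) failed`); // fail the invocation
    }
};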

AWS lambda: Execute function on timeout

I am developing a lambda function that migrates logs from an SFTP server to an S3 bucket.
Due to the size of the logs, the function sometimes times out, even though I have set the maximum timeout of 15 minutes.
try:
    logger.info(f'Migrating {log_name}... ')
    transfer_to_s3(log_name, sftp)
    logger.info(f'{log_name} was migrated successfully')
except Exception:
    # note: on a hard Lambda timeout this line never runs either
    logger.exception(f'{log_name} failed to migrate')
If transfer_to_s3() fails due to a timeout, the logger.info(f'{log_name} was migrated successfully') line won't be executed.
I want to ensure that in this scenario, I will somehow know that a log was not migrated due to timeout.
Is there a way to force lambda to perform an action, before exiting, in the case of a timeout?
Probably a better way would be to use SQS for that:
Log info ---> SQS queue ---> Lambda function
If the Lambda successfully moves the files, it removes the log info from the SQS queue. If it fails, the log info persists in the SQS queue (or goes to a DLQ for special handling), so the next Lambda invocation can handle it.
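Sketched in Node.js (the message shape and transferToS3 helper are hypothetical; the same idea applies in Python):

exports.handler = async (event) => {
    for (const record of event.Records) {
        const { logName } = JSON.parse(record.body); // hypothetical message shape
        // If this throws, or the function times out, the message stays on the
        // queue (or goes to the DLQ), so a later invocation retries the file.
        await transferToS3(logName); // hypothetical migration logic
    }
    // Returning normally lets the SQS integration delete the batch.
};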

SQS Lambda Integration - what happens when an Exception is thrown

The documentation states that
A Lambda function can fail for any of the following reasons:
The function times out while trying to reach an endpoint.
The function fails to successfully parse input data.
The function experiences resource constraints, such as out-of-memory
errors or other timeouts.
In my case, I'm using a C# Lambda with the SQS integration.
If the invocation fails or times out, every message in the batch will be returned to the queue, and each will be available for processing once the Visibility Timeout period expires
My question: what happens if, using the SQS Lambda integration (.NET):
My function throws an Exception
My SQS visibility timer is set to 15 minutes, max receive count is 1, and a DLQ is set up
Will the function retry?
Will it be put into the DLQ when Exceptions are thrown after all retries?
The moment your code throws an unhandled/uncaught exception, Lambda fails. If you have the max receive count set to 1, the message will be sent to the DLQ after the first failure; it will not be retried. If your max receive count is set to 5, for example, then the moment the Lambda function fails, the message will be returned to the queue after the visibility timeout has expired.
The reason for this behaviour is that you are giving Lambda permission to poll the queue on your behalf. If it gets a message, it invokes your function and gives you a single opportunity to process that message. If you fail, the message returns to the queue and Lambda continues polling the queue on your behalf; it does not care whether the next message is the same as the failed one or a brand-new message.
Here is a great blog post which helped me understand how these triggers work.

Lambda times out after ending

After finishing successfully, a Lambda function insists on timing out.
The function's triggering event is s3:ObjectCreated:*.
The function uses MongoDB Atlas and does so according to the optimisation suggestions on https://www.mongodb.com/blog/post/optimizing-aws-lambda-performance-with-mongodb-atlas-and-nodejs, including setting:
context.callbackWaitsForEmptyEventLoop = false; before using the DB.
The function also calls some AWS SDK methods with successfully resolved promises.
After finishing my code successfully and doing everything it set out to do, I get the following in my CloudWatch logs, (both the request's END event and its timeout):
START RequestId: XXX
... my logs...
END RequestId: XXX
REPORT RequestId: XXX Duration: 6001.12 ms Billed Duration: 6000 ms Memory Size: 1024 MB Max Memory Used: 49 MB
XXX Task timed out after 6.00 seconds
The function then repeats itself twice more with the same unfortunate result.
Any immediate suspects? Where should I look?
You need to call callback(null, <any>) in order to end your function handler and tell Lambda that your function executed successfully.
Without that, Lambda will retry the same invocation after a delay, and it will again finish without telling Lambda that it succeeded.
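A minimal sketch of a handler that signals success explicitly (doWork stands in for the actual S3/MongoDB processing):

exports.handler = (event, context, callback) => {
    context.callbackWaitsForEmptyEventLoop = false;

    doWork(event) // hypothetical processing logic returning a promise
        .then((result) => callback(null, result)) // tells Lambda the invocation succeeded
        .catch((err) => callback(err));
};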

aws lambda function triggering multiple times for a single event

I am using an AWS Lambda function to convert an uploaded WAV file in a bucket to MP3 format and later move the file to another bucket. It is working correctly, but there's a problem with triggering. When I upload small WAV files, the Lambda function is called once; but when I upload a large WAV file, the function is triggered multiple times.
I have googled this issue and found that Lambda is stateless, so it may be called multiple times (I'm not sure whether these triggers are for multiple uploads or the same upload).
https://aws.amazon.com/lambda/faqs/
Is there any method to call this function once for a single upload?
Short version:
Try increasing the timeout setting in your Lambda function configuration.
Long version:
I guess you are running into the Lambda function being timed out here.
S3 events are asynchronous in nature, and a Lambda function listening to S3 events is retried at least 3 times before the event is rejected. You mentioned your Lambda function is executed only once (with no error) for smaller uploads, upon which you do the conversion and re-upload. There is a possibility that the time required for conversion and re-upload is greater than the timeout setting of your Lambda function.
Therefore, you might want to try increasing the timeout setting in your lambda function configuration.
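For example, via the CLI (the function name is a placeholder):

aws lambda update-function-configuration --function-name my-function --timeout 900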
By the way, one way to confirm that your Lambda function is invoked multiple times is to look in the CloudWatch logs for occurrences of the request id (67fe6073-e19c-11e5-1111-6bqw43hkbea3) -
START RequestId: 67jh48x4-abcd-11e5-1111-6bqw43hkbea3 Version: $LATEST
This request id represents a specific event for which Lambda was invoked and should be the same for all Lambda executions responsible for the same S3 event.
Also, you can look at the execution time (Duration) in the following log line, which marks the end of one Lambda execution -
REPORT RequestId: 67jh48x4-abcd-11e5-1111-6bqw43hkbea3 Duration: 244.10 ms Billed Duration: 300 ms Memory Size: 128 MB Max Memory Used: 20 MB
If not a solution, it will at least give you some room to debug in the right direction. Let me know how it goes.
Any event executing Lambda several times is due to the retry behavior of Lambda, as specified in the AWS documentation.
Your code might raise an exception, time out, or run out of memory. The runtime executing your code might encounter an error and stop. You might run out of concurrency and be throttled.
There could be some error in the Lambda function which makes the client or service invoking it retry.
Use CloudWatch logs to find the error; resolving it could resolve the problem.
I too faced the same problem; in my case it was because of an application error, and resolving it helped me.
Recently AWS Lambda added a new property to change the default retry behaviour. Set the retry attempts to 0 (default 2) under the Asynchronous invocation settings.
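The same setting can be applied from the CLI (the function name is a placeholder):

aws lambda put-function-event-invoke-config --function-name my-function --maximum-retry-attempts 0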
For some in-depth understanding on this issue, you should look into message delivery guarantees. Then you can implement a solution using the idempotent consumers pattern.
The context object tells you which request ID you are currently handling. This ID won't change even if the same event fires multiple times. You could save this ID each time an event triggers, and check whether the ID has already been processed before processing a message.
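A sketch of that idempotency check in Node.js, using a hypothetical DynamoDB table (ProcessedEvents) keyed on the request ID:

const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event, context) => {
    try {
        // Record the request ID; the condition fails if we've seen it before.
        await ddb.put({
            TableName: 'ProcessedEvents', // hypothetical table
            Item: { requestId: context.awsRequestId },
            ConditionExpression: 'attribute_not_exists(requestId)',
        }).promise();
    } catch (err) {
        if (err.code === 'ConditionalCheckFailedException') {
            console.log('Duplicate delivery, skipping');
            return;
        }
        throw err;
    }

    await convertWavToMp3(event); // hypothetical wav-to-mp3 processing
};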
In the Lambda configuration, look for "Asynchronous invocation"; there is an option "Retry attempts", which is the maximum number of times to retry when the function returns an error.
Here you can also configure a dead-letter queue.
Multiple retries can also happen due to a read timeout. I fixed it with '--cli-read-timeout 0'.
e.g. if you are invoking the Lambda with the AWS CLI or a Jenkins execute-shell step:
aws lambda invoke --cli-read-timeout 0 --invocation-type RequestResponse --function-name ${functionName} --region ${region} --log-type Tail --payload '{}' out
I was also facing this issue earlier; try setting the retry count to 0 under 'Asynchronous invocation'.