Having issues with running aws lambda function - amazon-web-services

I am trying to get this stock data collector to work on AWS (https://github.com/JustinGuese/aws-lambda-stockdata-collector). I've followed the instructions, and whenever I run the function I get this error:
Response
{
"errorMessage": "2023-02-02T11:32:24.018Z 18fecfde-318c-47a5-8df5-aa9bdbc6f138 Task timed out after 180.11 seconds"
}
Function Logs
OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
running
START RequestId: 18fecfde-318c-47a5-8df5-aa9bdbc6f138 Version: $LATEST
['APPL']
loading data
Exception in thread Thread-1:
I'm not sure where I am going wrong. Any advice welcome. Thanks.

Related

AWS CLI in WSL2: "RequestTimeTooSkewed"

I executed the command aws s3 ls and got the following error message:
An error occurred (RequestTimeTooSkewed) when calling the ListBuckets operation: The difference between the request time and the current time is too large.
Please advise.
If you're using WSL, you can run wsl --shutdown in CMD or PowerShell. This ensures the next time you start a WSL session, it cold boots and fixes the time.
https://github.com/microsoft/WSL/issues/4245
AWS API requests are 'signed', and part of the information exchanged is a timestamp. If the timestamp is more than 900 seconds old, the request will be rejected.
This is done to prevent "replay attacks" where old requests are sent again.
You can fix this by correcting the Date and Time on the system where you are sending the request.
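If you want to measure the skew yourself before fixing the clock, here is a minimal sketch (my own addition, not part of the original answer; the endpoint choice is an assumption). It compares your local clock to the Date header AWS returns:

import datetime
import urllib.error
import urllib.request
from email.utils import parsedate_to_datetime

def check_clock_skew(url="https://s3.amazonaws.com"):
    # Any HTTPS response from AWS carries a Date header, even an error page
    try:
        headers = urllib.request.urlopen(url).headers
    except urllib.error.HTTPError as e:
        headers = e.headers
    server_time = parsedate_to_datetime(headers["Date"])
    local_time = datetime.datetime.now(datetime.timezone.utc)
    skew = abs((local_time - server_time).total_seconds())
    print(f"Clock skew: {skew:.0f} seconds (signed requests fail beyond ~900)")

check_clock_skew()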

Boto3 start_text_translation_job TooManyRequestsException

Error in Python running Boto3 start_text_translation_job
botocore.errorfactory.TooManyRequestsException: An error occurred (TooManyRequestsException) when calling the StartTextTranslationJob operation: Request failed due to too many requests.
I wrote a Python script to kick off batch translation from EN to 48 languages. The first 10 submitted fine, but the 11th one got the above error.
At first, I thought I had to slow down and put a sleep between the calls, but that was NOT the issue.
I tried to start a job using the AWS web console, and got a similar error:
Request failed due to too many requests.
This page on AWS Translate Limitations indicates that you can only have 10 translation jobs started at the same time.
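If you need to submit more than 10 jobs, one option is to poll for active jobs and wait for a free slot before submitting the next one. A rough sketch (the polling interval, and treating both SUBMITTED and IN_PROGRESS as occupying a slot, are my assumptions):

import time
import boto3

translate = boto3.client("translate")

def count_active_jobs():
    # Jobs that are still occupying one of the 10 concurrent slots
    active = 0
    for status in ("SUBMITTED", "IN_PROGRESS"):
        resp = translate.list_text_translation_jobs(Filter={"JobStatus": status})
        active += len(resp["TextTranslationJobPropertiesList"])
    return active

def start_job_when_slot_free(**job_kwargs):
    # Block until fewer than 10 jobs are active, then submit
    while count_active_jobs() >= 10:
        time.sleep(30)
    return translate.start_text_translation_job(**job_kwargs)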

aws lambda: Error: Runtime exited with error: signal: killed

I'm trying to pull a large file from S3 and write it to RDS using pandas dataframes.
I've been googling this error and haven't seen it anywhere. Does anyone know what this extremely generic-sounding error could mean? I've encountered memory issues previously, but expanding the memory removed that error.
{
"errorType": "Runtime.ExitError",
"errorMessage": "RequestId: 99aa9711-ca93-4201-8b1e-73bf31b762a6 Error: Runtime exited with error: signal: killed"
}
I got the same error when executing a Lambda that processes an image; only a few results come up when searching the web for this error.
Increase the AWS Lambda memory by 1.5x or 2x to resolve it, for example from 128 MB to 512 MB.
When this runtime error occurs, the Lambda function does not execute the remaining lines of code; moreover, it is not possible to catch the error and run the rest of the code.
You're hitting the memory limit because of boto3's parallel transfer of your file.
You could increase the memory for your Lambda, but that's cheating... you'll just pay more.
By default, the S3 transfer layer (used by both the CLI and boto3) downloads files larger than multipart_threshold=8MB using max_concurrency=10 parallel threads. That means it will use 80 MB for your data, plus threading overhead.
You could reduce it to max_concurrency=2, for example; that would use 16 MB and should fit within your Lambda's memory.
Please note that this may slightly decrease your download performance.
import boto3
from boto3.s3.transfer import TransferConfig

# Limit parallel download threads to cap memory usage
config = TransferConfig(max_concurrency=2)

s3 = boto3.client('s3')
s3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME', Config=config)
Reference: https://docs.aws.amazon.com/cli/latest/topic/s3-config.html
It is not timing out at 15 minutes, since that would log this error "Task timed out after 901.02 seconds", which the OP did not get. As others have said, he is running out of memory.
First of all, AWS Lambda is not meant for long-running, heavy operations like pulling large files from S3 and writing them to RDS.
This process can take too much time depending on the file size and data. The maximum execution time of a Lambda is 15 minutes, so whatever task you are doing in your Lambda should complete within the timeout you configured (the maximum is 15 minutes).
With large and heavy processing in Lambda you can get out-of-memory errors, out-of-time errors, or you may need to extend your processing power.
Another way of doing such large and heavy processing is AWS Glue jobs, an AWS-managed ETL service.
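If you do stay on Lambda, processing the file in chunks keeps peak memory roughly constant regardless of file size. A minimal sketch, assuming a CSV file and a PostgreSQL RDS instance (bucket, key, table name, and connection string are placeholders):

import boto3
import pandas as pd
from sqlalchemy import create_engine

s3 = boto3.client("s3")
body = s3.get_object(Bucket="my-bucket", Key="big-file.csv")["Body"]

engine = create_engine("postgresql+psycopg2://user:password@rds-host:5432/mydb")

# read_csv with chunksize yields DataFrames of 50,000 rows at a time,
# so the whole file is never held in memory at once
for chunk in pd.read_csv(body, chunksize=50_000):
    chunk.to_sql("target_table", engine, if_exists="append", index=False)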
The solution is to increase the AWS Lambda memory by 1.5x or 2x, because when this runtime error occurs the Lambda function does not execute any other line of code, and it is not possible to catch the error and run the rest of the code; the error acts as a signal to the Lambda execution environment to terminate the current execution.
To add: if anyone is using AWS Amplify, as in the project I was working on, there are still Lambdas under the hood, and you can access and configure them directly from the AWS Lambda console.

Lambda times out after ending

After finishing successfully, a Lambda function insists on timing out.
The function's triggering event is s3:ObjectCreated:*.
The function uses MongoDB Atlas and does so according to the optimisation suggestions on https://www.mongodb.com/blog/post/optimizing-aws-lambda-performance-with-mongodb-atlas-and-nodejs, including setting:
context.callbackWaitsForEmptyEventLoop = false; before using the DB.
The function also calls some AWS SDK methods with successfully resolved promises.
After my code finishes successfully and does everything it set out to do, I get the following in my CloudWatch logs (both the request's END event and its timeout):
START RequestId: XXX
... my logs...
END RequestId: XXX
REPORT RequestId: XXX Duration: 6001.12 ms Billed Duration: 6000 ms Memory Size: 1024 MB Max Memory Used: 49 MB
XXX Task timed out after 6.00 seconds
The function then repeats itself twice more with the same unfortunate result.
Any immediate suspects? Where should I look?
You need to call callback(null, <any>) in order to end your function handler and tell Lambda that your function executed successfully.
Without that, Lambda will retry the same invocation after a delay, and it will again finish without telling Lambda that it completed successfully.

aws lambda function triggering multiple times for a single event

I am using an AWS Lambda function to convert uploaded wav files in a bucket to mp3 format and later move the files to another bucket. It is working correctly, but there's a problem with triggering: when I upload small wav files, the Lambda function is called once, but when I upload a large wav file, the function is triggered multiple times.
I have googled this issue and found that Lambda is stateless, so it may be called multiple times (I'm not sure whether the triggers are for multiple uploads or the same upload).
https://aws.amazon.com/lambda/faqs/
Is there any method to call this function once for a single upload?
Short version:
Try increasing the timeout setting in your Lambda function configuration.
Long version:
I guess you are running into the Lambda function being timed out here.
S3 events are asynchronous in nature, and a Lambda function listening to S3 events is retried at least 3 times before the event is rejected. You mentioned your Lambda function executes only once (with no error) for smaller uploads, where you do the conversion and re-upload. There is a possibility that the time required for conversion and re-upload in your code is greater than the timeout setting of your Lambda function.
Therefore, you might want to try increasing the timeout setting in your Lambda function configuration.
By the way, one way to confirm that your Lambda function is invoked multiple times is to look in the CloudWatch logs for occurrences of the event id (67fe6073-e19c-11e5-1111-6bqw43hkbea3):
START RequestId: 67jh48x4-abcd-11e5-1111-6bqw43hkbea3 Version: $LATEST
This event id represents a specific event for which Lambda was invoked and should be the same for all Lambda executions responsible for the same S3 event.
Also, you can look at the execution time (Duration) in the following log line, which marks the end of one Lambda execution:
REPORT RequestId: 67jh48x4-abcd-11e5-1111-6bqw43hkbea3 Duration: 244.10 ms Billed Duration: 300 ms Memory Size: 128 MB Max Memory Used: 20 MB
If not a solution, it will at least give you some room to debug in the right direction. Let me know how it goes.
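If you would rather check this programmatically than in the console, here is a small sketch using the CloudWatch Logs API (the log group name and request ID are placeholders):

import boto3

logs = boto3.client("logs")

resp = logs.filter_log_events(
    logGroupName="/aws/lambda/my-function",
    filterPattern='"START RequestId: 67jh48x4-abcd-11e5-1111-6bqw43hkbea3"',
)
# One matching START line per invocation for that request id
print(f"Invocations: {len(resp['events'])}")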
Any event executing a Lambda several times is due to the retry behavior of Lambda, as specified in the AWS documentation:
Your code might raise an exception, time out, or run out of memory. The runtime executing your code might encounter an error and stop. You might run out of concurrency and be throttled.
There could be some error in the Lambda which makes the client or service invoking the Lambda function retry.
Use CloudWatch logs to find the error; resolving it could resolve the problem.
I too faced the same problem; in my case it was because of an application error, and resolving it helped me.
Recently AWS Lambda added a property to change the default retry behavior: set the retry attempts to 0 (default 2) under the Asynchronous invocation settings.
For some in-depth understanding on this issue, you should look into message delivery guarantees. Then you can implement a solution using the idempotent consumers pattern.
The context object tells you which request ID you are currently handling. This ID won't change even if the same event fires multiple times. You could save this ID every time an event triggers, and then check whether the ID has already been processed before processing a message.
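A sketch of that pattern using a DynamoDB conditional write (the table name "processed-events" and the handler body are my assumptions):

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    try:
        # The conditional put fails if this request ID was already recorded
        dynamodb.put_item(
            TableName="processed-events",
            Item={"request_id": {"S": context.aws_request_id}},
            ConditionExpression="attribute_not_exists(request_id)",
        )
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return  # duplicate delivery, already handled
        raise
    # ... process the event exactly once below ...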
In the Lambda configuration, look for "Asynchronous invocation"; there is a "Retry attempts" option, which is the maximum number of times to retry when the function returns an error.
Here you can also configure a dead-letter queue.
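The same setting can also be applied programmatically; a sketch assuming the function is named "my-function":

import boto3

lambda_client = boto3.client("lambda")
lambda_client.put_function_event_invoke_config(
    FunctionName="my-function",
    MaximumRetryAttempts=0,  # default is 2
)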
Multiple retries can also happen due to a read timeout. I fixed it with '--cli-read-timeout 0'.
For example, if you are invoking the Lambda with the AWS CLI or a Jenkins execute-shell step:
aws lambda invoke --cli-read-timeout 0 --invocation-type RequestResponse --function-name ${functionName} --region ${region} --log-type Tail --payload '{}' out
I was also facing this issue earlier; try setting the retry count to 0 under 'Asynchronous invocation'.