AWS Lambda CloudWatch: Failed to deserialize log message

We ran into an issue with AWS Lambda and CloudWatch:
the agent fails when it tries to log the billing details to the console:
REPORT RequestId: ae61585c-9bda-480f-94e6-5b72f4ed7b17 Duration: 319.40 ms Billed Duration: 1257 ms Memory Size: 3072 MB Max Memory Used: 161 MB Init Duration: 936.79 ms
XRAY TraceId: 1-60ad0933-538a058068ffb9c91b0a0db9 SegmentId: 58c34b0d2f79867e Sampl
[2021-05-25T14:32:07.412Z ERROR cloudwatch_lambda_agent::logs::logs_server] Failed to deserialize log message.
Err(Error("expected value", line: 1, column: 375))
[
{
"time": "2021-05-25T14:27:01.598Z",
"type": "platform.end",
"record": {
"requestId": "ae61585c-9bda-480f-94e6-5b72f4ed7b17"
}
}
,{"time":"2021-05-25T14:27:01.598Z","type":"platform.report","record":{"requestId":"ae61585c-9bda-480f-94e6-5b72f4ed7b17","metrics":{"durationMs":319.40,"billedDurationMs":1257,"memorySizeMB":3072,"maxMemoryUsedMB":161,"initDurationMs":936.79},"tracing":}}}]
It started failing several weeks ago; apparently there is a bug on the AWS side.
Has anyone encountered the same error?

From the error message, it seems that the problem is probably caused by the fact that Active Tracing is enabled on the Lambda and the tracing information cannot be sent to the AWS X-Ray server (or the message cannot be interpreted). In my environment, changing Active Tracing from Active to Pass-Through solved the problem. The problem seems to be fundamentally on AWS's side, so I think you should contact AWS support.
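In case it is useful to others, here is a minimal boto3 sketch of making that change; the function name is a placeholder, and the same change can be made in the Lambda console:

import boto3

lambda_client = boto3.client("lambda")

# Switch X-Ray tracing from Active to PassThrough.
# "my-function" is a placeholder for the real function name.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    TracingConfig={"Mode": "PassThrough"},  # was "Active"
)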

Related

Having issues with running an AWS Lambda function

I am trying to get this stock data collector to work on AWS (https://github.com/JustinGuese/aws-lambda-stockdata-collector). I've followed the instructions, and whenever I run the function I get an error:
Response
{
"errorMessage": "2023-02-02T11:32:24.018Z 18fecfde-318c-47a5-8df5-aa9bdbc6f138 Task timed out after 180.11 seconds"
}
Function Logs
OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
running
START RequestId: 18fecfde-318c-47a5-8df5-aa9bdbc6f138 Version: $LATEST
['APPL']
loading data
Exception in thread Thread-1:
I'm not sure where I am going wrong. Any advice welcome. Thanks

"Task timed out after X seconds" error with Lambda AWS and Twilio for WhatsApp message

I was trying to implement a Lambda function to send WhatsApp messages with the Twilio service.
I have already uploaded the twilio npm package (I was getting the "cannot find twilio module" error, but I added the layer and I don't get the error anymore). I'm using Node 14 and my zipped npm package has the nodejs/node_modules... structure (not node14, but I understood it can work with both). Maybe this is why it's not working?
I got stuck after that. I keep getting the "task timed out" error. I changed the timeout from the default 3 seconds to 5 seconds, but it still fails.
What am I missing or doing wrong?
This is my code:
'use strict';
console.log('Trying to send a WhatsApp message...');

exports.handler = async (event) => {
    const accountSid = 'ACa4818d82a4d6----------'; // The hyphens are to hide credentials and phone numbers
    const authToken = '7e5d8205968af11----------';
    const client = require('twilio')("ACa4818d------", "7e5d8205968af11-------");
    // I even passed the parameters inline like this to troubleshoot
    client.messages
        .create({
            body: 'Hi, there!',
            from: 'whatsapp:+14------',
            to: 'whatsapp:+1-------'
        })
        .then(message => console.log(message.sid))
        .done();
};
This is the response in Lambda console:
Test Event Name
TestCon
Response
{
"errorMessage": "2021-12-05T04:39:26.463Z 74eb5536-7da6-4d96-bf8e-824230c85089 Task timed out after 5.01 seconds"
}
Function Logs
START RequestId: 74eb5536-7da6-4d96-bf8e-824230c85089 Version: $LATEST
2021-12-05T04:39:21.452Z undefined INFO Trying to send a WhatsApp message...
END RequestId: 74eb5536-7da6-4d96-bf8e-824230c85089
REPORT RequestId: 74eb5536-7da6-4d96-bf8e-824230c85089 Duration: 5005.62 ms Billed Duration: 5000 ms Memory Size: 128 MB Max Memory Used: 86 MB Init Duration: 176.11 ms
2021-12-05T04:39:26.463Z 74eb5536-7da6-4d96-bf8e-824230c85089 Task timed out after 5.01 seconds
Request ID
74eb5536-7da6-4d96-bf8e-824230c85089
I solved this by increasing the timeout.
I changed it from 5 seconds to 1 minute.
It looks like the first request in a while takes around 15 seconds; requests after that take milliseconds.
I was getting the same error in my AWS Lambda function. I checked Lambda Configuration -> General configuration; the Timeout was 3 seconds, and I increased it using the Edit button.
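For reference, a minimal boto3 sketch of raising the timeout programmatically; the function name is a placeholder, and the console edit described above achieves the same thing:

import boto3

lambda_client = boto3.client("lambda")

# Raise the function timeout from the 3-second default to 60 seconds.
# "send-whatsapp-message" is a placeholder function name.
lambda_client.update_function_configuration(
    FunctionName="send-whatsapp-message",
    Timeout=60,  # seconds
)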

AWS Lambda - Timeout

I have a simple Lambda function which calculates a result asynchronously. I can log the result and it seems to be correct, but for some reason the Lambda function doesn't return successfully; I am getting a timeout. If you look at the timestamps you can see that the result is calculated well before the timeout. The weird thing is that it works fine when I am using axios, but whenever I use fauna it doesn't work anymore, even though it does log the correct result... I have been sitting on this problem for days and have no clue what to do. I am using the Serverless Framework along with this template.
Response
{
"errorMessage": "2021-03-10T07:11:11.567Z 0180b87e-e01f-4527-8c7e-4c1dd5e3e354 Task timed out after 6.01 seconds"
}
Function Logs
START RequestId: 0180b87e-e01f-4527-8c7e-4c1dd5e3e354 Version: $LATEST
2021-03-10T07:11:05.811Z 0180b87e-e01f-4527-8c7e-4c1dd5e3e354 INFO Sending response: { statusCode: 200, body: '{"result":100}' }
END RequestId: 0180b87e-e01f-4527-8c7e-4c1dd5e3e354
REPORT RequestId: 0180b87e-e01f-4527-8c7e-4c1dd5e3e354 Duration: 6007.06 ms Billed Duration: 6000 ms Memory Size: 256 MB Max Memory Used: 76 MB Init Duration: 205.66 ms
2021-03-10T07:11:11.567Z 0180b87e-e01f-4527-8c7e-4c1dd5e3e354 Task timed out after 6.01 seconds
Any help would be much appreciated!
Found the issue. Within the handler I set context.callbackWaitsForEmptyEventLoop = false. Alternatively, when using middy, you can use this middleware.

AWS Lambda constantly receiving empty SQS event messages

I'm a relative n00b at AWS, so apologies if this is a stupid question.
I have an AWS Lambda written in Java. I also have an SQS queue that receives AWS S3 event messages. I've then created a Lambda trigger against the SQS queue so that my Lambda receives the S3 events as SQS messages and processes them appropriately.
It all works well. The only issue I have is that it seems like the Lambda is receiving notification of an SQS event message every 2 minutes, even when there are no messages in the SQS queue.
The Java code looks like this:
public class SQSEventHandler implements RequestHandler<SQSEvent, Void> {
    @Override
    public Void handleRequest(SQSEvent sqsEvent, Context context) {
        if (sqsEvent != null) {
            LOGGER.debug("Received SQS event: {}", sqsEvent.toString());
            ... do stuff...
If I look in the CloudWatch logs (I log using SLF4J), I can see that the Lambda is triggered with different SQS messages every 2 minutes, even during periods when there are no S3 event messages to process:
02:54:16 START RequestId: d5454080-8ea3-4c44-93e9-caa5bd903599 Version: $LATEST
02:54:16 [2020-02-13 02:54:16.220] - [d5454080-8ea3-4c44-93e9-caa5bd903599] DEBUG <package>.SQSEventHandler - Received SQS event: {}
02:54:16 END RequestId: d5454080-8ea3-4c44-93e9-caa5bd903599
02:54:16 REPORT RequestId: d5454080-8ea3-4c44-93e9-caa5bd903599 Duration: 1.05 ms Billed Duration: 100 ms Memory Size: 512 MB Max Memory Used: 161 MB
02:56:16 START RequestId: 9d5acbba-b96c-47e9-81c2-2d448e4ca6e9 Version: $LATEST
02:56:16 [2020-02-13 02:56:16.386] - [9d5acbba-b96c-47e9-81c2-2d448e4ca6e9] DEBUG <package>.SQSEventHandler - Received SQS event: {}
02:56:16 END RequestId: 9d5acbba-b96c-47e9-81c2-2d448e4ca6e9
02:56:16 REPORT RequestId: 9d5acbba-b96c-47e9-81c2-2d448e4ca6e9 Duration: 1.23 ms Billed Duration: 100 ms Memory Size: 512 MB Max Memory Used: 161 MB
02:58:16 START RequestId: 54bc4fa4-bcaf-4834-9185-09c9c7e2d757 Version: $LATEST
02:58:16 [2020-02-13 02:58:16.451] - [54bc4fa4-bcaf-4834-9185-09c9c7e2d757] DEBUG <package>.SQSEventHandler - Received SQS event: {}
02:58:16 END RequestId: 54bc4fa4-bcaf-4834-9185-09c9c7e2d757
02:58:16 REPORT RequestId: 54bc4fa4-bcaf-4834-9185-09c9c7e2d757 Duration: 1.01 ms Billed Duration: 100 ms Memory Size: 512 MB Max Memory Used: 161 MB
There are no other SQS queues with triggers to this Lambda.
As you can see, the SQS event object isn't null but doesn't produce anything in the toString() call.
I can't work out what the issue is - any assistance would be appreciated.
Unbeknownst to me, there was a CloudWatch rule set up to send a message to my Lambda every two minutes. Once this had been located, I disabled the rule and the Lambda was no longer triggered.
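In case someone needs to hunt down a similar rule, a minimal boto3 sketch; the Lambda ARN and rule name below are placeholders:

import boto3

events = boto3.client("events")

# Find rules that target the function, then disable the offending one.
rule_names = events.list_rule_names_by_target(
    TargetArn="arn:aws:lambda:us-east-1:123456789012:function:my-function"  # placeholder ARN
)["RuleNames"]
print(rule_names)
events.disable_rule(Name="every-two-minutes-rule")  # placeholder rule name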
What did your Lambda function do with the SQS messages?
If it processed them to completion, then you must delete them from the SQS queue; otherwise they will reappear after the Visibility Timeout expires. This is by design: it is how SQS deals with applications that receive messages and then die before they can complete processing of those messages.
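For illustration, a minimal boto3 sketch of the receive-then-delete pattern described above; the queue URL and the process() helper are placeholders, and the AWS SDK for Java exposes the equivalent ReceiveMessage and DeleteMessage calls:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

# Receive a message, process it, then delete it so it does not
# reappear once the visibility timeout expires.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    process(msg["Body"])  # process() is a hypothetical handler
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])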

Only logging errors with AWS Lambda

I'm trying to add a few lines of code so that, when my AWS Lambda function fails, it logs when it failed and with which input parameters. Following the documentation, I added these lines:
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.info('user {0}'.format(event["user"]))
They generate some information that is accessible from CloudWatch:
08:50:29 - START RequestId: 92d000ad-b01f-11e8-98a6-c32aa1e3e890 Version: $LATEST
08:50:31 - [INFO] 2018-09-04T08:50:31.781Z 92d000ad-b01f-11e8-98a6-c32aa1e3e890 user xxxxxx
08:50:31 - END RequestId: 92d000ad-b01f-11e8-98a6-c32aa1e3e890
08:50:31 - REPORT RequestId: 92d000ad-b01f-11e8-98a6-c32aa1e3e890 Duration: 2513.04 ms Billed Duration: 2600 ms Memory Size: 896 MB Max Memory Used: 37 MB
However, it seems that every single call of the Lambda function creates a log entry in CloudWatch. As it is, it's impossible to identify the logs associated with failures of the function. Is it instead possible to create log entries only when logging writes the information? Alternatively, can an S3 bucket be set up to store log files (associated with errors)?
Any reason you are not utilizing the log level?
logger.setLevel(logging.ERROR)
If you need to log all events, and CloudWatch would be a good place to do that, then you could consider creating a metric filter inside CloudWatch Logs to create alerts for all entries containing the keyword 'error', for example.
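To make the log-level suggestion concrete, here is a minimal sketch of a handler that stays quiet on success and writes an ERROR entry (with the offending input) only on failure; do_work() is a hypothetical stand-in for the real business logic:

import logging

logger = logging.getLogger()
logger.setLevel(logging.ERROR)  # suppress INFO entries; only errors are written

def lambda_handler(event, context):
    try:
        return do_work(event)  # do_work() is hypothetical
    except Exception:
        # Record the failure together with the input that caused it.
        logger.error('failed for user %s', event.get("user"), exc_info=True)
        raise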