I have a structure similar to this:
SQS -> Lambda -> DLQ
When the Lambda is invoked asynchronously, as below, failed events are successfully added to the DLQ:
$ aws lambda invoke --function-name my-function --invocation-type Event --payload '{ "key": "value" }' response.json
But when the Lambda is triggered by new messages being added to SQS, failed messages do not get stored in the DLQ.
I found that invocations triggered when a new message is published to SQS are synchronous in nature:
Lambda polls the queue and invokes your function synchronously with an
event that contains queue messages.
Reference - https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html.
So I want either of the following:
the SQS event trigger to invoke the Lambda asynchronously,
or
messages to be stored in the DLQ on synchronous invocation of the Lambda as well.
A DLQ is used not to store error messages but to store failed events, so that they can be handled again later.
CloudWatch is used to store and show logs (including errors) for Lambda and any other AWS service.
The idea behind SQS triggering Lambda is that, in case of a Lambda failure, the message reappears on the queue and will be handled by Lambda again later.
It's the same idea as a DLQ, but implemented differently.
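If you still want repeatedly failing messages to end up in a DLQ with the SQS trigger, the usual approach is to attach a redrive policy to the source queue itself: SQS then moves a message to the DLQ after maxReceiveCount failed receives, with no asynchronous invocation involved. A minimal sketch with boto3 (the queue URL and ARN are placeholders):

import json
import boto3

sqs = boto3.client("sqs")

# Placeholders - substitute your own source queue URL and DLQ ARN.
source_queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/source-queue"
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:my-dlq"

# After 3 failed processing attempts (receives), SQS itself moves
# the message to the DLQ.
sqs.set_queue_attributes(
    QueueUrl=source_queue_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        )
    },
)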
Related
The architecture for a service includes the following:
[S3 event notification -> EventBridge rule triggered -> same message sent to both Lambdas]
There is a DLQ configured for the EventBridge rule and a DLQ for each Lambda. The former is for issues with delivery of the message to the Lambdas; the latter are for issues within the Lambdas themselves.
Let's say Lambda A processes the message successfully but Lambda B does not, and after exhausting the retry policy, Lambda B sends the message from EventBridge to its own DLQ, since the error is due to a code issue in the Lambda B function.
After updating the code for Lambda B, what is the recommended approach to send the message in the DLQ back to EventBridge for Lambda B to process it again, without sending this message to Lambda A?
Note: The EventBridge sends the same S3 notification message to both Lambdas.
I have an SQS queue (Q) that receives messages via the onFailure "destinations" setting of a Lambda function (F). The Lambda function is triggered by an EventBridge event bus rule.
My question is: Can I configure the redrive policy of queue Q so that I can redrive messages directly to function F?
Currently, I have set the redrive allow policy to allowAll, but the "Start DLQ Redrive" button is disabled in the console. Looking at the configuration settings for the redrive allow policy, I get the feeling that only other queues can be a target for a redrive.
What confuses me is that my goal was to use the onFailure option of the "destinations" feature. Destinations can only be used when a function is invoked asynchronously, and queues trigger Lambdas synchronously. So if I were to put a queue in front of my Lambda function F so that it could be a target for a redrive, I would not be able to use the onFailure destination.
It's not possible to send an event payload from queue Q to Lambda F with redrive. Redrive works by sending messages from the DLQ back to the source queue, not to a Lambda target. Consider, too, that the SQS message structure differs from that of EventBridge events, which would confuse your Lambda.
Check out event replay as an alternative, or add a Lambda function that periodically reads from the DLQ and resubmits the events.
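A rough sketch of that second option, reading the destination records from queue Q and re-invoking F asynchronously (names and URLs are placeholders, and this assumes the standard Lambda destinations invocation record, where the original event sits under requestPayload):

import json
import boto3

sqs = boto3.client("sqs")
lam = boto3.client("lambda")

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/Q"  # placeholder

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    record = json.loads(msg["Body"])
    # The onFailure destination wraps the original event, so unwrap it
    # before resubmitting.
    original_event = record.get("requestPayload", record)
    lam.invoke(
        FunctionName="F",  # placeholder function name
        InvocationType="Event",  # asynchronous, so destinations apply again
        Payload=json.dumps(original_event),
    )
    # Delete from the DLQ only after a successful resubmit.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])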
I have set up a Lambda function and an SQS queue. I want the events from the SQS queue to be consumed by the Lambda function.
SQS configuration
All the other configuration is set to match the default.
Lambda trigger
Lambda configuration
The code used is from the sqs-poller template (and the configuration too).
Code configuration
I'm using the following code to send the event, and I run it with the following command:
AWS_SDK_LOAD_CONFIG=true AWS_SHARED_CREDENTIALS_FILE=./credentials node sqs.js
That works fine because I'm seeing the messages in the monitoring panel of the SQS service.
Any idea why events are not being consumed by my lambda function?
It would appear that you have two competing concepts in your architecture.
Amazon SQS and AWS Lambda
When an Amazon SQS queue is configured as a trigger for an AWS Lambda function, the Lambda service polls the SQS queue looking for messages. When messages are found, the Lambda function is invoked, with the messages passed to the function via the event variable.
The Lambda function can then process those messages, reading the detail of the messages from the event variable.
If the Lambda function completes without error, the Lambda service automatically deletes the messages from the SQS queue. If there was an error in the function, the messages automatically reappear on the SQS queue after the visibility timeout has expired.
At no time does the AWS Lambda function actually call the Amazon SQS service to receive or delete messages. Rather, it is given the messages when it is invoked.
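For illustration, a minimal handler behind an SQS trigger looks something like this (Python here; note that it makes no SQS API calls at all):

import json

def handler(event, context):
    # The Lambda service delivers a batch of queue messages in event["Records"].
    for record in event["Records"]:
        body = json.loads(record["body"])
        print("processing message", record["messageId"], body)
    # Returning normally tells the Lambda service to delete the batch;
    # raising an exception instead makes the messages reappear on the
    # queue after the visibility timeout, so they are retried.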
SQS Poller
You mention that you are using an sqs-poller class. I'm not sure whether you are referring to Receiving Messages Using the QueuePoller Class in Amazon SQS - AWS SDK for Ruby or @jishimi/sqs-poller - npm.
Nonetheless, polling is a traditional way that worker processes retrieve messages from an SQS queue, and then delete the messages after they are processed. The process is:
They check whether there are messages available in the SQS queue
If so, they invoke a worker
When the worker is finished, they delete the message
You should notice that these are the same steps that the AWS Lambda service does when SQS is configured as a trigger for an AWS Lambda function. Therefore, using a polling architecture is incompatible with using SQS as a trigger for an AWS Lambda function.
You should pick one or the other, not both.
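To make the conflict concrete, this is roughly what a hand-rolled poller does, and it is exactly what the event source mapping already does on your behalf (the queue URL is a placeholder, and do_work stands in for your own processing):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

while True:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        do_work(msg["Body"])  # hypothetical worker function
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])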
I deployed a Lambda function, an SQS standard queue, and a dead-letter queue on AWS, and I configured maxReceiveCount on the queue to retry before moving events to the DLQ. The Lambda pulls events from the SQS queue in batches and processes each event sequentially. My question is about how retry works in case of an error: there are two retry settings, one on the Lambda (maximumRetryAttempts) and the other on SQS with the DLQ. Should I disable the Lambda one?
In the function, when an event is processed successfully it calls deleteMessage on SQS to delete it. If any event throws an exception, the function rethrows it to Lambda to make the retry happen, so that the successful events are not retried.
But Lambda itself has a maximumRetryAttempts; should I set it to 0? Otherwise, will it retry before returning to SQS? And if I don't disable it, will the retry process the whole batch of events, including the successful ones?
I'm not sure which maximumRetryAttempts on Lambda you are referring to, but when you use SQS with Lambda through an event source mapping, as is done by default, there is no retry parameter on the Lambda side.
The only retry that applies is the one set on SQS, not on Lambda.
The retry option for Lambda I can think of, and maybe the one you are thinking of as well, is for asynchronous invocation. This does not apply to SQS, as your Lambda is invoked synchronously with SQS:
Lambda polls the queue and invokes your Lambda function synchronously with an event that contains queue messages.
A Lambda function can be invoked in three different ways:
Lambda reads from an event source and invokes the function. Ex: SQS, Kinesis, etc.
Function invoked synchronously. Ex: API Gateway, ELB, etc.
Function invoked asynchronously. Ex: S3 events, SNS, CloudWatch Events, etc.
The "Retry attempts" setting is applicable only to asynchronous invocations (option 3 above).
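That setting lives in the function's event invoke config. If you want to pin it to 0 explicitly, here is a sketch with boto3 (the function name is a placeholder); note that it has no effect on the synchronous SQS path:

import boto3

lam = boto3.client("lambda")

# Applies only to asynchronous invocations (e.g. from S3, SNS, EventBridge);
# an SQS event source mapping never consults this setting.
lam.put_function_event_invoke_config(
    FunctionName="my-function",  # placeholder
    MaximumRetryAttempts=0,
)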
For SQS failures, we have two options:
A DLQ on the SQS queue itself.
A destination on the Lambda. This could be SNS, another Lambda, EventBridge, or another SQS queue. With this option, we can send both failure and success events.
Note: We don't need to call deleteMessage within the Lambda; the Lambda poller deletes the message from SQS when the Lambda returns successfully.
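On the worry about re-running events that already succeeded in a failed batch: instead of calling deleteMessage yourself, the event source mapping can be told to retry only the failed messages via a partial batch response. A sketch, assuming ReportBatchItemFailures is enabled on the event source mapping, with process standing in for your own logic:

import json

def handler(event, context):
    failures = []
    for record in event["Records"]:
        try:
            process(json.loads(record["body"]))  # hypothetical business logic
        except Exception:
            # Report only this message as failed; the successful ones in
            # the batch are deleted and will not be retried.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}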
I'm trying to send an S3 event message to RabbitMQ by invoking an AWS Lambda function. I have configured SQS as my dead-letter queue (DLQ).
I know the message is sent to the DLQ when invocation of the Lambda fails, or in situations like timeouts or resource constraints.
My question is: I want to send the event message to the DLQ from inside the Lambda function on certain conditions, for example if RabbitMQ is down, or some other condition of my interest.
Is there any possibility of doing this? Should I throw an exception, or is there some better approach to send the event message to the DLQ?
I'm using Java for development and connecting to RabbitMQ from my Lambda function.
The DLQ is simply an SQS queue, so you can send a message to it as you would to any other queue. You would want the message formatted the same way that Lambda natively formats messages it puts in the DLQ, so that whatever processing you have on the DLQ behaves the same for all messages. You would also want to ensure that the Lambda is treated as successfully executed in this case, so that the normal DLQ process doesn't pick up the same message twice.
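A sketch of that idea (Python here for brevity; the Java SDK calls are analogous). The attribute names mirror the ones Lambda itself attaches to native DLQ messages (RequestID, ErrorCode, ErrorMessage), but verify the exact shape against a natively dead-lettered message; publish_to_rabbitmq stands in for your own publishing code:

import json
import os
import boto3

sqs = boto3.client("sqs")
DLQ_URL = os.environ["DLQ_URL"]  # assumed to be configured on the function

def handler(event, context):
    try:
        publish_to_rabbitmq(event)  # hypothetical RabbitMQ publish
    except ConnectionError:
        # RabbitMQ is down: dead-letter the event ourselves, mimicking
        # the shape of Lambda's native DLQ messages...
        sqs.send_message(
            QueueUrl=DLQ_URL,
            MessageBody=json.dumps(event),
            MessageAttributes={
                "RequestID": {"DataType": "String", "StringValue": context.aws_request_id},
                "ErrorCode": {"DataType": "Number", "StringValue": "500"},
                "ErrorMessage": {"DataType": "String", "StringValue": "RabbitMQ unreachable"},
            },
        )
        # ...and return normally so the invocation counts as a success and
        # the event is not dead-lettered a second time.
    return "ok"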
In the DLQ setting of a Lambda you specify an SNS topic or an SQS queue. In your setup you have configured the DLQ to be an SQS queue, which is a regular SQS queue. Using the SQS Java SDK you can post a message to that queue.
Here are a few references:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-send-message.html
To get the Queue URL you can use these:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueUrl.html
Or through Java:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/sqs/AmazonSQSClient.html#getQueueUrl-java.lang.String-
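For completeness, the lookup-then-send flow sketched in Python (the Java equivalents are the GetQueueUrl and SendMessage calls linked above; the queue name is a placeholder):

import boto3

sqs = boto3.client("sqs")

# Resolve the queue URL from the queue name, then post to it.
queue_url = sqs.get_queue_url(QueueName="my-dlq")["QueueUrl"]  # placeholder name
sqs.send_message(QueueUrl=queue_url, MessageBody='{"key": "value"}')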