How to trigger a Lambda when messages are published to SQS?

I am trying to implement lambda1, which should be triggered when messages are published to SQS. I am able to send messages to the SQS queue and receive them.
I created the SQS lambda template as follows:
GetPatientStatusSQS:
  Type: AWS::SQS::Queue
  Properties:
    MaximumMessageSize: 1024
    QueueName: !Sub "${EnvironmentName}-GetPatientStatusSQS"
    VisibilityTimeout: 30
I checked the AWS documentation but couldn't find any example showing how to trigger a Lambda when messages are published to an SQS queue.
I found this link, Can an AWS Lambda function call another, but I'm not sure it's helpful.
How do I update the SQS template above so it triggers lambda1?

As of June 28, 2018, Lambda functions can be triggered by SQS events.
All you need to do is subscribe your Lambda function to the desired SQS queue.
Go to the SQS console, click on your queue, then Queue Actions -> Configure Trigger for Lambda Function.
Set the ARN of the Lambda you want to send messages to, and that's it: your function will now be triggered by SQS.
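If you'd rather express that console step in CloudFormation, the same wiring can be done with an event source mapping. A minimal sketch against the queue from the question; the function's logical ID (Lambda1 here) is an assumption:

MyEventSourceMapping:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    EventSourceArn: !GetAtt GetPatientStatusSQS.Arn
    FunctionName: !Ref Lambda1   # assumed logical ID of your AWS::Lambda::Function
    BatchSize: 10
    Enabled: true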
Keep in mind that your function will process a batch of up to 10 messages per invocation.
If you think you may run into concurrency issues, you can then limit your function's concurrency to 1.
Here's a sample template you can use to wire SQS and Lambda together.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example of processing messages on an SQS queue with Lambda
Resources:
  MySQSQueueFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Events:
        MySQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt MySqsQueue.Arn
            BatchSize: 10
  MySqsQueue:
    Type: AWS::SQS::Queue
From the docs
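One note on permissions: when you use the SAM SQS event type as above, SAM adds the queue permissions to the function's execution role for you. If you instead wire things up with a raw AWS::Lambda::EventSourceMapping, the execution role needs to be able to poll the queue itself. A sketch of the required statement (the queue's logical ID is assumed):

# Statement to add to the function's execution role policy
- Effect: Allow
  Action:
    - sqs:ReceiveMessage
    - sqs:DeleteMessage
    - sqs:GetQueueAttributes
  Resource: !GetAtt MySqsQueue.Arn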

Related

How to implement a Dead Letter Queue using AWS

I'm trying to implement a dead-letter queue for failed Lambda events. I created an SQS queue and two Lambdas: one only throws an error, and the other listens to the SQS queue, is triggered when the queue receives a message, and logs the event to CloudWatch.
In the AWS management console, I can see that the handler.function Lambda (the one that throws an error) has a dead-letter queue configured.
I can also see that the handler.dispatcher function (the one that listens to SQS) is configured correctly: if I send a message to the SQS queue, the CloudWatch logs show it being triggered as expected.
Although everything seems to be configured correctly, I am unable to get handler.function to send an event to SQS when it fails, in order to trigger the handler.dispatcher Lambda.
I've watched multiple videos and read the documentation, but I have not been able to figure out why this is not working. I've also tried the onError configuration, with no luck there either.
handler.js
module.exports.function = async (event) => {
  throw new Error('Throwing error 🚨')
}

module.exports.dispatcher = async (event) => {
  try {
    console.log(`Dispatched Event: ${JSON.stringify(event)}`)
  } catch (error) {
    console.log(error)
    return error
  }
}
serverless.yml
service: dlq-example
frameworkVersion: '3'
provider:
  name: aws
  runtime: nodejs14.x
  iam:
    role:
      statements:
        # Broad demo permissions: every action on every resource
        - Effect: Allow
          Action: '*'
          Resource: '*'
plugins:
  - serverless-plugin-lambda-dead-letter
functions:
  hello:
    handler: handler.function
    description: Throws error.
    deadLetter:
      targetArn:
        GetResourceArn: DeadLetterQueue
  dispatcher:
    handler: handler.dispatcher
    description: Dispatcher function. Configured to be triggered by events sent to SQS and handle them.
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - DeadLetterQueue
              - Arn
resources:
  Resources:
    DeadLetterQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: dlq
Here are images of what I see in the AWS management console:
Screenshot of handler.function lambda (the one that has the DLQ configured)
Screenshot of the handler.dispatcher function (The one that listens to SQS and handles each message)
Screenshot of the SQS queue and the lambda trigger configuration.
It seems to me there might be a disconnect in understanding how DLQ behavior works for Lambda.
First and foremost, the DLQ feature in AWS Lambda is only available for triggers that invoke the function asynchronously, like S3, SNS, IoT, etc.
For the SQS event source, Lambda polls the queue and invokes the function synchronously with an event that contains queue messages.
Hence, setting a DLQ on the Lambda for an SQS event source does not work; it will never be triggered.
From the context of your problem, I think what you are trying to achieve is:
1. A mechanism to capture SQS messages that fail to process in a Lambda function (preferably into a designated DLQ).
2. A mechanism to re-process these failed messages using a different Lambda function.
Assuming my understanding of the problem is correct, you can follow the steps below to achieve it.
SQS provides a feature where, if the consumer of a queue fails to process a message a certain number of times, the failed message is moved to a DLQ. (This DLQ is configured at the primary queue level, via a redrive policy.)
You can then follow your existing pattern and create a trigger on this DLQ to process the failed messages with the dispatcher Lambda. You may use this helpful guide to set these up.
In this approach, you configure the DLQ at the queue level instead of on the Lambda.
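A sketch of what that looks like in the question's resources section; the primary queue's name and the maxReceiveCount value are assumptions:

resources:
  Resources:
    PrimaryQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: primary-queue   # assumed; the queue your first Lambda consumes from
        RedrivePolicy:
          deadLetterTargetArn:
            Fn::GetAtt: [DeadLetterQueue, Arn]
          maxReceiveCount: 3       # receive attempts before SQS moves a message to the DLQ
    DeadLetterQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: dlq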
I hope this would help you in your design.
For anyone who stumbles across this issue in the future: my configuration was correct, but I was missing an SQS queue policy. Once I configured a policy, everything worked as expected.
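The poster doesn't show the policy they added. For reference only, a queue policy of roughly this shape grants another principal the right to send to the DLQ; this is a sketch, not the poster's actual policy, and the principal and names are assumptions you should scope down:

DeadLetterQueuePolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    Queues:
      - Ref: DeadLetterQueue
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com   # assumed principal; restrict in practice
          Action: sqs:SendMessage
          Resource:
            Fn::GetAtt: [DeadLetterQueue, Arn]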

How to provide multiple SQS queue names in an AWS Lambda trigger CloudFormation template

I want my Lambda to be triggered by two different SQS queues. In my CloudFormation template I wrote the following, but my stack is not getting created and I'm getting the error message below:
Events:
  SQSEvent:
    Type: SQS
    Properties:
      Queues:
        - !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${QueueName}
        - !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${DLQQueueName}
      BatchSize: 1
      Enabled: true
Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document.
Number of errors found: 1. Resource with id [MyLambda] is invalid. Event with id [SQSEvent] is invalid. No Queue (for SQS) or Stream (for Kinesis, DynamoDB or MSK) or Broker (for Amazon MQ) provided.
Can someone please help me resolve this issue? I appreciate your help!
Thanks!
You will want to use Queue (singular). The SAM SQS event type takes a single queue ARN in its Queue property; Queues is not a recognized key, which is why the transform reports that no queue was provided:
Events:
  SQSEvent:
    Type: SQS
    Properties:
      Queue: !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${QueueName}
      BatchSize: 1
      Enabled: true
To listen to the second queue as well, add another event of the same shape (see the answer below).
You can check your serverless setup against these templates:
https://carova.io/snippets/serverless-aws-create-sqs-queue-template
This one shows the whole setup, with the SQS queue subscribed to an SNS topic and then triggering the AWS Lambda function:
https://carova.io/snippets/serverless-aws-sqs-queue-subscribed-to-sns-topic
You can write your template as given below:
Events:
  SQSEvent1:
    Type: SQS
    Properties:
      Queue: !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${QueueName}
      BatchSize: 1
      Enabled: true
  SQSEvent2:
    Type: SQS
    Properties:
      Queue: !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${DLQQueueName}
      BatchSize: 1
      Enabled: true

Processing multiple AWS SQS messages with one Lambda function

I have set up an SQS queue where S3 paths are pushed whenever there is a file upload.
I receive tens of small CSV files, and I want to hold them in the SQS queue and trigger the Lambda only once, when all the files have arrived within a specific window, say 5 minutes.
Here is my CloudFormation code:
LambdaFunctionEventSourceMapping:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    BatchSize: 5000
    MaximumBatchingWindowInSeconds: 300
    Enabled: true
    EventSourceArn: !GetAtt EventQueue.Arn
    FunctionName: !GetAtt QueueConsumerLambdaFunction.Arn
EventQueue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: Event-Queue
    DelaySeconds: 10
    VisibilityTimeout: 125
    ReceiveMessageWaitTimeSeconds: 10
QueueConsumerLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: queue-consumer
    Runtime: python3.7
    Code: ./queue-consumer
    Handler: main.lambda_handler
    Role: !GetAtt QueueConsumerLambdaExecutionRole.Arn
    Timeout: 120
    MemorySize: 512
    ReservedConcurrentExecutions: 1
The deployment works fine, but if I push 3 files to the S3 bucket, SQS triggers 3 different Lambda invocations asynchronously, which I don't want. I need one Lambda invocation to receive all the messages in the queue resulting from the S3 events and process them. Is there something wrong with my SQS configuration?
What you are observing is likely due to the parallel pollers the Lambda service uses to read from your SQS queue. These pollers are separate from your concurrency setting, and you have no direct control over them; Lambda starts with five and can scale that number up with traffic.
Each poller picks up some messages from the queue, and your function is then invoked with those messages in turn. Sadly, you can't change this polling behaviour; it is how SQS and Lambda are wired together on the AWS side.
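If the goal is simply to cap how many invocations run in parallel, note that event source mappings also support a maximum concurrency setting (a later addition, not mentioned in the answer above; the minimum allowed value is 2). A sketch reusing the question's resources:

LambdaFunctionEventSourceMapping:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    BatchSize: 5000
    MaximumBatchingWindowInSeconds: 300
    Enabled: true
    EventSourceArn: !GetAtt EventQueue.Arn
    FunctionName: !GetAtt QueueConsumerLambdaFunction.Arn
    ScalingConfig:
      MaximumConcurrency: 2   # lowest allowed value; caps concurrent invocations from this queue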

Configure a dead-letter queue for a queue-to-Lambda setup in Amazon SAM YAML

Trying to get a SAM YAML script to properly set up my Lambda. I have a Lambda hooked to a queue being created, which is just a simple:
myQueue:
  Type: AWS::SQS::Queue
myLambda:
  Type: AWS::Serverless::Function
  Properties:
    Events:
      myQueueEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt myQueue.Arn
(with a bunch of other properties taken out)... As far as I can tell, it looks like I should be able to add a DeadLetterConfig and point it at another queue, but wherever I try to put it, it doesn't work.
Essentially, the behaviour I'm looking for is: if I put a value into the queue, it automatically pops out of the queue into the Lambda. If the Lambda errors in any way (e.g. throws an exception), the item ends up in the dead-letter queue; otherwise it is consumed and disappears. Am I just misunderstanding, and this is not possible out of the box?
I figured it out: you actually put it on the queue, e.g.:
RedrivePolicy:
  deadLetterTargetArn: !GetAtt deadLetterQueue.Arn
  maxReceiveCount: 2
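In context, with the resources from the question (the deadLetterQueue resource itself is assumed here):

myQueue:
  Type: AWS::SQS::Queue
  Properties:
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt deadLetterQueue.Arn
      maxReceiveCount: 2   # receive attempts before SQS moves the message to the DLQ
deadLetterQueue:
  Type: AWS::SQS::Queue

With this in place, a message whose processing throws is returned to the queue after the visibility timeout, retried, and moved to deadLetterQueue once it has been received maxReceiveCount times.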

AWS SQS doesn't reliably trigger Lambda

I've set up a small serverless app using Lambda and SQS.
In my case, I wanted to trigger a Lambda every time a message is added to an SQS queue.
The functions in my serverless.yml:
functions:
  collectGame:
    handler: js/collect.collectGame
    memorySize: 128
    timeout: 10
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - gameRequestQueue
              - Arn
      - http:
          method: post
          cors:
            origin: "https://my-api-url.com"
          path: get/game/{id}
          private: true
          request:
            parameters:
              paths:
                id: true
I tested the process by sending 31 messages to the queue at once, but realized that only 9 Lambdas were executed (by looking at the CloudWatch logs). I looked at the queue and can confirm that it was filled with all the messages and that it was empty after the 9 Lambdas had been triggered.
I'd expect 31 Lambda executions, but that's not the case. Does anyone know potential reasons why my Lambdas are not being triggered once per message?
Your Lambda function is probably being invoked with multiple messages. You should be able to set the BatchSize to 1 when you create the event source mapping if you only want one message to be sent per Lambda invocation.
It looks like you are using the Serverless Framework. See their SQS event documentation for setting the batch size.
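In the Serverless Framework syntax from the question, that would look roughly like this (a sketch; batchSize is the framework's property name for this setting):

events:
  - sqs:
      arn:
        Fn::GetAtt:
          - gameRequestQueue
          - Arn
      batchSize: 1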
For anyone using AWS SAM, batch size is covered in the documentation under the subheading 'Configuring a Queue as an Event Source'. Here is the code that works for me to set this up in the YAML, together with a DLQ:
# add an event trigger in the properties section of your function
Events:
  MySQSEvent:
    Type: SQS
    Properties:
      Queue: !GetAtt MySqsQueueName.Arn
      BatchSize: 1

# then define the queue
MySqsQueueName:
  Type: AWS::SQS::Queue
  Properties:
    VisibilityTimeout: 800
    ReceiveMessageWaitTimeSeconds: 10
    DelaySeconds: 10
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt MyDLQueue.Arn
      maxReceiveCount: 2

# define a dead letter queue to handle bad messages
MyDLQueue:
  Type: AWS::SQS::Queue
  Properties:
    VisibilityTimeout: 900
Hope this helps someone - this took me ages to work out for my app!
I was also facing the exact same issue. The problem was in my Lambda function.
If the batch size is more than 1, then a single Lambda invocation receives multiple SQS messages (up to the batch size). Handle all of them in the Lambda by iterating through the messages.
Check your event's Records array for multiple messages:
{ Records: [ {..}, {..}, {..} ] }
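A minimal handler shape that accounts for this, in the same Node.js style as the handlers earlier in the thread (a sketch; it assumes the message bodies are JSON):

module.exports.handler = async (event) => {
  // SQS can deliver up to batchSize messages per invocation; handle each one.
  for (const record of event.Records) {
    const body = JSON.parse(record.body) // record.body is always a string
    console.log(`Processing message ${record.messageId}`, body)
  }
}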