I have set up an SQS queue where S3 paths are pushed whenever a file is uploaded.
So I have a setup where I'll receive tens of small CSV files, and I want to hold them in an SQS queue and trigger the Lambda only once, after all the files have arrived within a specific window, say 5 minutes.
Here is my CF code
LambdaFunctionEventSourceMapping:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    BatchSize: 5000
    MaximumBatchingWindowInSeconds: 300
    Enabled: true
    EventSourceArn: !GetAtt EventQueue.Arn
    FunctionName: !GetAtt QueueConsumerLambdaFunction.Arn

EventQueue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: Event-Queue
    DelaySeconds: 10
    VisibilityTimeout: 125
    ReceiveMessageWaitTimeSeconds: 10

QueueConsumerLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: queue-consumer
    Runtime: python3.7
    Code: ./queue-consumer
    Handler: main.lambda_handler
    Role: !GetAtt QueueConsumerLambdaExecutionRole.Arn
    Timeout: 120
    MemorySize: 512
    ReservedConcurrentExecutions: 1
The deployment works fine, but if I push 3 files to the S3 bucket, SQS triggers 3 different Lambda invocations asynchronously, which I don't want. I need one Lambda invocation to receive all the messages that are in the queue as a result of the S3 events and process them. Is there something wrong with my SQS configuration?
What you are observing is likely due to the parallel pollers that the Lambda service uses to read from your SQS queue. These pollers are separate from the concurrency setting, and you have no control over them; Lambda starts with five of them.
Each poller picks up some messages from the queue, and your function is then invoked with each batch in turn. Unfortunately you can't change this behaviour, as this is how the SQS-to-Lambda integration works on the AWS side.
Related
I have a SAM stack with one Lambda function, one SQS queue, and one DLQ. Messages in the SQS queue act as the event source for the Lambda. The Lambda has ReservedConcurrentExecutions set to 1. The batch size of the event (from SQS to Lambda) is also 1. The Lambda's timeout is 300 seconds.
The SQS queue is a FIFO queue with ContentBasedDeduplication set to true. Its VisibilityTimeout is 400 seconds and ReceiveMessageWaitTimeSeconds is 10.
The queue has a DLQ linked to it with a redrive policy of maxReceiveCount 5.
All messages sent to the FIFO queue have the same messageGroupId.
The idea behind this is to ensure all messages are processed in FIFO order and that no message is ever processed twice.
Also, since the Lambda's ReservedConcurrentExecutions is set to 1 and the messageGroupId is the same across all messages, I assumed this Lambda would not throttle in any scenario.
But it is still getting throttled.
I can't seem to find any issue with the configuration of my stack that could cause this. Does anyone have any insight into why and how this scenario is possible? Or is there a way to find out which message was throttled?
A point worth mentioning: the throttling did not happen while the number of messages in the queue was small. As soon as the queue grew above 1,000 messages, the throttle errors started appearing, although the throttle count at any given time was never more than 1, and the throttles happened randomly, never in any fixed pattern. Eventually all messages were processed, since nothing arrived in the DLQ by the time processing finished.
I had read the following in the AWS documentation:
To allow your function time to process each batch of records, set the source queue's visibility timeout to at least six times the timeout that you configure on your function. The extra time allows for Lambda to retry if your function is throttled while processing a previous batch.
In my case this guidance is not followed: my queue's visibility timeout (400 seconds) is only 100 seconds more than the Lambda's timeout (300 seconds), rather than six times it, which would be 1,800 seconds. Could this be the cause of my issue?
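Applying the quoted guidance to my stack would presumably look something like this (just the relevant attribute; 1800 = 6 x 300):

CreateJobsQueue:
  Type: AWS::SQS::Queue
  Properties:
    # 6 x the function's 300-second timeout, per the AWS recommendation above
    VisibilityTimeout: 1800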
Following is my CloudFormation script for the Lambda and SQS resources:
CreateJobsQueue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: !Sub '${AWS::StackName}-CreateJobsQueue.fifo'
    ReceiveMessageWaitTimeSeconds: 10
    FifoQueue: True
    ContentBasedDeduplication: True
    VisibilityTimeout: 400
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt CreateJobsDLQ.Arn
      maxReceiveCount: 5

CreateJobsDLQ:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: !Sub '${AWS::StackName}-CreateJobsDLQ.fifo'
    FifoQueue: True
    ContentBasedDeduplication: True
    MessageRetentionPeriod: 604800

CreateJobsFn:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: !Sub '${AWS::StackName}-CreateJobsFn'
    CodeUri: functions/create-jobs/
    Handler: index.handler
    Runtime: nodejs16.x
    Description: Lambda function to pick up the message from CreateJobsQueue
    MemorySize: 512
    Timeout: 300
    KmsKeyArn: !Sub "arn:aws:kms:${AWS::Region}:${AWS::AccountId}:key/${AppsKMSKeyId}"
    ReservedConcurrentExecutions: 1
    Policies:
      - AWSLambdaBasicExecutionRole
      - AWSLambdaENIManagementAccess
    Environment:
      Variables:
        EMAIL_DOMAIN: ""
    Layers:
      - !Ref LambdaDependencies
    VpcConfig:
      !If
        - IsVPCRequired
        - SubnetIds: !Ref BFnSubnetIds
          SecurityGroupIds: !Ref BFnSecurityGroupIds
        - !Ref 'AWS::NoValue'
    Events:
      CreateJobsFnSQSEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt CreateJobsQueue.Arn
          BatchSize: 1
Please let me know if any other details are needed from my end.
I want my Lambda to be triggered by two different SQS queues. In my CloudFormation template I wrote it like this, but my stack is not getting created and I get the error message below:
Events:
  SQSEvent:
    Type: SQS
    Properties:
      Queues:
        - !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${QueueName}
        - !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${DLQQueueName}
      BatchSize: 1
      Enabled: true
Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document.
Number of errors found: 1. Resource with id [MyLambda] is invalid. Event with id [SQSEvent] is invalid. No Queue (for SQS) or Stream (for Kinesis, DynamoDB or MSK) or Broker (for Amazon MQ) provided.
Can someone please help me resolve this issue? Your help is appreciated, thanks!
You will want to use Queues (plural):
Events:
  SQSEvent:
    Type: SQS
    Properties:
      Queues:
        - !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${QueueName}
        - !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${DLQQueueName}
      BatchSize: 1
      Enabled: true
You could check your serverless setup against these templates:
https://carova.io/snippets/serverless-aws-create-sqs-queue-template
This one shows the whole setup, with your SQS queue subscribed to an SNS topic which then triggers the AWS Lambda function:
https://carova.io/snippets/serverless-aws-sqs-queue-subscribed-to-sns-topic
You can write your template as given below:
Events:
  SQSEvent1:
    Type: SQS
    Properties:
      Queue: !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${QueueName}
      BatchSize: 1
      Enabled: true
  SQSEvent2:
    Type: SQS
    Properties:
      Queue: !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${DLQQueueName}
      BatchSize: 1
      Enabled: true
I'm trying to get a SAM YAML script to properly set up my Lambda. I have a Lambda hooked to a queue being created, which is just a simple:
myQueue:
  Type: AWS::SQS::Queue

myLambda:
  Type: AWS::Serverless::Function
  Properties:
    Events:
      myQueueEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt myQueue.Arn
(with a bunch of other properties taken out). As far as I can tell, it looks like I should be able to add a DeadLetterConfig and point it at another queue, but wherever I try to put it, it doesn't work.
Essentially the behaviour I'm looking for is this: if I put a value into the queue, it automatically pops out of the queue into the Lambda. If the Lambda errors in any way (e.g. throws an exception), the item ends up in the dead-letter queue; otherwise it is consumed and disappears. Am I just misunderstanding, and this is not possible out of the box?
I figured it out: you actually put it on the queue, e.g.:
RedrivePolicy:
  deadLetterTargetArn: !GetAtt deadLetterQueue.Arn
  maxReceiveCount: 2
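In context, that sits on the queue resource itself (a sketch, assuming the dead-letter queue is another AWS::SQS::Queue in the same template):

myQueue:
  Type: AWS::SQS::Queue
  Properties:
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt deadLetterQueue.Arn
      # after 2 failed receives, the message moves to the dead-letter queue
      maxReceiveCount: 2

deadLetterQueue:
  Type: AWS::SQS::Queue

This is also why DeadLetterConfig on the function didn't behave as expected: a function-level DeadLetterConfig only applies to asynchronous invocations, while SQS-triggered invocations rely on the queue's own redrive policy.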
I am trying to implement lambda1, which will be triggered when messages are published to SQS. I am able to send messages to the SQS queue and receive them.
I created the SQS template as follows:
GetPatientStatusSQS:
  Type: AWS::SQS::Queue
  Properties:
    MaximumMessageSize: 1024
    QueueName: !Sub "${EnvironmentName}-GetPatientStatusSQS"
    VisibilityTimeout: 30
I checked the AWS documentation but couldn't find any example showing how to trigger a Lambda when messages are published to an SQS queue.
I found this link, Can an AWS Lambda function call another, but I'm not sure it's helpful.
How do I update the SQS template above so it triggers lambda1?
As of June 28, 2018, Lambda functions can be triggered by SQS events.
All you need to do is subscribe your Lambda function to the desired SQS queue.
Go to the SQS console, click on your queue -> Queue Actions -> Configure Trigger for Lambda Function.
Set the ARN of the Lambda function you want to send messages to, and that's it: your function will now be triggered by SQS.
Keep in mind that your function will process, at most, a batch of up to 10 messages at once.
If you think you may run into concurrency issues, you can limit your function's concurrency to 1.
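In a SAM template, that concurrency cap is the ReservedConcurrentExecutions property on the function, along these lines (a sketch using the function from the sample template below):

MySQSQueueFunction:
  Type: AWS::Serverless::Function
  Properties:
    # allow at most one concurrent execution of this function
    ReservedConcurrentExecutions: 1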
Here's a sample template you can use to wire SQS and Lambda together.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example of processing messages on an SQS queue with Lambda
Resources:
  MySQSQueueFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Events:
        MySQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt MySqsQueue.Arn
            BatchSize: 10
  MySqsQueue:
    Type: AWS::SQS::Queue
From the AWS docs.
I've set up a small serverless app using Lambda and SQS.
In my case I wanted to trigger a Lambda every time a message is added to an SQS queue.
The functions section in my serverless.yml:
functions:
  collectGame:
    handler: js/collect.collectGame
    memorySize: 128
    timeout: 10
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - gameRequestQueue
              - Arn
      - http:
          method: post
          cors:
            origin: "https://my-api-url.com"
          path: get/game/{id}
          private: true
          request:
            parameters:
              paths:
                id: true
I tested the process by sending 31 messages at once to the queue, but realized that only 9 Lambda executions happened (by looking at the CloudWatch logs). I looked at the queue and can confirm that it was filled with all the messages and that it was empty after the 9 Lambdas had been triggered.
I'd expect 31 Lambda executions, but that's not the case. Does anyone know potential reasons why my Lambdas are not being triggered once per message?
Your Lambda function is probably being invoked with multiple messages per invocation. You should be able to set the batch size to 1 when you create the event source mapping, if you only want one message sent per Lambda invocation.
It looks like you are using the Serverless Framework. See their SQS event documentation for setting the batch size.
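With the queue from the question, that would look something like this (a sketch; batchSize is the Serverless Framework's name for the setting):

functions:
  collectGame:
    handler: js/collect.collectGame
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - gameRequestQueue
              - Arn
          # deliver one message per invocation
          batchSize: 1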
For anyone using AWS SAM, here is the link that mentions batch size: here; look for the subheading 'Configuring a Queue as an Event Source'. And here is the code that works for me to set this up in YAML, together with a DLQ:
# add an event trigger in the properties section of your function
Events:
  MySQSEvent:
    Type: SQS
    Properties:
      Queue: !GetAtt MySqsQueueName.Arn
      BatchSize: 1

# then define the queue
MySqsQueueName:
  Type: AWS::SQS::Queue
  Properties:
    VisibilityTimeout: 800
    ReceiveMessageWaitTimeSeconds: 10
    DelaySeconds: 10
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt MyDLQueue.Arn
      maxReceiveCount: 2

# define a dead letter queue to handle bad messages
MyDLQueue:
  Type: AWS::SQS::Queue
  Properties:
    VisibilityTimeout: 900
Hope this helps someone; this took me ages to work out for my app!
I was also facing the exact same issue. The problem was in my Lambda function.
If the batch size is more than 1, then in a single Lambda invocation multiple SQS messages will be passed to the function (up to the batch size). Handle all of the messages in the Lambda by iterating through them.
Check your event's Records array for multiple messages:
{Records: [{..},{..},{..}]}