Trying to get a SAM YAML template to set up my Lambda properly. I have a Lambda hooked to a queue that is being created, which is just a simple:
myQueue:
  Type: AWS::SQS::Queue

myLambda:
  Type: AWS::Serverless::Function
  Properties:
    Events:
      myQueueEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt myQueue.Arn
(with a bunch of other properties taken out)... As far as I can tell, it looks like I should be able to add a DeadLetterConfig and point it at another queue, but wherever I try to put it, it doesn't work.
Essentially, the behaviour I'm looking for is: if I put a value into the queue, it automatically pops out of the queue into the Lambda. If the Lambda errors in any way (e.g. throws an exception), the item ends up in the dead-letter queue; otherwise it is consumed and disappears. Am I just misunderstanding, and is this not possible out of the box?
I figured it out - you actually put it on the queue, e.g.:
RedrivePolicy:
  deadLetterTargetArn: !GetAtt deadLetterQueue.Arn
  maxReceiveCount: 2
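For context, here is a minimal sketch of how the pieces fit together, reusing the resource names from the snippets above (trimmed to the relevant properties):
myQueue:
  Type: AWS::SQS::Queue
  Properties:
    # messages that fail processing maxReceiveCount times move to the DLQ
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt deadLetterQueue.Arn
      maxReceiveCount: 2

deadLetterQueue:
  Type: AWS::SQS::Queue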
Using Serverless Framework, how can I make my Lambda function depend on an SQS queue from the resources section as it is the trigger for the function itself?
In my serverless.yaml, I am defining a new queue and Lambda function.
Then, I want to use the queue as an event source (trigger) for my Lambda function.
I do that by creating the queue ARN manually:
functions:
  consumer:
    handler: App\Service\Consumer
    events:
      - sqs:
          arn:
            Fn::Join:
              - ':'
              - - arn:aws:sqs
                - Ref: AWS::Region
                - Ref: AWS::AccountId
                - ${opt:stage}-skill-assigner
And creating the queue in resources:
resources:
  Resources:
    SkillAssignerQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${opt:stage}-skill-assigner
This works fine if I create the queue in a deployment prior to using it as a function trigger.
But if I try to deploy both of them, it fails with this error when it tries to create the event source mapping:
Invalid request provided: Error occurred while ReceiveMessage. SQS Error Code: AWS.SimpleQueueService.NonExistentQueue. SQS Error Message: The specified queue does not exist for this wsdl version.
Fn::Join only performs string concatenation, so it doesn't tell the Serverless Framework (SF) anything about the function's dependency on the queue.
We can see the dependency visually, but it needs to be declared.
To make this link obvious to SF, use Fn::GetAtt instead.
It will inform the Serverless Framework that the Lambda function depends on the SQS queue.
This should work:
functions:
  consumer:
    handler: App\Service\Consumer
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - SkillAssignerQueue
              - Arn

resources:
  Resources:
    SkillAssignerQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${opt:stage}-skill-assigner
The Serverless Framework can automatically create the queue for you. No need to define it in resources.
I want my Lambda to be triggered by two different SQS queues. In my CloudFormation template I have the following, but my stack is not getting created and I get the error message below:
Events:
  SQSEvent:
    Type: SQS
    Properties:
      Queues:
        - !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${QueueName}
        - !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${DLQQueueName}
      BatchSize: 1
      Enabled: true
Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document.
Number of errors found: 1. Resource with id [MyLambda] is invalid. Event with id [SQSEvent] is invalid. No Queue (for SQS) or Stream (for Kinesis, DynamoDB or MSK) or Broker (for Amazon MQ) provided.
Can someone please help me resolve this issue? I'd appreciate your help. Thanks!
The SQS event type accepts only Queue (singular) with a single queue ARN; there is no Queues property, which is why the transform reports that no Queue was provided. Attach one queue per event:
Events:
  SQSEvent:
    Type: SQS
    Properties:
      Queue: !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${QueueName}
      BatchSize: 1
      Enabled: true
To consume from the second queue as well, add a second event with its own Queue, as the next answer shows.
You could check your serverless setup against these templates:
https://carova.io/snippets/serverless-aws-create-sqs-queue-template
This one shows the whole setup, with your SQS queue subscribed to an SNS topic and then triggering the AWS Lambda function:
https://carova.io/snippets/serverless-aws-sqs-queue-subscribed-to-sns-topic
You can write your template as given below:
Events:
  SQSEvent1:
    Type: SQS
    Properties:
      Queue: !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${QueueName}
      BatchSize: 1
      Enabled: true
  SQSEvent2:
    Type: SQS
    Properties:
      Queue: !Sub arn:aws:sqs:${AWS::Region}:${AccountId}:${DLQQueueName}
      BatchSize: 1
      Enabled: true
I have set up an SQS queue to which S3 paths are pushed whenever a file is uploaded.
So I have a setup where I'll receive tens of small CSV files, and I want to hold them in the SQS queue and trigger the Lambda only once, when all the files have arrived within a specific time window, say 5 minutes.
Here is my CloudFormation code:
LambdaFunctionEventSourceMapping:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    BatchSize: 5000
    MaximumBatchingWindowInSeconds: 300
    Enabled: true
    EventSourceArn: !GetAtt EventQueue.Arn
    FunctionName: !GetAtt QueueConsumerLambdaFunction.Arn

EventQueue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: Event-Queue
    DelaySeconds: 10
    VisibilityTimeout: 125
    ReceiveMessageWaitTimeSeconds: 10

QueueConsumerLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: queue-consumer
    Runtime: python3.7
    Code: ./queue-consumer
    Handler: main.lambda_handler
    Role: !GetAtt QueueConsumerLambdaExecutionRole.Arn
    Timeout: 120
    MemorySize: 512
    ReservedConcurrentExecutions: 1
The deployment works fine, but if I push 3 files to the S3 bucket, SQS triggers 3 different Lambda invocations asynchronously, which I don't want. I need a single Lambda invocation to receive all the messages in the queue resulting from the S3 events and process them. Is there something wrong with my SQS configuration?
What you are observing is likely due to the five parallel pollers that AWS uses to query your SQS queue. These pollers are separate from the concurrency setting, and you have no control over them; there are always five of them.
Each poller fetches some messages from the queue, and your function is then invoked with those messages in turn. Sadly you can't change how this works, as this is how SQS and Lambda integrate on the AWS side.
I have a SAM CloudFormation template:
Transform: AWS::Serverless-2016-10-31
Description: Create SNS with a sub
Parameters:
  NotificationEmail:
    Type: String
    Description: Email address to subscribe to SNS topic
Resources:
  NotificationTopic:
    Type: AWS::SNS::Topic
    DeletionPolicy: Retain
    Properties:
      TopicName: sam-test-sns
      Subscription:
        - Endpoint: !Ref NotificationEmail
          Protocol: email
Outputs:
  SNSTopic:
    Value: !Ref NotificationTopic
So I want to keep the topic sam-test-sns around since there are several subscribers already, and I don't want subscribers to tediously re-subscribe if I tear down the service and bring it back up.
Tearing down the service with Retain keeps the topic around, so that's fine. But when I try to deploy the template again, it fails because the topic already exists.
So what is the right approach to use an existing SNS topic?
Keeping the NotificationTopic resource in the template after removing the stack (while keeping the topic around) instructs CloudFormation to create the topic again when the stack is (re)created, which will always fail.
Since you are just referencing an existing topic, you should remove the resource from the template and replace references to it with the topic's ARN/name.
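For instance, a minimal sketch of that approach, passing the existing topic in as a parameter (the parameter name ExistingTopicArn is illustrative):
Parameters:
  ExistingTopicArn:
    Type: String
    Description: ARN of the SNS topic that already exists

Outputs:
  SNSTopic:
    # reference the pre-existing topic instead of creating it
    Value: !Ref ExistingTopicArn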
With the Outputs section as written you are only outputting the value, not exporting it. I am going to assume you want to use this resource in another stack.
First you need to export the value, for example:
Outputs:
  SNSTopic:
    Value: !Ref NotificationTopic
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-SNSTopic"
Add a parameter SNSStackName to your new stack, where you pass in the SNS stack's name (within the current region).
Then, from within your new stack, reference the output value like this:
Fn::ImportValue:
  Fn::Sub: "${SNSStackName}-SNSTopic"
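Putting it together, a minimal sketch of the consuming stack, assuming the export above has been deployed (the output name ImportedTopic is illustrative):
Parameters:
  SNSStackName:
    Type: String
    Description: Name of the stack that exports the SNS topic

Outputs:
  ImportedTopic:
    # resolves to the topic ARN exported by the SNS stack
    Value:
      Fn::ImportValue:
        Fn::Sub: "${SNSStackName}-SNSTopic"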
I've set up a small serverless app using Lambda and SQS.
In my case I wanted to trigger a Lambda every time a message is added to an SQS queue.
The functions section in my serverless.yml:
functions:
  collectGame:
    handler: js/collect.collectGame
    memorySize: 128
    timeout: 10
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - gameRequestQueue
              - Arn
      - http:
          method: post
          cors:
            origin: "https://my-api-url.com"
          path: get/game/{id}
          private: true
          request:
            parameters:
              paths:
                id: true
I tested the process by sending 31 messages to the queue at once, but realized that only 9 Lambdas get executed (by looking at the CloudWatch logs). I looked into the queue and can confirm that it's being filled with all the messages and that it's empty after the 9 Lambdas have been triggered.
I'd expect 31 Lambda executions, but that's not the case. Does anyone know potential reasons why my Lambdas are not being triggered once per message?
Your Lambda function is probably being invoked with multiple messages per batch. You should be able to set the batch size to 1 when you create the event source mapping, if you only want one message to be sent per Lambda invocation.
It looks like you are using the Serverless Framework. See their SQS event documentation for setting the batch size.
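For reference, in the Serverless Framework that would look something like the sketch below, reusing the function from the question with the framework's batchSize property added:
functions:
  collectGame:
    handler: js/collect.collectGame
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - gameRequestQueue
              - Arn
          batchSize: 1  # one message per invocation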
For anyone using AWS SAM: batch size is covered in the SAM documentation under the subheading 'Configuring a Queue as an Event Source'. And here is the code that works for me to set this up in the YAML, together with a DLQ:
# add an event trigger in the properties section of your function
Events:
  MySQSEvent:
    Type: SQS
    Properties:
      Queue: !GetAtt MySqsQueueName.Arn
      BatchSize: 1

# then define the queue
MySqsQueueName:
  Type: AWS::SQS::Queue
  Properties:
    VisibilityTimeout: 800
    ReceiveMessageWaitTimeSeconds: 10
    DelaySeconds: 10
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt MyDLQueue.Arn
      maxReceiveCount: 2

# define a dead letter queue to handle bad messages
MyDLQueue:
  Type: AWS::SQS::Queue
  Properties:
    VisibilityTimeout: 900
Hope this helps someone - this took me ages to work out for my app!
I was also facing the exact same issue. The problem was in my Lambda function.
If the batch size is more than 1, then in a single Lambda invocation multiple SQS messages will be passed to the Lambda (based on the batch size). Just handle all the messages in the Lambda by iterating through them.
Check your event's Records array for multiple messages:
{Records: [{..},{..},{..}]}
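For illustration, a minimal handler sketch that iterates over the whole batch (Python here, matching the python3.7 runtime mentioned in an earlier question; the JSON body format is an assumption about your producer):
import json

def lambda_handler(event, context):
    # a single invocation can carry up to batchSize messages
    for record in event["Records"]:
        payload = record["body"]  # raw message body as a string
        message = json.loads(payload)  # assumes the producer sends JSON
        print(message)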