I am trying to create an AWS SQS dead-letter queue using the Serverless Framework.
The idea is to have an SQS queue trigger a Lambda function,
and have another SQS queue act as a dead-letter queue (DLQ), i.e. to pick up the message in case the Lambda fails or times out.
I did the following to create a test project -
mkdir dlq
cd dlq/
serverless create --template aws-nodejs
Following is my serverless.yaml -
service: dlq
provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-1
  role: arn:aws:iam::xxxx:role/dlqLambdaRole
plugins:
  - serverless-plugin-lambda-dead-letter
functions:
  dlq:
    handler: handler.hello
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - MainQueue
              - Arn
    deadLetter:
      targetArn:
        GetResourceArn: DeadLetterQueue
resources:
  Resources:
    MainQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: main
    DeadLetterQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: dlq
I also tried the following -
service: dlq
provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-1
  role: arn:aws:iam::xxxx:role/dlqLambdaRole
plugins:
  - serverless-plugin-lambda-dead-letter
functions:
  dlq:
    handler: handler.hello
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - MainQueue
              - Arn
    deadLetter:
      sqs: dlq
resources:
  Resources:
    MainQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: main
But in both of these cases, the framework just creates a normal SQS queue.
I am following this document -
https://www.serverless.com/plugins/serverless-plugin-lambda-dead-letter
Better late than never. Hope this helps you or someone searching for this problem.
When you configure SQS to trigger Lambda, the DLQ is supposed to be configured on the SQS queue itself (since this is not an asynchronous invocation).
Notice the 'Note' section in the linked source.
Hence your serverless.yaml needs to declare the RedrivePolicy on the main queue to refer to the DLQ (below).
service: dlq
provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-1
  role: arn:aws:iam::xxxx:role/dlqLambdaRole
functions:
  dlq:
    handler: handler.hello
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - MainQueue
              - Arn
    deadLetter:
      targetArn:
        GetResourceArn: DeadLetterQueue
resources:
  Resources:
    MainQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: main
        RedrivePolicy:
          deadLetterTargetArn:
            Fn::GetAtt:
              - "DeadLetterQueue"
              - "Arn"
          maxReceiveCount: 5
    DeadLetterQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: dlq
maxReceiveCount is set to 5 as per the AWS documentation.
To give you some background, a dead-letter queue is just that: a normal SQS queue. It's the configuration on AWS Lambda that tells it to push a message to this queue whenever there is an error while processing the message.
You can verify this from the management console by checking the "Dead-letter queue" setting under "Asynchronous invocation".
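For contrast, a minimal sketch of that Lambda-level DLQ (which only applies to asynchronous invocations, not to SQS event source mappings) in raw CloudFormation might look like this; MyFunction and the Code location are placeholders:

MyFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: handler.hello
    Runtime: nodejs12.x
    Role: arn:aws:iam::xxxx:role/dlqLambdaRole
    Code:
      S3Bucket: my-bucket          # placeholder
      S3Key: code.zip              # placeholder
    DeadLetterConfig:
      TargetArn: !GetAtt DeadLetterQueue.Arn   # only used for asynchronous invocations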
Here is a SAM YAML snippet:
SnsTopic:
  Type: AWS::SNS::Topic
  Properties:
    DisplayName: !Sub ${Env}-sns-topic
    TopicName: !Sub ${Env}-sns-topic

Queue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: !Sub ${Env}-queue
    VisibilityTimeout: 300
    MessageRetentionPeriod: 1209600
    ReceiveMessageWaitTimeSeconds: 20
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt Dlq.Arn
      maxReceiveCount: 5

Dlq:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: !Sub ${Env}-dlq
    VisibilityTimeout: 300
    MessageRetentionPeriod: 1209600
    ReceiveMessageWaitTimeSeconds: 0

TestSubscription:
  Type: AWS::SNS::Subscription
  DependsOn:
    - SnsTopic
    - Queue
    - Dlq
  Properties:
    Protocol: sqs
    TopicArn: !Ref SnsTopic
    Endpoint: !GetAtt
      - Queue
      - Arn
    RawMessageDelivery: true
I am trying to create an SNS topic with an SQS queue subscription and a dead-letter queue (DLQ). After deployment, I can see that the SNS topic and the SQS queues have been created successfully, but the SQS subscription to the SNS topic does not appear to be working. When I check the subscriptions for the SQS queue, the SNS topic that was created in the same stack is listed, but I have to add the subscription manually for it to work. I'm wondering what could be causing this issue or if there's something missing in my example.
You have to set up an AWS::SQS::QueuePolicy to allow SNS to send messages to the queue. Check the AWS docs for examples of how to do it.
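As a minimal sketch (reusing the SnsTopic and Queue logical IDs from the snippet above), such a policy might look like this:

QueuePolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    Queues:
      - !Ref Queue
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: sns.amazonaws.com
          Action: sqs:SendMessage
          Resource: !GetAtt Queue.Arn
          Condition:
            ArnEquals:
              aws:SourceArn: !Ref SnsTopic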
I am trying to set up a demo environment to try out SQS as an AWS EventBridge source. I tried sending a few messages to the SQS queue to see if EventBridge detects any change, but I don't see any events triggered. How can I test SQS as a source with AWS EventBridge?
Resources:
  Queue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: !Sub ${AWS::StackName}

  LambdaHandlerExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

  EventConsumerFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.lambda_handler
      Role: !GetAtt LambdaHandlerExecutionRole.Arn
      Code:
        ZipFile: |
          import json
          def lambda_handler(event, context):
              print("Received event: " + json.dumps(event, indent=2))
      Runtime: python3.7
      Timeout: 50

  EventRule:
    Type: AWS::Events::Rule
    Properties:
      Description: eventEventRule
      State: ENABLED
      EventPattern:
        source:
          - aws.sqs
        resources:
          - !GetAtt Queue.Arn
      Targets:
        - Arn: !GetAtt EventConsumerFunction.Arn
          Id: EventConsumerFunctionTarget

  PermissionForEventsToInvokeLambda:
    Type: AWS::Lambda::Permission
    Properties:
      FunctionName: !Ref EventConsumerFunction
      Action: lambda:InvokeFunction
      Principal: events.amazonaws.com
      SourceArn: !GetAtt EventRule.Arn
SQS data events (publishing a new message) are not source events for EventBridge (EB). Only management events can be picked up by EB, e.g.:
purging of a queue
creation of a new queue
deletion of a queue
Also, your event rule should be more generic for that:
EventRule:
  Type: AWS::Events::Rule
  Properties:
    Description: eventEventRule
    State: ENABLED
    EventPattern:
      source:
        - aws.sqs
      # resources:
      #   - !GetAtt Queue.Arn
    Targets:
      - Arn: !GetAtt EventConsumerFunction.Arn
        Id: EventConsumerFunctionTarget
You can also enable a CloudTrail trail to capture API events for SQS. This should make more events available to EventBridge.
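As a sketch (assuming such a trail is already delivering SQS API calls), an event pattern matching those management events would look something like this:

EventPattern:
  source:
    - aws.sqs
  detail-type:
    - AWS API Call via CloudTrail
  detail:
    eventSource:
      - sqs.amazonaws.com
    eventName:
      - CreateQueue
      - DeleteQueue
      - PurgeQueue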
I might be late, but this can benefit someone else.
Have a look at this:
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-ecs-patterns.QueueProcessingFargateService.html
This will handle scaling of the Fargate service based on the number of messages in the SQS queue.
A simple stack can be defined using the AWS CDK (Python) as follows:
from aws_cdk import core
from aws_cdk import aws_ecs as ecs
from aws_cdk import aws_sqs as sqs
from aws_cdk.aws_ecs_patterns import QueueProcessingFargateService

# Example app/stack scaffolding so the snippet is self-contained
app = core.App()
stack = core.Stack(app, "QueueProcessingStack")

# The queue whose depth drives the scaling of the Fargate service
queue = sqs.Queue(stack, "Queue")

cluster = ecs.Cluster(stack, "FargateCluster")

queue_processing_fargate_service = QueueProcessingFargateService(
    stack, "Service",
    cluster=cluster,
    memory_limit_mib=512,
    image=ecs.ContainerImage.from_registry("test"),
    command=["-c", "4", "amazon.com"],
    enable_logging=False,
    desired_task_count=2,
    environment={
        "TEST_ENVIRONMENT_VARIABLE1": "test environment variable 1 value",
        "TEST_ENVIRONMENT_VARIABLE2": "test environment variable 2 value"
    },
    queue=queue,
    max_scaling_capacity=5,
    container_name="test"
)

app.synth()
I have a subscription to an SNS topic that is configured to move a message to a DLQ if it can't be delivered successfully to a Lambda function.
As described in this document, there are client-side and server-side errors. If a client-side error occurs, the message is correctly moved to the DLQ, but if a server-side error occurs, the message is not moved to the DLQ. This document describes the delivery retries, and the subscription does use the default delivery policy defined by the SNS topic. The retries do happen, but after the retries are exhausted the message is not moved to the DLQ.
Now I wonder why the message is not moved correctly to the DLQ on server-side errors. Is there some more configuration missing?
I created the resources with the following AWS SAM template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  lambda-test

Globals:
  Function:
    Timeout: 30

Resources:
  EmailFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: EmailFunction
      Handler: de.domain.email.App::handleRequest
      Runtime: java11
      Architectures:
        - x86_64
      MemorySize: 512
      Environment:
        Variables:
          # https://aws.amazon.com/blogs/compute/optimizing-aws-lambda-function-performance-for-java/
          JAVA_TOOL_OPTIONS: -XX:+TieredCompilation -XX:TieredStopAtLevel=1

  EmailsTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: 'test-emails'

  EmailFunctionInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref EmailFunction
      Principal: sns.amazonaws.com

  EmailDLQ:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: !Join ['', [!GetAtt EmailsTopic.TopicName, '-dlq']]

  # Policy for DLQ: https://docs.aws.amazon.com/sns/latest/dg/sns-configure-dead-letter-queue.html
  EmailDLQPolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref EmailDLQ
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal: '*'
            Action:
              - 'sqs:GetQueueUrl'
              - 'sqs:GetQueueAttributes'
              - 'sqs:SetQueueAttributes'
              - 'sqs:SendMessage'
              - 'sqs:ReceiveMessage'
              - 'sqs:DeleteMessage'
              - 'sqs:PurgeQueue'
            Resource:
              - !GetAtt EmailDLQ.Arn

  EmailsSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      TopicArn: !Ref EmailsTopic
      Protocol: lambda
      Endpoint: !GetAtt EmailFunction.Arn
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt EmailDLQ.Arn
And the Java function just looks like this (it throws an exception when the message body is "reject"):
package de.domain.email;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;

public class App implements RequestHandler<SNSEvent, Object> {

    public Object handleRequest(final SNSEvent input, final Context context) {
        input.getRecords().forEach(r -> {
            context.getLogger().log(r.getSNS().getMessage() + "\n");
            if (r.getSNS().getMessage().equals("reject"))
                throw new IllegalStateException("reject");
        });
        return null;
    }
}
If you configure a Lambda function with an SNSEvent, this creates a subscription.
If you configure a Subscription with Protocol: lambda, this also creates a subscription.
When you configure both (with the exact same endpoints), you only get one subscription. Do both get merged, does one overwrite the other, what exactly is going on?
I'm asking to get a better understanding of CloudFormation.
For example:
  # ReceivedRequestsSNS topic
  ReceivedRequestsSNS:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: !Sub
        - ${StackName}-ReceivedRequests-${Stage}
        - StackName: !Ref AWS::StackName
          Stage: !Ref Stage

  ReceivedRequestsToLambdaSuscription:
    Type: AWS::SNS::Subscription
    Properties:
      Protocol: lambda
      Endpoint: !Sub
        - ${LambdaArn}:live
        - { LambdaArn: !GetAtt TrainingNotificationsRequestsHandler.Arn }
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt ReceivedRequestsSNSDLQ.Arn
      TopicArn: !Ref ReceivedRequestsSNS

  TrainingNotificationsRequestsHandler:
    Type: AWS::Serverless::Function
    Properties:
      Handler: 'com.test.handlers.RequestsHandler::handleRequest'
      Runtime: java8
      Events:
        SNSEvent:
          Type: SNS
          Properties:
            Topic: !Ref ReceivedRequestsSNS
The SAM documentation states that:
"SAM generates an AWS::SNS::Subscription resource when this event type is set."
I would use either the SAM SNS event source or the "raw" AWS::SNS::Subscription, but not both. Any behavior when both are specified seems undocumented and should not be relied on.
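As a sketch of the raw-subscription route (dropping the SAM SNSEvent from the function and keeping only the explicit subscription, with the endpoint simplified to the unqualified function ARN): since SAM only generates the invoke permission when the event source is declared, you would need to add it yourself. The SnsInvokePermission logical ID below is just an illustrative name.

ReceivedRequestsToLambdaSuscription:
  Type: AWS::SNS::Subscription
  Properties:
    Protocol: lambda
    Endpoint: !GetAtt TrainingNotificationsRequestsHandler.Arn
    TopicArn: !Ref ReceivedRequestsSNS
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt ReceivedRequestsSNSDLQ.Arn

SnsInvokePermission:               # illustrative logical ID
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref TrainingNotificationsRequestsHandler
    Principal: sns.amazonaws.com
    SourceArn: !Ref ReceivedRequestsSNS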
I have service A, which owns the SNS topics, and service B, which consumes events from an SQS queue. From service B's CloudFormation I need to write the YAML that creates the subscription between the SNS event topic and the SQS event queue.
SNS topic name: sns-event-topic
Subscribed queue name: abcd-events
Resources:
  AbcdEventQueue:
    Type: "AWS::SQS::Queue"
    Properties:
      QueueName: "abcd-events"

  AbcdEventQueuePolicy:
    Type: "AWS::SQS::QueuePolicy"
    Properties:
      Queues:
        - Ref: "AbcdEventQueue"
      PolicyDocument:
        Statement:
          - Effect: "Allow"
            Principal:
              AWS: '*'
            Action:
              - sqs:SendMessage
              - sqs:ReceiveMessage
              - sqs:DeleteMessage
              - sqs:GetQueueUrl
              - sqs:GetQueueAttributes
              - sqs:ListQueueTags
              - sqs:ChangeMessageVisibility
            Resource:
              - !GetAtt AbcdEventQueue.Arn
Assuming you have the SNS topic already, you would create an AWS::SNS::Subscription resource.
It would look like the structure below:
Subscription:
  Type: 'AWS::SNS::Subscription'
  Properties:
    TopicArn: !Ref TopicArn # You will need to provide the SNS Topic Arn here
    Endpoint: !GetAtt
      - AbcdEventQueue
      - Arn
    Protocol: sqs
    RawMessageDelivery: 'true'
If the SNS topic does not share the same stack, you will need to pass its ARN into your template. This can be done either as a parameter or by using the Export feature to define a cross-stack value that you can consume with the Fn::ImportValue intrinsic function.
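As a sketch of the export/import route (the export name sns-event-topic-arn and the SnsEventTopic logical ID are illustrative):

In service A's template:

Outputs:
  SnsEventTopicArn:
    Value: !Ref SnsEventTopic    # Ref on an SNS topic returns its ARN
    Export:
      Name: sns-event-topic-arn

In service B's template:

Subscription:
  Type: AWS::SNS::Subscription
  Properties:
    TopicArn: !ImportValue sns-event-topic-arn
    Endpoint: !GetAtt AbcdEventQueue.Arn
    Protocol: sqs
    RawMessageDelivery: 'true'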
For a Lambda consumer it would look like this:
Subscription:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    EventSourceArn: !ImportValue sns-topic-arn
    FunctionName: !GetAtt Function.Arn
    Enabled: true
    BatchSize: 1