I have set up a Lambda function and an SQS queue. I want the messages from the queue to be consumed by the Lambda function.
[Screenshot: SQS configuration]
All the other configuration is left at the defaults.
[Screenshot: Lambda trigger]
[Screenshot: Lambda configuration]
The code (and its configuration) comes from the sqs-poller template.
[Screenshot: code configuration]
I'm using a small Node script to send the event, which I run with the following command:
AWS_SDK_LOAD_CONFIG=true AWS_SHARED_CREDENTIALS_FILE=./credentials node sqs.js
The sending works, because I can see the messages in the monitoring panel of the SQS console.
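For reference, the script boils down to a plain sendMessage call with the JavaScript SDK v2, roughly like this (the queue URL and region here are placeholders, not my real values):

```js
// sqs.js -- stripped-down version of the send script (illustrative)
const AWS = require('aws-sdk');

const sqs = new AWS.SQS({ region: 'us-east-1' }); // placeholder region

sqs.sendMessage({
  QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue', // placeholder
  MessageBody: JSON.stringify({ hello: 'world' }),
}, (err, data) => {
  if (err) console.error('Send failed', err);
  else console.log('Sent message', data.MessageId);
});
```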
Any idea why events are not being consumed by my lambda function?
It would appear that you have two competing concepts in your architecture.
Amazon SQS and AWS Lambda
When an Amazon SQS queue is configured as a trigger for an AWS Lambda function, the Lambda service polls the SQS queue looking for messages. When messages are found, the Lambda function is invoked and the messages are passed to it via the event parameter.
The Lambda function can then process those messages, reading the detail of the messages from the event variable.
If the Lambda function completes without error, the Lambda service automatically deletes the messages from the SQS queue. If the function fails, the messages automatically reappear on the SQS queue once the visibility timeout expires.
At no time does the AWS Lambda function actually call the Amazon SQS service to receive or delete messages. Rather, it is given the messages when it is invoked.
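To make this concrete, here is a minimal sketch of a Node.js handler for an SQS trigger. Note that it never calls the SQS API itself; treating the bodies as JSON is an assumption:

```js
// Minimal handler for an SQS-triggered Lambda function.
// The messages arrive in `event`; no SQS API call is needed here.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const message = JSON.parse(record.body); // assumes JSON message bodies
    console.log('Processing message', record.messageId, message);
  }
  // Returning normally signals success and Lambda deletes the messages;
  // throwing would make them reappear after the visibility timeout.
};
```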
SQS Poller
You mention that you are using an sqs-poller template. I'm not sure whether you are referring to Receiving Messages Using the QueuePoller Class in Amazon SQS - AWS SDK for Ruby or @jishimi/sqs-poller - npm.
Nonetheless, polling is the traditional way for worker processes to retrieve messages from an SQS queue and then delete them after they are processed. The process (sketched in code after this list) is:
They check whether messages are available in the SQS queue
If so, they invoke a worker
When the worker is finished, they delete the message
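A hand-rolled poller performs those three steps itself. Here is a minimal sketch with the JavaScript SDK v2 (the queue URL is a placeholder, and doWork stands in for your actual worker):

```js
const AWS = require('aws-sdk');

const sqs = new AWS.SQS({ region: 'us-east-1' }); // placeholder region
const QueueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'; // placeholder

// Hypothetical worker; replace with your own processing
const doWork = async (body) => console.log('working on', body);

async function pollOnce() {
  // Step 1: check whether messages are available (long polling)
  const { Messages = [] } = await sqs
    .receiveMessage({ QueueUrl, MaxNumberOfMessages: 10, WaitTimeSeconds: 20 })
    .promise();

  for (const msg of Messages) {
    await doWork(msg.Body); // Step 2: invoke a worker
    // Step 3: delete the message once it has been processed
    await sqs.deleteMessage({ QueueUrl, ReceiptHandle: msg.ReceiptHandle }).promise();
  }
}
```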
Notice that these are the same steps that the AWS Lambda service performs when SQS is configured as a trigger for an AWS Lambda function. A polling architecture is therefore incompatible with using SQS as a trigger for an AWS Lambda function.
You should pick one or the other, not both.
Related
I deployed a Lambda function, an SQS standard queue, and a dead-letter queue on AWS, and I configured maxReceiveCount on the queue so messages are retried before being moved to the DLQ. Lambda pulls events from the SQS queue in batches and processes each event sequentially. My question is about how retry works in case of error: there are two retry settings, Lambda's maximumRetryAttempts and the SQS/DLQ one. Should I disable the Lambda one?
Inside the function, after each event is processed it calls deleteMessage on SQS to delete it. If any event throws an exception, the function rethrows it to Lambda to make the retry happen without retrying the already-successful events.
But Lambda itself has a maximumRetryAttempts: should I set it to 0? Otherwise, will it retry before the messages return to SQS? And if I don't disable it, will the retry reprocess the whole batch of events, including the successful ones?
I'm not sure which maximumRetryAttempts on Lambda you are referring to, but when you use SQS with Lambda through an event source mapping, as is done by default, there is no retry parameter on the Lambda side.
The only retry that applies is configured on SQS, not on Lambda.
The only retry option for Lambda I can think of, and maybe the one you are thinking of as well, is for asynchronous invocation. That does not apply to SQS, because your Lambda is invoked synchronously with SQS:
Lambda polls the queue and invokes your Lambda function synchronously with an event that contains queue messages.
A Lambda function can be invoked in three different ways:
Lambda reads from an event source and invokes the function, e.g. for SQS, Kinesis, etc.
The function is invoked synchronously, e.g. from API Gateway, an ELB, etc.
The function is invoked asynchronously, e.g. from S3 events, SNS, CloudWatch Events, etc.
The Retry attempts setting in the console applies only to asynchronous invocations (option 3 above).
For SQS failures, we have two options:
A DLQ on SQS itself.
A destination on Lambda, which can be SNS, another Lambda function, EventBridge, or another SQS queue. With this option we can route both failure and success events.
Note: we don't need to call deleteMessage inside the Lambda function; the Lambda poller deletes the message from SQS when the function returns successfully.
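Putting those points together, a handler following the pattern described in the question looks roughly like this (a sketch; processEvent is a hypothetical stand-in for your own logic):

```js
// Hypothetical per-event processing; replace with your own logic
const processEvent = async (payload) => console.log('processing', payload);

exports.handler = async (event) => {
  for (const record of event.Records) {
    // No deleteMessage call: the Lambda poller deletes the batch on success.
    // If this throws, the whole batch (including already-processed records)
    // becomes visible on the queue again after the visibility timeout.
    await processEvent(JSON.parse(record.body)); // assumes JSON bodies
  }
};
```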
When a file is added to my S3 bucket an S3PUT Event is triggered which puts a message into SQS. I've configured a Lambda to be triggered as soon as a message is available.
In the Lambda function I send an API request to run a task on an ECS Fargate container, with environment variables containing the message received from SQS. In the container I use the message to download the file from S3 and process it, and on successful processing I want to delete the message from SQS.
However, the message is deleted from SQS automatically after my Lambda executes.
Is there any way to configure the Lambda not to delete the SQS message automatically (other than deliberately raising an exception and failing the Lambda), so that I can delete it programmatically from my container?
Update:
Consider this scenario which I wish to achieve.
Message enters SQS queue
Lambda takes the message, calls the ECS API to run the task, and finishes without deleting the message from the queue.
The message is now in flight.
The ECS container runs the task and deletes the message from the queue on successful processing.
If the container fails, the message re-enters the queue after the visibility timeout, the Lambda is triggered again, and the cycle repeats from step 1.
Only if the container fails more than a certain number of times does the message go from in-flight to the DLQ.
All of this currently works only if I deliberately raise an exception in the Lambda, and I'm looking for a similar solution without doing that.
The behaviour is intended: as long as SQS is configured as a Lambda trigger, the message is deleted automatically once the function returns successfully.
The way I see it, to achieve the behaviour you're describing you have 4 options:
Remove SQS as the Lambda trigger and instead run the Lambda function on a schedule, polling the queue yourself (see the sketch after this list). The Lambda will read the available messages, but unless you delete them explicitly they become visible again once their visibility timeout expires. You can set this up with a CloudWatch schedule.
Remove SQS as the Lambda trigger and instead invoke the Lambda function explicitly. Similar to the above, but instead of running on a schedule all the time, the Lambda function is triggered by the producer of the message itself.
Keep the SQS Lambda trigger and store the message in a second SQS queue (as suggested by @jarmod in a comment above).
Configure the producer to publish the message to an SNS topic and subscribe two SQS queues to that topic. One queue triggers the Lambda function; the other is consumed by your ECS tasks.
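As an illustration of the first option, a scheduled function can receive messages and deliberately not delete them, leaving deletion to the ECS task. A sketch with the JavaScript SDK v2 (the queue URL is a placeholder and startEcsTask is a hypothetical stand-in):

```js
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

const QueueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'; // placeholder

// Hypothetical: kick off the ECS task for one message
const startEcsTask = async (body) => console.log('starting task for', body);

exports.handler = async () => {
  const { Messages = [] } = await sqs
    .receiveMessage({ QueueUrl, MaxNumberOfMessages: 10 })
    .promise();

  for (const msg of Messages) {
    await startEcsTask(msg.Body);
    // No deleteMessage here: the message stays in flight until its
    // visibility timeout expires, so the ECS task can delete it itself.
  }
};
```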
Update
Based on the new info provided, you have another option:
Leave the event flow as it is and let Lambda delete the message from SQS. Then, in your ECS task, handle the failure case by putting a new message with the same payload/body back on the queue. This lets you retry indefinitely.
There's no reason the SQS message has to be the exact same one; what you're interested in is the body/payload.
You might want to add a mechanism that limits these retries and posts the message to a DLQ once the limit is reached (a sketch follows).
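A sketch of that retry bookkeeping, carrying the retry count in a message attribute (the queue URLs, attribute name, and limit are all illustrative):

```js
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

const WORK_QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/work-queue'; // placeholder
const DLQ_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/work-dlq'; // placeholder
const MAX_RETRIES = 3; // illustrative limit

// Called by the ECS task when processing fails (sketch)
async function requeueOrDeadLetter(body, attributes = {}) {
  const retries = Number((attributes.retryCount || {}).StringValue || 0);
  const QueueUrl = retries < MAX_RETRIES ? WORK_QUEUE_URL : DLQ_URL;
  await sqs
    .sendMessage({
      QueueUrl,
      MessageBody: body, // same payload as the original message
      MessageAttributes: {
        retryCount: { DataType: 'Number', StringValue: String(retries + 1) },
      },
    })
    .promise();
}
```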
One solution I can think of: remove the Lambda triggered by the SQS queue and create a CloudWatch alarm on the queue instead. When the alarm fires, scale out the ECS task; when there are no items in the queue, scale the ECS task back down. Let the ECS task poll the queue itself and handle all the messages.
I'm trying to send an S3 event message to RabbitMQ by invoking an AWS Lambda function. I have configured SQS as my dead-letter queue (DLQ).
I know the message is sent to the DLQ when invocation of the Lambda fails, or in situations like timeouts or resource constraints.
My question is: I want to send the event message to the DLQ from inside the Lambda function under certain conditions, for example if RabbitMQ is down.
Is there any possibility of doing this? Should I throw an exception, or is there a better approach to sending the event message to the DLQ?
I'm using Java for development and connecting to RabbitMQ from my Lambda function.
The DLQ is simply an SQS queue, so you can send a message to it just as you would to any other queue. You'll want to format it the same way Lambda natively formats the messages it puts in the DLQ, so that whatever processing you have on the DLQ works the same for all messages. You'll also want to make sure the Lambda is treated as having executed successfully in this case, so that the normal DLQ process doesn't pick up the same message twice.
In the DLQ setting of a Lambda you specify an SNS topic or an SQS queue. In your setup you have configured the DLQ to be an SQS queue, which is a regular SQS queue, so using the SQS Java SDK you can post a message to it.
Here are a few references:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-send-message.html
To get the Queue URL you can use these:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueUrl.html
Or through Java:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/sqs/AmazonSQSClient.html#getQueueUrl-java.lang.String-
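The flow is just two SQS calls: look up the queue URL, then send the message. Shown below as a JavaScript sketch for brevity; the Java SDK exposes the same getQueueUrl and sendMessage operations (the queue name is a placeholder):

```js
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

// Sketch: push the incoming event to the DLQ when, e.g., RabbitMQ is down
async function sendToDlq(event) {
  const { QueueUrl } = await sqs.getQueueUrl({ QueueName: 'my-lambda-dlq' }).promise(); // placeholder name
  await sqs.sendMessage({ QueueUrl, MessageBody: JSON.stringify(event) }).promise();
  // Return normally afterwards so the invocation counts as a success and
  // the native DLQ path doesn't deliver the same event a second time.
}
```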
I want to trigger a Lambda function whenever a new message is added to SQS.
Note that I don't want to add new messages (events) to SQS myself.
What I'm trying to do:
My app will send a message to SQS
Whenever a new message is added to the queue, a CloudWatch event is generated
The CloudWatch event triggers the Lambda
Problem:
While configuring CloudWatch Events in the AWS console, I couldn't find any option to set the source of the event, i.e. the URL or name of my SQS queue.
I'm not sure if this use case is valid but please help me out.
EDIT: AWS now supports SQS as an event source to trigger Lambda functions. See this blog post for more details.
ORIGINAL ANSWER:
SQS is not supported as a direct event source for AWS Lambda functions. If there are properties of a queueing system that you need for your use case, then you could have a "cron-job" type Lambda function that runs on a schedule, receives messages from the queue, and calls your worker Lambda function in response to each message received. The problem with this approach is that you must continually poll SQS even during periods when you don't expect messages, which incurs unnecessary cost.
The easiest approach is to use SNS instead. Create a topic, publish events to that topic instead of adding a message to an SQS queue, and have your Lambda function subscribe to that SNS topic. It will then be invoked each time a message is published to that SNS topic. There's a tutorial on this approach here:
http://docs.aws.amazon.com/lambda/latest/dg/with-sns-example.html
I would recommend changing your approach.
Your application should publish a message to an existing SNS topic, and your SQS queue and Lambda should then subscribe to this SNS topic.
Application -> publish -> SNS_TOPIC
                          -> SQS is notified
                          -> Lambda is notified
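Publishing from the application is then a single SNS call. A minimal sketch with the JavaScript SDK v2 (the topic ARN and payload are placeholders):

```js
const AWS = require('aws-sdk');
const sns = new AWS.SNS({ region: 'us-east-1' }); // placeholder region

sns.publish({
  TopicArn: 'arn:aws:sns:us-east-1:123456789012:my-topic', // placeholder
  Message: JSON.stringify({ orderId: 42 }), // illustrative payload
}, (err) => {
  if (err) console.error('Publish failed', err);
  else console.log('Published; SNS fans out to the SQS queue and the Lambda');
});
```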
Does AWS Lambda provide support for listening to an SQS queue? I found some examples which say it can be done, but I'm not sure whether AWS Lambda explicitly supports it. When I create a Lambda function I do see a blueprint for SQS, so which is it?
I linked to the list in your other thread: these are the supported event sources. Notice that CloudWatch Events is one of the possible event types, so you could set up a Lambda to run every minute, for example, and poll an SQS queue. You cannot directly trigger a Lambda off an SQS queue.
Good news, this feature was released yesterday!
28 JUN 2018: AWS Lambda Adds Amazon Simple Queue Service to Supported Event Sources
Read the announcement blog post here: https://aws.amazon.com/blogs/aws/aws-lambda-adds-amazon-simple-queue-service-to-supported-event-sources/
The AWS Serverless Application Model (SAM) supports the new event source as follows:
Type: SQS
Properties:
  Queue: arn:aws:sqs:us-west-2:012345678901:my-queue # NOTE: FIFO SQS queues are not yet supported
  BatchSize: 10
It can also be configured from the AWS Console UI: [Screenshot: adding an SQS trigger in the Lambda console]
Alternatively, you can make your Lambda function poll the queue using the SQS API, or use SNS to trigger the Lambda function instead.
Update: AWS Lambda can now be triggered from Amazon SQS queues.
Old answer:
Rather than having AWS Lambda poll an Amazon SQS queue, the application that sends the message to the SQS queue should instead invoke the Lambda function directly. This can be done in several ways (a sketch of the first follows the list):
Direct invocation via an AWS API call
Sending a message to an Amazon SNS topic, with the Lambda function subscribed to the topic
Calling the function via Amazon API Gateway, which then triggers the Lambda function
The extra step of putting a message into an SQS queue is not necessary.
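As a sketch of the first option, the producer can invoke the Lambda function directly through the Lambda API (JavaScript SDK v2; the function name and payload are placeholders):

```js
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({ region: 'us-east-1' }); // placeholder region

// Invoke the worker function directly instead of queueing a message
lambda.invoke({
  FunctionName: 'my-worker-function', // placeholder
  InvocationType: 'Event', // asynchronous: don't wait for the result
  Payload: JSON.stringify({ hello: 'world' }),
}, (err) => {
  if (err) console.error('Invoke failed', err);
});
```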