I have a lambda that makes web requests depending on attributes of a message coming in via API Gateway. When the web request fails, I drop the event in a queue to be processed at a later time.
Since the likely reason for the failed request is that the external service is down, I want to retry the request, but not immediately.
I know I can have the queue be a trigger for the lambda, but I don't want it to trigger immediately on a new message arriving. I'd rather have it wait for 5 minutes or so and then have the SQS trigger the lambda.
My current solution has another lambda, triggered by a CloudWatch event, pull from the queue and then resend the messages to the lambda that makes the requests. This feels sloppy, since I'm building a CloudWatch event and another lambda just to handle a retry.
Is there a way for the SQS to trigger the Lambda on a time interval rather than on enqueue? Is there a better way to handle this?
Yes, you can set up SQS delay queues, with a delivery delay of up to 15 minutes.
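For example, a minimal boto3-style sketch of re-queueing a failed event with a per-message delay (the `enqueue_for_retry` helper and queue URL are illustrative, not from the question; SQS caps the delay at 900 seconds):

```python
import json

def enqueue_for_retry(sqs, queue_url, event, delay_seconds=300):
    """Re-queue a failed event so it only becomes visible again after a delay.

    `sqs` is a boto3 SQS client (injected so the function is testable).
    SQS caps DelaySeconds at 900 (15 minutes); the same delay can instead be
    set queue-wide via the queue's DelaySeconds attribute.
    """
    if not 0 <= delay_seconds <= 900:
        raise ValueError("SQS delay must be between 0 and 900 seconds")
    return sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps(event),
        DelaySeconds=delay_seconds,
    )

# Against real AWS (assuming boto3 is installed and credentials configured):
#   sqs = boto3.client("sqs")
#   enqueue_for_retry(sqs, queue_url, {"event": "failed-request"})
```

With a delay queue (or per-message delay) in place, the SQS trigger on the retry lambda can stay: the trigger simply won't see the message until the delay has elapsed.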
When a file is added to my S3 bucket an S3PUT Event is triggered which puts a message into SQS. I've configured a Lambda to be triggered as soon as a message is available.
In the lambda function, I'm sending an API request to run a task on an ECS Fargate container with environment variables containing the message received from SQS. In the container I'm using the message to download the file from S3, do processing and on successful processing I wish to delete the message from SQS.
However, the message gets deleted from SQS automatically after my lambda executes.
Is there any way I can configure the lambda not to automatically delete the SQS message (other than raising an exception and purposely failing the lambda), so that I can programmatically delete the message from my container?
Update:
Consider this scenario which I wish to achieve.
Message enters SQS queue
Lambda takes the message, calls the ECS API, and finishes without deleting the msg from the queue.
Msg is in-flight.
ECS container runs the task and deletes msg from queue on successful processing.
If the container fails, the message will re-enter the queue after the visibility timeout expires, the lambda will be triggered again, and the cycle will repeat from step 1.
If container fails more than a certain number of times, only then will message go from in-flight to DLQ.
This all currently works only if I purposely raise an exception in the lambda, and I'm looking for a similar solution without doing so.
The behaviour is intended: as long as SQS is configured as a Lambda trigger, once the function returns successfully (i.e. completes execution without raising an error), the message is automatically deleted.
The way I see it, to achieve the behaviour you're describing you have 4 options:
Remove SQS as Lambda trigger and instead execute the Lambda Function on a schedule and poll the queue yourself. The lambda will read messages that are available, but unless you delete them explicitly they will become available again once their visibility timeout has expired. You can achieve this with a CloudWatch schedule.
Remove SQS as Lambda trigger and instead execute the Lambda Function explicitly. Similar to the above but instead of executing on a schedule all the time, the Lambda function could be triggered by the producer of the message itself.
Keep the SQS Lambda trigger and store the message in an alternative SQS Queue (as suggested by @jarmod in a comment above).
Configure the producer of the message to publish a message to an SNS Topic and subscribe two SQS Queues to this topic. One of the two queues will trigger a Lambda Function; the other one will be used by your ECS tasks.
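Option 1 can be sketched as follows, assuming boto3 with an injected SQS client (the `drain_queue` helper and its parameters are illustrative). Messages are deleted only after successful processing, so a failed message simply becomes visible again after the visibility timeout:

```python
import json

def drain_queue(sqs, queue_url, handler, max_batches=10):
    """Poll an SQS queue ourselves (option 1: scheduled Lambda, no SQS trigger).

    Messages are deleted only after `handler` succeeds; on failure the
    message stays in flight and reappears once its visibility timeout expires.
    """
    processed = 0
    for _ in range(max_batches):
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling to reduce empty responses
        )
        messages = resp.get("Messages", [])
        if not messages:
            break
        for msg in messages:
            try:
                handler(json.loads(msg["Body"]))
            except Exception:
                continue  # leave the message; it reappears after the timeout
            sqs.delete_message(
                QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
            )
            processed += 1
    return processed
```

The key difference from the trigger-based setup is that deletion is now an explicit, per-message decision instead of something Lambda does for you on a successful return.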
Update
Based on the new info provided, you have another option:
Leave the event flow as it is and let Lambda delete the message from SQS. Then, in your ECS Task, handle the failure state and put a new message on the SQS queue with the same payload/body. This will allow you to retry indefinitely.
There's no reason why the SQS message has to be exactly the same; what you're interested in is the body/payload.
You might want to consider adding a mechanism to set a limit to these retries and post a message to a DLQ.
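A hedged sketch of that retry-limit mechanism, assuming the ECS task uses boto3 and tracks attempts in the message body itself (the `attempts` field and `MAX_RETRIES` cap are our own convention, not anything SQS provides):

```python
import json

MAX_RETRIES = 5  # illustrative cap; pick whatever limit fits your workload

def requeue_or_dead_letter(sqs, queue_url, dlq_url, payload):
    """ECS-side failure handler: re-publish the same payload as a brand new
    SQS message, counting attempts in the body; past MAX_RETRIES, route it
    to a DLQ instead of retrying forever.

    `sqs` is a boto3 SQS client, injected so the function is testable.
    """
    attempts = payload.get("attempts", 0) + 1
    body = {**payload, "attempts": attempts}
    target = dlq_url if attempts > MAX_RETRIES else queue_url
    sqs.send_message(QueueUrl=target, MessageBody=json.dumps(body))
    return target
```

Because each retry is a brand-new message, the queue's built-in redrive policy never sees it as a repeated receive; the attempt counter in the body is what enforces the limit.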
One solution I can think of: remove the lambda triggered by the SQS queue and create a CloudWatch alarm on the queue. When the alarm triggers, scale out the ECS task; when there are no items in the queue, scale the ECS task back down. Let the ECS task poll the queue and handle all the messages itself.
So, I am putting some entries in an SQS queue which is set as an event source for the Lambda, and this flow is working fine: as soon as an entry arrives in the SQS queue, the lambda processes it. So far so good.
But I have a situation where I want the entries to stay in SQS for 3-4 days and then let a lambda process them.
So basically, if I see that I have 100 entries in my SQS queue and it's been 4 days, I want to let the lambda drain them and run some logic. Is this possible? Kindly guide me.
I think disabling the lambda is not the way to fulfil the requirement, as you would miss other messages too.
SQS is a messaging service, and when it is integrated with Lambda you can only configure retries and process the messages; keeping a message in SQS is not under the user's control, because Lambda deletes it by design:
Lambda polls the queue and invokes your function synchronously with an event that contains queue messages. Lambda reads messages in batches and invokes your function once for each batch. When your function successfully processes a batch, Lambda deletes its messages from the queue.
One solution that can address your requirement ("I want to let the entries stay in SQS for 3-4 days and then let a lambda process them") is the following.
You need to decide which messages should not be processed immediately, push those messages to DynamoDB, and then process them after 3-4 days based on a DynamoDB TTL added during insertion. You can follow the steps below:
Add a property (e.g. is_dynamodb) to the message to identify messages that should not be processed immediately
Push such messages to DynamoDB
Add a TTL attribute during insertion
In the Lambda function consuming the DynamoDB stream, check that the event is a REMOVE (i.e. a TTL expiry), not an INSERT
Process messages only if the event is REMOVE
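The stream-handling steps above can be sketched as a minimal Python Lambda attached to the DynamoDB stream (the `handle_ttl_stream` name and `process` callback are illustrative):

```python
def handle_ttl_stream(event, process):
    """Lambda handler body for a DynamoDB stream: a TTL expiry shows up as a
    REMOVE record, so INSERT/MODIFY events are skipped.

    `process` receives the expired item's OldImage; the table's stream view
    type must be configured to include old images for this to be present.
    """
    handled = 0
    for record in event.get("Records", []):
        if record.get("eventName") != "REMOVE":
            continue
        process(record["dynamodb"]["OldImage"])
        handled += 1
    return handled
```

Note that DynamoDB TTL deletion is best-effort and can lag the expiry time by some margin, so this approach gives "roughly 3-4 days", not an exact schedule.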
I have a lambda function that is responsible for checking the server status. It needs to be called when SQS receives new messages, and I am not allowed to change anything in SQS. I tried using an SQS Lambda trigger, but that pushes the message into the lambda function and therefore changes (consumes from) the SQS queue.
I am looking for a way to handle this problem. I tried to use CloudWatch, but I don't know whether this is possible. How can CloudWatch trigger a Lambda function when SQS receives new messages?
Thanks in advance.
This will be difficult because, if the message is consumed quickly, it might not have an impact on Amazon CloudWatch metrics. You'll need to experiment to see whether this is the case. For example, set a metric for the maximum number of messages received in a 1-minute time period and try to trigger a CloudWatch Alarm when it is greater than zero.
Alternatively, have the system that sends the SQS message send it to Amazon SNS instead. Then, both the SQS queue and the Lambda function can subscribe to the SNS topic and both can be activated.
In fact, I know somebody who always uses SNS in front of SQS "just in case" this type of thing is necessary.
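The producer-side change for the SNS approach is small; a hedged boto3-style sketch (topic ARN and helper name are illustrative, and the SQS/Lambda subscriptions to the topic are created once, out of band):

```python
import json

def publish_event(sns, topic_arn, event):
    """Publish to an SNS topic instead of sending straight to SQS.

    With the existing queue *and* the status-check Lambda both subscribed to
    the topic (sns.subscribe with Protocol="sqs" and Protocol="lambda"), one
    publish fans out to both consumers, and the Lambda never touches the
    queue itself.

    `sns` is a boto3 SNS client, injected so the function is testable.
    """
    return sns.publish(TopicArn=topic_arn, Message=json.dumps(event))
```

This keeps the queue's contents untouched by the monitoring path, which satisfies the "not allowed to change anything in SQS" constraint on the consuming side (though it does require changing the producer).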
I have an AWS Lambda function which is triggered by SQS. The function is triggered approximately 100 times daily, but the request count to the SQS queue is approximately 20,000 daily. I don't understand why the number of requests made to SQS is so high; my expectation is that it should match the number of Lambda invocations.
I have only one Lambda function and one SQS queue in my account.
Could this be related to polling of the SQS queue? I tried to change the polling interval of SQS in the queue configuration, but nothing changed. Another possibility would be to change the polling interval in the Lambda function configuration; however, I cannot find any related parameter.
Briefly, I want to reduce the number of SQS requests. How can I do that while invoking the Lambda function with SQS?
When using SQS as an event source for AWS Lambda, the Lambda service regularly polls the configured SQS queue to fetch new messages. While the official documentation isn't really clear about that, the blog post announcing the feature goes into the details:
When an SQS event source mapping is initially created and enabled, or when messages first appear after a period with no traffic, then the Lambda service will begin polling the SQS queue using five parallel long-polling connections.
According to the AWS documentation, the default duration for a long poll from AWS Lambda to SQS is 20 seconds.
That results in five requests to SQS every 20 seconds for AWS Lambda functions without significant load, which sums up to the ~21600 per day, which is close to the 20000 you're experiencing.
While increasing the long-poll duration seems like an easy way to decrease the number of requests, that's not possible: the 20 seconds AWS Lambda uses by default is already the maximum long-poll duration for an SQS queue. I'm afraid there is no easy way to decrease the requests to SQS when using it as an event source for AWS Lambda. Instead, it could be worth evaluating whether another event source, like SNS, would fit your use case as well.
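The arithmetic behind that estimate:

```python
# Back-of-the-envelope check of the figure above: five parallel long-poll
# connections, each issuing one ReceiveMessage call every 20 seconds
# (the maximum long-poll duration), around the clock.
CONNECTIONS = 5
POLL_SECONDS = 20
SECONDS_PER_DAY = 24 * 60 * 60

requests_per_day = CONNECTIONS * (SECONDS_PER_DAY // POLL_SECONDS)
print(requests_per_day)  # 21600
```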
Here is how we originally implemented this, back when there was no SQS trigger:
Create an SNS notification from a CloudWatch alarm on the SQS metric
ApproximateNumberOfMessagesVisible > 0
Trigger a Lambda from SNS, read the messages from SQS, and deliver them to whichever lambda needs them.
Alternatively, you can use Kinesis to deliver it to Lambda.
SQS --> CloudWatch (Trigger Lambda) --> Lambda (Reads Messages) --> Kinesis (Set Batch Size) --> Lambda (Handle Actual Message)
You can also use Kinesis directly but there is no delayed delivery.
Hope it helps.
I have a requirement to send email and SMS to users based on some conditions. I want to publish a message to AWS (any service) with a time and a message at the time of user creation. Is there any way to call a Lambda function at my scheduled time, along with the message?
Sounds like what you are saying is that you want to store a message and a 'time to send' someplace and then when that time comes, send out that message via SMS and/or SES, correct?
Lots of ways to accomplish it, but one way would be to store your messages into the database of your choice (perhaps dynamodb), and have a lambda function that gets called periodically (every minute or whatever frequency you determine) to find messages that are ready to send.
In this scenario you could use cloudwatch events to call the lambda function at the interval you decide (but no more frequent than once per minute).
A possible enhancement (especially if you have a huge number of events) would be to have the lambda function not actually process the SMS/SES sends, but just find the messages that are ready to send, post them to an SNS topic, and have a different lambda function take care of the actual processing (sending) of those messages.
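A minimal sketch of the "find messages that are ready to send" step, with illustrative field names (`send_at` as an epoch timestamp, `sent` as a flag) since the question doesn't prescribe a schema:

```python
import time

def due_messages(items, now=None):
    """Filter the stored notifications down to the ones whose scheduled
    send time has passed and that haven't been sent yet.

    Field names are illustrative assumptions, not from the question.
    """
    now = now if now is not None else int(time.time())
    return [m for m in items if m["send_at"] <= now and not m.get("sent")]

# In the scheduled lambda, `items` would come from the database (for
# DynamoDB, ideally a Query on an index keyed by due time rather than a
# full Scan), and each due message would then be handed to SES/SNS, or
# posted to an SNS topic for a second lambda to send.
```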
You can use CloudWatch scheduled events for this. They allow you to specify a cron expression. The event itself can trigger your lambda, which then checks any preconditions you might have and sends the notification via SNS or some other way.
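As a hedged boto3-style sketch (rule name and helper are illustrative), the schedule can be wired up like this:

```python
def schedule_lambda(events, rule_name, schedule, lambda_arn):
    """Create a CloudWatch Events (EventBridge) rule that fires a Lambda on a
    schedule expression, e.g. "cron(0 9 * * ? *)" for 09:00 UTC daily, or
    "rate(5 minutes)".

    `events` is a boto3 CloudWatch Events / EventBridge client. The Lambda
    also needs a resource-based permission allowing events.amazonaws.com to
    invoke it (lambda.add_permission), omitted here for brevity.
    """
    events.put_rule(Name=rule_name, ScheduleExpression=schedule, State="ENABLED")
    events.put_targets(Rule=rule_name, Targets=[{"Id": "target-1", "Arn": lambda_arn}])
```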