Scheduling SQS messages

My use case is as follows: I need to be able to schedule SQS messages in such a way that scheduled messages can be added to a queue on a specific date/time, and also on a recurring basis as needed.
At the implementation level, what I'd basically be looking to do is have some function I can call where I pass in the SQS queue, the message, and the schedule I want it to run on, without having to build the actual scheduler logic myself.
I haven't seen anything in AWS itself that seems to allow for that, and I didn't get the impression Lambda functions would do exactly what I need either, unless I'm missing something.
Is there any other third-party cloud service for scheduled processes I should look into, or am I better off in the end just running a scheduling machine on AWS with some REST API that can add cron jobs/Windows scheduled tasks to it to handle the scheduling of SQS messages?

I could see two slightly different ways of accomplishing this, both based on CloudWatch scheduled events. The first would be to have CloudWatch fire off a Lambda. The Lambda would either carry the needed parameters itself or fetch them from somewhere else, for example a DynamoDB table. Alternatively, the rule target lets you specify an SQS queue directly, skipping the Lambda, but I'm not sure that gives you the configuration flexibility you'd want.
Either way, check out CloudWatch -> Events -> Create Rule in the AWS console to see your choices.
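
For the second option, a minimal sketch of wiring a scheduled rule straight to a queue with boto3 might look like this (the rule name, queue ARN, and payload are hypothetical, and the queue's access policy would also need to allow events.amazonaws.com to send to it):

import json
import boto3

events = boto3.client("events")

# Hypothetical names/ARNs for illustration only.
RULE_NAME = "send-reminder-daily"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:reminders"

# A cron-style schedule: every day at 12:00 UTC.
events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression="cron(0 12 * * ? *)",
    State="ENABLED",
)

# The rule delivers a fixed payload to the queue on every firing.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{
        "Id": "reminder-queue",
        "Arn": QUEUE_ARN,
        "Input": json.dumps({"type": "reminder", "user": "example"}),
    }],
)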

Related

How do I run AWS Dead Letter Queue periodically?

Requirement
I'd like to push DLQ messages to a Lambda function periodically, like a cron job.
Situation
I created a batch processing system using SQS and Lambda.
So I also prepared a DLQ (Dead Letter Queue) to store the failed messages.
AWS suggests re-running failed messages from the DLQ with a Lambda trigger.
https://docs.aws.amazon.com/lambda/latest/dg/invocation-retries.html
I'd like to re-run them periodically and automatically, like cron, but AWS only shows how to do that manually.
When I re-run manually, I just map the Lambda to the DLQ and it's all done.
After that, I can re-run messages dynamically with the Enable and Disable buttons.
Doing it automatically is more complicated, because there is no API to switch the Lambda trigger between Enable and Disable.
The boto3 API calls create_event_source_mapping() and delete_event_source_mapping() seem like a better way.
But delete_event_source_mapping() requires the UUID of the event source mapping. I think that means I need some datastore like ElastiCache or similar.
However, I don't want to provision other resources in this situation if possible.
My Solution
A CloudWatch Event calls the Lambda.
The Lambda activates (Enables) the event source mapping using create_event_source_mapping().
The Lambda deactivates (Disables) the event source mapping using delete_event_source_mapping().
It looked good to me at first, but in the third step the Lambda needs to know the UUID created in the first step. I think this case still needs a datastore to hold the UUID.
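For reference, a minimal sketch of what the enable/disable steps might look like with boto3 (the DLQ ARN and function name below are hypothetical); create_event_source_mapping() returns the UUID that delete_event_source_mapping() later needs:

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical ARNs/names for illustration only.
DLQ_ARN = "arn:aws:sqs:ap-northeast-1:123456789012:my-dlq"
FUNCTION_NAME = "reprocess-dlq"

def enable_mapping():
    # Create the SQS -> Lambda trigger; the response contains the UUID.
    resp = lambda_client.create_event_source_mapping(
        EventSourceArn=DLQ_ARN,
        FunctionName=FUNCTION_NAME,
        BatchSize=10,
    )
    return resp["UUID"]

def disable_mapping(uuid):
    # Remove the trigger again; this requires the UUID from the create call.
    lambda_client.delete_event_source_mapping(UUID=uuid)

(As an aside, one way to avoid persisting the UUID might be to look it up again with list_event_source_mappings(EventSourceArn=..., FunctionName=...) in the disable step.)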
Do you have any solutions without datastore?
Thanks.

How to trigger AWS Lambda on CloudWatch schedule + SQS

I know that AWS Lambda can be invoked by the CloudWatch scheduler as well as by an SQS event, but can they be used together in a logical "and" combination?
Basically, what I need is to run my Lambda every minute (for example) only when messages are available in SQS. Is it even possible with AWS configuration only?
I need this to be able to use a third-party API with a hard API limit, which is why I can't just use the SQS event (it's easy to break the limit), and I don't like the idea of using the scheduler only, because it would be useless when the queue is empty.
While this is a cool idea, it is unfortunately not possible: event sources in Lambda are always separate from each other. I understand the impulse to save CPU cycles and API calls (and money), but I think the only solution that works is the put-it-on-a-timer-and-poll-SQS approach you proposed.
I was searching the documentation for references on this, but couldn't find any.
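For what it's worth, a minimal sketch of that timer-plus-poll pattern, assuming a scheduled rule invokes the handler every minute (the queue URL and the rate-limited call are hypothetical):

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical

def handler(event, context):
    # Invoked every minute by a CloudWatch Events / EventBridge schedule.
    # Bail out cheaply when the queue is empty so the rate-limited API is only hit when there is work.
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    if int(attrs["Attributes"]["ApproximateNumberOfMessages"]) == 0:
        return

    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5)
    for msg in resp.get("Messages", []):
        call_rate_limited_api(msg["Body"])  # hypothetical third-party call
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])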

Scheduled jobs using AWS Lambda

I have several AWS Lambda functions triggered by events from other applications, e.g. via Kinesis. Some of these events should trigger something happening at a later time. As an example, consider the case of sending a reminder/notification e-mail to a user about something when 24 hours have passed since event X happened.
I have previously worked with Lambda functions that schedule other Lambda functions by dynamically creating CloudWatch "cron" rules at runtime, but I'm now revisiting my old design and considering whether this is the best approach. It was a bit tedious to set up Lambdas that schedule other Lambdas, because in addition to submitting CW rules with the new Lambda as the target, I also had to grant the invoked Lambda, at runtime, permission to be triggered by the new CW rule (a rough sketch of that wiring is below).
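For context, the runtime wiring described above might look roughly like this with boto3 (the rule name, schedule expression, and function ARN are hypothetical):

import json
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Hypothetical values for illustration.
RULE_NAME = "remind-user-42"
TARGET_LAMBDA_ARN = "arn:aws:lambda:eu-west-1:123456789012:function:send-reminder"

# 1. Create the "cron" rule for the desired future time.
rule = events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression="cron(30 14 1 6 ? 2025)",  # one-shot cron expression for the target time
    State="ENABLED",
)

# 2. Point the rule at the reminder Lambda, passing the job details as input.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "reminder", "Arn": TARGET_LAMBDA_ARN,
              "Input": json.dumps({"userId": "42"})}],
)

# 3. The tedious part: grant CloudWatch Events permission to invoke the Lambda.
lambda_client.add_permission(
    FunctionName=TARGET_LAMBDA_ARN,
    StatementId=f"allow-{RULE_NAME}",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)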
So another approach I'm considering is to submit jobs to be done by adding them to a database table with a given execution time, and then have one single CW cron rule running every x minutes that checks the database for due jobs. This reduces the complexity of the CW rules (only one static rule needed), Lambda permissions (also static) etc., but adds the complexity of an additional database table. Another difference is that while the old design only executed one "job" per invocation, this design could potentially execute 100 pending jobs in the same invocation, and I'm not sure if this could cause timeout issues.
Did anyone successfully implement something similar? What approach did you choose?
I know there are also other services such as AWS Batch, but this seems like overkill for scheduling simple tasks such as sending an e-mail when time t has passed since event e happened, since to my knowledge it doesn't support simple Lambda jobs. SQS also supports message timers, but only up to 15 minutes, so they don't seem useful for scheduling something in 24 hours.
An interesting alternative is to use AWS Step Functions to trigger the AWS Lambda function after a given delay.
Step Functions has a Wait state that can schedule or delay execution, so you can implement a fairly simple Step Functions state machine that puts a delay in front of calling a Lambda function. No database required!
For an example of the concept (slightly different, but close enough), see:
Using AWS Step Functions To Schedule Or Delay SNS Message Publication - Alestic.com
Task Timer - AWS Step Functions
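
As an illustration, assuming a state machine whose Wait state uses "TimestampPath": "$.sendAt" before a Task state invokes the reminder Lambda, kicking off a delayed execution might look like this (the ARN and field names are hypothetical):

import json
from datetime import datetime, timedelta, timezone

import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical state machine: Wait state reading $.sendAt, then a Task state
# that invokes the reminder Lambda.
STATE_MACHINE_ARN = "arn:aws:states:eu-west-1:123456789012:stateMachine:delayed-reminder"

def schedule_reminder(user_id, delay_hours=24):
    send_at = datetime.now(timezone.utc) + timedelta(hours=delay_hours)
    sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({
            "sendAt": send_at.strftime("%Y-%m-%dT%H:%M:%SZ"),  # consumed by the Wait state
            "userId": user_id,
        }),
    )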

AWS Lambda Scheduled One Time Tasks

I'm working on figuring out the best way to have Lambda run one-time tasks at a given time.
The system I'm envisioning will basically have events that need to be sent out, either as soon as the event is received/created, at a specific time, or on a recurring basis. And I'd like to use AWS as much as possible for this, due to its scalable nature.
My original idea was to have an AWS SQS queue for events to send, plus a DynamoDB table for future events. I'd also have two AWS Lambda functions: one set up to run on a cron job every few minutes, taking the events scheduled for the next 15 minutes or so from the DynamoDB table and putting them into the SQS queue with a Message Timer set to delay the message from being visible until the given time. The second Lambda function would be triggered by that SQS queue and would be responsible for actually sending the event out.
From there I could either add the event to the SQS queue (with or without a message timer) if it needs to be sent out within the next 15 minutes, or add it to the DynamoDB table if it needs to be sent out further in the future (beyond 15 minutes). A rough sketch of the scheduler half of this is below.
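For concreteness, a sketch of what that scheduler Lambda might look like with a standard queue (the table name, attributes, and queue URL are hypothetical, and a real table would want an index rather than a scan):

import time
import boto3

dynamodb = boto3.resource("dynamodb")
sqs = boto3.client("sqs")

# Hypothetical names/attributes for illustration.
TABLE = dynamodb.Table("future_events")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/events-to-send"

def handler(event, context):
    now = int(time.time())
    horizon = now + 15 * 60  # everything due in the next 15 minutes

    # Assumes a 'send_at' epoch-seconds attribute on each item.
    due = TABLE.scan(
        FilterExpression="send_at BETWEEN :now AND :horizon",
        ExpressionAttributeValues={":now": now, ":horizon": horizon},
    )["Items"]

    for item in due:
        delay = max(0, int(item["send_at"]) - now)  # DelaySeconds tops out at 900 (15 minutes)
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=item["payload"], DelaySeconds=delay)
        TABLE.delete_item(Key={"event_id": item["event_id"]})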
The biggest problem I just figured out is that AWS SQS FIFO queues don't support Message Timers on individual messages. I need a FIFO queue because I need to prevent these events from being sent out multiple times, or from triggering my second Lambda function twice.
I've also looked into AWS Lambda cron jobs, and although you can schedule invocations every, say, 5 minutes, that isn't really what I'm after: I'm looking to schedule a one-time invocation in the future, and to have that be scalable.
Any ideas on how I can achieve this, since it doesn’t look like Amazon SQS Message Timers will work for what I'm trying to do?
Have you considered Step Functions? You could create a Wait state before invoking the Lambda.

AWS Lambda to read from SQS queue

I have an AWS Lambda function that reads from an SQS queue. The Lambda logic is basically to read one message off SQS, process it, and delete it. The code to read the message looks something like this:
ReceiveMessageRequest messageRequest =
        new ReceiveMessageRequest(queueUrl).withWaitTimeSeconds(5).withMaxNumberOfMessages(1);
Now my question is: what is the best way to trigger this Lambda, and how does it scale? For instance, if there are, say, 1000 messages in the queue, will there be 1000 Lambdas running together, since in my case one Lambda can only read one message off the queue?
Any pointers on best practices around this kind of design.
Right now your best option is probably to set up an AWS CloudWatch Events rule that calls the Lambda function on the interval that you need.
Here is a sample app from AWS to do just that:
https://github.com/awslabs/aws-serverless-sqs-event-source
I do believe that AWS will eventually support SQS as an event type for AWS Lambda, which should make this even easier, but for now your best choice is probably a version of the code linked above.
We can now use SQS messages to trigger AWS Lambda functions. Moreover, you are no longer required to run a message polling service or create an SQS-to-SNS mapping.
Further details:
https://aws.amazon.com/blogs/aws/aws-lambda-adds-amazon-simple-queue-service-to-supported-event-sources/
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
AWS added native support in June 2018: https://aws.amazon.com/blogs/aws/aws-lambda-adds-amazon-simple-queue-service-to-supported-event-sources/
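With the native event source, the handler just receives batches of messages; a minimal sketch (the process() function is a hypothetical stand-in for your business logic):

import json

def handler(event, context):
    # With the native SQS event source, Lambda polls the queue for you and hands the
    # handler a batch of messages in event["Records"]; the messages are deleted from
    # the queue automatically when the handler returns without raising an error.
    for record in event["Records"]:
        body = json.loads(record["body"])  # assumes producers send JSON bodies
        process(body)

def process(message):
    # Hypothetical business logic stand-in.
    print("processing", message)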
There are probably a few ways to do this, but I found this guide to be fairly helpful when I tried to implement the same sort of functionality you are describing in Node.js. One downside to this strategy is that you can only poll the queue every 60s.
The basic workflow would look something like this:
Set up a CloudWatch Alarm that gets triggered when the queue has a certain number of messages.
The CloudWatch Alarm then posts to SNS.
The SNS message triggers a Lambda scale() function.
The scale() function updates a configuration record in a DynamoDB table that sets the number of worker processes needed.
You then have a main CloudWatch schedule that invokes a worker() function every 60 seconds.
The worker() function reads the configuration from DynamoDB to determine how many concurrent processes are needed, based on the queue size.
worker() then invokes the appropriate number of process() functions.
Each process() function consumes messages from SQS, performs your main application logic, and then removes the item from the queue.
You can find an example of what the scaling functions would look like in Node.js here
I have used this solution in a production environment for almost a year without any issues, even with thousands of messages in the queue. If you cut out the scaling portion, it will only do one message at a time. A rough sketch of the worker fan-out step is below.
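To illustrate the fan-out, the worker() step might look roughly like this (the table, key names, and function name are hypothetical):

import json
import boto3

dynamodb = boto3.resource("dynamodb")
lambda_client = boto3.client("lambda")

# Hypothetical resources for illustration.
CONFIG_TABLE = dynamodb.Table("sqs-worker-config")
PROCESS_FUNCTION = "process-queue-messages"

def worker(event, context):
    # Read how many concurrent consumers the scale() function decided we need.
    item = CONFIG_TABLE.get_item(Key={"id": "worker-config"})["Item"]
    concurrency = int(item["processes"])

    # Fan out: invoke that many process() functions asynchronously.
    for i in range(concurrency):
        lambda_client.invoke(
            FunctionName=PROCESS_FUNCTION,
            InvocationType="Event",  # fire-and-forget; each invocation polls SQS itself
            Payload=json.dumps({"worker": i}),
        )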