How to handle a large emailing queue and delivery with AWS SES? - amazon-web-services

We are developing an app that needs to handle large email queues. We plan to store emails in an SQS queue and use SES to send them, but we're a bit confused about how to actually handle and process the queue. Should we use a cron job to regularly read the SQS queue and send the emails? What would be the best way to trigger the script that does the emailing from our app?

Using SQS with SES is a great way to handle this. If something goes wrong while emailing, the request will still be on the queue and will be processed the next time around.
I just use a cron job that starts my queue processing/email sending job once an hour. The job runs for an hour as a simple loop:
while I've been running < 1 hour:
    if there's a message in the queue:
        process the message
        delete the message from the queue
I set the WaitTimeSeconds parameter to the maximum (20 seconds) so that the check for a new message waits up to 20 seconds if nothing is immediately available, rather than hitting AWS every few milliseconds. Otherwise, I would put a sleep statement of some kind in the loop.
The reason I run for just an hour is that the job might encounter some error that kills it, or have a memory leak, or some other unanticipated problem. This way any queued email requests will still get handled the next time the job is started.
If you want, you can start the job every fifteen minutes so you'll always have four worker processes handling queue requests. If one of them dies for some reason, you'll still be processing with the other three.
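For reference, here is a minimal sketch of that hourly worker loop using the AWS SDK for Java v1. The queue URL, the sender address, and the pipe-delimited message body format are illustrative assumptions, not something prescribed by SQS or SES:

import com.amazonaws.services.simpleemail.AmazonSimpleEmailService;
import com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClientBuilder;
import com.amazonaws.services.simpleemail.model.Body;
import com.amazonaws.services.simpleemail.model.Content;
import com.amazonaws.services.simpleemail.model.Destination;
import com.amazonaws.services.simpleemail.model.SendEmailRequest;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class EmailQueueWorker {
    public static void main(String[] args) {
        // Placeholder queue URL and sender address; substitute your own.
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/email-queue";
        String from = "no-reply@example.com";

        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        AmazonSimpleEmailService ses = AmazonSimpleEmailServiceClientBuilder.defaultClient();

        long deadline = System.currentTimeMillis() + 60 * 60 * 1000L; // run for one hour, then let cron restart us

        while (System.currentTimeMillis() < deadline) {
            // Long poll: wait up to 20 seconds for a message instead of spinning.
            ReceiveMessageRequest req = new ReceiveMessageRequest(queueUrl)
                    .withWaitTimeSeconds(20)
                    .withMaxNumberOfMessages(1);

            for (Message m : sqs.receiveMessage(req).getMessages()) {
                // Assumes the body is "recipient|subject|text"; use whatever
                // format your producer actually writes.
                String[] parts = m.getBody().split("\\|", 3);

                ses.sendEmail(new SendEmailRequest()
                        .withSource(from)
                        .withDestination(new Destination().withToAddresses(parts[0]))
                        .withMessage(new com.amazonaws.services.simpleemail.model.Message(
                                new Content(parts[1]), new Body(new Content(parts[2])))));

                // Delete only after the send succeeds, so a failed send leaves
                // the request on the queue for the next run.
                sqs.deleteMessage(queueUrl, m.getReceiptHandle());
            }
        }
    }
}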

Related

SQS Lambda Trigger polling rate

I'm trying to understand how SQS Lambda triggers work when polling for messages from the queue.
Criteria
I'm trying to make sure that no more than 3 messages are processed within a period of 1 second.
Idea
My idea is to set the trigger BatchSize to 3 and set the ReceiveMessageWaitTimeSeconds of the queue to 1 second. Am I thinking about this correctly?
Edit:
I did some digging and it looks like I can set a concurrency limit on my Lambda. If I set my Lambda concurrency limit to one, that ensures only one batch of messages gets processed at a time. If my Lambda runs for a second, then the next batch of messages gets processed at least a second later. The gotcha here is that long polling auto-scales the number of asynchronous pollers on the queue based on message volume. This means the Lambdas can potentially throttle when a large number of messages comes in. When the Lambdas throttle, the message goes back to the queue until it eventually ends up in the DLQ.
ReceiveMessageWaitTimeSeconds is used for long polling. It is the length of time, in seconds, for which a ReceiveMessage action waits for messages to arrive (docs). Long polling does not mean that your client will wait for the full length of the time set. If you have it set to one second but the queue already has messages available, your client will consume them immediately and will try to consume again as soon as processing is completed.
If you want to consume a certain number of messages at a certain rate, you have to do this in your application (for example, consume messages on a scheduled basis). SQS by itself does not provide any kind of rate limiting along the lines of what you want to accomplish.
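As a sketch of that kind of application-side pacing (AWS SDK for Java v1; the queue URL and the handleMessage stub are placeholders), a consumer that pulls at most 3 messages per one-second iteration might look like this:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class RateLimitedConsumer {
    public static void main(String[] args) throws InterruptedException {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        while (true) {
            long start = System.currentTimeMillis();

            // Pull at most 3 messages per iteration.
            ReceiveMessageRequest req = new ReceiveMessageRequest(queueUrl)
                    .withMaxNumberOfMessages(3)
                    .withWaitTimeSeconds(1);

            for (Message m : sqs.receiveMessage(req).getMessages()) {
                handleMessage(m);                                  // your processing logic
                sqs.deleteMessage(queueUrl, m.getReceiptHandle());
            }

            // Pad each iteration out to one second so throughput stays at or below 3 msg/s.
            long elapsed = System.currentTimeMillis() - start;
            if (elapsed < 1000) {
                Thread.sleep(1000 - elapsed);
            }
        }
    }

    private static void handleMessage(Message m) {
        System.out.println("processing " + m.getMessageId());
    }
}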

Reprocess AWS SQS Dead Letter Queue messages

I'm sending messages that failed in my Lambda to a dead letter queue using the AWS SDK. I want to wait a few hours before sending a message back to the main queue for reprocessing. I have a Lambda attached to my dead letter queue. I can use a delay when sending messages to the dead letter queue, but the maximum delay is 15 minutes and I want to wait longer. Has anyone done this before?
Amazon SQS is not intended to be used in this manner. Its primary purpose is to store messages and then provide them back when requested.
Some other options:
Store the message in a database and have the application search for relevant messages based on a timestamp field, or
Do some tricky stuff with delays on AWS Step Functions (which has a delay feature)
As shown in this answer, you can extend the delay by giving each message a timestamp and only processing those that have been in the queue long enough.
message = SQS.poll_messages
if message.perform_message_at > Time.now
  SQS.push_to_queue({ perform_message_at: "Thursday November 2022" }, delay: 15.minutes)
else
  process_message(message)
end
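Here is a rough equivalent of that pseudocode in Java (AWS SDK v1), assuming a hypothetical perform_at message attribute holding an epoch-seconds timestamp; the queue URL, attribute name, and process stub are placeholders:

import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.MessageAttributeValue;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class DelayedRedrive {
    public static void main(String[] args) {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/main-queue"; // placeholder
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        ReceiveMessageRequest req = new ReceiveMessageRequest(queueUrl)
                .withMessageAttributeNames("All")
                .withWaitTimeSeconds(20)
                .withMaxNumberOfMessages(10);

        for (Message m : sqs.receiveMessage(req).getMessages()) {
            // Assumes the producer always sets a perform_at attribute (epoch seconds).
            long performAt = Long.parseLong(
                    m.getMessageAttributes().get("perform_at").getStringValue());

            if (Instant.now().getEpochSecond() < performAt) {
                // Not due yet: put a copy back with the maximum 15-minute delay
                // and carry the original timestamp along.
                Map<String, MessageAttributeValue> attrs = new HashMap<>();
                attrs.put("perform_at", new MessageAttributeValue()
                        .withDataType("Number")
                        .withStringValue(String.valueOf(performAt)));
                sqs.sendMessage(new SendMessageRequest(queueUrl, m.getBody())
                        .withDelaySeconds(900)
                        .withMessageAttributes(attrs));
            } else {
                process(m); // your actual handler
            }
            // Remove the copy we just received either way.
            sqs.deleteMessage(queueUrl, m.getReceiptHandle());
        }
    }

    private static void process(Message m) {
        System.out.println("processing " + m.getBody());
    }
}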

SQS - Schedule a message to be delivered

I would like to publish a message on SQS and process that message after a few hours.
How can I schedule a message delivery, or select messages from SQS based on some attribute?
I've implemented an SQS consumer, but I'm receiving every message from the SQS queue. Is it possible to implement something like that on SQS? I was thinking about receiving every message and sending it back to the queue if it's not yet time to process it.
There is a feature called delay queues in SQS: if you set a delay on the queue, any message put on the queue becomes available to consumers only after the delay has elapsed. However, the maximum delay you can set is 15 minutes, so if you are looking for a delay of a few hours this will not directly work for you.
The other option is to set a visibility timeout for the messages that is higher than the delay you want. When you read a message you can get its timestamp; if there is still time left before your delay elapses, you can sleep your consumer for the remaining time and process the message once it wakes up. However, this is not a recommended approach and would be highly inefficient, because your threads are blocked. Alternatively, if there is still time left, you can hold the message in a local list/array, keep checking for other messages, and process this one after your delay. Either way, the entire logic has to live in your code; you don't get any ready-made feature from AWS.
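For illustration only, here is a sketch of the sleep-until-due variant described above, using the SQS SentTimestamp attribute (AWS SDK for Java v1). The queue URL, the three-hour delay, and the process stub are placeholders, and the queue's visibility timeout would have to exceed the full delay:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class SleepUntilDue {
    public static void main(String[] args) throws InterruptedException {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder
        long delayMillis = 3 * 60 * 60 * 1000L; // desired 3-hour delay (illustrative)
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        ReceiveMessageRequest req = new ReceiveMessageRequest(queueUrl)
                .withAttributeNames("SentTimestamp") // ask SQS for the enqueue time
                .withWaitTimeSeconds(20);

        for (Message m : sqs.receiveMessage(req).getMessages()) {
            long sentAt = Long.parseLong(m.getAttributes().get("SentTimestamp"));
            long remaining = sentAt + delayMillis - System.currentTimeMillis();

            if (remaining > 0) {
                // Blocks the consumer thread, which is exactly why the answer
                // above calls this approach inefficient.
                Thread.sleep(remaining);
            }
            process(m);
            sqs.deleteMessage(queueUrl, m.getReceiptHandle());
        }
    }

    private static void process(Message m) {
        System.out.println("processing " + m.getBody());
    }
}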

Does the Spring SqsListener wait until the last message is processed (or completed) from the current poll before the next poll of messages happens?

I have an SQS listener with a max message count of 10. When my consumer receives a batch of 10 messages they all get processed, but sometimes (depending on the message) the processing will take 5-6 hours and sometimes as little as 5 minutes. I have 3 consumers (3 different JVMs) polling from the queue with a maxMessageCount of 10. Here is my issue:
If one of those 10 messages takes 5 hours to process, it seems as though the listener waits to do the next poll of 10 messages until all of the previous messages are 100% complete. Is there a way to allow it to poll a new batch of messages even though another one is still being processed?
I'm guessing that I am missing something little here. I am using the Spring Cloud library and the SqsListener annotation. Has anybody run across this before?
Also, I don't think this should matter, but the queue is AWS SQS and the JVMs are running on an ECS cluster.
If you run the task on the poller thread, the next poll won't happen until the current one completes.
You can use an ExecutorChannel or QueueChannel to hand the work off to another thread (or threads) but you risk message loss if you do that.
Your situation is rather unusual; 5 hours is a long time to process a message.
You should perhaps consider redesigning your application to persist these "long running" requests to a database or similar, instead of processing them directly from the message. Or, perhaps put them in a different queue so that they don't impact the shorter tasks.
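If you do want to try the hand-off approach, here is a rough sketch assuming the spring-cloud-aws-messaging @SqsListener annotation; the queue name and thread-pool size are hypothetical, and the comment flags the message-loss risk mentioned above:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.cloud.aws.messaging.listener.annotation.SqsListener;
import org.springframework.stereotype.Component;

@Component
public class LongTaskListener {

    // Worker pool sized independently of the poller thread (illustrative).
    private final ExecutorService workers = Executors.newFixedThreadPool(10);

    @SqsListener("my-long-task-queue") // hypothetical queue name
    public void onMessage(String body) {
        // Returning immediately frees the poller to fetch the next batch,
        // but the message may be deleted before the work finishes:
        // the message-loss risk mentioned above.
        workers.submit(() -> processForHours(body));
    }

    private void processForHours(String body) {
        // long-running processing goes here
    }
}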

Subscribing to AWS SQS Messages

I have a large number of messages in an AWS SQS queue. These messages are pushed to it constantly by another source, and there is no predictable pattern to how often they arrive. Currently, I poll SQS every second and check whether any messages are available. Is there a better way of handling this, like receiving a notification from SQS or SNS that messages are available, so that I only query SQS when needed instead of constantly polling?
The way to do what you want is to use long polling - rather than constantly polling every second, you open a request that stays open until it either times out or a message comes into the queue. Take a look at the documentation for ReceiveMessageRequest:
ReceiveMessageRequest req = new ReceiveMessageRequest()
        .withQueueUrl(queueUrl)                     // URL of the queue to poll
        .withWaitTimeSeconds(Integer.valueOf(20));  // set long poll timeout to 20 sec
// set other properties on the request as well
ReceiveMessageResult result = amazonSQS.receiveMessage(req);
A common usage pattern for this is to have a background thread running the long poll and pushing the results into an internal queue (such as LinkedBlockingQueue or an ExecutorService) for a worker thread to read from.
PS. Don't forget to call deleteMessage once you're done processing the result so you don't end up receiving it again.
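A rough sketch of that poller/worker pattern with the AWS SDK for Java v1; the queue URL and the processing logic are placeholders:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class LongPollPipeline {
    public static void main(String[] args) {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        BlockingQueue<Message> buffer = new LinkedBlockingQueue<>();

        // Poller thread: long-polls SQS and hands messages to the internal buffer.
        Thread poller = new Thread(() -> {
            while (true) {
                ReceiveMessageRequest req = new ReceiveMessageRequest(queueUrl)
                        .withWaitTimeSeconds(20)
                        .withMaxNumberOfMessages(10);
                buffer.addAll(sqs.receiveMessage(req).getMessages());
            }
        });

        // Worker thread: processes each message and only then deletes it.
        Thread worker = new Thread(() -> {
            while (true) {
                try {
                    Message m = buffer.take();
                    System.out.println("processing " + m.getMessageId()); // your logic here
                    sqs.deleteMessage(queueUrl, m.getReceiptHandle());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        poller.start();
        worker.start();
    }
}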
You can also use the worker functionality in AWS Elastic Beanstalk. It allows you to build a worker to process each message, and when you use Elastic Beanstalk to deploy it to an EC2 instance, you can define it as subscribed to a specific queue. Each message will then be POSTed to the worker, without you needing to call receive-message on the queue yourself.
It makes your system wiring much easier, and you can add auto scaling rules to spawn multiple workers to handle more messages during peak load and scale back down to a single worker when load is low. It will also delete the message automatically if your worker responds with 200 OK.
See more information about it here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html
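As a sketch of what such a worker endpoint might look like, here is a minimal HTTP handler using the JDK's built-in server. The port and path are illustrative and would be configured in the worker environment; per the description above, the worker daemon POSTs each message body and deletes the message when it gets a 200 back:

import java.io.IOException;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpServer;

public class BeanstalkWorker {
    public static void main(String[] args) throws IOException {
        // Listen on the port the worker environment is configured to POST to (illustrative).
        HttpServer server = HttpServer.create(new InetSocketAddress(5000), 0);

        server.createContext("/", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            handle(new String(body));                 // your processing logic
            exchange.sendResponseHeaders(200, -1);    // 200 => the daemon deletes the message
            exchange.close();
        });

        server.start();
    }

    private static void handle(String messageBody) {
        System.out.println("received: " + messageBody);
    }
}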
You could also have a look at Shoryuken and the property delay:
delay: 25 # The delay in seconds to pause a queue when it's empty
But to be honest we use delay: 0 here; the cost of SQS is low:
First 1 million Amazon SQS Requests per month are free
$0.50 per 1 million Amazon SQS Requests per month thereafter ($0.00000050 per SQS Request)
A single request can have from 1 to 10 messages, up to a maximum total payload of 256KB.
Each 64KB ‘chunk’ of payload is billed as 1 request. For example, a single API call with a 256KB payload will be billed as four requests.
You will probably spend less than 10 dollars a month polling for messages every second, 24x7, from a single host: one request per second is roughly 2.6 million requests a month, and after the first million free requests that works out to only a dollar or two at $0.50 per million.
One of the advantages of Shoryuken is that it fetches in batches, so it saves some money compared with fetch-per-message solutions.