Does MSMQ have "lock until expire" functionality similar to Amazon SQS?

I've been using AWS SQS, which has a nice feature: when a message is claimed from the queue, it is locked for a period of time. During this lock, if the message is processed successfully it is marked as completed. If processing fails (and no response is received from the message processor), the lock expires after a period of time and the message becomes available for another processor to pick up.
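For reference, SQS calls this lock the visibility timeout. A rough boto3 sketch of the cycle I'm describing (queue_url and process() are placeholders):

    import boto3

    sqs = boto3.client("sqs")

    resp = sqs.receive_message(
        QueueUrl=queue_url,          # placeholder
        MaxNumberOfMessages=1,
        VisibilityTimeout=30,        # the "lock" duration, in seconds
        WaitTimeSeconds=20,          # long poll
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])         # placeholder for the real work
        # only a successful run reaches the delete; otherwise the message
        # becomes visible again once the visibility timeout expires
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])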
Now I have a requirement to use queues outside of SQS (mostly for latency reasons, but potentially for cost reasons too). I'm really looking for a queue provider that has the same characteristic. MSMQ would be the obvious choice for me, since it's already installed and we use it elsewhere, but I can't find any functionality that handles failed messages in the same way.
Does MSMQ allow for this, or is there an easy way to replicate it?
Alternatively, is there another lightweight, open-source messaging service that does?

MSMQ does this already. If you read a message within a transaction and the transaction aborts, the message will reappear in the queue.
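As an illustration only, here is a rough sketch of that pattern through MSMQ's COM interface from Python via pywin32 (the queue path, constants, and handle() are assumptions; in .NET you would use System.Messaging with a MessageQueueTransaction instead):

    import win32com.client  # pywin32

    qinfo = win32com.client.Dispatch("MSMQ.MSMQQueueInfo")
    qinfo.PathName = ".\\private$\\orders"   # assumed transactional queue
    queue = qinfo.Open(1, 0)                 # MQ_RECEIVE_ACCESS, MQ_DENY_NONE

    dispenser = win32com.client.Dispatch("MSMQ.MSMQTransactionDispenser")
    tx = dispenser.BeginTransaction()
    try:
        # args: transaction, want destination queue, want body, timeout (ms)
        msg = queue.Receive(tx, False, True, 1000)
        if msg is not None:
            handle(msg.Body)                 # placeholder for your processing
        tx.Commit()                          # message is removed for good
    except Exception:
        tx.Abort()                           # message reappears in the queue
        raise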

Managing SQS Queue Manually vs Lambda Trigger

I'm not sure if this would be better served on ServerFault or Software Engineering; I'm willing to move this post if appropriate.
We somewhat recently started to move some of our data processing pipeline to use queues to manage individual bits of data, whereas previously we had timed lambdas that would pull all data since the last change.
While making this change, we noticed that queues didn't work quite as we had anticipated. First of all, we thought Lambda would just pull items off the queue as the lambdas had availability. Instead, it seems the AWS-managed Lambda trigger grabs a chunk of messages (up to ten) and throws it at the Lambda service. If Lambda doesn't have availability, the message gets throttled, then replayed after a backoff time, up to our configured replay ("error") limit (five). After that, it's thrown into our dead-letter queue.
We see a handful of messages per day end up in the dead-letter queue as a result of throttling. We then throw these back into the main queue (we have a process that does so every few hours). However, we weren't 100% sure throttling was the reason things were being pushed over, since nothing indicates why the messages are moved; we just assumed as much because we weren't getting any error logs for those messages. We contacted Amazon support to ask about this, and they were able to confirm the messages were in fact "errored" as a result of throttling.
We asked further about their recommendations for this - it must be a common problem, right? They first suggested upping our replay limit, which seemed an obvious no-go: replays occur for any failure, so that would just hammer our lambdas with bad requests when they came through. We also asked if there's any way to differentiate the errors, because we don't care about throttling - we'd happily let those retry a dozen times if needed - but no. The other suggestion they had was to manage the queue ourselves from our lambdas: build our own code within our lambdas to pull messages and then delete them after processing. This seems really counter-intuitive, though - why would every AWS consumer build their own infrastructure?
So I guess my question is: is this what others are doing? Are you using the built-in Lambda triggers? Are you writing your own code for managing queue consumption? Do you see this sort of throttling, or is there anything we could do differently? Are there any differences with other services for managing this?
Best practice is to handle errors in your code and manually delete messages that have succeeded. That allows you to handle poison messages without reprocessing the good messages again. Throttles shouldn't be ending up in a DLQ that often. This video from re:Invent 2020 has a good explanation of how this works: Scalable serverless event-driven architectures with SNS, SQS & Lambda. Start at about the 20-minute mark to get into SQS error handling.
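A minimal sketch of that pattern, assuming the queue URL is known to the function (process() is a placeholder): each record that succeeds is deleted by hand, and the handler raises at the end so only the undeleted (poison) messages come back.

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

    def handler(event, context):
        failures = []
        for record in event["Records"]:
            try:
                process(record["body"])              # placeholder
                sqs.delete_message(QueueUrl=QUEUE_URL,
                                   ReceiptHandle=record["receiptHandle"])
            except Exception:
                failures.append(record["messageId"])
        if failures:
            # failing the invocation redelivers only the messages
            # we did not delete above
            raise RuntimeError(f"failed records: {failures}")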

What percentage of SQS messages are delivered at least once?

I understand that standard SQS uses "at least once" delivery, while FIFO messages are delivered exactly once.
What percentage (roughly) of SQS messages will be duplicated? This seems like an important factor when weighing standard queues vs FIFO. I wonder if it depends on message throughput?
Amazon does not provide any detailed number (even a ballpark one) in answer to your question.
"On rare occasions" is the best I can find:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html
Based on Amazon's explanation of why this can happen, I think it is unrelated to your message throughput. You should consider it an "expected" AWS platform glitch. It will not be an issue as long as your message handler is idempotent.
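A minimal sketch of one way to get that idempotency, assuming a DynamoDB table keyed on a per-message ID (the table and attribute names are made up):

    import boto3
    from botocore.exceptions import ClientError

    ddb = boto3.client("dynamodb")

    def handle_once(message_id, body):
        try:
            # the conditional put fails if this message ID was seen before
            ddb.put_item(
                TableName="processed-messages",      # hypothetical table
                Item={"message_id": {"S": message_id}},
                ConditionExpression="attribute_not_exists(message_id)",
            )
        except ClientError as e:
            if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
                return  # duplicate delivery; ignore
            raise
        process(body)  # placeholder for the real work

Note this claims the ID before processing; if processing itself can fail, you would record success afterwards instead, at the cost of a small double-processing window.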
The SQS documentation says that a duplicated message can occur if one of the nodes hosting SQS goes down and cannot receive the delete request.
Based on that, you would see a fairly low number of duplicated messages. If your application cannot tolerate duplicates, then you probably want to use a FIFO queue.
I think the question you should be asking is, "Is my process idempotent to handle duplicate messages?"
If not, make your process idempotent and use a standard SQS queue.
If yes, use a standard SQS queue.
You can always use an SQS FIFO queue, but that will make your application code "incompatible" with other queue systems that do not support such functionality.

Kinesis Producer callback functions - guaranteed delivery?

We're streaming billions of messages a day to Kinesis.
We're looking for an implementation that would allow us to deliver messages to Kinesis with exactly-once guarantee.
Our producer framework requires a streaming sink to be idempotent for an exactly-once delivery guarantee, which Kinesis is not, so we're currently getting at-least-once delivery. (Duplicates are possible, and we do see them when a streaming micro-batch has to restart for whatever reason on the producer side.)
We started looking at the Kinesis Producer Library (KPL) callback functions. Basically, we would track in DynamoDB which messages were delivered and which were not, based on a key that's present in each message. If we know a message was already sent, we would skip the delivery re-attempt. Then it seems exactly-once is possible, with two concerns:
1)
The only question we have: how likely is it that we would lose an invocation of the callback function (e.g. a network glitch), or that the callback function itself fails (e.g. we ran into a DynamoDB limit or outage)? Is this documented somewhere? I know the chances are not high, but we want to design a system that is resilient to expected things like these.
2)
Timing. Let's say that for whatever reason Kinesis invoked a callback function with a delay (5-15 milliseconds would be enough to break some assumptions in the callback functions above that persist delivery state in DynamoDB). While we haven't yet received a confirmation of the delivery, our streaming producer framework has attempted a redelivery of a message it thinks wasn't yet delivered. Any workarounds for this potential issue?
P.S. We know that one workaround is to deduplicate on the application side (the receiver of that Kinesis stream), but that's outside of our project, and we have a hard requirement to get exactly-once into that Kinesis stream.
For #1, any path you go down, you'll find yourself in edge cases that could lead to loss of data or duplicate calls. Even using a two-phase commit protocol doesn't work here if the consumer isn't participating in that protocol.
For #2, Kinesis is ordered, so if you do get duplicates you should be able to reliably assume they will be on the same shard, and thus not processed while another reader is still processing (assuming one reader per shard). Just make sure you are using a strongly consistent read when calling DynamoDB.
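A rough sketch of that check, with the strongly consistent read mentioned above (the table and key names are made up):

    import boto3

    ddb = boto3.client("dynamodb")
    TABLE = "kinesis-delivery-state"   # hypothetical table

    def already_sent(message_key):
        resp = ddb.get_item(
            TableName=TABLE,
            Key={"message_key": {"S": message_key}},
            ConsistentRead=True,       # avoid stale reads when deciding to skip
        )
        return "Item" in resp

    def mark_sent(message_key):
        # to be called from the KPL success callback once Kinesis acks the record
        ddb.put_item(TableName=TABLE, Item={"message_key": {"S": message_key}})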

Chat bots: ensuring serial processing of messages on a per-conversation basis in clustered environment

In the context of writing a Messenger chat bot in a cloud environment, I'm facing some concurrency issues.
Specifically, I would like to ensure that incoming messages from the same conversation are processed one after the other.
As a constraint, I'm processing the messages with workers in a cloud environment (i.e. the worker pool is of variable size, and worker instances are potentially short-lived and may crash). Also, low latency is important.
So abstracting a little, my requirements are:
I have a stream of incoming messages
each of these messages has a 'topic key' (the conversation id)
the set of topics is not known ahead-of-time and is virtually infinite
I want to ensure that messages of the same topic are processed serially
on a cluster of potentially ephemeral workers
if possible, I would like reliability guarantees, e.g. making sure that each message is processed exactly once.
My questions are:
Is there a name for this concurrency scenario?
Are there technologies (message brokers, coordination services, etc.) which implement this out of the box?
If not, what algorithms can I use to implement this on top of lower-level concurrency tools? (distributed locks, actors, queues, etc.)
I don't know of a widely-accepted name for the scenario, but a common strategy to solve that type of problem is to route your messages so that all messages with the same topic key end up at the same destination. A couple of technologies that will do this for you:
With Apache ActiveMQ, HornetQ, or Apache ActiveMQ Artemis, you could use your topic key as the JMSXGroupID to ensure all messages with the same topic key are processed in order by the same consumer, with failover
With Apache Kafka, you could use your topic key as the partition key, which will also ensure all messages with the same topic key are processed in-order by the same consumer
Some message broker vendors refer to this requirement as Message Grouping, Sticky Sessions, or Sticky Message Load Balancing.
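For example, with Kafka this is just a matter of producing with the conversation ID as the message key. A sketch using the kafka-python client (the broker address and topic name are placeholders):

    from kafka import KafkaProducer  # kafka-python

    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    def publish(conversation_id, payload):
        # the same key always hashes to the same partition, so one consumer
        # sees all of a conversation's messages, in order
        producer.send("chat-messages",
                      key=conversation_id.encode("utf-8"),
                      value=payload.encode("utf-8"))
        producer.flush()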
Another common strategy on messaging systems with weaker delivery/ordering guarantees (like Amazon SQS) is to simply include a sequence number in the message and leave it up to the destination to resequence and request redelivery of missing messages as needed.
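A toy sketch of that resequencing idea, applied per topic key (the redelivery request itself is left out):

    class Resequencer:
        """Deliver messages to a handler in sequence order, buffering gaps."""

        def __init__(self, handler, first_seq=0):
            self.handler = handler
            self.expected = first_seq
            self.buffer = {}   # seq -> message, for out-of-order arrivals

        def on_message(self, seq, msg):
            self.buffer[seq] = msg
            while self.expected in self.buffer:
                self.handler(self.buffer.pop(self.expected))
                self.expected += 1

        def missing(self):
            # sequence numbers to request redelivery for
            top = max(self.buffer, default=self.expected)
            return [s for s in range(self.expected, top) if s not in self.buffer]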
I think you can fix this by using a queue and a set. The idea is to put every message object in a queue and process them first in, first out. While adding a message to the queue, add its topic name to a set, and when taking it out for processing, remove the topic name from the set.
So if a topic is already in the set, don't add another message object of the same topic to the queue (see the sketch below).
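A minimal in-process sketch of that queue-plus-set idea (releasing the topic only after processing finishes rather than on dequeue, so same-topic messages stay strictly serial; arrivals for a busy topic are parked until it frees up):

    import queue
    import threading

    class TopicSerialDispatcher:
        def __init__(self):
            self.q = queue.Queue()   # FIFO of (topic, message) ready to run
            self.in_flight = set()   # topics currently queued or processing
            self.pending = {}        # topic -> messages parked behind one in flight
            self.lock = threading.Lock()

        def submit(self, topic, msg):
            with self.lock:
                if topic in self.in_flight:
                    self.pending.setdefault(topic, []).append(msg)
                else:
                    self.in_flight.add(topic)
                    self.q.put((topic, msg))

        def done(self, topic):
            with self.lock:
                waiting = self.pending.get(topic)
                if waiting:
                    self.q.put((topic, waiting.pop(0)))  # topic stays claimed
                    if not waiting:
                        del self.pending[topic]
                else:
                    self.in_flight.discard(topic)

    def worker(d):
        while True:
            topic, msg = d.q.get()
            try:
                print(f"processing {topic}: {msg}")  # placeholder for real work
            finally:
                d.done(topic)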
I hope this will help you. All the best :)

When to use delay queue feature of Amazon SQS?

I understand the concept of Amazon SQS delay queues, but I wonder why they are useful.
What's the usage of SQS delay queue?
Thanks
One use case I can think of is in distributed applications that have eventual-consistency semantics. The system consuming the message may have a dependency, like a correlation identifier, that needs to be available, and hence may need to wait a guaranteed duration of time before seeing the correlation data. In this case, it makes sense for the message to be delayed for a certain duration.
Like you, I was confused about the use case for delay queues until I stumbled across one in my own work. My application needs an internal queue where each item waits at least one minute between checks for completion.
So instead of having to manage a "last-checked-time" on every object, I just shove the object's ID into an SQS queue message with a delay time of 60 seconds, and my main loop then becomes a simple long poll against the queue.
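In code it is just the DelaySeconds parameter plus a long poll. A rough sketch of that loop (queue_url and check_completion() are placeholders):

    import boto3

    sqs = boto3.client("sqs")

    def schedule_check(object_id):
        # invisible to consumers for 60 seconds after sending
        sqs.send_message(QueueUrl=queue_url, MessageBody=object_id, DelaySeconds=60)

    while True:
        resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            object_id = msg["Body"]
            if not check_completion(object_id):  # placeholder
                schedule_check(object_id)        # not done yet; wait another minute
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])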
A few off the top of my head:
Emails - Let's say you have a service that sends reminder emails triggered from queue messages. You'd enqueue the message with a delay in that case, so it only becomes visible when the reminder is due.
Race conditions - Delivery delays can be used to overcome race conditions in distributed systems. For example, a service could insert a row into a table and send a message about its availability to other services. If they can't use the new entry just yet, you can delay the SQS message.
Handling retries - Sometimes when a message fails you want to retry with exponential backoff. This requires re-enqueueing the message with progressively longer delays (see the sketch after this list).
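A sketch of that retry case (the attribute name is made up; note that SQS caps DelaySeconds at 900, i.e. 15 minutes):

    import boto3

    sqs = boto3.client("sqs")

    def requeue_with_backoff(queue_url, body, attempt):
        delay = min(2 ** attempt, 900)   # 2s, 4s, 8s, ... capped at 15 minutes
        sqs.send_message(
            QueueUrl=queue_url,
            MessageBody=body,
            DelaySeconds=delay,
            MessageAttributes={
                "attempt": {"DataType": "Number", "StringValue": str(attempt)}
            },
        )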
I've built a suite of APIs to make queue message scheduling easy. You can call our APIs to schedule queue messages, and to cancel, edit, and check the status of such messages. Think of it like a scheduler microservice.
www.schedulerapi.com
If you are looking for a solution, let me know. I've built these schedulers before at work for delivering emails at high scale, so I have experience with similar use cases.
One use case can be:
Think of a time-critical operation like a scheduled equity trade order.
Suppose one of your systems fetches all the orders scheduled in the next 60 minutes and puts them in a queue (to be fetched by another subsystem).
If you send these orders directly, they will be visible in the queue immediately and processed in whatever order they arrive.
But most likely they will not execute at the exact time (hour:minute:second) the customer wanted, and this will impact the outcome.
So to solve this, the first subsystem adds delay seconds (the difference between the current time and the execution time), so each message only becomes visible after that much delay, i.e. at the exact time the user wanted.
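A minimal sketch of that delay computation (queue_url is a placeholder; note that SQS's DelaySeconds maximum is 900 seconds, so a full 60-minute horizon would need a shorter fetch window or re-enqueueing in hops):

    import time
    import boto3

    sqs = boto3.client("sqs")

    def enqueue_order(queue_url, order_body, execute_at_epoch):
        # difference between the execution time and now, in whole seconds
        delay = max(0, int(execute_at_epoch - time.time()))
        sqs.send_message(
            QueueUrl=queue_url,
            MessageBody=order_body,
            DelaySeconds=min(delay, 900),  # SQS caps delays at 15 minutes
        )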