My understanding of the size limit on the message queue of an MFC thread comes from the explanation on the PostThreadMessage page of MSDN.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms644946%28v=vs.85%29.aspx
As stated there, the limit by default is 10000 messages. I am trying to understand exactly what this limit means. I see it being one of two things.
Scenario A
I have a GUI that is handling messages. The rate at which messages are being placed in the queue is greater than the rate at which they are being pulled off the queue and handled. In this case messages accumulate; eventually there are 10000 messages on the queue, and when another message tries to join the queue, the post fails.
Scenario B
I have a GUI that is handling messages. The rate at which messages are being placed in the queue is less than the rate at which they are being pulled off the queue and handled. Messages do not accumulate on the queue. But after my queue has seen 10000 messages, it is rendered useless, so effectively my message queue has a limited operational life.
The more I think about it, the answer should be Scenario A... but stranger things have happened.
From the linked article: GetLastError returns ERROR_NOT_ENOUGH_QUOTA when the message limit is hit. So every attempt to send or post a message while the queue is full fails; that's all.
Generally, destination thread handles the messages and removes them from the queue. PeekMessage with PM_NOREMOVE flag allows to handle the message without removing it. For reference, PeekMessage function: https://msdn.microsoft.com/en-us/library/windows/desktop/ms644943%28v=vs.85%29.aspx
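The difference between the two scenarios can be shown with a toy bounded queue: posting fails only while the queue is at capacity, and works again as soon as the consumer drains messages, so the queue is never "used up". This is a minimal Python sketch of Scenario A, not the actual Win32 implementation:

```python
from collections import deque

class ThreadMessageQueue:
    """Toy model of a per-thread message queue with a capacity limit."""
    def __init__(self, limit=10000):
        self.limit = limit
        self.queue = deque()

    def post(self, msg):
        # Mimics PostThreadMessage: returns False only while the queue is
        # full (the Win32 GetLastError() == ERROR_NOT_ENOUGH_QUOTA case).
        if len(self.queue) >= self.limit:
            return False
        self.queue.append(msg)
        return True

    def get(self):
        # Mimics GetMessage: removes and returns the oldest message.
        return self.queue.popleft() if self.queue else None

q = ThreadMessageQueue(limit=3)
assert all(q.post(i) for i in range(3))   # fill to capacity
assert q.post(99) is False                # Scenario A: post fails while full
q.get()                                   # consumer handles one message
assert q.post(99) is True                 # posting works again: not Scenario B
```

Once a slot frees up, the same queue accepts messages again, which is exactly why Scenario B (a limited operational life) is the wrong reading.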
Related
I have a queue which is supposed to receive the messages sent by a lambda function. This function is supposed to send each different message once only. However, I saw a scary amount of receive count on the console:
Since I cannot find any explanation of receive count in plain English, I need to consult the StackOverflow community. I have two theories to verify:
There are actually not that many messages, and the reason the "receive count" is so high is simply that I polled the messages for a long time, so the same messages were captured more than once;
The function that sends the messages to the queue is SQS-triggered, so those messages might be processed by multiple consumers. Though I have already set VisibilityTimeout, are messages that have been processed going to be deleted? If they are not removed, there is no reason for them to be caught and processed a second time.
Any debugging suggestion will be appreciated!!
So, receive count is basically the number of times the lambda (or any other consumer) has received the message. A consumer can receive a message more than once (this is by design, and you should handle that in your logic).
That being said, the receive count also increases if your lambda fails to process the message (or hits the execution limits). The default is 3 attempts, so if something is wrong with your lambda, you will have at least 3 receives per message.
Also, when you poll the messages via the AWS console, you are increasing the receive count as well.
SQS sometimes stops receiving messages or allowing message consumption, then resumes after ~5 mins. Do you know if there is a setting that can produce this behavior? I was playing around with the settings but could not change this behavior.
Note: when I send a message, I get the ID and an OK as if it was received, but the message is not in the queue.
If you are getting an ID back and the message is not in the queue, I believe you are using a FIFO queue, which ignores duplicate messages within a default time frame (5 minutes). Whatever is feeding the queue needs to use a proper deduplication ID if you want to process duplicate messages.
Read this
Imagine the following lifetime of an Order.
Order is Paid
Order is Approved
Order is Completed
We chose to use an SQS FIFO queue to ensure all these messages are processed in the order they are produced, to ensure, for example, that an order's status changes to Approved only after it was Paid, and not after it has been Completed.
But let's say there is an error while trying to Approve an order, and after several attempts the message is moved to the dead-letter queue.
The problem we noticed is that the subsequent message, "Order is Completed", is processed even though the previous message, "Order is Approved", is sitting in the dead-letter queue.
How we should handle this?
Should we check the dead-letter queue for messages with the same MessageGroupId as the one being consumed, assuming we could do this?
Is there a mechanism that we are missing?
Sounds to me like you are using a single queue for multiple types of events, where I would probably recommend (at least) three separate queues:
An order paid event queue
An order approved event queue
An order completed event queue
When an order payment comes in, an event is put into the first queue. Once your system has successfully processed that payment, it removes the item from the first queue (deletes the message) and then inserts an 'Order Approved' event into the second queue.
The process responsible for those events only watches that queue and does what it needs to do; once complete, it deletes the message and inserts a third message into the third queue, so that yet another process can see and act on that message, process it, and then delete it.
If anything fails along the way, the message will eventually end up in a dead letter queue (either the same one, or one per queue; that makes no difference), but nothing that was supposed to happen AFTER the failed event would happen.
It doesn't even sound to me like you need a FIFO queue at all in this case, though there is no real harm (except for the slightly higher cost and lower throughput limits).
Source from AWS https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html:
Don't use a dead-letter queue with a FIFO queue if you don't want to break the exact order of messages or operations. For example, don't use a dead-letter queue with instructions in an Edit Decision List (EDL) for a video editing suite, where changing the order of edits changes the context of subsequent edits.
I want to process messages from an Amazon SQS Dead Letter Queue.
What is the best way to process them?
1. Receive messages from the dead letter queue and process them.
2. Receive messages from the dead letter queue, put them back in the main queue, and then process them.
I just need to process messages from dead letter queue once in a while.
After careful consideration of the various options, I am going with option 2 you mentioned: receive messages from the dead letter queue, put them back in the main queue, and then process them.
Make sure that while transferring the messages from one queue to the other, no messages are lost.
Before putting messages from the DLQ back into the main queue, make sure that the errors faced in the main listener (mainly coding errors, if any) are resolved, and that any network issues are resolved.
The listener of the main queue has already retried the message and will be retrying it again. So make sure to skip the already-successful steps of message processing when a message is retried, and to revert successfully processed steps in case of any errors. (This will help with message retries as well.)
DLQ is meant for unexpected errors. So you may have an on-demand job for doing this.
Presumably the message ended up in the Dead Letter Queue for a reason, after failing several times.
It would not be a good idea to put it back in the main queue because, presumably, it would fail again and you would create an infinite loop.
Initially, dead messages should be examined manually to determine the causes of failure. Then, based on this information, an alternate flow could be developed.
I have a program that has a thread that generates Expose messages using XSendEvent. A second thread receives the Expose messages along with other messages (mainly input handling). The problem is that the sending thread sends the Expose messages at a constant rate (~60Hz), but the receiving thread may be rendering more slowly than that. The X11 queue gets bogged down with extra Expose messages, and any input-handling messages start to fall way behind all those extra Expose messages.
In Windows, this is not a problem because Windows will automatically coalesce all WM_PAINT messages into a single message. Is there any way to do this in X11, or some other way to solve this problem?
You can very easily coalesce any kind of event yourself with XCheckTypedEvent() and friends.
I was able to solve this problem as follows:
Block the rendering thread using XPeekEvent.
When an event comes in, read all pending events into a new queue data structure using a combination of XPending and XNextEvent, but copy only the first Expose message.
Then run the event processing loop over the new queue data structure.
This fixed the problem for me, but I think a solution that uses XCheckTypedEvent (per n.m.'s answer here) is probably more elegant.
A few things you can do:
If you are doing a complete redraw for each event, only act on Expose events with a count of 0; an event with count > 0 is the redraw of a particular rectangle, with more Expose events still to follow
If you generate Expose events for only part of the window, this will reduce the amount of work each Expose event does
The constant rate means you could just process every nth event, or keep the time since the last event and ignore events received within a given interval
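The time-based variant can be sketched as a small throttle that ignores events arriving within a minimum interval of the last one handled (the 0.03 s interval here is a hypothetical choice for roughly 30 Hz):

```python
def make_throttle(min_interval):
    """Return a predicate: should the event arriving at time t be handled?"""
    last = [None]                 # time of the last handled event
    def should_handle(t):
        if last[0] is not None and t - last[0] < min_interval:
            return False          # too soon after the last handled event
        last[0] = t
        return True
    return should_handle

throttle = make_throttle(min_interval=0.03)
# Expose events arriving at ~60 Hz: every other one is dropped.
handled = [t for t in (0.000, 0.016, 0.033, 0.050, 0.066) if throttle(t)]
assert handled == [0.000, 0.033, 0.066]
```

In the real event loop the timestamps would come from the Expose events (or a monotonic clock), and dropped events simply trigger no redraw.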