I had a typical SDL event loop calling SDL_WaitEvent, and ran into a much-discussed issue (see here and here) where my application was not able to re-draw during a resize because SDL_WaitEvent doesn't return until a resize is finished on certain platforms (Win32 & Mac OS). In each of these discussions, the technique of using SDL_SetEventFilter to get around it is mentioned and more or less accepted as a solution and a hack.
Using the SDL_SetEventFilter approach works perfectly, but now I'm looking at my code and realizing I've practically moved all the code from my SDL_WaitEvent loop into my event filter, and I'm just handling events there.
Architecturally it's fishy as heck.
Are there any gotchas with this approach of dispatching messages to my application from the function set by SDL_SetEventFilter, besides the possibility of being called on a separate thread?
Bonus question: How is SDL handling this internally? From what I understand, this resize issue is rooted in the underlying platform. For example, Win32 will issue a WM_SIZING and then enter its own internal message pump until WM_SIZE is issued. What is triggering the SDL EventFilter to run?
Answering my own question after more experimentation and sifting through the source.
The way SDL handles events is that when you call SDL_WaitEvent/SDL_PeekEvent/SDL_PeepEvents, it pumps Win32 until there are no messages left. During that pump, it processes the Win32 messages and turns them into SDL events, which it queues to return after the pump completes.
The way win32 handles move/resize operations is to go into a message pump until moving/resizing completes. It's a regular message pump, so your WndProc is still invoked during this time. You'll get a WM_ENTERSIZEMOVE, followed by many WM_SIZING or WM_MOVING messages, then finally a WM_EXITSIZEMOVE.
Together, these two things mean that when you call any of the SDL event functions and Win32 enters a move/resize operation, you're stuck until dragging completes.
The way the event filter gets around this is that it is called as part of the WndProc itself. This means messages don't need to be queued up and handed back to you at the end of SDL_PeekEvent/SDL_WaitEvent/SDL_PeepEvents; you get them handed to you immediately, as part of the pumping.
In my architecture, this fits perfectly. YMMV.
I need to pump COM messages while waiting for an event in order to fix a deadlock. It's better to pump as few messages as possible, just enough to process that COM call. The best candidate for this role is CoWaitForMultipleHandles, but starting from Vista it pumps WM_PAINT in addition to COM messages. Pumping WM_PAINT is too dangerous for me from a re-entrancy perspective, and I don't want to install a custom shim database as a solution to this problem.
I'm trying to pump COM messages sent to the hidden message-only window manually.
I have found two ways to get HWND of the hidden window:
((SOleTlsData *) NtCurrentTeb()->ReservedForOle)->hwndSTA, using ntinfo.h from .NET Core. This seems to be undocumented and is not a reliable solution in terms of future changes.
Find the window of OleMainThreadWndClass, as suggested in this question. The problem is that CoInitialize does not create the window; it is created later, on the first cross-apartment call, which may or may not happen in my application. Running the search loop every time I need the HWND is bad from a performance perspective, but caching the HWND seems impossible because I don't know when it's created.
Is there a way to determine if the hidden window is created for the current apartment? I suppose it will be cheaper than the loop and then I could find and cache HWND.
Is there a better way to pump COM messages without pumping WM_PAINT?
Update: you can force the window's creation by calling CoMarshalInterThreadInterfaceInStream for any interface, then calling CoReleaseMarshalData to release the stream pointer. This is what I ended up doing, along with the search for OleMainThreadWndClass.
WM_PAINT is generated when there is no other message in the message queue and you call GetMessage or PeekMessage.
But WM_PAINT is only delivered to your window procedure if you dispatch it, and no new WM_PAINT message is generated until the window is invalidated again.
So it is up to you whether you dispatch a WM_PAINT message or not. But be aware that there are other sources of re-entrancy, such as WM_TIMER messages.
The details about this are in the docs for WM_PAINT.
From my point of view, the best solution would be to put your application into a "wait" mode that can handle WM_PAINT even in this intermediate waiting state. You know when you are re-entered: it is always after a WM_PAINT or a similar message that arrives like other input messages, so I don't see any problems here. An STA has one thread, and you always process a message to completion until you call GetMessage, launch a modal dialog, or show a MessageBox; while you are inside a message handler, nothing will disturb you.
Another solution might be to wait for the event on a second thread. That thread need not have any windows, and it can translate the event into anything you need in your application.
Your question may not contain enough information about how this deadlock really appears, so this answer may not be sufficient.
After writing all this, I tend toward the opinion that this is an XY problem.
We have an application built on an older framework (Qt 3.3.5) that we would prefer not to have to attempt to upgrade to recognize touch events.
We recently upgraded it from win32-msvc2005 to win32-msvc2013. For the most part this has gone just fine, but tablets (for testing purposes, a Surface Pro on Windows 8.1) that sent mouse events when compiled against 2005 now send only WM_POINTER events when the touch screen is used in the 2013 compiled application.
I have been completely unable to find a way to get Windows to send me mouse events for touchscreen input again. My research implies that if I register for WM_TOUCH events I should also get mouse events (as indicated by the many people on the internet who are angry about getting mouse events alongside their WM_TOUCH events), but my supposedly successful calls to RegisterTouchWindow don't seem to actually enable WM_TOUCH events (or any mouse events), and I am still receiving pointer events.
I feel like I must be missing something obvious, especially since I can't even get WM_TOUCH events (which I don't want, but which would supposedly get me mouse events), but whatever it is continues to elude me. Presumably the cause would have to be RegisterTouchWindow not being called for the specific hwnd I am actually touching on the screen, but I've gone so far as to call RegisterTouchWindow upon seeing any WM_POINTERUPDATE event, for the very hwnd that spawned the event, and to log the result of that call: it returns true, so that seems impossible as the cause.
I'm also calling DefWindowProc on all WM_TOUCH/GESTURE/POINTER events, which is the only other thing the internet seems to think might be necessary for events to bubble into more basic events correctly. The framework does not call RegisterRawInputDevices and does not attempt to handle WM_INPUT events (which it wouldn't receive anyway thanks to not being registered for raw input). Any events not handled explicitly should fall through to a DefWindowProc call.
Is there even a way that older applications like ours can move to a newer msvc without going through the pain of teaching the framework to correctly handle the various touch protocols? How does an application that worked just fine on msvc2005, using the built-in Windows touch-to-mouse event conversion, get that functionality back on msvc2013?
I'd like to understand how callback functions in a windowing application (like FreeGLUT, GLFW) work.
How many times per second do they check for keyboard/mouse/resize events?
Does it depend on the frame rate? Is it constant, or is it operating-system specific?
Speaking generally, without getting into specifics for Unix or Windows implementations, callbacks are invoked from a main event loop which looks roughly like this:
Loop forever {
    Get a message from the event queue.
    Process the message.
}
The "Get a message" stage may sleep briefly while it waits for a message to appear in the queue, probably less than a millisecond at a time. The event queue will contain every message relevant to the application, including things like mouse button presses, mouse motion events, keyboard events, and window events like resize and expose.
The "Process the message" step will take an event and dispatch it to whatever is relevant for the event. So for example, a mouse click might result in the callback for a Button widget being called. Or if your OpenGL drawing area has an input handler callback set up, the mouse click would result in that function being called.
Here are a couple of resources to learn more about the process:
For Windows: http://en.wikipedia.org/wiki/Message_loop_in_Microsoft_Windows
For X/Motif: http://www.unix.com/man-page/all/3x/XtAppMainLoop/
If you want to see the specific steps along the way (there are many), you might try setting a breakpoint in a function you're interested in, such as your main OpenGL draw routine or an input callback function. Then the call stack will show you how you got there.
I'm writing a Win32 OpenGL application for painting where it is critical that all mouse movement is handled. As it happens, sometimes the painting operation in my program is not able to perform in real time, which is fine for me, as long as all mouse events are queued and can be handled later. Now I would have thought this would simply be a matter of calling PeekMessage and making sure to process all events, but when I do, it is apparent that the mouse movements my application receives are not of the same fidelity as those displayed by Windows.
Is this a feature of Windows? Are mouse events dropped when the application is labor-intensive? Or am I missing something? In either case, what can I do to remedy the situation? I would like to avoid multi-threading, partly because, as I understand it, Win32 requires the message callback to be on the main thread, and I'm not sure about separating the OpenGL stuff into a different context.
And as for code example, I am essentially using the template code in the link below. The message I'm checking for is WM_MOUSEMOVE.
http://nehe.gamedev.net/tutorial/creating_an_opengl_window_(win32)/13001/
Is this a feature of Windows? Are mouse events dropped when the application is labor-intensive?
Yes, this is a feature. WM_MOUSEMOVE messages are not dropped; they are synthesized. In other words, they are not actually posted to the message queue. That wouldn't work very well in practice: a user could generate a great many mouse moves in a second and rapidly fill the message queue to capacity while your program is busy.
You get a WM_MOUSEMOVE message when the mouse has moved since the last time you called GetMessage(), and you get the last known position. So the rate at which you get them, and the number of pixels between them, depend directly on how often you call GetMessage().
An alternative is to use raw input.
WM_MOUSEMOVE is special in that it isn't queued; it's automatically generated as needed when the message queue is empty. (WM_PAINT and WM_TIMER behave the same way.)
Raymond Chen suggests using GetMouseMovePointsEx if you need additional mouse input data.
Additional reading:
Why do I get spurious WM_MOUSEMOVE messages?
Paint messages will come in as fast as you let them
Is there a way to hook a particular Windows message without subclassing the window?
There is WH_GETMESSAGE, but that seems to create performance issues.
Are there any other solutions, apart from these, that don't degrade performance?
AFAIK there's no better solution than the ones you mentioned. And, of course, subclassing the window is better than hooking all the messages of the thread.
Let's think about the path a message takes before it's handled by the window:
The message is either posted or sent to the window, either by explicit call to PostMessage/SendMessage or implicitly by the OS.
Posted messages only: eventually the thread pops this message from the message queue (by calling GetMessage or similar), and then calls DispatchMessage.
The OS invokes the window's procedure by calling CallWindowProc (or similar).
CallWindowProc identifies the window procedure associated with the window (via GetClassLong/GetWindowLong).
The above procedure is called.
Subclassing means replacing the window procedure for the target window. This seems to be the best option.
Installing a hook with the WH_GETMESSAGE flag will monitor all the messages posted to the message queue. This is bad for the following reasons:
Performance.
You'll be notified only for windows created on the specific thread.
You'll be notified only for posted messages (sent messages will not be seen).
A "posted" message doesn't necessarily mean "delivered". That is, it may be filtered by the message loop (thrown away without calling DispatchMessage).
You can't see what the actual window does and returns for that message.
So subclassing seems much better.
One more solution: in case your specific message is posted (rather than sent), you may override the message loop and, for every retrieved message, do some pre-/post-processing.