Windows touch and mouse events - C++

We have an application built on an older framework (Qt 3.3.5) that we would prefer not to have to attempt to upgrade to recognize touch events.
We recently upgraded it from win32-msvc2005 to win32-msvc2013. For the most part this has gone just fine, but tablets (for testing purposes, a Surface Pro on Windows 8.1) that sent mouse events when compiled against 2005 now send only WM_POINTER events when the touch screen is used in the 2013 compiled application.
I have been completely unable to find a way to get Windows to send me mouse events for touchscreen input again. My research implies that if I register for WM_TOUCH events I should also get mouse events (as indicated by many people on the internet angry about getting mouse events with their WM_TOUCH events), but my (supposedly) successful calls to RegisterTouchWindow don't seem to actually enable the WM_TOUCH events (or any mouse events), and I am still receiving pointer events.
I feel like I must be missing something obvious, especially with not being able to get WM_TOUCH events (which I don't even want, but which would supposedly get me mouse events), but whatever it is continues to elude me. (The obvious candidate would be RegisterTouchWindow not being called for the specific hwnd I am actually touching on the screen, but I've gone so far as calling RegisterTouchWindow upon seeing any WM_POINTERUPDATE event, passing the very hwnd that spawned the event, and logging the result of that call; it returns true, so that seems impossible as the cause.)
I'm also calling DefWindowProc on all WM_TOUCH/GESTURE/POINTER events, which is the only other thing the internet seems to think might be necessary for events to bubble into more basic events correctly. The framework does not call RegisterRawInputDevices and does not attempt to handle WM_INPUT events (which it wouldn't receive anyway thanks to not being registered for raw input). Any events not handled explicitly should fall through to a DefWindowProc call.
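Condensed to its essentials, our handling amounts to something like this (a hypothetical window procedure standing in for the framework's real one; the WM_POINTER constants require the Windows 8 SDK headers):

```cpp
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CREATE:
        // Supposedly enables WM_TOUCH for this hwnd; returns TRUE for us,
        // yet neither WM_TOUCH nor mouse events ever arrive.
        if (!RegisterTouchWindow(hwnd, 0))
            OutputDebugStringW(L"RegisterTouchWindow failed\n");
        return 0;

    case WM_TOUCH:
    case WM_GESTURE:
    case WM_POINTERDOWN:    // 0x0246
    case WM_POINTERUPDATE:  // 0x0245
    case WM_POINTERUP:      // 0x0247
        // Fall through to DefWindowProc so the system can translate these
        // into the legacy messages the framework understands.
        break;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```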
Is there even a way for older applications like ours to move to a newer msvc without going through the pain of teaching the framework to correctly handle the various touch protocols? How does an application that worked just fine on msvc2005, using the built-in Windows touch-to-mouse event conversion, get that functionality back on msvc2013?

Related

How to send simulated mouse movement to a UE4 application

Mouse movement for the entire computer can be done using SendInput with MOUSEEVENTF_MOVE, which works once the UE4 window is activated.
Given that there are multiple UE4 windows, it may be necessary to have an intermediary service to handle them uniformly, but this may require switching applications or even desktops.
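A minimal sketch of that SendInput approach (relative motion; this drives whichever window is currently active, not a chosen process):

```cpp
#include <windows.h>

// Moves the system cursor relative to its current position. This affects
// the whole desktop, not one specific target window.
void MoveMouseRelative(LONG dx, LONG dy)
{
    INPUT input = {};
    input.type = INPUT_MOUSE;
    input.mi.dx = dx;
    input.mi.dy = dy;
    input.mi.dwFlags = MOUSEEVENTF_MOVE;
    SendInput(1, &input, sizeof(INPUT));
}
```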
It would be nice if I could send mouse movement information to a specific UE4 process. Unfortunately, while keyboard input and mouse clicks can be done with WM_KEYDOWN/WM_LBUTTONDOWN and the like, mouse movement alone cannot be done with WM_MOUSEMOVE.
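And a sketch of the per-window injection just described (hwnd is a hypothetical handle to the target UE4 window):

```cpp
#include <windows.h>

void ClickAndType(HWND hwnd)
{
    LPARAM pos = MAKELPARAM(100, 200);               // client coordinates
    PostMessage(hwnd, WM_LBUTTONDOWN, MK_LBUTTON, pos);
    PostMessage(hwnd, WM_LBUTTONUP, 0, pos);
    PostMessage(hwnd, WM_KEYDOWN, 'A', 0);
    PostMessage(hwnd, WM_KEYUP, 'A', 0);
    PostMessage(hwnd, WM_MOUSEMOVE, 0, pos);         // has no effect on
                                                     // UE4's raw-input path
}
```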
I read the UE4 source code and found that it uses WM_INPUT + GetDeviceData for mouse movement events. Because the RAWINPUT handle is opaque, I can't fabricate a RAWINPUT message. In addition, Microsoft implements GlobalAlloc as LocalAlloc, so even if the data could be simulated, it could not be injected into a different process. And even if it could be injected into the process, I can't modify how the UE4 source code calls GetDeviceData.
I have heard of a UE4 technology called "pixel streaming": you can use WebRTC to directly operate a remote UE4 application, including mouse movement. I tested it, and it works, but I don't know what messages are sent over WebRTC. If I knew, maybe I could send similar messages to operate it myself.
So, is there any way to operate multiple UE4 processes at the same time without WebRTC?
I have joined Epic Games and can clone the newest source code. I found that WM_MOUSEMOVE is ignored when bUsingHighPrecisionMouseInput is true.

SDL2 SDL_SetEventFilter vs SDL_WaitEvent

I had a typical SDL event loop calling SDL_WaitEvent, and ran into a much-discussed issue (see here and here) where my application was not able to re-draw during a resize because SDL_WaitEvent doesn't return until a resize is finished on certain platforms (Win32 & Mac OS). In each of these discussions, the technique of using SDL_SetEventFilter to get around it is mentioned and more or less accepted as a solution and a hack.
Using the SDL_SetEventFilter approach works perfectly, but now I'm looking at my code and I've practically moved all the code from my SDL_WaitEvent loop into my EventFilter, and I just handle events there.
Architecturally it's fishy as heck.
Are there any gotchas with this approach of dispatching messages to my application in the function set by SDL_SetEventFilter, besides the possibility of being called on a separate thread?
Bonus question: How is SDL handling this internally? From what I understand, this resize issue is rooted in the underlying platform. For example, Win32 will issue a WM_SIZING and then enter its own internal message pump until WM_SIZE is issued. What is triggering the SDL EventFilter to run?
Answering my own question after more experimentation and sifting through the source.
The way SDL handles events is that when you call SDL_WaitEvent/SDL_PeekEvent/SDL_PeepEvents, it pumps win32 until there are no messages left. During that pump, it processes the win32 messages and turns them into SDL events, which it queues to return after the pump completes.
The way win32 handles move/resize operations is to go into a message pump until moving/resizing completes. It's a regular message pump, so your WndProc is still invoked during this time. You'll get a WM_ENTERSIZEMOVE, followed by many WM_SIZING or WM_MOVING messages, then finally a WM_EXITSIZEMOVE.
These two things together mean that when you call any of the SDL event functions while win32 is in a move/resize operation, you're stuck until dragging completes.
The way the EventFilter gets around this is that it gets called as part of the WndProc itself. This means you don't need to queue up messages and have them handed back to you at the end of SDL_Peek/Wait/PeepEvent; you get them handed to you immediately as part of the pumping.
In my architecture, this fits perfectly. YMMV.
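For illustration, here is a minimal sketch of the filter approach (SDL2; Redraw is a hypothetical stand-in for your own drawing code):

```cpp
#include <SDL.h>

static void Redraw(void* /*userdata*/) { /* hypothetical drawing code */ }

// Runs inside SDL's message pump (possibly on another thread on some
// platforms), so it fires even while win32 sits in its modal resize loop.
static int EventFilter(void* userdata, SDL_Event* event)
{
    if (event->type == SDL_WINDOWEVENT &&
        event->window.event == SDL_WINDOWEVENT_RESIZED)
    {
        Redraw(userdata);
        return 0;  // already handled; don't queue it
    }
    return 1;      // let everything else reach SDL_WaitEvent as usual
}

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* win = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480,
                                       SDL_WINDOW_RESIZABLE);
    SDL_SetEventFilter(EventFilter, win);

    SDL_Event e;
    while (SDL_WaitEvent(&e) && e.type != SDL_QUIT)
        ;  // normal event handling here

    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```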

Simulate mouse click in background window

I'm trying to use SendMessage to post mouse clicks to a background window (Chrome), which works fine, but brings the window to front after every click. Is there any way to avoid that?
Before anyone says this is a duplicate question, please make sure the other topic actually mentions not activating the target window, because I couldn't find one that does.
Update: aha, hiding the window does the trick, almost. It receives simulated mouse/keyboard events as intended, and doesn't show up on screen. However, I can just barely use my own mouse to navigate around the computer, and keyboard input is completely disrupted.
So my question is, how does sending messages to a window affect other applications? Since I'm not actually simulating mouse/keyboard events, shouldn't the other windows be completely oblivious to this?
Is it possibly related to the window calling SetCapture when it receives WM_LBUTTONDOWN? And how would I avoid that, other than hooking the API call (which would be very, very ugly for such a small task)?
The default handling provided by the system (via DefWindowProc) causes windows to come to the front (when clicked on) as a response to the WM_MOUSEACTIVATE message, not WM_LBUTTONDOWN.
The fact that Chrome comes to the front in response to WM_LBUTTONDOWN suggests that it's something Chrome is specifically doing, rather than default system behaviour that you might be able to prevent in some way.
The source code to Chrome is available; I suggest you have a look at it and see if it is indeed something Chrome is doing itself. If so, the only practical way you would be able to prevent it (short of compiling your own version of Chrome) is to inject code into Chrome's process and sub-class its main window procedure.
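If it came to that, the sub-classing half would look roughly like this untested sketch (the injection step is not shown, and all names are hypothetical):

```cpp
#include <windows.h>

static WNDPROC g_origProc = nullptr;

LRESULT CALLBACK NoActivateProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_MOUSEACTIVATE)
        return MA_NOACTIVATE;  // accept the click, but don't come to front
    return CallWindowProc(g_origProc, hwnd, msg, wp, lp);
}

void SubclassTarget(HWND target)  // must run inside the target process
{
    g_origProc = (WNDPROC)SetWindowLongPtr(target, GWLP_WNDPROC,
                                           (LONG_PTR)NoActivateProc);
}
```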

WM_SETFOCUS, get app that just lost focus

When my WTL C++ application is activated or gets the keyboard focus, I need to determine the window handle of the application that was previously activated or had focus. However, the window handle (LPARAM) of both the WM_SETFOCUS and WM_ACTIVATE messages is NULL (XP, 32-bit).
How can I determine the application that just lost focus when my application is activated? Is there a simple way to do this or will I need to roll a special CBT hook?
An easy way to see exactly what messages are being sent and what their parameters are is to fire up Spy++ and set it to Log Messages while you Alt+Tab to another window.
Consistent with what you've discovered, the lParam for both WM_SETFOCUS and WM_ACTIVATE will be NULL when the previously active window (or the window being activated) is not in the same thread.
You might have more luck with WM_ACTIVATEAPP, as David suggested. Once you get the thread identifier, you can try calling the GetGUIThreadInfo function to determine the active window for that thread. This function will work even if the active window is not owned by the calling process.
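A rough, untested sketch of that combination, inside a hypothetical window procedure:

```cpp
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    // When we are being activated (wParam is TRUE), lParam carries the
    // thread id of the application being deactivated. GetGUIThreadInfo
    // works even when that thread belongs to another process.
    if (msg == WM_ACTIVATEAPP && wParam)
    {
        GUITHREADINFO gti = { sizeof(GUITHREADINFO) };
        if (GetGUIThreadInfo((DWORD)lParam, &gti))
        {
            HWND previouslyActive = gti.hwndActive;  // window losing focus
            (void)previouslyActive;                  // ... use as needed
        }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```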
If your app is anything other than a small utility that the user is not expected to keep open and running for very long, I would shy away from using a CBT hook if at all possible, given the potential performance implications. Unfortunately, interaction like this across process boundaries is difficult.
If you're not afraid of using things that may break with future versions of Windows, you could investigate the RegisterShellHookWindow function. I can't tell you much about it, having never used it myself, but it's an easier way to get the shell messages you would otherwise only receive by installing a hook.
It was around as far back as Windows 2000, but wasn't included in the SDK until XP SP1. It still exists in Windows Vista and 7, as far as I can tell.
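For what it's worth, the pieces would fit together roughly like this (an untested sketch assembled from the documentation):

```cpp
#include <windows.h>

static UINT g_shellHookMsg;

void EnableShellHook(HWND hwnd)
{
    RegisterShellHookWindow(hwnd);  // needs _WIN32_WINNT >= 0x0501
    g_shellHookMsg = RegisterWindowMessage(TEXT("SHELLHOOK"));
}

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == g_shellHookMsg && wParam == HSHELL_WINDOWACTIVATED)
    {
        HWND activated = (HWND)lParam;  // track these to know which window
        (void)activated;                // was active before yours
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```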

Event Handler for Minimize and Maximize Window

I am developing an application for PocketPC. When the application starts the custom function SetScreenOrientation(270) is called which rotates the screen. When the application closes the function SetScreenOrientation(0) is called which restores the screen orientation.
This way, the screen orientation isn't restored if the user minimizes the application, which is not acceptable.
Does anyone know where (in which event handlers) should SetScreenOrientation(int angle) be called to set the screen orientation on application start, restore orientation on minimize, set the orientation on maximize and restore the orientation on close?
Actually, I don't know which event handler handles the Minimize and Maximize events.
The correct message is WM_SIZE, but Daemin's answer points to the wrong WM_SIZE help topic. Check the wParam. Be careful as your window may be maximized but hidden.
Going from my Windows CE experience, you should handle either the WM_SIZE or WM_WINDOWPOSCHANGED message. If you're working on PocketPC, I would suggest you take a look at the WM_WINDOWPOSCHANGED message first, because I'm not sure WM_SIZE has the right parameters that you need.
From the WM_WINDOWPOSCHANGED message's WINDOWPOS structure take a look at the flags member, specifically SWP_SHOWWINDOW and SWP_HIDEWINDOW.
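Putting the two suggestions together, a rough sketch using the desktop Win32 names (SetScreenOrientation is the asker's own function):

```cpp
#include <windows.h>

void SetScreenOrientation(int angle);  // the asker's existing function

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_SIZE:
        if (wParam == SIZE_MINIMIZED)
            SetScreenOrientation(0);    // restore on minimize
        else if (wParam == SIZE_MAXIMIZED || wParam == SIZE_RESTORED)
            SetScreenOrientation(270);  // rotate on maximize/restore
        break;

    case WM_WINDOWPOSCHANGED:
    {
        const WINDOWPOS* wp = (const WINDOWPOS*)lParam;
        if (wp->flags & SWP_HIDEWINDOW) SetScreenOrientation(0);
        if (wp->flags & SWP_SHOWWINDOW) SetScreenOrientation(270);
        break;  // still fall through to DefWindowProc so WM_SIZE arrives
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```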
The specific versions of the messages that you need to look at vary with the operating system you're using. The Pocket PC OS is built on Windows CE 3.0 (and lower), while Windows Mobile is now built on Windows CE 5.0 (even Windows Mobile 6), but was also built on Windows CE 4. (Source)
So just look under the relevant section in MSDN for the OS that you're writing for.
I don't know what these are called in the C++ world, but in the .NET Compact Framework your application form's Resize event would be called when you minimize/maximize a window, and then in the event code you would check the WindowState property of the form to see if it's minimized or maximized.
Altering the state of your PDA from within your application is risky (although there are lots of good reasons to do it), because if your app crashes it will leave the PDA in whatever state it was in. I've done a lot of kiosk-type (full-screen) apps in Windows Mobile, and one of the tricks to doing this effectively is to hide the WM title bar (the top row with the Windows start button) to keep it from flashing up for a split second every time you open a new form. If the app crashes, the windows bar remains invisible until you reset the device, which isn't good. At least with screen rotation the user can restore it manually.
It really depends on the platform, but I'd go with WM_WINDOWPOSCHANGED or OnShow. It's not WM_SIZE; that one is not always sent on all platforms. Casio devices don't send the size event when you'd expect them to. TDS and Symbol devices do.
Even though MSDN is a great source of info, remember that not all OSes are created equal. In the PPC world the hardware provider gets to create their own OS, and sometimes they miss things or purposefully ignore things.
I've got a platform here (name withheld to protect... well, me) that has left and right buttons. When you press them, you'd expect to be able to catch VK_LEFT and VK_RIGHT. You'd be wrong. You actually get ';' or ':'. How's that for a kick in the pants?