I'm trying to use SendMessage to post mouse clicks to a background window (Chrome). That works fine, but it brings the window to the front after every click. Is there any way to avoid that?
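For reference, this is roughly what the sending side looks like (simplified; the target handle and coordinates are placeholders):

    // Sketch of what I'm doing: send a synthetic left click to a
    // client-area coordinate of the target window.
    #include <windows.h>

    void SendClick(HWND target, int x, int y)
    {
        LPARAM pos = MAKELPARAM(x, y);  // client coordinates of the click
        SendMessage(target, WM_LBUTTONDOWN, MK_LBUTTON, pos);
        SendMessage(target, WM_LBUTTONUP, 0, pos);
    }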
Before anyone says this is a duplicate question, please make sure that the other topic actually mentions not activating the target window, because I couldn't find one that does.
Update: aha, hiding the window does the trick, almost. It receives simulated mouse/keyboard events as intended and doesn't show up on screen. However, I can now barely use my own mouse to navigate around the computer, and keyboard input is completely disrupted.
So my question is, how does sending messages to a window affect other applications? Since I'm not actually simulating mouse/keyboard events, shouldn't the other windows be completely oblivious to this?
Is it possibly related to the window calling SetCapture when it receives WM_LBUTTONDOWN? And how would I avoid that, other than hooking the API call (which would be very, very ugly for such a small task)?
The default handling provided by the system (via DefWindowProc) causes windows to come to the front (when clicked on) as a response to the WM_MOUSEACTIVATE message, not WM_LBUTTONDOWN.
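For a window you control yourself, that default activation can be suppressed by answering WM_MOUSEACTIVATE before DefWindowProc does. A minimal sketch (this only helps for your own windows, not for Chrome's):

    #include <windows.h>

    // Sketch: a window procedure that declines activation on mouse clicks.
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_MOUSEACTIVATE)
            return MA_NOACTIVATE;  // process the click, but don't activate
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }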
The fact that Chrome comes to the front in response to WM_LBUTTONDOWN suggests that it's something Chrome is specifically doing, rather than default system behaviour that you might be able to prevent in some way.
The source code to Chrome is available; I suggest you have a look at it and see if it is indeed something Chrome is doing itself. If so, the only practical way you would be able to prevent it (short of compiling your own version of Chrome) is to inject code into Chrome's process and subclass its main window.
Related
I'm currently trying to develop an application that uses two mice to perform completely different actions in Windows. However, after having spent a couple of days on it, I'm starting to wonder if what I want to do is even possible using the Windows APIs. As I'm far from being an expert in Windows APIs, I would like to get your opinions on whether I'm going in the right direction or whether I should try to do it completely differently (maybe by developing a driver?).
Here's what I want to do: imagine two mice are plugged into my computer. I would like to use the first one as a regular mouse, while the second one would be used to perform completely different actions. For instance, clicking the second mouse's left button would open a new tab in Firefox (sending a CTRL+T command to the Firefox app), and clicking its right button would send a CTRL+C. Then, moving the second mouse upwards would zoom in, and moving it downwards would zoom the Firefox page out (so the mouse cursor on screen would remain fixed while doing that!).

The idea is also to recognize which application is currently in use (which one has mouse/keyboard focus) and perform different actions depending on it. So, for instance, the second mouse's left click would generate a CTRL+T in Firefox, a CTRL+B in Word and a CTRL+S in Notepad (in fact, the idea is to parameterize those actions at will). All of that while the first mouse continues to act just as a regular mouse.
So, it's important to understand that my application will run in the background and will never, per se, interact directly with the user (no GUI, as it doesn't require the user to input anything). Its purpose is just to modify the mouse input coming from the second mouse and send other inputs (messages) to the application currently being used.
So far, I'm using raw input. I'm able to differentiate which mouse is being used, and I'm able to send (application-specific) messages to other applications when an action is performed on the second mouse. I'm even able to lock the cursor on screen when the second mouse is moved (so that only the corresponding message is sent to the application of interest!). However, I'm unable to block the button messages sent by the second mouse to the app with the mouse focus. Hence, when clicking the second mouse's right button in Notepad, for instance, my specific command ("aaa" for the moment, as I'm just testing with letters for the sake of simplicity) is sent (and displayed in the Notepad window), BUT the Notepad context menu opens as well… (so the app received a WM_RBUTTONDOWN message too).
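For completeness, this is roughly how I differentiate the two mice today (simplified sketch, error handling omitted):

    #include <windows.h>

    // Register to receive WM_INPUT from all mice (usage page 1, usage 2),
    // even while the window is in the background.
    void RegisterMice(HWND hwnd)
    {
        RAWINPUTDEVICE rid = { 0x01, 0x02, RIDEV_INPUTSINK, hwnd };
        RegisterRawInputDevices(&rid, 1, sizeof(rid));
    }

    // Called from the window procedure on WM_INPUT.
    void OnRawInput(LPARAM lParam)
    {
        RAWINPUT raw;
        UINT size = sizeof(raw);
        GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size,
                        sizeof(RAWINPUTHEADER));
        if (raw.header.dwType == RIM_TYPEMOUSE)
        {
            HANDLE device = raw.header.hDevice;  // identifies which mouse
            // compare 'device' with the handle of the second mouse here
        }
    }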
My question then is: how can I block the mouse button messages (WM_RBUTTONDOWN and so on) from being received by other applications when the second mouse is used? Is it even possible? The problem is that (in my understanding) those messages have higher priority than the WM_INPUT messages… so by the time I read the WM_INPUT message in my application and detect that the button was pressed on the second mouse, it's already too late: the WM_xBUTTONDOWN has already been sent!
I know that using mouse hooks I could block those, but then there is no way to differentiate the origin of the message (and of course, detecting which mouse is used is the main point of my application).
I've also tried DirectInput8, but it no longer supports the use of several mice (the documentation specifically says to use raw input for this).
So, I guess by now you've gathered that I'm quite lost and have no idea whether what I want to do is even achievable. Any help would be more than welcome.
Looking forward to reading your replies.
I was about to suggest hooks, but then I read that you've looked into that already. I guess the last resort for your problem would be to write your own driver.
After Windows has installed the second mouse in its usual way, you can go to the Device Manager and change the driver of the mouse you want to "repurpose" to your own driver.
That said, developing a driver is probably not something one does as a side task in a project.
We have an application built on an older framework (Qt 3.3.5) that we would prefer not to have to upgrade just to recognize touch events.
We recently upgraded it from win32-msvc2005 to win32-msvc2013. For the most part this has gone just fine, but tablets (for testing purposes, a Surface Pro on Windows 8.1) that sent mouse events when the application was compiled against 2005 now send only WM_POINTER events when the touch screen is used with the 2013-compiled application.
I have been completely unable to find a way to get Windows to send me mouse events for touchscreen input again. My research implies that if I register for WM_TOUCH events I should also get mouse events (as indicated by many people on the internet angry about getting mouse events with their WM_TOUCH events), but my (supposedly) successful calls to RegisterTouchWindow don't seem to actually enable the WM_TOUCH events (or any mouse events), and I am still receiving pointer events.
I feel like I must be missing something obvious, especially with not being able to get WM_TOUCH events (which I don't even want, but which would supposedly get me mouse events), but whatever it is continues to elude me. (Presumably the cause would have to be RegisterTouchWindow not being called for the specific hwnd I am actually touching on the screen, but I've gone so far as to call RegisterTouchWindow upon seeing any WM_POINTERUPDATE event, for the very hwnd that spawned the event, and to log the result of that call. It returns true, so that seems impossible as the cause.)
I'm also calling DefWindowProc on all WM_TOUCH/GESTURE/POINTER events, which is the only other thing the internet seems to think might be necessary for events to bubble into more basic events correctly. The framework does not call RegisterRawInputDevices and does not attempt to handle WM_INPUT events (which it wouldn't receive anyway thanks to not being registered for raw input). Any events not handled explicitly should fall through to a DefWindowProc call.
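In case it helps, the handling boils down to something like this (heavily simplified from the framework's actual window procedure):

    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_CREATE:
            // Returns TRUE for me, yet no WM_TOUCH ever arrives.
            RegisterTouchWindow(hwnd, 0);
            return 0;
        case WM_TOUCH:
            // Never reached; we only ever see WM_POINTER* messages.
            break;
        }
        // Unhandled messages, including WM_TOUCH/WM_GESTURE/WM_POINTER*,
        // fall through to DefWindowProc.
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }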
Is there even a way for older applications like ours to move to a newer msvc without going through the pain of teaching the framework to correctly handle the various touch protocols? How does an application that worked just fine on msvc2005, using the built-in Windows touch-to-mouse event conversion, get that functionality back on msvc2013?
I'm part of the SFML Team and we're currently looking into a feature to "request" window focus. The goal is to get very similar behavior across Windows, OS X and Linux.
For Windows one gets the rather simple SetForegroundWindow function via the WinAPI, which has a few conditions as to when the window actually gets focus. The most important one to notice here is that the window only gets focus if the call is made from the foreground process.
On OS X it's possible to grab the focus only for the active app, and otherwise let the dock icon bounce, i.e. show a notification.
Here comes the problem: we'd like to get the same behavior on Linux as well, meaning the window should get focus if it belongs to the active/foreground process, and otherwise it should generate a notification. What would be the closest thing to that with X11?
There are already a few suggestions on the issue tracker of SFML, but none of them are actually implementing this behavior.
"User Story"
I guess developers can think of different things when confronted with different technical names, so here's the issue from a user's perspective.
There are mainly two situations in which requesting focus is needed:
Sometimes when starting an application that uses a console window in the background, it can happen that the console window gets the focus instead of the actual GUI window. When this happens, it's rather annoying for the user to have to click on the GUI window first. Since the console window and the GUI window belong to the same application, there's no harm done in switching the focus to the GUI window.
When one is writing an application that supports multiple windows, there might be situations where the application should decide which window gets the focus, and again, since the windows belong to the same application, there's no harm done in switching the focus from one GUI window to the other.
Furthermore, if a different application has the focus/is being used, then it's not okay to steal the focus, and as such we just want to get the user's attention. On Windows that might be a blinking taskbar button; on OS X it might be a jumping dock icon.
The current implementation seems to work fine on OS X and Windows, but we're unsure about the X11 implementation. Thus the question is: how would one go about switching the window focus if the currently focused window was created by the same application that makes the focus request, and otherwise creating some kind of notification? For the notification we're/I'm not even sure whether there's a generic way of doing it with X11.
In X11, "focus" means "the keyboard focus", that is, the window that gets the keyboard input. The window that has the focus is not necessarily in the foreground. This depends on your window manager focus policy. Most can be configured to have "click-to-focus" or "point-to-focus" policy. If you are interested in the keyboard focus, use XSetInputFocus. If you want to bring your window to the foreground, use XRaiseWindow.
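A minimal sketch of the two calls (untested, error handling omitted):

    #include <X11/Xlib.h>

    /* Raise the window in the stacking order and give it keyboard focus. */
    void focus_and_raise(Display *dpy, Window win)
    {
        XRaiseWindow(dpy, win);   /* bring to the foreground */
        XSetInputFocus(dpy, win, RevertToParent, CurrentTime);
        XFlush(dpy);              /* push the requests to the server */
    }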
It is OK to call XRaiseWindow and XSetInputFocus once, when the application starts. It is also OK to bring a window to the foreground/set focus as a response to a user interaction with that or some other window of the same application. But it's not OK to do so as a response to some background event (time passed, file downloaded, etc.).
The standard X11 method of drawing attention to a window is setting the urgency hint. This will normally flash or bounce the icon, depending on your window manager. Do not forget to unset the hint when the user finally interacts with the window.
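Setting and clearing the hint is a matter of toggling XUrgencyHint in the window's WM hints; roughly (untested sketch):

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    /* Set or clear the urgency hint on a top-level window. */
    void set_urgency(Display *dpy, Window win, Bool urgent)
    {
        XWMHints *hints = XGetWMHints(dpy, win);
        if (!hints)
            hints = XAllocWMHints();
        if (urgent)
            hints->flags |= XUrgencyHint;
        else
            hints->flags &= ~XUrgencyHint;
        XSetWMHints(dpy, win, hints);
        XFree(hints);
    }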
I think all of this has been discussed in the thread you have linked; I'm not quite sure which concerns are still left unanswered. Nothing can implement the exact same behaviour as the other windowing systems, simply because X11 is not those windowing systems, and that's totally OK. X11, Mac OS X and Windows all behave differently, and users know and expect that. It would annoy me to no end if some application on X11 decided to behave exactly like it does on Windows, instead of toeing the X11 party line.
(Maybe this is a beginner question for people experienced with Windows message queue handling and dialog boxes. Unfortunately this is not my area of expertise, so please be gracious with me.)
I have a quite simple C++ program in terms of user interface.
It should not be a real console program, for several reasons, but it runs unattended.
It is basically an MFC stub that does not show a window at all. But sometimes it shows message boxes like:
MessageBox("Question", "XX", MB_YESNO) or so.
The problem is that, when two questions are posed one after the other, Windows sometimes seems to have saved a mouse click or key press: the user wanted to click only once, but the hardware sent two clicks. So the user has no real chance to answer the second question; yes or no is answered by the "ghost" click from before.
(There is a word for it, "bouncing" I believe, but I hope you get my point either way.)
On the command line there is fflush() for such things. How do I handle it here?
I would even use a customized message box, if I could find ready-made code for it somewhere
(so that I don't have to write it :-).
But I thought there might be an easy-to-use snippet to empty the application's message queue before the next message box is shown. Unfortunately, all I know about Windows messages is that they exist ;-(
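Something like this is the kind of snippet I imagine, if it is even the right approach (untested; I pieced it together from the documentation):

    #include <windows.h>

    // Idea: discard any pending keyboard and mouse messages
    // before showing the next message box.
    void FlushInputMessages()
    {
        MSG msg;
        while (PeekMessage(&msg, NULL, WM_KEYFIRST, WM_KEYLAST, PM_REMOVE))
            ;  // discard buffered keyboard messages
        while (PeekMessage(&msg, NULL, WM_MOUSEFIRST, WM_MOUSELAST, PM_REMOVE))
            ;  // discard buffered mouse messages
    }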
Can anybody help me?
Philm,
A message box is a modal window, hence it runs its own message loop. When the message box is dismissed, that message loop no longer exists, so there should be no leftover mouse messages.
Mouse click messages are always posted to the window under the cursor, unless mouse capture has been set with SetCapture.
From what you describe, you do not show any window at all, so how would mouse messages end up in your queue?
The only way to resolve your problem would be to debug your application, or a test project that you write to duplicate the problem. Could you write one and post it somewhere for download?
I want to make a C++ button on another window, Start > Run for example, but when I do, it will not give me the signalled (clicked) event.
I'm sorry, I can see that you didn't get the question.
OK, basically: you create a button with CreateWindowEx(). I want to do that, but put the button on a different window with SetParent, which I have already done. Now the button does not work, so I need my program to somehow get notified when it is clicked, with the Run window as an example!
And yes, you've got it. But making the button is not the problem; it's getting notified from my program when it's clicked, since the button does not belong to it anymore!
You need to apply the ancient but still-supported technique known in Windows as subclassing; it is well explained here (a 15-year-old article, but still quite valid ;-). As this article puts it,
"Subclassing is a technique that allows an application to intercept messages destined for another window. An application can augment, monitor, or modify the default behavior of a window by intercepting messages meant for another window."
You'll want "instance subclassing", since you're interested only in a single window (either your new button, or the one you've SetParented your new button to); if you decide to subclass a window belonging to another process, you'll also need to use the injection techniques explained in the article, such as injecting your DLL into system processes and watching over events with a WH_CBT hook, and the like. But I think you could keep the button in your own process even though you're SetParenting it to a window belonging to a different process, in which case you can do your instance subclassing entirely within your own process, which is much simpler if feasible.
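To make that concrete, here is a minimal sketch of instance subclassing done within your own process (the handler body is a placeholder, and real code should restore the original procedure when it's done):

    #include <windows.h>

    static WNDPROC g_prevProc;  // the button's original window procedure

    // Sees every message destined for the button before the button does.
    LRESULT CALLBACK ButtonSubclassProc(HWND hwnd, UINT msg,
                                        WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_LBUTTONUP)
        {
            // the button was clicked: notify the rest of the application
        }
        return CallWindowProc(g_prevProc, hwnd, msg, wParam, lParam);
    }

    void SubclassButton(HWND hButton)
    {
        g_prevProc = (WNDPROC)SetWindowLongPtr(
            hButton, GWLP_WNDPROC, (LONG_PTR)ButtonSubclassProc);
    }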
"Superclassing" is an alternative to "subclassing", also explained in the article, but doesn't seem to offer that many advantages when compared to instance subclassing (though it may compared with global subclassing... but, that's not what you need here anyway).
You'll find other interesting articles on such topics here, here, and here (the last developing a big, rich C++ library for subclassing, but also showing a simpler approach based on hooks which you might prefer). Each article has a rather different stance and code examples, so I think that having many to look at may help you find the right mindset and code for your specific needs!
OK, I'll do my very best. As I understand you, you're trying to inject a button into some existing window, meaning: your tool creates a button in some window that does not belong to your application. Now you want to be notified when that button is pressed. Am I correct so far?
To be notified about the button being pressed, you need to get the respective window message, which will only work if you also "inject" a different WndProc into the window. Actually I have no idea how that should work, but I faintly remember functions like GetWindowLong and SetWindowLong. Maybe they will help?
EDIT
I've searched MSDN a little: while you can get the address of a window's WndProc using GetWindowLong, you cannot set the WndProc of a window that belongs to another process using SetWindowLong on Windows NT/2000/XP (and up, I suppose). See here (MSDN).
So what you could do is install a global message hook that intercepts all window messages, filter those for the window you've injected the button into, and then find your message. If you have trouble with this, however, I'm the wrong person to ask: it's been years since I've done anything like that, and it would be material for a new question.
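Still, a rough sketch of the idea, assuming the hook procedure lives in a DLL (a global hook must, so the system can map it into the target process; note that sharing g_button across processes would additionally require a shared data segment):

    #include <windows.h>

    static HHOOK g_hook;
    static HWND  g_button;  // handle of the injected button

    // Watches messages retrieved by any thread's message loop.
    LRESULT CALLBACK GetMsgProc(int code, WPARAM wParam, LPARAM lParam)
    {
        if (code == HC_ACTION)
        {
            const MSG *msg = (const MSG *)lParam;
            if (msg->hwnd == g_button && msg->message == WM_LBUTTONUP)
            {
                // the button was clicked: signal our own process,
                // e.g. with PostMessage to one of our windows
            }
        }
        return CallNextHookEx(g_hook, code, wParam, lParam);
    }

    void InstallHook(HINSTANCE hDll)
    {
        g_hook = SetWindowsHookEx(WH_GETMESSAGE, GetMsgProc, hDll, 0);
    }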
EDIT 2
Please see Alex Martelli's answer for how to define the hook. I think he's describing the technique I was referring to when I talked about defining global message hooks to intercept the window messages for the window you injected your button into.