Emulating Alt-Tab keys causes the menu to get stuck on screen - C++

I need to make a custom hotkey for the alt-tab function. I'm doing this with SendInput by sending the corresponding keys, and it works fine.
However, if a hotkey already includes the Alt key, the program only needs to press and release Tab; but doing so causes the Alt-Tab menu to get stuck on screen, and the only way to make it go away is to close my program. How could that possibly happen, and what does closing my program have to do with the menu disappearing?
On the other hand, sending (Alt down)(Tab down)(Tab up)(Alt up) regardless of whether Alt is already down works in all cases, but I can't rely on this behavior for other reasons.
I'm using Windows XP, if that helps; I haven't tried it on the Windows 7 machine yet.
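For reference, this is roughly what the four-event sequence above looks like with SendInput; a minimal sketch only, which makes no attempt to detect whether Alt is already physically held down:

```cpp
#include <windows.h>

// Minimal sketch: send Alt down, Tab down, Tab up, Alt up as one batch.
// This is the four-event variant described above; it does not try to
// handle the case where Alt is already held by the user.
void SendAltTab()
{
    INPUT in[4] = {};
    for (int i = 0; i < 4; ++i)
        in[i].type = INPUT_KEYBOARD;

    in[0].ki.wVk = VK_MENU;                 // Alt down
    in[1].ki.wVk = VK_TAB;                  // Tab down
    in[2].ki.wVk = VK_TAB;
    in[2].ki.dwFlags = KEYEVENTF_KEYUP;     // Tab up
    in[3].ki.wVk = VK_MENU;
    in[3].ki.dwFlags = KEYEVENTF_KEYUP;     // Alt up

    SendInput(4, in, sizeof(INPUT));
}
```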

I had a similar problem, caused by doing PostMessage with WM_KEYDOWN, VK_TAB in an event triggered by the operator pressing Alt+N to cancel an action. The Alt key was thus still down when the Tab was sent. Since our code never sent a matching WM_KEYUP, it must have confused Windows XP: the Alt-Tab menu stayed on the screen until the application exited.
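Purely to illustrate the missing key-up: the window handle and the lParam bit patterns below are assumptions, not taken from the original code.

```cpp
// Sketch: whenever a key-down is posted manually, post the matching key-up
// as well so the target never sees Tab as permanently held down.
// hwndTarget is hypothetical; the lParam values carry only the minimal
// repeat-count/transition bits and omit the scan code.
PostMessage(hwndTarget, WM_KEYDOWN, VK_TAB, 0x00000001);
PostMessage(hwndTarget, WM_KEYUP,   VK_TAB, 0xC0000001);
```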

I don't know if this is related, but Alt+Ctrl+Tab causes the menu to stick, just as if Alt stayed down when pressing Alt+Tab. So maybe you are sending a Ctrl key event somehow.

Related

Using two mice to perform completely different actions in Windows

I'm currently trying to develop an application that uses two mice to perform completely different actions in Windows. However, after having spent a couple of days on it, I'm starting to wonder if what I want to do is even possible using the Windows APIs. As I'm far from being an expert in Windows APIs, I would like your opinions on whether I'm going in the right direction or whether I should try to do it completely differently (maybe by developing a driver?).
Here's what I want to do: imagine two mice are plugged into my computer. I would like to use the first one as a regular mouse, while the second one would be used to perform completely different actions. For instance, clicking the second mouse's left button would open a new tab in Firefox (sending a Ctrl+T command to the Firefox app) and clicking its right button would send a Ctrl+C. Then, moving the second mouse upwards would zoom in, and moving it downwards would zoom the Firefox page out (so the mouse cursor on screen would remain fixed while doing that!). The idea is also to recognize which application is currently in use (which one has mouse/keyboard focus) and perform different actions depending on it. So, for instance, the second mouse's left click would generate a Ctrl+T in Firefox, a Ctrl+B in Word and a Ctrl+S in Notepad (in fact, the idea is to parameterize those actions at will). All of that while the first mouse must continue to act just like a regular mouse.
So, it's important to understand that my application will run in the background and will never, per se, interact directly with the user (no GUI, as it doesn't require the user to input anything). Its purpose is just to modify the mouse input coming from the second mouse and send other inputs (messages) to the application currently being used.
So far, I'm using raw input. I'm able to differentiate which mouse is being used, and I'm able to send (application-specific) messages to other applications when an action is performed on the second mouse. I'm even able to lock the cursor on screen when the second mouse is moved (so that only the corresponding message is sent to the application of interest!). However, I'm unable to block the button messages the second mouse sends to the app with mouse focus. Hence, when clicking the second mouse's right button in Notepad, for instance, my specific command ("aaa" for the moment, as I'm just trying with letters for the sake of simplicity) is sent (and displayed in the Notepad window) BUT the Notepad context menu opens as well (so it also receives a WM_RBUTTONDOWN message).
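For context, here is a minimal sketch of the raw-input registration this relies on; the window handle and the function name are illustrative, and 0x01/0x02 are the standard generic-desktop usage page and mouse usage values:

```cpp
#include <windows.h>

// Sketch: register for raw mouse input so each WM_INPUT message reports
// which device (RAWINPUTHEADER::hDevice) generated it. hwndSink is assumed
// to be a (possibly hidden) window owned by the background application.
bool RegisterForRawMouseInput(HWND hwndSink)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;            // HID_USAGE_PAGE_GENERIC
    rid.usUsage     = 0x02;            // HID_USAGE_GENERIC_MOUSE
    rid.dwFlags     = RIDEV_INPUTSINK; // receive input even without focus
    rid.hwndTarget  = hwndSink;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid)) != FALSE;
}
```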
My question, then, is: how can I block the mouse button messages (WM_RBUTTONDOWN, and so on) from being received by other applications when the second mouse is used? Is it even possible? The problem is that (in my understanding) those messages have higher priority than the WM_INPUT messages, so by the time I read the WM_INPUT message in my application and detect that the button was pressed on the second mouse, it's already too late and the WM_xBUTTONDOWN has already been delivered.
I know that using mouse hooks I could block those, but then there is no way to differentiate the origin of the message (and of course, detecting which mouse is used is the whole point of my application).
I've also tried DirectInput8, but it no longer supports multiple mice (the documentation specifically says to use raw input for this).
So, I guess that by now you've gathered that I'm quite lost and have no idea whether what I want to do is even achievable. Any help would be more than welcome.
Looking forward to reading your replies.
I was about to suggest hooks, but then I read that you already looked into that. I guess the last resort for your problem would be to write your own driver.
After Windows has installed the second mouse in its usual way, you can go to Device Manager and change the driver of the mouse you want to "repurpose" to your own driver.
That said, developing a driver is probably not something one does as a side task in a project.

Calling SendInput() results in unexpected behaviour

A call to SendInput to simulate pressing the left mouse button appears to take effect only after the code written below that call has executed.
I made a list box and I want right-click to select items from it, so I decided to have the WM_CONTEXTMENU handler call SendInput to simulate a left click immediately before opening a context menu. However, I believe the context menu is popping up before the left click occurs, resulting in the left click hitting the edge of the context menu (which does nothing).
Adding MessageBox(0,0,0,0); right between the call to SendInput and the creation of the popup menu results in the left click successfully occurring and selecting an item, which is the behaviour I expected and desire. Strangely, calling Sleep(1000) after the call to SendInput delays the program but doesn't make SendInput behave as expected.
EDIT: Yes I know one solution to my problem is to select it using LB_SETSEL, but I'm partially doing this for learning purposes and if I run into a similar problem using SendInput I want to know how to solve it, so please help me resolve this specific bug.
SendInput() merely injects the input events into the system's input queue and then returns immediately, letting your app do other things while Windows handles the events in the background as if the user had performed them manually. That is not the solution to your problem.
In your WM_CONTEXTMENU handler, simply send a LB_SETCURSEL message (for a single-select ListBox) or a LB_SETSEL message (for a multi-select ListBox) directly to the ListBox's HWND to select the desired list item(s) before then displaying your popup menu.
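A minimal sketch of that approach for a single-select list box, assuming the coordinates come from WM_CONTEXTMENU's lParam (screen coordinates); the helper name is made up for illustration:

```cpp
#include <windows.h>
#include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM

// Sketch: in the WM_CONTEXTMENU handler, select the item under the cursor
// directly instead of simulating a click. hList is the list box HWND and
// (xScreen, yScreen) come from the message's lParam.
void SelectItemUnderCursor(HWND hList, int xScreen, int yScreen)
{
    POINT pt = { xScreen, yScreen };
    ScreenToClient(hList, &pt);

    // LB_ITEMFROMPOINT: low word is the nearest item index, high word is
    // non-zero if the point lies outside the list box's client area.
    LRESULT hit = SendMessage(hList, LB_ITEMFROMPOINT, 0,
                              MAKELPARAM(pt.x, pt.y));
    if (HIWORD(hit) == 0)
        SendMessage(hList, LB_SETCURSEL, LOWORD(hit), 0);  // single-select

    // ...then show the popup menu as before.
}
```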

Keyboard messages from child controls

I am currently developing a user-interface DLL that uses the Win32 API. The DLL must work on numerous platforms: XP, Windows CE, etc. I have managed to incorporate docking, anchoring and so on, but appear to have a problem regarding owner-drawn buttons. I can draw the button's correct state: focus, clicked, default. However, I cannot receive key notifications. Specifically, I want to perform a click on the button that currently has focus when the user presses Enter.
Note that I am using a Windows message loop rather than a dialog message loop. I use Windows hooks to hook into window creation and set the user data to point to my control instance. If I test for WM_KEYDOWN in the main message loop, I can get a handle to my button control instance and could forward the message to the relevant control. Unfortunately, I am dealing with a lot of legacy code and this may not be an ideal solution.
So, my question is what is the best way forward. Is subclassing the button control's window procedure a viable option or is there an easier way?
Many thanks in advance.
The comments above are correct. The button with focus should be getting the key messages. But buttons don't (by themselves) respond to Enter; they respond to Space. It sounds like what you're missing is the typical dialog keyboard navigation, such as the Tab key moving the focus and Enter activating the "default" button.
If you've got a typical Windows message pump, and you want the keyboard behavior normally associated with dialogs, then you need to use the IsDialogMessage API in your message loop. This means your window is essentially a "modeless dialog".
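A minimal sketch of such a message pump, assuming hwndMain is the top-level window whose child controls should get dialog-style navigation:

```cpp
// Sketch: message pump that gives a plain top-level window dialog-style
// keyboard behaviour (Tab moving focus, Enter activating the default button).
// hwndMain is assumed to be the window containing the controls.
MSG msg;
while (GetMessage(&msg, nullptr, 0, 0) > 0)
{
    if (!IsDialogMessage(hwndMain, &msg))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}
```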
Looks like standard window proc subclassing should do the trick. See http://msdn.microsoft.com/en-us/library/windows/desktop/ms633591(v=vs.85).aspx for details.
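A minimal sketch of that kind of subclassing, along the lines of the linked SetWindowLong documentation; the Enter handling shown here is illustrative, not a drop-in fix:

```cpp
#include <windows.h>

// Sketch: subclass a button so Enter is treated like a click.
// g_oldButtonProc is saved when the subclass is installed.
static WNDPROC g_oldButtonProc = nullptr;

LRESULT CALLBACK ButtonSubclassProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_KEYDOWN && wParam == VK_RETURN)
    {
        // Illustrative: notify the parent as if the button had been clicked.
        SendMessage(GetParent(hwnd), WM_COMMAND,
                    MAKEWPARAM(GetDlgCtrlID(hwnd), BN_CLICKED), (LPARAM)hwnd);
        return 0;
    }
    return CallWindowProc(g_oldButtonProc, hwnd, msg, wParam, lParam);
}

void InstallSubclass(HWND hwndButton)
{
    g_oldButtonProc = (WNDPROC)SetWindowLongPtr(
        hwndButton, GWLP_WNDPROC, (LONG_PTR)ButtonSubclassProc);
}
```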

Capturing keyboard input without focus on the program window

I am writing a VoIP client and I want to start/stop on WM_KEYDOWN and WM_KEYUP messages for a certain key, say K. When the main window has focus, this is no problem, but how do I enable it outside of the window? For example, when the window is not in focus and I'm just looking at the desktop or am in a video game. How does one do something like this? I am not sure where to begin.
Also, I guess you somehow have to monitor every key press even outside the program; is that expensive?
This is Win32 C++, by the way.
You need to install keyboard hooks: http://msdn.microsoft.com/en-us/library/ms644990(v=VS.85).aspx
This can be troublesome for every running application, though, if something steals its keyboard messages.
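A minimal sketch of a low-level keyboard hook along those lines, watching for the K key from the question; OnPushToTalk is a made-up callback:

```cpp
#include <windows.h>

// Sketch: global low-level keyboard hook that reports key down/up for 'K'
// regardless of which window has focus. OnPushToTalk() is a hypothetical
// callback in the VoIP client.
static HHOOK g_hook = nullptr;
void OnPushToTalk(bool down);   // hypothetical

LRESULT CALLBACK KeyboardHookProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION)
    {
        const KBDLLHOOKSTRUCT* k = (const KBDLLHOOKSTRUCT*)lParam;
        if (k->vkCode == 'K')
            OnPushToTalk(wParam == WM_KEYDOWN || wParam == WM_SYSKEYDOWN);
    }
    // Always pass the event on so other applications still receive it.
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

void InstallHook()
{
    // Note: the installing thread must run a message loop for the hook to fire.
    g_hook = SetWindowsHookEx(WH_KEYBOARD_LL, KeyboardHookProc,
                              GetModuleHandle(nullptr), 0);
}
```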
I don't think you want this - if I'm typing a document into Word and I hit K, I'm going to be very angry when your application pops up instead of a "k" appearing in my document.
Windows allows you to assign shortcut keys to an icon on the desktop, but it limits them to the function keys or to combinations containing both Alt and Ctrl. Right-click on a desktop icon and go to Properties, and look for the field marked "Shortcut key".

button keyboard focus issues

How would one prevent the little dotted square that appears on a button when it has keyboard focus in a dialog? (With apologies for the technical jargon.) At one point I hacked together a solution by subclassing a button's WindowProc and subverting some window messages, but I wanted to know the correct way.
There's actually a problem with another control in the dialog also involving the keyboard. This other control is actually also a button, but being used as a group box or panel, not as a functioning button. But when I hit the tab key in the dialog, this group box "button" comes to the foreground obscuring the static controls on top of it, so I wanted to prevent that.
For both of the above, I tried turning off WS_TABSTOP; it didn't help.
Both of my problems mentioned above were solved by subclassing the WndProcs and returning 0 in response to message 0x128, discarding it. Even Spy++ could not identify this message 0x128, and I don't have it in any header. But it's sent to every control in the dialog the first time Tab is hit in the dialog.
(I did try BN_SETFOCUS as described above, and also WM_SETFOCUS, but it didn't help.)
So if anyone knows where to find out what Windows message 0x128 is...
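(For reference, 0x0128 is WM_UPDATEUISTATE in winuser.h.) A minimal sketch of the subclassing workaround described above might look like this; the procedure and variable names are illustrative:

```cpp
#include <windows.h>

// Sketch: subclass procedure that swallows 0x128 (WM_UPDATEUISTATE) so the
// control never redraws its focus rectangle, mirroring the workaround above.
// g_oldProc is assumed to have been saved when the subclass was installed
// with SetWindowLongPtr(hwnd, GWLP_WNDPROC, ...).
static WNDPROC g_oldProc = nullptr;

LRESULT CALLBACK NoFocusCuesProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_UPDATEUISTATE)   // 0x0128
        return 0;                  // discard it, as described above
    return CallWindowProc(g_oldProc, hwnd, msg, wParam, lParam);
}
```

Another variant sometimes used is to send the dialog WM_UPDATEUISTATE with MAKEWPARAM(UIS_SET, UISF_HIDEFOCUS) so the focus cues are hidden from the start.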
The correct way is to write your own button control instead of using the default Windows one.
Alternatively, you can prevent it from ever getting keyboard focus.