Exclusive mouse/keyboard with the WinAPI - C++

DirectInput had an option for exclusive mouse/keyboard access. I'm now moving away from using DirectInput and was wondering how I could achieve the same behavior using just the WinAPI?
Edit: I guess I could just use SetCursorPos() to the middle of the window and hide the cursor via ShowCursor()

In the case of the mouse, use the Windows raw input API.
Use the flag RIDEV_CAPTUREMOUSE in your RAWINPUTDEVICE structure for the call to RegisterRawInputDevices(). This will prevent mouse clicks from activating other windows. In combination with that, use the ShowCursor() function to hide the mouse cursor. Those two things will reproduce DirectInput's exclusive mouse behavior. In its later revisions, DirectInput (for the keyboard and mouse) is just a wrapper around the Raw Input API.
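For reference, a minimal sketch of that registration. Note that the documentation requires RIDEV_NOLEGACY to be set alongside RIDEV_CAPTUREMOUSE; hwnd is assumed to be your window's handle:

#include <windows.h>

// Register the mouse for raw input with DirectInput-style exclusive capture.
bool RegisterExclusiveMouse(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;   // HID_USAGE_PAGE_GENERIC
    rid.usUsage     = 0x02;   // HID_USAGE_GENERIC_MOUSE
    // RIDEV_CAPTUREMOUSE may only be used together with RIDEV_NOLEGACY,
    // so legacy messages such as WM_MOUSEMOVE will no longer arrive;
    // read mouse data from WM_INPUT instead.
    rid.dwFlags     = RIDEV_NOLEGACY | RIDEV_CAPTUREMOUSE;
    rid.hwndTarget  = hwnd;
    if (!RegisterRawInputDevices(&rid, 1, sizeof(rid)))
        return false;
    ShowCursor(FALSE);        // hide the cursor, as DirectInput's exclusive mode did
    return true;
}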
I don't believe there is any equivalent control over the keyboard (and I don't think there was in DirectInput either). However, this is generally not a problem, since the user won't be able to get the input focus onto another app unless they specifically want to, with Alt-Tab or Ctrl-Alt-Del.

Have you looked at SetCapture()?
It would help if your question were clearer. A lack of mouse input (i.e. WM_MOUSEMOVE messages) is generally something an app must be robust to anyway; after all, a perfectly stationary mouse won't generate any such messages. So I'm guessing that you're doing something a little unusual.
There is also a mechanism for tracking the mouse leaving your app's window(s) - see here. It involves setting up a TRACKMOUSEEVENT structure for the TrackMouseEvent() function, which is a little painful, but it does all seem to work in my experience. I'm wondering if in fact it is this mechanism which is pausing your app?
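For illustration, a sketch of arming that mechanism, typically done from a WM_MOUSEMOVE handler (hwnd is assumed to be your window's handle):

// Ask for a one-shot WM_MOUSELEAVE when the cursor leaves the client area.
// The request must be re-armed after each WM_MOUSELEAVE is received.
TRACKMOUSEEVENT tme = {};
tme.cbSize    = sizeof(tme);
tme.dwFlags   = TME_LEAVE;
tme.hwndTrack = hwnd;
TrackMouseEvent(&tme);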
Can't help much more than that on the info provided I'm afraid.

Use ClipCursor() to confine the mouse within a specific rectangle of the screen, such as the rectangle of your window.
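A minimal sketch, assuming hwnd is the window whose client area should confine the cursor:

// ClipCursor() works in screen coordinates, so convert the client rect first.
RECT rc;
GetClientRect(hwnd, &rc);
MapWindowPoints(hwnd, nullptr, reinterpret_cast<POINT*>(&rc), 2);
ClipCursor(&rc);        // confine the cursor; call ClipCursor(nullptr) to release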

Related

When resizing, make the window transparent with a dotted-line border

I'm asking this question ahead of time, since I haven't gotten around to attempting an actual, real implementation yet. Win32 (C++) is turning out to be a colossal pain to program. But, my question is this:
I want to make my application's window become fully transparent with a dotted perimeter when resizing the window. How would I accomplish this? Think of what happens in Windows 3/3.1 (I believe it was this version) when resizing a window. Everything goes transparent, with a dotted-outline where the mouse is moving, then it repaints the entire contents. That's what I'm trying to achieve.
A while ago, I tried handling the WM_(ENTER/EXIT)SIZEMOVE messages and making use of SetWindowLong() to set the WS_EX_TRANSPARENT extended style, but my window became (indefinitely) pass-through, and when the window's focus was killed, it could never again regain focus.
Do I need to handle other messages like WM_NCLBUTTON(DOWN/UP)? I have a boolean flag to tell me when to halt drawing during resizing, and the logic for determining when I'm resizing works perfectly, but I cannot get the visuals to work. I'm not sure which parts of the Win32 API to actually use. I've done some research, and uxtheme.lib/.h seems promising, but I'm not sure how that would work with WM_NCPAINT, which I have been using with (some) luck.
EDIT
I need to clarify something, in case anyone was confused or unsure of what I meant. What I meant by the Windows 3.1/3 resizing scenario is that once WM_ENTERSIZEMOVE has occurred, the window (controls, caption, frame) should be made entirely invisible, and the window's nonclient region's perimeter should display a dotted outline of sorts. Then, only once the resize has finished and WM_EXITSIZEMOVE has occurred, should the entire window (controls, caption, frame) be fully redrawn, updated, and returned to its normal, functional state. Sorry for any miscommunication!
I found the answer... After so long, finally found it. Here's where I found it! http://www.catch22.net/tuts/win32/docking-toolbars-part-2# - Hope it helps anyone else possibly in my shoes!
And it turns out that the solution was rather simple. In fact, the core concept of what is explained there is almost exactly what I was thinking; I just had no idea how to implement it. The solution involves overriding the default handling of the WM_NCLBUTTONDOWN, WM_MOUSEMOVE, and WM_LBUTTONUP messages (specifically when initiating a window movement), drawing a patterned rectangle which follows the position of the cursor, and afterwards calling SetWindowPos() or some similar function to relocate the window.
Basically, block Windows from attempting to display anything graphics-related until the resizing has finished. Then, and only then, have Windows move the entire window in one fell swoop.
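To illustrate the idea (a sketch, not the linked article's exact code): DrawFocusRect() inverts pixels, so drawing the same rectangle twice erases it, which is exactly what a rubber-band outline needs.

// Erase the previous outline (if any) and draw the new one on the screen DC.
// rcOld/rcNew are screen-space rectangles tracked across WM_MOUSEMOVE.
void DrawDragRect(const RECT* rcNew, const RECT* rcOld)
{
    HDC hdc = GetDC(nullptr);                 // DC covering the whole screen
    if (rcOld) DrawFocusRect(hdc, rcOld);     // XOR draw: erases the old outline
    if (rcNew) DrawFocusRect(hdc, rcNew);     // draw the outline at the new position
    ReleaseDC(nullptr, hdc);
}
// On WM_LBUTTONUP: erase the last outline, then call SetWindowPos() once.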
Based on Remy's comment, there is a global option and corresponding registry setting for this, so perhaps try setting the registry setting when the move starts and restoring it when the move finishes.
Unfortunately this doesn't work, as Windows appears to pick up the setting only on restart; broadcasting WM_SETTINGCHANGE doesn't trigger it either. That's a pity, as having to reimplement something the OS already has an implementation of is rather a poor state of affairs.
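One thing that might be worth trying instead of the raw registry write (untested here, so treat it as an assumption): SystemParametersInfo() is the documented runtime interface to that same setting, and it may apply without a restart.

// Hypothetical sketch: toggle "show window contents while dragging" at runtime.
BOOL dragFull = TRUE;
SystemParametersInfo(SPI_GETDRAGFULLWINDOWS, 0, &dragFull, 0);      // save current value
SystemParametersInfo(SPI_SETDRAGFULLWINDOWS, FALSE, nullptr,
                     SPIF_SENDCHANGE);                              // on WM_ENTERSIZEMOVE
// ... the user drags the window ...
SystemParametersInfo(SPI_SETDRAGFULLWINDOWS, dragFull, nullptr,
                     SPIF_SENDCHANGE);                              // on WM_EXITSIZEMOVE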

Simulate mouse click without moving the cursor

I wrote an application that detects all active windows and puts them into a list.
Is there a way to simulate a mouse click on a spot on the screen relative to the window's location, without actually moving the cursor?
I don't have access to the handle of the button that is supposed to be clicked, only to the handle of the window.
Is there a way to simulate a mouse click on a spot on the screen relative to the window's location, without actually moving the cursor?
To answer your specific question - NO. Mouse clicks can only be directed where the mouse cursor actually resides at the time of the click. The correct way to simulate mouse input is to use SendInput() (or mouse_event() on older systems). But those functions inject simulated events into the same input queue that the actual mouse driver posts to, so they will have a physical effect on the mouse cursor - i.e. move it around the screen, etc.
How do I simulate input without SendInput?
SendInput operates at the bottom level of the input stack. It is just a backdoor into the same input mechanism that the keyboard and mouse drivers use to tell the window manager that the user has generated input. The SendInput function doesn't know what will happen to the input. That is handled by much higher levels of the window manager, like the components which hit-test mouse input to see which window the message should initially be delivered to.
When something gets added to a queue, it takes time for it to come out the front of the queue
When you call SendInput, you're putting input packets into the system hardware input queue. (Note: Not the official term. That's just what I'm calling it today.) This is the same input queue that the hardware device driver stack uses when physical devices report events.
The message goes into the hardware input queue, where the Raw Input Thread picks it up. The Raw Input Thread runs at high priority, so it's probably going to pick it up really quickly, but on a multi-core machine, your code can keep running while the second core runs the Raw Input Thread. And the Raw Input Thread has some stuff it needs to do once it dequeues the event. If there are low-level input hooks, it has to call each of those hooks to see if any of them want to reject the input. (And those hooks can take who-knows-how-long to decide.) Only after all the low-level hooks sign off on the input is the Raw Input Thread allowed to modify the input state and cause GetAsyncKeyState to report that the key is down.
The only real way to do what you are asking for is to find the HWND of the UI control that is located at the desired screen coordinates. Then you can either (see the sketch after this list):
send WM_LBUTTONDOWN and WM_LBUTTONUP messages directly to it. Or, in the case of a standard Win32 button control, send a single BM_CLICK message instead.
use the AccessibleObjectFromWindow() function of the Microsoft Active Accessibility API to access the control's IAccessible interface, and then call its accDoDefaultAction() method, which for a button will click it.
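As referenced above, a sketch of the first option (hwndButton is assumed to have been located already; x and y are in the target's client coordinates):

// Synthesize a left click by messaging the control directly; the cursor never moves.
// Note: some controls behave subtly differently from a real click handled by the
// input system, so test against the actual target application.
void ClickAt(HWND hwndButton, int x, int y)
{
    LPARAM pos = MAKELPARAM(x, y);
    PostMessage(hwndButton, WM_LBUTTONDOWN, MK_LBUTTON, pos);
    PostMessage(hwndButton, WM_LBUTTONUP, 0, pos);
    // For a standard Win32 button control, this single message suffices instead:
    // SendMessage(hwndButton, BM_CLICK, 0, 0);
}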
That being said, ...
I don't have access to the button's handle that is supposed to be clicked.
You can access anything that has an HWND. Have a look at WindowFromPoint(), for instance. You can use it to find the HWND of the button that occupies the desired screen coordinates (with caveats, of course: WindowFromPoint, ChildWindowFromPoint, RealChildWindowFromPoint, when will it all end?).
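A sketch of that lookup (the coordinates are illustrative; see the linked article for the differences between the related functions):

// WindowFromPoint() descends into child windows, so for many layouts the
// HWND it returns is already the control to target - but it skips hidden
// and disabled windows, hence the caveats mentioned above.
POINT pt = { 200, 150 };              // illustrative screen coordinates
HWND hwndUnder = WindowFromPoint(pt);
if (hwndUnder != nullptr) {
    // hwndUnder is the candidate for WM_LBUTTONDOWN/WM_LBUTTONUP or BM_CLICK
}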

Setting/Getting my absolute mouse position in windowed mode

I searched, but most posts are just telling me what I already have, so below is basically my code right now:
DIKeyboard->Acquire();
DIMouse->Acquire();
DIMouse->GetDeviceState(sizeof(DIMOUSESTATE), &mouseCurrState);
DIKeyboard->GetDeviceState(sizeof(keyboardState), (LPVOID)&keyboardState);
MousePos.x += mouseCurrState.lX;   // lX/lY are relative deltas since the last read,
MousePos.y += mouseCurrState.lY;   // so this only accumulates movement from an arbitrary start
Any post telling me how to get absolute position just says to use those last two lines. But my program is windowed, and the mouse can start anywhere on the screen.
i.e. if my mouse happens to be in the centre of my screen, that becomes position 0,0. I basically just want the top left of my window (not my screen) to be my 0,0 mouse coordinates, but am having a hard time finding anything relevant.
Thanks for any help! :)
Following the discussion in the comments, you'll have to decide which method works best for you. Unfortunately, having never worked with DirectInput, I do not know the ins-and-outs of it.
However, window messages work best for RTS-style controls, where a cursor is drawn on screen. This is because they respect user settings such as mouse acceleration and mouse speed, whereas DirectInput uses only the driver settings (not the control panel settings). The user will expect the mouse to feel the same, especially in windowed mode.
DirectInput works better for FPS-style controls, when there is no cursor drawn, as window messages give you only the cursor coordinates, and not offset values. This means that once you are at the edge of the screen, window messages will no longer allow you to detect the mouse being moved further (actually, I am not 100% sure on this, so if someone could verify, please feel free to comment).
For the keyboard, I would definitely suggest window messages, because DirectInput offers no advantages there, and WM input is easier to use and quite powerful (the WM_KEYDOWN message contains a lot of useful data), and it'll allow you (via TranslateMessage) to get good text input, adjusted to locale, etc.
Solving your problem with DirectInput:
You could probably use GetCursorPos followed by ScreenToClient to initialise your MousePos structure. I'm guessing you'll need to redo this every time you lose mouse input and reacquire it.
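Something along these lines (hwnd being the window the coordinates should be relative to):

// Seed MousePos from the real cursor position, converted to client coordinates,
// so that subsequent lX/lY deltas accumulate from the right starting point.
POINT pt;
GetCursorPos(&pt);            // absolute screen coordinates
ScreenToClient(hwnd, &pt);    // now relative to the window's client area
MousePos.x = pt.x;
MousePos.y = pt.y;
// Redo this each time the mouse device is reacquired.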
Hybrid solution (for RTS like controls):
It might be possible to use a hybrid solution for the mouse if you desire RTS-like controls. If this is the case, I suggest, though I have not tested this, to use WM for the movement of the mouse, which avoids the need for workaround mentioned above, and only use DirectInput to detect additional mouse buttons.
Now one thing I think you should do in such a hybrid approach is not directly use the button when you detect it via DirectInput, but rather post a custom application message to your own message queue (using PostMessage and WM_APP) with the relevant information. I suggest this because using WM you do not get the real-time state of the mouse & keyboard, but rather the state at the time of the message. Posting a message that the button was pressed allows you to handle the extra buttons in the same state-dependent manner (I don't know how noticeable this 'lag' effect is). It also makes the entire input handling very uniform, as every bit of input with this enters as a window message.
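A sketch of that posting step (WM_APP_XBUTTON is a made-up name for this example):

// Funnel a DirectInput-detected extra button through the window message queue
// so it is processed in order with the regular WM mouse/keyboard input.
const UINT WM_APP_XBUTTON = WM_APP + 1;   // hypothetical private message
// ... after GetDeviceState() shows, say, button 4 went down:
PostMessage(hwnd, WM_APP_XBUTTON, 4 /* button index */, TRUE /* pressed */);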

Qt - Catch events normally handled by the Window Manager

I'm not sure quite how to phrase the question concisely, so if there is a similar question, please point me in the right direction and close this one.
I am currently building a CAD app, the user interacts within the 3D viewports primarily through the mouse and the three keyboard modifiers (alt, shift, ctrl). Shift and control modify the currently selected tool options, and alt operates the camera - much like any other 3D CAD app.
However I'm currently developing with a Gnome desktop, and its window manager (AFAIK) catches any Alt-RightButton mouse dragging events and interprets them as a window drag command - even when not holding the title bar, and regardless of the currently highlighted widget.
This is a disaster for me because camera keyboard controls are quite standardised in my target industry. So does anyone know of a way to override this behaviour, preferably from within Qt, and preferably focus it for my one scenario in one particular widget class?
Thank you,
Cam
If you use the Qt::X11BypassWindowManagerHint on the window, then the window manager can't steal your keypresses. However, this means you lose the native window frame (including decoration, moving, and resizing), so it is likely you don't want to do this.
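For example (a sketch; viewport is assumed to be the widget acting as a top-level window):

// Applying the hint means the WM ignores the window entirely - no decoration,
// no WM-driven move/resize, and no Alt+drag interception.
viewport->setWindowFlags(viewport->windowFlags() | Qt::X11BypassWindowManagerHint);
viewport->show();   // setWindowFlags() hides the window, so re-show it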
Another way: if your users are only on 1 or 2 varieties of Linux, add something to the installer which asks the user whether they want to manipulate the gnome (or whatever) keysettings, and if so, changes them via gconftool-2 (or equivalent).

How do I get the window that currently has the cursor on top of it with X11?

How can I retrieve the top window that the cursor is on top of in the X11 server?
The window doesn't have to be “active” (selected, open, whatever), it just has to have the cursor floating on top of it.
Thanks in advance.
You can use XQueryPointer() to get the mouse position. Then get a window list using XQueryTree(). XQueryTree() returns the window list in proper z-order so you can just loop through all the windows until you find one whose bounding box is under the pointer, XGetWindowAttributes() will give you everything you need to figure out the bounding box. I'm not sure what you would do with shaped windows though.
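A sketch of that approach (Xlib, error handling omitted, single screen assumed; shaped windows are still not handled):

#include <X11/Xlib.h>

// Return the topmost mapped child of the root window whose bounding box
// contains the pointer, or None. XQueryTree() lists children bottom-to-top,
// so the array is scanned backwards to honour z-order.
Window windowUnderPointer(Display* dpy)
{
    Window root = DefaultRootWindow(dpy);
    Window rootRet, childRet, parent, *children = nullptr;
    int rootX, rootY, winX, winY;
    unsigned int mask, nChildren = 0;
    XQueryPointer(dpy, root, &rootRet, &childRet,
                  &rootX, &rootY, &winX, &winY, &mask);
    XQueryTree(dpy, root, &rootRet, &parent, &children, &nChildren);
    Window found = None;
    for (unsigned int i = nChildren; i-- > 0; ) {   // topmost first
        XWindowAttributes attr;
        XGetWindowAttributes(dpy, children[i], &attr);
        if (attr.map_state == IsViewable &&
            rootX >= attr.x && rootX < attr.x + attr.width &&
            rootY >= attr.y && rootY < attr.y + attr.height) {
            found = children[i];
            break;
        }
    }
    if (children) XFree(children);
    return found;
}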
I haven't worked with X11 for a few years, so this might be a rather clunky approach, but it should work. I also don't have my O'Reilly X11 books anymore; you'll want to get your hands on book one of that series if you're going to work with low-level X11 stuff. I think the whole series is available for free online these days.
I haven't programmed X11 for over a decade, so forgive me if I get this wrong.
I believe you can register for mouse movement events on your windows. If you handle such event by storing the window handle in some variable or other, and then handling the event so it doesn't percolate down the tree, then at the time you want to identify the window you can just query the variable.
However this will only work when the mouse is over a window you have registered a suitable event handler for, so you won't know about windows belonging to other applications - unless there is a way to register for events on other people's windows which may be possible.
The advantage over the other answer is that you don't have to traverse the whole tree. The disadvantage is that you need to handle a great many mouse movement events, and it may not work to find other people's windows.
I believe there may also be mouse enter and mouse leave events, which would reduce the amount of processing required.
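A sketch of asking for those crossing events on one of your own windows (dpy and myWindow are assumed to come from your existing setup):

// Request enter/leave notifications; in the event loop, remember the last
// window entered so it can be queried later without walking the tree.
XSelectInput(dpy, myWindow, EnterWindowMask | LeaveWindowMask);
// Event loop:
//   XEvent ev; XNextEvent(dpy, &ev);
//   if (ev.type == EnterNotify) lastEntered = ev.xcrossing.window;
//   else if (ev.type == LeaveNotify && lastEntered == ev.xcrossing.window)
//       lastEntered = None;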