I have my own keyboard and a USB barcode scanner that works like a second keyboard.
I would like to use the main keyboard to control the computer (as you usually do) and the second keyboard (that is actually the barcode scanner) to log all the input into a file.
Is this possible to do?
The point is, I could be in a browser, Word, Excel, or any other program. I would use the main keyboard to type into that program, while in the background the second keyboard (the barcode scanner) would at the same time be writing to a log file. The program I am using at that moment would never see the second keyboard's input.
Thanks, all suggestions are very welcome.
You can use the Raw Input API to monitor keyboard events before the OS processes them. The API will tell you which device is sending each event, so you can log events from just the scanner.
However, the Raw Input API does not allow you to block input, so to stop the scanner's events from being processed as normal keyboard events, you would need to use SetWindowsHookEx() to set up a keyboard hook that dismisses them.
However, SetWindowsHookEx() does not report which device sent each event, so you will have to coordinate the two APIs manually. When Raw Input detects a keyboard event, set a flag based on which device the event came from. When the hook detects the corresponding event, check the flag and dismiss the event if the flag indicates the scanner device.
See Combining Raw Input and keyboard Hook to selectively block input from multiple keyboards on CodeProject.
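The coordination described above can be sketched roughly as follows. This is a simplified illustration, not the CodeProject article's implementation: the scanner's device HANDLE (`g_scannerDevice`) is assumed to have been identified beforehand via GetRawInputDeviceList/GetRawInputDeviceInfo, and the real article carefully buffers decisions to handle ordering races between WM_INPUT and the hook, which this sketch ignores.

```cpp
#include <windows.h>

static HANDLE g_scannerDevice = nullptr;   // identified at startup (assumption)
static volatile bool g_fromScanner = false;

// WM_INPUT handler: Raw Input tells us which device sent the keystroke.
void OnRawKeyboardInput(LPARAM lParam)
{
    RAWINPUT ri;
    UINT size = sizeof(ri);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &ri, &size,
                        sizeof(RAWINPUTHEADER)) == (UINT)-1)
        return;

    if (ri.header.dwType == RIM_TYPEKEYBOARD) {
        g_fromScanner = (ri.header.hDevice == g_scannerDevice);
        if (g_fromScanner) {
            // Log ri.data.keyboard.VKey to the file here.
        }
    }
}

// Low-level keyboard hook: cannot see the device, but can swallow the event.
LRESULT CALLBACK KeyboardHook(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && g_fromScanner)
        return 1;  // dismiss the scanner's keystroke
    return CallNextHookEx(nullptr, code, wParam, lParam);
}
```

In a real program the single boolean flag is not enough; keystrokes from the two keyboards can interleave, which is why the article keeps a queue of per-event decisions.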
Related
Is there a way to detect simulated keyboard/mouse input on Windows? For example, a user types something on his keyboard vs. SendKeys/PostMessage/the on-screen keyboard. Is there a way that I can distinguish between the two?
EDIT: Perhaps an example would help. I am making a game and want to distinguish real input from keyboard/mouse messages synthesized through the WinAPI.
The only way to distinguish between "real" input and "simulated" input (assuming it is being generated with keybd_event()/mouse_event() or SendInput()) is to use a low-level keyboard/mouse hook via SetWindowsHookEx(). The KBDLLHOOKSTRUCT and MSLLHOOKSTRUCT structures passed to WH_KEYBOARD_LL and WH_MOUSE_LL hook callbacks carry LLKHF_INJECTED/LLMHF_INJECTED flags for simulated input.
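A minimal sketch of such a hook, checking the injected flag on each keystroke (installing it requires a thread with a message loop, which is omitted here):

```cpp
#include <windows.h>

LRESULT CALLBACK LowLevelKeyboardProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION) {
        const KBDLLHOOKSTRUCT* kb = (const KBDLLHOOKSTRUCT*)lParam;
        if (kb->flags & LLKHF_INJECTED) {
            // Keystroke was synthesized (SendInput/keybd_event).
            return 1;  // swallow it, or just record that it was simulated
        }
    }
    return CallNextHookEx(nullptr, code, wParam, lParam);
}

// Installation:
// HHOOK h = SetWindowsHookEx(WH_KEYBOARD_LL, LowLevelKeyboardProc,
//                            GetModuleHandle(nullptr), 0);
```

The mouse equivalent works the same way, testing `flags & LLMHF_INJECTED` in the MSLLHOOKSTRUCT passed to a WH_MOUSE_LL hook.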
Starting from Windows 8 there's the GetCurrentInputMessageSource function. You can call it and check the originId enum for the following value:
IMO_INJECTED - The input message has been injected (through the SendInput function) by an application that doesn't have the UIAccess attribute set to TRUE in its manifest file.
I might be wrong, but the on-screen keyboard (and other applications that simulate user input) most probably use the SendInput API:
SendInput operates at the bottom level of the input stack. It is just a backdoor into the same input mechanism that the keyboard and mouse drivers use to tell the window manager that the user has generated input.
Source: http://blogs.msdn.com/b/oldnewthing/archive/2010/12/21/10107494.aspx
So there is probably no way to tell whether the input is coming from a "real" keyboard or not.
I'm in the middle of adding custom Windows Touchpad handling into my Windows C++ desktop application. From a high level, the app has its own cursor and objects that can be clicked. The cursor needs to be directly controlled by a Windows Precision Touchpad, and completely decoupled from the standard Windows mouse. I'm accomplishing this via processing raw input from WM_INPUT messages, and using low level mouse hooks to prevent controlling the normal mouse pointer.
I'm able to interpret single and multi-finger gestures just fine using the WM_INPUT data, but haven't figured out how to get "clicks" or "taps" from the touchpad. Legacy Mouse Input events will obviously not work for my use case since they:
Aren't global and require my app to be focused
Are generated by any connected mouse/pointing device, not just the touchpad I registered for.
Interact at the location of the Windows mouse pointer, which is not driving the cursor in my app.
Are clicks/taps contained in the WM_INPUT reports, and I'm just not able to find them, or is there another way I can capture raw clicks from only the touchpad in my application?
The RAWMOUSE struct that comes with the WM_INPUT mouse message contains usButtonFlags with the mouse button up/down transition state.
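Reading those transitions looks roughly like this (a sketch of a WM_INPUT handler; error handling is minimal):

```cpp
#include <windows.h>

void OnRawMouseInput(LPARAM lParam)
{
    RAWINPUT ri;
    UINT size = sizeof(ri);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &ri, &size,
                        sizeof(RAWINPUTHEADER)) == (UINT)-1)
        return;

    if (ri.header.dwType == RIM_TYPEMOUSE) {
        const RAWMOUSE& m = ri.data.mouse;
        if (m.usButtonFlags & RI_MOUSE_LEFT_BUTTON_DOWN)  { /* left press   */ }
        if (m.usButtonFlags & RI_MOUSE_LEFT_BUTTON_UP)    { /* left release */ }
        if (m.usButtonFlags & RI_MOUSE_RIGHT_BUTTON_DOWN) { /* right press  */ }
        // ri.header.hDevice identifies which device generated the event,
        // so events from devices other than the touchpad can be ignored.
    }
}
```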
You cannot get clicks/taps because, AFAIK, the classic Win32 API is not suitable for touch input at all - it just emulates a mouse in the case of a touchpad.
According to the touchpad spec, all compatible Windows Precision Touchpads are HID devices that send touchpad data in their input reports. They should contain a corresponding Top Level Collection (Usage Page 0x0D "Digitizers", Usage 0x05 "Touch Pad") - and this will be seen as a separate HID device from Win32 user mode. See the sample HID Report Descriptor.
Knowing this, you can register to receive touchpad data with a RegisterRawInputDevices call. After that you'll receive a WM_INPUT message with a RAWHID struct for each touchpad input report - this needs to be parsed manually (according to the device's preparsed HID Report Descriptor data, etc.).
It's not easy but doable.
See example code here.
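The registration step described above can be sketched as follows (the usage page/usage values come from the touchpad spec quoted earlier; parsing the resulting RAWHID reports is the hard part and is only hinted at in the comment):

```cpp
#include <windows.h>

// Register to receive raw HID reports from Precision Touchpads.
bool RegisterForTouchpad(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x0D;            // Digitizers page
    rid.usUsage     = 0x05;            // Touch Pad usage
    rid.dwFlags     = RIDEV_INPUTSINK; // receive input even when unfocused
    rid.hwndTarget  = hwnd;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid)) == TRUE;
}

// Each report then arrives as WM_INPUT with dwType == RIM_TYPEHID.
// The raw bytes in RAWHID must be interpreted against the device's
// preparsed report descriptor (HidP_GetCaps / HidP_GetUsages, hid.dll),
// which is where contact positions and the tap/click state live.
```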
Update: There are also the WM_TOUCH and WM_GESTURE messages, available since Windows 7. Maybe that's what you're looking for. Sample code.
You can use PostMessage(applicationWinhandle, WM_INPUT, wparam, mouseDeviceHandle) to send a WM_INPUT message to your application, and then hook GetRawInputData or GetRawInputBuffer to supply the data.
Is there any way to figure out where did a mouse event come from?
I mean, if I code a C/C++ program on Windows and get a mouse click event in it, how can I find out whether the event came from a mouse driver or a touchpad, or whether it was sent by an application (simulating a mouse event by sending an appropriate message like WM_LBUTTONDOWN)?
Thanks for any help :)
This is not possible for an application in user mode - mouse events generally don't provide documented info on the event source. There is a way to obtain some extra message information via the Win32 API function GetMessageExtraInfo, but there is no safe way to interpret this data: it is very device-specific, undocumented, and never guaranteed to be present.
To solve this task you need to develop your own mouse filter driver, based on the Windows DDK sample.
Its callback has an input parameter of type MOUSE_INPUT_DATA - a structure containing the mouse event info. It includes the field UnitId:
UnitId Specifies the unit number of the mouse device. A mouse device name has the format \Device\PointerPortN, where the suffix N is the unit number of the device. For example, a device, whose name is \Device\PointerPort0, has a unit number of zero, and a device, whose name is \Device\PointerPort1, has a unit number of one.
GetAsyncKeyState function can be used to check if the button was pressed, and unfortunately SendInput cannot trick this function.
So you can simulate a mouse click, but the program can check if the button was really pressed.
So creating your own mouse driver is better.
I needed a safe way to simulate mouse/keyboard behavior for my bot, and I wrote a detailed article on my blog: http://poker-botting.blogspot.fr/2012/11/how-to-simulate-mouse-and-keyboard.html
On a windows pc, I have 2 USB keyboards attached.
When a key is pressed on either keyboard it is sent to Windows, and then Windows notifies applications that a key has been pressed.
What I want is to capture input from one of these two keyboards BEFORE it is sent to other applications, stop it, and do whatever I want with it. The other keyboard has to work normally.
I managed to differentiate keyboard inputs after they are sent to applications via Raw Input, but how do I do this before they are sent to the applications?
What I want to do is have two keyboard, one keyboard that functions normally, and another keyboard that is exclusively used for hotkeys/custom macros.
Is this even possible in Windows? And if yes, how can I accomplish this?
I have been thinking a lot over keyboard handling. How does it work? I can't seem to google me to a good explaining.
I know that a keyboard interrupt is made every time a key is pressed. The processor halts whatever it is processing and loads the keyboard data from the keyboard buffer, storing it in a system-level buffer.
But what happens next? Let's take a practical example. What happens when I run the following piece of code:
#include <iostream>
#include <string>

int main() {
    std::string s;
    std::cin >> s;  // blocks until a whitespace-delimited token is available
}
Does cin read from a user-level representation of the system-level keyboard buffer? That would make perfect sense in my head, because then two or more processes could read from the same buffer, and that way I wouldn't lose any key presses. But does it work this way?
I know I'm talking in very general terms. The OS I'm using is OS X.
Except in rare situations, your keyboard and display are managed by a Window Manager: X11, Gnome, KDE, Carbon, Cocoa or Windows.
It works like this.
The keyboard driver is part of the OS.
The window manager is a privileged process, which acquires the device during startup. The window manager "owns" the device. Exclusively.
The interrupts go to OS.
The OS responds to the interrupt by queueing it. Eventually -- when there's nothing of a higher priority to do -- it captures the keyboard input from the interrupt and buffers it.
The owning process (the window manager) is reading this buffer. From this, it creates keyboard events.
Your application works through the window manager.
Example 1 -- You're running a command-line application in a terminal window. When the terminal window is frontmost, the window manager directs events at the terminal window. Keyboard events become the stdin stream.
Example 2 -- You're running a GUI application in your own application's window. When your application's window is frontmost, the window manager directs events at your application window. Keyboard events are available for your various GUI controls to process. Some keyboard events may cycle among the controls or activate buttons.