I have a message-only window (ATL::CWindowImpl) that registers itself for raw input using the RIDEV_INPUTSINK flag, meaning it gets all input regardless of whether the window is the foreground window. This works great when there's only one instance of that window.
However, when I create more than 1 instance of my window, only one receives the WM_INPUT messages (I'm currently creating two, and only the second one to be created gets the messages).
RegisterRawInputDevices (using RIDEV_INPUTSINK | RIDEV_NOLEGACY) is succeeding during the creation of both windows. Also, the window not receiving raw input is still receiving other messages, so it's not a problem with the window itself...
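For reference, the registration looks roughly like the sketch below; the usage page/usage values are placeholders (the post doesn't say which devices are registered), and hwnd stands for the message-only window's handle.

```cpp
// Rough sketch of the registration described above. The keyboard usage
// page/usage is an assumption; substitute whatever devices you actually need.
#include <windows.h>

bool RegisterForRawKeyboardInput(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;                     // HID_USAGE_PAGE_GENERIC
    rid.usUsage     = 0x06;                     // HID_USAGE_GENERIC_KEYBOARD
    rid.dwFlags     = RIDEV_INPUTSINK | RIDEV_NOLEGACY;
    rid.hwndTarget  = hwnd;                     // this instance's message-only window

    return RegisterRawInputDevices(&rid, 1, sizeof(rid)) != FALSE;
}
```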
If it's relevant, I'm using the VC11 beta, and windows are created and dispatching messages on different threads (using std::thread).
Is this an API limitation (i.e. you are limited to one input sink per process)? Or is there a way to get this working?
Thanks in advance.
EDIT:
Right now my workaround is to have just one window receive the input and pass the messages on to the other windows. However, this is a mess, and it won't work in the case I actually care about (my app loading plugins which may want raw input; I don't want them to have to register with my app unless I really have to do it that way...).
From MSDN (here and here), the whole Raw Input API consistently talks about the application, not the window, which means that an application registering for raw input is treated by the OS as one entity. You indirectly proved this by registering a second receiving window: the first one simply stopped receiving, because the OS delivers raw input to the application (represented by exactly one window as the sink).
Related
I need to globally hook mouse clicks and block the last click if the delay between two clicks is less than a set threshold.
I wrote it for Windows using a WH_MOUSE_LL hook.
I was unable to find any solution. Is it even possible to globally block a mouse click in X11?
Windows full code
As far as I know the standard X11 protocol doesn't allow this. The XInput 2.0 extension might, but I doubt it. Windows assumes a single event queue that every program listens to, so a program can intercept an event and prevent it from being sent further down the queue to other listeners. In X11, by contrast, every client has its own independent queue, and every client that registers interest in an event receives an independent copy of it in its queue. Under normal circumstances this makes it impossible for an errant program to block other programs from running; but it also means that, for those times when a client must block other clients, it has to do a server grab to prevent the server from processing events for any other client.
Which means you can either
use an X server proxy (not hard to write, but noticeably slower)
or
do it at the input device level. /dev/input/event<n> gives you the input events. You can read off the keypresses there and decide whether they should propagate further or be consumed (a minimal read loop is sketched below). Unfortunately there's no real documentation for this, but the header <linux/input.h> is fairly self-explanatory.
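A minimal read loop under those caveats, assuming the keyboard shows up as /dev/input/event0 (the device path is an assumption) and the process has permission to read it:

```cpp
// Sketch only: print key events from an evdev node. Actually blocking an
// event from propagating would additionally require grabbing the device
// (EVIOCGRAB) and re-injecting the allowed events via uinput.
#include <fcntl.h>
#include <unistd.h>
#include <linux/input.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/input/event0", O_RDONLY);   // device path is an assumption
    if (fd < 0) { std::perror("open"); return 1; }

    input_event ev;
    while (read(fd, &ev, sizeof ev) == static_cast<ssize_t>(sizeof ev))
    {
        if (ev.type == EV_KEY)                      // key or button press/release
            std::printf("code=%u value=%d\n", ev.code, ev.value);
    }
    close(fd);
    return 0;
}
```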
My application's user interface consists of two windows: the console (handled by ncurses) and an X11 window for graphics. I would like to handle key events in a centralized way. That is, no matter which of the two windows is active, the same event loop should handle all the key events. I already have an event loop for X11 events. All that remains to be done is to forward all the console events to the X11 window.
The main building block to achieve this forwarding is found here. The only other thing I need is to be able to translate from the value returned by getch() to X11 keycode. After about four hours of searching the web, I found this code, which is part of qemu. However, when I compare the mapping it provides with the output of xev, the two do not match. For example, for the Home key, xev gives 110, while the mentioned mapping gives 71 | 0x0100, which is 327. Are these two different kinds of keycodes? What am I missing?
Hmm, mixing application frameworks is, almost by definition, difficult.
I think the best way to achieve what you want is to have two separate processes or threads, one for the console and the other for the X11 application, each with the relevant event loop handler. To join them together, use an IPC channel, either a pipe or a socket. You should be able to make the socket/pipe an input to the X11 event loop handler with its own callback. On the console side you can have a select() waiting on the socket or STDIN; this lets you call getch() when there's a keypress ready, or read from the socket when the X11 side has sent something through it. If you used something like ZeroMQ in place of the socket, even better.
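A minimal sketch of that console-side loop; x11_fd (the IPC socket descriptor) and the hand-rolled event struct are assumptions, not a fixed API:

```cpp
// Sketch only: multiplex STDIN and the IPC socket with select(), so getch()
// is only called when a keypress is actually pending.
#include <sys/select.h>
#include <unistd.h>
#include <ncurses.h>

void console_loop(int x11_fd)
{
    for (;;)
    {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(STDIN_FILENO, &fds);
        FD_SET(x11_fd, &fds);

        if (select(x11_fd + 1, &fds, NULL, NULL, NULL) < 0)
            break;                              // select() failed; bail out

        if (FD_ISSET(STDIN_FILENO, &fds))
        {
            int ch = getch();                   // won't block: input is pending
            (void)ch;
            // ...populate your own event struct and write() it to x11_fd...
        }
        if (FD_ISSET(x11_fd, &fds))
        {
            // ...read() an event struct sent by the X11 side and handle it...
        }
    }
}
```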
So, what would you send through the socket? You would have to define your own event structure to pass between the console and the X11 application. Each side would populate and dispatch one of these when it needs to send something to the other. The types of event you'd need to describe would include things like quit, keypress (plus the keypress data), etc.
Most likely you'd arrange the X11 end so that the socket-reading callback reads the structure from the socket, interprets it, and decides which callback should then be called directly. If your key presses are only for selecting menu entries, buttons, etc., then this might be a not-too-bad (if not brilliant) way of avoiding the mapping problem.
This does mean having two event loop handlers, a socket, and two processes/threads, but it avoids blending the two into a single thing. It also means your console could be on a completely different machine! If you used ZeroMQ, you could easily have multiple consoles connected to the X11 application in a PUSH/PULL configuration; perhaps absurd, but possible.
I'm currently diagnosing an issue with window activation, where a call to SetForegroundWindow is failing, even though I believe it is being called from the foreground thread. The issue is reproducible when input comes from a touch digitizer.
Is there any way to find out which thread received the last input? I tried calling GetWindowThreadProcessId on the handle returned from GetForegroundWindow. However, that appears to return outdated information just after input activation of a window 1).
Since this is only for diagnosing an issue, I'm happy with a solution using undocumented interfaces, if public interfaces aren't available. In case this matters, the target platform is Windows 7 64 bit.
1) GetForegroundWindow returns the same handle irrespective of whether input comes from a touch digitizer or a pointing device. A subsequent call to SetForegroundWindow succeeds when input comes from a pointing device but fails for touch input.
Since this is only for diagnosing an issue, I'm happy with a solution using undocumented interfaces, if public interfaces aren't available.
You can try installing a system-wide WH_GETMESSAGE hook with SetWindowsHookEx and monitoring interesting messages like WM_SETFOREGROUND, i.e. log the interesting stuff before passing the message on down the hook chain.
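A rough sketch of that approach; the hook procedure has to live in a DLL that ends up loaded into the other processes, and the logging itself is left as a placeholder:

```cpp
// Sketch only: system-wide WH_GETMESSAGE hook. GetMsgProc must be exported
// from a DLL; how you log from inside foreign processes is up to you.
#include <windows.h>

// In the hook DLL:
LRESULT CALLBACK GetMsgProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code >= 0)
    {
        const MSG* msg = reinterpret_cast<const MSG*>(lParam);
        // ...log msg->hwnd / msg->message for the messages you care about...
    }
    // The hook-handle argument is documented as ignored, so NULL is fine here.
    return CallNextHookEx(NULL, code, wParam, lParam);
}

// In the diagnosing application (hookDll = handle of the loaded hook DLL):
HHOOK InstallGetMessageHook(HINSTANCE hookDll)
{
    return SetWindowsHookEx(WH_GETMESSAGE, GetMsgProc, hookDll, 0 /* all threads */);
}
```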
Another idea is to hook the SetForegroundWindow API itself, with Mhook or Detours. As you can see here https://superuser.com/questions/18383/preventing-applications-from-stealing-focus, using Mhook looks pretty simple.
GetWindowThreadProcessId does not return which thread last received input for a window. It tells you which thread created the window and therefore should be processing the input.
You have to know that Windows input works via messages. These messages are delivered to a thread message queue. This explains directly why each window has an associated thread: that's the thread to which the message is delivered.
In a normal application, all windows are created by a single "foreground" or "UI" thread. Therefore, the answer to "which thread received the last input" is always "the foreground thread". Background threads simply do not receive window messages.
Very few applications create multiple windows on multiple threads, even though this is allowed. In those cases, two threads can simultaneously receive messages, which makes the notion of "last input" invalid. Each thread has its own "last input" in these cases.
Getting back to your stated problem, SetForegroundWindow has no documented thread restrictions. In particular, there is no restriction that it has to be called from the foreground thread. In fact, the documentation states that the caller can be another process altogether (which certainly means another thread).
You specifically mention "last input", but the restrictions only mention that in a process context: "A process can set the foreground window only if ... the process received the last input event".
This does not answer the question that was asked, but addresses the root issue that led to this question:
The SetForegroundWindow API imposes several restrictions on which thread can successfully call it. One of the prerequisites is that the calling thread's process received the last input event.
Unfortunately, on Windows 7 this does not include touch input. A process is not eligible to call SetForegroundWindow in response to WM_TOUCH; it is not until the system has synthesized the corresponding compatibility mouse input event that the process finally gets foreground activation rights.
This has since changed, starting with Windows 8, where touch input counts as first-class input, and calling SetForegroundWindow succeeds in response to a WM_TOUCH message.
I'm writing a driver for a device running Windows Embedded Compact 7, on which applications written in .NET 3.5 will be running. My requirement is that I need to send some custom-defined system events (for certain conditions that occur in the driver) to these applications, so that the corresponding event handlers written in the application are executed when an event is raised.
So,
What should I do to invoke/raise such events?
Which function is to be used?
How does a system event differ from a message?
How to add Event handlers in a .NET application?
TIA.
Using the plain Win32 API, I would create a named event (i.e. supply a name to CreateEvent()). This can be used across process boundaries, including across the kernel/userspace boundary. You would then simply use WaitForSingleObject() and related functions to check the event state.
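Roughly like this, with the event name "MyDriverEvent" being just an illustration:

```cpp
// Sketch only: named-event signalling between the driver and an application.
#include <windows.h>

// Driver side: create the event once, signal it whenever the condition occurs.
static HANDLE g_hEvent = CreateEvent(NULL, FALSE /* auto-reset */, FALSE, TEXT("MyDriverEvent"));

void OnDriverCondition()
{
    SetEvent(g_hEvent);
}

// Application side: open the same name and block on it. A .NET 3.5 app could
// wait on the same named event (e.g. via EventWaitHandle, or by P/Invoking
// WaitForSingleObject).
DWORD WINAPI EventListenerThread(LPVOID)
{
    HANDLE hWait = CreateEvent(NULL, FALSE, FALSE, TEXT("MyDriverEvent"));
    while (WaitForSingleObject(hWait, INFINITE) == WAIT_OBJECT_0)
    {
        // ...run whatever handler should react to the driver's signal...
    }
    return 0;
}
```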
If you have a stream driver, you can also call ReadFile() from the application and simply stall inside the corresponding handler function of the driver. This makes it pretty easy to attach data to the event, too. Further, it provides separation between different processes that access the driver, or even different instances within the same process. Compare this with the event above, which is visible effectively system-wide and can also be set by other processes, although you can restrict this to some extent.
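The application side of that could look like this; the device name "EVT1:" and the payload struct are illustrative only:

```cpp
// Sketch only: block in ReadFile() until the driver's read handler returns
// an event. Device name and payload layout are assumptions.
#include <windows.h>

struct DriverEvent { DWORD code; DWORD data; };     // illustrative payload

DWORD WINAPI DriverReadThread(LPVOID)
{
    HANDLE hDev = CreateFile(TEXT("EVT1:"), GENERIC_READ, 0, NULL,
                             OPEN_EXISTING, 0, NULL);
    if (hDev == INVALID_HANDLE_VALUE)
        return 1;

    DriverEvent evt;
    DWORD bytesRead = 0;
    while (ReadFile(hDev, &evt, sizeof(evt), &bytesRead, NULL) && bytesRead == sizeof(evt))
    {
        // ...hand evt to the managed layer (e.g. via a P/Invoked callback)...
    }
    CloseHandle(hDev);
    return 0;
}
```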
Another alternative is to use window messages (PostMessage(), SendMessage(), etc.), which also work across process boundaries; I'm not sure whether this works from the kernel, though. These would end up in the "normal" message queue of, e.g., the application's main window or any other window you previously told the driver about. Compared to the other two approaches, you need a window as the target, it only works one way, and you don't know where a message came from.
I'm working on a Windows Mobile 6.5 application that has a dialog box that displays input from a camera and has a button to save a snapshot of the stream. The camera API recommends calling the function that updates the view of stream when the application is idle, via the Windows Message Loop, but doesn't get any more specific than that. After much Googling, I still can't find anything helpful in terms of actually implementing something like this.
Does anyone know how this might be achieved?
You'll have to implement a message loop, not using the conventional GetMessage, which blocks until a message exists in the thread's message queue[1], but rather using PeekMessage, which returns FALSE if no message is available.
If it returns FALSE, you do your idle processing. Note that you should divide the idle processing into small enough chunks that the message loop doesn't make your app unresponsive.
This is also a classic alternative to threading on a single CPU or core.
[1] or needs to be synthesized (painting or timer messages)
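Something along these lines, with OnIdle() standing in for the camera-preview update mentioned in the question:

```cpp
// Sketch only: PeekMessage-based loop that does idle work whenever the queue
// is empty. Keep OnIdle() short so the UI stays responsive.
#include <windows.h>

void OnIdle();   // placeholder for the per-frame camera preview update

int RunMessageLoop()
{
    MSG msg;
    for (;;)
    {
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            if (msg.message == WM_QUIT)
                return static_cast<int>(msg.wParam);
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        OnIdle();    // queue is empty: do one small chunk of idle work
    }
}
```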