Our program requires the user to hold the Alt+Shift keys together in order to carry out some operations. The problem is that Windows uses this combination to switch locale on some setups. Anyone got any ideas as to how we can "override" this behaviour of Windows whilst our program is running? Can we do some sort of message intervention?
If your code is in C#/MFC, then use LowLevelKeyboardProc.
LowLevelKeyboardProc is an application-defined or library-defined callback function used with the SetWindowsHookEx function. The system calls this function every time a new keyboard input event is about to be posted into a thread input queue.
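For illustration, here is a minimal C++ sketch of such a hook; swallowing Alt+Shift outright and the surrounding function names are assumptions about how you would wire it into your program:

#include <windows.h>

static HHOOK g_hHook = NULL;

// Low-level keyboard hook: sees every key event before it is posted to the
// target thread's input queue, so it can stop Alt+Shift from reaching the
// system's language-switch handling.
static LRESULT CALLBACK LowLevelKeyboardProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION)
    {
        const KBDLLHOOKSTRUCT* kb = (const KBDLLHOOKSTRUCT*)lParam;
        const bool altDown = (kb->flags & LLKHF_ALTDOWN) != 0;

        if (altDown && (kb->vkCode == VK_LSHIFT || kb->vkCode == VK_RSHIFT))
        {
            // Forward the shortcut to your own handling here, then return a
            // non-zero value so the event is not passed any further.
            return 1;
        }
    }
    return CallNextHookEx(g_hHook, nCode, wParam, lParam);
}

void InstallHook(HINSTANCE hInstance)
{
    // WH_KEYBOARD_LL does not require a separate DLL; the callback runs in the
    // context of the thread that installed the hook (it needs a message loop).
    g_hHook = SetWindowsHookExW(WH_KEYBOARD_LL, LowLevelKeyboardProc, hInstance, 0);
}

void RemoveHook(void)
{
    if (g_hHook)
        UnhookWindowsHookEx(g_hHook);
}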
I need to adjust something in a regular desktop program (not a service) when the system emerges from a sleep state. I expected that the program would get a WM_POWERBROADCAST message, but this message is never received.
According to How can I know when Windows is going into/out of sleep or Hibernate mode?, this message is expected without any preconditions.
I tested this on Windows 11 with a simple Win32 program generated by Visual Studio: I just added a "case WM_POWERBROADCAST:" to the message loop, which sets a static variable. After waking up from sleep, the variable is untouched.
You can verify with Spy++: there are only multiple WM_DEVICECHANGE messages, plus 0x02C8 and 0x02C9 messages, and repainting messages.
A workaround is to constantly poll the system, for example with GetTickCount64(), and detect periods of inactivity. Of course, it would be better to avoid polling.
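For completeness, a rough sketch of that polling workaround; the one-second timer and the five-second threshold are arbitrary choices, and it assumes the tick count keeps advancing while the machine sleeps:

#include <windows.h>

static ULONGLONG g_lastTick = 0;

// Periodic timer callback: if the gap since the previous tick is much larger
// than the timer interval, the system was most likely suspended in between.
static VOID CALLBACK PollTimerProc(HWND hWnd, UINT msg, UINT_PTR idEvent, DWORD time)
{
    const ULONGLONG now = GetTickCount64();
    if (g_lastTick != 0 && now - g_lastTick > 5000)
    {
        // treat this as "the system just woke up"
    }
    g_lastTick = now;
}

// Installed from the window's initialization code, e.g.:
// SetTimer(hWnd, 1, 1000, PollTimerProc);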
If you know something about it, please let me know what I am missing!
You have to register before you will get WM_POWERBROADCAST messages.
Take a look at Registering for Power Events; you will see that you need to call RegisterPowerSettingNotification() in order to receive WM_POWERBROADCAST.
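A minimal sketch of that registration for a plain Win32 window; GUID_CONSOLE_DISPLAY_STATE is only an example power setting, and the handler shown is an assumption about how your window procedure is structured:

#define INITGUID            // so the power-setting GUIDs get defined
#include <windows.h>

static HPOWERNOTIFY g_hPowerNotify = NULL;

void RegisterForPowerEvents(HWND hWnd)
{
    // Deliver notifications for the chosen power setting as WM_POWERBROADCAST
    // messages to this window.
    g_hPowerNotify = RegisterPowerSettingNotification(
        hWnd, &GUID_CONSOLE_DISPLAY_STATE, DEVICE_NOTIFY_WINDOW_HANDLE);
}

// Called from the window procedure for WM_POWERBROADCAST.
LRESULT HandlePowerBroadcast(WPARAM wParam, LPARAM lParam)
{
    switch (wParam)
    {
    case PBT_APMRESUMEAUTOMATIC:  // system resumed (no user interaction needed)
    case PBT_APMRESUMESUSPEND:    // system resumed due to user input
        // react to the wake-up here
        break;
    case PBT_POWERSETTINGCHANGE:  // lParam points to a POWERBROADCAST_SETTING
        break;
    }
    return TRUE;
}

void UnregisterPowerEvents(void)
{
    if (g_hPowerNotify)
        UnregisterPowerSettingNotification(g_hPowerNotify);
}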
We are building an experiment with PyGaze, using PsychoPy to handle all the screen-related stuff and the keyboard input. Since we are using multiple timers to trigger events, we are trying to use threading so that the different tasks don't block each other. We are therefore also planning to create an event handler for all inputs, including keyboard input as well as input from an eye tracker and other hardware.
But when we run the event handler in its own thread, we can't get any keyboard input. Everything runs without any errors, but there is simply no keyboard input coming through to the psychopy.event.getKeys() call.
PsychoPy itself is initialised in the main thread, since otherwise it causes problems with OpenGL and pyglet.
Is there any way to get this setup running or is it fundamentally not compatible with PsychoPy?
I'm currently diagnosing an issue with window activation, where a call to SetForegroundWindow is failing, even though I believe it is being called from the foreground thread. The issue is reproducible when input comes from a touch digitizer.
Is there any way to find out which thread received the last input? I tried calling GetWindowThreadProcessId on the handle returned from GetForegroundWindow. However, that appears to return outdated information just after input activation of a window 1).
Since this is only for diagnosing an issue, I'm happy with a solution using undocumented interfaces, if public interfaces aren't available. In case this matters, the target platform is Windows 7 64 bit.
1) GetForegroundWindow returns the same handle irrespective of whether input comes from a touch digitizer or a pointing device. A subsequent call to SetForegroundWindow succeeds when input comes from a pointing device, but fails for touch input.
Since this is only for diagnosing an issue, I'm happy with a solution using undocumented interfaces, if public interfaces aren't available.
You can try installing a system-wide hook for WH_GETMESSAGE with SetWindowsHookEx and monitoring interesting messages like WM_SETFOREGROUND, i.e. log the interesting details before the message is passed on.
Another idea is to hook the SetForegroundWindow API with MHOOK or Detours. As you can see at https://superuser.com/questions/18383/preventing-applications-from-stealing-focus, using mhook looks pretty simple.
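A rough sketch of the hook-procedure side of the first idea; for a system-wide hook this has to live in a DLL that gets loaded into every GUI process, and the messages checked for below are just examples of input messages you might want to log:

#include <windows.h>

static HINSTANCE g_hDll  = NULL;   // set in DllMain
static HHOOK     g_hHook = NULL;

// WH_GETMESSAGE hook: sees every message that the target thread retrieves from
// its queue, so it runs on the thread that actually receives the input.
LRESULT CALLBACK GetMsgProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION)
    {
        const MSG* msg = (const MSG*)lParam;
        if (msg->message == WM_TOUCH || msg->message == WM_LBUTTONDOWN)
        {
            WCHAR buf[64];
            wsprintfW(buf, L"msg 0x%04X received on thread %u\n",
                      msg->message, GetCurrentThreadId());
            OutputDebugStringW(buf);
        }
    }
    return CallNextHookEx(g_hHook, nCode, wParam, lParam);
}

void InstallGlobalHook(void)
{
    // dwThreadId = 0 installs the hook for all threads on the current desktop.
    g_hHook = SetWindowsHookExW(WH_GETMESSAGE, GetMsgProc, g_hDll, 0);
}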
GetWindowThreadProcessId does not return which thread last received input for a window. It tells you which thread created the window and therefore should be processing its input.
You have to know that Windows input works via messages. These messages are delivered to a thread's message queue. This explains directly why each window has an associated thread: that's the thread to which the window's messages are delivered.
In a normal application, all windows are created by a single "foreground" or "UI" thread. Therefore, the answer to "which thread received the last input" is always "the foreground thread". Background threads simply do not receive window messages.
Very few applications create multiple windows on multiple threads, even though this is allowed. In those cases, two threads can simultaneously receive messages, which makes the notion of "last input" invalid. Each thread has its own "last input" in these cases.
Getting back to your stated problem, SetForegroundWindow has no documented thread restrictions. In particular, there is no restriction that it has to be called from the foreground thread. In fact, the documentation states that it can be another process altogether (which certainly means another thread).
You specifically mention "last input", but the restrictions only mention that in a process context: "A process can set the foreground window only if ... the process received the last input event".
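To make that distinction concrete, here is a small diagnostic sketch (documented APIs only) that reports which thread created the current foreground window, which is all GetWindowThreadProcessId can tell you:

#include <windows.h>
#include <stdio.h>

// Returns TRUE if the calling thread is the one that created the current
// foreground window. Note this is the creating thread, not "the thread that
// received the last input".
BOOL ForegroundWindowBelongsToCurrentThread(void)
{
    HWND fg = GetForegroundWindow();
    if (fg == NULL)
        return FALSE;

    DWORD pid = 0;
    DWORD tid = GetWindowThreadProcessId(fg, &pid);
    printf("foreground hwnd=%p pid=%lu tid=%lu, current tid=%lu\n",
           (void*)fg, pid, tid, GetCurrentThreadId());
    return tid == GetCurrentThreadId();
}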
This does not answer the question that was asked, but addresses the root issue that led to this question:
The SetForegroundWindow API imposes several restrictions on which thread can successfully call it. One of the prerequisites is that the calling thread's process received the last input event.
Unfortunately, on Windows 7 this does not include touch input. A process is not eligible to call SetForegroundWindow in response to WM_TOUCH. It is not until the system has synthesized the respective compatibility mouse input event that the process finally gets foreground activation rights.
This has since changed, starting with Windows 8, where touch input counts as first-class input, and calling SetForegroundWindow succeeds in response to a WM_TOUCH message.
So, I have an application which hooks up to a library that handles a number of different tasks in different threads.
In one thread of the library, which is not the library's main thread, an event is created.
However, when I try to open the event from the application which uses the library itself, I always receive an invalid HANDLE.
The event does not use a private namespace, nor does it have any options specified for Win32's kernel object namespaces - it's pretty default.
In fact, here is the function which is used to create the event within the library thread:
CreateEventA(NULL, FALSE, FALSE, eventName);
A later call within the same thread to open the event with the following parameters is valid:
OpenEventA(EVENT_ALL_ACCESS, FALSE, eventName); // returns event without issue
Furthermore, according to MSDN, the following is stated here:
The creating thread can also specify a name for the event object. Threads in other processes can open a handle to an existing event object by specifying its name in a call to the OpenEvent function.
Apparently this isn't a mistake, either, given that the same gist is repeated here:
The process that creates an object can use the handle returned by the creation function (CreateEvent, CreateMutex, CreateSemaphore, or CreateWaitableTimer). Other processes can open a handle to the object by using its name, or through inheritance or duplication.
I've looked through MSDN to find something which would explicitly state scenarios in which this would not be the case, and I have yet to find anything.
I can also state that I've seen the event active in the library's thread when querying for the event within the application - which, as far I can see, rules out the possibility of it simply not being created.
Can someone shed some light on why the event returns NULL from OpenEvent when queried via the application?
Update
In response to @FrerichRaabe:
The error code returned is 2, or ERROR_FILE_NOT_FOUND.
@IInspectable:
Interesting; I forgot to mention that I actually have tried using the global namespace for the event, which obviously didn't work either. The same error as mentioned above is what's returned as well...
Your problem is caused by two unfortunate decisions: using an event name that contains non-ASCII characters, and calling the ANSI version of the API (indicated by the trailing A).
Since the system uses Unicode internally, string parameters are converted to Unicode whenever you call an ANSI API. The conversion of non-ASCII characters is controlled by the thread's current locale. This explains why a call to OpenEventA on the same thread succeeds, while it fails on another thread.
To solve this, replace the calls to the ANSI APIs with their respective Unicode versions CreateEventW and OpenEventW.
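For example, roughly (the event name used here is a placeholder):

// Unicode versions, so the name is not run through an ANSI code-page conversion.
HANDLE hCreated = CreateEventW(NULL, FALSE, FALSE, L"MyLibrary.ReadyEvent");

// Elsewhere, in the application:
HANDLE hOpened = OpenEventW(SYNCHRONIZE | EVENT_MODIFY_STATE, FALSE,
                            L"MyLibrary.ReadyEvent");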
It might be the EVENT_ALL_ACCESS flag. Do you really need it?
Usually "SYNCHRONIZE | EVENT_MODIFY_STATE" is sufficient for events.
Try that and let us know.
I'm writing a driver for a device running Windows Embedded Compact 7, on which applications written in .NET 3.5 will be running. My requirement is that I need to send some custom-defined system events (for conditions that occur in the driver) to these applications, so that the corresponding event handlers written in the application are executed when an event is raised.
So,
What should I do to invoke/raise such events?
Which function is to be used?
How does a system event differ from a message?
How to add Event handlers in a .NET application?
TIA.
Using the plain Win32 API, I would create a named event (i.e. supply a name to CreateEvent()). This can be used across process boundaries, including across the kernel/userspace boundary. You would then simply use WaitForSingleObject() and related functions to check the event state.
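Roughly, the application side could look like this; the event name and the worker-thread wiring are assumptions:

#include <windows.h>

// Worker thread that blocks on the named event the driver signals.
DWORD WINAPI DriverEventWatcher(LPVOID unused)
{
    // CreateEvent opens the existing event if the driver already created one
    // with the same name; "MyDriverEvent" is a placeholder.
    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, TEXT("MyDriverEvent"));
    if (hEvent == NULL)
        return 1;

    for (;;)
    {
        if (WaitForSingleObject(hEvent, INFINITE) == WAIT_OBJECT_0)
        {
            // the driver signalled the event: notify the application here,
            // e.g. by raising the corresponding .NET event via a callback
        }
    }
}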
If you have a stream driver, you can also call ReadFile() from the application and simply stall inside the corresponding handler function of the driver. This makes it pretty easy to attach data to the event, too. Further, it provides separation between different processes that access the driver, or even different instances within the same process. Compare this with the event above, which is effectively visible system-wide and can also be set by different processes, although you can restrict this to some extent.
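The application side of that approach might look roughly like this; "DRV1:" is a placeholder device name, and the payload format is whatever the driver returns from its read handler:

#include <windows.h>

// Worker thread that performs blocking reads against the stream driver.
DWORD WINAPI DriverReadLoop(LPVOID unused)
{
    HANDLE hDevice = CreateFile(TEXT("DRV1:"), GENERIC_READ, 0, NULL,
                                OPEN_EXISTING, 0, NULL);
    if (hDevice == INVALID_HANDLE_VALUE)
        return 1;

    BYTE  buffer[64];
    DWORD bytesRead = 0;
    for (;;)
    {
        // ReadFile blocks inside the driver's read handler until the driver
        // has an event to report, then completes with the event data.
        if (!ReadFile(hDevice, buffer, sizeof(buffer), &bytesRead, NULL))
            break;
        // dispatch buffer[0..bytesRead) to the application's handlers here
    }
    CloseHandle(hDevice);
    return 0;
}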
Another alternative is to use window messages (PostMessage(), SendMessage(), etc.), which also work across process boundaries. I'm not sure whether this works from the kernel, though. These would then end up in the "normal" message queue of, for example, the application's main window or any other window you previously told the driver about. Compared to the other two approaches, you need a window as the target, so it only works one way, and you don't know where a message came from.