I'm writing a driver for a device running Windows Embedded Compact 7, on which applications written in .NET 3.5 will be running. My requirement is that I need to send some custom-defined system events (for certain conditions that occur in the driver) to these applications, so that the corresponding event handlers written in the application are executed when an event is raised.
So,
What should I do to invoke/raise such events?
Which function is to be used?
How does a system event differ from a message?
How do I add event handlers in a .NET application?
TIA.
Using the plain Win32 API, I would create a named event (i.e. supply a name to CreateEvent()). This can be used across process boundaries, including across the kernel/userspace boundary. You would then simply use WaitForSingleObject() and related functions to check the event state.
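A minimal sketch of that approach, assuming a hypothetical event name "MyDriverEvent" (calling CreateEvent() with a name that already exists simply opens the existing event, which is what makes this work across processes):

#include <windows.h>

// Driver side: create (or open) the named auto-reset event and signal it
// whenever the condition of interest occurs.
void SignalFromDriver()
{
    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, TEXT("MyDriverEvent"));
    if (hEvent) {
        SetEvent(hEvent);       // wake anyone waiting on the event
        CloseHandle(hEvent);
    }
}

// Application side: a worker thread waits on the same named event and reacts
// whenever the driver signals it.
DWORD WINAPI WaitForDriverEvent(LPVOID)
{
    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, TEXT("MyDriverEvent"));
    while (WaitForSingleObject(hEvent, INFINITE) == WAIT_OBJECT_0) {
        // the driver signalled: run your handler / notify the managed code here
    }
    CloseHandle(hEvent);
    return 0;
}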
If you have a stream driver, you can also call ReadFile() from the application and simply block inside the corresponding handler function of the driver. This makes it pretty easy to attach data to the event, too. Further, it provides separation between different processes that access the driver, or even different instances within the same process. Compare this with the event above, which is effectively visible system-wide and can also be set by other processes, although you can restrict this to some extent.
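A sketch of the application side of that idea; "EVT1:" is a made-up device name for a hypothetical stream driver whose read handler blocks until an event occurs:

#include <windows.h>

DWORD WINAPI EventReaderThread(LPVOID)
{
    HANDLE hDev = CreateFile(TEXT("EVT1:"), GENERIC_READ, 0, NULL,
                             OPEN_EXISTING, 0, NULL);
    if (hDev == INVALID_HANDLE_VALUE)
        return 1;

    for (;;) {
        BYTE buf[64];
        DWORD read = 0;
        // Blocks inside the driver until it has something to report;
        // the payload can describe which event occurred.
        if (!ReadFile(hDev, buf, sizeof(buf), &read, NULL))
            break;
        // dispatch based on the data in buf...
    }
    CloseHandle(hDev);
    return 0;
}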
Another alternative is to use window messages (PostMessage(), SendMessage() etc.), which also work across process boundaries. I'm not sure whether this works from the kernel, though. These messages would end up in the "normal" message queue of e.g. the application's main window, or any other window you previously told the driver about. Compared to the other two approaches, you need a window as the target, so it only works one way, and you don't know where a message came from.
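A sketch of that variant, using a private message number in the WM_APP range (the value and name are arbitrary examples):

#include <windows.h>

#define WM_MY_DRIVER_EVENT (WM_APP + 1)

// Sender side (any process or thread that knows the target HWND):
void NotifyWindow(HWND hwndTarget, WPARAM eventCode)
{
    PostMessage(hwndTarget, WM_MY_DRIVER_EVENT, eventCode, 0);
}

// Receiver side: handle the message in the window procedure of the target window.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_MY_DRIVER_EVENT:
        // wParam identifies which event occurred; react to it here
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}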
Related
I need both event loops: one for Windows service (or Linux daemon) and another for Qt event queue QCoreApplication::exec() (or QApplication::exec() or even QEventLoop::exec()).
Can I have both at the same time in a single thread? Or should I create a separate thread for one of them? In the latter case, how should the interaction between QObjects and the "window"/"service" thread be arranged?
A Windows service requires either a message-only window with a window procedure to receive and process messages from Windows, or a Service Control Handler function. I want to be able to process both kinds of events: those coming from Windows and the Qt-specific ones.
Can I use QEventLoop/QCoreApplication/QApplication::processEvents() to process Qt events in between the events that come from Windows? How would that affect service responsiveness and QTimer responsiveness?
Try the QtService library. QtService is useful for developing Windows services and Unix daemons:
https://github.com/qtproject/qt-solutions/tree/master/qtservice
Alternatively, you can implement it yourself, as is done in the QtService library:
https://github.com/qtproject/qt-solutions/blob/master/qtservice/src/qtservice_win.cpp#L556
The Qt event loop integrates native notifications/events on all platforms. A nativeEventFilter is how you react to native events when you need to.
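A minimal sketch of such a filter (Qt 5 signature shown; Qt 6 uses qintptr* for the result parameter):

#include <QCoreApplication>
#include <QAbstractNativeEventFilter>
#include <windows.h>

class ServiceMessageFilter : public QAbstractNativeEventFilter
{
public:
    bool nativeEventFilter(const QByteArray &eventType, void *message, long *result) override
    {
        if (eventType == "windows_generic_MSG") {
            MSG *msg = static_cast<MSG *>(message);
            // inspect msg->message here and react to the native events you care about
        }
        return false;   // let Qt continue normal processing
    }
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    ServiceMessageFilter filter;
    app.installNativeEventFilter(&filter);   // native events now pass through the filter
    return app.exec();
}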
I need to hook mouse clicks globally and block the latest click if the delay between two clicks is less than a configured threshold.
I wrote it for Windows using the WH_MOUSE_LL hook.
I was unable to find any solution for my problem. Is it even possible to globally block a mouse click in X11?
Windows full code
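The linked full code is not reproduced here; for reference, a minimal sketch of the Windows-side WH_MOUSE_LL approach the question describes (the 100 ms threshold is an arbitrary example):

#include <windows.h>

static DWORD g_lastClick = 0;
static const DWORD kMinDelayMs = 100;

LRESULT CALLBACK LowLevelMouseProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION && wParam == WM_LBUTTONDOWN) {
        DWORD now = GetTickCount();
        if (now - g_lastClick < kMinDelayMs)
            return 1;                 // non-zero return swallows the click
        g_lastClick = now;
    }
    return CallNextHookEx(NULL, nCode, wParam, lParam);
}

int main()
{
    HHOOK hook = SetWindowsHookEx(WH_MOUSE_LL, LowLevelMouseProc,
                                  GetModuleHandle(NULL), 0);
    MSG msg;                          // the hook needs a message loop on its thread
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    UnhookWindowsHookEx(hook);
    return 0;
}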
As far as I know, the standard X11 protocol doesn't allow this. The XInput 2.0 extension might, but I doubt it. Windows assumes a single event queue that every program listens to, so a program can intercept an event and prevent it from being passed down the queue to other listeners. In X11, by contrast, every client has its own independent queue, and every client that registers interest in an event receives an independent copy of it in its queue. Under normal circumstances this makes it impossible for an errant program to block other programs from running; but it also means that, for those times when a client must block other clients, it must do a server grab to prevent the server from processing events for any other client.
Which means you can either
use an X server proxy (not hard to write, but it will be noticeably slower)
or
do it at the input device level. /dev/input/event<n> gives you the raw input events. You can read the key/button presses there and decide whether they should propagate further or be consumed. Unfortunately there's no real documentation for this, but the header <linux/input.h> is quite self-explanatory.
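A sketch of the device-level approach; the device path is an assumption (find the right eventN for your mouse, e.g. via /proc/bus/input/devices):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/input.h>

int main()
{
    int fd = open("/dev/input/event3", O_RDONLY);
    if (fd < 0)
        return 1;

    // EVIOCGRAB takes the device exclusively, so events no longer reach the
    // X server; you then have to re-inject the events you want to keep
    // (e.g. through uinput), which is what makes this approach heavyweight.
    ioctl(fd, EVIOCGRAB, 1);

    struct input_event ev;
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        if (ev.type == EV_KEY && ev.code == BTN_LEFT && ev.value == 1) {
            // left button press: decide here whether to drop or forward it
        }
    }
    close(fd);
    return 0;
}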
In my C++ application I'm using a third-party library for the Bluetooth discovery process. I'm looking at the examples provided to learn how to use it.
The example that best matches my needs is a simple GUI application that calls a Discovery(long timeout) function from the library to start the Bluetooth discovery.
That function returns immediately (so that the GUI is not frozen) and fires an __event called OnDeviceFound whenever a new BT device has been discovered, and OnDiscoveryComplete once the timeout has elapsed.
So in the GUI constructor (of the example) there are __hook calls defined like this:
__hook(&BluetoothDiscovery::OnDiscoveryComplete, &m_Discovery, &BluetoothClientDlg::OnDiscoveryComplete);
Now I need to implement the same thing in my application, which is not a windowed application but a console application that runs as a Windows service, doing continuous discovery on a separate thread, looking for new devices.
So, actually, since my implementation uses a dedicated thread for discovery, I don't need an event-based discovery procedure; I need a blocking one. The library does not provide a blocking API for discovery.
So here comes the question: is it possible to use an event-based function inside a blocking function? In other words, is it possible to write a function that can be called in the thread's main loop every n seconds, runs a discovery, and returns the Bluetooth devices it found (using that event-based library API)?
What you want is a semaphore that your main thread sits on until the discovery completes; the completion handler then notifies your main thread to wake. A sketch follows below.
Active waits (polling) like the one you suggest are nasty and should be avoided where you can.
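A sketch of turning the event-based API into a blocking call using a condition variable (C++11). BluetoothDevice is a placeholder for the library's device type, and the two handlers are assumed to be hooked with __hook exactly as in the question's GUI example:

#include <mutex>
#include <vector>
#include <condition_variable>

struct BluetoothDevice;                 // placeholder for the library's device type

class BlockingDiscovery
{
    std::mutex m_mutex;
    std::condition_variable m_done;
    bool m_complete = false;
    std::vector<BluetoothDevice*> m_found;

public:
    // Called (via __hook) from the library when a device is found.
    void OnDeviceFound(BluetoothDevice* dev)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_found.push_back(dev);
    }

    // Called (via __hook) when the discovery timeout elapses.
    void OnDiscoveryComplete()
    {
        { std::lock_guard<std::mutex> lock(m_mutex); m_complete = true; }
        m_done.notify_one();
    }

    // Blocking call usable from the service's discovery thread: start the
    // asynchronous Discovery(timeout), then sleep until the completion event.
    std::vector<BluetoothDevice*> Run(/* discovery object, timeout, ... */)
    {
        m_complete = false;
        m_found.clear();
        // m_Discovery.Discovery(timeout);   // start the asynchronous discovery here
        std::unique_lock<std::mutex> lock(m_mutex);
        m_done.wait(lock, [this] { return m_complete; });
        return m_found;
    }
};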
My application's user interface consists of two windows: the console (handled by ncurses) and an X11 window for graphics. I would like to handle key events in a centralized way. That is, no matter which of the two windows is active, the same event loop should handle all the key events. I already have an event loop for X11 events. All that remains to be done is to forward all the console events to the X11 window.
The main building block to achieve this forwarding is found here. The only other thing I need is to be able to translate from the value returned by getch() to X11 keycode. After about four hours of searching the web, I found this code, which is part of qemu. However, when I compare the mapping it provides with the output of xev, the two do not match. For example, for the Home key, xev gives 110, while the mentioned mapping gives 71 | 0x0100, which is 327. Are these two different kinds of keycodes? What am I missing?
Hmm, mixing application frameworks is, almost by definition, difficult.
I think the best way to achieve what you want is to have two separate processes or threads, one for the console and the other for the X11 application. In each you would have the relevant event loop handler. To join them together, use an IPC channel, either a pipe or a socket. You should be able to make the socket/pipe an input to the X11 event loop handler with its own callback. On the console side you can have a select() waiting on the socket or STDIN; this allows you to call getch() when there's a keypress ready, or read from the socket when the X11 side has sent something through it. If you used something like ZeroMQ in place of the socket, even better.
So, what would you send through the socket? You would have to define your own event structure to pass between the console and the X11 application (see the sketch at the end of this answer). Each side would populate and dispatch one of these when it needs to send something to the other. The types of event you'd need to describe would include things like quit, keypress (plus the keypress data), etc.
Most likely you'd arrange the X11 end so that the socket-reading callback reads the structure from the socket, interprets it and decides which callback should then be called directly. If your key presses are only for selecting menu entries, buttons, etc. then this might be a not-too-bad (but not brilliant) way of avoiding the mapping problem.
This does mean having two event loop handlers, a socket and two processes/threads. But it does avoid blending the two into a single thing. It also means your console could be on a completely different machine! If you used ZeroMQ you could easily have multiple consoles connected to the X11 application in a PUSH/PULL configuration; perhaps absurd, but possible.
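A sketch of the console side under these assumptions: ncurses is already initialized, 'sock' is a connected socket to the X11 process, and AppEvent is a made-up structure that both sides agree on:

#include <sys/select.h>
#include <unistd.h>
#include <ncurses.h>

struct AppEvent {
    int type;       // e.g. 0 = quit, 1 = keypress
    int key;        // the value returned by getch() for keypress events
};

void console_loop(int sock)
{
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(STDIN_FILENO, &fds);
        FD_SET(sock, &fds);
        if (select(sock + 1, &fds, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(STDIN_FILENO, &fds)) {
            AppEvent ev = {1, getch()};          // a key is ready, getch() won't block
            write(sock, &ev, sizeof(ev));        // forward it to the X11 side
        }
        if (FD_ISSET(sock, &fds)) {
            AppEvent ev;
            if (read(sock, &ev, sizeof(ev)) <= 0)
                break;
            // act on events coming back from the X11 window (redraw, quit, ...)
        }
    }
}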
In this thread (posted about a year ago) there is a discussion of problems that can come with running Word in a non-interactive session. The (quite strong) advice given there is not to do so. In one post it is stated "The Office APIs all assume you are running Office in an interactive session on a desktop, with a monitor, keyboard and mouse and, most importantly, a message pump." I'm not sure what that is. (I've been programming in C# for only about a year; my other programming experience has primarily been with ColdFusion.)
Update:
My program runs through a large number of RTF files to extract two pieces of information used to construct a medical report number. Rather than try to figure out how the formatting instructions in RTF work, I decided to just open the files in Word and pull the text out from there (without actually starting the GUI). Occasionally, the program hiccuped in the middle of processing one file and left a Word thread open attached to that document (I still have to figure out how to shut that one down). When I re-ran the program, of course I got a notification that there was a thread using that file, and did I want to open a read-only copy? When I said Yes, the Word GUI suddenly popped up from nowhere and started processing the files. I was wondering why that happened; it looks like maybe once the dialog box popped up, the message pump started pushing the main GUI to Windows as well?
A message loop is a small piece of code that exists in any native Windows program. It roughly looks like this:
MSG msg;
while (GetMessage(&msg, NULL, 0, 0))   // blocks until Windows has a message for this thread
{
    TranslateMessage(&msg);            // converts raw key messages into character messages
    DispatchMessage(&msg);             // routes the message to the target window's window procedure
}
The GetMessage() Win32 API retrieves a message from Windows. Your program typically spends 99.9% of its time there, waiting for Windows to tell it something interesting happened. TranslateMessage() is a helper function that translates keyboard messages. DispatchMessage() ensures that the window procedure is called with the message.
Every GUI-enabled .NET program has a message loop; it is started by Application.Run().
The relevance of a message loop to Office is related to COM. Office programs are COM-enabled programs; that's how the Microsoft.Office.Interop classes work. COM takes care of threading on behalf of a COM coclass; it ensures that calls made on a COM interface are always made from the correct thread. Most COM classes have a key in the registry that declares their ThreadingModel, and by far the most common ones (including Office) use "Apartment". Which means that the only safe way to call an interface method is to make the call from the same thread that created the class object. Or to put it another way: by far most COM classes are not thread-safe.
Every COM-enabled thread belongs to a COM apartment. There are two kinds: Single-Threaded Apartments (STA) and the Multi-Threaded Apartment (MTA). An apartment-threaded COM class must be created on an STA thread. You can see this reflected in .NET programs: the entry point of the UI thread of a Windows Forms or WPF program has the [STAThread] attribute. The apartment model for other threads is set by the Thread.SetApartmentState() method.
Large parts of Windows plumbing won't work correctly if the UI thread is not STA. Notably Drag+Drop, the clipboard, Windows dialogs like OpenFileDialog, controls like WebBrowser, UI Automation apps like screen readers. And many COM servers, like Office.
A hard requirement for an STA thread is that it should never block and must pump a message loop. The message loop is important because that's what COM uses to marshal an interface method call from one thread to another. Although .NET makes marshaling calls easy (Control.BeginInvoke or Dispatcher.BeginInvoke for example), it is actually a very tricky thing to do. The thread that executes the call must be in a well-known state. You can't just arbitrarily interrupt a thread and force it to make a method call, that would cause horrible re-entrancy problems. A thread should be "idle", not busy executing any code that is mutating the state of the program.
Perhaps you can see where that leads: yes, when a program is executing the message loop, it is idle. The actual marshaling takes place through a hidden window that COM creates; it uses PostMessage to have the window procedure of that window execute code, on the STA thread. The message loop ensures that this code runs.
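This is not how COM itself is implemented, but a sketch of the same idea using a hypothetical message-only window: work posted to it from any thread runs in its window procedure, i.e. on the thread that created the window, and only when that thread is pumping messages.

#include <windows.h>

#define WM_RUN_CALLBACK (WM_APP + 2)          // arbitrary private message number
typedef void (*Callback)(void);

// The window procedure runs on the thread that created the window, so any
// callback delivered through WM_RUN_CALLBACK executes on that (STA) thread.
LRESULT CALLBACK MarshalWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_RUN_CALLBACK) {
        ((Callback)lParam)();
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

// Created once on the STA thread; HWND_MESSAGE makes it a message-only window,
// invisible and existing purely as a target for posted messages.
HWND CreateMarshalWindow()
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = MarshalWndProc;
    wc.hInstance     = GetModuleHandle(NULL);
    wc.lpszClassName = TEXT("MarshalWindow");
    RegisterClass(&wc);
    return CreateWindowEx(0, wc.lpszClassName, TEXT(""), 0, 0, 0, 0, 0,
                          HWND_MESSAGE, NULL, wc.hInstance, NULL);
}

// Any other thread can schedule work onto the STA thread like this; the
// callback only runs when that thread is back in its message loop, i.e. idle.
void RunOnStaThread(HWND hwndMarshal, Callback cb)
{
    PostMessage(hwndMarshal, WM_RUN_CALLBACK, 0, (LPARAM)cb);
}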
The "message pump" is a core part of any Windows program that is responsible for dispatching windowing messages to the various parts of the application. This is the core of Win32 UI programming. Because of its ubiquity, many applications use the message pump to pass messages between different modules, which is why Office applications will break if they are run without any UI.
Wikipedia has a basic description.
John is talking about how Windows (and other window-based systems: the X Window System, the original Mac OS, ...) implements asynchronous user interfaces using events delivered via a message system.
Behind the scenes, each application has a messaging system through which each window can send events to other windows or event listeners; this is implemented by adding a message to the message queue. There is a main loop that constantly reads this message queue and dispatches the messages (or events) to the listeners.
The Wikipedia article Message loop in Microsoft Windows shows example code of a basic Windows program -- and as you can see at the most basic level a Windows program is just the "message pump".
So, to pull it all together: the reason a Windows program designed to support a UI can't act as a service is that it needs the message loop running all the time to enable UI support. If you implement it as a service as described, it won't be able to perform its internal asynchronous event handling.
In COM, a message pump serialises and de-serialises messages sent between apartments. An apartment is a mini-process in which COM components can be run. Apartments come in single-threaded and free-threaded varieties. Single-threaded apartments are mainly a legacy mechanism for COM components that don't support multi-threading. They were typically used with Visual Basic (which did not support multi-threaded code) and legacy applications.
I guess that the message pump requirement for Word stems from either the COM API or parts of the application not being thread safe. Bear in mind that the .NET threading and garbage collection models don't play nicely with COM out of the box. COM has a very simplistic garbage collection mechanism and threading model that requires you to do things the COM way. Using the standard Office PIAs still requires you to explicitly shut down COM object references, so you need to keep track of every COM handle created. The PIAs will also create stuff behind the scenes if you're not careful.
.NET-COM integration is a whole topic all by itself, and there are even books written on the subject. Even using COM APIs for Office from an interactive desktop application requires you to jump through hoops and make sure that references are explicitly released.
Office can be assumed to be thread-unsafe, so you will need a separate instance of Word, Excel or other Office applications for each thread. You would have to incur the starting overhead or maintain a thread pool. A thread pool would have to be meticulously tested to make sure all COM references were correctly released. Even starting and shutting down instances requires you to make sure all references are released correctly. Failure to dot your i's and cross your t's here will result in large numbers of dead COM objects and even whole running instances of Word being leaked.
Wikipedia suggests it means the program's main Event Loop.
I think that this Channel 9 discussion has a nice succinct explanation:
This process of window communication is made possible by the so-called Windows Message Pump. Think of the Message Pump as an entity that enables cooperation between application windows and the desktop.