I have a DriverKit driver that takes care of a USB device. The driver unpacks the data in the USB packets and writes the data to buffers that are shared between the app and the driver. The shared buffers are created by the app with IOConnectCallAsyncMethod. When a buffer is ready to be consumed by the app, the driver calls IOUserClient::AsyncCompletion with an OSAction object. The OSAction object is also created as a result of the app's call to IOConnectCallAsyncMethod. There is one OSAction object per shared buffer.
If an error occurs in the mechanism that handles the events in the app, I tell the driver to stop calling the OSAction objects, and the thread that handles the events in the app is stopped. At this point I cannot be sure that I have handled all the events in the app, and when I send a message to the driver to start again, I want to be sure that no events from before the stop are still queued up to be handled by the app.
I have looked at OSAction::Cancel, which lets you pass a handler that should be invoked when the callback is cancelled. The documentation for this method describes the parameter as "A handler block for the system to call after any in-flight callbacks finish executing."
What does an "in-flight" callback mean?
I call Cancel on all OSAction objects and decrement a counter for each one to keep track of cancellation completion (similar to this example project from Apple). The problem is that I cannot see the block ever being invoked.
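For reference, a rough sketch of the cancellation pattern described above, written as a member function of the user client subclass; the ivar names, buffer count, and restart hook are placeholders, not actual API beyond OSAction::Cancel and its handler block:

// Sketch only: cancel every per-buffer OSAction and count completions.
// kNumSharedBuffers, ivars->actions and ivars->pendingCancels are placeholders.
void CancelAllActions()
{
    ivars->pendingCancels = kNumSharedBuffers;
    for (uint32_t i = 0; i < kNumSharedBuffers; i++) {
        OSAction *action = ivars->actions[i];
        if (action == nullptr) {
            ivars->pendingCancels -= 1;   // nothing in flight for this buffer
            continue;
        }
        action->Cancel(^{
            // Invoked after any in-flight AsyncCompletion callback for this
            // action has finished executing.
            if (--ivars->pendingCancels == 0) {
                // All actions are quiesced; safe to release them and let the
                // app restart its event mechanism.
            }
        });
    }
}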
When can I expect the block to be invoked? Some different situations that I can think of are:
An OSAction that was never passed to AsyncCompletion.
An OSAction that was passed to AsyncCompletion but the app did not handle the event.
An OSAction that was passed to AsyncCompletion, the app started to handle the event, but the app is not yet done with the event.
I am also wondering which dispatch queue in the driver will be used to call the block.
I am building a GTK application using GTK4. I am stuck on how to pass data between threads in GTK. To be specific, I will present the problem in detail.
I have a Server and a Client module, where the Client displays the UI based on the data received from the Server. The UI-related operations happen on the OS main thread, where the GTK event loop runs, whereas the communication with the Server happens on a non-main thread. At some point, the Server may send an update to the Client. The update is received on the communication thread, i.e. the non-main thread. Since the intention is to update the UI (either modify the UI or create a new window) based on the information received, the information somehow has to be passed to the OS main thread where the event loop is running.
In GTK4, how do I pass information between threads (main to non-main or vice versa)?
I am deliberately avoiding g_idle_add / g_timeout_add because they keep running the function continuously or at regular intervals. I am looking for a GTK-supported message passing mechanism that does not require building my own message passing system.
The callback you pass to g_idle_add needs to return TRUE/G_SOURCE_CONTINUE or FALSE/G_SOURCE_REMOVE. The latter removes the function from the main loop after the first invocation, so it won't run continuously.
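A minimal sketch of that one-shot pattern; the payload struct and the label update are invented for illustration:

#include <gtk/gtk.h>
#include <string>

// Hypothetical payload handed from the communication thread to the main thread.
struct UpdatePayload {
    GtkLabel *label;
    std::string text;
};

// Runs on the GTK main thread.
static gboolean apply_update(gpointer data) {
    auto *payload = static_cast<UpdatePayload *>(data);
    gtk_label_set_text(payload->label, payload->text.c_str());
    delete payload;
    return G_SOURCE_REMOVE;   // one-shot: the source is removed after this call
}

// Called from the non-main communication thread when the Server sends an update.
void on_server_update(GtkLabel *label, const std::string &text) {
    g_idle_add(apply_update, new UpdatePayload{label, text});
}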
I have a main loop in my program, which calls this method from dbus:
dbus_connection_read_write_dispatch
I have some registered callbacks, which are invoked when a message arrives. Within this callback I am also processing the message and sending back a response. The problem is that this sometimes takes a lot of time, so it will probably block receiving messages from D-Bus.
Question: can I call dbus_connection_read_write_dispatch() on the same connection from more than one thread? Then it would probably be possible to receive new D-Bus messages while the previous one is being processed.
Or maybe the better idea is to process responses in a thread other than the main loop from which the callback is invoked?
Thank you
You can call dbus_connection_read_write_dispatch() from multiple threads if you have called dbus_threads_init_default() at least once. A better approach, though, is to have a single thread running the D-Bus dispatcher and use a thread pool to process the data from the callbacks.
See dbus_threads_init_default() for more info.
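A minimal sketch of that layout; the 100 ms timeout and the external worker pool are assumptions, not libdbus requirements:

#include <dbus/dbus.h>

void run_dispatcher(DBusConnection *conn) {
    // Enable libdbus' internal locking; call once, before the connection is
    // touched from more than one thread.
    dbus_threads_init_default();

    // Single dispatcher thread: registered message handlers run here.
    // Block for up to 100 ms per iteration; returns FALSE once disconnected.
    while (dbus_connection_read_write_dispatch(conn, 100)) {
        // Keep the handlers themselves short: they should only parse the
        // message and hand any slow processing to a worker thread pool, so
        // dispatching is never blocked.
    }
}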
According to the documentation provided by freedesktop.org, you can.
But if you operate on the same DBusConnection instance from different threads directly, e.g. calling dbus_connection_send_with_reply_and_block in one thread while another thread is blocking in dbus_connection_read_write_dispatch, the connection may not work properly. According to the official documentation, the D-Bus connection is locked while callback functions are being called: DBusConnection
In my situation, dbus_connection_send_with_reply_and_block did not return even though the reply message had been sent to my process (I had seen it in dbus-monitor). Calling dbus_threads_init did not help at all.
Recently I switched to a delegate that sends, receives and dispatches all D-Bus messages in one thread, and the problem disappeared.
A mail in the freedesktop.org mailing list
I wonder if anyone is familiar with a synchronization mechanism in user mode by which an app can register a "callback" function that would be called when another app signals it ... I don't mind the callback being invoked on an arbitrary thread.
Suppose I have lots of "worker" processes running in parallel, and one wants to notify them of a change (no payload data needed), upon which every process has to do some internal updates.
The immediate approach to this was to create another thread in each of them and have an infinite loop that waits for a global event and calls the callback function right afterwards. To signal this, a process would only need to signal this global event.
The problem is that I'll have lots of parallel processes in this project, and I don't want to add one extra thread per process to the system just to implement this, even if those threads are mostly paused.
The current "workaround" i found for this would be to hold my own "dummy" registry key, and every process will "register registery notification callback", when one app wants to notify the others it will just trigger a write to this key... and windows will callback every process which registered to this notification.
Any other ideas?
The nicer solution, which doesn't pollute the registry, would be to use a shared pipe. All workers can connect to the named pipe server, and do an async read. When the server wants to kick the workers, it just writes a byte. This triggers the completion routine of the worker. Basic example
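A minimal sketch of the worker side of that, assuming a one-byte "kick" protocol and a pipe name of my own choosing; the server side and all error handling are omitted:

#include <windows.h>

static HANDLE     g_pipe;
static OVERLAPPED g_ov;     // zero-initialized static
static char       g_kick;

// Completion routine: runs when the server has written its one-byte "kick".
// It only runs while the thread that issued ReadFileEx is in an alertable wait.
VOID CALLBACK on_kick(DWORD error, DWORD /*bytes*/, LPOVERLAPPED /*ov*/) {
    if (error == 0) {
        // Do the internal update here, then queue the next read.
        ReadFileEx(g_pipe, &g_kick, 1, &g_ov, on_kick);
    }
}

void worker_wait_loop() {
    // FILE_FLAG_OVERLAPPED is required for ReadFileEx; the pipe name is made up.
    g_pipe = CreateFileW(L"\\\\.\\pipe\\worker_notify", GENERIC_READ, 0, nullptr,
                         OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
    ReadFileEx(g_pipe, &g_kick, 1, &g_ov, on_kick);
    for (;;)
        SleepEx(INFINITE, TRUE);   // alertable wait so completion routines can run
}

The alertable-wait requirement is exactly the drawback discussed in the next paragraph: in a real worker you would make the worker's existing wait alertable rather than spin a dedicated loop like this.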
Still, this notification has the same drawback as most other Windows notifications. If all of your worker threads are running worker code, there's no thread on which your notification can arrive - and you didn't create a special thread for that purpose either. The only solution around that is CreateRemoteThread, but that's a very big hammer.
Thank you all for the useful ideas.
Eventually I accidentally came across RegisterWaitForSingleObject, which seems to do just that.
I'm still taking into account MSalters' comment about not having enough free worker threads at a given time, since I'm assuming this callback mechanism relies on the same callback mechanism most of the Win32 API does.
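For reference, a minimal sketch of that approach, assuming a named event shared between the processes (the event name is made up):

#include <windows.h>

// Thread-pool callback: runs whenever the shared event is signaled.
VOID CALLBACK on_global_signal(PVOID /*context*/, BOOLEAN /*timed_out*/) {
    // Do the internal update (or post it to the worker's own queue) here.
}

void register_for_updates() {
    // Auto-reset, initially non-signaled, shared by name across processes.
    HANDLE evt = CreateEventW(nullptr, FALSE, FALSE, L"Global\\WorkerUpdateEvent");

    HANDLE wait_handle = nullptr;
    RegisterWaitForSingleObject(&wait_handle, evt, on_global_signal, nullptr,
                                INFINITE, WT_EXECUTEDEFAULT);
    // The notifying process opens the same named event and calls SetEvent().
}

Note that with a single auto-reset event and several waiting processes, SetEvent releases only one registered wait; to reach every worker you would need one event per worker, or a manual-reset scheme with careful re-arming.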
I am designing a game server with scripting capabilities. The general design goes like this:
Client connects to Server,
Server initializes Client,
Server sends Client to EventManager (separate thread, uses libevent),
EventManager receives a receive event from the Client's socket,
Client manages what it received via callbacks.
Now the last part is the trickiest one for me.
Currently my design allows a class which inherits from Client to register callbacks for specific received events. These callbacks are kept in a list, and the receive buffer goes through a parsing process each time something is received. If the buffer is valid, the callback is called, and it acts upon what is in the buffer. One thing to note is that the callbacks can go down into the scripting engine, at which point nothing is certain about what can happen.
Each time a callback finishes, the current receive buffer has to be reset, etc. Callbacks currently have no capability of returning a value because, as stated before, anything can happen.
What happens is that when somewhere in the callback something says this->disconnect(), I want to immediately disconnect the Client, remove it from the EventManager, and lastly remove it from the Server, where it should also finally get destructed and its memory freed. However, some code still runs in the Client after the callback finishes, so I can't free the memory.
What should I change in the design? Should I have some timed event in the Server which checks which Clients are free to destroy? Would that create additional overhead I don't need? Would it still be okay to run minimal code on the stack (return -1;) after the callback finishes, or not?
I have no idea what to do, but I am open for complete design revamps.
Thanks in advance.
You can use a reference counted pointer like boost::shared_ptr<> to simplify memory management. If the manager's client list uses shared_ptrs and the code that calls the callbacks creates a local copy of the shared_ptr the callback is called on, the object will stay alive until it is removed from the manager and the callback function is complete:
class EventManager {
    std::vector< boost::shared_ptr<Client> > clients;

    void handle_event(Event &event) {
        // The local |handler| pointer keeps the object alive until the end of
        // this function, even if it removes itself from |clients|.
        boost::shared_ptr<Client> handler = ...;
        handler->process(event);
    }
};

class Client {
    void process(Event &event) {
        manager->disconnect(this);
        // The caller still holds a reference, so the object lives on.
    }
};
The Client object will automatically be deleted once the last shared_ptr to it goes out of scope, but not before. So creating a local copy of the shared_ptr before a function call makes sure the object is not deleted unexpectedly.
You should consider having an object like "Session" which tracks a particular message flow from start to finish (from one client).
This object should also take care of the current state: primarily the buffers and the processing.
Each event which triggers a callback MUST update the state of the corresponding session.
Libevent is capable of providing you with any result of a scheduled event: success, failure, or timeout. Each of these outcomes should be reflected in your logic.
In general, when working with events, consider your processing logic to be an automaton with a state.
http://en.wikipedia.org/wiki/Reactor_pattern may be a good resource for your task.
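A rough sketch of what such a Session object might look like (all names below are illustrative, not taken from the original design):

#include <cstddef>
#include <vector>

enum class SessionState { Reading, Processing, Disconnecting, Closed };

// One Session per client, tracking its receive buffer and where it is in the flow.
struct Session {
    SessionState      state = SessionState::Reading;
    std::vector<char> buffer;   // current receive buffer for this client

    // Every libevent callback (read, write, timeout, error) goes through here
    // and updates the state machine before touching the buffer.
    void on_read(const char *data, size_t len) {
        buffer.insert(buffer.end(), data, data + len);
        state = SessionState::Processing;
        // ... parse, dispatch to script callbacks, then go back to Reading ...
    }
};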
Let the Client::disconnect() function send an event to the EventManager (or Server) class. This means that you need some sort of event handling in EventManager (or Server), an event loop for instance.
My general idea is that Client::disconnect() does not disconnect the Client immediately, but only after the callback has finished executing. Instead of disconnecting right away, it just posts an event to the EventManager (or Server) class.
One could argue that the Client::disconnect() method is on the wrong class. Maybe it should be Server::disconnect(Client *c). That would be more in line with the idea that the Server 'owns' the Client and it's the Server which disconnects Clients (and then updates some internal bookkeeping).
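A rough sketch of that deferred disconnect; the pending list and the flush point are assumptions of mine, not part of the answer above:

#include <algorithm>
#include <vector>

struct Client { virtual ~Client() = default; };   // stand-in for the real class

class Server {
    std::vector<Client *> clients;
    std::vector<Client *> pending_disconnects;

public:
    // Called from inside a callback: only mark the client, never delete it here.
    void disconnect(Client *c) {
        pending_disconnects.push_back(c);
    }

    // Called by the event loop after all callbacks of this iteration have run.
    void flush_disconnects() {
        for (Client *c : pending_disconnects) {
            clients.erase(std::remove(clients.begin(), clients.end(), c),
                          clients.end());
            // also unregister c from the EventManager here
            delete c;
        }
        pending_disconnects.clear();
    }
};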
We have an API that handles event timers. This API says that it uses OS callbacks to handle timed events (using select(), apparently).
The API claims this order of execution as well:
readable events
writable events
timer events
This works by creating a pointer to a Timer object, but passing the create function a function callback:
Something along these lines:
Timer* theTimer = Timer::Event::create(timeInterval, &Thisclass::FunctionName);
I was wondering how this works.
The operating system handles the timer itself, so when it sees that the timer has fired, how does it actually invoke the callback? Does the callback run in a separate thread of execution?
When I put a pthread_self() call inside the callback function (Thisclass::FunctionName), it appears to have the same thread id as the thread where theTimer was created! (Very confused by this.)
Also: what does that priority list above mean? What is a writable event vs. a readable event vs. a timer event?
Any explanation of the use of select() in this scenario is also appreciated.
Thanks!
This looks like a simple wrapper around select(2). The class keeps a list of callbacks, I guess separate ones for read, write, and timer expiration. Then there is something like a dispatch or wait call somewhere that packs the given file descriptors into sets, calculates the minimum timeout, and invokes select with these arguments. When select returns, the wrapper probably goes over the read set first, invoking the read callbacks, then over the write set, and then checks whether any of the timers have expired and invokes those callbacks. This might all happen on the same thread, or on separate threads, depending on the implementation of the wrapper.
You should read up on select and poll - they are very handy.
The general term is IO demultiplexing.
A readable event means that data is available for reading on a particular file descriptor without blocking, and a writable event means that you can write to a particular file descriptor without blocking. These are most often used with sockets and pipes. See the select() manual page for details on these.
A timer event means that a previously created timer has expired. If the library is using select() or poll(), the library itself has to keep track of timers since these functions accept a single timeout. The library must calculate the time remaining until the first timer expires, and use that for the timeout parameter. Another approach is to use timer_create(), or an older variant like setitimer() or alarm() to receive notification via a signal.
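A bare-bones sketch of how readable, writable, and timer events fall out of a single select() call; the descriptor and the precomputed timeout are placeholders:

#include <sys/select.h>
#include <sys/time.h>

// Waits once for the given socket, firing "readable"/"writable" callbacks or,
// on timeout, the expired timer callbacks. time_to_next_timer is assumed to be
// computed from the library's internal timer list.
void wait_once(int sock_fd, struct timeval time_to_next_timer) {
    fd_set readfds, writefds;
    FD_ZERO(&readfds);
    FD_ZERO(&writefds);
    FD_SET(sock_fd, &readfds);    // ask for a "readable" event on this fd
    FD_SET(sock_fd, &writefds);   // and for a "writable" event

    int ready = select(sock_fd + 1, &readfds, &writefds, nullptr,
                       &time_to_next_timer);
    if (ready > 0) {
        if (FD_ISSET(sock_fd, &readfds))  { /* invoke readable callback */ }
        if (FD_ISSET(sock_fd, &writefds)) { /* invoke writable callback */ }
    } else if (ready == 0) {
        /* timeout elapsed: fire the timer callback(s) that are now due */
    }
}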
You can determine which mechanism is being used at the OS layer using a tool like strace (Linux) or truss (Solaris). These tools trace the actual system calls that are being made by the program.
At a guess, the call to create() stores the function pointer somewhere. Then, when the timer goes off, it calls the function you specified via that pointer. But as this is not a Standard C++ function, you should really read the docs or look at the source to find out for sure.
Regarding your other questions, I don't see mention of a priority list, and select() is a sort of general purpose event multiplexer.
Quite likely there's a framework that works with a typical main loop; the driving force of the main loop is the select call.
select allows you to wait for a file descriptor to become readable or writable (or for an "exception" on the file descriptor), or for a timeout to occur. I'd guess the library also allows you to register callbacks for doing async I/O; if it's a GUI library, it will get the low-level GUI events via a file descriptor on Unix systems.
To implement timer callbacks in such a loop, you just keep a priority queue of timers and process them on select timeouts or filedescriptor events.
The priority means it processes the file I/O before the timers; that processing itself takes time and could result in GUI updates (eventually causing GUI event handlers to be run) or in other tasks spending time servicing I/O.
The library is more or less doing
for (;;) {
    timeout = calculate_min_timeout();
    ret = select(..., timeout);   // wait for a timeout or for file descriptor events
    if (ret > 0) {
        process_readable_descriptors();
        process_writable_descriptors();
    }
    process_timer_queue();   // scan the timer priority queue and invoke expired callbacks
}
Because the thread id inside the timer callback is the same as that of the creator thread, I think it is implemented somehow using signals.
When a signal is sent to a thread, that thread's state is saved and the signal handler is called, which then calls the event callback.
So the handler is called in the creator thread, which is interrupted until the signal handler returns.
Maybe another thread waits for all timers using select(), and if a timer expires it sends a signal to the thread the expired timer was created in.
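A minimal sketch of the signal-based variant this answer suspects, using setitimer() and SIGALRM; the stored callback pointer and the interval handling are assumptions, and calling arbitrary code from a signal handler is not async-signal-safe, so this is only meant to show why pthread_self() can match the creator thread:

#include <csignal>
#include <sys/time.h>

static void (*g_callback)();        // stored by the Timer wrapper at create() time

// Runs in a thread that has SIGALRM unblocked (often the creator/main thread),
// interrupting it until the handler returns.
static void on_alarm(int /*signo*/) {
    g_callback();                   // not async-signal-safe in general
}

void start_periodic_timer(void (*cb)(), long interval_ms) {
    g_callback = cb;

    struct sigaction sa = {};
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, nullptr);

    struct itimerval tv = {};
    tv.it_interval.tv_sec  = interval_ms / 1000;
    tv.it_interval.tv_usec = (interval_ms % 1000) * 1000;
    tv.it_value = tv.it_interval;             // first expiry after one interval
    setitimer(ITIMER_REAL, &tv, nullptr);     // delivers SIGALRM on every expiry
}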