C++ Timers in Unix

We have an API that handles event timers. This API says that it uses OS callbacks to handle timed events (using select(), apparently).
The API also claims this order of execution:
readable events
writable events
timer events
This works by creating a pointer to a Timer object and passing the create function a callback:
Something along these lines:
Timer* theTimer = Timer::Event::create(timeInterval,&Thisclass::FunctionName);
I was wondering how this works.
The operating system is handling the timer itself, so when it sees that the timer has fired, how does it actually invoke the callback? Does the callback run in a separate thread of execution?
When I put a pthread_self() call inside the callback function (Thisclass::FunctionName), it appears to have the same thread id as the thread where theTimer was created! (I'm very confused by this.)
Also: What does that priority list above mean? What is a writable event vs a readable event vs a timer event?
Any explanation of the use of select() in this scenario is also appreciated.
Thanks!

This looks like a simple wrapper around select(2). The class keeps lists of callbacks, presumably separate ones for read, write, and timer expiration. Then there's something like a dispatch or wait call somewhere that packs the given file descriptors into sets, calculates the minimum timeout, and invokes select with these arguments. When select returns, the wrapper probably goes over the read set first, invoking the read callbacks, then the write set, and then checks whether any of the timers have expired and invokes those callbacks. This all might happen on the same thread, or on separate threads, depending on the implementation of the wrapper.
You should read up on select and poll - they are very handy.
The general term is IO demultiplexing.

A readable event means that data is available for reading on a particular file descriptor without blocking, and a writable event means that you can write to a particular file descriptor without blocking. These are most often used with sockets and pipes. See the select() manual page for details on these.
A timer event means that a previously created timer has expired. If the library is using select() or poll(), the library itself has to keep track of timers since these functions accept a single timeout. The library must calculate the time remaining until the first timer expires, and use that for the timeout parameter. Another approach is to use timer_create(), or an older variant like setitimer() or alarm() to receive notification via a signal.
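For illustration only (this is not the library's actual code; next_expiry_sec, earliest_timer_expiry and the fd sets are placeholders), the timeout calculation might look roughly like this:
#include <sys/select.h>
#include <sys/time.h>

// Compute how long select() may sleep before the earliest timer expires.
struct timeval timeout_until(time_t next_expiry_sec)
{
    struct timeval now, tv;
    gettimeofday(&now, NULL);
    tv.tv_sec  = (next_expiry_sec > now.tv_sec) ? next_expiry_sec - now.tv_sec : 0;
    tv.tv_usec = 0;
    return tv;
}

// Inside the event loop:
//     struct timeval tv = timeout_until(earliest_timer_expiry);
//     int n = select(maxfd + 1, &readfds, &writefds, NULL, &tv);
//     if (n == 0) { /* timeout: at least one timer has expired */ }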
You can determine which mechanism is being used at the OS layer using a tool like strace (Linux) or truss (Solaris). These tools trace the actual system calls that are being made by the program.

At a guess, the call to create() stores the function pointer somewhere. Then, when the timer goes off, it calls the function you specified via that pointer. But as this is not a Standard C++ function, you should really read the docs or look at the source to find out for sure.
Regarding your other questions, I don't see mention of a priority list, and select() is a sort of general purpose event multiplexer.
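Purely as an illustration of the "store the pointer, call it later" idea (this is not the actual library's implementation), a stripped-down version could look like this:
#include <functional>

// A toy timer that remembers a callback and invokes it later, from whichever
// thread runs the dispatch loop - which would explain the identical thread id.
class Timer {
public:
    static Timer* create(long intervalMs, std::function<void()> cb) {
        return new Timer(intervalMs, std::move(cb));
    }
    void fire() { callback_(); }               // called by the event loop on expiry
private:
    Timer(long ms, std::function<void()> cb)
        : intervalMs_(ms), callback_(std::move(cb)) {}
    long intervalMs_;
    std::function<void()> callback_;
};

// Hypothetical usage, binding a member function to an object instance:
//     Timer* t = Timer::create(1000, std::bind(&Thisclass::FunctionName, this));
//     ...later the dispatch loop calls t->fire() on its own (same) thread.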

Quite likely there's a framework at work with a typical main loop; the driving force of the main loop is the select call.
select allows you to wait for a file descriptor to become readable or writable (or for an "exception" on the file descriptor), or for a timeout to occur. I'd guess the library also allows you to register callbacks for doing async I/O; if it's a GUI library, it will receive the low-level GUI events via a file descriptor on Unix systems.
To implement timer callbacks in such a loop, you just keep a priority queue of timers and process them on select timeouts or file descriptor events.
The ordering means the file I/O is processed before the timers; that processing itself takes time, and could result in GUI updates eventually causing GUI event handlers to be run, or in other tasks spending time servicing I/O.
The library is more or less doing
for(;;) {
    timeout = calculate_min_timeout();
    ret = select(..., timeout);  // wait for a timeout or for file descriptor events
    if(ret > 0) {
        process_readable_descriptors();
        process_writable_descriptors();
    }
    process_timer_queue();  // scan the timer priority queue and invoke callbacks for expired timers
}

Because the thread id inside the timer callback is the same as that of the creator thread, I think it is implemented somehow using signals.
When a signal is sent to a thread, that thread's state is saved and the signal handler is called, which then calls the event callback.
So the handler is called in the creator thread, which is interrupted until the signal handler returns.
Maybe another thread waits for all timers using select(), and if a timer expires it sends a signal to the thread in which the expired timer was created.

Related

libuv: uv_check_t and uv_prepare_t usage

I've been reading the libuv book; however, the section on check and prepare watchers is incomplete, so the only info I found was in uv.h:
/*
* uv_prepare_t is a subclass of uv_handle_t.
*
* Every active prepare handle gets its callback called exactly once per loop
* iteration, just before the system blocks to wait for completed i/o.
*/
and
/*
* uv_check_t is a subclass of uv_handle_t.
*
* Every active check handle gets its callback called exactly once per loop
* iteration, just after the system returns from blocking.
*/
I was wondering if there's any special usage of libuv's check and prepare watchers.
I'm writing a native node.js binding to a C++ library that needs to handle events fired from different threads, so naturally, the callbacks should be called from the main thread. I tried using uv_async_t; however, libuv does not guarantee that the callback will be invoked once per every uv_async_send, so this does not work for me.
That's why I decided to go with my own thread-safe event queue which I want to check periodically. So I was wondering whether using a check or prepare watcher would be OK for this purpose.
Actually, my current solution does use an uv_async_t watcher - every time I receive an event, I put it in the queue and call uv_async_send - so when the callback is finally invoked, I handle all events currently in the queue.
My concern with this approach is that many events might actually queue up until the callback is triggered and might get invalidated meanwhile (by invalidated, I mean it has become pointless to handle them at that point).
So I want to be able to check the event queue as frequently as possible - which check/prepare watchers can provide - but maybe it's overkill to do it (and lock a mutex) on every event loop iteration?
And, more importantly, maybe they are supposed to serve some more special purpose than just securing once-per-loop-iteration callback invocation?
Thanks
You could use a prepare handle to check your queue for events, and an async handle just to wake up the loop.
If you use only a prepare handle, you could end up in a situation where the loop is blocked on I/O and nobody processes the queue until it finishes polling. The async handle "wakes up" the loop, and the next time prepare handles run, you'd process the queue.
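A rough sketch of that pattern (the callback signatures below match recent libuv; older releases also pass an int status argument, and drain_queue() plus the thread-safe queue itself are hypothetical):
#include <uv.h>

static uv_prepare_t prepare_handle;
static uv_async_t   async_handle;

// Runs once per loop iteration, on the loop thread: drain the event queue here.
static void on_prepare(uv_prepare_t* /*handle*/) {
    // drain_queue();
}

// Deliberately empty: its only job is to wake the loop out of its poll phase
// so that on_prepare runs again soon.
static void on_async(uv_async_t* /*handle*/) {
}

void setup(uv_loop_t* loop) {
    uv_prepare_init(loop, &prepare_handle);
    uv_prepare_start(&prepare_handle, on_prepare);
    uv_async_init(loop, &async_handle, on_async);
}

// Producer threads: push the event into the queue, then call
// uv_async_send(&async_handle) so the loop wakes up even if it is blocked on I/O.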

Linux: application responsiveness and select()

I have a C++ console app that uses open() [O_RDWR | O_NONBLOCK], write(), select(), read() and close() to work with a device file. Also, ioctl() can be called to cancel the current operation. At any given time only one user can work with the device.
I need to come up with a C++ class having libsigc++ signals that get fired when data is available from the device.
The problem: when calling select(), the application becomes unresponsive as it waits for the data. How do I make it responsive - by calling select() in a worker thread? If so, how will the worker thread communicate with the main thread? Maybe I should look into boost::asio?
How do I make it responsive - by calling select() in a worker thread?
You can use dup(); this will duplicate your file descriptor, so you can move the entire read operation into another thread. That way your write thread and processing thread stay responsive, even when the read [select()] thread is sleeping.
The signal-emitting overhead of libsigc++ is minimal, so I think you can embed the emitting code inside the read thread itself. Slots can exist in a different thread; that is where you will receive your signals...
I think the Thrift source code [entirely boost based] might be of interest to you, though Thrift does not use libsigc++.
It sounds as though you've misunderstood select; the purpose of select (or poll, epoll, etc) is not "wait for data" but "wait for one or more events to occur on a series of file descriptors or a timer, or a signal to be raised".
What "responsiveness" is going missing while you're in your select call? You said it's a console app so you're not talking about a GUI loop, so presumably it is IO related? If so, then you need to refactor your select so that waiting for the data you're talking about is one element; that is, if you're using select, build FD_SETs of ALL file/socket descriptors (and stdin and stdout are file descriptors) that you want to wait on input for.
Or build a loop that periodically calls "select" with a short timeout to /test/ for any pending input and only try and read it when select tells you there is something to read.
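For example, a single select() call can watch the device descriptor and stdin together; this is only a rough sketch, with dev_fd and the handlers being hypothetical:
#include <sys/select.h>
#include <unistd.h>
#include <algorithm>

void wait_once(int dev_fd)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(dev_fd, &readfds);                 // the device file descriptor
    FD_SET(STDIN_FILENO, &readfds);           // also watch console input

    int maxfd = std::max(dev_fd, STDIN_FILENO);
    struct timeval tv = {0, 100 * 1000};      // optional 100 ms safety timeout

    int n = select(maxfd + 1, &readfds, NULL, NULL, &tv);
    if (n > 0) {
        if (FD_ISSET(dev_fd, &readfds))       { /* read from the device */ }
        if (FD_ISSET(STDIN_FILENO, &readfds)) { /* handle console input */ }
    }
    // n == 0 means timeout with nothing to read; n < 0 means check errno
}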
It sounds like you have a producer-consumer style problem. There are various ways to implement a solution to this problem, but most folks these days tend to use condition variable based approaches (see this C++11 based example).
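Since the linked example isn't reproduced here, this is only a minimal C++11 sketch of such a condition-variable based queue:
#include <condition_variable>
#include <mutex>
#include <queue>

// A minimal blocking producer-consumer queue built on std::mutex and
// std::condition_variable; a sketch, not tuned for production use.
template <typename T>
class BlockingQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(value));
        }
        cv_.notify_one();                       // wake one waiting consumer
    }
    T pop() {                                   // blocks until an item is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};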
There are also a number of design patterns that when implemented can help alleviate your concurrency problem, such as:
Half-Sync / Half-Async
A producer-consumer style pattern that introduces a queue between an asynchronous layer that fills the queue with events, and a synchronous layer that processes those events.
Leader / Followers
Multiple threads take turns handling events
A related discussion is available here.

C++ in Linux: In what forked-task context should a timer callback execute?

I have implemented my own Timer/Callback classes in C/C++ in Linux, wherein a process requiring a timer to fire either ONE_SHOT or PERIODICally instantiates a timer, and instantiates a callback object and associates the callback with the previously created Timer object. The Callback class implements a triggered () method, and when the timer fires at the appointed timeout, the triggered () method is executed. (Nothing new in terms of functionality.)
The way my Timer class works is I maintain a minheap of Timer objects and thus always know which timer to fire next. There is a timer task (TimerTask) which itself runs as a separate process (created using fork ()) and shares the memory pools from which the Timer objects and the Callback objects are created. The TimerTask has a main while (1) loop which keeps checking if the root of the Timer object minheap has a time since epoch that is LEQ the current time since epoch. If so, the timer at root has "fired."
Currently, when the timer fires, the callback is executed in the TimerTask process context. I am currently changing this behavior to run the callback processing on other tasks (sending them the information that the Timer object has fired via a POSIX message queue - for example, sending the message to the process that created the Timer object), but my question to SO is: what are the principles behind this? Executing a callback in the TimerTask context seems like a bad idea if I expect to service a large number of timers. It seems like a good idea to dispatch the callback processing over to other processes.
What are the general rules of thumb for processing the callback in one task/process over the other? My intention is to process the callback in the receiving task using a pthread like so:
void* threadFunctionForTimerCallback (void* arg)
{
    while (1)
    {
        if ((mq_receive (msg_fd, buffer, attr.mq_msgsize, NULL)) == -1)
            exit (-1);
        else
            printf ("Message received %s\n", buffer);
    }
    return NULL;  /* not reached; pthread start routines must return void* */
}
Would this be a reasonable solution? But never mind the actual way of receiving the message from the TimerTask (threads or any other method, doesn't matter), any discussion and insight into the problem of assigning a task for the callback is appreciated.
There is no need to busy-spin in a while(1) loop to implement a timer. One traditional and robust way of implementing timers has been to use a minheap, as you do, to organize the expiry times, and then pass the time until the next timer expiry as the timeout argument to select() or epoll(). Using the select() call, a thread can watch for file descriptor readiness, signals and timers all at the same time.
Recent kernels support timerfd, which delivers timer expiry events as file-descriptor readiness for read, which again can be handled using select()/epoll(). It obviates the need to maintain the minheap; however, it requires a system call for each timer add/modify/delete.
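A minimal sketch of the timerfd approach (Linux-specific; the helper below is just an illustration, not part of the original classes):
#include <sys/timerfd.h>
#include <sys/epoll.h>
#include <unistd.h>
#include <stdint.h>

// Create a timerfd that fires every period_ms milliseconds; the timer then
// becomes just another readable descriptor for select()/epoll().
int make_periodic_timer_fd(long period_ms)
{
    int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
    struct itimerspec its = {};
    its.it_value.tv_sec  = period_ms / 1000;
    its.it_value.tv_nsec = (period_ms % 1000) * 1000000L;
    its.it_interval      = its.it_value;       // periodic; leave zero for ONE_SHOT
    timerfd_settime(tfd, 0, &its, NULL);
    return tfd;
}

// In the event loop: add tfd to the epoll set with EPOLLIN; when it becomes
// readable, read the 8-byte expiration count and invoke the associated callback:
//     uint64_t expirations;
//     read(tfd, &expirations, sizeof expirations);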
Having timer code in another process requires processes to use inter-process communication mechanisms, thereby introducing more complexity, so it can actually make the system less robust, especially when the processes communicate via shared memory and can corrupt it.
Anyway, one can use Unix domain sockets to send messages back and forth between communicating processes on the same host. Again, select()/epoll() are your best friends. Or a higher-level framework can be used for message passing, such as 0MQ.

I want to wait on both a file descriptor and a mutex, what's the recommended way to do this?

I would like to spawn off threads to perform certain tasks, and use a thread-safe queue to communicate with them. I would also like to be doing IO to a variety of file descriptors while I'm waiting.
What's the recommended way to accomplish this? Do I have to create an inter-thread pipe and write to it when the queue goes from no elements to some elements? Isn't there a better way?
And if I have to create the inter-thread pipe, why don't more libraries that implement shared queues allow you to create the shared queue and inter-thread pipe as a single entity?
Does the fact I want to do this at all imply a fundamental design flaw?
I'm asking this about both C++ and Python. And I'm mildly interested in a cross-platform solution, but primarily interested in Linux.
For a more concrete example...
I have some code which will be searching for stuff in a filesystem tree. I have several communications channels open to the outside world through sockets. Requests that may (or may not) result in a need to search for stuff in the filesystem tree will be arriving.
I'm going to isolate the code that searches for stuff in the filesystem tree in one or more threads. I would like to take requests that result in a need to search the tree and put them in a thread-safe queue of things to be done by the searcher threads. The results will be put into a queue of completed searches.
I would like to be able to service all the non-search requests quickly while the searches are going on. I would like to be able to act on the search results in a timely fashion.
Servicing the incoming requests would generally imply some kind of event-driven architecture that uses epoll. The queue of disk-search requests and the return queue of results would imply a thread-safe queue that uses mutexes or semaphores to implement the thread safety.
The standard way to wait on an empty queue is to use a condition variable. But that won't work if I need to service other requests while I'm waiting. Either I end up polling the results queue all the time (and delaying the results by half the poll interval, on average), or I block on the queue and stop servicing requests.
Whenever one uses an event-driven architecture, one is required to have a single mechanism to report event completion. On Linux, if one is using files, one is required to use something from the select or poll family, meaning that one is stuck with using a pipe to signal all non-file-related events.
Edit: Linux has eventfd and timerfd. These can be added to your epoll list and used to break out of the epoll_wait when either triggered from another thread or on a timer event respectively.
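As a quick illustration of the eventfd route (not from the original answer), another thread can make epoll_wait return by bumping the eventfd counter:
#include <sys/eventfd.h>
#include <sys/epoll.h>
#include <unistd.h>
#include <stdint.h>

int efd = eventfd(0, EFD_NONBLOCK);          // register efd with epoll using EPOLLIN

// Producer thread, after pushing work onto the shared thread-safe queue:
void wake_event_loop() {
    uint64_t one = 1;
    write(efd, &one, sizeof one);            // makes efd readable, so epoll_wait returns
}

// Event-loop thread, when epoll reports efd as readable:
void drain_wakeups() {
    uint64_t count;
    read(efd, &count, sizeof count);         // resets the eventfd counter
    // now pop and process items from the thread-safe queue
}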
There is another option and that is signals. One can use fcntl to modify the file descriptor such that a signal is emitted when the file descriptor becomes active. The signal handler may then push a file-ready message onto any type of queue of your choosing. This may be a simple semaphore or mutex/condvar driven queue. Since one is now no longer using select/poll, one no longer needs to use a pipe to queue non-file-based messages.
Health warning: I have not tried this and although I cannot see why it will not work, I don't really know the performance implications of the signal approach.
Edit: Manipulating a mutex in a signal handler is probably a very bad idea.
I've solved this exact problem using what you mention, pipe() and libevent (which wraps epoll). The worker thread writes a byte to its pipe FD when its output queue goes from empty to non-empty. That wakes up the main IO thread, which can then grab the worker thread's output. This works great and is actually very simple to code.
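A sketch of that pipe-wakeup pattern with libevent 2.x (this is not the answerer's actual code; on_wakeup and the output queue are placeholders):
#include <event2/event.h>
#include <unistd.h>

int wake_pipe[2];                              // wake_pipe[1] is the worker's end

// Called by libevent when the read end of the pipe becomes readable.
static void on_wakeup(evutil_socket_t fd, short /*events*/, void* /*arg*/) {
    char buf[64];
    read(fd, buf, sizeof buf);                 // drain the wake-up bytes
    // pop completed results from the worker's output queue here
}

void run_main_loop() {
    pipe(wake_pipe);
    struct event_base* base = event_base_new();
    struct event* ev = event_new(base, wake_pipe[0], EV_READ | EV_PERSIST,
                                 on_wakeup, NULL);
    event_add(ev, NULL);
    event_base_dispatch(base);                 // your socket events go on 'base' too
}

// Worker thread, when its output queue goes from empty to non-empty:
//     write(wake_pipe[1], "x", 1);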
You have the Linux tag so I am going to throw this out: POSIX Message Queues do all this, which should fulfill your "built-in" request if not your less desired cross-platform wish.
The thread-safe synchronization is built-in. You can have your worker threads block on read of the queue. Alternatively MQs can use mq_notify() to spawn a new thread (or signal an existing one) when there is a new item put in the queue. And since it looks like you are going to be using select(), MQ's identifier (mqd_t) can be used as a file descriptor with select.
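For instance (a sketch only, not from the original answer): on Linux a mqd_t is a file descriptor, so it can sit in the same select() set as your sockets (see mq_overview(7)):
#include <mqueue.h>
#include <fcntl.h>
#include <sys/select.h>
#include <stddef.h>

void wait_on_socket_and_queue(int sock_fd)
{
    // "/work_queue" is a hypothetical queue name.
    mqd_t mq = mq_open("/work_queue", O_CREAT | O_RDONLY | O_NONBLOCK, 0600, NULL);

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sock_fd, &readfds);
    FD_SET(mq, &readfds);                      // Linux-specific: mqd_t is an fd

    int maxfd = (sock_fd > mq ? sock_fd : mq);
    if (select(maxfd + 1, &readfds, NULL, NULL, NULL) > 0) {
        if (FD_ISSET(mq, &readfds))      { /* mq_receive() a work item */ }
        if (FD_ISSET(sock_fd, &readfds)) { /* service the socket */ }
    }
}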
It seems nobody has mentioned this option yet:
Don't run select/poll/etc. in your "main thread". Start a dedicated secondary thread which does the I/O and pushes notifications into your thread-safe queue (the same queue which your other threads use to communicate with the main thread) when I/O operations complete.
Then your main thread just needs to wait on the notification queue.
Duck's and twk's are actually better answers than doron's (the one selected by the OP), in my opinion. doron suggests writing to a message queue from within the context of a signal handler, and states that the message queue can be "any type of queue." I would strongly caution you against this since many C library/system calls cannot safely be called from within a signal handler (see async-signal-safe).
In particular, if you choose a queue protected by a mutex, you should not access it from a signal handler. Consider this scenario: your consumer thread locks the queue to read it. Immediately after, the kernel delivers the signal to notify you that a file descriptor now has data on it. Your signal handler runs (in the consumer thread, necessarily) and tries to put something on your queue. To do this, it first has to take the lock. But that thread already holds the lock, so you are now deadlocked.
select/poll is, in my experience, the only viable solution to an event-driven program in UNIX/Linux. I wish there were a better way inside a multithreaded program, but you need some mechanism to "wake up" your consumer thread. I have yet to find a method that does not involve a system call (since the consumer thread is on a waitqueue inside the kernel during any blocking call such as select).
EDIT: I forgot to mention one Linux-specific way to handle signals when using select/poll: signalfd(2). You get a file descriptor you can select/poll on, and your handling code runs normally instead of in a signal handler's context.
This is a very commonly seen problem, especially when you are developing a network server-side program. Most Linux server-side programs' main loop looks like this:
epoll_add(serv_sock);
while(1){
    ret = epoll_wait();
    foreach(ret as fd){
        req = fd.read();
        resp = proc(req);
        fd.send(resp);
    }
}
It is a single-threaded (the main thread), epoll-based server framework. The problem is that it is single-threaded, not multi-threaded. It requires that proc() never block or run for a significant time (say, 10 ms in common cases).
If proc() ever runs for a long time, WE NEED MULTIPLE THREADS, and must execute proc() in a separate thread (the worker thread).
We can submit tasks to the worker thread without blocking the main thread, using a mutex-based message queue; it is fast enough.
epoll_add(serv_sock);
while(1){
    ret = epoll_wait();
    foreach(ret as fd){
        req = fd.read();
        queue.add_job(req); // fast, non-blocking
    }
}
Then we need a way to obtain the task result from a worker thread. How? What if we just check the message queue directly, before or after epoll_wait()?
epoll_add(serv_sock);
while(1){
    ret = epoll_wait(); // may block for 10 ms
    resp = queue.check_result(); // fast, non-blocking
    foreach(ret as fd){
        req = fd.read();
        queue.add_job(req); // fast, non-blocking
    }
}
However, the check will only run after epoll_wait() returns, and epoll_wait() usually blocks for the full timeout (10 ms in the common case) if none of the file descriptors it waits on are active.
For a server, 10 ms is quite a long time! Can we signal epoll_wait() to return immediately when a task result is generated?
Yes! I will describe how it is done in one of my open source projects:
Create a pipe for all worker threads, and have epoll wait on that pipe as well. Once a task result is generated, the worker thread writes one byte into the pipe, and epoll_wait() will return at almost the same moment - a Linux pipe has 5 us to 20 us of latency.
In my project SSDB (a Redis-protocol-compatible on-disk NoSQL database), I created a SelectableQueue for passing messages between the main thread and worker threads. Just as its name suggests, SelectableQueue has a file descriptor, which can be waited on by epoll.
SelectableQueue: https://github.com/ideawu/ssdb/blob/master/src/util/thread.h#L94
Usage in main thread:
epoll_add(serv_sock);
epoll_add(queue->fd());
while(1){
    ret = epoll_wait();
    foreach(ret as fd){
        if(fd is queue){
            sock, resp = queue->pop_result();
            sock.send(resp);
        }
        if(fd is client_socket){
            req = fd.read();
            queue->add_task(fd, req);
        }
    }
}
Usage in worker thread:
fd, req = queue->pop_task();
resp = proc(req);
queue->add_result(fd, resp);
C++11 has std::mutex and std::condition_variable. The two can be used to have one thread signal another when a certain condition is met. It sounds to me like you will need to build your solution out of these primitives. If your environment does not yet support these C++11 library features, you can find very similar ones in Boost. Sorry, can't say much about Python.
One way to accomplish what you're looking to do is by implementing the Observer Pattern
You would register your main thread as an observer with all your spawned threads, and have them notify it when they are done doing what they are supposed to do (or update it during their run with the info you need).
Basically, you want to change your approach to an event-driven model.

what can I use to replace sleep and usleep in my Qt app?

I'm importing a portion of existing code into my Qt app and noticed a sleep function in there. I see that this type of function has no place in event programming. What should I do instead?
UPDATE: After thought and feedback, I would say the answer is: only call sleep outside the GUI main thread, and if you need to wait in the GUI thread, use processEvents() or an event loop; this will prevent the GUI from freezing.
It isn't pretty but I found this in the Qt mailing list archives:
The sleep method of QThread is protected, but you can expose it like so:
class SleeperThread : public QThread
{
public:
    static void msleep(unsigned long msecs)
    {
        QThread::msleep(msecs);
    }
};
Then just call:
SleeperThread::msleep(1000);
from any thread.
However, a more elegant solution would be to refactor your code to use a QTimer - this might require you to save some state so you know what to do when the timer goes off.
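Roughly, the QTimer approach looks like this (MyWorker and continueWork() are hypothetical names, not from the original code):
#include <QObject>
#include <QTimer>

class MyWorker : public QObject
{
    Q_OBJECT
public slots:
    void continueWork() { /* the code that used to run after the sleep */ }
public:
    void startDelayedStep()
    {
        // Run continueWork() on this object after 1000 ms, then return
        // immediately to the event loop so the GUI stays responsive.
        QTimer::singleShot(1000, this, SLOT(continueWork()));
    }
};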
I don't recommend sleep in an event-based system, but if you want to...
You can use a wait condition; that way you can always interrupt the sleep if necessary.
//...
QMutex dummy;
dummy.lock();
QWaitCondition waitCondition;
waitCondition.wait(&dummy, waitTime);
//...
The reason sleep is a bad idea in event-based programming is that event-based programming is effectively a form of non-preemptive multitasking. By calling sleep, you prevent any other event from becoming active, and therefore block the processing of the thread.
In a request-response scenario for UDP packets, send the request and immediately wait for the response. Qt has good socket APIs which will ensure that the socket does not block while waiting for the event. The event will come when it comes. In your case, the QUdpSocket::readyRead() signal is your friend.
If you want to schedule an event for some point of time in the future, use QTimer. This will ensure that other events are not blocked.
It is not necessary to break down the events at all. All I needed to do was to call QApplication::processEvents() where sleep() was and this prevents the GUI from freezing.
I don't know how Qt handles events internally, but on most systems, at the lowest level, the application's life goes like this: the main thread's code is basically a loop (the message loop), in which, at each iteration, the application calls a function that gives it a new message; usually that function is blocking, i.e. if there are no messages the function does not return and the application is stopped.
Each time the function returns, the application has a new message to process; it usually has a recipient (the window to which it is sent), a meaning (the message code, e.g. the mouse pointer has been moved) and some additional data (e.g. the mouse has been moved to coords 24, 12).
Now, the application has to process the message; the OS or the GUI toolkit usually does this under the hood, so with some black magic the message is dispatched to its recipient and the correct event handler is executed. When the event handler returns, the internal function that called the event handler returns, and so does the one that called it, and so on, until control comes back to the main loop, which will now call the magic message-retrieving function again to get another message. This cycle goes on until the application terminates.
Now, I wrote all this to make you understand why sleep is bad in an event-driven GUI application: if you notice, while a message is being processed no other messages can be processed, since the main thread is busy running your event handler, which, after all, is just a function called by the message loop. So, if you make your event handler sleep, the message loop will sleep as well, which means that in the meantime the application won't receive and process any other messages, including the ones that make your window repaint, so your application will look hung from the user's perspective.
Long story short: don't use sleep unless you have to sleep for very short times (a few hundred milliseconds at most), otherwise the GUI will become unresponsive. You have several options to replace the sleeps: you can use a timer (QTimer), but it may require you to do a lot of bookkeeping between one timer event and the next. A popular alternative is to start a separate worker thread: it would just handle the UDP communication, and, being separate from the main thread, it would not cause any problems by sleeping when necessary. Obviously you must take care to protect the data shared between the threads with mutexes, and be careful to avoid race conditions and all the other kinds of problems that occur with multithreading.