Not safe to print in signal handler? [duplicate] - c++

I am still a little confused as to why exactly it is unsafe to receive a signal and call a non-async-safe function from within that signal handler. Could someone explain the reasoning behind this, and possibly give me some references I can follow to read up on this myself?
In other words, I am asking why it is unsafe to, say, call printf from within a signal handler. Is it because of intra-process issues and possible race conditions resulting from two possible calls to printf without protection, or is it because of inter-process races for the same resource (in this example stdout)? Say a thread within process A is calling printf and another thread receives the signal and then calls printf. Is it possibly because the kernel here will not know what to do, since it will not be able to distinguish between the two calls?

Say a thread within process A is calling printf and another thread receives the signal and then calls printf. Is it possibly because the kernel here will not know what to do because it will not be able to distinguish between the two calls?
It's not the kernel that will have issues. It's your application itself. printf is not a kernel function; it's a function in the C library that your application uses. printf is actually a fairly complicated function: it supports a wide variety of output formatting.
The end result of this formatting is a formatted output string that's written to standard output. That process in and of itself also involves some work. The formatted output string gets written into the internal stdout file handle's output buffer. The output buffer gets flushed (and only at this point does the kernel take over and write a defined chunk of data to a file) whenever certain defined conditions occur, namely when the output buffer is full, or, if the stream is line-buffered, whenever a newline character gets written to it.
All of that is supported by the output buffer's internal data structures, which you don't have to worry about because they're the C library's job. Now, a signal can arrive at any point while printf does its work. And I mean at any time. It might very well arrive while printf is in the middle of updating the output buffer's internal data structures, which are in a temporarily inconsistent state because printf hasn't yet finished updating them.
Example: on modern C/C++ implementations, printf is not async-signal-safe, but it is thread-safe. Multiple threads can use printf to write to standard output. It's the threads' responsibility to coordinate this process amongst themselves, to make sure that the eventual output actually makes sense and isn't jumbled up, at random, from multiple threads' output, but that's beside the point.
The point is that printf is thread safe, and that typically means that somewhere there's a mutex involved in the process. So, the sequence of events that might occur is:
printf acquires the internal mutex.
printf proceeds with its work with formatting the string and writing it to stdout's output buffer.
before printf is done, and can release the acquired mutex, a signal arrives.
Now, the internal mutex is locked. The thing about signal handlers is that it's generally not specified which thread, in a process, gets to handle the signal. A given implementation might pick a thread at random, or it might always pick the thread that's currently running. In any case, it can certainly pick the thread that has locked printf's internal mutex in order to handle the signal.
So now, your signal handler runs, and it also decides to call printf. Because printf's internal mutex is locked, the thread has to wait for the mutex to get unlocked.
And wait.
And wait.
Because, if you were keeping track of things: the mutex is locked by the thread that was interrupted to service the signal. The mutex won't get unlocked until the thread resumes running. But that won't happen until the signal handler terminates, and the thread resumes running, but the signal handler is now waiting for the mutex to get unlocked.
You're boned.
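To make the scenario concrete, here is a contrived sketch (not from the question; names invented) of the pattern being described: main() loops calling printf while a SIGINT handler also calls printf. Whether it actually deadlocks, or merely scribbles over a half-updated buffer, depends on timing and on the C library, but it is exactly the undefined situation described above.

#include <csignal>
#include <cstdio>

extern "C" void onSignal(int)
{
    // Undefined behaviour: printf is not async-signal-safe. If the signal
    // arrived while this thread was already inside printf, this call may
    // deadlock on stdio's internal lock or see a half-updated buffer.
    std::printf("handler: this printf call is unsafe\n");
}

int main()
{
    std::signal(SIGINT, onSignal);
    for (;;)
    {
        // If SIGINT arrives while we are inside this printf, the handler
        // re-enters printf on the same thread.
        std::printf("main: working...\n");
    }
}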
Now, of course, printf might use the C++ equivalent of std::recursive_mutex, to avoid this problem, but even this won't solve all possible deadlocks that could get introduced by a signal.
To summarize, it's "unsafe to receive a signal and call a non async safe function from within that signal handler" pretty much by definition: a signal is an asynchronous event, and a function that is not async-signal-safe cannot, by definition, be safely called from one. Water is wet because it's water, and an async-unsafe function cannot be called from an asynchronous signal handler.
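The conventional way out is to keep the handler itself async-signal-safe and do the printing elsewhere. A minimal sketch (assumed names, not part of the original answer): the handler only stores to a volatile sig_atomic_t flag and, if it must produce output, calls write() directly; both are async-signal-safe, and the ordinary code calls printf later, outside the handler.

#include <csignal>
#include <cstdio>
#include <unistd.h>

static volatile std::sig_atomic_t gotSignal = 0;

extern "C" void onSignal(int)
{
    gotSignal = 1;                               // async-signal-safe flag store
    const char msg[] = "caught a signal\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);   // write() is async-signal-safe
}

int main()
{
    std::signal(SIGINT, onSignal);
    while (!gotSignal)
    {
        std::printf("working...\n");             // fine here, outside the handler
        sleep(1);
    }
    std::printf("shutting down cleanly\n");
}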

Related

Libcurl - curl_multi_wakeup

Reading the function description of curl_multi_wakeup:
Calling this function only guarantees to wake up the current (or the
next if there is no current) curl_multi_poll call, which means it is
possible that multiple calls to this function will wake up the same
waiting operation.
I am confused by the phrase - "the same waiting operation". How's that?
That is, suppose I have curl_multi_poll() waiting for events in thread "A".
Now, for example, I call the curl_multi_wakeup() function twice from thread "B" and thread "C".
And what happens judging by this phrase:
...function will wake up the same waiting operation.
Does it turn out that curl_multi_poll() wakes up only once?
curl_multi_wakeup is meant to be used with a pool of threads waiting on curl_multi_poll.
What the document says is that if you call curl_multi_wakeup repeatedly, it will possibly wake up only a single thread, not necessarily one thread for each call to curl_multi_wakeup.
curl_multi_poll() is a relatively new call, designed (together with curl_multi_wakeup()) to simplify "interrupting" threads that are waiting in it. Here's a good explanation:
https://daniel.haxx.se/blog/2019/12/09/this-is-your-wake-up-curl/
curl_multi_poll()
[is a] function which asks libcurl to wait for activity on any of the
involved transfers – or sleep and don’t return for the next N
milliseconds.
Calling this waiting function (or using the older curl_multi_wait() or
even doing a select() or poll() call “manually”) is crucial for a
well-behaving program. It is important to let the code go to sleep
like this when there’s nothing to do and have the system wake it up
again when it needs to do work. Failing to do this correctly risks
having libcurl instead busy-loop somewhere and that can make your
application use 100% CPU during periods. That’s terribly unnecessary
and bad for multiple reasons.
When ... something happens and the application for example needs to
shut down immediately, users have been asking for a way to do a wake
up call.
curl_multi_wakeup() explicitly makes a curl_multi_poll() function
return immediately. It is designed to be possible to use from a
different thread.
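To make that concrete, here is a rough sketch (assumed setup, not the asker's code) with one thread blocking in curl_multi_poll() and two other threads each calling curl_multi_wakeup(), as in the question. Both wakeup calls may be satisfied by a single return from the same poll.

#include <curl/curl.h>
#include <atomic>
#include <cstdio>
#include <thread>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM *multi = curl_multi_init();
    std::atomic<bool> stop(false);

    // Thread "A": waits in curl_multi_poll with a long timeout.
    std::thread waiter([&] {
        while (!stop.load())
        {
            int numfds = 0;
            curl_multi_poll(multi, nullptr, 0, 10000, &numfds);
            std::printf("curl_multi_poll returned, numfds=%d\n", numfds);
        }
    });

    // Threads "B" and "C": both call curl_multi_wakeup. The waiter may wake
    // up only once for the two calls.
    std::thread b([&] { curl_multi_wakeup(multi); });
    std::thread c([&] { curl_multi_wakeup(multi); });
    b.join();
    c.join();

    stop.store(true);
    curl_multi_wakeup(multi);   // make sure the waiter notices the stop flag
    waiter.join();

    curl_multi_cleanup(multi);
    curl_global_cleanup();
}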

How to cleanly exit a threaded C++ program?

I am creating multiple threads in my program. On pressing Ctrl-C, a signal handler is called. Inside the signal handler, I have put exit(0) at the end. The thing is that sometimes the program terminates safely, but at other times I get a runtime error stating
abort() has been called
So what would be the possible solution to avoid the error?
The usual way is to set an atomic flag (like std::atomic<bool>) which is checked by all threads (including the main thread). If set, then the sub-threads exit, and the main thread starts to join the sub-threads. Then you can exit cleanly.
If you use std::thread for the threads, that's a possible reason for the crashes you have. You must join the thread before the std::thread object is destructed.
Others have mentioned having the signal-handler set a std::atomic<bool> and having all the other threads periodically check that value to know when to exit.
That approach works well as long as all of your other threads are periodically waking up anyway, at a reasonable frequency.
It's not entirely satisfactory if one or more of your threads is purely event-driven, however -- in an event-driven program, threads are only supposed to wake up when there is some work for them to do, which means that they might well be asleep for days or weeks at a time. If they are forced to wake up every (so many) milliseconds simply to poll an atomic-boolean-flag, that makes an otherwise extremely CPU-efficient program much less CPU-efficient, since now every thread is waking up at short regular intervals, 24/7/365. This can be particularly problematic if you are trying to conserve battery life, as it can prevent the CPU from going into power-saving mode.
An alternative approach that avoids polling would be this one:
On startup, have your main thread create an fd-pipe or socket-pair (by calling pipe() or socketpair())
Have your main thread (or possibly some other responsible thread) include the receiving-socket in its read-ready select() fd_set (or take a similar action for poll() or whatever wait-for-IO function that thread blocks in)
When the signal-handler is executed, have it write a byte (any byte, doesn't matter what) into the sending-socket.
That will cause the main thread's select() call to immediately return, with FD_ISSET(receivingSocket) indicating true because of the received byte
At that point, your main thread knows it is time for the process to exit, so it can start directing all of its child threads to start shutting down (via whatever mechanism is convenient; atomic booleans or pipes or something else)
After telling all the child threads to start shutting down, the main thread should then call join() on each child thread, so that it can be guaranteed that all of the child threads are actually gone before main() returns. (This is necessary because otherwise there is a risk of a race condition -- e.g. the post-main() cleanup code might occasionally free a resource while a still-executing child thread was still using it, leading to a crash)
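A minimal sketch of those steps (POSIX calls, invented names, error handling omitted): the handler writes one byte with write(), which is async-signal-safe, and the main thread's select() wakes up and starts the shutdown.

#include <csignal>
#include <cstdio>
#include <sys/select.h>
#include <unistd.h>

static int wakeupPipe[2];   // [0] = read end, [1] = write end

extern "C" void onSigInt(int)
{
    const char byte = 'x';
    write(wakeupPipe[1], &byte, 1);   // async-signal-safe
}

int main()
{
    pipe(wakeupPipe);
    std::signal(SIGINT, onSigInt);

    for (;;)
    {
        fd_set readSet;
        FD_ZERO(&readSet);
        FD_SET(wakeupPipe[0], &readSet);

        // Blocks until the handler writes its byte (EINTR just loops again).
        int ready = select(wakeupPipe[0] + 1, &readSet, nullptr, nullptr, nullptr);
        if (ready > 0 && FD_ISSET(wakeupPipe[0], &readSet))
        {
            std::printf("shutdown requested\n");
            // here: tell the child threads to stop, then join() each of them
            break;
        }
    }
}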
The first thing you must accept is that threading is hard.
A "program using threading" is about as generic as a "program using memory", and your question is similar to "how do I not corrupt memory in a program using memory?"
The way you handle threading problem is to restrict how you use threads and the behavior of the threads.
If your threading system is a bunch of small operations composed into a data flow network, with an implicit guarantee that if an operation is too big it is broken down into smaller operations and/or does checkpoints with the system, then shutting down looks very different than if you have a thread that loads an external DLL that then runs it for somewhere from 1 second to 10 hours to infinite length.
Like most things in C++, solving your problem is going to be about ownership, control and (at a last resort) hacks.
Like data in C++, every thread should be owned. The owner of a thread should have significant control over that thread, and be able to tell it that the application is shutting down. The shut down mechanism should be robust and tested, and ideally connected to other mechanisms (like early-abort of speculative tasks).
The fact you are calling exit(0) is a bad sign. It implies your main thread of execution doesn't have a clean shutdown path. Start there; the interrupt handler should signal the main thread that shutdown should begin, and then your main thread should shut down gracefully. All stack frames should unwind, data should be cleaned up, etc.
Then the same kind of logic that permits that clean and fast shutdown should also be applied to your threaded off code.
Anyone telling you it is as simple as a condition variable/atomic boolean and polling is selling you a bill of goods. That will only work in simple cases if you are lucky, and determining if it works reliably is going to be quite hard.
In addition to Some programmer dude's answer, and related to the discussion in the comment section: you need to make the flag that controls termination of your threads an atomic type.
Consider the following case:
bool done = false;   // plain bool shared between threads: this is the bug

void pending_thread()
{
    while (!done)    // unsynchronized read of 'done'
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    // do something that depends on working thread results
}

void worker_thread()
{
    // do something for pending thread
    done = true;     // unsynchronized write of 'done': data race
}
Here the worker thread can also be your main thread, and done is the flag that terminates your thread, but the pending thread needs to do something with the data produced by the working thread before exiting.
This example has a race condition and therefore undefined behaviour, and in the real world it's really hard to find what the actual problem is.
Now the corrected version using std::atomic:
std::atomic<bool> done(false);

void pending_thread()
{
    while (!done.load())
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    // do something that depends on working thread results
}

void worker_thread()
{
    // do something for pending thread
    done = true;     // atomic store: no data race
}
You can now exit the thread without being concerned about a race condition or UB.

How to stop a qThread in QT [duplicate]

This question already has an answer here:
Qt, How to pause QThread immediately
(1 answer)
Closed 5 years ago.
I would like to know how to properly stop a QThread. I have an infinite loop in a thread, and I would like to stop it when I perform a specific action:
I have tried:
if (thread->isRunning()) {
    worker->stop();
    thread->terminate();
}
The stop() method sets a value to false to get out of my infinite loop.
Furthermore, I don't really understand the difference between quit(), terminate(), and wait(). Can someone explain?
Thanks.
A proper answer depends on how you actually use QThread and how you've implemented stop().
The intended use case in Qt assumes the following model:
You create an object that will do some useful work in response to signals
You create a `QThread` and move your object to this thread
When you send a signal to your object, it's processed in the `QThread` you've created
Now you need to understand some internals of how this is actually implemented. There are several "models" of signals in Qt, and in some cases when you "send a signal" you effectively just call a "slot" function. That's a "direct" connection, and in this case the slot() will be executed in the caller's thread, the one that raised the signal. So, in order to communicate with another thread, Qt offers another kind of connection: queued connections. Instead of calling the slot(), the caller leaves a message for the object that owns this slot. The thread associated with this object will read this message (some time later) and execute the slot() itself.
Now you can understand what's happening when you create and execute a QThread. A newly created thread will execute QThread::run(), which, by default, will execute QThread::exec(), which is nothing but an infinite loop that looks for messages for objects associated with the thread and transfers them to the slots of those objects. Calling QThread::quit() posts a termination message to this queue. When QThread::exec() reads it, it will stop further processing of events, exit the infinite loop, and gently terminate the thread.
Now, as you may guess, in order to receive termination message, two conditions must be met:
You should be running `QThread::exec()`
You should exit from slot that is currently running
The first one is typically violated when people subclass QThread and override QThread::run with their own code. In most cases this is wrong usage, but it's still very widely taught and used. In your case it seems that you're violating the second requirement: your code runs an infinite loop, so QThread::exec() simply doesn't get control and doesn't have any chance to check that it needs to exit. Drop that infinite loop of yours into the recycle bin; QThread::exec() is already running such a loop for you. Think about how to rewrite your code so that it does not run infinite loops; it's always possible.
Think about your program in terms of a "messages-to-thread" concept. If you're checking something periodically, create a QTimer that will send messages to your object and implement the check in your slot. If you are processing a large amount of data, split the data into smaller chunks and write your object so it processes one chunk at a time in response to some message. E.g. if you are processing an image line-by-line, make a slot processLine(int line) and send a sequence of signals "0, 1, 2... height-1" to that slot. Note that you will also have to call QThread::quit() explicitly once you are done processing, because the event loop is infinite; it doesn't "know" when you have processed all the lines of your image. Also consider using QtConcurrent for computationally intensive tasks instead of QThread.
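A rough sketch (Qt 5 style, names invented) of that message-driven structure: the thread just runs its default exec() loop, a timer posts small chunks of work to an object living in that thread, and quit()/wait() shut it down cleanly.

#include <QCoreApplication>
#include <QDebug>
#include <QObject>
#include <QThread>
#include <QTimer>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QThread thread;       // default run() just calls exec(): an event loop
    QObject worker;       // context object that "lives" in the worker thread
    worker.moveToThread(&thread);
    thread.start();

    QTimer timer;         // lives in the main thread
    QObject::connect(&timer, &QTimer::timeout, &worker, [] {
        // Queued across threads: runs in the worker thread, one chunk at a time.
        qDebug() << "processing one chunk in" << QThread::currentThread();
    });
    timer.start(100);

    // Shut down cleanly after one second: quit() posts the termination
    // message to the worker thread's event loop; wait() blocks until run() returns.
    QTimer::singleShot(1000, [&] {
        thread.quit();
        thread.wait();
        app.quit();
    });
    return app.exec();
}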
Now, QThread::terminate() stops a thread in a very different manner. It simply asks the OS to kill your thread. And the OS will abruptly stop your thread at an arbitrary position in its code. The thread's stack memory will be freed, but any memory that stack pointed to won't be. If the thread owned some resource (such as a file or a mutex), it won't ever release it. An operation that involves writing data to memory can be stopped in the middle and leave a memory block (e.g. an object) incompletely filled and in an invalid state. As you might guess from this description, you should never, ever call ::terminate(), except for very rare cases where keeping the thread running is worse than getting memory and resource leaks.
QThread::wait() is just a convenience function that waits until QThread ceases to execute. It will work both with exit() and terminate().
You can also implement a threading system of your own, subclassed from QThread, and implement your own thread termination procedure. All you need to do to exit the thread is, essentially, just return from QThread::run() when it becomes necessary; you can use neither exit() nor terminate() for that purpose. Create your own synchronization primitive and use it to signal your code to return. But in most cases this is not a good idea; keep in mind that (unless you run a QEventLoop yourself) Qt signals and slots won't work properly in that case.
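For that subclassing approach, a sketch (invented names) could look like the following: an atomic flag makes run() return, which is all that is needed to end the thread, and wait() returns once run() has.

#include <QThread>
#include <atomic>

class LoopThread : public QThread
{
public:
    void requestStop() { stopRequested.store(true); }

protected:
    void run() override
    {
        while (!stopRequested.load())
        {
            // ... one bounded chunk of work ...
            msleep(10);
        }
        // returning from run() ends the thread cleanly
    }

private:
    std::atomic<bool> stopRequested{false};
};

// usage:
//   LoopThread t;
//   t.start();
//   ...
//   t.requestStop();
//   t.wait();   // returns once run() has returned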

When to use boost thread join function?

I've recently managed to create a thread using the boost::bind function.
For the time being, I'm having the thread display to stdout. I can see the output if I use thread.join. However, if I don't do this, I don't see any output.
Why is this?
I'm hoping I don't have to use the join function, because I would like to call this function multiple times, without having to wait for the previously launched thread to finish.
Thanks for your responses. What I really wanted to make sure of was that the thread actually executed. So I added a system call to touch a non-existent file, and it was there afterwards, so the thread did execute.
I can see the output if I use thread.join. However, if I don't do this, I don't see any output. Why is this?
Most probably this is a side-effect of the way standard output is buffered on your system. Do you have '\n' and/or endl sprinkled around in every print statement? That should force output (endl will flush the stream in addition).
If you look at the documentation for join, you'll see that this function is called to wait until termination of the thread. When a thread is terminated (or, for that matter, a process), all buffered output is flushed.
You do not need to wait till the thread has completed execution in order to see output. There are at least a couple of ways (I can remember off the top of my head) you can achieve this:
Make cout/stdout unbuffered, or
Use \n and fflush(stdout) (for C-style I/O) or std::endl stream manipulator
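For instance, a small sketch (assumed setup, not the asker's code): the worker prints with std::endl so each line is flushed as soon as it is written, and main still joins before returning so the threads are guaranteed to have run.

#include <boost/thread.hpp>
#include <iostream>

void worker(int id)
{
    // std::endl writes '\n' and flushes, so the line appears immediately.
    std::cout << "hello from thread " << id << std::endl;
}

int main()
{
    boost::thread t1(worker, 1);
    boost::thread t2(worker, 2);

    // Still join before main() returns; otherwise (with the Boost versions
    // of that era) the threads are detached on destruction and the process
    // may exit before they have printed anything.
    t1.join();
    t2.join();
}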
By default the thread object's destructor does not join the thread; could it be that your main thread terminates and closes STDOUT before the thread manages to flush its output?
Note that in C++0x the default destructor for thread does join (rather than detach as in boost) so this will not happen (see A plea to reconsider detach-on-destruction for thread objects).
Note: since this was written, the C++11 standard was changed, and destroying a std::thread that is still joinable now calls std::terminate, which ends the process.