How to create a race condition in C++

I want to test some Object's function for thread safety in a race condition. In order to test this I would like to call a function simultaneously from two (or more) different threads. How can I write code that guarantees that the function calls will occur at the same time, or at least close enough that it will have the desired effect?

The best you can do is hammer heavily at the code and watch for any little signs of a problem. If there's a race condition, you should be able to write code that will eventually trigger it. Consider:
#include <thread>
#include <assert.h>

int x = 0;

void foo()
{
    while (true)
    {
        x = x + 1;
        x = x - 1;
        assert(x == 0);
    }
}

int main()
{
    std::thread t(foo);
    std::thread t2(foo);
    t.join();
    t2.join();
}
Everywhere I test it, it asserts pretty quickly. I could then add critical sections until the assert is gone.
There is, in fact, no guarantee that it will ever assert. Still, I've used this technique repeatedly on large-scale production code; you may just need to hammer at your code for a long while to be sure.
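For illustration, here is one way the "critical sections" fix might look, a minimal sketch wrapping the increment, decrement and assert in a std::mutex; the loop still runs forever, just like the original, but the assert should no longer fire:

#include <thread>
#include <mutex>
#include <cassert>

int x = 0;
std::mutex m;   // protects x

void foo()
{
    while (true)
    {
        std::lock_guard<std::mutex> lock(m);   // the "critical section"
        x = x + 1;
        x = x - 1;
        assert(x == 0);
    }
}

int main()
{
    std::thread t(foo);
    std::thread t2(foo);
    t.join();
    t2.join();
}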

Another approach: create a struct containing a large array of integers, all zero, perhaps 300-500 kB. Then, from two threads, copy two other structs into it (one filled with 1s, the other with 2s), and have the main thread wait on an atomic flag or barrier so it only inspects the buffer once both copies have been issued.
The two unsynchronized copies have a high chance of interleaving, so you may well see a mix of 1s and 2s (and maybe even leftover 0s?) in the buffer, which tells you the race happened.
Keep in mind, though, that once you remove the scaffolding such as the atomics, you have a different program; its undefined behavior may manifest differently.
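A rough sketch of that idea (here a join on each writer thread stands in for the atomic flag described above, ~400 kB is an arbitrary size, and the concurrent copies are a deliberate data race, i.e. undefined behavior by design):

#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

constexpr std::size_t N = 400 * 1024;          // ~400 kB of bytes
static std::array<unsigned char, N> shared{};  // starts as all 0s

void writer(unsigned char value)
{
    std::vector<unsigned char> src(N, value);           // "struct" full of 1s or 2s
    std::copy(src.begin(), src.end(), shared.begin());  // racy copy into shared
}

int main()
{
    std::thread t1(writer, 1);
    std::thread t2(writer, 2);
    t1.join();
    t2.join();

    bool saw0 = false, saw1 = false, saw2 = false;
    for (unsigned char b : shared)
    {
        saw0 |= (b == 0);
        saw1 |= (b == 1);
        saw2 |= (b == 2);
    }
    std::printf("saw 0s: %d, saw 1s: %d, saw 2s: %d\n", saw0, saw1, saw2);
}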

A great way to do this is by inserting well-timed sleep calls. You can use this, for example, to force combinations of events in an order you want to test (Thread 1 does something, then Thread 2 does something, then Thread 1 does something else). A downside is that you have to have an idea of where to put the sleep calls. After doing this for a little while you should start to get a feel for it, but some good intuition helps in the beginning.
You may be able to conditionally call sleep, or hit a breakpoint, from a specific thread if you can get a handle to the thread id.
Also, Visual Studio and (I believe) GDB allow you to freeze some threads and/or run specific ones.
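A minimal sketch of the "sleep only in a specific thread" idea, assuming a hypothetical maybe_delay_here() helper that you sprinkle at the point you want to stress:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Only the chosen "victim" thread pauses here, widening the race window
// at one specific point in the code under test.
std::atomic<std::thread::id> victim{};

void maybe_delay_here()
{
    if (std::this_thread::get_id() == victim.load())
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
}

void worker(int n)
{
    // ... work on shared state ...
    maybe_delay_here();   // only the victim thread sleeps here
    std::cout << "worker " << n << " reached the critical point\n";
}

int main()
{
    std::thread t1(worker, 1);
    std::thread t2(worker, 2);
    victim.store(t1.get_id());   // t1 becomes the slow thread (best effort:
                                 // t1 may pass the check before this runs)
    t1.join();
    t2.join();
}

Tuning the sleep length, and which thread is the victim, changes which interleavings you exercise.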

Related

Forcibly terminate method after a certain amount of time

Say I have a function whose prototype looks like this, belonging to class container_class:
std::vector<int> container_class::func(int param);
The function may or may not cause an infinite loop on certain inputs; it is impossible to tell which inputs will succeed and which will cause an infinite loop. The function is in a library whose source I do not have and cannot modify (this is a bug and will be fixed in the next release in a few months, but for now I need a way to work around it), so solutions which modify the function or class will not work.
I've tried isolating the function using std::async and std::future, and using a while loop to constantly check the state of the thread:
container_class c;
long start = get_current_time(); // get the current time in ms
auto future = std::async(&container_class::func, &c, 2);
while (future.wait_for(0ms) != std::future_status::ready) {
    if (get_current_time() - start > 1000) {
        // forcibly terminate future
    }
    sleep(2);
}
This code has many problems. One is that I can't forcibly terminate the std::future object (and the thread that it represents).
At the far extreme, if I can't find any other solution, I can isolate the function in its own executable, run it, and then check its state and terminate it appropriately. However, I would rather not do this.
How can I accomplish this? Is there a better way than what I'm doing right now?
You are out of luck, sorry.
First off, C++ doesn't even guarantee you there will be a thread for future execution. Although it would be extremely hard (probably impossible) to implement all std::async guarantees in a single thread, there is no direct prohibition of that, and also, there is certainly no guarantee that there will be a thread per async call. Because of that, there is no way to cancel the async execution.
Second, there is no such way even in the lowest level of thread implementation. While pthread_cancel exists, it won't protect you from infinite loops not visiting cancellation points, for example.
You cannot arbitrarily kill a thread in POSIX, and the C++ thread model is built on top of it. A process really can't act as a scheduler of its own threads, and while that is sometimes a pain, it is what it is.
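Given that, the last-resort approach mentioned in the question, isolating the call in its own process so it can be killed, is usually the practical way out. A rough POSIX sketch, with a stand-in for the library call and the result simply discarded (returning the real result to the parent would need a pipe or shared memory):

#include <csignal>
#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Stand-in for the library call that may loop forever on some inputs.
void possibly_infinite_call()
{
    volatile bool spin = true;
    while (spin) {}               // simulate the bad-input case
}

int main()
{
    pid_t child = fork();
    if (child < 0)
        return 1;                 // fork failed
    if (child == 0)
    {
        possibly_infinite_call(); // runs in the child; may never return
        _exit(0);
    }

    // Parent: poll for up to ~1 second, then kill the child.
    for (int waited_ms = 0; waited_ms < 1000; waited_ms += 10)
    {
        int status = 0;
        if (waitpid(child, &status, WNOHANG) == child)
        {
            std::puts("call finished in time");
            return 0;
        }
        usleep(10 * 1000);
    }

    kill(child, SIGKILL);
    waitpid(child, nullptr, 0);   // reap the killed child
    std::puts("call timed out and was terminated");
    return 1;
}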

Is there a real-life situation where a simple pointer-to-bool as thread cancellation flag will not effectively cancel a thread?

First and foremost, I understand that formally, using a non-atomic flag to cancel a thread is very much undefined behaviour in the sense that the language does not specify if this variable will be written to before the thread exits.
At work, this was implemented a long time ago, and most calculation threads check the value of this bool throughout their work, so as to gracefully cancel whatever it is they're doing. When I first saw this, my first reaction was to change all of it to use a better way (in this case, QThread::requestInterruption and QThread::interruptionRequested seemed like a viable alternative). A quick search through the code turned up about 800 occurrences of this variable/construct throughout the codebase, so I let it go.
When I approached a (senior, in terms of years of experience) colleague, he assured me that although it might indeed be wrong, he had never seen it fail to fulfill its purpose. He argued that the only case where it would go wrong is if a (group of) thread(s) is allowed to run while the thread that actually changes this flag never gets to execute until the other threads are finished. He also argued that in this case, the OS would intervene and fairly distribute runtime across all threads, resulting in perhaps a delay of the cancellation.
Now my question is: is there any real-life situation (preferably on a regular system, based upon x86/ARM, preferably C or C++) where this does indeed fail?
Note I'm not trying to win the argument, as my colleague agrees it is technically incorrect, but I would like to know if it could cause problems and under which circumstances this might occur.
The simplest way to beat this is to reduce it to a rather trivial example. The compiler will optimize out reading the flag because it is not atomic and being written to by another thread is UB; therefore the flag won't ever get actually read.
Your colleague's argument is predicated on the assumption that the compiler will actually load the flag when you de-reference the flag. But in fact it has no obligation to do so.
#include <thread>
#include <iostream>

bool cancelled = false;
bool finished = false;

void thread1() {
    while (!cancelled) {
        std::cout << "Not cancelled";
    }
}

int main() {
    std::thread t(thread1);
    t.detach();
    cancelled = true;
    while (!finished) {}
}
To run it on Coliru, load http://coliru.stacked-crooked.com/a/5be139ee34bf0a80; you will need to edit it and make a trivial change, because the caching is broken for snippets that do not terminate.
Effectively, he's simply betting that the compiler's optimizer will do a poor job, which seems like a truly terrible thing to rely upon.
As long as you wait for the threads to finish before using their data, you'll be OK in practice: the memory barriers set by std::thread::join or QThread::wait will protect you.
Your worry isn't about the cancelled variable; as long as it's volatile, you're fine in practice. You should worry about reading inconsistent state of the data modified by the threads.
As can be inferred from Mine's comment, Puppy's code example does not demonstrate the problem. A few minor modifications are necessary.
Firstly, we must add finished = true; at the end of thread1 so that the program even pretends to be able to terminate.
Now, the optimizer isn't able to check every function in every translation unit to be sure that cancelled is in fact always false when entering thread1, so it cannot make the daring optimization to remove the while loop and everything after it. We can fix that by setting cancelled to false at the start of thread1.
With the previous addition, for fairness, we must also continually set cancelled to true in main, because otherwise we cannot guarantee that the single assignment in main is not scheduled before the initial assignment in thread1.
Edit: Added qualifiers, and synchronous join instead of detachment.
#include <thread>
#include <iostream>

bool cancelled = false;
bool finished = false;

void thread1() {
    cancelled = false;
    while (!cancelled)
        ;
    finished = true;
}

int main() {
    std::thread t(thread1);
    while (!finished) {
        std::cout << "trying to cancel\n";
        cancelled = true;
    }
    t.join();
}
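And for completeness, a sketch of the well-defined version using std::atomic<bool>: the compiler can no longer remove the loads and stores, and the program terminates reliably.

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<bool> cancelled{false};
std::atomic<bool> finished{false};

void thread1()
{
    while (!cancelled.load())
        ;                        // spin until cancellation is observed
    finished.store(true);
}

int main()
{
    std::thread t(thread1);
    cancelled.store(true);       // a single store is now guaranteed to be seen
    while (!finished.load())
        std::this_thread::yield();
    std::cout << "worker observed the flag and finished\n";
    t.join();
}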

Is a mutex mandatory to access an extern variable from a different thread?

I am developing an application in Qt/C++. At some point, there are two threads: one is the UI thread and the other one is a background thread. I have to do some operation from the background thread based on the value of an extern variable of type bool. I am setting this value by clicking a button on the UI.
header.cpp
extern bool globalVar;
mainWindow.cpp
// main UI thread, on button click
void setVale(bool val){
    globalVar = val;
}
backgroundThread.cpp
while(1){
    if(globalVar)
        //do some operation
    else
        //do some other operation
}
Here, writing to globalVar happens only when the user clicks the button whereas reading happens continuously.
So my questions are:
In a situation like the one above, is a mutex mandatory?
If a read and a write happen at the same time, does this cause the application to crash?
If a read and a write happen at the same time, is globalVar going to have some value other than true or false?
Finally, does the OS provide any kind of locking mechanism to prevent different threads from reading and writing the same memory location at the same time?
The loop
while(1){
    if(globalVar)
        //do some operation
    else
        //do some other operation
}
is busy waiting, which is extremely wasteful. Thus, you're probably better off with some classic synchronization that will wake the background thread (mostly) when there is something to be done. You should consider adapting this example of std::condition_variable.
Say you start with:
#include <thread>
#include <mutex>
#include <condition_variable>
std::mutex m;
std::condition_variable cv;
bool ready = false;
Your worker thread can then be something like this:
void worker_thread()
{
    while(true)
    {
        // Wait until main() sends data
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, []{return ready;});
        ready = false;

        // ... do the actual work here ...

        lk.unlock();
    }
}
The notifying thread should do something like this:
{
    std::lock_guard<std::mutex> lk(m);
    ready = true;
}
cv.notify_one();
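Putting the pieces together into a complete, runnable sketch (the done flag and the fixed number of notifications are additions so the example can terminate):

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;
bool done = false;   // added so the sketch can shut the worker down

void worker_thread()
{
    while (true)
    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, []{ return ready; });   // sleeps until notified and ready
        ready = false;
        if (done)
            return;                         // lk unlocks on destruction
        lk.unlock();
        std::cout << "worker: doing some operation\n";
    }
}

int main()
{
    std::thread t(worker_thread);

    for (int i = 0; i < 3; ++i)
    {
        {
            std::lock_guard<std::mutex> lk(m);
            ready = true;
        }
        cv.notify_one();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }

    {
        std::lock_guard<std::mutex> lk(m);
        ready = true;
        done = true;
    }
    cv.notify_one();
    t.join();
}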
Since it is just a single plain bool, I'd say a mutex is overkill; you should just go for an atomic integer instead. An atomic is read and written as a single indivisible operation, so no worries there, and it will be lock-free on common platforms, which is always better if possible.
If it is something more complex, then by all means go for a mutex.
It won't crash from that alone, but you can get data corruption, which may crash the application.
The system will not manage that stuff for you, you do it manually, just make sure all access to the data goes through the mutex.
Edit:
Since you specify a number of times that you don't want a complex solution, you may opt for simply using a mutex instead of the bool. There is no need to protect the bool with a mutex, since you can use the mutex as a bool, and yes, you could go with an atomic, but that's what the mutex already does (plus some extra functionality in the case of recursive mutexes).
It also matters what your exact workload is, since your example doesn't make a lot of sense in practice. It would be helpful to know what those operations actually are.
So in your ui thread you could simply val ? mutex.lock() : mutex.unlock(), and in your secondary thread you could use if (mutex.tryLock()) doStuff; mutex.unlock(); else doOtherStuff;. Now if the operation in the secondary thread takes too long and you happen to be changing the lock in the main thread, that will block the main thread until the secondary thread unlocks. You could use tryLock(timeout) in the main thread, depending on what you prefer, lock() will block until success, while tryLock(timeout) will prevent blocking but the lock may fail. Also, take care not to unlock from a thread other than the one you locked with, and not to unlock an already unlocked mutex.
Depending on what you are actually doing, maybe an asynchronous event driven approach would be more appropriate. Do you really need that while(1)? How frequently do you perform those operations?
In a situation like the one above, is a mutex mandatory?
A mutex is one tool that will work. What you actually need are three things:
a means of ensuring an atomic update (a bool will give you this as it's mandated to be an integral type by the standard)
a means of ensuring that the effects of a write made by one thread is actually visible in the other thread. This may sound counter-intuitive but the c++ memory model is single-threaded and optimisations (software and hardware) do not need to consider cross-thread communication, and...
a means of preventing the compiler (and CPU!!) from re-ordering the reads and writes.
The answer to the implied question is 'yes'. You will need something that does all of these things (see below).
If a read and a write happen at the same time, does this cause the application to crash?
not when it's a bool, but the program won't behave as you expect. In fact, because the program is now exhibiting undefined behaviour you can no longer reason about its behaviour at all.
If a read and a write happen at the same time, is globalVar going to have some value other than true or false?
not in this case because it's an intrinsic (atomic) type.
Does the OS provide any kind of locking mechanism to prevent different threads from reading and writing the same memory location at the same time?
Not unless you specify one.
Your options are:
std::atomic<bool>
std::mutex
std::atomic_thread_fence
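For reference, a minimal sketch of the first option applied to the question's fragments; only the type of globalVar changes, since std::atomic<bool> supports plain assignment and reads (backgroundLoop is just a placeholder name for the background thread's loop):

#include <atomic>

// In the header: extern std::atomic<bool> globalVar;
// In exactly one .cpp file:
std::atomic<bool> globalVar{false};

// main UI thread, on button click:
void setVale(bool val)
{
    globalVar = val;       // atomic store (sequentially consistent)
}

// background thread:
void backgroundLoop()
{
    while (true)
    {
        if (globalVar)     // atomic load
        {
            // do some operation
        }
        else
        {
            // do some other operation
        }
    }
}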
Realistically speaking, as long as you use an integer type (not bool), make it volatile, and keep it inside its own cache line by properly aligning its storage, you don't need to do anything special at all.
In a situation like the one above, is a mutex mandatory?
Only if you want to keep the value of the variable synchronized with other state.
If a read and a write happen at the same time, does this cause the application to crash?
According to the C++ standard, it's undefined behavior. So anything can happen: e.g. your application might not crash, but its state might be subtly corrupted. In real life, though, compilers often offer some sane implementation-defined behavior and you're fine unless your platform is really weird. Anything commonplace, like 32- and 64-bit Intel, PPC and ARM, will be fine.
If a read and a write happen at the same time, is globalVar going to have some value other than true or false?
globalVar can only have these two values, so it makes no sense to speak of any other values unless you're talking about its binary representation. Yes, it could happen that the binary representation is incorrect and not what the compiler would expect. That's why you shouldn't use a bool but a uint8_t instead.
I wouldn't love to see such a flag in a code review, but if a uint8_t flag is the simplest solution to whatever problem you're solving, I say go for it. The if (globalVar) test will treat zero as false and anything else as true, so temporary "gibberish" is OK and won't have any odd effects in practice. According to the standard, you'll be facing undefined behavior, of course.
Does the OS provide any kind of locking mechanism to prevent different threads from reading and writing the same memory location at the same time?
It's not the OS's job to do that.
Speaking of practice, though: on any reasonable platform, the use of a std::atomic_bool will have no overhead over the use of a naked uint8_t, so just use that and be done.

while inside while not working properly in c++

I have a curious situation (at least for me :D) in C++.
My code is:
static void startThread(Object* r){
    while(true)
    {
        while(!r->commands->empty())
        {
            doSomething();
        }
    }
}
I start this function as a thread using Boost, where commands in r is a queue... I fill up this queue in another thread....
The problem is that if I fill the queue first and then start this thread, everything works fine... But if I run startThread first and fill up the commands queue after that, it is not working... doSomething() will not run...
However, if I modify startThread:
static void startThread(Object* r){
    while(true)
    {
        std::cout << "c" << std::endl;
        while(!r->commands->empty())
        {
            doSomething();
        }
    }
}
I just added the cout... and it is working... Can anybody explain why it works with the cout and not without? Or does anybody have an idea of what could be wrong?
Maybe the compiler is doing some kind of optimization? I don't think so... :(
Thanks
But if I run startThread first and fill up the commands queue after that, it is not working... doSomething() will not run
Of course not! What did you expect? Your queue is empty, so !r->commands->empty() will be false.
I just added cout... and it is working
You got lucky. cout is comparatively slow, so your main thread had a chance to fill the queue before the inner while test was executed for the first time.
So why does the thread not see an updated version of r->commands after it has been filled by the main thread? Because nothing in your code indicates that the variable is going to change from the outside, the compiler assumes that it doesn't.
In fact, the compiler sees that your r’s pointee cannot change, so it can just remove the redundant checks from the inner loop. When working with multithreaded code, you explicitly need to tell C++ that variables can be changed from a different context, using atomic memory access.
When you first run the thread and then fill up the queue, not entering the inner loop is logical, since the test !r->commands->empty() is initially false. After you add the cout statement it works because printing the output takes some time, and meanwhile the other thread fills up the queue, so the condition eventually becomes true. But it is not good programming practice to rely on such facts in a multithreading environment.
There are two inter-related issues:
You are not forcing a reload of r->commands or r->commands->empty(); thus your compiler, diligent as it is in its search for the pinnacle of performance, cached the result. Adding some more code might make the compiler remove this optimisation if it cannot prove the caching is still valid.
You have a data race, so your program has undefined behavior. (I am assuming doSomething() removes an element and some other thread adds elements.)
1.10 Multi-threaded executions and data races § 21
The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior. [ Note: It can be shown that programs that correctly use mutexes and memory_order_seq_cst operations to prevent all data races and use no other synchronization operations behave as if the operations executed by their constituent threads were simply interleaved, with each value computation of an object being taken from the last side effect on that object in that interleaving. This is normally referred to as “sequential consistency”. However, this applies only to data-race-free programs, and data-race-free programs cannot observe most program transformations that do not change single-threaded program semantics. In fact, most single-threaded program transformations continue to be allowed, since any program that behaves differently as a result must perform an undefined operation. —end note ]
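In practice, the fix for both issues is the same: guard the queue with a mutex, so every access is synchronized and actually reloaded. A runnable sketch under assumed names; the mutex, the stop flag and the doSomething(int) signature are additions for illustration, and commands is a plain member here rather than a pointer:

#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

struct Object
{
    std::queue<int>   commands;
    std::mutex        commandsMutex;   // guards commands
    std::atomic<bool> stop{false};
};

void doSomething(int command)
{
    std::cout << "processing command " << command << "\n";
}

static void startThread(Object* r)
{
    while (!r->stop)
    {
        std::unique_lock<std::mutex> lock(r->commandsMutex);
        while (!r->commands.empty())
        {
            int cmd = r->commands.front();
            r->commands.pop();
            lock.unlock();          // don't hold the lock while working
            doSomething(cmd);
            lock.lock();
        }
        // Still busy-waits while the queue is empty; a condition_variable
        // would let the thread sleep instead.
    }
}

int main()
{
    Object obj;
    std::thread worker(startThread, &obj);

    for (int i = 0; i < 5; ++i)
    {
        std::lock_guard<std::mutex> lock(obj.commandsMutex);
        obj.commands.push(i);       // the producer takes the same mutex
    }

    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    obj.stop = true;
    worker.join();
}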

C++ Thread question - setting a value to indicate the thread has finished

Is the following safe?
I am new to threading and I want to delegate a time consuming process to a separate thread in my C++ program.
Using the boost libraries I have written code something like this:
thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
Where finished_flag is a boolean member of my class. When the thread is finished it sets the value and the main loop of my program checks for a change in that value.
I assume that this is okay because I only ever start one thread, and that thread is the only thing that changes the value (except for when it is initialised before I start the thread)
So is this okay, or am I missing something and need to use locks, mutexes, etc.?
You never mentioned the type of finished_flag...
If it's a straight bool, then it might work, but it's certainly bad practice, for several reasons. First, some compilers will cache the reads of the finished_flag variable, since the compiler doesn't always pick up the fact that it's being written to by another thread. You can get around this by declaring the bool volatile, but that's taking us in the wrong direction. Even if reads and writes are happening as you'd expect, there's nothing to stop the OS scheduler from interleaving the two threads half way through a read / write. That might not be such a problem here where you have one read and one write op in separate threads, but it's a good idea to start as you mean to carry on.
If, on the other hand, it's a thread-safe type, like a CEvent in MFC (or equivalent in boost) then you should be fine. This is the best approach: use thread-safe synchronization objects for inter-thread communication, even for simple flags.
Instead of using a member variable to signal that the thread is done, why not use a condition? You are already using the boost libraries, and condition is part of the thread library.
Check it out. It allows the worker thread to 'signal' that it has finished, and the main thread can check during execution if the condition has been signalled and then do whatever it needs to do with the completed work. There are examples in the link.
As a general rule I would never make the assumption that a resource will only be modified by the one thread. You might know what it is for, however someone else might not, causing no end of grief as the main thread thinks that the work is done and tries to access data that is not correct! It might even delete it while the worker thread is still using it, causing the app to crash. Using a condition will help with this.
Looking at the thread documentation, you could also call thread.timed_join in the main thread. timed_join will wait for a specified amount of time for the thread to 'join' (join means that the thread has finished).
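For instance, a sketch of the timed_join approach (Boost's timed_join takes a relative time; in newer Boost the same idea is spelled try_join_for):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

void long_running_work()
{
    // Stand-in for the time-consuming task.
    boost::this_thread::sleep(boost::posix_time::milliseconds(500));
}

int main()
{
    boost::thread worker(long_running_work);

    // Wait up to 100 ms at a time; do other main-loop work in between.
    while (!worker.timed_join(boost::posix_time::milliseconds(100)))
    {
        std::cout << "worker still running, doing other things...\n";
    }
    std::cout << "worker finished\n";
}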
I don't mean to be presumptive, but it seems like the purpose of your finished_flag variable is to pause the main thread (at some point) until the thread thrd has completed.
The easiest way to do this is to use boost::thread::join
// launch the thread...
thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
// ... do other things maybe ...
// wait for the thread to complete
thrd->join();
If you really want to get into the details of communication between threads via shared memory, even declaring a variable volatile won't be enough, even if the compiler does use appropriate access semantics to ensure that it won't get a stale version of data after checking the flag. The CPU can issue reads and writes out of order (x86 usually doesn't, but PPC definitely does), and there is nothing in C++98 that allows the compiler to generate code to order memory accesses appropriately.
Herb Sutter's Effective Concurrency series has an extremely in-depth look at how the C++ world intersects the multicore/multiprocessor world.
Having the thread set a flag (or signal an event) before it exits is a race condition. The thread has not necessarily returned to the OS yet, and may still be executing.
For example, consider a program that loads a dynamic library (pseudocode):
lib = loadLibrary("someLibrary");
fun = getFunction("someFunction");
fun();
unloadLibrary(lib);
And let's suppose that this library uses your thread:
void someFunction() {
    volatile bool finished_flag = false;
    thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
    while(!finished_flag) { // ignore the polling loop, it's beside the point
        sleep();
    }
    delete thrd;
}

void myclass::mymethod() {
    // do stuff
    finished_flag = true;
}
When myclass::mymethod() sets finished_flag to true, myclass::mymethod() hasn't returned yet. At the very least, it still has to execute a "return" instruction of some sort (if not much more: destructors, exception handler management, etc.). If the thread executing myclass::mymethod() gets pre-empted before that point, someFunction() will return to the calling program, and the calling program will unload the library. When the thread executing myclass::mymethod() gets scheduled to run again, the address containing the "return" instruction is no longer valid, and the program crashes.
The solution would be for someFunction() to call thrd->join() before returning. This would ensure that the thread has returned to the OS and is no longer executing.
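A self-contained sketch of the corrected shape (the flag is still set inside the thread, but join() guarantees mymethod has fully returned before its code could be unloaded; the names mirror the pseudocode above):

#include <boost/bind/bind.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

class myclass
{
public:
    void mymethod(volatile bool* finished_flag)
    {
        // ... do stuff ...
        *finished_flag = true;   // set just before returning, not after
    }

    void run_and_join()
    {
        volatile bool finished_flag = false;
        boost::thread thrd(boost::bind(&myclass::mymethod, this, &finished_flag));
        // ... do other things, possibly polling finished_flag ...
        thrd.join();   // mymethod has fully returned once join() comes back,
                       // so the code it was executing can safely go away
        std::cout << "done, flag = " << finished_flag << "\n";
    }
};

int main()
{
    myclass c;
    c.run_and_join();
}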