Testing if the initialization of static local objects is thread-safe - c++

I'm trying to write a test (without examining the assembly code) to see whether a certain compiler conforms to the thread-safety requirement of the C++11 standard for the initialization of static local objects.
So far I can only come up with non-deterministic approaches: sleeping on one thread long enough to make it likely (but not certain, which is the problem) that the other thread has reached a certain point of execution.
Is there a way to do it deterministically?

E.g. a sync voodoo (see comments) like this:
#include <thread>
#include <mutex>
#include <chrono>
#include <iostream>

std::mutex g_mutex;
const std::chrono::seconds g_dura(1);

void log(const char* msg) {
    std::clog << std::this_thread::get_id()
              << " " << msg
              << std::endl;
}

struct Asset {
    Asset() {
        log("before lock attempt");
        g_mutex.lock();
        log("after lock attempt");
        /*EDIT*/ g_mutex.unlock();
    }
};

void test() {
    log("entering test()");
    static Asset asset;
    log("leaving test()");
}

int main() {
    g_mutex.lock();
    std::thread t1(test), t2(test);
    std::this_thread::sleep_for(g_dura);
    // cleanup
    g_mutex.unlock();
    t1.join();
    t2.join();
}
This makes the first thread (not necessarily t1), which has to perform the initialization, wait in the constructor; the desired behaviour is that the second thread (not necessarily t2) waits until the pending initialization of the static variable in the first thread has completed.
So there is just one pair of "before lock attempt"/"after lock attempt" messages printed if the compiler works correctly.
g++ (Debian 4.8.2-16) behaved well.
That voodoo could be taken further if t1 and t2 themselves managed the control flow of the main thread; I skipped that and simply set a timer.
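For what it's worth, here is a sketch (not from the original post, only an assumption about what "managing the control flow" could look like) of partly replacing the timer: the Asset constructor signals the main thread through a condition variable as soon as one thread is inside it, so main no longer guesses when the initialization has started. Only the changed Asset constructor and main() are shown; log() and test() stay as above, and names such as g_ctor_entered are illustrative. The moment at which the second thread reaches the guard of the static still cannot be observed portably, so a short grace period remains.

#include <condition_variable>

std::mutex              g_signal_mutex;
std::condition_variable g_signal;
bool                    g_ctor_entered = false;

struct Asset {
    Asset() {
        log("before lock attempt");
        {
            std::lock_guard<std::mutex> lk(g_signal_mutex);
            g_ctor_entered = true;
        }
        g_signal.notify_one();   // tell main that the initialization has started
        g_mutex.lock();
        log("after lock attempt");
        g_mutex.unlock();
    }
};

int main() {
    g_mutex.lock();
    std::thread t1(test), t2(test);

    {   // deterministic part: wait until one thread is inside the constructor
        std::unique_lock<std::mutex> lk(g_signal_mutex);
        g_signal.wait(lk, []{ return g_ctor_entered; });
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(100)); // grace period, still heuristic

    g_mutex.unlock();
    t1.join();
    t2.join();
}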

Related

How can I synchronize these two threads properly?

I would like to synchronize different threads properly, but so far I have only been able to write an inelegant solution. Can somebody kindly point out how I can improve the following code?
typedef void (*func)();

void thread(func func1, func func2, int& has_finished, int& id) {
    has_finished--;
    func1();
    has_finished++;
    while (has_finished != 0) std::cout << "thread " << id << " waiting\n";
    std::cout << "thread" << id << "resuming\n";
    func2();
}

int main() {
    int has_finished(0), id_one(0), id_two(1);
    std::thread t1(thread, fun, fun, std::ref(has_finished), std::ref(id_one));
    std::thread t2(thread, fun, fun, std::ref(has_finished), std::ref(id_two));
    t1.join();
    t2.join();
}
The gist of the program is described by the function thread. The function is executed by two std::threads. The function accepts two long-running functions func1 and func2 and two references to ints as arguments. The threads should only invoke func2 after all threads have exited func1. The argument has_finished is used to coordinate the different threads: upon entering the function, has_finished is zero. Then each std::thread decrements the value and invokes the long-running function func1. After having left func1, has_finished is incremented again. As long as this value is not back at its original value of zero, a thread waits. Then each thread works on func2. The main function is shown at the end.
How can I coordinate the two threads better? I was thinking of using a std::mutex and std::condition_variable but could not figure out how to use them properly. Does somebody have any idea how I can improve the program?
Don't write this yourself. This kind of synchronization is known as a "latch" (or, more generally, a "barrier"), and it's available through various libraries and through the C++ Concurrency TS. (It might also make it into C++20 in some form.)
For example, using a version from Boost:
#include <iostream>
#include <thread>
#include <boost/thread/latch.hpp>

void f(boost::latch& c) {
    std::cout << "Doing work in round 1\n";
    c.count_down_and_wait();
    std::cout << "Doing work in round 2\n";
}

int main() {
    boost::latch c(2);
    std::thread t1(f, std::ref(c)), t2(f, std::ref(c));
    t1.join();
    t2.join();
}
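Since this answer was written, a latch did land in the standard: C++20 provides std::latch in the <latch> header, with arrive_and_wait() playing the role of count_down_and_wait(). A minimal sketch of the same program, assuming a C++20 toolchain:

#include <iostream>
#include <thread>
#include <latch>   // C++20

void f(std::latch& c) {
    std::cout << "Doing work in round 1\n";
    c.arrive_and_wait();   // decrement and block until the count reaches zero
    std::cout << "Doing work in round 2\n";
}

int main() {
    std::latch c(2);
    std::thread t1(f, std::ref(c)), t2(f, std::ref(c));
    t1.join();
    t2.join();
}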
The method you've chosen won't actually work and results in undefined behavior because of the data race on has_finished. As you surmised, you need a condition variable.
Here is a Gate class demonstrating how to use a condition variable to implement a gate that waits for some number of threads to arrive at it before continuing:
#include <thread>
#include <mutex>
#include <condition_variable>
#include <iostream>
#include <sstream>
#include <utility>
#include <cassert>

struct Gate {
 public:
   explicit Gate(unsigned int count = 2) : count_(count) { } // How many threads need to reach the gate before it unlocks
   Gate(Gate const &) = delete;
   void operator =(Gate const &) = delete;

   void wait_for_gate();

 private:
   int count_;
   ::std::mutex count_mutex_;
   ::std::condition_variable count_gate_;
};

void Gate::wait_for_gate()
{
   ::std::unique_lock<::std::mutex> guard(count_mutex_);
   assert(count_ > 0); // Count being 0 here indicates an irrecoverable programming error.
   --count_;
   count_gate_.wait(guard, [this](){ return this->count_ <= 0; });
   guard.unlock();
   count_gate_.notify_all();
}

void f1()
{
   ::std::ostringstream msg;
   msg << "In f1 with thread " << ::std::this_thread::get_id() << '\n';
   ::std::cout << msg.str();
}

void f2()
{
   ::std::ostringstream msg;
   msg << "In f2 with thread " << ::std::this_thread::get_id() << '\n';
   ::std::cout << msg.str();
}

void thread_func(Gate &gate)
{
   f1();
   gate.wait_for_gate();
   f2();
}

int main()
{
   Gate gate;
   ::std::thread t1{thread_func, ::std::ref(gate)};
   ::std::thread t2{thread_func, ::std::ref(gate)};
   t1.join();
   t2.join();
}
Hopefully the structure of this code looks enough like your code that you can understand what's going on here. From reading your code, it seems like you're looking for all threads to execute func1, then func2. You do not want func2 running while any thread is executing func1.
That can be thought of as a gate where all the threads are waiting to arrive at the 'finished func1' location before moving on to run func2.
I tested this code on my own local version of compiler explorer.
The main disadvantage of the latch in the other answer is that it is not yet standard C++. My Gate class is a simple implementation of the latch class mentioned in the other answer, and it is standard C++.
The basic way a condition variable works is that it unlocks a mutex, waits for a notify, then locks that mutex and tests the condition. If the condition is true, it continues without unlocking the mutex. If the condition is false, it starts over again.
So, after the condition variable says the condition is true, you have to do whatever you need to do, then unlock the mutex and notify everybody that you've done it.
The mutex here is guarding the shared count variable. Whenever you have a shared value you should guard it with a mutex so that no thread can see that value in an inconsistent state. The condition threads wait for is that the count reaches 0, indicating that every thread has decremented the count variable.
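Stripped of the Gate wrapper, that waiter pattern looks roughly like this (a condensed sketch of the same logic described above, not additional functionality):

#include <mutex>
#include <condition_variable>

std::mutex              m;
std::condition_variable cv;
int                     count = 2;                 // shared state, guarded by m

void wait_for_everyone() {
    std::unique_lock<std::mutex> guard(m);
    --count;                                       // update shared state under the lock
    cv.wait(guard, []{ return count <= 0; });      // unlock, sleep, relock, re-test
    guard.unlock();                                // done touching shared state
    cv.notify_all();                               // wake the remaining waiters
}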

Initialising with std::call_once() in a multithreading environment [duplicate]

This question already has an answer here:
Is std::call_once a blocking call?
(1 answer)
Closed 3 years ago.
I'm reading the book C++ Concurrency in Action, 2nd Edition (Section 3.3.1: Protecting shared data during initialization). The book contains an example that uses the std::call_once() function template together with an std::once_flag object to provide some kind of lazy initialisation in a thread-safe way.
Here a simplified excerpt from the book:
class X {
public:
    X(const connection_details& details): connection_details_{details}
    {}

    void send_data(const data_packet& data) {
        std::call_once(connection_init_, &X::open_connection, this);
        connection_.send(data);    // connection_ is used
    }

    data_packet receive_data() {
        std::call_once(connection_init_, &X::open_connection, this);
        return connection_.recv(); // connection_ is used
    }

private:
    void open_connection() {
        connection_.open(connection_details_); // connection_ is modified
    }

    connection_details connection_details_;
    connection_handle connection_;
    std::once_flag connection_init_;
};
What the code above does, is to delay the creation of the connection until the client wants to receive data or has data to send. The connection is created by the open_connection() private member function, not by the constructor of X. The constructor only saves the connection details to be able to create the connection at some later point.
The open_connection() member function above is called only once, so far so good. In a single-threaded context, this will work as expected. However, what if multiple threads are calling either the send_data() or the receive_data() member function on the same object?
Apparently, the modification/update of the connection_ data member in open_connection() is not synchronised with any of its uses in send_data() or receive_data().
Does std::call_once() block a second thread until the first one returns from std::call_once()?
Based on this post I've created this answer.
I wanted to see whether std::call_once() synchronises with other calls to std::call_once() on the same std::once_flag object. The following program creates several threads that call a function containing a call to std::call_once(); the function passed to std::call_once() puts the calling thread to sleep for a long time.
#include <mutex>
std::once_flag init_flag;
std::mutex mtx;
init_flag is the std::once_flag object to be used with the std::call_once() call. The mutex mtx is just for avoiding interleaved output on std::cout when streaming characters into std::cout from different threads.
The init() function is the one called by std::call_once(). It displays the text initialising..., puts the calling thread to sleep for three seconds and then displays the text done before returning:
#include <thread>
#include <chrono>
#include <iostream>

void init() {
    {
        std::lock_guard<std::mutex> lg(mtx);
        std::cout << "initialising...";
    }

    std::this_thread::sleep_for(std::chrono::seconds{3});

    {
        std::lock_guard<std::mutex> lg(mtx);
        std::cout << "done" << '\n';
    }
}
The purpose of this function is to sleep for long enough (three seconds in this case), so that the remaining threads have enough time to reach the std::call_once() call. This way we will be able to see whether they block until the thread executing this function returns from it.
The function do_work() is called by all threads that are created in main():
void do_work() {
    std::call_once(init_flag, init);
    print_thread_id();
}
init() will be only called by one thread (i.e., it will be called only once). All threads call print_thread_id(), i.e., it is executed once for every thread created in main().
The print_thread_id() simply displays the current thread id:
void print_thread_id() {
    std::lock_guard<std::mutex> lg(mtx);
    std::cout << std::this_thread::get_id() << '\n';
}
A total of 16 threads, which call the do_work() function, are created in main():
#include <vector>
int main() {
    std::vector<std::thread> threads(16);

    for (auto& th: threads)
        th = std::thread{do_work};

    for (auto& th: threads)
        th.join();
}
The output I get on my system is:
initialising...done
0x7000054a9000
0x700005738000
0x7000056b5000
0x700005632000
0x700005426000
0x70000552c000
0x7000055af000
0x7000057bb000
0x70000583e000
0x7000058c1000
0x7000059c7000
0x700005a4a000
0x700005944000
0x700005acd000
0x700005b50000
0x700005bd3000
This output means that no thread executes print_thread_id() until the first thread that called std::call_once() returns from it. This implies that those threads are blocked at the std::call_once() call.

Wake up a std::thread from usleep

Consider the following example:
#include <iostream>
#include <fstream>
#include <unistd.h>
#include <signal.h>
#include <thread>
void sleepy() {
    usleep(1.0E15);
}

int main() {
    std::thread sleepy_thread(sleepy);
    // Wake it up somehow...?
    sleepy_thread.join();
}
Here we have a thread that just sleeps forever. I want to join it, without having to wait forever for it to spontaneously wake from usleep. Is there a way to tell it from the outside "hey man, wake up!", so that I can join it in a reasonable amount of time?
I am definitely not an expert on threads, so if possible don't assume anything.
No, it is not possible using the threads from the standard library.
One possible workaround is to use condition_variable::wait_for along with a mutex and a boolean condition.
#include <mutex>
#include <thread>
#include <chrono>
#include <condition_variable>

std::mutex mymutex;
std::condition_variable mycond;
bool flag = false;

void sleepy() {
    std::unique_lock<std::mutex> lock(mymutex);
    mycond.wait_for(lock,
                    std::chrono::seconds(1000),
                    []() { return flag; });
}

int main()
{
    std::thread sleepy_thread(sleepy);
    {
        std::lock_guard<std::mutex> lock(mymutex);
        flag = true;
        mycond.notify_one();
    }
    sleepy_thread.join();
}
Alternatively, you can use the Boost.Thread library, which implements the interruption-point concept:
#include <boost/thread/thread.hpp>

void sleepy()
{
    // this_thread::sleep_for is an interruption point.
    boost::this_thread::sleep_for( boost::chrono::seconds(1000) );
}

int main()
{
    boost::thread t( sleepy );
    t.interrupt();
    t.join();
}
Other answers are saying you can use a timed mutex to accomplish this. I've put together a small class using a timed mutex to block the 'sleeping' threads, and release the mutex if you want to 'wake' them early. The standard library provides a function on timed_mutex called try_lock_for, which will try to lock the mutex for a period of time before giving up (and returning an indication of failure).
This can be encapsulated in a class, like the following implementation, which only allows a single call to wake waiting threads. It could also be improved by including a waitUntil function that waits until a point in time, corresponding to the timed_mutex's other timed waiting function, try_lock_until, but I will leave that as an exercise for the interested, since it seems a simple modification.
#include <iostream>
#include <mutex>
#include <thread>
#include <chrono>
#include <atomic>

// one-use wakeable sleeping class
class InterruptableSleeper {
    std::timed_mutex mut_;
    std::atomic_bool locked_; // track whether the mutex is locked

    void lock() {   // lock mutex
        mut_.lock();
        locked_ = true;
    }

    void unlock() { // unlock mutex
        locked_ = false;
        mut_.unlock();
    }

public:
    // lock on creation
    InterruptableSleeper() {
        lock();
    }

    // unlock on destruction, if wake was never called
    ~InterruptableSleeper() {
        if (locked_) {
            unlock();
        }
    }

    // called by any thread except the creator
    // waits until wake is called or the specified time passes
    template< class Rep, class Period >
    void sleepFor(const std::chrono::duration<Rep,Period>& timeout_duration) {
        if (mut_.try_lock_for(timeout_duration)) {
            // if successfully locked,
            // remove the lock
            mut_.unlock();
        }
    }

    // unblock any waiting threads, handling a situation
    // where wake has already been called.
    // should only be called by the creating thread
    void wake() {
        if (locked_) {
            unlock();
        }
    }
};
The following code:
void printTimeWaited(
        InterruptableSleeper& sleeper,
        const std::chrono::milliseconds& duration) {
    auto start = std::chrono::steady_clock::now();
    std::cout << "Started sleep...";
    sleeper.sleepFor(duration);
    auto end = std::chrono::steady_clock::now();
    std::cout
        << "Ended sleep after "
        << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
        << "ms.\n";
}

void compareTimes(unsigned int sleep, unsigned int waker) {
    std::cout << "Begin test: sleep for " << sleep << "ms, wakeup at " << waker << "ms\n";
    InterruptableSleeper sleeper;
    std::thread sleepy(&printTimeWaited, std::ref(sleeper), std::chrono::milliseconds{sleep});
    std::this_thread::sleep_for(std::chrono::milliseconds{waker});
    sleeper.wake();
    sleepy.join();
    std::cout << "End test\n";
}

int main() {
    compareTimes(1000, 50);
    compareTimes(50, 1000);
}
prints
Begin test: sleep for 1000ms, wakeup at 50ms
Started sleep...Ended sleep after 50ms.
End test
Begin test: sleep for 50ms, wakeup at 1000ms
Started sleep...Ended sleep after 50ms.
End test
Example & Use on Coliru
"Is there a way to tell it from the extern "hey man, wake up!", so that I can join it in a reasonable amount of time?"
No, there's no way to do so using C++ standard mechanisms.
Well, to get your thread woken up, you'll need a mechanism that leaves other threads in control of it. Besides, usleep() is an obsolescent POSIX function:
Issue 6
The DESCRIPTION is updated to avoid use of the term "must" for application requirements.
This function is marked obsolescent.
IEEE Std 1003.1-2001/Cor 2-2004, item XSH/TC2/D6/144 is applied, updating the DESCRIPTION from "process' signal mask" to "thread's signal mask", and adding a statement that the usleep() function need not be reentrant.
so there's no way you could gain control of another thread that is going to call that function.
The same applies to any other sleep function, including those provided via std::this_thread.
As mentioned in other answers or comments, you'll need to use a timeable synchronization mechanism like a std::timed_mutex or a std::condition_variable from your thread function.
Just use a semaphore: call sem_timedwait instead of usleep, and call sem_post before calling join.
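Roughly like this, for example (a hedged sketch using POSIX semaphores on Linux; names are illustrative):

#include <semaphore.h>
#include <ctime>
#include <thread>

sem_t wake_sem;

void sleepy() {
    timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 1000000;              // "sleep" for a very long time...
    sem_timedwait(&wake_sem, &deadline);     // ...but return as soon as sem_post runs
}

int main() {
    sem_init(&wake_sem, 0, 0);               // count 0, so sem_timedwait blocks
    std::thread sleepy_thread(sleepy);
    sem_post(&wake_sem);                     // "hey man, wake up!"
    sleepy_thread.join();
    sem_destroy(&wake_sem);
}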
One possible approach (there are many ways to accomplish this; also, it's not a good idea to use sleep in your thread):

#include <mutex>
#include <thread>

std::mutex mtx; // define a mutex

void sleepy()
{
    // try to take the mutex lock, which this thread will only get
    // once the main thread releases it (instead of usleep(1.0E15);)
    std::lock_guard<std::mutex> lock(mtx);
}

int main()
{
    mtx.lock();                        // take the mutex lock before starting the thread
    std::thread sleepy_thread(sleepy);
    // Do your work
    mtx.unlock();                      // unlock the mutex... this enables the sleepy thread to run
    sleepy_thread.join();
}
Sleep for a short amount of time and look to see if a variable has changed.
#include <atomic>
#include <unistd.h>
#include <thread>

std::atomic<int> sharedVar(1);

void sleepy()
{
    while (sharedVar.load())
    {
        usleep(500);
    }
}

int main()
{
    std::thread sleepy_thread(sleepy);
    // wake up
    sharedVar.store(0);
    sleepy_thread.join();
}

unique_lock across threads?

I am having some trouble conceptualizing how unique_lock is supposed to operate across threads. I tried to make a quick example to recreate something that I would normally use a condition_variable for.
#include <mutex>
#include <thread>

using namespace std;

mutex m;
unique_lock<mutex>* mLock;

void funcA()
{
    //thread 2
    mLock->lock();//blocks until unlock? Access violation reading location 0x0000000000000000.
}

int _tmain(int argc, _TCHAR* argv[])
{
    //thread 1
    mLock = new unique_lock<mutex>(m);
    mLock->release();//Allows .lock() to be taken by a different thread?

    auto a = std::thread(funcA);
    std::chrono::milliseconds dura(1000);//make sure thread is running
    std::this_thread::sleep_for(dura);

    mLock->unlock();//Unlocks thread 2's lock?
    a.join();
    return 0;
}
unique_lock should not be accessed from multiple threads at once. It was not designed to be thread-safe in that manner. Instead, multiple unique_locks (local variables) reference the same global mutex. Only the mutex itself is designed to be accessed by multiple threads at once. And even then, my statement excludes ~mutex().
For example, one knows that mutex::lock() can be accessed by multiple threads because its specification includes the following:
Synchronization: Prior unlock() operations on the same object shall synchronize with (4.7) this operation.
where synchronize with is a term of art defined in 4.7 [intro.multithread] (and its subclauses).
That doesn't look at all right. First, release() "disassociates the mutex without unlocking it", which is highly unlikely to be what you want to do in that place. It basically means that you no longer have a mutex in your unique_lock<mutex> (which makes it pretty useless), and that is probably the reason you get the "access violation".
Edit: After some "massaging" of your code, and convincing g++ 4.6.3 to do what I wanted (hence the #define _GLIBCXX_USE_NANOSLEEP), here's a working example:
#define _GLIBCXX_USE_NANOSLEEP
#include <chrono>
#include <mutex>
#include <thread>
#include <iostream>

using namespace std;

mutex m;

void funcA()
{
    cout << "FuncA Before lock" << endl;
    unique_lock<mutex> mLock(m);
    //thread 2
    cout << "FuncA After lock" << endl;
    std::chrono::milliseconds dura(500);//make sure thread is running
    std::this_thread::sleep_for(dura);
    cout << "FuncA After sleep" << endl;
}

int main(int argc, char* argv[])
{
    cout << "Main before lock" << endl;
    unique_lock<mutex> mLock(m);

    auto a = std::thread(funcA);
    std::chrono::milliseconds dura(1000);//make sure thread is running
    std::this_thread::sleep_for(dura);

    mLock.unlock();//Unlocks thread 2's lock?
    cout << "Main After unlock" << endl;
    a.join();
    cout << "Main after a.join" << endl;
    return 0;
}
I'm not sure why you need to use new to create the lock, though. Surely unique_lock<mutex> mLock(m); should do the trick (with the corresponding changes of mLock-> into mLock., of course).
A lock is just an automatic guard that operates a mutex in a safe and sane fashion.
What you really want is this code:
std::mutex m;
void f()
{
std::lock_guard<std::mutex> lock(m);
// ...
}
This effectively "synchronizes" calls to f, since every thread that enters it blocks until it manages to obtain the mutex.
A unique_lock is just a beefed-up version of the lock_guard: It can be constructed unlocked, moved around (thanks, @MikeVine) and it is itself a "lockable object", like the mutex itself, and so it can be used for example in the variadic std::lock(...) to lock multiple things at once in a deadlock-free way, and it can be managed by an std::condition_variable (thanks, @syam).
But unless you have a good reason to use a unique_lock, prefer to use a lock_guard. And once you need to upgrade to a unique_lock, you'll know why.
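One such reason, for example, is std::condition_variable: its wait() members only accept a std::unique_lock<std::mutex>, because they need to unlock and relock it internally. A minimal sketch (not from the original answer):

#include <condition_variable>
#include <mutex>
#include <thread>
#include <iostream>

std::mutex              m;
std::condition_variable cv;
bool                    ready = false;

void consumer() {
    std::unique_lock<std::mutex> lock(m);   // a lock_guard would not compile here
    cv.wait(lock, []{ return ready; });     // wait() unlocks/relocks the unique_lock
    std::cout << "got the signal\n";
}

int main() {
    std::thread t(consumer);
    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;
    }
    cv.notify_one();
    t.join();
}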
As a side-note, the above answers skip over the difference between immediate and deferred locking of mutex:
#include <mutex>

std::mutex mu;

auto MyFunction() -> void
{
    std::unique_lock<std::mutex> lock(mu); // created an instance and immediately locked the mutex
    //Do stuff....
}

auto MyOtherFunction() -> void
{
    std::unique_lock<std::mutex> lock(mu, std::defer_lock); // created but did not lock the mutex
    lock.lock();   // lock mutex
    //Do stuff....
    lock.unlock(); // unlock mutex
}
MyFunction() shows the widely used immediate lock, whilst MyOtherFunction() shows the deferred lock.
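Deferred locking is also what makes the deadlock-free std::lock(...) idiom mentioned above work: construct the locks unlocked, then lock them all in one call. A minimal sketch with two illustrative mutexes:

#include <mutex>

std::mutex m1, m2;

void do_both() {
    std::unique_lock<std::mutex> lock1(m1, std::defer_lock); // created unlocked
    std::unique_lock<std::mutex> lock2(m2, std::defer_lock);
    std::lock(lock1, lock2);   // locks both without risking deadlock
    // ...work with the data guarded by m1 and m2...
}                              // both released automatically here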

Is possible to get a thread-locking mechanism in C++ with a std::atomic_flag?

Using MS Visual C++ 2012.
A class has a member of type std::atomic_flag:
class A {
public:
    ...
    std::atomic_flag lockFlag;

    A() { std::atomic_flag_clear(&lockFlag); }
};
There is an object of type A
A object;
which can be accessed by two (Boost) threads:
void thr1(A* objPtr) { ... }
void thr2(A* objPtr) { ... }
The idea is to make one thread wait if the object is being accessed by the other thread.
The question is: is it possible to construct such a mechanism with an atomic_flag object? For the moment I want something more lightweight than a boost::mutex.
By the way, the process involved in one of the threads is a very long query to a database that returns many rows, and I only need to suspend it in a certain zone of code where the collision occurs (when processing each row); I can't wait for the entire thread to finish with join().
I've tried something like this in each thread:
void thr1(A* objPtr) {
    ...
    while (std::atomic_flag_test_and_set_explicit(&objPtr->lockFlag, std::memory_order_acquire)) {
        boost::this_thread::sleep(boost::posix_time::millisec(100));
    }
    ... /* Zone to protect */
    std::atomic_flag_clear_explicit(&objPtr->lockFlag, std::memory_order_release);
    ... /* the process continues */
}
But with no success, because the second thread hangs. In fact, I don't completely understand the mechanism involved in the atomic_flag_test_and_set_explicit function, nor whether the function returns immediately or can block until the flag can be locked.
It is also a mystery to me how to build a locking mechanism with a function that always sets the value and returns the previous one, with no option to only read the current setting.
Any suggestions are welcome.
By the way, the process involved in one of the threads is a very long query to a database that returns many rows, and I only need to suspend it in a certain zone of code where the collision occurs (when processing each row); I can't wait for the entire thread to finish with join().
Such a zone is known as the critical section. The simplest way to work with a critical section is to lock by mutual exclusion.
The mutex solution suggested is indeed the way to go, unless you can prove that this is a hotspot and the lock contention is a performance problem. Lock-free programming using just atomics and intrinsics is enormously complex and cannot be recommended at this level.
Here's a simple example showing how you could do this (live on http://liveworkspace.org/code/6af945eda5132a5221db823fa6bde49a):
#include <iostream>
#include <thread>
#include <mutex>

struct A
{
    std::mutex mux;
    int x;

    A() : x(0) {}
};

void threadf(A* data)
{
    for (int i = 0; i < 10; ++i)
    {
        std::lock_guard<std::mutex> lock(data->mux);
        data->x++;
    }
}

int main(int argc, const char *argv[])
{
    A instance;
    auto t1 = std::thread(threadf, &instance);
    auto t2 = std::thread(threadf, &instance);
    t1.join();
    t2.join();

    std::cout << instance.x << std::endl;
    return 0;
}
It looks like you're trying to write a spinlock. Yes, you can do that with std::atomic_flag, but you are better off using std::mutex instead. Don't use atomics unless you really know what you're doing.
To actually answer the question asked: Yes, you can use std::atomic_flag to create a thread locking object called a spinlock.
#include <atomic>

class atomic_lock
{
public:
    atomic_lock() = default;

    void lock()
    {
        while ( lock_.test_and_set() ) { } // Spin until the lock is acquired.
    }

    void unlock()
    {
        lock_.clear();
    }

private:
    std::atomic_flag lock_ = ATOMIC_FLAG_INIT; // ATOMIC_FLAG_INIT is needed before C++20; since C++20 the default constructor clears the flag
};
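For completeness, since atomic_lock provides lock()/unlock() it satisfies the BasicLockable requirements, so it can be dropped in wherever a mutex was used before, for example with std::lock_guard. A hedged usage sketch (reusing the atomic_lock class defined above):

#include <mutex>
#include <thread>
#include <iostream>

atomic_lock spin;    // the spinlock class defined above
int counter = 0;     // protected by spin

void worker() {
    for (int i = 0; i < 10000; ++i) {
        std::lock_guard<atomic_lock> guard(spin); // lock_guard works with any BasicLockable
        ++counter;
    }
}

int main() {
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    std::cout << counter << '\n';   // prints 20000
}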