Why is Boost scoped_lock not unlocking the mutex?

I've been using boost::mutex::scoped_lock in this manner:
void ClassName::FunctionName()
{
    {
        boost::mutex::scoped_lock scopedLock(mutex_);
        // do stuff
        waitBoolean = true;
    }
    while (waitBoolean == true) {
        sleep(1);
    }
    // get on with the thread's activities
}
Basically it sets waitBoolean to true, and the other thread signals that it is done by setting waitBoolean to false.
This doesn't seem to work, however, because the other thread can't get a lock on mutex_ !!
I was assuming that by wrapping the scoped_lock in brackets I would be terminating its lock. This isn't the case? Reading online says that it only gives up the mutex when the destructor is called. Won't it be destroyed when it goes out of that local scope?
Signaling part of code:
while (running_) {
    boost::mutex::scoped_lock scopedLock(mutex_);
    // Run some function that needs to be done...
    if (waitBoolean) {
        waitBoolean = false;
    }
}
Thanks!

To synchronize two threads, use a condition variable. That is the state-of-the-art way to synchronize two threads the way you want.
Using Boost, the waiting part is something like this:
void BoostSynchronisationPoint::waitSynchronisation()
{
    boost::unique_lock<boost::mutex> lock(_mutex);
    _synchronisationSent = false;
    while (!_synchronisationSent)
    {
        _condition.wait(lock); // unlock and wait
    }
}
The notify part is something like this:
void BoostSynchronisationPoint::sendSynchronisation()
{
    {
        boost::lock_guard<boost::mutex> lock(_mutex);
        _synchronisationSent = true;
    }
    _condition.notify_all();
}
The business with _synchronisationSent is to avoid spurious wakeups: see Wikipedia.

The scoped_lock should indeed be released at the end of the scope. However, you don't lock waitBoolean when you're looping on it, which suggests you don't protect it properly in other places either - e.g. where it's set to false - and you'll end up with nasty race conditions.
I'd say you should use a boost::condition_variable for this sort of thing, instead of sleep plus thread-unsafe checking.
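For illustration, here is a minimal sketch of how the original code could be reworked along those lines; the condition_ member and the SignalDone function are assumed names, not part of the original post:

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>

class ClassName
{
    boost::mutex mutex_;
    boost::condition_variable condition_; // assumed new member
    bool waitBoolean_;                    // always accessed with mutex_ held

public:
    void FunctionName()
    {
        boost::unique_lock<boost::mutex> lock(mutex_);
        // do stuff
        waitBoolean_ = true;

        // wait() releases mutex_ while blocked and re-acquires it on wakeup,
        // so the other thread can take the lock and clear the flag.
        while (waitBoolean_)
            condition_.wait(lock);

        // get on with the thread's activities
    }

    // Called from the other thread when its work is done.
    void SignalDone()
    {
        {
            boost::lock_guard<boost::mutex> lock(mutex_);
            waitBoolean_ = false;
        }
        condition_.notify_one();
    }
};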

Also, I would suggest marking that waitBoolean as volatile; however, you really have to use a condition variable, or even better a barrier.
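For reference, a barrier-based rendezvous could look roughly like the sketch below; the two-participant count and the function names are assumptions for illustration:

#include <boost/thread/thread.hpp>
#include <boost/thread/barrier.hpp>

// A barrier initialised for two participants: each thread calls wait(),
// and both are released only once both have arrived.
boost::barrier rendezvous(2);

void workerThread()
{
    // ... produce the data the other thread needs ...
    rendezvous.wait(); // announce arrival, block until the other thread arrives
}

void consumerThread()
{
    rendezvous.wait(); // block until workerThread has arrived
    // ... safe to use the data now ...
}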

Related

Conditionally acquire an std::mutex

I have a multi-threaded app that uses the GPU, which is inherently single-threaded, and the actual API I use, cv::gpu::FAST_GPU, does crash when I try to use it from multiple threads, so basically I have:
static std::mutex s_FAST_GPU_mutex;
{
    std::lock_guard<std::mutex> guard(s_FAST_GPU_mutex);
    cv::gpu::FAST_GPU(/*params*/)(/*parameters*/);
}
Now, benchmarking the code shows me FAST_GPU() in isolation is faster than the CPU FAST(), but in the actual application my other threads spend a lot of time waiting for the lock, so the overall throughput is worse.
Looking through the documentation, and at this answer it seems that this might be possible:
static std::mutex s_FAST_GPU_mutex;
static std::unique_lock<std::mutex> s_FAST_GPU_lock(s_FAST_GPU_mutex, std::defer_lock);
{
    // Create an unlocked guard
    std::lock_guard<decltype(s_FAST_GPU_lock)> guard(s_FAST_GPU_lock, std::defer_lock);
    if (s_FAST_GPU_lock.try_lock())
    {
        cv::gpu::FAST_GPU(/*params*/)(/*parameters*/);
    }
    else
    {
        cv::FAST(/*parameters*/);
    }
}
However, this will not compile as std::lock_guard only accepts a std::adopt_lock. How can I implement this properly?
It is actually unsafe to have a unique_lock accessible from multiple threads at the same time. I'm not familiar with the opencv portion of your question, so this answer is focused on the mutex/lock usage.
static std::mutex s_FAST_GPU_mutex;
{
    // Create a unique lock, attempting to acquire
    std::unique_lock<std::mutex> guard(s_FAST_GPU_mutex, std::try_to_lock);
    if (guard.owns_lock())
    {
        cv::gpu::FAST_GPU(/*params*/)(/*parameters*/);
        guard.unlock(); // Or just let it go out of scope later
    }
    else
    {
        cv::FAST(/*parameters*/);
    }
}
This attempts to acquire the lock; if it succeeds, it uses FAST_GPU and then releases the lock. If the lock was already held, it goes down the second branch, invoking FAST.
You can use std::lock_guard if you adopt the mutex in the locked state, like this:
{
    if (s_FAST_GPU_mutex.try_lock())
    {
        std::lock_guard<std::mutex> guard(s_FAST_GPU_mutex, std::adopt_lock);
        cv::gpu::FAST_GPU(/*params*/)(/*parameters*/);
    }
    else
    {
        cv::FAST(/*parameters*/);
    }
}

Thread synchronisation: Wait on two bool variables

I want to wait for two bool variables to be true in one thread. They are changed in different places. I can use boost in my project, but not C++11.
I did find info on how to use mutexes and condition variables, but I'm not sure if it's possible to wait for two mutexes.
This is some pseudocode of my program.
bool job1_dataready, job2_dataready;

//t1:
void job1()
{
    //do stuff
    job1_dataready = true;
}

//t2:
void job2()
{
    //do stuff
    job2_dataready = true;
}

int main()
{
    boost::thread t1(job1);
    boost::thread t2(job2);
    if (job1_dataready && job2_dataready)
    {
        //do stuff with data from both jobs
    }
}
From what I see, you don't need bool variables; use std::thread::join instead:
int main() {
    std::thread t1(job1);
    std::thread t2(job2);
    t1.join();
    t2.join();
    // do jobs after threads t1 and t2 finish working
}
You would block on the condition variable, check your boolean values when woken, and either go back to waiting or continue processing. Your threads will signal the condition variable after they have set the boolean flag. All with appropriate mutex locking, of course. You can wait on any number of conditions; just check them when woken after blocking on the condition variable.
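To illustrate that approach, here is a minimal sketch using one condition variable to guard both flags (Boost rather than C++11, since the question rules the latter out); the variable names mirror the question's pseudocode:

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>

boost::mutex mtx;
boost::condition_variable cond;
bool job1_dataready = false, job2_dataready = false;

void job1()
{
    // do stuff
    boost::lock_guard<boost::mutex> lock(mtx);
    job1_dataready = true;
    cond.notify_one();
}

void job2()
{
    // do stuff
    boost::lock_guard<boost::mutex> lock(mtx);
    job2_dataready = true;
    cond.notify_one();
}

int main()
{
    boost::thread t1(job1);
    boost::thread t2(job2);

    boost::unique_lock<boost::mutex> lock(mtx);
    while (!(job1_dataready && job2_dataready))
        cond.wait(lock); // re-check the flags after every wakeup

    // do stuff with data from both jobs
    lock.unlock();
    t1.join();
    t2.join();
}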
In simple situations like this, you can wait on two mutexes simply by locking them in order: first you lock the mutex from thread 1, then the mutex from thread 2. If thread 2 finishes before thread 1, the main thread simply won't block when locking mutex 2.
However, note that this is an answer to your question, but not a solution to your problem. The reason is that you have a race condition with the mutex: the main thread might lock the mutex before the worker thread even starts. So, while Andrei R.'s response (std::thread::join) isn't a direct answer, it is the correct solution.
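For completeness, a rough sketch of the lock-in-order idea, with the race condition described above deliberately left visible; the mutex names are assumptions:

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

// Each worker holds "its" mutex while working and releases it when done;
// the main thread then locks both in order to wait for them.
boost::mutex job1_mutex, job2_mutex;

void job1() { boost::lock_guard<boost::mutex> hold(job1_mutex); /* do stuff */ }
void job2() { boost::lock_guard<boost::mutex> hold(job2_mutex); /* do stuff */ }

int main()
{
    boost::thread t1(job1);
    boost::thread t2(job2);

    // Race: if main locks a mutex before the worker does, it sails straight through.
    boost::lock_guard<boost::mutex> wait1(job1_mutex); // blocks until job1 releases
    boost::lock_guard<boost::mutex> wait2(job2_mutex); // blocks until job2 releases
    // do stuff with data from both jobs

    t1.join();
    t2.join();
}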
If you plan to set your two bools just before the respective threads terminate, then Andrei R.'s solution of just joining the two threads is definitely the best way to go. However, if your threads actually continue working after the dataready points are reached, and are thus not terminating yet, you need a different approach. In that case, you could use two std::future/std::promise objects, which would look something like this:
std::promise<bool> job1_dataready, job2_dataready;

//t1:
void job1()
{
    //do stuff
    job1_dataready.set_value(true); // The value doesn't actually matter
    //do more stuff
}

//t2:
void job2()
{
    //do stuff
    job2_dataready.set_value(true);
    //do more stuff
}

int main()
{
    std::future<bool> job1_future = job1_dataready.get_future();
    std::future<bool> job2_future = job2_dataready.get_future();
    boost::thread t1(job1);
    boost::thread t2(job2);
    job1_future.wait();
    job2_future.wait();
    if (job1_future.get() && job2_future.get()) // True unless something was aborted
    {
        //do stuff with data from both jobs
    }
}

Query std::mutex for lock state

I have a situation where I'd like to do something like that shown below, but there doesn't seem to be a way of querying the mutex without changing its state. I don't want the someFunctionCalledRepeatedlyFromAnotherThread() to hang around waiting for the mutex to free up if it is locked. It must return immediately after performing some alternative action. I'm guessing this omission is there for safety, as the lock may free up in the time between querying it and the function returning. In my case, bad stuff will not happen if the lock is freed while doSomeAlternativeAction() is happening. The fact I'm in this situation probably means I'm doing something wrong, so how should I change my design?
class MyClass
{
    std::mutex initMutex;
public:
    void someInitializationFunction()
    {
        std::lock_guard<std::mutex> lock(initMutex);
        // Do some work
    }

    void someFunctionCalledRepeatedlyFromAnotherThread()
    {
        if (initMutex.isLocked())
        {
            doSomeAlternativeAction();
            return;
        }
        // otherwise . . .
        std::lock_guard<std::mutex> lock(initMutex);
        // carry on with work as usual
    }
};
Asking a mutex for its state is useless: it may be unlocked now, but by the time you get around to locking it, it may be locked. So it does not have such a method.
It does, however, have a method try_lock() which locks it if it is not locked, and returns true if it acquired the lock and false otherwise. And std::unique_lock (the more advanced version of std::lock_guard) has an option to call it.
So you can do:
void someFunctionCalledRepeatedlyFromAnotherThread()
{
    std::unique_lock<std::mutex> lock(initMutex, std::try_to_lock);
    if (!lock.owns_lock())
    {
        doSomeAlternativeAction();
        return;
    }
    // otherwise ... go ahead, you have the lock
}
Sounds like you want to use std::unique_lock instead of std::lock_guard. Its try_lock method works similarly to TryEnterCriticalSection on Windows: the function atomically acquires the lock and returns true if it can, or just returns false if the lock cannot be acquired (and it does not block). Refer to http://msdn.microsoft.com/en-us/library/hh921439.aspx and http://en.cppreference.com/w/cpp/thread/unique_lock/try_lock. Note that unique_lock also has other members available for trying to lock, such as try_lock_for and try_lock_until.
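As a rough illustration of the timed variants (assuming a std::timed_mutex, since try_lock_for requires a timed lockable type; doSomeAlternativeAction is the function from the question):

#include <chrono>
#include <mutex>

std::timed_mutex initMutex;
void doSomeAlternativeAction();

void someFunctionCalledRepeatedlyFromAnotherThread()
{
    // Start unlocked, then try for up to 10 ms before giving up.
    std::unique_lock<std::timed_mutex> lock(initMutex, std::defer_lock);
    if (!lock.try_lock_for(std::chrono::milliseconds(10)))
    {
        doSomeAlternativeAction(); // could not get the lock in time
        return;
    }
    // carry on with work as usual; the lock is released when `lock` goes out of scope
}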

Is this code thread-safe?

I'm refactoring some time-consuming function so that it can be called from a thread, but I'm having trouble wrapping my head around the issue (I'm not very familiar with thread programming).
At any point, the user can cancel and the function will stop. I do not want to kill the thread as soon as the user cancels since it could cause some data integrity problems. Instead, in several places in the function, I will check if the function has been cancelled and, if so, exit. I will only do that where I know it's safe to exit.
The whole code of the function will be within a mutex. This is the pseudo-code I have in mind:
SomeClass::SomeClass() {
    cancelled_ = false;
}

void SomeClass::cancelBigSearch() {
    cancelled_ = true;
}

void SomeClass::bigSearch() {
    mutex.lock();
    // ...
    // Some code
    // ...

    // Safe to exit at this point
    if (cancelled_) {
        mutex.unlock();
        cancelled_ = false;
        return;
    }

    // ...
    // Some more code
    // ...
    if (cancelled_) {
        mutex.unlock();
        cancelled_ = false;
        return;
    }

    // ...
    // Again more code
    // ...
    if (cancelled_) {
        mutex.unlock();
        cancelled_ = false;
        return;
    }

    mutex.unlock();
}
So when the user starts a search, a new thread calls bigSearch(). If the user cancels, cancelBigSearch() is called and a cancelled_ flag is set. Then, when bigSearch() reaches a point where it's safe to exit, it will exit.
Any idea if this is all thread-safe?
You should lock access to cancelled_ with another mutex, so checking and setting do not happen simultaneously. Other than that, I think your approach is OK.
Update: Also, make sure no exceptions can be thrown from SomeClass::bigSearch(), otherwise the mutex might remain in a locked state. To make sure that all return paths unlock the mutex, you might want to surround the processing parts of the code with if (!cancelled_) and return only at the very end of the method (where you have the one unlock() call on the mutex).
Better yet, wrap the mutex in a RAII (Resource Acquisition Is Initialization) object, so no matter how the function ends (exception or otherwise), the mutex is guaranteed to be unlocked.
Yes, this is thread safe. But:
Processors can have separate caches, each with its own copy of cancelled_; mutex synchronization functions typically apply the proper cache synchronization.
Compiler-generated code can make invalid assumptions about your data's locality, which can lead to cancelled_ not being updated in time. Some platform-specific commands can help here, or you can simply use other mechanisms.
All of these can lead to the thread not being cancelled as promptly as you wish.
Your usage pattern is simple "signaling": you need to transfer a signal to the thread. Signal patterns allow the same trigger (signal) to be raised multiple times and cleared later.
This can be simulated using:
atomic operations
mutex protected variables
signal synchronization primitives
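For example, a minimal sketch of the atomic-operations option, assuming C++11 std::atomic is available and mirroring the question's class:

#include <atomic>

class SomeClass {
    std::atomic<bool> cancelled_{false};
public:
    void cancelBigSearch() {
        cancelled_.store(true); // visible to bigSearch() without any locking
    }

    void bigSearch() {
        // ... some code ...
        if (cancelled_.exchange(false)) // read and clear the flag atomically
            return;
        // ... some more code ...
        if (cancelled_.exchange(false))
            return;
        // ... again more code ...
    }
};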
It's not thread-safe, because one thread could read cancelled_ at the same time another thread writes to it, which is a data race, which is undefined behaviour.
As others suggested, either use an atomic type for cancelled_ or protect it with another mutex.
You should also use RAII types to lock the mutexes.
e.g.
void SomeClass::cancelBigSearch() {
    std::lock_guard<std::mutex> lock(cxlMutex_);
    cancelled_ = true;
}

bool SomeClass::cancelled() {
    std::lock_guard<std::mutex> lock(cxlMutex_);
    if (cancelled_) {
        // reset to false, to avoid caller having to lock mutex again to reset it
        cancelled_ = false;
        return true;
    }
    return false;
}

void SomeClass::bigSearch() {
    std::lock_guard<std::mutex> lock(mutex);
    // ...
    // Some code
    // ...

    // Safe to exit at this point
    if (cancelled())
        return;

    // ...
    // Some more code
    // ...
    if (cancelled())
        return;

    // ...
    // Again more code
    // ...
    if (cancelled())
        return;
}

waiting for multiple condition variables in boost?

I'm looking for a way to wait for multiple condition variables.
i.e. something like:
boost::condition_variable cond1;
boost::condition_variable cond2;

void wait_for_data_to_process()
{
    boost::unique_lock<boost::mutex> lock(mut);
    wait_any(lock, cond1, cond2); // boost only provides cond1.wait(lock);
    process_data();
}
Is something like this possible with condition variables? And if not, are there alternative solutions?
Thanks
I don't believe you can do anything like this with boost::thread. Perhaps because POSIX condition variables don't allow this type of construct. Of course, Windows has WaitForMultipleObjects as aJ posted, which could be a solution if you're willing to restrict your code to Windows synchronization primitives.
Another option would be to use fewer condition variables: just have one condition variable that you fire when anything "interesting" happens. Then, any time you want to wait, you run a loop that checks to see if your particular situation of interest has come up, and if not, go back to waiting on the condition variable. You should be waiting on condition variables in such a loop anyway, as condition variable waits are subject to spurious wakeups (from the boost::thread docs, emphasis mine):
void wait(boost::unique_lock<boost::mutex>& lock)
...
Effects:
Atomically call lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), or spuriously. ...
As Managu already answered, you can use the same condition variable and check for multiple "events" (bool variables) in your while loop. However, concurrent access to these bool variables must be protected using the same mutex that the condvar uses.
Since I already went through the trouble of typing this code example for a related question, I'll repost it here:
boost::condition_variable condvar;
boost::mutex mutex;
bool finished1 = false;
bool finished2 = false;

void longComputation1()
{
    {
        boost::lock_guard<boost::mutex> lock(mutex);
        finished1 = false;
    }
    // Perform long computation
    {
        boost::lock_guard<boost::mutex> lock(mutex);
        finished1 = true;
    }
    condvar.notify_one();
}

void longComputation2()
{
    {
        boost::lock_guard<boost::mutex> lock(mutex);
        finished2 = false;
    }
    // Perform long computation
    {
        boost::lock_guard<boost::mutex> lock(mutex);
        finished2 = true;
    }
    condvar.notify_one();
}

void somefunction()
{
    // Wait for long computations to finish without "spinning"
    boost::unique_lock<boost::mutex> lock(mutex); // wait() requires a unique_lock
    while (!finished1 && !finished2)
    {
        condvar.wait(lock);
    }
    // Computations are finished
}
alternative solutions?
I am not sure about the Boost library, but you can use the WaitForMultipleObjects function to wait for multiple kernel objects. Just check if this helps.
As Managu points out, using multiple condition variables might not be a good solution in the first place. What you want to do should be possible to implement using semaphores.
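As a rough sketch of that idea (an illustration, not code from the answer), a counting semaphore can be built from a Boost mutex and condition variable; each worker releases it once, and the waiting thread acquires it once per worker:

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>

class semaphore
{
public:
    explicit semaphore(unsigned initial = 0) : count_(initial) {}

    void release() // signal: one unit of work is done
    {
        {
            boost::lock_guard<boost::mutex> lock(mutex_);
            ++count_;
        }
        condition_.notify_one();
    }

    void acquire() // wait until at least one unit is available
    {
        boost::unique_lock<boost::mutex> lock(mutex_);
        while (count_ == 0)
            condition_.wait(lock);
        --count_;
    }

private:
    boost::mutex mutex_;
    boost::condition_variable condition_;
    unsigned count_;
};

// Usage sketch: each worker calls done.release() when its data is ready,
// and the consumer calls done.acquire() once per worker before proceeding.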
Using the same condition variable for multiple events technically works, but it doesn't allow encapsulation. So I had an attempt at making a class that supports it. Not tested yet! Also it doesn't support notify_one() as I haven't worked out how to implement that.
#pragma once
#include <condition_variable>
#include <functional>
#include <mutex>
#include <unordered_set>
#include <vector>

// This is like a `condition_variable` but you can wait on multiple `multi_condition_variable`s.
// Internally it works by creating a new `condition_variable` for each `wait_any()` and registering
// it with the target `multi_condition_variable`s. When `notify_all()` is called, the main `condition_variable`
// is notified, as well as all the temporary `condition_variable`s created by `wait_any()`.
//
// There are two caveats:
//
// 1. You can't call the destructor if any threads are `wait()`ing. This is difficult to get around but
//    it is the same as `std::condition_variable` anyway.
//
// 2. There is no `notify_one()`. You can *almost* implement this, but the only way I could think to do
//    it was to add an `atomic_int` that indicates the number of waits(). Unfortunately there is no way
//    to atomically increment it, and then wait.
class multi_condition_variable
{
public:
    multi_condition_variable()
    {
    }

    // Note that it is only safe to invoke the destructor if no thread is waiting on this condition variable.
    ~multi_condition_variable()
    {
    }

    // Notify all threads calling wait(), and all wait_any()'s that contain this instance.
    void notify_all()
    {
        _condition.notify_all();
        std::lock_guard<std::mutex> lock(_othersMutex); // protect _others while notifying
        for (auto o : _others)
            o->notify_all();
    }

    // Wait for notify_all to be called, or a spurious wake-up.
    void wait(std::unique_lock<std::mutex>& loc)
    {
        _condition.wait(loc);
    }

    // Wait for any of the notify_all()'s in `cvs` to be called, or a spurious wakeup.
    static void wait_any(std::unique_lock<std::mutex>& loc,
                         std::vector<std::reference_wrapper<multi_condition_variable>> cvs)
    {
        std::condition_variable c;
        for (multi_condition_variable& cv : cvs)
            cv.addOther(&c);
        c.wait(loc);
        for (multi_condition_variable& cv : cvs)
            cv.removeOther(&c);
    }

private:
    void addOther(std::condition_variable* cv)
    {
        std::lock_guard<std::mutex> lock(_othersMutex);
        _others.insert(cv);
    }

    void removeOther(std::condition_variable* cv)
    {
        // Note that *this may have been destroyed at this point.
        std::lock_guard<std::mutex> lock(_othersMutex);
        _others.erase(cv);
    }

    // The condition variable.
    std::condition_variable _condition;

    // When notified, also notify these.
    std::unordered_set<std::condition_variable*> _others;

    // Mutex to protect access to _others.
    std::mutex _othersMutex;
};
// Example use:
//
// multi_condition_variable cond1;
// multi_condition_variable cond2;
//
// void wait_for_data_to_process()
// {
//     std::unique_lock<std::mutex> lock(mut);
//
//     multi_condition_variable::wait_any(lock, {cond1, cond2});
//
//     process_data();
// }