I want to create a scoped lock, but I want something like this:
{
    if (lockRequired)
        boost::mutex::scoped_lock(Mutex); // The temporary is destroyed at the end of this statement, so the lock is released immediately
    /* Here I also want to hold Mutex */
}
If the condition is true, I want the mutex to stay locked in the enclosing scope. I know I could use a plain .lock() and call .unlock() at the end of the scope, but I have many return paths. I could also create some SynchronizationGuard in the scope that unlocks the mutex in its destructor, but that doesn't feel like a clean solution. Any advice?
Best regards.
Use the ternary operator:
boost::mutex::scoped_lock lock = lockRequired ?
    boost::mutex::scoped_lock(Mutex) : boost::mutex::scoped_lock();
Or just use swap under the condition:
boost::mutex::scoped_lock lock;
if (lockRequired)
{
    boost::mutex::scoped_lock lock_(Mutex);
    lock.swap(lock_);
}
Or construct the lock with the defer_lock tag and then call lock() only when needed:
boost::mutex::scoped_lock lock(Mutex, boost::defer_lock);
if (lockRequired)
{
    lock.lock();
}
You can construct the lock deferred:
#include <boost/thread.hpp>

int main() {
    boost::mutex mx;
    bool condition = true; // stand-in for the real check

    boost::mutex::scoped_lock sl(mx, boost::defer_lock);
    if (condition)
        sl.lock();

    // sl will unlock at the end of the scope (but only if it was actually locked)
}
This also works for std::unique_lock and the corresponding Boost lock types; note that std::lock_guard has no defer_lock constructor, only the adopt_lock one.
Analogously, there is the adopt_lock tag type for adopting a mutex that has already been locked.
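As a minimal sketch of the adopt_lock variant (same Boost setup as above, with the mutex locked by hand first):
#include <boost/thread.hpp>

int main() {
    boost::mutex mx;

    mx.lock(); // acquire the mutex manually
    // adopt_lock tells the lock object to take ownership of the already-locked mutex
    boost::mutex::scoped_lock sl(mx, boost::adopt_lock);

    // sl releases mx at the end of the scope
}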
Related
In my multithreaded server I have somefunction(), which needs to protect two mutually independent pieces of global data using EnterCriticalSection.
void somefunction()
{
    EnterCriticalSection(&g_List);
    ...
    EnterCriticalSection(&g_Variable);
    ...
    LeaveCriticalSection(&g_Variable);
    ...
    LeaveCriticalSection(&g_List);
}
Following the advice of more experienced programmers, I'm going to use a RAII wrapper. For example:
class Locker
{
public:
    Locker(CSType& cs) : m_cs(cs)
    {
        EnterCriticalSection(&m_cs);
    }
    ~Locker()
    {
        LeaveCriticalSection(&m_cs);
    }
private:
    CSType& m_cs;
};
My question: is it OK to transform somefunction() like this
(putting two Lockers in one function)?
void somefunction()
{
    // g_List, g_Variable previously initialized via InitializeCriticalSection
    Locker lockList(g_List);
    Locker lockVariable(g_Variable);
    ...
    ...
}
Your current solution has a potential deadlock. If two (or more) critical sections get locked in different orders in different places, you can end up in a deadlock. The best approach is to lock them both atomically. You can see an example of this in the Boost thread library: shared_lock and unique_lock can be used in deferred mode, so that you first prepare the RAII objects for all the mutexes and then lock them all atomically with one call to the lock() function.
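A rough sketch of that idea using the standard-library equivalents (std::mutex stands in here for whatever lockable wrapper you end up with; Boost offers the same pattern with deferred locks and its lock() algorithm):
#include <mutex>

std::mutex listMutex;      // hypothetical stand-ins for the two critical sections
std::mutex variableMutex;

void somefunction()
{
    // Prepare RAII objects for both mutexes without locking yet
    std::unique_lock<std::mutex> lockList(listMutex, std::defer_lock);
    std::unique_lock<std::mutex> lockVariable(variableMutex, std::defer_lock);

    // Lock both atomically; std::lock uses a deadlock-avoidance algorithm,
    // so the acquisition order elsewhere in the program no longer matters
    std::lock(lockList, lockVariable);

    // ... work with both protected structures ...
}   // both locks are released here automatically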
As long as you keep the lock order the same in all your threads, it's OK. Do you really need to lock them both at the same time? Also, with a scoped lock you can add scopes to control when to unlock, something like this:
{
    // use inner scopes to control lock duration
    {
        Locker lockList(g_List);
        // do something
    } // g_List is unlocked at the end of the inner scope
    Locker lockVariable(g_Variable);
    // do something
}
I have a problem with mutexes...
This is the general structure of my code:
#include <mutex>

std::mutex m;

while (1) {
    m.lock();
    if (global_variable1 == 1) {
        //CODE GOES HERE
        if (err == error::eof) {
            cout << "error!" << endl;
            //should I put a m.unlock() here??
            continue;
        }
        int something = 1;
        global_variable2 = something;
    }
    m.unlock();
    usleep(100000);
}
Basically, I want to change a global variable safely, so I think I need to use mutexes. I should only unlock the mutex after the "if(global_variable1==1)" block, but if there is an error, the mutex won't be unlocked. Can I unlock it before the "continue"? Or is this going to mess up anything else? Can having two unlock() calls for the same mutex.lock() cause undesired behaviour?
This is why C++ has separate lock and mutex classes: a lock is a handy RAII class that will make sure that your mutex gets unlocked even when exceptions are thrown or some other idiot programmer adds a new return/break/continue into the program. Here's how this program works with std::unique_lock:
#include <mutex>

std::mutex m;

while (1) {
    std::unique_lock<std::mutex> lock(m);
    if (global_variable1 == 1) {
        //CODE GOES HERE
        if (err == error::eof) {
            cout << "error!" << endl;
            continue; // lock's destructor unlocks m here
        }
        int something = 1;
        global_variable2 = something;
    }
    lock.unlock(); // release before sleeping
    usleep(100000);
}
Do not lock/unlock mutexes manually! Instead use a guard, e.g. std::lock_guard<std::mutex>: the guard acquires the lock upon construction and releases it upon destruction. To limit the time the lock is held, just use a block:
while (true) {
    {
        std::lock_guard<std::mutex> cerberos(m);
        // ...
    }
    sleep(n);
}
As per the title, how do I try_lock on a boost::unique_lock?
I have this code:
int mySafeFunct()
{
    if (myMutex.try_lock() == false)
    {
        return -1;
    }
    // mutex ownership has been acquired
    // do stuff safely
    myMutex.unlock();
    return 0;
}
Now I'd like to use a unique_lock (which is a scoped lock) instead of the plain boost::mutex. I want this to avoid all the unlock() calls in the function body.
You can either defer the locking with the defer_lock tag, or use the try_to_lock tag when creating your unique_lock:
boost::mutex myMutex;
boost::unique_lock<boost::mutex> lock(myMutex, boost::try_to_lock);
if (!lock.owns_lock())
    return -1;
...
boost::mutex myMutex;
boost::unique_lock<boost::mutex> lock(myMutex, boost::defer_lock);
lock.try_lock();
Previous answers may be outdated. I'm using boost 1.53 and this seems to work:
boost::unique_lock<boost::mutex> lk(myMutex, boost::try_to_lock);
if (lk)
    doTheJob();
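Applied to the function from the question, a sketch might look like this (assuming the function is meant to return int, as the return -1 suggests):
#include <boost/thread.hpp>

boost::mutex myMutex;

int mySafeFunct()
{
    boost::unique_lock<boost::mutex> lock(myMutex, boost::try_to_lock);
    if (!lock.owns_lock())
        return -1;

    // do stuff safely; every return path releases the mutex automatically,
    // so no explicit unlock() calls are needed
    return 0;
}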
I'm looking for a way to wait on multiple condition variables, i.e. something like:
boost::condition_variable cond1;
boost::condition_variable cond2;

void wait_for_data_to_process()
{
    boost::unique_lock<boost::mutex> lock(mut);
    wait_any(lock, cond1, cond2); // boost only provides cond1.wait(lock);
    process_data();
}
Is something like this possible with condition variables? And if not, are there alternative solutions?
Thanks
I don't believe you can do anything like this with boost::thread, perhaps because POSIX condition variables don't allow this type of construct. Of course, Windows has WaitForMultipleObjects, as aJ posted, which could be a solution if you're willing to restrict your code to Windows synchronization primitives.
Another option would be to use fewer condition variables: just have one condition variable that you fire whenever anything "interesting" happens. Then, any time you want to wait, you run a loop that checks whether your particular situation of interest has come up, and if not, you go back to waiting on the condition variable. You should be waiting on condition variables in such a loop anyway, as condition variable waits are subject to spurious wakeups (from the boost::thread docs, emphasis mine):
void wait(boost::unique_lock<boost::mutex>& lock)
...
Effects:
Atomically call lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), or spuriously. ...
As Managu already answered, you can use the same condition variable and check for multiple "events" (bool variables) in your while loop. However, concurrent access to these bool variables must be protected using the same mutex that the condvar uses.
Since I already went through the trouble of typing this code example for a related question, I'll repost it here:
boost::condition_variable condvar;
boost::mutex mutex;
bool finished1 = false;
bool finished2 = false;

void longComputation1()
{
    {
        boost::lock_guard<boost::mutex> lock(mutex);
        finished1 = false;
    }
    // Perform long computation
    {
        boost::lock_guard<boost::mutex> lock(mutex);
        finished1 = true;
    }
    condvar.notify_one();
}

void longComputation2()
{
    {
        boost::lock_guard<boost::mutex> lock(mutex);
        finished2 = false;
    }
    // Perform long computation
    {
        boost::lock_guard<boost::mutex> lock(mutex);
        finished2 = true;
    }
    condvar.notify_one();
}

void somefunction()
{
    // Wait, without "spinning", until at least one computation finishes.
    // Note: wait() requires a unique_lock, not a lock_guard.
    boost::unique_lock<boost::mutex> lock(mutex);
    while (!finished1 && !finished2)
    {
        condvar.wait(lock);
    }
    // At least one computation is finished here
}
alternative solutions?
I am not sure about the Boost library, but you can use the WaitForMultipleObjects function to wait on multiple kernel objects. Just check if this helps.
As Managu points out, using multiple condition variables might not be a good solution in the first place. What you want to do should be possible to implement using semaphores.
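For illustration, a minimal counting semaphore can be sketched on top of a mutex and a condition variable (the class and member names here are my own, not a library API):
#include <mutex>
#include <condition_variable>

// Minimal counting semaphore built from a mutex and a condition variable.
class Semaphore
{
public:
    explicit Semaphore(int initial = 0) : count_(initial) {}

    // Increment the count and wake one waiter.
    void post()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            ++count_;
        }
        condition_.notify_one();
    }

    // Block until the count is positive, then decrement it.
    void wait()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        while (count_ == 0)
            condition_.wait(lock);
        --count_;
    }

private:
    std::mutex mutex_;
    std::condition_variable condition_;
    int count_;
};
A wait-any could then be emulated by having every producer post the same shared semaphore, so the waiter wakes as soon as any of them finishes.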
Using the same condition variable for multiple events technically works, but it doesn't allow encapsulation, so I had a go at making a class that supports it. Not tested yet! Also, it doesn't support notify_one(), as I haven't worked out how to implement that.
#pragma once
#include <condition_variable>
#include <functional>
#include <mutex>
#include <unordered_set>
#include <vector>

// This is like a `condition_variable` but you can wait on multiple `multi_condition_variable`s.
// Internally it works by creating a new `condition_variable` for each `wait_any()` and registering
// it with the target `multi_condition_variable`s. When `notify_all()` is called, the main `condition_variable`
// is notified, as well as all the temporary `condition_variable`s created by `wait_any()`.
//
// There are two caveats:
//
// 1. You can't call the destructor if any threads are `wait()`ing. This is difficult to get around but
//    it is the same as `std::condition_variable` anyway.
//
// 2. There is no `notify_one()`. You can *almost* implement this, but the only way I could think to do
//    it was to add an `atomic_int` that indicates the number of `wait()`s. Unfortunately there is no way
//    to atomically increment it and then wait.
class multi_condition_variable
{
public:
    multi_condition_variable()
    {
    }

    // Note that it is only safe to invoke the destructor if no thread is waiting on this condition variable.
    ~multi_condition_variable()
    {
    }

    // Notify all threads calling wait(), and all wait_any()'s that contain this instance.
    void notify_all()
    {
        _condition.notify_all();
        std::lock_guard<std::mutex> lock(_othersMutex);
        for (auto o : _others)
            o->notify_all();
    }

    // Wait for notify_all to be called, or a spurious wake-up.
    void wait(std::unique_lock<std::mutex>& loc)
    {
        _condition.wait(loc);
    }

    // Wait for any of the notify_all()'s in `cvs` to be called, or a spurious wakeup.
    static void wait_any(std::unique_lock<std::mutex>& loc, std::vector<std::reference_wrapper<multi_condition_variable>> cvs)
    {
        std::condition_variable c;
        for (multi_condition_variable& cv : cvs)
            cv.addOther(&c);
        c.wait(loc);
        for (multi_condition_variable& cv : cvs)
            cv.removeOther(&c);
    }

private:
    void addOther(std::condition_variable* cv)
    {
        std::lock_guard<std::mutex> lock(_othersMutex);
        _others.insert(cv);
    }

    void removeOther(std::condition_variable* cv)
    {
        // Note that *this may have been destroyed at this point.
        std::lock_guard<std::mutex> lock(_othersMutex);
        _others.erase(cv);
    }

    // The condition variable.
    std::condition_variable _condition;
    // When notified, also notify these.
    std::unordered_set<std::condition_variable*> _others;
    // Mutex to protect access to _others.
    std::mutex _othersMutex;
};
// Example use:
//
// multi_condition_variable cond1;
// multi_condition_variable cond2;
//
// void wait_for_data_to_process()
// {
//     std::unique_lock<std::mutex> lock(mut);
//
//     multi_condition_variable::wait_any(lock, {cond1, cond2});
//
//     process_data();
// }
I've been using boost::mutex::scoped_lock in this manner:
void ClassName::FunctionName()
{
    {
        boost::mutex::scoped_lock scopedLock(mutex_);
        //do stuff
        waitBoolean = true;
    }
    while (waitBoolean == true) {
        sleep(1);
    }
    //get on with the thread's activities
}
Basically it sets waitBoolean, and the other thread signals that it is done by setting waitBoolean to false.
This doesn't seem to work, however, because the other thread can't get a lock on mutex_!
I was assuming that by wrapping the scoped_lock in braces I would be terminating its lock. Is that not the case? Reading online says it only gives up the mutex when the destructor is called. Won't it be destroyed when it goes out of that local scope?
Signaling part of the code:
while (running_) {
    boost::mutex::scoped_lock scopedLock(mutex_);
    //Run some function that needs to be done...
    if (waitBoolean) {
        waitBoolean = false;
    }
}
Thanks!
To synchronize two threads, use a condition variable. That is the state-of-the-art way to synchronize two threads the way you want.
Using Boost, the waiting part is something like:
void BoostSynchronisationPoint::waitSynchronisation()
{
    boost::unique_lock<boost::mutex> lock(_mutex);
    _synchronisationSent = false;
    while (!_synchronisationSent)
    {
        _condition.wait(lock); // unlock and wait
    }
}
The notify part is something like:
void BoostSynchronisationPoint::sendSynchronisation()
{
    {
        boost::lock_guard<boost::mutex> lock(_mutex);
        _synchronisationSent = true;
    }
    _condition.notify_all();
}
The business with _synchronisationSent is there to avoid spurious wakeups: see Wikipedia.
The scoped_lock is indeed released at the end of the scope. However, you don't lock the mutex when you're looping on waitBoolean, which suggests you don't protect it properly in other places either, e.g. where it's set to false, so you'll end up with nasty race conditions.
I'd say you should use a boost::condition_variable to do this sort of thing, instead of sleep plus thread-unsafe checking.
I would also suggest marking waitBoolean as volatile; however, you really need to use a condition variable, or even better a barrier.
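To illustrate the barrier suggestion, a minimal sketch with boost::barrier might look like this (the two thread functions are made up for the example):
#include <boost/thread.hpp>
#include <boost/thread/barrier.hpp>

// A barrier for two participants: each thread blocks in wait() until both have arrived.
boost::barrier rendezvous(2);

void workerThread()
{
    // ... do the work the other thread is waiting for ...
    rendezvous.wait(); // signal completion by arriving at the barrier
}

void mainThread()
{
    // ... do other stuff ...
    rendezvous.wait(); // blocks until workerThread has also called wait()
    // get on with the thread's activities
}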