Locking a mutex in a destructor in C++11 - c++

I have some code which needs to be thread-safe and exception-safe. The code below is a very simplified version of my problem:
#include <mutex>
#include <thread>

std::mutex mutex;
int n = 0;

class Counter {
public:
    Counter() {
        std::lock_guard<std::mutex> guard(mutex);
        n++;
    }
    ~Counter() {
        // How can I protect the underlying call to mutex.lock() here?
        std::lock_guard<std::mutex> guard(mutex);
        n--;
    }
};

void doSomething() {
    Counter counter;
    // Here I could do something meaningful
}

int numberOfThreadInDoSomething() {
    std::lock_guard<std::mutex> guard(mutex);
    return n;
}
I have a mutex that I need to lock in the destructor of an object. The problem is that my destructor must not throw exceptions.
What can I do?
0) I cannot replace n with an atomic variable (of course it would do the trick here, but that is not the point of my question).
1) I could replace my mutex with a spin lock.
2) I could wrap the locking in a try/catch inside a loop until I eventually acquire the lock with no exception raised.
None of those solutions seems very appealing. Have you had the same problem? How did you solve it?

As suggested by Adam H. Peterson, I finally decided to write a no-throw mutex:
class NoThrowMutex {
private:
    std::mutex mutex;
    std::atomic_flag flag;
    bool both;
public:
    NoThrowMutex();
    ~NoThrowMutex();
    void lock();
    void unlock();
};

NoThrowMutex::NoThrowMutex() : mutex(), flag(), both(false) {
    flag.clear(std::memory_order_release);
}

NoThrowMutex::~NoThrowMutex() {}

void NoThrowMutex::lock() {
    try {
        mutex.lock();
        while (flag.test_and_set(std::memory_order_acquire));
        both = true;
    }
    catch (...) {
        while (flag.test_and_set(std::memory_order_acquire));
        both = false;
    }
}

void NoThrowMutex::unlock() {
    if (both) { mutex.unlock(); }
    flag.clear(std::memory_order_release);
}
The idea is to have two mutexes instead of one. The real mutex is the spin mutex implemented with the std::atomic_flag. This spin mutex is protected by a std::mutex, which could throw.
In the normal situation, the standard mutex is acquired and the flag is set at the cost of only one extra atomic operation. If the standard mutex cannot be locked immediately, the thread goes to sleep.
If for any reason the standard mutex throws, the mutex enters its spin mode. The thread where the exception occurred then loops until it can set the flag. Since no other thread is aware that this thread bypassed the standard mutex completely, they could spin too.
In the worst-case scenario, this locking mechanism degrades to a spin lock. Most of the time it behaves just like a normal mutex.
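For completeness, here is a minimal sketch (not part of the original code, the counterMutex name is illustrative) of how NoThrowMutex slots into the Counter example above; std::lock_guard only requires lock()/unlock(), and since NoThrowMutex::lock() swallows any exception, the destructor can no longer throw:

NoThrowMutex counterMutex; // hypothetical replacement for the global std::mutex
int n = 0;

class Counter {
public:
    Counter() {
        std::lock_guard<NoThrowMutex> guard(counterMutex);
        n++;
    }
    ~Counter() {
        // NoThrowMutex::lock() cannot throw, so this destructor cannot throw either.
        std::lock_guard<NoThrowMutex> guard(counterMutex);
        n--;
    }
};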

This is a bad situation to be in. Your destructor is doing something that might fail. If failure to update this counter will irrecoverably corrupt your application, you may want to simply let the destructor throw. This will crash your application with a call to terminate, but if your application is corrupted, it may be better to kill the process and rely on some higher-level recovery scheme (such as a watchdog for a daemon or retrying execution for another utility). If failure to decrement the counter is recoverable, you should absorb the exception with a try{}catch() block and recover (or potentially save information for some other operation to eventually recover). If it's not recoverable, but it's not fatal, you may want to catch and absorb the exception and log the failure (being sure to log in an exception-safe way, of course).
It would be ideal if the code could be restructured so that the destructor doesn't do anything that can fail. However, if your code is otherwise correct, failure while acquiring a lock is probably rare except in cases of constrained resources, so either absorbing the failure or aborting on it may very well be acceptable. For some mutexes, lock() is probably a no-throw operation (such as a spinlock using atomic_flag), and if you can use such a mutex, you can expect lock_guard to never throw. Your only worry in that situation would be deadlock.
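As a rough sketch of such a mutex (an illustration of the idea, not code from the question), a spinlock built on std::atomic_flag has lock() and unlock() operations that cannot fail, so std::lock_guard over it never throws:

#include <atomic>

class SpinLock {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
public:
    void lock() noexcept {
        // Spin until we are the thread that flips the flag from clear to set.
        while (flag_.test_and_set(std::memory_order_acquire)) { }
    }
    void unlock() noexcept {
        flag_.clear(std::memory_order_release);
    }
};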

Related

C++ atomics: how to allow only a single thread to access a function?

I'd like to write a function that is accessible only by a single thread at a time. I don't need busy waits; a brutal 'rejection' is enough if another thread is already running it. This is what I have come up with so far:
std::atomic<bool> m_busy(false);

bool func()
{
    if (m_busy.exchange(true) == true)
        return false;

    // ... do stuff ...

    m_busy.exchange(false);
    return true;
}
Is the logic for the atomic exchange correct?
Is it correct to mark the two atomic operations as std::memory_order_acq_rel? As far as I understand, a relaxed ordering (std::memory_order_relaxed) wouldn't be enough to prevent reordering.
Your atomic swap implementation might work. But trying to do thread-safe programming without a lock is almost always fraught with issues and is often harder to maintain.
Unless you need a performance improvement, std::mutex with the try_lock() method is all you need, e.g.:
std::mutex mtx;

bool func()
{
    // making use of std::unique_lock so if the code throws an
    // exception, the std::mutex will still get unlocked correctly...
    std::unique_lock<std::mutex> lck(mtx, std::try_to_lock);
    bool gotLock = lck.owns_lock();
    if (gotLock)
    {
        // do stuff
    }
    return gotLock;
}
Your code looks correct to me, as long as you leave the critical section by falling out, not by returning or throwing an exception.
You can unlock with a release store; an RMW (like exchange) is unnecessary. The initial exchange only needs acquire. (But it does need to be an atomic RMW like exchange or compare_exchange_strong.)
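A sketch of what that suggests, keeping the poster's m_busy name (my illustration, not code from the original answer):

#include <atomic>

std::atomic<bool> m_busy(false);

bool func()
{
    // Acquire RMW: only one thread at a time can flip false -> true.
    if (m_busy.exchange(true, std::memory_order_acquire))
        return false;                 // someone else is already inside

    // ... do stuff ...

    // A plain release store is enough to publish the work and "unlock".
    m_busy.store(false, std::memory_order_release);
    return true;
}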
Note that ISO C++ says that taking a std::mutex is an "acquire" operation, and releasing it is a "release" operation, because that's the minimum necessary for keeping the critical section contained between the taking and the releasing.
Your algo is exactly like a spinlock, but without retry if the lock's already taken. (i.e. just a try_lock). All the reasoning about necessary memory-order for locking applies here, too. What you've implemented is logically equivalent to the try_lock / unlock in #selbie's answer, and very likely performance-equivalent, too. If you never use mtx.lock() or whatever, you're never actually blocking i.e. waiting for another thread to do something, so your code is still potentially lock-free in the progress-guarantee sense.
Rolling your own with an atomic<bool> is probably fine; using std::mutex here gains you nothing, because all you need it to do is try-lock and unlock. Doing only that is certainly possible for an implementation (with some extra function-call overhead), but some implementations might do something more. You're not using any of the functionality beyond that. The one nice thing std::mutex gives you is the comfort of knowing that it safely and correctly implements try_lock and unlock. But if you understand locking and acquire/release, it's easy to get that right yourself.
The usual performance reason not to roll your own locking is that a mutex will be tuned for the OS and typical hardware, with things like exponential backoff, x86 pause instructions while spinning a few times, then a fallback to a system call, and efficient wakeup via system calls like Linux futex. All of that only benefits the blocking behaviour. .try_lock leaves it all unused, and if you never have any thread sleeping, then unlock never has any other threads to notify.
There is one advantage to using std::mutex: you can use RAII without having to roll your own wrapper class. std::unique_lock with the std::try_to_lock policy will do this. It makes your function exception-safe, making sure to always unlock before exiting, if it got the lock.
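If you keep the atomic<bool>, rolling such a wrapper is only a few lines. A hedged sketch (the AtomicTryGuard name and interface are my own, not a standard facility):

#include <atomic>

class AtomicTryGuard {
    std::atomic<bool>& flag_;
    bool owns_;
public:
    explicit AtomicTryGuard(std::atomic<bool>& flag)
        : flag_(flag),
          owns_(!flag.exchange(true, std::memory_order_acquire)) {}
    ~AtomicTryGuard() {
        if (owns_)
            flag_.store(false, std::memory_order_release); // always "unlock" on scope exit
    }
    bool owns_lock() const { return owns_; }
    AtomicTryGuard(const AtomicTryGuard&) = delete;
    AtomicTryGuard& operator=(const AtomicTryGuard&) = delete;
};

func() would then start with AtomicTryGuard guard(m_busy); if (!guard.owns_lock()) return false; and the flag is cleared even if the body throws.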

Why can a mutex lock twice without unlock in C++?

#include <iostream>
#include <mutex>

int main() {
    std::mutex mut;
    mut.lock();
    std::cout << "1111\n";
    mut.lock();
    std::cout << "2222\n";
    return 0;
}
Why does this code work and output 2222? Shouldn't it block at the second lock()? In the mutex source code, the lock operation throws an exception; shouldn't it block and wait? I use try {...} catch (exception& e) {} to catch this exception, but it doesn't work.
void
lock()
{
    int __e = __gthread_mutex_lock(&_M_mutex);
    // EINVAL, EAGAIN, EBUSY, EINVAL, EDEADLK(may)
    if (__e)
        __throw_system_error(__e);
}
It cannot. Your code has undefined behavior.
From cppreference:
If lock is called by a thread that already owns the mutex, the behavior is undefined: for example, the program may deadlock. An implementation that can detect the invalid usage is encouraged to throw a std::system_error with error condition resource_deadlock_would_occur instead of deadlocking.
Reading on, one might get the impression that calling lock on the same thread would always trigger an exception:
Exceptions
Throws std::system_error when errors occur, including errors from the underlying operating system that would prevent lock from meeting its specifications. The mutex is not locked in the case of any exception being thrown.
However, an exception is not guaranteed when you call lock on the same thread: it happens only if the implementation can detect such invalid use, and only if the implementation is kind enough to actually throw.
Looking into the standard we find:
1 The class mutex provides a non-recursive mutex with exclusive ownership semantics. If one thread owns a mutex object, attempts by another thread to acquire ownership of that object will fail (for try_lock()) or block (for lock()) until the owning thread has released ownership with a call to unlock().
2 [Note: After a thread A has called unlock(), releasing a mutex, it is possible for another thread B to lock the same mutex, observe that it is no longer in use, unlock it, and destroy it, before thread A appears to have returned from its unlock call. Implementations are required to handle such scenarios correctly, as long as thread A doesn't access the mutex after the unlock call returns. These cases typically occur when a reference-counted object contains a mutex that is used to protect the reference count. — end note]
3 The class mutex meets all of the mutex requirements ([thread.mutex.requirements]). It is a standard-layout class ([class.prop]).
4 [Note: A program can deadlock if the thread that owns a mutex object calls lock() on that object. If the implementation can detect the deadlock, a resource_deadlock_would_occur error condition might be observed. — end note]
5 The behavior of a program is undefined if it destroys a mutex object owned by any thread or a thread terminates while owning a mutex object.
It only says "can deadlock" and "might be observed", but otherwise it does not define what happens when you call lock on the same thread.
P.S.: Requiring lock to throw an exception in this case would make calling it more expensive without any real benefit. Calling lock twice on the same thread is something you should simply not do. Actually, you shouldn't call std::mutex::lock directly at all, because manually releasing the lock is not exception-safe:
// shared block - bad
{
    mut.lock();
    // ... code ...
    mut.unlock();
}
If ... code ... throws an exception, then mut is never unlocked. The way to solve that is RAII, and fortunately the standard library offers plenty of RAII helpers for locks, e.g. std::lock_guard:
// shared block - good
{
    std::lock_guard<std::mutex> lock(mut);
    // ... code ...
}
TL;DR Don't do it.
If your code relies on that exception, then you have more severe problems than not getting that exception.
You are working with a single thread here. You should read up on what a mutex really does and when/how it is used.
As for why it outputs 2222: it might as well output anything else, or even make your neighborhood explode, since:
If lock is called by a thread that already owns the mutex, the behavior is undefined: for example, the program may deadlock. 
https://en.cppreference.com/w/cpp/thread/mutex/lock
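If you genuinely need the same thread to lock twice, the standard tool for that is std::recursive_mutex, not std::mutex. A minimal sketch:

#include <iostream>
#include <mutex>

int main() {
    std::recursive_mutex mut;
    mut.lock();
    std::cout << "1111\n";
    mut.lock();          // OK: the owning thread may re-lock a recursive_mutex
    std::cout << "2222\n";
    mut.unlock();        // it must be unlocked once per successful lock()
    mut.unlock();
    return 0;
}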

How to solve unreleased lock issue in C++

Someone please help me solve deadlock issues in C++, if possible with references or examples.
The scenario is as follows.
Thread1 has locked a mutex and is doing some operation; thread2 and thread3 are waiting for thread1 to unlock it so they can access the resource.
Some abort/unexpected thing happened -- thread1 was terminated and never reached the unlock; thread2 and thread3 are still waiting.
How do I keep the main thread safe (meaning nothing should happen to the main thread) in such situations?
Please throw some light on how to solve such issues in C++.
Thanks,
Sheik
Some abort/unexpected thing happened
Use something like std::lock_guard to prevent 'hanging' locks caused by exceptions or by forgotten/unexpected, but necessary, unlock() calls.
The principle is pretty simple, and you can easily implement it for any mechanism that uses a pair of methods that correspond to each other in a 'lock/unlock' manner:
class LockObject // E.g. mutex or alike
{
public:
    // ...
    void lock();
    void unlock();
};
Bind the guard class's constructor to a reference to the lock object's instance, and call lock() in the constructor and unlock() in the destructor:
template<typename T>
class LockGuard
{
public:
    LockGuard(T& lockObject)
        : lockObject_(lockObject)
    {
        lockObject_.lock();
    }

    ~LockGuard()
    {
        lockObject_.unlock();
    }

private:
    T& lockObject_;
};
Use LockGuard like this:
// Some scope providing 'LockObject lockObject'
{
    LockGuard<LockObject> lock(lockObject);
    // Do something while lockObject is locked
} // A call to lockObject.unlock() is guaranteed at least here, no matter what
  // (exception, goto, break, etc.) caused leaving the block's scope.
Generally, threads should not terminate unexpectedly. You may try using try/catch blocks. If you still want to free resources when a thread terminates unexpectedly, you may create a monitor thread that waits for the termination of the first thread.
On Windows, you can use something like ::WaitForSingleObject(m_htThread, INFINITE).
Once the first thread has terminated, you may proceed with freeing the abandoned locks.
Maybe you'll want to add a flag that indicates whether the termination was graceful.
You'll probably also have to remember which thread is locking which object.
As said, I wouldn't recommend this method except in extreme cases.
The way to solve deadlocks in any language or platform is always the same.
Always acquire the locks in the same order.
EDIT: However, you have misdescribed your problem. This is not a deadlock. A deadlock is a circular chain of locks. This is simply an unreleased lock, i.e. a lock leak. The solution is the same as for any other resource leak: don't leak. In C++ that means releasing resources in destructors and ensuring that destructors are called. Somehow your thread has terminated without doing that. Find that problem and fix it.
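When keeping a fixed lock order by hand is awkward, std::lock (C++11) acquires several mutexes with a deadlock-avoidance algorithm. A small sketch (function and mutex names are illustrative):

#include <mutex>

std::mutex a, b;

void useBoth() {
    std::unique_lock<std::mutex> la(a, std::defer_lock);
    std::unique_lock<std::mutex> lb(b, std::defer_lock);
    std::lock(la, lb);   // locks both without deadlocking, whatever order other threads use
    // ... touch data protected by both mutexes ...
}                        // both locks are released here, even on exception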

Boost mutex throw error on close for waiting threads

Is there a way to have a boost mutex throw an exception on any waiting threads? I have a problem where an object is deleted, but due to the nature of the software library it is possible that threads are still waiting at a mutex within the object, and a rather nasty exception is thrown when the mutex is closed. I guess I could use a multiple-mutex counter, but that could cause performance degradation. What I'd like is for the mutex to throw an exception on any threads that are waiting when it is closed, so that the stack is unwound. Is there a clean way to do this that is platform-independent?
Such a concept of a mutex that throws when it is destroyed seems innocuous enough, but when it comes time to implement it, it reveals a flaw in how you are thinking about mutexes.
Let's take a look at some example code to get an idea of the pitfalls of such an approach.
Note: Please do not use the code below, it will cause nothing but endless hours of torment and suffering debugging synchronization problems.
class throwing_mutex
{
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool destroyed_ = false;
    bool locked_ = false;

public:
    void lock()
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&]() { return !locked_ || destroyed_; }); // Wait until the mutex is unlocked or destroyed.
        if (destroyed_) throw std::runtime_error("The mutex was destroyed while being waited on.");
        locked_ = true;
    }

    void unlock()
    {
        std::unique_lock<std::mutex> lock(m_);
        locked_ = false;
        lock.unlock();
        cv_.notify_one();
    }

    ~throwing_mutex()
    {
        std::unique_lock<std::mutex> lock(m_);
        destroyed_ = true;
        lock.unlock();
        cv_.notify_all(); // Let all waiters know we are dead.
    }
};
Upon destruction, everyone waiting on the throwing_mutex will have an exception thrown. But this opens up a pretty big race condition.
We've handled the case where everyone is waiting for the mutex -- they will safely throw. What we have not handled is the case where anyone is on their way to calling lock() but not quite there yet. When they finally get to the point where they can call lock(), the throwing_mutex has already been destroyed. The bug we've just introduced by means of our flawed methodology is called use-after-free. If we are lucky, the error will present itself early and clearly, but sometimes we aren't so lucky and we will be tormented for hours or days. There is no way that our throwing_mutex class can ever solve that problem and any code that would need such a class does not have well thought out ownership semantics.
So, how do we solve this problem if it isn't by means of a mutex that throws? We fix the lifetime of the mutex and the object[s] that are locked by it.
Presumably, this mutex is a member of a class. If that is the case, it means delaying destruction until everyone who depends on the object is done with it. This is conveyed with the use of a shared_ptr. Without getting into the nitty-gritty of ownership semantics, that is the best this can be answered. Hopefully I've changed your way of thinking about the problem enough to steer you away from your original plan and toward something that will work more reliably.
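A minimal sketch of that shared_ptr approach (the names are illustrative, not from the original answer): every thread that may block on the mutex holds a shared_ptr to the owning object, so the mutex cannot be destroyed while anyone can still reach it:

#include <memory>
#include <mutex>

struct Resource {
    std::mutex m;
    // ... data protected by m ...
};

void worker(std::shared_ptr<Resource> res) {
    std::lock_guard<std::mutex> lock(res->m);
    // The Resource (and its mutex) stays alive at least as long as
    // this copy of the shared_ptr exists, so no use-after-free here.
}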

Is there a facility in boost to allow for write-biased locking?

If I have the following code:
#include <boost/date_time.hpp>
#include <boost/thread.hpp>

boost::shared_mutex g_sharedMutex;

void reader()
{
    boost::shared_lock<boost::shared_mutex> lock(g_sharedMutex);
    boost::this_thread::sleep(boost::posix_time::seconds(10));
}

void makeReaders()
{
    while (1)
    {
        boost::thread ar(reader);
        boost::this_thread::sleep(boost::posix_time::seconds(3));
    }
}

boost::thread mr(makeReaders);
boost::this_thread::sleep(boost::posix_time::seconds(5));
boost::unique_lock<boost::shared_mutex> lock(g_sharedMutex);
...
the unique lock will never be acquired, because there will always be readers. I want a unique_lock that, when it starts waiting, prevents any new read locks from gaining access to the mutex (called a write-biased or write-preferring lock, based on my wiki searching). Is there a simple way to do this with boost, or would I need to write my own?
Note that I won't comment on the win32 implementation because it's way more involved and I don't have the time to go through it in detail. That being said, its interface is the same as the pthread implementation, which means that the following answer should be equally valid.
The relevant pieces of the pthread implementation of boost::shared_mutex as of v1.51.0:
void lock_shared()
{
    boost::this_thread::disable_interruption do_not_disturb;
    boost::mutex::scoped_lock lk(state_change);
    while (state.exclusive || state.exclusive_waiting_blocked)
    {
        shared_cond.wait(lk);
    }
    ++state.shared_count;
}

void lock()
{
    boost::this_thread::disable_interruption do_not_disturb;
    boost::mutex::scoped_lock lk(state_change);
    while (state.shared_count || state.exclusive)
    {
        state.exclusive_waiting_blocked = true;
        exclusive_cond.wait(lk);
    }
    state.exclusive = true;
}
The while loop conditions are the most relevant part for you. For the lock_shared function (read lock), notice how the while loop will not terminate as long as another thread is trying to acquire the lock (state.exclusive_waiting_blocked) or already owns it (state.exclusive). This essentially means that write locks have priority over read locks.
For the lock function (write lock), the while loop will not terminate as long as there's at least one thread that currently owns the read lock (state.shared_count) or another thread owns the write lock (state.exclusive). This essentially gives you the usual mutual exclusion guarantees.
As for deadlocks, the read lock will always return as long as write locks are guaranteed to be unlocked once they are acquired. As for the write lock, it's guaranteed to return as long as the read locks and the write locks are always unlocked once acquired.
In case you're wondering, the state_change mutex is used to ensure that there are no concurrent calls to either of these functions. I'm not going to go through the unlock functions because they're a bit more involved. Feel free to look them over yourself; you have the source after all (boost/thread/pthread/shared_mutex.hpp) :)
All in all, this is pretty much a textbook implementation, and it has been extensively tested in a wide range of scenarios (libs/thread/test/test_shared_mutex.cpp and massive use across the industry). I wouldn't worry too much as long as you use it idiomatically (no recursive locking, and always lock using the RAII helpers). If you still don't trust the implementation, you could write a randomized test that simulates whatever case you're worried about and let it run overnight on hundreds of threads. That's usually a good way to tease out deadlocks.
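As a rough illustration of such a test (my sketch, simplified to a fixed-iteration stress run rather than a truly randomized one): spawn a handful of readers and writers that repeatedly take the shared_mutex, and check that the program terminates:

#include <boost/thread.hpp>

boost::shared_mutex g_sharedMutex;
int g_value = 0; // protected by g_sharedMutex

void readerLoop() {
    for (int i = 0; i < 100000; ++i) {
        boost::shared_lock<boost::shared_mutex> lock(g_sharedMutex);
        volatile int copy = g_value; // read under the shared lock
        (void)copy;
    }
}

void writerLoop() {
    for (int i = 0; i < 10000; ++i) {
        boost::unique_lock<boost::shared_mutex> lock(g_sharedMutex);
        ++g_value;                   // write under the exclusive lock
    }
}

int main() {
    boost::thread r1(readerLoop), r2(readerLoop), r3(readerLoop);
    boost::thread w1(writerLoop), w2(writerLoop);
    r1.join(); r2.join(); r3.join();
    w1.join(); w2.join();
    return 0; // if we get here, no reader/writer deadlock occurred in this run
}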
Now, why would you see a read lock being acquired after a write lock is requested? It's difficult to say without seeing the diagnostic code you're using. Chances are that the read lock is acquired after your print statement (or whatever you're using) completes and before the state_change lock is acquired in the write thread.