what are the use cases for std::unique_lock::release? - c++

In what situations would one use the release method of std::unique_lock?
I made the mistake of using the release method instead of the unlock method, and it took me a while to understand why the following code wasn't working.
#include <mutex>
#include <iostream>
#include <vector>
#include <thread>
#include <chrono>

std::mutex mtx;

void foo()
{
    std::unique_lock<std::mutex> lock(mtx);
    std::cout << "in critical section\n";
    std::this_thread::sleep_for(std::chrono::seconds(1));
    lock.release(); // bug: leaves mtx locked forever, so the remaining threads deadlock
}

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 5; ++i)
        threads.push_back(std::thread(foo));
    for (std::thread& t : threads)
        t.join();
}

There's a good use for it in this answer where ownership of the locked state is explicitly transferred from a function-local unique_lock to an external entity (a by-reference Lockable parameter).
This concrete example is typical of the use: To transfer ownership of the locked state from one object (or even type) to another.

.release() is useful when you want to keep the mutex locked until some other object or piece of code decides to unlock it. For example, you might be calling into a function that needs the mutex locked and will unlock it itself at a certain point in its processing, but which accepts only a std::mutex& rather than a std::unique_lock<std::mutex>&&. (This is conceptually similar to the use cases for smart pointers' release functions.)
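A minimal sketch of that pattern (the function name and its contract are invented for illustration): the caller locks the mutex, then uses release() to hand responsibility for unlocking over to a callee that takes a plain std::mutex& and unlocks it itself.

```cpp
#include <mutex>
#include <iostream>

std::mutex m;

// Hypothetical legacy-style function: requires m to be locked on entry
// and unlocks it itself partway through its processing.
void process_and_unlock(std::mutex& mx) {
    std::cout << "working while locked\n";
    mx.unlock();                 // the callee decides when to unlock
    std::cout << "working after unlock\n";
}

void caller() {
    std::unique_lock<std::mutex> lock(m);
    // ... set things up under the lock ...
    lock.release();              // give up ownership WITHOUT unlocking
    process_and_unlock(m);       // unlocking is now the callee's job
}   // lock's destructor does nothing: it no longer owns the mutex
```

Had caller() used lock.unlock() instead, the mutex would be briefly unlocked before process_and_unlock ran, breaking the callee's precondition.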

Related

List share between two threads

I would like to share a list between two threads in C++. I want to keep it very simple, without using a FIFO or shared memory, so I just use a mutex and locks.
I tried this way and it's working:
#include <string.h>
#include <mutex>
#include <iostream>
#include <thread>
#include <list>

std::list<int> myList;
std::mutex list_mutex;

void client() {
    std::lock_guard<std::mutex> guard(list_mutex);
    myList.push_back(4);
}

void server() {
    std::lock_guard<std::mutex> guard(list_mutex);
    myList.push_back(2);
}

void print(std::list<int> const &list)
{
    for (auto const& i : list) {
        std::cout << i << "\n";
    }
}

int main(int ac, char** av)
{
    std::mutex list_mutex; // note: this declaration shadows the global list_mutex
    std::thread t1(client);
    std::thread t2(server);
    t1.join();
    t2.join();
    print(myList);
    std::cout << "test";
    return 0;
}
And it prints me this:
24test
This is fine, it works. HOWEVER, I'm not sure I'm using the same lock? My supervisor wants me to have explicit Lock/Unlock in the code. At least, am I using the same mutex?
Thank you very much for helping me.
Ted's comment is important: what you are working with are threads, not processes. Processes don't share memory (aside from explicit Shared Memory, which you wanted to avoid), whereas threads share their entire memory space with each other.
You also mentioned that your supervisor wants you to use explicit lock/unlock sections. You could do this by calling:

list_mutex.lock();
// ... critical section ...
list_mutex.unlock();

But you already do this implicitly by constructing a lock_guard: the lock_guard locks the mutex when you create it and unlocks it at the end of the current scope.
As noted by Ted, you need to remove the second declaration of list_mutex (inside main).
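For reference, a sketch of what the explicit lock/unlock version would look like with the same single, global mutex (this is exactly what lock_guard already does for you):

```cpp
#include <list>
#include <mutex>

std::list<int> myList;
std::mutex list_mutex;   // ONE mutex, shared by every thread

void client() {
    list_mutex.lock();     // explicit lock, as the supervisor asked
    myList.push_back(4);   // critical section
    list_mutex.unlock();   // explicit unlock
}

void server() {
    list_mutex.lock();
    myList.push_back(2);
    list_mutex.unlock();
}
```

Note that an exception thrown between lock() and unlock() would leave the mutex locked forever, which is why the RAII lock_guard is generally preferred over manual calls.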

how to initialize a unique lock that has already been declared in c++?

I created a class and I declared an array of unique locks and an array of mutexes as private variables.
My question is how do I connect the two of them in the constructor of the class?
header file:
#include <iostream>
#include <mutex>
#include <string>

#define PHILO_NUM 5

class philosophers
{
private:
    std::mutex _mu[PHILO_NUM];
    std::unique_lock<std::mutex> _fork[PHILO_NUM], _screen;
    std::mutex _screenMutex;
public:
    philosophers();
};
c++ file:
#include "philosophers.h"

philosophers::philosophers()
{
    for (int i = 0; i < PHILO_NUM; i++)
    {
        // Somehow connect this->_fork[i] and this->_mu[i]
    }
    // At the end, connect this->_screen and this->_screenMutex
}
It is not easy to say what you should be doing, since you do not tell us what you want to achieve. I think you have mixed up locks and mutexes a bit. There is no reason to share the locks (as you try to do here). You need to share the mutexes; one mutex can be associated with arbitrarily many std::unique_locks, but only one of them can hold the mutex at any given time.
So, I would implement your class as follows:
#include <mutex>

constexpr size_t PHILO_NUM = 5;

class philosophers
{
private:
    std::mutex _mu[PHILO_NUM];
    std::mutex _screenMutex;
public:
    philosophers() = default; // Nothing to do here

    // Note: the return type must spell out the template argument;
    // class template argument deduction does not apply to return types.
    std::unique_lock<std::mutex> grab_fork(size_t index) {
        return std::unique_lock<std::mutex>(_mu[index]);
    }
};
So, if someone grabs a fork, they can use it as long as they hold a lock to that fork. An example use would look like this:
philosophers p;

void eat() {
    auto lock = p.grab_fork(3);
    // Now I can eat
    lock.unlock(); // Not strictly necessary: the lock releases the mutex when it is destroyed at the end of the scope
}

Why does unlocking a unique_lock cause my program to crash?

I am running into some odd behavior with unique_lock. After creating it, I try to call unlock, but it crashes my program. I have created a minimal example that consistently crashes in the unlock function (I used gdb to confirm).
#include <iostream>
#include <string>
#include <mutex>
#include <thread>
#include <chrono>

std::mutex myMutex;

void lockMe()
{
    std::unique_lock lock(myMutex);
    std::cout << "Thread\n";
}

int main()
{
    std::unique_lock lock(myMutex);
    auto c = std::thread(lockMe);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    std::cout << "Main\n";
    myMutex.unlock();
    c.join();
    return 0;
}
Can anyone explain why this is happening?
By creating std::unique_lock lock(myMutex); you grant lock/unlock control of the mutex to the lock object. If you manually unlock the mutex while it is still under the lock object's control, you violate that contract, and the lock's destructor will attempt a double unlock, which is undefined behavior.
It is the same as with all RAII wrappers: once you grant resource control to an RAII object, you should not interfere with it by manually disposing of the resource.
Note that std::unique_lock offers a method to unlock the locked mutex prior to scope end that won't cause problems:
lock.unlock();
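A corrected version of the program, sketched with the fix applied (the sleep is removed for brevity and main's body is moved into a helper so it can be called directly): let the unique_lock do the unlocking, so that it knows it no longer owns the mutex.

```cpp
#include <mutex>
#include <thread>
#include <iostream>

std::mutex myMutex;

void lockMe() {
    std::unique_lock<std::mutex> lock(myMutex);
    std::cout << "Thread\n";
}

void runMain() {   // body of the original main(), with the fix applied
    std::unique_lock<std::mutex> lock(myMutex);
    auto c = std::thread(lockMe);
    std::cout << "Main\n";
    lock.unlock(); // the unique_lock releases the mutex AND records that it no longer owns it
    c.join();
}   // lock's destructor now does nothing, so there is no double unlock
```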

Creating a lock that preserves the order of locking attempts in C++11

Is there a way to ensure that blocked threads get woken up in the same order as they got blocked? I read somewhere that this would be called a "strong lock" but I found no resources on that.
On Mac OS X one can design a FIFO queue that stores all the thread ids of the blocked threads and then use the nifty function pthread_cond_signal_thread_np() to wake up one specific thread - which is obviously non-standard and non-portable.
One way I can think of is to use a similar queue and at the unlock() point send a broadcast() to all threads and have them check which one is the next in line.
But this would induce lots of overhead.
A way around the problem would be to issue packaged_task's to the queue and have it process them in order. But that seems more like a workaround to me than a solution.
Edit:
As pointed out by the comments, this question may sound irrelevant, since there is in principle no guaranteed ordering of locking attempts.
As a clarification:
I have something I call a ConditionLockQueue which is very similar to the NSConditionLock class in the Cocoa library, but it maintains a FIFO queue of blocked threads instead of a more-or-less random pool.
Essentially any thread can "line up" (with or without the requirement of a specific 'condition' - a simple integer value - to be met). The thread is then placed on the queue and blocks until it is the frontmost element in the queue whose condition is met.
This provides a very flexible way of synchronization and I have found it very helpful in my program.
Now what I really would need is a way to wake up a specific thread with a specific id.
But these problems are almost alike.
It's pretty easy to build a lock object that uses numbered tickets to ensure that it's completely fair (the lock is granted in the order threads first tried to acquire it):
#include <mutex>
#include <condition_variable>

class ordered_lock {
    std::condition_variable cvar;
    std::mutex cvar_lock;
    unsigned int next_ticket, counter;
public:
    ordered_lock() : next_ticket(0), counter(0) {}
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        unsigned int ticket = next_ticket++;
        while (ticket != counter)
            cvar.wait(acquire);
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        counter++;
        cvar.notify_all();
    }
};
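Since ordered_lock exposes lock() and unlock(), it meets the BasicLockable requirements and can be used with RAII wrappers such as std::lock_guard. A minimal usage sketch (the class body is a copy of the ticket version above; the worker/counter names are mine):

```cpp
#include <mutex>
#include <condition_variable>
#include <thread>
#include <vector>

// Ticket-based fair lock, as in the answer above.
class ordered_lock {
    std::condition_variable cvar;
    std::mutex cvar_lock;
    unsigned int next_ticket = 0, counter = 0;
public:
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        unsigned int ticket = next_ticket++;
        while (ticket != counter)
            cvar.wait(acquire);
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        counter++;
        cvar.notify_all();
    }
};

ordered_lock fair_lock;
int shared_counter = 0;

void worker() {
    std::lock_guard<ordered_lock> guard(fair_lock); // BasicLockable suffices
    ++shared_counter;                               // critical section
}

void run_workers() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 8; ++i)
        threads.emplace_back(worker);
    for (auto& t : threads)
        t.join();
}
```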
edit
To fix Olaf's suggestion:
#include <mutex>
#include <condition_variable>
#include <queue>

class ordered_lock {
    std::queue<std::condition_variable *> cvar;
    std::mutex cvar_lock;
    bool locked;
public:
    ordered_lock() : locked(false) {}
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (locked) {
            std::condition_variable signal;
            cvar.emplace(&signal);
            signal.wait(acquire); // caveat: a spurious wakeup is not guarded against here
        } else {
            locked = true;
        }
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (cvar.empty()) {
            locked = false;
        } else {
            cvar.front()->notify_one();
            cvar.pop();
        }
    }
};
I tried Chris Dodd's solution
https://stackoverflow.com/a/14792685/4834897
but the compiler returned errors, because std::queue requires its element type to be copyable (or at least movable), while references (&) are not copyable, as you can see in the following answer by Akira Takahashi:
https://stackoverflow.com/a/10475855/4834897
So I corrected the solution using reference_wrapper, which makes references copyable.
EDIT: #Parvez Shaikh suggested a small alteration to make the code more readable: moving cvar.pop() after signal.wait() in the lock() function.
#include <mutex>
#include <condition_variable>
#include <queue>
#include <atomic>
#include <vector>
#include <functional> // std::reference_wrapper, std::ref

using namespace std;

class ordered_lock {
    queue<reference_wrapper<condition_variable>> cvar;
    mutex cvar_lock;
    bool locked;
public:
    ordered_lock() : locked(false) {}
    void lock() {
        unique_lock<mutex> acquire(cvar_lock);
        if (locked) {
            condition_variable signal;
            cvar.emplace(std::ref(signal));
            signal.wait(acquire);
            cvar.pop();
        } else {
            locked = true;
        }
    }
    void unlock() {
        unique_lock<mutex> acquire(cvar_lock);
        if (cvar.empty()) {
            locked = false;
        } else {
            cvar.front().get().notify_one();
        }
    }
};
Another option is to use pointers instead of references, but it seems less safe.
Are we asking the right questions in this thread? And if so: are they answered correctly?
Or put another way:
Have I completely misunderstood stuff here?
Edit Paragraph: It seems StatementOnOrder (see below) is false. See link1 (C++ threads etc. under Linux are often based on pthreads), and link2 (mentions the current scheduling policy as the determining factor) -- Thanks to Cubbi from cppreference (ref). See also link, link, link, link. If the statement is false, then the method of pulling an atomic (!) ticket, as shown in the code below, is probably to be preferred!
Here goes...
StatementOnOrder: "Multiple threads that run into a locked mutex, and thus "go to sleep" in a particular order, will afterwards acquire ownership of the mutex and continue on in the same order."
Question: Is StatementOnOrder true or false?
void myfunction() {
    std::lock_guard<std::mutex> lock(mut);
    // do something
    // ...
    // mutex automatically unlocked when leaving the function
}
I'm asking this because all the code examples on this page to date seem to be either:
a) a waste (if StatementOnOrder is true)
or
b) seriously wrong (if StatementOnOrder is false).
So why do I say they might be "seriously wrong" if StatementOnOrder is false?
The reason is that all the code examples think they're being super-smart by utilizing std::condition_variable, but they take a lock before doing so, which will (if StatementOnOrder is false) mess up the order!
Just search this page for std::unique_lock<std::mutex> to see the irony.
So if StatementOnOrder is really false, you cannot run into a lock and then handle tickets and condition_variables after that. Instead, you'll have to do something like this: pull an atomic ticket before running into any lock!
Why pull a ticket before running into a lock? Because here we're assuming StatementOnOrder to be false, so any ordering has to be done before the "evil" lock.
#include <mutex>
#include <thread>
#include <limits>
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <map>

std::mutex mut;
std::atomic<unsigned> num_atomic{std::numeric_limits<decltype(num_atomic.load())>::max()};
unsigned num_next{0};
std::map<unsigned, std::condition_variable> mapp;

void function() {
    unsigned next = ++num_atomic; // pull an atomic ticket (wraps from max() to 0 on the first call)
    decltype(mapp)::iterator it;
    std::unique_lock<std::mutex> lock(mut);
    if (next != num_next) {
        it = mapp.emplace(std::piecewise_construct,
                          std::forward_as_tuple(next),
                          std::forward_as_tuple()).first;
        while (next != num_next)  // re-check the ticket, guarding against spurious wakeups
            it->second.wait(lock);
        mapp.erase(it);
    }

    // THE FUNCTION'S INTENDED WORK IS NOW DONE
    // ...
    // ...
    // THE FUNCTION'S INTENDED WORK IS NOW FINISHED

    ++num_next;
    it = mapp.find(num_next); // this is not necessarily mapp.begin(), since wrap-around occurs on the unsigned
    if (it != mapp.end()) {
        lock.unlock();
        it->second.notify_one();
    }
}
The above function guarantees that execution is ordered according to the atomic ticket that is pulled. (Edit: using Boost's intrusive map, and keeping the condition_variable on the stack as a local variable, would be a nice optimization here to reduce free-store usage!)
But the main question is:
Is StatementOnOrder true or false???
(If it is true, then my code example above is also a waste, and we can just use a mutex and be done with it.)
I wish somebody like Anthony Williams would check out this page... ;)

Behavior of condition_variable_any when used with a recursive_mutex?

When using condition_variable_any with a recursive_mutex, will the recursive_mutex be generally acquirable from other threads while condition_variable_any::wait is waiting? I'm interested in both Boost and C++11 implementations.
This is the use case I'm mainly concerned about:
void bar();

boost::recursive_mutex mutex;
boost::condition_variable_any condvar;

void foo()
{
    boost::lock_guard<boost::recursive_mutex> lock(mutex);
    // Ownership level is now one
    bar();
}

void bar()
{
    boost::unique_lock<boost::recursive_mutex> lock(mutex);
    // Ownership level is now two
    condvar.wait(lock);
    // Does this fully release the recursive mutex,
    // so that other threads may acquire it while we're waiting?
    // Will the recursive_mutex ownership level
    // be restored to two after waiting?
}
By a strict interpretation of the Boost documentation, I concluded that condition_variable_any::wait will not generally result in the recursive_mutex being acquirable by other threads while waiting for notification.
Class condition_variable_any

template<typename lock_type> void wait(lock_type& lock)

Effects:
Atomically call lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.
So condvar.wait(lock) will call lock.unlock, which in turn calls mutex.unlock, which decreases the ownership level by one (and not necessarily down to zero).
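That single-level decrement can be checked with std::recursive_mutex alone, without a condition variable: unlock() decreases the ownership level by one, and other threads can only acquire the mutex once the level reaches zero. A minimal sketch (the helper and flag names are mine):

```cpp
#include <mutex>
#include <thread>

std::recursive_mutex rm;
bool mid_acquirable = false;   // can another thread lock after ONE unlock?
bool end_acquirable = false;   // can another thread lock after BOTH unlocks?

// Ask a separate thread whether it can acquire rm right now.
bool acquirable_from_other_thread() {
    bool ok = false;
    std::thread t([&] {
        if (rm.try_lock()) { ok = true; rm.unlock(); }
    });
    t.join();
    return ok;
}

void demo() {
    rm.lock();
    rm.lock();                                       // ownership level: two
    rm.unlock();                                     // level drops to one
    mid_acquirable = acquirable_from_other_thread(); // false: still owned
    rm.unlock();                                     // level drops to zero
    end_acquirable = acquirable_from_other_thread(); // true: mutex is free
}
```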
I've written a test program that confirms my above conclusion (for both Boost and C++11):
#include <iostream>

#define USE_BOOST 1

#if USE_BOOST
#include <boost/chrono.hpp>
#include <boost/thread.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/recursive_mutex.hpp>
namespace lib = boost;
#else
#include <chrono>
#include <thread>
#include <condition_variable>
#include <mutex>
namespace lib = std;
#endif

void bar();

lib::recursive_mutex mutex;
lib::condition_variable_any condvar;
int value = 0;

void foo()
{
    std::cout << "foo()\n";
    lib::lock_guard<lib::recursive_mutex> lock(mutex);
    // Ownership level is now one
    bar();
}

void bar()
{
    std::cout << "bar()\n";
    lib::unique_lock<lib::recursive_mutex> lock(mutex);
    // Ownership level is now two
    condvar.wait(lock); // Does this fully release the recursive mutex?
    std::cout << "value = " << value << "\n";
}

void notifier()
{
    std::cout << "notifier()\n";
    lib::this_thread::sleep_for(lib::chrono::seconds(3));
    std::cout << "after sleep\n";
    // --- Program deadlocks here ---
    lib::lock_guard<lib::recursive_mutex> lock(mutex);
    value = 42;
    std::cout << "before notify_one\n";
    condvar.notify_one();
}

int main()
{
    lib::thread t1(&foo); // This results in deadlock
    // lib::thread t1(&bar); // This doesn't result in deadlock
    lib::thread t2(&notifier);
    t1.join();
    t2.join();
}
I hope this helps anyone else facing the same dilemma when mixing condition_variable_any and recursive_mutex.
You can fix this design by adding a parameter allowed_unlock_count to every function which operates on the mutex object; there are two types of guarantees that can be made about allowed_unlock_count:
(permit-unlock-depth) allowed_unlock_count represents the depth of permitted unlocking of mutex: the caller allows bar to unlock the mutex allowed_unlock_count times. After such unlocking, no guarantee is made about the state of mutex.
(promise-unlock) allowed_unlock_count represents the depth of locking of mutex: the caller guarantees that unlocking mutex exactly allowed_unlock_count times will allow other threads to grab the mutex object.
These guarantees are pre- and post-conditions of functions.
Here bar depends on (promise-unlock):
// pre: mutex locking depth is allowed_unlock_count
void bar(int allowed_unlock_count)
{
    // mutex locking depth is allowed_unlock_count
    boost::unique_lock<boost::recursive_mutex> lock(mutex);
    // mutex locking depth is allowed_unlock_count+1

    // you might want to turn these loops
    // into a special lock object!
    for (int i = 0; i < allowed_unlock_count; ++i)
        mutex.unlock();
    // mutex locking depth is 1

    condvar.wait(lock); // other threads can grab mutex

    // mutex locking depth is 1
    for (int i = 0; i < allowed_unlock_count; ++i)
        mutex.lock();
    // mutex locking depth is allowed_unlock_count+1
}
// post: mutex locking depth is allowed_unlock_count
The called function must be explicitly allowed by the caller to decrease the locking depth.
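The "special lock object" suggested in the comment above could be sketched as an RAII helper (the class and helper names are mine, not from the answer): on construction it unlocks a recursive_mutex n times, and on destruction it re-locks it n times, restoring the original depth even if the code in between throws.

```cpp
#include <mutex>
#include <thread>

// Hypothetical RAII helper: drops the locking depth of a recursive_mutex
// by n on construction and restores it on destruction.
class depth_unlocker {
    std::recursive_mutex& m_;
    int n_;
public:
    depth_unlocker(std::recursive_mutex& m, int n) : m_(m), n_(n) {
        for (int i = 0; i < n_; ++i) m_.unlock();   // drop depth by n
    }
    ~depth_unlocker() {
        for (int i = 0; i < n_; ++i) m_.lock();     // restore depth
    }
    depth_unlocker(const depth_unlocker&) = delete;
    depth_unlocker& operator=(const depth_unlocker&) = delete;
};

// Helper: ask a separate thread whether it can acquire m right now.
bool acquirable_elsewhere(std::recursive_mutex& m) {
    bool ok = false;
    std::thread t([&] { if (m.try_lock()) { ok = true; m.unlock(); } });
    t.join();
    return ok;
}
```

In bar above, the two loops would then collapse into a single declaration of a depth_unlocker around the condvar.wait(lock) call.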