I have a problem with mutexes...
This is the general structure of my code:
#include <mutex>
std::mutex m;
while(1){
m.lock();
if(global_variable1==1){
//CODE GOES HERE
if (err==error::eof){
cout<<"error!"<<endl;
//should I put a m.unlock() here??
continue;
}
int something=1;
global_variable2=something;
}
m.unlock();
usleep(100000);
}
Basically, I want to change a global variable safely, so I think I need to use mutexes. I should only unlock the mutex after that "if(global_variable1==1)" block, but if there is an error, the mutex won't be unlocked. Can I unlock it before the "continue"? Or is this going to mess up anything else? Can having two unlocks for the same mutex.lock() have undesired behaviour?
This is why C++ has separate lock and mutex classes: a lock is a handy RAII class that will make sure that your mutex gets unlocked even when exceptions are thrown or some other idiot programmer adds a new return/break/continue into the program. Here's how this program works with std::unique_lock:
#include <mutex>
std::mutex m;
while(1){
std::unique_lock<std::mutex> lock(m);
if(global_variable1==1){
//CODE GOES HERE
if (err==error::eof){
cout<<"error!"<<endl;
continue;
}
int something=1;
global_variable2=something;
}
lock.unlock();
usleep(100000);
}
Do not lock/unlock mutexes manually! Instead use a guard, e.g., std::lock_guard<std::mutex>: the guard will acquire a lock upon construction and release it upon destruction. To limit the time the lock is held, just use a block:
while (true) {
{
std::lock_guard<std::mutex> cerberos(m);
// ...
}
sleep(n);
}
I am looking at this piece of code:
#include <chrono>
#include <iostream>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <string>
#include <thread>
bool flag;
std::mutex m;
void wait_for_flag() {
// std::cout << &m << std::endl;
// return;
std::unique_lock<std::mutex> lk(m);
while (!flag) {
lk.unlock();
std::cout << "unlocked....." << std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(100));
std::cout << "sleeping....." << std::endl;
lk.lock();
std::cout << "locked by " << std::this_thread::get_id() << "....."
<< std::endl;
}
}
int main(int argc, char const *argv[]) {
std::thread t(wait_for_flag);
std::thread t2(wait_for_flag);
std::thread t3(wait_for_flag);
std::thread t4(wait_for_flag);
std::thread t5(wait_for_flag);
t.join();
t2.join();
t3.join();
t4.join();
t5.join();
return 0;
}
I am new to this, and I thought a mutex can only be acquired by one thread at a time. I have two questions:
Why is there no deadlock among those threads, e.g. if thread A runs lk.unlock(), then thread B runs lk.lock(), and then thread A runs lk.lock()?
What does it mean to define a new unique_lock in every thread associated with the same mutex (which is called m here)?
Thanks
Because right after acquiring a lock on the mutex, each thread calls lk.unlock();, and now another thread can acquire a lock on the mutex. A thread only has to wait if it tries to lock a mutex that is already locked by a different thread. Since every thread in your code eventually calls lk.unlock();, there is always a chance for another thread to get a lock on the mutex, so there is no deadlock.
A deadlock would occur, for example, if you have two mutexes and two threads try to lock them in a different order:
// thread A
std::unique_lock<std::mutex> lk1(mutex1);
std::unique_lock<std::mutex> lk2(mutex2); // X
// thread B
std::unique_lock<std::mutex> lk2(mutex2);
std::unique_lock<std::mutex> lk1(mutex1); // X
Here it can happen that thread A locks mutex1 and thread B locks mutex2, and then both wait at X for the other thread to release the other mutex, which will never happen. It's a deadlock.
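Not part of the original answer, but as an illustration of how this particular deadlock is usually avoided: either have every thread lock the mutexes in the same order, or let the library lock them as a group, e.g. with std::scoped_lock (C++17) or std::lock, which apply a deadlock-avoidance algorithm. A minimal sketch:
#include <mutex>
std::mutex mutex1, mutex2;
// thread A
void threadA() {
    std::scoped_lock lk(mutex1, mutex2); // locks both without risk of deadlock
    // ... use the data protected by both mutexes ...
}
// thread B
void threadB() {
    std::scoped_lock lk(mutex2, mutex1); // the order given here no longer matters
    // ...
}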
As for your second question:
A lock is merely a slim RAII type. Its only purpose is to call lock on the mutex when created and unlock when destroyed. You can write the same code without the lock by manually locking/unlocking the mutex, but if an exception is thrown while the mutex is locked, it will never be unlocked.
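For illustration only (not from the original answer), a minimal sketch of the difference when an exception is thrown; may_throw() is just a hypothetical failing operation:
#include <mutex>
#include <stdexcept>
std::mutex m;
void may_throw() { throw std::runtime_error("boom"); } // hypothetical failing operation
void manual_version() {
    m.lock();
    may_throw();   // throws: the unlock below is never reached, m stays locked
    m.unlock();
}
void raii_version() {
    std::unique_lock<std::mutex> lk(m);
    may_throw();   // throws: lk's destructor still runs and unlocks m
}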
@SolomonSlow my question is: if we use a unique_lock to wrap the mutex in different threads, why is there no deadlock...?
"Deadlock" means that there is some set of threads in which none of the threads can proceed until one of the other members of the set does something. In the simplest possible deadlock, there are just two threads, and there are two mutexes:
Thread A has placed a unique_lock on mutex 1, and it is blocked, waiting to place a lock on mutex 2.
Thread B has placed a lock on mutex 2, and it is blocked, waiting to place a lock on mutex 1.
Thread A can't do anything until thread B does something first, and thread B can't do anything until thread A does something first. Neither thread will ever be able to do anything again. Deadlock.
You can't have a deadlock without at least two different things (e.g., two different mutexes) that the threads wait for. If there's only one mutex, then whichever thread has it locked, that thread will be able to proceed. It's only a deadlock when no thread is able to proceed.
In your example, each of the five threads settles in to a loop:
unlock the mutex,
print, sleep, print,
lock the mutex,
print,
go back to the top of the loop.
Whenever one of your threads locks the mutex, there's nothing to stop it from printing and then going back to the top and unlocking the mutex again so that some other thread can run. There's no deadlock.
This is not an answer. It's just an illustration. I turned your one example into three different examples that all achieve the same result. I hope it may help you to better understand what unique_lock does.
The first way doesn't use unique_lock at all. It only uses the mutex. This is the old-school way—the way we used to do things before RAII was discovered.
std::mutex m;
{
...
while (...) {
do_work_outside_critical_section();
m.lock(); // explicitly put a "lock" on the mutex.
do_work_inside_critical_section();
m.unlock(); // explicitly remove the "lock."
}
}
The old-school way is risky because if do_work_inside_critical_section() throws an exception, it will leave the mutex in a locked state, and any thread that tries to lock it again probably will hang forever.
The second way uses unique_lock, which is an embodiment of RAII.
The RAII pattern ensures that there's no way out of this code block that leaves a lock on mutex m. The unique_lock destructor always will be called, no matter what, and the destructor removes the lock.
std::mutex m;
{
...
while (...) {
do_work_outside_critical_section();
std::unique_lock<std::mutex> lk(m); // constructor puts a "lock" on the mutex.
do_work_inside_critical_section();
} // destructor implicitly removes the "lock."
}
Notice that in this version, a unique_lock is constructed and destructed every time around the loop. That might sound costly, but it really isn't. unique_lock is meant to be used in this way.
The last way is what you did in your example. It only creates and destroys the unique_lock one time, but then it repeatedly locks and unlocks it within the loop. This works, but it's more lines of code than the version above, which makes it a little bit harder to read and understand.
std::mutex m;
{
...
std::unique_lock<std::mutex> lk(m); // constructor puts a "lock" on the mutex.
while (...) {
lk.unlock(); // explicitly remove the "lock" from the mutex.
do_work_outside_critical_section();
lk.lock(); // explicitly put a "lock" back on the mutex.
do_work_inside_critical_section();
}
} // destructor implicitly removes the "lock."
I have two threads that work the producer and consumer sides of a std::queue. The queue isn't often full, so I'd like to avoid the consumer grabbing the mutex that guards mutation of the queue.
Is it okay to call empty() outside the mutex then only grab the mutex if there is something in the queue?
For example:
struct MyData{
int a;
int b;
};
class SpeedyAccess{
public:
void AddDataFromThread1(MyData data){
const std::lock_guard<std::mutex> queueMutexLock(queueAccess);
workQueue.push(data);
}
void CheckFromThread2(){
if(!workQueue.empty()) // Un-protected access...is this dangerous?
{
queueAccess.lock();
MyData data = workQueue.front();
workQueue.pop();
queueAccess.unlock();
ExpensiveComputation(data);
}
}
private:
void ExpensiveComputation(MyData& data);
std::queue<MyData> workQueue;
std::mutex queueAccess;
};
Thread 2 does the check and isn't particularly time-critical, but will get called a lot (500/sec?). Thread 1 is very time-critical; a lot of stuff needs to run there, but it isn't called as frequently (max 20/sec).
If I add a mutex guard around empty() and the queue is empty when thread 2 comes, it won't hold the mutex for long, so it might not be a big hit. However, since it gets called so frequently, it might occasionally happen at the same time something is being pushed onto the back... will this cause a substantial amount of waiting in thread 1?
As written in the comments above, you should call empty() only under a lock.
But I believe there is a better way to do it.
You can use a std::condition_variable together with a std::mutex, to achieve synchronization of access to the queue, without locking the mutex more than you must.
However - when using std::condition_variable, you must be aware that it suffers from spurious wakeups. You can read about it here: Spurious wakeup - Wikipedia.
You can see some code examples here:
Condition variable examples.
The correct way to use a std::condition_variable is demonstrated below (with some comments).
This is just a minimal example to show the principle.
#include <chrono>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <queue>
#include <iostream>
using MyData = int;
std::mutex mtx;
std::condition_variable cond_var;
std::queue<MyData> q;
void producer()
{
MyData produced_val = 0;
while (true)
{
std::this_thread::sleep_for(std::chrono::milliseconds(1000)); // simulate some pause between productions
++produced_val;
std::cout << "produced: " << produced_val << std::endl;
{
// Access the Q under the lock:
std::unique_lock<std::mutex> lck(mtx);
q.push(produced_val);
cond_var.notify_all(); // It's not a must to notify under the lock, but it might be more efficient (see @DavidSchwartz's comment below).
}
}
}
void consumer()
{
while (true)
{
MyData consumed_val;
{
// Access the Q under the lock:
std::unique_lock<std::mutex> lck(mtx);
// NOTE: wait() releases the mutex while waiting and re-acquires it on each wakeup
// (due to `notify` or a spurious wakeup).
// It then checks whether the Q is empty.
// If empty, it releases the lock and continues to wait.
// If not empty, wait() returns with the lock held, and it stays held until the end of the scope.
// See the documentation for std::condition_variable.
cond_var.wait(lck, []() { return !q.empty(); }); // will loop internally to handle spurious wakeups
consumed_val = q.front();
q.pop();
}
std::cout << "consumed: " << consumed_val << std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(200)); // simulate some calculation
}
}
int main()
{
std::thread p(producer);
std::thread c(consumer);
while(true) {}
p.join(); c.join(); // will never happen in our case but to remind us what is needed.
return 0;
}
Some notes:
In your real code, none of the threads should run forever. You should have some mechanism to notify them to gracefully exit.
The global variables (mtx, q, etc.) would be better as members of some context class, or passed to producer() and consumer() as parameters.
This example assumes, for simplicity, that the producer's production rate is always low relative to the consumer's rate. In your real code you can make it more general by having the consumer extract all elements in the Q each time the condition_variable is signaled (see the sketch after these notes).
You can "play" with the sleep_for times for the producer and consumer to test various timing cases.
I am running into some odd behavior regarding unique_lock. After creating it, I try to call unlock, but it crashes my program. I have created a minimal example that consistently crashes on the unlock function (used gdb to confirm).
#include <iostream>
#include <string>
#include <mutex>
#include <thread>
#include <chrono>
std::mutex myMutex;
void lockMe()
{
std::unique_lock lock(myMutex);
std::cout << "Thread\n";
}
int main()
{
std::unique_lock lock(myMutex);
auto c = std::thread(lockMe);
std::this_thread::sleep_for(std::chrono::seconds(1));
std::cout << "Main\n";
myMutex.unlock();
c.join();
return 0;
}
Can anyone explain why this is happening?
By creating std::unique_lock lock(myMutex); you are granting lock/unlock control of the mutex to the lock object. If you manually unlock the mutex while it is still under the control of the lock object, you violate that contract, and the lock destructor will attempt a second unlock on a mutex the thread no longer owns, which is undefined behavior.
It is similar to all RAII wrappers: once you grant resource control to an RAII object, you should not interfere with it by manually disposing of the resource.
Note that std::unique_lock offers a method to unlock the locked mutex prior to scope end that won't cause problems:
lock.unlock();
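Applied to the example above, main() would then look like this (only the unlock call changes; the lock object records that it no longer owns the mutex, so its destructor will not unlock again):
int main()
{
    std::unique_lock lock(myMutex);
    auto c = std::thread(lockMe);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    std::cout << "Main\n";
    lock.unlock(); // releases myMutex through the lock object instead of directly
    c.join();
    return 0;
}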
I started using std::mutexes to stop a thread and wait for another thread to resume it. It works like this:
Thread 1
// Ensures the mutex will be locked
while(myWaitMutex.try_lock());
// Locks it again to pause this thread
myWaitMutex.lock();
Thread 2
// Executed when thread 1 should resume processing:
myWaitMutex.unlock();
However I am not sure if this is correct and will work without problems on all platforms. If this is not correct, what is the correct way to implement this in C++11?
The problems with the code
// Ensures the mutex will be locked
while(myWaitMutex.try_lock());
.try_lock() tries to acquire the lock and returns true if successful, i.e., the code says "if we acquire the lock, then retry to lock it again and again until we fail". We can never "fail" cleanly, because we currently own the very lock that we are waiting on, so this would be an infinite loop. Worse, attempting to lock a std::mutex that the calling thread has already acquired a lock on is UB, so this is guaranteed to be UB. If not successful, .try_lock() returns false and the while loop is exited. In other words, this does not ensure that the mutex will be locked.
The correct way to ensure the mutex will be locked is simply:
myWaitMutex.lock();
This will cause the current thread to block (indefinitely) until it can acquire the lock.
Next, the other thread tries to unlock a mutex it does not have a lock on.
// Executed when thread 1 should resume processing:
myWaitMutex.unlock();
This won't work as it's UB to .unlock() on a std::mutex that you don't already have a lock on.
Using locks
When using mutex locks, it's easier to use a RAII ownership-wrapper object such as std::lock_guard. The usage pattern of std::mutex is always: "Lock -> do something in critical section -> unlock". A std::lock_guard will lock the mutex in its constructor, and unlock it in its destructor. No need to worry about when to lock and unlock and such low-level stuff.
std::mutex m;
{
std::lock_guard<std::mutex> lk{m};
/* We have the lock until we exit scope. */
} // Here 'lk' is destroyed and will release lock.
A simple lock might not be the best tool for the job
If what you want is to be able to signal a thread to wake up, then there's the wait and notify structure using std::condition_variable. The std::condition_variable allows any caller to send a signal to waiting threads without holding any locks.
#include <atomic>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
using namespace std::literals;
int main() {
std::mutex m;
std::condition_variable cond;
std::thread t{[&] {
std::cout << "Entering sleep..." << std::endl;
std::unique_lock<std::mutex> lk{m};
cond.wait(lk); // Will block until 'cond' is notified.
std::cout << "Thread is awake!" << std::endl;
}};
std::this_thread::sleep_for(3s);
cond.notify_all(); // Notify all waiting threads.
t.join(); // Remember to join thread before exit.
}
However, to further complicate things, there's a thing called spurious wakeups, which means that any waiting thread may wake up at any time for unknown reasons. This is a fact on most systems and has to do with the inner workings of thread scheduling. Also, we probably need to check that waiting is really needed, as we're dealing with concurrency. If, for example, the notifying thread happens to notify before we start waiting, then we might wait forever unless we have a way to check this first.
To handle this we need to add a while loop and a predicate that tells when we need to wait and when we're done waiting.
int main() {
std::mutex m;
std::condition_variable cond;
bool done = false; // Flag for indicating when done waiting.
std::thread t{[&] {
std::cout << "Entering sleep..." << std::endl;
std::unique_lock<std::mutex> lk{m};
while (!done) { // Wait inside loop to handle spurious wakeups etc.
cond.wait(lk);
}
std::cout << "Thread is awake!" << std::endl;
}};
std::this_thread::sleep_for(3s);
{ // Acquire lock to avoid data race on 'done'.
std::lock_guard<std::mutex> lk{m};
done = true; // Set 'done' to true before notifying.
}
cond.notify_all();
t.join();
}
There are additional reasons why it's a good idea to wait inside a loop and use a predicate, such as "stolen wakeups", as mentioned in the comments by @David Schwartz.
It sounds to me like you are looking for a condition variable. In the end there should always be a way to make it work with mutexes alone, but the condition variable is the idiomatic C++ way to handle the "block and wait until something happens" scenario.
The behavior of a mutex when a thread that holds it attempts to lock it is undefined. The behavior of a mutex when a thread that doesn't hold it attempts to unlock it is undefined. So your code might do anything at all on various platforms.
Instead, use a mutex together with a condition variable and a predicate boolean. In pseudo-code (a C++ sketch follows below):
To block:
Acquire the mutex.
While the predicate is false, block on the condition variable.
If you want to re-arm here, set the predicate to false.
Release the mutex.
To release:
Acquire the mutex.
Set the predicate to true.
Signal the condition variable.
Release the mutex.
To rearm:
Acquire the mutex.
Set the predicate to false.
Release the mutex.
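A minimal C++ sketch of that pseudo-code; the PauseGate name and its members are only illustrative:
#include <condition_variable>
#include <mutex>
class PauseGate {
public:
    void block() {                                   // called by the thread that wants to pause
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return released_; });  // waits while the predicate is false
        // to re-arm here instead, set: released_ = false;
    }                                                // mutex released when lk goes out of scope
    void release() {                                 // called by the thread that resumes the other
        std::lock_guard<std::mutex> lk(m_);
        released_ = true;
        cv_.notify_all();                            // signal the condition variable
    }
    void rearm() {
        std::lock_guard<std::mutex> lk(m_);
        released_ = false;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool released_ = false;
};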
Please check this code....
std::mutex m_mutex;
std::condition_variable m_cond_var;
bool ready = false;
void threadOne(){
std::unique_lock<std::mutex> lck(m_mutex);
while (!ready){
m_cond_var.wait(lck);
}
m_cond_var.notify_all();
}
void threadTwo(){
std::unique_lock<std::mutex> lck(m_mutex);
ready = true;
m_cond_var.notify_all();
}
I hope this gets you to the solution.
I want to create scoped lock, but I want something like:
{
if(lockRequired)
boost::mutex::scoped_lock(Mutex); //After this line we go out of scope
/* Here I also want to have Mutex */
}
If the condition is true I want to lock the mutex, but in the scope one level up. I know that I can use a plain .lock() and call .unlock() at the end of the scope, but I have many return paths. I could also create some SynchronizationGuard in the scope and unlock the mutex when its destructor is called, but that's not a clean solution. Any advice?
Best regards.
Use the ternary operator.
boost::mutex::scoped_lock lock = lockRequired ?
boost::mutex::scoped_lock(Mutex) : boost::mutex::scoped_lock();
Or just use swap under condition.
boost::mutex::scoped_lock lock;
if (lockRequired)
{
boost::mutex::scoped_lock lock_(Mutex);
lock.swap(lock_);
}
Or just construct the lock with defer_lock and then call its lock() function.
boost::mutex::scoped_lock lock(Mutex, boost::defer_lock);
if (lockRequired)
{
lock.lock();
}
You can construct the lock deferred:
#include <boost/thread.hpp>
int main() {
boost::mutex mx;
boost::mutex::scoped_lock sl(mx, boost::defer_lock);
bool condition = true; // stands in for the asker's lockRequired flag
if (condition)
sl.lock();
// sl will unlock on end of scope
}
This also works for std::unique_lock and the corresponding Boost types; std::lock_guard, however, cannot be deferred, since it has no defer_lock constructor.
Analogously there's the adopt_lock tag type.
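For the std:: types, a minimal sketch of both tag types under the same "lock only if required" scenario; lockRequired stands in for the asker's flag:
#include <mutex>
std::mutex mx;
bool lockRequired = true; // stand-in for the asker's condition
void deferred_example() {
    std::unique_lock<std::mutex> sl(mx, std::defer_lock); // not locked yet
    if (lockRequired)
        sl.lock();                                        // lock only when needed
}   // sl unlocks here only if it actually owns the mutex
void adopt_example() {
    mx.lock();                                            // mutex locked by hand...
    std::lock_guard<std::mutex> g(mx, std::adopt_lock);   // ...guard takes over ownership
}   // g unlocks here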