How to identify which thread is holding a mutex - c++

Is it possible to identify which thread is holding a mutex? I'm facing an issue in which one of my threads has become indefinitely blocked trying to acquire a mutex. I'm using the std::lock_guard<std::mutex> lg(mut) syntax to acquire the lock, which follows the RAII pattern.

You can get the current thread's id via std::this_thread::get_id():
#include <thread>
#include <mutex>
#include <iostream>

void foo() {
    {
        static std::mutex mut;
        std::lock_guard<std::mutex> lg(mut);
        std::cout << "thread holding the lock: " << std::this_thread::get_id() << "\n";
        std::cout << "Hello \n";
    }
    std::cout << "thread " << std::this_thread::get_id() << " no longer holds the lock \n";
}

int main() {
    std::thread t1(&foo);
    std::thread t2(&foo);
    t1.join();
    t2.join();
}
Output:
thread holding the lock: 140607394023168
Hello
thread 140607394023168 no longer holds the lock
thread holding the lock: 140607385630464
Hello
thread 140607385630464 no longer holds the lock
My next idea would be to wrap the std::mutex in a my_mutex class that remembers which thread currently holds the lock. However, that would require another mutex to synchronize access to that information, so logging seems to be the better way.

Related

c++ std::thread: Is this code guaranteed to deadlock?

The following code is from modernescpp. I understand that the lock_guard in the main thread holding the mutex causes the deadlock. But since the created thread should start to run once it is constructed, is there a chance that by the time execution reaches line 15, the lock_guard on line 11 has already grabbed coutMutex, so the code runs without any problem? If that is possible, under what circumstances will the created thread run first?
#include <iostream>
#include <mutex>
#include <thread>

std::mutex coutMutex;

int main(){
    std::thread t([]{
        std::cout << "Still waiting ..." << std::endl;
        std::lock_guard<std::mutex> lockGuard(coutMutex); // Line 11
        std::cout << std::this_thread::get_id() << std::endl;
    });
    // Line 15
    {
        std::lock_guard<std::mutex> lockGuard(coutMutex);
        std::cout << std::this_thread::get_id() << std::endl;
        t.join();
    }
}
Just so the answer will be posted as an answer, not a comment:
No, this code is not guaranteed to deadlock.
Yes, this code is quite likely to deadlock.
In particular, it's possible for the main thread to create the subordinate thread, and then both get suspended. From that point, it's up to the OS scheduler to decide which to run next. Since the main thread was run more recently, there's a decent chance it will select the subordinate thread to run next (assuming it attempts to follow something vaguely like round-robin scheduling in the absence of a difference in priority, or something similar giving it a preference for which thread to schedule).
There are various ways to fix the possibility of deadlock. One obvious possibility would be to move the join to just outside the scope in which the main thread holds the mutex:
#include <iostream>
#include <mutex>
#include <thread>

std::mutex coutMutex;

int main(){
    std::thread t([]{
        std::cout << "Still waiting ..." << std::endl;
        std::lock_guard<std::mutex> lockGuard(coutMutex); // Line 11
        std::cout << std::this_thread::get_id() << std::endl;
    });
    // Line 15
    {
        std::lock_guard<std::mutex> lockGuard(coutMutex);
        std::cout << std::this_thread::get_id() << std::endl;
    }
    t.join();
}
I'd also avoid holding a mutex for the duration of a std::cout operation. cout is typically slow enough that doing so will make contention over the lock quite likely. It's typically going to be better to (for only one example) format the data into a buffer, put the buffer into a queue, and have a single thread that reads items from the queue and shoves them out to cout. This way you only have to hold the lock long enough to add/remove a buffer to/from the queue.

Destroying condition_variable while waiting

First my code to make my explanation more clear:
struct Foo {
    std::condition_variable cv;
};

static Foo* foo; // dynamically created object

// Thread1
foo->cv.wait(...);

// Thread2
foo->cv.notify_one();
delete foo; // thread1 might have not left the wait function yet
I am trying to delete a std::condition_variable while it is being waited on. From my understanding, I have to notify it first to make the waiting thread leave its wait routine, and only then can I delete it. But after calling notify_* I can't delete it right away, because the other thread might still be inside wait for a few more cycles. What is a common way to achieve this?
You can delete it right away.
Quote from C++ standard:
~condition_variable();
Requires: There shall be no thread blocked on *this. [Note: That is,
all threads shall have been notified; they may subsequently block on
the lock specified in the wait. This relaxes the usual rules, which
would have required all wait calls to happen before destruction. Only
the notification to unblock the wait must happen before destruction.]
Basically wait functions are required to perform locking and waiting atomically:
The execution of notify_one and notify_all shall be atomic. The execution
of wait, wait_for, and wait_until shall be performed in three atomic parts:
the release of the mutex and entry into the waiting state;
the unblocking of the wait; and
the reacquisition of the lock.
Once notify wakes a thread, it should be considered "unblocked" and should contest the mutex.
There are similar guarantees about std::mutex: a thread is not required to have returned from unlock() before the mutex is destroyed.
Quote from C++ standard:
The implementation shall provide lock and unlock operations, as
described below. For purposes of determining the existence of a data
race, these behave as atomic operations. The lock and unlock
operations on a single mutex shall appear to occur in a single total
order.
Later:
Note: After a thread A has called unlock(), releasing a mutex, it is
possible for another thread B to lock the same mutex, observe that it
is no longer in use, unlock it, and destroy it, before thread A
appears to have returned from its unlock call.
Such guarantees are required to avoid issues like this one, where a mutex inside an object is used to protect the object's reference counter.
Note that this does not guarantee that your implementation has no bugs in this regard. In the past glibc had multiple bugs related to the destruction of synchronization objects, in particular pthread_mutex_unlock was accessing mutex before returning.
One easy fix: move delete foo into thread 1 after foo->cv.wait(...);.
A better fix would be to change the design to work with std::shared_ptr, with no manual delete invocations.
Here is my solution for C++17 (C++11 needs adaptation): notify everybody that we are deleting the current instance, and make the destructor wait.
Two things to do:
Give the wait predicate a possibility to exit before the object is really deleted.
Make the destructor resynchronize with the lock so that it waits until everything is finished (you must be sure all waiting methods check the deleting flag at the beginning).
Note: if you have a thread inside this class waiting on the condition_variable, it might be better to join() the thread after notification instead of using the resynchronization lock (see comment in the code).
#include <chrono>
#include <iostream>
#include <memory>
#include <string>
#include <thread>
#include <condition_variable>
#include <mutex>
#include <shared_mutex>
using namespace std;

chrono::system_clock::time_point startTime;

struct Foo
{
    condition_variable cond;
    mutex mutex1;
    shared_mutex mutexSafe;
    bool deleting = false;

    ~Foo()
    {
        {
            // Set the flag under mutex1: the wait predicate reads it under
            // the same mutex, so writing it unlocked would be a data race
            lock_guard<mutex> lg(mutex1);
            deleting = true;
        }
        cond.notify_all();
        // Makes the destructor wait until all the open shared_locks are released
        unique_lock l(mutexSafe);
    }

    void waitOnThing(const string& name)
    {
        // Shared lock so that several threads can be in this method at the same time
        shared_lock lSafe(mutexSafe);
        cout << chrono::duration_cast<chrono::milliseconds>(chrono::system_clock::now() - startTime).count()
             << " Thread " << name << " -> waitOnThing()" << endl;
        unique_lock l(mutex1);
        cond.wait(l, [&]()
        {
            if (deleting)
            {
                this_thread::sleep_for(chrono::milliseconds(1000)); // Slow down exit to show the destructor waiting
                return true;
            }
            return false;
        });
        cout << chrono::duration_cast<chrono::milliseconds>(chrono::system_clock::now() - startTime).count()
             << " Thread " << name << " unlocked" << endl;
    }
};

int main()
{
    startTime = chrono::system_clock::now();
    Foo* foo = new Foo();
    cout << chrono::duration_cast<chrono::milliseconds>(chrono::system_clock::now() - startTime).count()
         << " Starting" << endl;
    thread t1([&]() { foo->waitOnThing("t1"); });
    thread t2([&]() { foo->waitOnThing("t2"); });
    thread t3([&]() { foo->waitOnThing("t3"); });
    // Wait a bit to be sure the threads have started and are waiting
    this_thread::sleep_for(chrono::milliseconds(100));
    cout << chrono::duration_cast<chrono::milliseconds>(chrono::system_clock::now() - startTime).count()
         << " Deleting foo..." << endl;
    delete foo;
    cout << chrono::duration_cast<chrono::milliseconds>(chrono::system_clock::now() - startTime).count()
         << " Foo deleted" << endl;
    // Avoid the demo crashing
    t1.join();
    t2.join();
    t3.join();
}
Result:
0 Starting
4 Thread t2 -> waitOnThing()
4 Thread t1 -> waitOnThing()
4 Thread t3 -> waitOnThing()
100 Deleting foo...
1100 Thread t1 unlocked
2100 Thread t2 unlocked
3100 Thread t3 unlocked
3100 Foo deleted
C++11:
The class shared_mutex is available only since C++17. If you are using C++11 you can do the same with a regular unique_lock, by keeping a vector of mutexes (one instance per call to the waiting method) and locking all of them in the destructor.
Example (not tested):
vector<shared_ptr<mutex>> mutexesSafe; // access to this vector itself also needs synchronization

~Foo()
{
    // ...
    for(const auto& m : mutexesSafe)
    {
        unique_lock<mutex> l(*m);
    }
}

void waitOnThing(const string& name)
{
    auto m = make_shared<mutex>();
    mutexesSafe.push_back(m);
    unique_lock<mutex> lSafe(*m);
    //...
}

C++: Is a mutex with `std::lock_guard` enough to synchronize two `std::thread`s?

My question is based on below sample of C++ code
#include <chrono>
#include <thread>
#include <mutex>
#include <iostream>

class ClassUtility
{
public:
    ClassUtility() {}
    ~ClassUtility() {}
    void do_something() {
        std::cout << "do something called" << std::endl;
        using namespace std::chrono_literals;
        std::this_thread::sleep_for(1s);
    }
};

int main (int argc, const char* argv[]) {
    ClassUtility g_common_object;
    std::mutex g_mutex;
    std::thread worker_thread_1([&](){
        std::cout << "worker_thread_1 started" << std::endl;
        for (;;) {
            std::lock_guard<std::mutex> lock(g_mutex);
            std::cout << "worker_thread_1 looping" << std::endl;
            g_common_object.do_something();
        }
    });
    std::thread worker_thread_2([&](){
        std::cout << "worker_thread_2 started" << std::endl;
        for (;;) {
            std::lock_guard<std::mutex> lock(g_mutex);
            std::cout << "worker_thread_2 looping" << std::endl;
            g_common_object.do_something();
        }
    });
    worker_thread_1.join();
    worker_thread_2.join();
    return 0;
}
This is more of a question to get my understanding clear, rather than to get a sample usage of std::condition_variable unless required.
I have 2 C++ std::threads which start up in the main method. It's a console app on osx, so I'm compiling it using clang. Both threads use a common object of ClassUtility to call a method that does some heavy task. For this sample code, both threads run an infinite loop and close down only when the app closes, i.e. when I press ctrl+c on the console.
I seek to know:
Is it correct to just use a std::lock_guard on a std::mutex to synchronize or protect the calls made to the common object of ClassUtility? Somehow, I seem to be getting into trouble with this "just a mutex" approach. Neither thread starts if I lock guard the loops using the mutex. Moreover, I sometimes get segfaults. Is this because they are lambdas assigned to each thread?
Is it better to use a std::condition_variable between the 2 threads or lambdas to signal and synchronize them? If yes, how would the std::condition_variable be used here between the lambdas?
Note: As the question is only to seek information, the code provided here might not compile. It is just to illustrate a real scenario.
Your code is safe
Remember, the lock_guard just calls .lock() in its constructor and injects a call to .unlock() at the end of the block. So
{
    std::lock_guard<std::mutex> lock(g_mutex);
    std::cout << "worker_thread_1 looping" << std::endl;
    g_common_object.do_something();
}
is basically equivalent to:
{
    g_mutex.lock();
    std::cout << "worker_thread_1 looping" << std::endl;
    g_common_object.do_something();
    g_mutex.unlock();
}
except:
the unlock is called even if the block is left via exception and
it ensures you won't forget to call it.
Your code is not parallel
You are mutually excluding the entire loop body in each thread. There is nothing left that both threads could actually be doing in parallel. The main point of using threads is when each can work on a separate set of objects (and only read common objects), so they don't have to be locked.
In the example code, you really should be locking only the work on the common object; std::cout is thread-safe on its own (concurrent writes are not a data race, though output from different threads may interleave). So:
{
    std::cout << "worker_thread_1 looping" << std::endl;
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        g_common_object.do_something();
        // unlocks here, because lock_guard injects unlock at the end of the innermost scope.
    }
}
I suppose the actual code you are trying to write does have something to actually do in parallel; just a thing to keep in mind.
Condition variables are not needed
Condition variables are for when you need one thread to wait until another thread does some specific thing. Here you are just making sure the two threads are not modifying the object at the same time and for that mutex is sufficient and appropriate.
Your code never terminates; other than that, I can't fault it.
As others point out, it offers almost no opportunity for parallelism, because of the long sleep that takes place while the sleeping thread holds the mutex.
Here's a simple version that terminates by putting arbitrary finite limits on the loops.
Is it maybe that you haven't understood what join() does?
It blocks the current thread (the one executing join()) until the joined thread ends. But if that thread never ends, neither does the current one.
#include <chrono>
#include <thread>
#include <mutex>
#include <iostream>

class ClassUtility
{
public:
    ClassUtility() {}
    ~ClassUtility() {}
    void do_something() {
        std::cout << "do something called" << std::endl;
        using namespace std::chrono_literals;
        std::this_thread::sleep_for(1s);
    }
};

int main (int argc, const char* argv[]) {
    ClassUtility g_common_object;
    std::mutex g_mutex;
    std::thread worker_thread_1([&](){
        std::cout << "worker_thread_1 started" << std::endl;
        for (int i = 0; i < 10; ++i) {
            std::lock_guard<std::mutex> lock(g_mutex);
            std::cout << "worker_thread_1 looping " << i << std::endl;
            g_common_object.do_something();
        }
    });
    std::thread worker_thread_2([&](){
        std::cout << "worker_thread_2 started" << std::endl;
        for (int i = 0; i < 10; ++i) {
            std::lock_guard<std::mutex> lock(g_mutex);
            std::cout << "worker_thread_2 looping " << i << std::endl;
            g_common_object.do_something();
        }
    });
    worker_thread_1.join();
    worker_thread_2.join();
    return 0;
}

unique_lock across threads?

I am having some trouble conceptualizing how unique_lock is supposed to operate across threads. I tried to make a quick example to recreate something that I would normally use a condition_variable for.
#include <mutex>
#include <thread>
using namespace std;

mutex m;
unique_lock<mutex>* mLock;

void funcA()
{
    //thread 2
    mLock->lock(); //blocks until unlock? Access violation reading location 0x0000000000000000.
}

int _tmain(int argc, _TCHAR* argv[])
{
    //thread 1
    mLock = new unique_lock<mutex>(m);
    mLock->release(); //Allows .lock() to be taken by a different thread?
    auto a = std::thread(funcA);
    std::chrono::milliseconds dura(1000); //make sure thread is running
    std::this_thread::sleep_for(dura);
    mLock->unlock(); //Unlocks thread 2's lock?
    a.join();
    return 0;
}
unique_lock should not be accessed from multiple threads at once. It was not designed to be thread-safe in that manner. Instead, multiple unique_locks (local variables) reference the same global mutex. Only the mutex itself is designed to be accessed by multiple threads at once. And even then, my statement excludes ~mutex().
For example, one knows that mutex::lock() can be accessed by multiple threads because its specification includes the following:
Synchronization: Prior unlock() operations on the same object shall synchronize with (4.7) this operation.
where synchronize with is a term of art defined in 4.7 [intro.multithread] (and its subclauses).
That doesn't look right at all. First, release() "disassociates the mutex without unlocking it", which is highly unlikely to be what you want to do in that place. It means you no longer have a mutex in your unique_lock<mutex>, which makes it pretty useless, and is probably the reason you get the "access violation".
Edit: After some "massaging" of your code, and convincing g++ 4.6.3 to do what I wanted (hence the #define _GLIBCXX_USE_NANOSLEEP), here's a working example:
#define _GLIBCXX_USE_NANOSLEEP
#include <chrono>
#include <mutex>
#include <thread>
#include <iostream>
using namespace std;

mutex m;

void funcA()
{
    cout << "FuncA Before lock" << endl;
    unique_lock<mutex> mLock(m);
    //thread 2
    cout << "FuncA After lock" << endl;
    std::chrono::milliseconds dura(500); //make sure thread is running
    std::this_thread::sleep_for(dura);
    cout << "FuncA After sleep" << endl;
}

int main(int argc, char* argv[])
{
    cout << "Main before lock" << endl;
    unique_lock<mutex> mLock(m);
    auto a = std::thread(funcA);
    std::chrono::milliseconds dura(1000); //make sure thread is running
    std::this_thread::sleep_for(dura);
    mLock.unlock(); //Unlocks thread 2's lock?
    cout << "Main After unlock" << endl;
    a.join();
    cout << "Main after a.join" << endl;
    return 0;
}
Not sure why you need to use new to create the lock, though. Surely unique_lock<mutex> mLock(m); should do the trick (with the corresponding changes of mLock-> into mLock., of course).
A lock is just an automatic guard that operates a mutex in a safe and sane fashion.
What you really want is this code:
std::mutex m;
void f()
{
std::lock_guard<std::mutex> lock(m);
// ...
}
This effectively "synchronizes" calls to f, since every thread that enters it blocks until it manages to obtain the mutex.
A unique_lock is just a beefed-up version of the lock_guard: it can be constructed unlocked, moved around (thanks, @MikeVine), and it is itself a "lockable object", like the mutex itself, so it can be used for example in the variadic std::lock(...) to lock multiple things at once in a deadlock-free way, and it can be managed by an std::condition_variable (thanks, @syam).
But unless you have a good reason to use a unique_lock, prefer to use a lock_guard. And once you need to upgrade to a unique_lock, you'll know why.
As a side-note, the above answers skip over the difference between immediate and deferred locking of a mutex:
#include <mutex>

std::mutex mu;

auto MyFunction() -> void
{
    std::unique_lock<std::mutex> lock(mu); //Created instance and immediately locked the mutex
    //Do stuff....
}

auto MyOtherFunction() -> void
{
    std::unique_lock<std::mutex> lock(mu, std::defer_lock); //Created instance but did not lock the mutex
    lock.lock(); //Lock mutex
    //Do stuff....
    lock.unlock(); //Unlock mutex
}
MyFunction() shows the widely used immediate lock, whilst MyOtherFunction() shows the deferred lock.

Implementing a Semaphore with std::mutex

As a learning exercise, I'm just trying to make a Semaphore class using std::mutex and a few other things provided by the C++ standard. My semaphore should allow as many readLock() calls as needed; however, a writeLock() can only be acquired after all reads are unlocked.
//Semaphore.h
#include <mutex>
#include <condition_variable>

class Semaphore{
public:
    Semaphore();
    void readLock();    //increments the internal counter
    void readUnlock();  //decrements the internal counter
    void writeLock();   //obtains sole ownership. must wait for count==0 first
    void writeUnlock(); //releases sole ownership.
    int count; //public for debugging
private:
    std::mutex latch;
    std::unique_lock<std::mutex> lk;
    std::condition_variable cv;
};
//Semaphore.cpp
#include "Semaphore.h"
#include <condition_variable>
#include <iostream>
using namespace std;

Semaphore::Semaphore() : lk(latch, std::defer_lock) { count = 0; }

void Semaphore::readLock(){
    latch.lock();
    ++count;
    latch.unlock();
    cv.notify_all(); //not sure if this needs to be here?
}

void Semaphore::readUnlock(){
    latch.lock();
    --count;
    latch.unlock();
    cv.notify_all(); //not sure if this needs to be here?
}

void Semaphore::writeLock(){
    cv.wait(lk, [this](){ return count==0; }); //why can't std::mutex be used here?
}

void Semaphore::writeUnlock(){
    lk.unlock();
    cv.notify_all();
}
My test program will writeLock() the semaphore, start a bunch of threads, and then release the semaphore. Immediately afterwards, the main thread will attempt to writeLock() the semaphore again. The idea is that when the semaphore becomes unlocked, the threads will readLock() it and prevent the main thread from doing anything until they all finish. When they all finish and release the semaphore, then the main thread can acquire access again. I realize this may not necessarily happen, but it's one of the cases I'm looking for.
//Main.cpp
#include <iostream>
#include <thread>
#include "Semaphore.h"
using namespace std;

Semaphore s;

void foo(int n){
    cout << "Thread Start" << endl;
    s.readLock();
    this_thread::sleep_for(chrono::seconds(n));
    cout << "Thread End" << endl;
    s.readUnlock();
}

int main(){
    std::srand(458279);
    cout << "App Launch" << endl;
    thread a(foo,rand()%10), b(foo,rand()%10), c(foo,rand()%10), d(foo,rand()%10);
    s.writeLock();
    cout << "Main has it" << endl;
    a.detach();
    b.detach();
    c.detach();
    d.detach();
    this_thread::sleep_for(chrono::seconds(2));
    cout << "Main released it" << endl;
    s.writeUnlock();
    s.writeLock();
    cout << "Main has it " << s.count << endl;
    this_thread::sleep_for(chrono::seconds(2));
    cout << "Main released it" << endl;
    s.writeUnlock();
    cout << "App End" << endl;
    system("pause"); //windows, sorry
    return 0;
}
The program throws an exception saying "unlock of unowned mutex". I think the error is in writeLock() or writeUnlock(), but I'm not sure. Can anyone point me in the right direction?
EDIT: A std::defer_lock was missing when initializing lk in the constructor (the code above has been corrected); however, adding it didn't fix the error I was getting. As mentioned in the comments, this isn't really a semaphore, and I apologize for the confusion. To reiterate the problem, here is the output that I get (things in parentheses are just my comments and not actually in the output):
App Launch
Thread Start
Thread Start
Main has it
Thread Start
Thread Start
Thread End (what?)
Main released it
f:\dd\vctools\crt_bld\self_x86\crt\src\thr\mutex.c(131): unlock of unowned mutex
Thread End
Thread End
Thread End
This is definitely not a "semaphore".
Your Semaphore constructor acquires the lock on latch right away, then you unlock it twice: writeUnlock() calls lk.unlock(), the next call to writeLock() tries to wait on a condition variable with an unlocked mutex, which is undefined behaviour, and the next call to writeUnlock() tries to unlock an unlocked mutex, which is also undefined behaviour.
Are you sure the constructor should lock the mutex right away? I think you want to use std::defer_lock in the constructor, and then lock the mutex in writeLock().