Platform Debian 9 | ARM
I have a class that is used to store thread IDs on a std::list and then print the list as part of the shutdown procedure. There are two static methods: AddId() and PrintIds(). Since the static methods are used by several threads, the list access is protected with a std::mutex. I have come across two issues that have me baffled and they may be related. I hope one of you can help explain what I am missing.
Note: I first used scoped std::lock_guard<std::mutex>s and have temporarily settled on the std::unique_lock<std::mutex>s shown in the code, for the reasons explained below.
Issue 1 When the program starts, all threads call ThreadMgr::AddId(), which adds the thread id to the list along with a string name. That seems to work fine. However, when ThreadMgr::PrintIds() is called during the shutdown procedure, the std::mutex is still locked and the std::lock_guard blocks. While testing, I unlocked the std::mutex in the line preceding the std::lock_guard--which is undefined behavior if it wasn't locked by the calling thread--and then it worked as expected. The only place the std::mutex is used is in these two methods, and with the std::lock_guard being scoped, the std::mutex should have been unlocked with every method call. I went through several iterations of locking and unlocking the std::mutex directly, using std::unique_locks, etc. I eventually found a somewhat workable solution using the std::unique_locks that are in the code provided here. However, that too has me baffled.
The std::unique_locks work if I use std::adopt_lock, but only in ThreadMgr::PrintIds(). If I use std::adopt_lock in ThreadMgr::AddId(), then I get a segmentation fault whenever ThreadMgr::PrintIds() is called.
Issue 2 As previously stated, ThreadMgr::AddId() runs fine with either std::lock_guard or std::unique_lock when all threads call it on startup. However, there are UI sessions where each session spawns a thread. When ThreadMgr::AddId() is called from one of these session threads, both std::lock_guard and std::unique_lock block as if the std::mutex were already locked. So ThreadMgr::AddId() works perfectly fine for every other thread, but not for the session threads that are started later.
If there is any other information I can provide to help get to a solution, let me know.
class ThreadMgr {
public:
ThreadMgr();
~ThreadMgr();
static void AddId(std::string);
static void PrintIds();
private:
static std::list<ThreadId> t_list;
static std::mutex list_mtx;
};
/*Definition*/
ThreadMgr::ThreadMgr() {}
ThreadMgr::~ThreadMgr() {}
/*Static Members*/
void ThreadMgr::AddId(std::string name) {
ThreadId t_id = getThreadId(name);
{
std::unique_lock<std::mutex> lock(list_mtx);
t_list.push_front(t_id);
}
std::lock_guard<std::mutex> lk(cout_mtx);
std::cout << "---Added id: " << t_id.tid << " for the " << name << " thread." << std::endl;
return;
}
void ThreadMgr::PrintIds() {
std::ostringstream oss;
oss << "\n**************************Thread Ids**************************\n";
{
std::unique_lock<std::mutex> lock(list_mtx, std::adopt_lock);
for (std::list<ThreadId>::iterator t = t_list.begin(); t != t_list.end(); t++) {
oss << "---" << t->name << " : " << t->tid << '\n';
}
}
oss << "************************End Thread Ids************************" << '\n';
std::lock_guard<std::mutex> lk(cout_mtx);
std::cout << oss.str() << std::endl;
return;
}
std::mutex ThreadMgr::list_mtx;
std::list<ThreadId> ThreadMgr::t_list;
int Main(){
ThreadMgr::AddId("Main");
std::thread MbServerT(MbServer);
std::thread UiServerT(UiServer);
while(run){
//do stuff
}
ThreadMgr::PrintIds();
if(MbServerT.joinable())
MbServerT.join();
if(UiServerT.joinable())
UiServerT.join();
return 0;
}
Related
Is it possible to identify which thread is holding the mutex? I'm facing an issue in which one of my threads has been blocked indefinitely while acquiring a mutex. I'm using the std::lock_guard<std::mutex> lg(mut) syntax to acquire the lock, which follows the RAII pattern.
You can get the current thread id via get_id:
#include <thread>
#include <mutex>
#include <iostream>
void foo() {
{
static std::mutex mut;
std::lock_guard<std::mutex> lg(mut);
std::cout << "thread holding the lock: " << std::this_thread::get_id() << "\n";
std::cout << "Hello \n";
}
std::cout << "thread " << std::this_thread::get_id() << " no longer holds the lock \n";
}
int main() {
std::thread t1(&foo);
std::thread t2(&foo);
t1.join();
t2.join();
}
Output:
thread holding the lock: 140607394023168
Hello
thread 140607394023168 no longer holds the lock
thread holding the lock: 140607385630464
Hello
thread 140607385630464 no longer holds the lock
My next idea would be to wrap the std::mutex in a my_mutex class which remembers which thread currently holds the lock. However, that would require using another mutex to synchronize access to that information, so logging seems to be the better way.
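For what it's worth, here is a minimal sketch of what such a wrapper could look like. The my_mutex name and the use of std::atomic for the owner id (which sidesteps the second mutex) are my own choices, not anything from the question:
#include <atomic>
#include <mutex>
#include <thread>
// Hypothetical wrapper that remembers which thread currently holds the lock.
// It meets the BasicLockable requirements, so std::lock_guard<my_mutex> works.
class my_mutex {
public:
    void lock() {
        mtx_.lock();
        owner_.store(std::this_thread::get_id(), std::memory_order_relaxed);
    }
    void unlock() {
        owner_.store(std::thread::id{}, std::memory_order_relaxed);
        mtx_.unlock();
    }
    // Best-effort snapshot: the owner may have changed by the time you read it.
    std::thread::id owner() const {
        return owner_.load(std::memory_order_relaxed);
    }
private:
    std::mutex mtx_;
    std::atomic<std::thread::id> owner_{};
};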
I have tried to find a similar problem, but was unable to find it, or my knowledge is not enough to recognize the similarity.
I have a main loop that creates an object, and this object has an infinite loop to process a matrix and do stuff with it. I call this process function in a separate thread and detach it, so the object can process the matrix multiple times while the main loop might just wait for something and do nothing.
After a while the main loop receives a new matrix, which I have represented here by simply creating a new matrix and passing it into the object. The idea is that, because the infinite while loop waits a short while before processing again, the update function gets a chance to lock the mutex, since the mutex is not locked (almost) all the time.
Below I have tried to code a minimal example.
class Test
{
public:
Test();
~Test();
void process(){
while(1){
boost::mutex::scoped_lock locker(mtx);
std::cout << "A" << std::endl;
// do stuff with Matrix
std::cout << "B" << std::endl;
mtx.unlock();
//wait for few microseconds
}
}
void updateMatrix(matrix MatrixNew){
boost::mutex::scoped_lock locker(mtx);
std::cout << "1" << std::endl;
Matrix = MatrixNew;
std::cout << "2" << std::endl;
}
private:
boost::mutex mtx;
matrix Matrix;
};
int main(){
Test test;
boost::thread thread_;
thread_ = boost::thread(&Test::process,boost::ref(test));
thread_.detach();
while(once_in_a_while){
matrix MatrixNew;
test.updateMatrix(MatrixNew);
}
}
Unfortunately a race condition occurs. Both process() and updateMatrix() have multiple steps inside the locked-mutex region, and I print to the console between these steps. I found that the matrix gets messed up and that the letters/numbers appear interleaved rather than consecutively.
Any ideas why this occurs?
Best wishes and thanks in advance
while(1){
boost::mutex::scoped_lock locker(mtx);
std::cout << "A" << std::endl;
// do stuff with Matrix
std::cout << "B" << std::endl;
mtx.unlock();
//wait for few microseconds
}
Here you manually unlock mtx. Then, sometime later, the scoped_lock (called locker) also unlocks the mutex in its destructor (which is the point of that class). I don't know what guarantees boost::mutex makes, but unlocking it more times than you locked it can't lead to anything good.
Instead of
mtx.unlock(); you presumably want locker.unlock();
Edit: A recommendation here would be to avoid using Boost for this and use standard C++ instead. Threading has been part of the standard since C++11 (8 years!), so presumably all your tools will now support it. Using standardised code/tools gives you better documentation and better help, as they're much more widely known. I'm not knocking Boost (a lot of the standard started in Boost), but once something has been absorbed into the standard you should strongly think about using it.
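Following up on that recommendation, here is a rough sketch of how the loop from the question might look in standard C++. The Matrix struct is just a stand-in for the real matrix type and the sleep duration is arbitrary; the key point is that the unlock goes through the lock object, so its destructor never tries to unlock a mutex it no longer owns:
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>
struct Matrix {};  // stand-in for the real matrix type
class Test {
public:
    void process() {
        while (true) {
            std::unique_lock<std::mutex> locker(mtx_);
            std::cout << "A" << std::endl;
            // do stuff with matrix_
            std::cout << "B" << std::endl;
            locker.unlock();  // release through the lock object, not mtx_.unlock()
            // locker now knows it no longer owns the mutex, so its destructor is a no-op
            std::this_thread::sleep_for(std::chrono::microseconds(100));
        }
    }
    void updateMatrix(const Matrix& m) {
        std::lock_guard<std::mutex> locker(mtx_);
        matrix_ = m;
    }
private:
    std::mutex mtx_;
    Matrix matrix_;
};
The same fix works unchanged with Boost, since boost::mutex::scoped_lock is a boost::unique_lock and has the same unlock() member.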
The following code is from modernescpp. I understand that the lock_guard in the main thread holding the mutex causes the deadlock. But since the created thread should start to run once it is initialized, is there a chance that, after line 15, the lock_guard on line 11 has already grabbed coutMutex so the code runs without any problem? If that is possible, under what circumstances will the created thread run first?
#include <iostream>
#include <mutex>
#include <thread>
std::mutex coutMutex;
int main(){
std::thread t([]{
std::cout << "Still waiting ..." << std::endl;
std::lock_guard<std::mutex> lockGuard(coutMutex); // Line 11
std::cout << std::this_thread::get_id() << std::endl;
}
);
// Line 15
{
std::lock_guard<std::mutex> lockGuard(coutMutex);
std::cout << std::this_thread::get_id() << std::endl;
t.join();
}
}
Just so the answer will be posted as an answer, not a comment:
No, this code is not guaranteed to deadlock.
Yes, this code is quite likely to deadlock.
In particular, it's possible for the main thread to create the subordinate thread, and then both get suspended. From that point, it's up to the OS scheduler to decide which to run next. Since the main thread was run more recently, there's a decent chance it will select the subordinate thread to run next (assuming it attempts to follow something vaguely like round-robin scheduling in the absence of a difference in priority, or something similar giving it a preference for which thread to schedule).
There are various ways to fix the possibility of deadlock. One obvious possibility would be to move the join to just outside the scope in which the main thread holds the mutex:
#include <iostream>
#include <mutex>
#include <thread>
std::mutex coutMutex;
int main(){
std::thread t([]{
std::cout << "Still waiting ..." << std::endl;
std::lock_guard<std::mutex> lockGuard(coutMutex); // Line 11
std::cout << std::this_thread::get_id() << std::endl;
}
);
// Line 15
{
std::lock_guard<std::mutex> lockGuard(coutMutex);
std::cout << std::this_thread::get_id() << std::endl;
}
t.join();
}
I'd also avoid locking a mutex for the duration of using std::cout. cout is typically slow enough that doing so will make contention over the lock quite likely. It's typically going to be better to (for only one example) format the data into a buffer, put the buffer into a queue, and have a single thread that reads items from the queue and shoves them out to cout. This way you only have to lock for long enough to add/remove a buffer to/from the queue.
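As a rough sketch of that idea (the LogQueue name, its members, and the shutdown handling are mine, and only one of several reasonable designs), producers format their text up front and hold the lock only long enough to append to the queue, while a single thread drains it to std::cout:
#include <condition_variable>
#include <deque>
#include <iostream>
#include <mutex>
#include <string>
// Hypothetical queue: many threads push formatted lines, one thread prints them.
class LogQueue {
public:
    void push(std::string line) {
        {
            std::lock_guard<std::mutex> lk(mtx_);   // held only while touching the queue
            queue_.push_back(std::move(line));
        }
        cv_.notify_one();
    }
    void stop() {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            done_ = true;
        }
        cv_.notify_one();
    }
    // Runs on exactly one thread; it is the only place that writes to std::cout.
    void run() {
        std::unique_lock<std::mutex> lk(mtx_);
        while (!done_ || !queue_.empty()) {
            cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
            while (!queue_.empty()) {
                std::string line = std::move(queue_.front());
                queue_.pop_front();
                lk.unlock();                         // don't hold the lock while doing I/O
                std::cout << line << '\n';
                lk.lock();
            }
        }
    }
private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::deque<std::string> queue_;
    bool done_ = false;
};
Worker threads then call push() with already-formatted text, and only the thread running run() ever touches std::cout.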
My question is based on below sample of C++ code
#include <chrono>
#include <thread>
#include <mutex>
#include <iostream>
class ClassUtility
{
public:
ClassUtility() {}
~ClassUtility() {}
void do_something() {
std::cout << "do something called" << std::endl;
using namespace std::chrono_literals;
std::this_thread::sleep_for(1s);
}
};
int main (int argc, const char* argv[]) {
ClassUtility g_common_object;
std::mutex g_mutex;
std::thread worker_thread_1([&](){
std::cout << "worker_thread_1 started" << std::endl;
for (;;) {
std::lock_guard<std::mutex> lock(g_mutex);
std::cout << "worker_thread_1 looping" << std::endl;
g_common_object.do_something();
}
});
std::thread worker_thread_2([&](){
std::cout << "worker_thread_2 started" << std::endl;
for (;;) {
std::lock_guard<std::mutex> lock(g_mutex);
std::cout << "worker_thread_2 looping" << std::endl;
g_common_object.do_something();
}
});
worker_thread_1.join();
worker_thread_2.join();
return 0;
}
This is more of a question to get my understanding clear, and to get a sample usage of std::condition_variable if required.
I have two C++ std::threads which start up in the main method. It's a console app on OS X, so I'm compiling it with clang. Both threads use a common object of ClassUtility to call a method that does some heavy task. For this sample code to explain the situation, both threads run an infinite loop and close down only when the app closes, i.e. when I press Ctrl+C on the console.
I seek to know:
Is it correct to just use a std::lock_guard on a std::mutex to synchronize or protect the calls made to the common object of ClassUtility? Somehow, I seem to be getting into trouble with this "just a mutex" approach. Neither of the threads starts if I lock-guard the loops using the mutex. Moreover, I get segfaults sometimes. Is this because they are lambdas assigned to each thread?
Is it better to use a std::condition_variable between the two threads or lambdas to signal and synchronize them? If yes, then how would the std::condition_variable be used here between the lambdas?
Note: As the question is only to seek information, the code provided here might not compile. It is just to illustrate a real scenario.
Your code is safe
Remember, the lock_guard just calls .lock() and injects a call to .unlock() at the end of the block. So
{
std::lock_guard<std::mutex> lock(g_mutex);
std::cout << "worker_thread_1 looping" << std::endl;
g_common_object.do_something();
}
is basically equivalent to:
{
g_mutex.lock();
std::cout << "worker_thread_1 looping" << std::endl;
g_common_object.do_something();
g_mutex.unlock();
}
except:
the unlock is called even if the block is left via exception and
it ensures you won't forget to call it.
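A tiny sketch of that first point, with may_throw() as a hypothetical function that might throw:
#include <mutex>
std::mutex m;
void may_throw();  // hypothetical function that can throw
void with_guard() {
    std::lock_guard<std::mutex> lock(m);
    may_throw();   // if this throws, lock's destructor still unlocks m
}
void by_hand() {
    m.lock();
    may_throw();   // if this throws, the unlock below is never reached
    m.unlock();    // and m stays locked forever
}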
Your code is not parallel
You are mutually excluding all of the loop body in each thread. There is nothing left that both threads could actually be doing in parallel. The main point of using threads is when each can work on a separate set of objects (and only read common objects), so they don't have to be locked.
In the example code, you really should be locking only the work on the common object; std::cout is thread-safe on its own. So:
{
std::cout << "worker_thread_1 looping" << std::endl;
{
std::lock_guard<std::mutex> lock(g_mutex);
g_common_object.do_something();
// unlocks here, because lock_guard injects unlock at the end of innermost scope.
}
}
I suppose the actual code you are trying to write does have something to actually do in parallel; just a thing to keep in mind.
Condition variables are not needed
Condition variables are for when you need one thread to wait until another thread does some specific thing. Here you are just making sure the two threads are not modifying the object at the same time, and for that a mutex is sufficient and appropriate.
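For contrast, here is a minimal sketch of the kind of situation where a condition variable is the right tool: one thread sleeps until another has done a specific thing (here, setting a ready flag; all names are illustrative):
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
int main() {
    std::mutex m;
    std::condition_variable cv;
    bool ready = false;
    std::thread waiter([&] {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return ready; });   // sleeps until notified and ready == true
        std::cout << "got the signal\n";
    });
    {
        std::lock_guard<std::mutex> lk(m);
        ready = true;                         // the "specific thing" the other thread waits for
    }
    cv.notify_one();
    waiter.join();
}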
Your code never terminates; other than that, I can't fault it.
As others point out, it offers almost no opportunity for parallelism because of the long sleep that takes place with the mutex locked by the sleeping thread.
Here's a simple version that terminates by putting arbitrary finite limits on the loops.
Is it maybe that you haven't understood what join() does?
It blocks the current thread (the one executing join()) until the joined thread ends. But if the joined thread never ends, neither does the current thread.
#include <chrono>
#include <thread>
#include <mutex>
#include <iostream>
class ClassUtility
{
public:
ClassUtility() {}
~ClassUtility() {}
void do_something() {
std::cout << "do something called" << std::endl;
using namespace std::chrono_literals;
std::this_thread::sleep_for(1s);
}
};
int main (int argc, const char* argv[]) {
ClassUtility g_common_object;
std::mutex g_mutex;
std::thread worker_thread_1([&](){
std::cout << "worker_thread_1 started" << std::endl;
for (int i=0;i<10;++i) {
std::lock_guard<std::mutex> lock(g_mutex);
std::cout << "worker_thread_1 looping " << i << std::endl;
g_common_object.do_something();
}
});
std::thread worker_thread_2([&](){
std::cout << "worker_thread_2 started" << std::endl;
for (int i=0;i<10;++i) {
std::lock_guard<std::mutex> lock(g_mutex);
std::cout << "worker_thread_2 looping " << i << std::endl;
g_common_object.do_something();
}
});
worker_thread_1.join();
worker_thread_2.join();
return 0;
}
I am a newcomer to the Boost library, and am trying to implement simple producer and consumer threads that operate on a shared queue. My example implementation looks like this:
#include <iostream>
#include <deque>
#include <boost/thread.hpp>
boost::mutex mutex;
std::deque<std::string> queue;
void producer()
{
while (true) {
boost::lock_guard<boost::mutex> lock(mutex);
std::cout << "producer() pushing string onto queue" << std::endl;
queue.push_back(std::string("test"));
}
}
void consumer()
{
while (true) {
boost::lock_guard<boost::mutex> lock(mutex);
if (!queue.empty()) {
std::cout << "consumer() popped string " << queue.front() << " from queue" << std::endl;
queue.pop_front();
}
}
}
int main()
{
boost::thread producer_thread(producer);
boost::thread consumer_thread(consumer);
sleep(5);
producer_thread.detach();
consumer_thread.detach();
return 0;
}
This code runs as I expect, but when main exits, I get
/usr/include/boost/thread/pthread/mutex.hpp:45:
boost::mutex::~mutex(): Assertion `!pthread_mutex_destroy(&m)' failed.
consumer() popped string test from queue
Aborted
(I'm not sure if the output from consumer is relevant in that position, but I've left it in.)
Am I doing something wrong in my usage of Boost?
A bit off-topic but relevant imo (...waits for flames in comments).
The consumer model here is very greedy, looping and checking for data on the queue continually. It will be more efficient (waste fewer CPU cycles) if you have your consumer threads awakened deterministically when data is available, using inter-thread signalling rather than this lock-and-peek loop. Think about it this way: while the queue is empty, this is essentially a tight loop only broken by the need to acquire the lock. Not ideal?
void consumer()
{
while (true) {
boost::lock_guard<boost::mutex> lock(mutex);
if (!queue.empty()) {
std::cout << "consumer() popped string " << queue.front() << " from queue" << std::endl;
queue.pop_front();
}
}
}
I understand that you are learning, but I would not advise using this in 'real' code. For learning the library, though, it's fine. To your credit, this is a more complex example than necessary for understanding how to use the lock_guard, so you are aiming high!
Eventually you will most likely build (or better, reuse if available) code for a queue that signals workers when they are required to do work, and you will then use the lock_guard inside your worker threads to mediate access to shared data.
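To give a flavour of that, here is a rough sketch of the producer and consumer from the question reworked to wait on a boost::condition_variable instead of spinning. The condition variable and the notify call are my additions, and this assumes a Boost version with the predicate overload of wait (which mirrors the std API); the main function from the question can stay as it is, though see the other answers about join versus detach:
#include <deque>
#include <iostream>
#include <string>
#include <boost/thread.hpp>
#include <boost/thread/condition_variable.hpp>
boost::mutex mutex;
boost::condition_variable cond;
std::deque<std::string> queue;
void producer()
{
    while (true) {
        {
            boost::lock_guard<boost::mutex> lock(mutex);
            queue.push_back(std::string("test"));
        }
        cond.notify_one();  // wake the consumer only when there is work
    }
}
void consumer()
{
    while (true) {
        boost::unique_lock<boost::mutex> lock(mutex);
        cond.wait(lock, [] { return !queue.empty(); });  // sleep instead of busy-looping
        std::cout << "consumer() popped string " << queue.front() << " from queue" << std::endl;
        queue.pop_front();
    }
}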
You give your threads (producer and consumer) the mutex object and then detach them. They are supposed to run forever. Then you exit from your program, and the mutex object is no longer valid. Nevertheless your threads still try to use it; they don't know that it is no longer valid. If you had used the NDEBUG define, you would have got a core dump.
Are you trying to write a daemon application and this is the reason for detaching threads?
When main exits, all the global objects are destroyed. Your threads, however, do continue to run. You therefore end up with problems because the threads are accessing a deleted object.
The bottom line is that you must terminate the threads before exiting. The only way to do this, though, is to get the main program to wait (by using boost::thread::join) until the threads have finished running. You may want to provide some way of signaling the threads to finish running, to save yourself from waiting too long.
The other issue is that your consumer thread continues to run even when there is no data. You might want to wait on a boost::condition_variable until signaled that there is new data.
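Putting that together, a minimal sketch of the shutdown shape described above might look like this; the running flag, the worker body and the sleep are mine, not from the original code:
#include <atomic>
#include <chrono>
#include <thread>
#include <unistd.h>
#include <boost/thread.hpp>
std::atomic<bool> running(true);  // hypothetical stop flag checked by both loops
void worker()
{
    while (running) {
        // lock the mutex and push/pop the shared queue here, as in the question
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}
int main()
{
    boost::thread producer_thread(worker);
    boost::thread consumer_thread(worker);
    sleep(5);
    running = false;          // signal both threads to finish
    producer_thread.join();   // wait for them instead of detaching
    consumer_thread.join();
    return 0;                 // globals (mutex, queue) are destroyed only after the threads exit
}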