Pthreads - 1 lock, 2 unlocks - C++

If I understand correctly, foo1() fails to unlock private_value_. As a result, foo2()'s pthread_mutex_lock() does not succeed, since foo1() never released the mutex.
What are the other consequences?
#include <pthread.h>

pthread_mutex_t private_value_ = PTHREAD_MUTEX_INITIALIZER;

void foo1();
void foo2();

int main(/* ... */)
{
    foo1();
    foo2();
    return 0;
}

void foo1()
{
    pthread_mutex_lock(&private_value_);
    // do something
    // no unlock!
}

void foo2()
{
    pthread_mutex_lock(&private_value_);
    // do something
    pthread_mutex_unlock(&private_value_);
}

There seems to be some confusion here between how the program should have been written and how the program, as currently written, will behave.
This code will cause a deadlock, but that does not indicate that there is something wrong with how the mutexes are working. They're working exactly as they are supposed to: if you try to re-acquire a non-recursive mutex that is already locked, your code will block until the mutex is unlocked. That's how it's supposed to work.
Since this code is single-threaded, the blocking in foo2 will never end, and so your program will deadlock and not progress. That is most likely not how the program should work (because it's not a very useful program that way). The error is not in how the mutexes are functioning, but in how the programmer chose to employ them. The programmer should have put an unlock call at the end of foo1.

The mutex works fine; it's doing what it is meant to do. The thread will block after foo1() returns, as soon as foo2() tries to acquire the mutex that foo1() never released.

This is why people migrate to languages that offer scope-based solutions, like C++ and RAII. The mutex is working as intended, but the writer of the first function forgot a small call, and now the application halts.
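For illustration, here is a minimal sketch (not from the original answer) of what the scope-based approach looks like with C++11's std::lock_guard; the guard releases the mutex when it goes out of scope, so foo1() cannot forget the unlock:

#include <mutex>

std::mutex private_value_; // standing in for the pthread mutex above

void foo1()
{
    std::lock_guard<std::mutex> guard(private_value_);
    // do something
} // guard's destructor unlocks here, even on early return or exception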

It's a bug if a lock is taken on a mutex and nothing ever releases the lock.
However, that doesn't necessarily mean that foo1() has to release the lock, but something must.
There are patterns where one function will acquire a lock, and another will release it. But you need to take special care that these more complex mutex handling patterns are coded correctly. You might be looking at an example of this (and the boiled-down snippet in the question doesn't include that additional complexity).
And as Neil Butterworth mentioned in a comment, there are many situations in C++ where a mutex will be managed by an RAII class, so the lock will be released automatically when the 'lock manager' object gets destroyed (often by going out of scope). In this case, it may not be obvious that the lock is being released, since that is done as a side effect of the variable merely going out of scope.
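As an illustration of the hand-off pattern (a sketch with assumed names, not code from the answer), a movable std::unique_lock lets one function acquire the lock and a different scope release it:

#include <mutex>

std::mutex data_mutex;

// Acquires the lock and hands ownership of it to the caller.
std::unique_lock<std::mutex> acquire_data_lock()
{
    std::unique_lock<std::mutex> guard(data_mutex); // locked here
    return guard; // ownership of the lock moves out with the return value
}

void update_data()
{
    std::unique_lock<std::mutex> guard = acquire_data_lock();
    // ... modify the shared data ...
} // released here, as a side effect of guard going out of scope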

No, it does not necessarily block. It all depends on how your private_value_ mutex was initialized: a pthread_mutex_t can behave differently according to the attributes that were set when it was initialized. See http://opengroup.org/onlinepubs/007908775/xsh/pthread_mutex_lock.html

To answer the question:
There are no consequences other than the deadlock (since your example is single-threaded and makes the calls synchronously).

If only foo2() gets called, everything runs normally. Once foo1() has been called, any later call to foo2(), or to any other function that needs the mutex, will lock up forever.

The behavior of this code depends on the type of the mutex (a sketch of how the type is selected follows the list):
Default (PTHREAD_MUTEX_DEFAULT): undefined behavior (locking an already-locked mutex).
Normal (PTHREAD_MUTEX_NORMAL): deadlock (locking an already-locked mutex).
Recursive (PTHREAD_MUTEX_RECURSIVE): works, but leaves the mutex locked (it is locked twice but only unlocked once).
Error-checking (PTHREAD_MUTEX_ERRORCHECK): fully works, and the mutex is unlocked at the end (the second lock call fails with EDEADLK, and the caller ignores the returned error value).
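For reference, here is a minimal sketch (not from the original answer; the helper name is illustrative) of how the mutex type is selected at initialization. Swap the constant to get each of the behaviors listed above:

#include <pthread.h>

pthread_mutex_t private_value_;

void init_private_value_mutex(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK); // or NORMAL / RECURSIVE / DEFAULT
    pthread_mutex_init(&private_value_, &attr);
    pthread_mutexattr_destroy(&attr);
}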

Related

Is it OK if I unlock a mutex before I lock it?

I wrote some code a while ago, but I realized it had very bad style, so I changed it: I moved the A.unlock() call inside the if block.
I know that in the old version (shown below), if the if branch never runs, this thread unlocks a mutex it does not own, which is undefined behavior.
My question is: if it is undefined behavior, will the logic here still work? If thread t1 doesn't hold the lock and unlocks mutex A, that is undefined behavior, but the mutex should still be held by whichever thread actually holds it, right? So it should not affect the rest of the logic in this code.
My old code behaved exactly the same as when I put the unlock inside the if block, which is why I am curious how it can work.
mutex A;
if (something)
{
    A.lock();
}
A.unlock(); // undefined behavior if the branch above was not taken
When calling unlock on a mutex, the mutex must be owned by the current thread or the behavior is undefined. Undefined behavior means anything can happen, including the program appearing to run correctly, the program crashing, or memory elsewhere getting corrupted and a problem not being visible until later.
Normally a mutex is not used directly; one of the other standard classes (like std::unique_lock or std::lock_guard) is used to manage it. Then you won't have to worry about unlocking the mutex.
You should consider using std::lock_guard:
mutex A;
if (something)
{
    lock_guard<mutex> lock(A); // automatically unlocked when lock goes out of scope
}
It is undefined behavior to unlock a mutex you haven't locked. It may happen to work in some cases, even many times, but one day it will break and behave differently.
You can use a lock_guard in your case and forget about lock/unlock
std::lock_guard<std::mutex> lock(A);
The ISO C++ standard has this to say on the matter, in [thread.mutex.requirements.mutex] (my emphasis):
The expression m.unlock() shall be well-formed and have the following semantics:
Requires: The calling thread shall own the mutex.
That means what you are doing is a violation of the standard and is therefore undefined. It may work, or it may delete all your files while playing the derisive_maniacal_laughter.mp3 file :-)
Bottom line, don't do it.
I also wouldn't hand-code the unlock at all, since modern C++ has higher-level mechanisms for dealing with auto-release of resources. In this case, it's a lock guard:
std::mutex mtxProtectData; // probably defined elsewhere (long-lived)
: : :
if (dataNeedsChanging) {
    std::lock_guard<std::mutex> mtxGuard(mtxProtectData);
    // Change data; the mutex will unlock at the brace below,
    // when mtxGuard goes out of scope.
}

Does this unique_lock in dtor serve any purpose?

Ran across this destructor in a codebase I am debugging.
ManagerImpl::~ManagerImpl() {
// don't go away if some thread is still hitting us
boost::unique_lock<boost::mutex> l(m_mutex);
}
Does it actually serve any useful purpose in a multi-threaded program? It looks like a kludge.
I assume the idea is to defer destruction if another thread is calling a function that locks the mutex, but is it even effective at doing that? ElectricFence segfaults would have me believe otherwise.
It is probably trying to postpone destruction until another thread unlocks the mutex and leaves another member function.
However, this wouldn't prevent another thread calling that function again after the lock in the destructor has been released.
There must be more interaction between threads (which you don't show) to make this code make sense. Still, though, this doesn't seem to be robust code.

scoped_lock() - an RAII implementation using pthread

I have a socket shared between 4 threads and I wanted to use the RAII principle for acquiring and releasing the mutex.
The ground realities
I am using the pthread library.
I cannot use Boost.
I cannot use anything newer than C++03.
I cannot use exceptions.
The Background
Instead of having to lock the mutex for the socket every time before using it, and then unlock the mutex right afterwards, I thought I could write a scoped_lock() which would lock the mutex and, once it goes out of scope, automatically unlock it.
So, quite simply I do a lock in the constructor and an unlock in the destructor, as shown here.
ScopedLock::ScopedLock(pthread_mutex_t& mutex, int& errorCode)
    : m_Mutex(mutex)
{
    errorCode = m_lock();
}

ScopedLock::~ScopedLock()
{
    m_unlock(); // the return value cannot be reported to the caller from here
}
where m_lock() and m_unlock() are quite simply two wrapper functions around the pthread_mutex_lock() and the pthread_mutex_unlock() functions respectively, with some additional tracelines/logging.
This way, I would not have to write at least two unlock statements: one for the good path and one for the bad path (and there could be more than one bad path in some situations).
The Problem
The problem that I have bumped into and the thing that I don't like about this scheme is the destructor.
I have diligently added error handling to every function, but from the destructor of this ScopedLock I cannot inform the caller about any errors that might be returned by m_unlock().
This is a fundamental problem with RAII, but in this case you're in luck. pthread_mutex_unlock() only fails if you set up the mutex wrong (EINVAL) or if you're attempting to unlock an error-checking mutex from a thread that doesn't own it (EPERM). These errors indicate bugs in your code, not runtime errors that you need to handle. Asserting that the unlock's return value is 0 is a reasonable strategy in this case.
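A minimal sketch of what that can look like, assuming m_lock()/m_unlock() are thin wrappers around the pthreads calls (the class layout here is assumed, not taken from the question):

#include <pthread.h>
#include <cassert>

class ScopedLock
{
public:
    ScopedLock(pthread_mutex_t& mutex, int& errorCode)
        : m_Mutex(mutex)
    {
        errorCode = pthread_mutex_lock(&m_Mutex);
    }

    ~ScopedLock()
    {
        int rc = pthread_mutex_unlock(&m_Mutex);
        assert(rc == 0); // EINVAL/EPERM here would indicate a programming bug
        (void)rc;        // avoid an unused-variable warning in release builds
    }

private:
    ScopedLock(const ScopedLock&);            // non-copyable (C++03 style)
    ScopedLock& operator=(const ScopedLock&);

    pthread_mutex_t& m_Mutex;
};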

Why doesn't pthread_mutex_t block when the code is compiled with g++? [duplicate]

As per this article:
If you try and lock a non-recursive mutex twice from the same thread without unlocking in between, you get undefined behavior.
My very naive mind asks: why don't they just return an error? Is there a reason why this has to be UB?
Because it never happens in a correct program, and making a check for something that never happens is wasteful (and to make that check it needs to store the owning thread ID, which is also wasteful).
Note that it being undefined allows debug implementations to throw an exception, for example, while still allowing release implementations to be as efficient as possible.
Undefined behavior allows implementations to do whatever is fastest or most convenient. For example, an efficient implementation of a non-recursive mutex might be a single bit, with the lock operation implemented as an atomic compare-and-swap instruction in a loop. If the thread that owns the mutex tries to lock it again, it deadlocks: it is waiting for the mutex to be unlocked, but nobody else can unlock it (unless there is some other bug where a thread that doesn't own it unlocks it), so the thread waits forever.
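To make that concrete, here is an illustrative sketch (not pthreads code) of such a single-flag lock built on compare-and-swap; it stores no owner ID, so a second lock() from the owning thread simply spins forever:

#include <atomic>

struct SpinLock {
    std::atomic<bool> locked{false};

    void lock() {
        bool expected = false;
        // Retry until we flip the flag from false to true.
        while (!locked.compare_exchange_weak(expected, true)) {
            expected = false; // compare_exchange_weak overwrote it on failure
        }
    }

    void unlock() {
        locked.store(false);
    }
};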

C++ Thread question - setting a value to indicate the thread has finished

Is the following safe?
I am new to threading and I want to delegate a time consuming process to a separate thread in my C++ program.
Using the boost libraries I have written code something like this:
thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
Where finished_flag is a boolean member of my class. When the thread is finished it sets the value and the main loop of my program checks for a change in that value.
I assume that this is okay because I only ever start one thread, and that thread is the only thing that changes the value (except for when it is initialised before I start the thread)
So is this okay, or am I missing something and need to use locks, mutexes, etc.?
You never mentioned the type of finished_flag...
If it's a straight bool, then it might work, but it's certainly bad practice, for several reasons. First, some compilers will cache reads of the finished_flag variable, since the compiler doesn't always pick up the fact that it's being written to by another thread. You can get around this by declaring the bool volatile, but that's taking us in the wrong direction. Even if reads and writes happen as you'd expect, there's nothing to stop the OS scheduler from interleaving the two threads halfway through a read/write. That might not be such a problem here, where you have one read and one write op in separate threads, but it's a good idea to start as you mean to carry on.
If, on the other hand, it's a thread-safe type, like a CEvent in MFC (or its equivalent in Boost), then you should be fine. This is the best approach: use thread-safe synchronization objects for inter-thread communication, even for simple flags.
Instead of using a member variable to signal that the thread is done, why not use a condition? You are already using the Boost libraries, and condition is part of the thread library.
Check it out. It allows the worker thread to 'signal' that it has finished, and the main thread can check during execution whether the condition has been signalled and then do whatever it needs to do with the completed work. There are examples in the link.
As a general rule, I would never assume that a resource will only be modified by one thread. You might know what it is for, but someone else might not, causing no end of grief when the main thread thinks the work is done and tries to access data that is not correct. It might even delete it while the worker thread is still using it, causing the app to crash. Using a condition will help with this.
Looking at the thread documentation, you could also call thread.timed_join in the main thread. timed_join will wait a specified amount of time for the thread to 'join' (joining means the thread has finished).
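For illustration, a minimal sketch (names assumed, not code from the original answer) of signalling completion through a Boost condition variable instead of a bare flag:

#include <boost/thread.hpp>

boost::mutex done_mutex;
boost::condition_variable done_cond;
bool done = false; // always accessed with done_mutex held

void worker()
{
    // ... the time-consuming work ...
    boost::lock_guard<boost::mutex> lock(done_mutex);
    done = true;
    done_cond.notify_one(); // tell the main thread we are finished
}

void wait_for_worker()
{
    boost::unique_lock<boost::mutex> lock(done_mutex);
    while (!done) {
        done_cond.wait(lock); // releases the mutex while waiting
    }
}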
I don't mean to be presumptive, but it seems like the purpose of your finished_flag variable is to pause the main thread (at some point) until the thread thrd has completed.
The easiest way to do this is to use boost::thread::join
// launch the thread...
thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
// ... do other things maybe ...
// wait for the thread to complete
thrd->join();
If you really want to get into the details of communication between threads via shared memory, even declaring a variable volatile won't be enough, even if the compiler does use appropriate access semantics to ensure that it won't get a stale version of the data after checking the flag. The CPU can issue reads and writes out of order (x86 usually doesn't, but PPC definitely does), and there is nothing in C++98/C++03 that allows the compiler to generate code that orders memory accesses appropriately.
Herb Sutter's Effective Concurrency series has an extremely in-depth look at how the C++ world intersects the multicore/multiprocessor world.
Having the thread set a flag (or signal an event) before it exits is a race condition. The thread has not necessarily returned to the OS yet, and may still be executing.
For example, consider a program that loads a dynamic library (pseudocode):
lib = loadLibrary("someLibrary");
fun = getFunction("someFunction");
fun();
unloadLibrary(lib);
And let's suppose that this library uses your thread:
void someFunction() {
    volatile bool finished_flag = false;
    thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
    while (!finished_flag) { // ignore the polling loop, it's beside the point
        sleep();
    }
    delete thrd;
}
void myclass::mymethod() {
    // do stuff
    finished_flag = true;
}
When myclass::mymethod() sets finished_flag to true, myclass::mymethod() hasn't returned yet. At the very least, it still has to execute a "return" instruction of some sort (if not much more: destructors, exception handler management, etc.). If the thread executing myclass::mymethod() gets pre-empted before that point, someFunction() will return to the calling program, and the calling program will unload the library. When the thread executing myclass::mymethod() gets scheduled to run again, the address containing the "return" instruction is no longer valid, and the program crashes.
The solution would be for someFunction() to call thrd->join() before returning. This would ensure that the thread has returned to the OS and is no longer executing.
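A sketch of that fix, mirroring the pseudocode above (the flag and the surrounding names are assumed from it):

void someFunction() {
    volatile bool finished_flag = false;
    thrd = new boost::thread(boost::bind(&myclass::mymethod, this, &finished_flag));
    // ... polling or other work ...
    thrd->join();  // blocks until mymethod() has fully returned to the OS
    delete thrd;   // only now is it safe to let the caller unload the library
}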