removing a boost named_mutex - c++

I have the following code:
void Func()
{
    boost::interprocess::named_mutex someMutex(boost::interprocess::open_or_create, "MyMutex");
    boost::interprocess::scoped_lock<boost::interprocess::named_mutex> lock(someMutex);
    // ... some stuff happens here
}
and many applications are calling Func. If one of them crashes inside Func, the mutex is never released, so I'm considering removing the mutex by calling boost::interprocess::named_mutex::remove("MyMutex") whenever I detect that one of my apps has crashed.
Due to special circumstances, it's actually safe for two threads or processes to enter the protected area at the same time, because the content of Func() only does things for the first application that runs.
I have two questions:
Is what I'm planning to do a good idea?
What happens if:
Process A opens the mutex and locks it (the mutex had been created previously)
Process B dies. We detect that and remove the mutex
Process C creates the mutex and locks it
Process A finishes running Func(), and the scoped_lock destructor releases the mutex
Process C finishes running Func(), and the scoped_lock destructor releases the mutex
Do I now have a "double released" named_mutex, or did Process A simply do nothing in the scoped_lock destructor because the named_mutex it had locked had already been removed?

Related

C++ Pthread Mutex Locking

I guess I'm a little unsure of how mutexes work. If a mutex gets locked after some conditional, will it only lock out threads that meet that same condition, or will it lock out all threads regardless until the mutex is unlocked?
Ex:
if (someBoolean)
    pthread_mutex_lock(&mut);
someFunction();
pthread_mutex_unlock(&mut);
Will all threads be stopped from running someFunction();, or just those threads that pass through the if-statement?
You need to call pthread_mutex_lock() in order for a thread to be locked. If you call pthread_mutex_lock() in thread A but not in thread B, thread B is not locked out (and you have likely defeated the purpose of mutual exclusion in your code, as it's only useful if every thread follows the same locking protocol to protect your code).
The code in question has a few issues outlined below:
if (someBoolean)              // if someBoolean is shared among threads, you need
                              // to lock access to this variable as well
    pthread_mutex_lock(&mut);
someFunction();               // now you have some threads calling someFunction()
                              // with the lock held, and some calling it without
                              // the lock held, which defeats the purpose of
                              // mutual exclusion
pthread_mutex_unlock(&mut);   // if you did not lock mut above, you must not
                              // attempt to unlock it
Will all threads be stopped from running someFunction();, or just those threads that pass through the if-statement?
Only the threads for which someBoolean is true will obtain the lock. Therefore, only those threads will be prevented from calling someFunction() while someone else holds the same lock.
However, in the provided code, all threads will call pthread_mutex_unlock on the mutex, regardless of whether they actually locked it. For mutexes created with default parameters this constitutes undefined behavior and must be fixed.

What happens to a lock on a boost interprocess mutex if a process forks while the lock is acquired?

Let's say we have a process with two threads. One thread does some work on some shared resource and periodically takes out a scoped lock on a boost::interprocess::mutex. The other thread causes a fork/exec, at some random time.
Thread 1
void takeLockDoWork() {
    using namespace boost::interprocess;
    managed_shared_memory segment(open_only, "xxx");
    interprocess_sharable_mutex *mutex =
        segment.find<interprocess_sharable_mutex>("mymutex").first;
    scoped_lock<interprocess_sharable_mutex> lock(*mutex);
    // access or do work on a shared resource here
    // the lock automatically unlocks when scope is left
}
Let's say Thread 2 forks right after the scoped_lock is taken out. Presumably the child process has the same lock state as the parent.
What happens? Will there now be a race condition with the parent process?
As long as you don't fork from a thread that is holding an interprocess_sharable_mutex or access memory that was being protected by a mutex, you're okay.
The mutex exists in shared memory, meaning that even though you forked, the mutex state wasn't duplicated; it exists in one place, accessible by both processes.
Because forking only maintains the forking thread in the child, only the other thread in the parent thinks it has ownership of the mutex, so there's no problem. Even if you tried to acquire the mutex after forking, you would still be okay; it would just block until the parent releases it.

Multithreading Clarification

I've been trying to learn how to multithread and came up with the following understanding. I was wondering if I'm correct or far off and, if I'm incorrect in any way, if someone could give me advice.
To create a thread, first you need to utilize a library such as <thread> or any alternative (I'm using boost's multithreading library to get cross-platform capabilities). Afterwards, you can create a thread by declaring it as such (for std::thread)
std::thread thread (foo);
Now you can call thread.join() or thread.detach(). The former waits until the thread finishes before continuing, while the latter lets the thread run alongside whatever you plan to do next.
If you want to protect something, say a vector std::vector<double> data, from threads accessing simultaneously, you would use a mutex.
Mutexes would be declared as global variables so that the thread functions can access them (or, if you're writing a class that will be multithreaded, the mutex can be declared as a private/public member of the class). Afterwards, you can lock and unlock the mutex around the shared data.
Let's take a quick look at this example pseudo code:
std::mutex mtx;
std::vector<double> data;

void threadFunction(){
    // Do stuff
    // ...
    // Want to access a global variable
    mtx.lock();
    data.push_back(3.23);
    mtx.unlock();
    // Continue
}
In this code, when the mutex locks down on the thread, it only locks the lines of code between it and mtx.unlock(). Thus, other threads will still continue on their merry way until they try accessing data (note, we would likely use the mutex in the other threads as well). Then they would stop, wait to use data, lock it, push_back, unlock it, and continue. Check here for a good description of mutexes.
That's about it on my understanding of multithreading. So, am I horribly wrong or accurate?
Your comments refer to "locking the whole thread". You can't lock part of a thread.
When you lock a mutex, the current thread takes ownership of the mutex. Conceptually, you can think of it as the thread placing its mark on the mutex (storing its thread id in the mutex data structure). If any other thread comes along and attempts to acquire the same mutex instance, it sees that the mutex is already "claimed" by somebody else and waits until the first thread has released the mutex. When the owning thread later releases the mutex, one of the threads waiting for it can wake up, acquire the mutex for itself, and carry on.
In your code example, there is a potential risk that the mutex might not be released once it is acquired. If the call to data.push_back(xxx) throws an exception (out of memory?), then execution will never reach mtx.unlock() and the mutex will remain locked forever. All subsequent threads that attempt to acquire that mutex will drop into a permanent wait state. They'll never wake up because the thread that owns the mutex is toast.
For this reason, acquiring and releasing critical resources like mutexes should be done in a manner that will guarantee they will be released regardless of how execution leaves the current scope. In other languages, this would mean putting the mtx.unlock() in the finally section of a try..finally block:
mtx.lock();
try
{
    // do stuff
}
finally
{
    mtx.unlock();
}
C++ doesn't have try..finally statements. Instead, C++ leverages its language rules for automatic disposal of locally defined variables. You construct an object in a local variable, the object acquires a mutex lock in its constructor. When execution leaves the current function scope, C++ will make sure that the object is disposed, and the object releases the lock when it is disposed. That's the RAII others have mentioned. RAII just makes use of the existing implicit try..finally block that wraps every C++ function body.

Boost mutex locking on same thread

I'm new to the boost library, and it's such an amazing library! Also, I am new to mutexes, so forgive me if I am making a newbie mistake.
Anyway, I have two functions called FunctionOne and FunctionTwo. FunctionOne and FunctionTwo are called asynchronously by a different thread. So here's what happens: In FunctionOne, I lock a global mutex at the beginning of the function and unlock the global mutex at the end of the function. Same thing for FunctionTwo.
Now here's the problem: at times, FunctionOne and FunctionTwo are called less than a few milliseconds apart (not always though). So, FunctionOne begins to execute and half-way through FunctionTwo executes. When FunctionTwo locks the mutex, the entire thread that FunctionOne and FunctionTwo are on is stopped, so FunctionOne is stuck half-way through and the thread waits on itself in FunctionTwo forever. So, to summarize:
Function 1 locks mutex and begins executing code.
Function 2 is called a few ms later and locks the mutex, freezing the thread func 1 and 2 are on.
Now func 1 is stuck half-way through and the thread is frozen, so func 1 never finishes and the mutex is locked forever, waiting for func 1 to finish.
What does one do in such situations? Here is my code:
boost::mutex g_Mutex;
lua_State* L;

// Function 1 is called from some other thread
void FunctionOne()
{
    g_Mutex.lock();
    lua_performcalc(L);
    g_Mutex.unlock();
}

// Function 2 is called from some other thread a few ms later, freezing the thread
// and Function 1 never finishes
void FunctionTwo()
{
    g_Mutex.lock();
    lua_performothercalc(L);
    g_Mutex.unlock();
}
Are these functions intended to be re-entrant, such that FunctionOne will call itself or FunctionTwo while holding the mutex? Or vice versa, with FunctionTwo locking the mutex and then calling FunctionOne/FunctionTwo while the mutex is locked?
If not, then you should not be calling these two functions from the same thread. If you intend FunctionTwo to block until FunctionOne has completed then it is a mistake to have it called on the same thread. That would happen if lua_performcalc ends up calling FunctionTwo. That'd be the only way they could be called on the same thread.
If so, then you need a recursive_mutex. A regular mutex can only be locked once; locking it again from the same thread is an error. A recursive mutex can be locked multiple times by a single thread and is locked until the thread calls unlock an equal number of times.
In either case, you should avoid calling lock and unlock explicitly. If an exception is thrown the mutex won't get unlocked. It's better to use RAII-style locking, like so:
{
    boost::recursive_mutex::scoped_lock lock(mutex);
    // ...critical section code...
}   // mutex is unlocked when 'lock' goes out of scope
Your description is incorrect: a mutex cannot be locked twice like that, so you have a different problem.
Check for re-entrance while the mutex is locked.
Check for exceptions.
To avoid problems with exceptions, you should use boost::mutex::scoped_lock (RAII).

Multithreaded program thread join issues

I am currently writing a multithreaded program where a thread may sometimes be created depending on certain circumstances. If this thread is created it needs to run independently of all other threads and I cannot afford to block any other threads to wait for it to join. The length of time the spawned thread runs for varies; sometimes it can take up to a few hours.
I have tried spawning the thread and putting a join in the destructor of the class which works fine, however if the code within the spawned thread finishes a long time before the destructor is called (which will be around 99% of the time) I would like the thread to kill itself freeing all its resources etc.
I looked into using detach for this, but you can't rejoin a detached thread and on the off chance the destructor is called before this thread finishes then the spawned thread will not finish and could have disastrous consequences.
Is there any possible solution that ensures the thread finishes before the class is destructed as well as allowing it to join as soon as the thread finishes its work?
I am using boost/c++11 for threading. Any help at all would be greatly appreciated.
Thanks
The thread may detach itself, releasing its resources. If the destructor sees that the thread is still joinable (i.e. still running), it joins; if the thread reaches its end first, it detaches itself. Possible race condition: joinable() returns true in the destructor, the thread then detaches itself, and the destructor's join fails miserably. So use a mutex guarding the thread's decease:
struct ThreadContainer
{
    std::mutex threadEndMutex;
    std::thread theThread;

    ThreadContainer()
      : theThread([this]()
        {
            /* do stuff */

            // if the mutex is locked, the destructor is just
            // about to join, so we let it.
            if (threadEndMutex.try_lock())
                theThread.detach();
        })
    {}

    ~ThreadContainer()
    {
        // if the mutex is locked, the thread is just about
        // to detach itself, so no need to join.
        // if we got the mutex but the thread is not joinable,
        // it has detached itself already.
        if (threadEndMutex.try_lock() && theThread.joinable())
            theThread.join();
    }
};
PS:
you might not even need the call to joinable(), because if the thread detached itself, it never unlocked the mutex, so try_lock fails.
PPS:
instead of the mutex, you may use std::atomic_flag:
struct ThreadContainer
{
    std::atomic_flag threadEnded = ATOMIC_FLAG_INIT;
    std::thread theThread;

    ThreadContainer()
      : theThread([this]()
        {
            /* do stuff */
            if (!threadEnded.test_and_set())
                theThread.detach();
        })
    {}

    ~ThreadContainer()
    {
        if (!threadEnded.test_and_set())
            theThread.join();
    }
};
You could define pauses/steps in your "independent" thread's algorithm, and at each step check a global variable that tells the thread whether to cancel the calculation and destroy itself, or to continue.
If a global variable is not sufficient, i.e. if finer granularity is needed, you should define a functor object for your thread function, this functor having a kill() method. You keep references to the functors after you have launched them as threads, and when you call MyThreadFunctor::kill() it sets a boolean field that is checked at each step of the calculation in the functor's thread function itself.