Unlock of unowned mutex - C++

I created the following class, which provides acquire_lock() and release_lock() functions:
class LockableObject {
public:
    void acquire_lock() {
        std::unique_lock<std::mutex> local_lock(m_mutex);
        m_lock = std::move(local_lock);
    }

    void release_lock() {
        m_lock.unlock();
    }

private:
    std::mutex m_mutex;
    std::unique_lock<std::mutex> m_lock;
};
I have multiple threads accessing the same object, calling acquire_lock before performing any operations and release_lock once done, as below:
void ThreadFunc(int ID, LockableObject* lckbleObj)
{
    for (int i = 0; i < 1000; i++)
    {
        lckbleObj->acquire_lock();
        std::cout << "Thread with ID = " << ID << " doing work" << std::endl;
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        lckbleObj->release_lock();
    }
}
int main()
{
    const int numThreads = 10;
    std::thread workerThreads[numThreads];
    LockableObject* testObject = new LockableObject();
    for (int i = 0; i < numThreads; i++)
    {
        workerThreads[i] = std::thread(ThreadFunc, i, testObject);
    }
    for (int i = 0; i < numThreads; i++)
    {
        workerThreads[i].join();
    }
    delete testObject; // safe to free once all threads have joined
    return 0;
}
In the acquire_lock function, I first lock the underlying mutex (m_mutex) by passing it to the constructor of a local std::unique_lock on the stack. I assume that once the std::unique_lock constructor returns, it has locked the mutex; I then move the local unique_lock into the member variable m_lock.
This program is flawed in some basic way: the call to release_lock results in an "unlock of unowned mutex" error. I seem to be missing something basic about std::unique_lock and am looking for someone to correct my understanding.

See my comment about the lack of std::defer_lock in the constructor. But you also have a race condition in your code.
The acquire_lock function modifies the m_lock under protection of the m_mutex mutex. Thus, to ensure thread safety, no other thread can modify m_lock except while holding m_mutex.
But the release_lock function modifies m_lock while it is releasing that mutex. Thus, you do not have proper synchronization on m_lock.
This is somewhat subtle to understand. This is the problem code:
m_lock.unlock();
Note that when this function is entered, m_mutex is locked but during its execution, it both modifies m_lock and releases m_mutex in no particular guaranteed order. But m_mutex protects m_lock. So this is a race condition and not permitted.
It can be fixed as follows:
void release_lock() {
    std::unique_lock<std::mutex> local_lock = std::move(m_lock);
    local_lock.unlock();
}
Now the first line of code modifies m_lock but runs entirely with m_mutex held. This avoids the race condition.
The unlock can be removed if desired. The destructor of local_lock will do it.
By the way, I would suggest changing the API: instead of offering lock and unlock calls, provide a way to create an object that owns a lock on this object. You can even use std::unique_lock<LockableObject> if you want. Why create a new API that's worse than the one offered by the standard?
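A minimal sketch of that suggestion (the names are illustrative, not the original code): expose lock() and unlock() so the class satisfies the BasicLockable requirement, then let the standard RAII wrappers do the bookkeeping:

#include <mutex>

class LockableObject {
public:
    void lock()   { m_mutex.lock(); }   // satisfies BasicLockable
    void unlock() { m_mutex.unlock(); }
private:
    std::mutex m_mutex;
};

void worker(LockableObject& obj) {
    std::unique_lock<LockableObject> guard(obj); // locks obj, unlocks on scope exit
    // ... operate on obj ...
}

This removes the shared m_lock member entirely, and with it the race on that member.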

The member function acquire_lock can be changed as below to fix the issue:
void acquire_lock() {
    m_lock = std::unique_lock<std::mutex>(m_mutex);
}
The move-assignment operator of unique_lock is called, leaving m_lock managing the mutex m_mutex.
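To illustrate the move-assignment behaviour this relies on: assigning a new unique_lock first releases whatever lock the target currently owns, then adopts the new one (a standalone sketch, not part of the original class):

#include <mutex>

std::mutex m1, m2;

void demo() {
    std::unique_lock<std::mutex> lk(m1);   // lk owns m1
    lk = std::unique_lock<std::mutex>(m2); // move-assignment unlocks m1; lk now owns m2
}                                          // lk's destructor unlocks m2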


Will the destructor not free an object's dynamic variable when it is locked by a mutex?

I'm trying to solve a (for me at least) complicated asynchronous scenario all at once, but I think it will be better to understand a simpler case first.
Consider an object that holds dynamically allocated memory in a member variable:
#include <thread>
#include <mutex>

using namespace std;

mutex mu;

class Object
{
public:
    char *var;

    Object()
    {
        var = new char[1];
        var[0] = 1;
    }

    ~Object()
    {
        mu.lock();
        delete[] var; // the destructor should free all dynamic memory on its own, as I remember
        mu.unlock();
    }
} *object = nullptr;

int main()
{
    object = new Object();
    return 0;
}
What happens if, while its var member is being used in a detached (i.e. asynchronous) thread, the object is deleted in another thread?
void do_something()
{
    for (;;)
    {
        mu.lock();
        if (object)
        {
            if (object->var[0] < 255)
                object->var[0]++;
            else
                object->var[0] = 0;
        }
        mu.unlock();
    }
}

int main()
{
    object = new Object();
    thread th(do_something);
    th.detach();
    Sleep(1000);
    delete object;
    object = nullptr;
    return 0;
}
1. Is it possible that var will not be deleted in the destructor?
2. Do I use the mutex with detached threads correctly in the code above?
2.1 Do I also need to cover the delete object line with mutex::lock and mutex::unlock?
I also want to point out, once again, that I need the new thread to be asynchronous. I do not want the main thread to hang while the new one is running; I need both threads running at once.
P.S. From the comments and answers, one of the most important things I finally understood is the mutex. My biggest mistake was thinking that an already-locked mutex makes other threads skip the code between lock and unlock.
Forget about shared variables; the mutex itself has nothing to do with them. A mutex is just a mechanism for safely pausing threads:
mutex mu;

void a()
{
    mu.lock();
    Sleep(1000);
    mu.unlock();
}

int main()
{
    thread th(a);
    th.detach();
    mu.lock(); // hangs here until mu.unlock() from a() is called
    mu.unlock();
    return 0;
}
The concept is extremely simple: imagine the mutex object has a flag isLocked. When any thread calls the lock method and isLocked is false, it simply sets isLocked to true. But if isLocked is already true, the mutex somehow, at a low level, suspends the thread that called lock until isLocked becomes false. You can find part of the source code of the lock method further down this page. A plain bool variable could seemingly be used instead of a mutex, but that would cause undefined behaviour.
Why is it associated with shared data? Because using the same variable (memory) simultaneously from multiple threads causes undefined behaviour, so a thread reaching a variable that may currently be in use by another thread should wait until the other has finished working with it; that is why a mutex is used here.
Why does accessing the mutex itself from different threads not cause undefined behaviour? I don't know; I'm going to google it.
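For what it's worth, here is a toy version of the "isLocked flag" idea described above (a sketch, not a replacement for std::mutex): test_and_set atomically reads the old value and sets the flag, so two threads can never both observe "unlocked" and proceed, which is exactly what a plain bool cannot guarantee:

#include <atomic>

class ToySpinLock {
    std::atomic_flag isLocked = ATOMIC_FLAG_INIT;
public:
    // atomically: read the old value and set the flag; repeat while it was already set
    void lock()   { while (isLocked.test_and_set()) { /* spin */ } }
    void unlock() { isLocked.clear(); }
};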
Do I use the mutex with detached threads correctly in the code above?
Those are orthogonal concepts. I don't think the mutex is used correctly, since you only have one thread mutating and accessing the global variable, and you use the mutex to synchronize waits and exits. You should join the thread instead.
Also, detached threads are usually a code smell. There should be a way to wait for all threads to finish before exiting the main function.
Do I also need to cover the delete object line with mutex::lock and mutex::unlock?
No, since the destructor will call mu.lock(), so you're fine here.
Is it possible that var will not be deleted in the destructor?
No, though it will make your main thread wait. There are, however, solutions that do this without using a mutex.
There are usually two ways to attack this problem: block the main thread until all the other threads are done, or use shared ownership so that both the main thread and the worker own the object variable, which is freed only when all owners are gone.
To block all threads until everyone is done and then do the cleanup, you can use std::barrier from C++20:
void do_something(std::barrier<std::function<void()>>& sync_point)
{
    for (;;)
    {
        if (object)
        {
            if (object->var[0] < 255)
                object->var[0]++;
            else
                object->var[0] = 0;
        }
    } // break at some point so the thread exits
    sync_point.arrive_and_wait();
}

int main()
{
    object = new Object();
    auto const on_completion = [] { delete object; };
    // 2 is the number of threads. I'm counting the main thread since
    // you're using detached threads
    std::barrier<std::function<void()>> sync_point(2, on_completion);
    thread th(do_something, std::ref(sync_point));
    th.detach();
    Sleep(1000);
    sync_point.arrive_and_wait();
    return 0;
}
This will make all the threads (two of them) wait until every thread gets to the sync point. Once the sync point is reached by all threads, it runs the on_completion function, which deletes the object exactly once, when no one needs it anymore.
The other solution would be to use a std::shared_ptr so anyone can own the pointer and free it only when no one is using it anymore. Note that you will need to remove the object global variable and replace it with a local variable to track the shared ownership:
void do_something(std::shared_ptr<Object> object)
{
    for (;;)
    {
        if (object)
        {
            if (object->var[0] < 255)
                object->var[0]++;
            else
                object->var[0] = 0;
        }
    }
}

int main()
{
    std::shared_ptr<Object> object = std::make_shared<Object>();
    // You need to pass it as a parameter, otherwise it won't be safe
    thread th(do_something, object);
    th.detach();
    Sleep(1000);
    // If the thread is done, this line will call delete.
    // If the thread is not done, the thread will call delete
    // when its local `object` variable goes out of scope.
    object = nullptr;
    return 0;
}
Is it possible that var will not be deleted in the destructor?
With
~Object()
{
    mu.lock();
    delete[] var; // the destructor should free all dynamic memory on its own, as I remember
    mu.unlock();
}
you might have to wait for the lock to be released, but var will be deleted.
Except that your program exhibits undefined behaviour through unprotected concurrent access to object (delete object isn't protected, and you read object in your other thread), so anything can happen.
Do I use the mutex with detached threads correctly in the code above?
Detached or not is irrelevant.
And your usage of the mutex is wrong/incomplete.
Which variable should your mutex be protecting?
It seems to be a mix between object and var.
If it is var, you might reduce the scope in do_something (lock only in the if block).
And object currently has no protection at all.
2.1 Do I also need to cover the delete object line with mutex::lock and mutex::unlock?
Yes, object needs protection.
But you cannot use that same mutex for it: std::mutex doesn't allow locking twice in the same thread (a protected delete[] var; inside a protected delete object;), whereas std::recursive_mutex does allow that.
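To illustrate the difference, a minimal sketch (not from the original answer): the same thread may relock a std::recursive_mutex it already holds, while doing the same with a plain std::mutex is undefined behaviour:

#include <mutex>

std::recursive_mutex rmu;

void inner() {
    std::lock_guard<std::recursive_mutex> lk(rmu); // relocked by the same thread: OK
}

void outer() {
    std::lock_guard<std::recursive_mutex> lk(rmu);
    inner(); // with std::mutex this nested lock would be undefined behaviour
}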
So we come back to the question: which variable does the mutex protect?
If it is only object (which is enough in your sample), it would be something like:
#include <thread>
#include <mutex>

using namespace std;

mutex mu;

class Object
{
public:
    char *var;

    Object()
    {
        var = new char[1];
        var[0] = 1;
    }

    ~Object()
    {
        delete[] var; // the destructor should free all dynamic memory on its own, as I remember
    }
} *object = nullptr;

void do_something()
{
    for (;;)
    {
        mu.lock();
        if (object)
        {
            if (object->var[0] < 255)
                object->var[0]++;
            else
                object->var[0] = 0;
        }
        mu.unlock();
    }
}

int main()
{
    object = new Object();
    thread th(do_something);
    th.detach();
    Sleep(1000);
    mu.lock(); // or const std::lock_guard<std::mutex> lock(mu); and get rid of the unlock
    delete object;
    object = nullptr;
    mu.unlock();
    return 0;
}
Alternatively, as you don't have to share data between threads, you might do:
int main()
{
    Object object;           // lives on the stack
    thread th(do_something); // do_something would need a reference to object
                             // and a way to terminate, so that join() returns
    Sleep(1000);
    th.join();
    return 0;
}
and get rid of all mutexes.
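If C++20 is available, a std::jthread makes this join-based variant terminate cleanly; a sketch (it assumes do_something is rewritten as the lambda below, taking the object by reference):

#include <thread>
#include <stop_token>
#include <chrono>

int main()
{
    Object object; // stack lifetime, no global, no mutex needed
    std::jthread th([&object](std::stop_token st) {
        while (!st.stop_requested()) {
            if (object.var[0] < 255)
                object.var[0]++;
            else
                object.var[0] = 0;
        }
    });
    std::this_thread::sleep_for(std::chrono::seconds(1));
    return 0;
} // ~jthread() requests stop and joins before object is destroyed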
Have a look at this; it shows the use of scoped_lock, std::async and the management of lifetimes through scopes (demo here: https://onlinegdb.com/FDw9fG9rS):
#include <future>
#include <mutex>
#include <thread>
#include <chrono>
#include <iostream>

// using namespace std; <== don't do this
// mutex mu; <== avoid global variables

class Object
{
public:
    Object() :
        m_var{ 1 }
    {
    }

    ~Object()
    {
    }

    void do_something()
    {
        using namespace std::chrono_literals;
        for (std::size_t n = 0; n < 30; ++n)
        {
            // extra scope to reduce the duration of the lock
            {
                std::scoped_lock<std::mutex> lock{ m_mtx };
                m_var++;
                std::cout << ".";
            }
            std::this_thread::sleep_for(150ms);
        }
    }

private:
    std::mutex m_mtx;
    char m_var;
};

int main()
{
    Object object;

    // extra scope to manage the lifecycle of the future
    {
        // use a lambda to start the member function of object
        auto future = std::async(std::launch::async, [&] { object.do_something(); });
        std::cout << "do something started\n";
        // the destructor of future will synchronize with the end of the thread
    }

    std::cout << "\nwork done\n";
    // safe to go out of scope now and destroy the object
    return 0;
}
Everything you assumed and asked in your question is right: the variable will always be freed.
But your code has one big problem. Let's look at your example:
int main()
{
    object = new Object();
    thread th(do_something);
    th.detach();
    Sleep(1000);
    delete object;
    object = nullptr;
    return 0;
}
You create a thread that will call do_something(). But let's assume that right after the thread's creation the kernel interrupts it and does something else, like updating the stackoverflow tab in your web browser with this answer. So do_something() isn't called yet and won't be for a while, since we all know how slow browsers are.
Meanwhile, the main function sleeps for 1 second and then calls delete object;. That calls Object::~Object(), which acquires the mutex, deletes var, releases the mutex and finally frees the object.
Now assume that right at this point the kernel interrupts the main thread and schedules the other thread. object still holds the address of the object that was just deleted. So your other thread acquires the mutex, sees that object is not nullptr, accesses it, and BOOM.
PS: object isn't atomic so calling object = nullptr in main() will also race with if (object).
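As an aside, making the pointer itself atomic would remove that particular race on the pointer, though it would still not protect the lifetime of the pointed-to object; a sketch (reusing the Object class from the question):

#include <atomic>

std::atomic<Object*> object{nullptr};

void reader() {
    Object* p = object.load(); // race-free read of the pointer value
    if (p) {
        // ... but *p may still be deleted underneath us: the atomic protects
        // the pointer, not the pointee, so the mutex (or shared_ptr) is still needed
    }
}

void writer() {
    delete object.exchange(nullptr); // race-free swap of the pointer, then delete
}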

About shared_mutex and shared_ptr across multiple threads

I implemented code in which multiple instances running on different threads read other instances' data, using a reader-writer lock and shared_ptr. It seemed fine, but I am not 100% sure about that, and I came up with some questions about the usage of those.
Detail
I have multiple instances of a class called Chunk, and each instance does some calculations in a dedicated thread. A chunk needs to read neighbouring chunks' data as well as its own, but it doesn't write to neighbours' data, so a reader-writer lock is used. Also, neighbours can be set at runtime. For example, I might want to set a different neighbour chunk at runtime, sometimes just nullptr. It is possible to delete a chunk at runtime, too. Raw pointers could be used, but I thought shared_ptr and weak_ptr would be better for keeping track of lifetimes: the chunk's own data in a shared_ptr and the neighbours' data in a weak_ptr.
I provided a simpler version of my code below. ChunkData holds the data and a mutex for it. I use InitData for data initialization, and the DoWork function is called in a dedicated thread after that. Other functions can be called from the main thread.
This seems to work, but I am not so confident, especially about the use of shared_ptr across multiple threads.
What happens if one thread calls shared_ptr's reset() (in the ctor and InitData) while another uses it through weak_ptr's lock() (in DoWork)? Does this need a lock on dataMutex or chunkMutex?
How about the copy (in SetNeighbour)? Do I need locks for this as well?
I think the other parts are OK, but please let me know if you find anything dangerous. I appreciate it.
By the way, I considered storing a shared_ptr to Chunk instead of ChunkData, but decided against it because internal code, which I don't manage, has a GC system and can delete a pointer to a Chunk when I don't expect it.
class Chunk
{
public:
    class ChunkData
    {
    public:
        shared_mutex dataMutex; // mutex to read/write data
        int* data;
        int size;

        ChunkData() : data(nullptr) { }

        ~ChunkData()
        {
            if (data)
            {
                delete[] data;
                data = nullptr;
            }
        }
    };

private:
    mutex chunkMutex; // mutex to read/write member variables
    shared_ptr<ChunkData> chunkData;
    weak_ptr<ChunkData> neighbourChunkData;
    string name; // stored so InitData can print it
    string result;

public:
    Chunk(string _name)
        : chunkData(make_shared<ChunkData>()), name(_name)
    {
    }

    ~Chunk()
    {
        EndProcess();
        unique_lock lock(chunkMutex); // is this needed?
        chunkData.reset();
    }

    void InitData(int size)
    {
        ChunkData* NewData = new ChunkData();
        NewData->size = size;
        NewData->data = new int[size];
        {
            unique_lock lock(chunkMutex); // is this needed?
            chunkData.reset(NewData);
            cout << "init chunk " << name << endl;
        }
    }

    // This is executed in another thread, e.g. thread t(&Chunk::DoWork, this);
    void DoWork()
    {
        lock_guard lock(chunkMutex); // we modify some members such as result (string) while reading chunk data, so we need this
        if (chunkData)
        {
            shared_lock readLock(chunkData->dataMutex);
            if (chunkData->data)
            {
                // read chunkData->data[i] and modify some members such as result (string)
                for (int i = 0; i < chunkData->size; ++i)
                {
                    // Is this fine, or should I write to result outside of the readLock scope?
                    result += to_string(chunkData->data[i]) + " ";
                }
            }
        }

        // does this work?
        if (shared_ptr<ChunkData> neighbour = neighbourChunkData.lock())
        {
            shared_lock readLock(neighbour->dataMutex);
            if (neighbour->data)
            {
                // read neighbour->data[i] and modify some members as above
            }
        }
    }

    shared_ptr<ChunkData> GetChunkData()
    {
        unique_lock lock(chunkMutex);
        return chunkData;
    }

    void SetNeighbour(Chunk* neighbourChunk)
    {
        if (neighbourChunk)
        {
            // safe?
            shared_ptr<ChunkData> newNeighbourData = neighbourChunk->GetChunkData();
            unique_lock lock(chunkMutex); // lock for chunk properties
            {
                shared_lock readLock(newNeighbourData->dataMutex); // not sure if this is needed
                neighbourChunkData = newNeighbourData;
            }
        }
    }

    int GetDataAt(int index)
    {
        shared_lock readLock(chunkData->dataMutex);
        if (chunkData->data && 0 <= index && index < chunkData->size)
        {
            return chunkData->data[index];
        }
        return 0;
    }

    void SetDataAt(int index, int element)
    {
        unique_lock writeLock(chunkData->dataMutex);
        if (chunkData->data && 0 <= index && index < chunkData->size)
        {
            chunkData->data[index] = element;
        }
    }
};
Edit 1
I added more detail to the DoWork function. Chunk data is read and the chunk's member variables are modified in that function.
After Homer512's answer, I came up with further questions.
A) In the DoWork function I write to a member variable inside a read lock. Should I only read data inside a read-lock scope, and, if I need to modify other data based on the read data, do that outside of the read lock? For example, copy the whole array to a local variable inside the read lock, then modify other members outside of it using the local copy.
B) I followed Homer512's advice and modified GetDataAt/SetDataAt as below. I read/write-lock chunkData->dataMutex before unlocking chunkMutex; I also do this in the DoWork function. Should I instead take the locks separately? For example, assign chunkData to a local shared_ptr under a chunkMutex lock, unlock chunkMutex, and then read/write-lock the local variable's dataMutex and access the data.
int GetDataAt(int index)
{
    lock_guard chunkLock(chunkMutex);
    shared_lock readLock(chunkData->dataMutex);
    if (chunkData->data && 0 <= index && index < chunkData->size)
    {
        return chunkData->data[index];
    }
    return 0;
}

void SetDataAt(int index, int element)
{
    lock_guard chunkLock(chunkMutex);
    unique_lock writeLock(chunkData->dataMutex);
    if (chunkData->data && 0 <= index && index < chunkData->size)
    {
        chunkData->data[index] = element;
    }
}
I have several remarks:
~ChunkData: You could change your data member from int* to unique_ptr<int[]> to get the same result without an explicit destructor. Your code is correct though, just less convenient.
~Chunk: I don't think you need a lock or call the reset method. By the time the destructor runs, by definition, no one should have a reference to the Chunk object. So the lock can never be contested. And reset is unnecessary because the shared_ptr destructor will handle that.
InitData: Yes, the lock is needed because InitData can race with DoWork. You could avoid this by moving InitData to the constructor but I assume there are reasons for this division. You could also change the shared_ptr to std::atomic<std::shared_ptr<ChunkData> > to avoid the lock.
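A sketch of that lock-free variant, assuming C++20's std::atomic<std::shared_ptr> specialization is available (ChunkData as in the question):

#include <atomic>
#include <memory>

std::atomic<std::shared_ptr<ChunkData>> chunkData;

void InitData(int size)
{
    auto newData = std::make_shared<ChunkData>();
    newData->size = size;
    newData->data = new int[size];
    chunkData.store(std::move(newData)); // atomically publish, no chunkMutex needed
}

std::shared_ptr<ChunkData> snapshot()
{
    return chunkData.load(); // atomically grab a reference for local use
}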
It is more efficient to write InitData like this:
void InitData(int size)
{
    std::shared_ptr<ChunkData> NewData = std::make_shared<ChunkData>();
    NewData->size = size;
    NewData->data = new int[size]; // or std::make_unique<int[]>(size)
    {
        std::lock_guard<std::mutex> lock(chunkMutex);
        chunkData.swap(NewData);
    }
    // deletes the old chunkData outside the locked region if it was initialized before
}
make_shared avoids an additional memory allocation for the reference counter. This also moves all allocations and deallocations out of the critical section.
DoWork: Your comment says "read chunkData->data[i] and modify some members". You only take a shared_lock but say that you modify members. Well, which is it, reading or writing? Or do you mean that you modify Chunk but not ChunkData, with Chunk being protected by its own mutex?
SetNeighbour: You need to lock both your own chunkMutex and the neighbour's. You should not lock both at the same time, to avoid the dining philosophers problem (though std::lock solves this).
void SetNeighbour(Chunk* neighbourChunk)
{
    if (!neighbourChunk)
        return;

    std::shared_ptr<ChunkData> newNeighbourData;
    {
        std::lock_guard<std::mutex> lock(neighbourChunk->chunkMutex);
        newNeighbourData = neighbourChunk->chunkData;
    }

    std::lock_guard<std::mutex> lock(this->chunkMutex);
    this->neighbourChunkData = newNeighbourData;
}
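If you do want to hold both locks at once, std::scoped_lock (C++17) applies the std::lock deadlock-avoidance algorithm mentioned above; a sketch (it assumes neighbourChunk != this, since locking the same mutex twice is undefined):

void SetNeighbour(Chunk* neighbourChunk)
{
    if (!neighbourChunk || neighbourChunk == this)
        return;

    // locks both mutexes without risking the lock-order deadlock
    std::scoped_lock lock(this->chunkMutex, neighbourChunk->chunkMutex);
    this->neighbourChunkData = neighbourChunk->chunkData;
}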
GetDataAt and SetDataAt: You need to lock chunkMutex. Otherwise you might race with InitData. There is no need to use std::lock because the order of locks is never swapped around.
EDIT 1:
DoWork: The line if (shared_ptr<ChunkData> neighbour = neighbourChunkData.lock()) doesn't keep the neighbour alive. Move the variable declaration out of the if to keep the reference.
EDIT: Alternative design proposal
What bothers me is that your DoWork may be unable to proceed if InitData is still running or waiting to run. How do you want to deal with this? I suggest you make it possible to wait until the work can be done. Something like this:
class Chunk
{
    std::mutex chunkMutex;
    std::shared_ptr<ChunkData> chunkData;
    std::weak_ptr<ChunkData> neighbourChunkData;
    std::condition_variable chunkSet;

    void waitForChunk(std::unique_lock<std::mutex>& lock)
    {
        while (!chunkData)
            chunkSet.wait(lock);
    }

public:
    // modified version of my code above
    void InitData(int size)
    {
        std::shared_ptr<ChunkData> NewData = std::make_shared<ChunkData>();
        NewData->size = size;
        NewData->data = new int[size]; // or std::make_unique<int[]>(size)
        {
            std::lock_guard<std::mutex> lock(chunkMutex);
            chunkData.swap(NewData);
        }
        chunkSet.notify_all();
    }

    void DoWork()
    {
        std::unique_lock<std::mutex> ownLock(chunkMutex);
        waitForChunk(ownLock); // blocks until the other thread finishes InitData
        {
            std::shared_lock readLock(chunkData->dataMutex);
            ...
        }

        std::shared_ptr<ChunkData> neighbour = neighbourChunkData.lock();
        if (!neighbour)
            return;
        std::shared_lock readLock(neighbour->dataMutex);
        ...
    }

    void SetNeighbour(Chunk* neighbourChunk)
    {
        if (!neighbourChunk)
            return;

        std::shared_ptr<ChunkData> newNeighbourData;
        {
            std::unique_lock<std::mutex> lock(neighbourChunk->chunkMutex);
            neighbourChunk->waitForChunk(lock); // wait until the neighbour has finished InitData
            newNeighbourData = neighbourChunk->chunkData;
        }

        std::lock_guard<std::mutex> ownLock(this->chunkMutex);
        this->neighbourChunkData = std::move(newNeighbourData);
    }
};
The downside to this is that you could deadlock if InitData is never called or if it failed with an exception. There are ways around this, such as using an std::shared_future, which knows whether it is valid (set when InitData is scheduled) and whether it failed (it records the exception of the associated promise or packaged_task).
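A sketch of that shared_future idea (my own illustration, not code from the answer): InitData fulfills a promise exactly once, and any consumer can wait on the shared_future; if InitData threw, get() rethrows the stored exception instead of blocking forever:

#include <future>
#include <memory>

class Chunk
{
    std::promise<std::shared_ptr<ChunkData>> chunkPromise;
    std::shared_future<std::shared_ptr<ChunkData>> chunkReady{chunkPromise.get_future()};

public:
    void InitData(int size)
    {
        try {
            auto newData = std::make_shared<ChunkData>();
            newData->size = size;
            newData->data = new int[size];
            chunkPromise.set_value(std::move(newData)); // publish exactly once
        } catch (...) {
            chunkPromise.set_exception(std::current_exception());
        }
    }

    void DoWork()
    {
        // waits until InitData has run; rethrows if InitData failed
        std::shared_ptr<ChunkData> data = chunkReady.get();
        // ... read data->data under data->dataMutex as before ...
    }
};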

Does std::mutex favor the thread that owns it?

I was trying to understand how a spinlock mutex works, so I wrote a simple program (shown below) which measures the interleaving of instructions from different threads under the protection of a spinlock (or std::) mutex.
Surprisingly, it shows (in gcc at least) that std::mutex (in contrast to the spinlock mutex) seems to favor the thread that owns it, leading to very little instruction interleaving (at best 5%), unless the instruction in question is very fast (like incrementing a counter), in which case we can get even 50%. The spinlock mutex gives at least 80% (and typically more than 90%).
Is this a well-known fact, or does my code below have a bug?
I mean, I know the rule of thumb saying that a mutex should always be locked for the smallest amount of time. But I was convinced that this is so because we want to reduce the serialization of threads, not because std::mutex favors the owning thread...
Here is the code:
#include <atomic>
#include <thread>
#include <iostream>
#include <chrono>
#include <mutex>

class SpinLockMutex {
    std::atomic_flag m_flag = ATOMIC_FLAG_INIT;
public:
    void lock()   { while (m_flag.test_and_set(std::memory_order_acquire)) /* do nothing */ ; }
    void unlock() { m_flag.clear(std::memory_order_release); }
}; // class SpinLockMutex

// ******************************************
// std::mutex vs SpinLockMutex
//SpinLockMutex globalMutex;
std::mutex globalMutex;
// ******************************************

// This class helps to start threads at the same time:
class Starter {
    mutable std::mutex m_m;
    bool m_ready = false;
public:
    bool isReady() const {
        std::lock_guard<std::mutex> guard(m_m);
        return m_ready;
    }
    void start() {
        std::this_thread::sleep_for(std::chrono::seconds(3));
        std::lock_guard<std::mutex> guard(m_m);
        m_ready = true;
    }
}; // class Starter

constexpr std::size_t LOOP_SIZE = 100;
std::size_t previous_thread_repeated = 0;
Starter starter;

void mainFcnForThread()
{
    static std::thread::id previous_thread_id = std::this_thread::get_id();
    while (!starter.isReady())
        ; // do nothing
    for (std::size_t i = 0; i != LOOP_SIZE; ++i) {
        globalMutex.lock();
        if (previous_thread_id == std::this_thread::get_id()) {
            ++previous_thread_repeated;
            std::this_thread::sleep_for(std::chrono::microseconds(100));
        }
        previous_thread_id = std::this_thread::get_id();
        globalMutex.unlock();
    }
} // void mainFcnForThread

int main()
{
    std::thread t1(mainFcnForThread);
    std::thread t2(mainFcnForThread);
    starter.start();
    t1.join();
    t2.join();
    std::cout << double(previous_thread_repeated) / (2 * LOOP_SIZE) << '\n';
    return 0;
}
A mutex makes zero guarantees about fairness.
Unlocking a mutex does not suspend your current thread. Attempting to lock a mutex does not say "wait, someone else has been waiting longer, they should get a go at it".
Blocking on a mutex can sometimes put your thread to sleep.
After you unlock a mutex, you aren't "the owning thread"; you are probably a running thread. And a mutex can (and apparently does) favor running threads over threads that are suspended.
Implementing "fairness" can be done on top of the C++ synchronization primitives, but it isn't free, and C++ aims not to make you pay for anything you don't ask for.

Avoiding deadlock in concurrent waiting object

I've implemented a "Ticket" class which is shared as a shared_ptr between multiple threads.
The program flow is like this:
parallelQuery() is called to start a new query job. A shared instance of Ticket is created.
The query is split into multiple tasks, and each task is enqueued on a worker thread (this part is important, otherwise I'd just join the threads and be done). Each task gets the shared ticket.
ticket->waitUntilDone() is called to wait for all tasks of the job to complete.
When one task is done it calls the done() method on the ticket.
When all tasks are done, the ticket is unlocked, and the result data from the tasks is aggregated and returned from parallelQuery().
In pseudo code:
std::vector<T> parallelQuery(std::string str) {
    auto ticket = std::make_shared<Ticket>(2);
    auto task1 = std::make_unique<Query>(ticket, str + "a");
    addTaskToWorker(task1);
    auto task2 = std::make_unique<Query>(ticket, str + "b");
    addTaskToWorker(task2);
    ticket->waitUntilDone();
    auto result = aggregateData(task1, task2);
    return result;
}
My code works. But I wonder whether it is theoretically possible that it leads to a deadlock when the unlocking of the mutex is executed right before it gets locked again by the waiter thread calling waitUntilDone().
Is this a possibility, and how do I avoid this trap?
Here is the complete Ticket class, note the execution order example comments related to the problem description above:
#include <mutex>
#include <atomic>

class Ticket {
public:
    Ticket(int numTasks = 1) : _numTasks(numTasks), _done(0), _canceled(false) {
        _mutex.lock();
    }

    void waitUntilDone() {
        _doneLock.lock();
        if (_done != _numTasks) {
            _doneLock.unlock(); // Execution order 1: "waiter" thread is here
            _mutex.lock();      // Execution order 3: "waiter" thread is now in a deadlock?
        }
        else {
            _doneLock.unlock();
        }
    }

    void done() {
        _doneLock.lock();
        _done++;
        if (_done == _numTasks) {
            _mutex.unlock(); // Execution order 2: "task1" thread unlocks the mutex
        }
        _doneLock.unlock();
    }

    void cancel() {
        _canceled = true;
        _mutex.unlock();
    }

    bool wasCanceled() {
        return _canceled;
    }

    bool isDone() {
        return _done >= _numTasks;
    }

    int getNumTasks() {
        return _numTasks;
    }

private:
    std::atomic<int> _numTasks;
    std::atomic<int> _done;
    std::atomic<bool> _canceled;

    // mutex used for the caller wait state
    std::mutex _mutex;
    // mutex used to safeguard the done counter with the lock condition in waitUntilDone
    std::mutex _doneLock;
};
One possible solution, which just came to my mind while editing the question, is to put _done++; before taking _doneLock. Would that be enough?
Update
I've updated the Ticket class based on the suggestions provided by Tomer and Phil1970. Does the following implementation avoid the mentioned pitfalls?
class Ticket {
public:
    Ticket(int numTasks = 1) : _numTasks(numTasks), _done(0), _canceled(false) { }

    void waitUntilDone() {
        std::unique_lock<std::mutex> lock(_mutex);
        // loop to avoid spurious wakeups
        while (_done != _numTasks && !_canceled) {
            _condVar.wait(lock);
        }
    }

    void done() {
        std::unique_lock<std::mutex> lock(_mutex);
        // just bail out in case we call done more often than needed
        if (_done == _numTasks) {
            return;
        }
        _done++;
        _condVar.notify_one();
    }

    void cancel() {
        std::unique_lock<std::mutex> lock(_mutex);
        _canceled = true;
        _condVar.notify_one();
    }

    const bool wasCanceled() const {
        return _canceled;
    }

    const bool isDone() const {
        return _done >= _numTasks;
    }

    const int getNumTasks() const {
        return _numTasks;
    }

private:
    std::atomic<int> _numTasks;
    std::atomic<int> _done;
    std::atomic<bool> _canceled;
    std::mutex _mutex;
    std::condition_variable _condVar;
};
Don't write your own wait methods; use std::condition_variable instead:
https://en.cppreference.com/w/cpp/thread/condition_variable
Mutex usage
Generally, a mutex should protect a given region of code: lock, do the work, unlock. In your class, you have multiple methods where some lock _mutex while others unlock it. This is very error-prone: if you call the methods in the wrong order, you might well end up in an inconsistent state. What happens if a mutex is locked twice? Or unlocked when it is already unlocked?
The other thing to be aware of with mutexes is that with multiple mutexes you can easily get a deadlock if you need to lock both but don't do it in a consistent order. Suppose thread A locks mutex 1 first and then mutex 2, and thread B locks them in the opposite order (mutex 2 first). There is a possibility that something like this occurs:
Thread A locks mutex 1.
Thread B locks mutex 2.
Thread A wants to lock mutex 2 but cannot, as it is already locked.
Thread B wants to lock mutex 1 but cannot, as it is already locked.
Both threads will wait forever.
So in your code, you should at least have some checks to ensure proper usage. For example, you should verify _canceled before unlocking the mutex to ensure cancel is called only once.
Solution
I will just give some ideas.
Declare a mutex and a condition_variable to manage the done condition in your class:
std::mutex doneMutex;
std::condition_variable doneCondition;
Then waitUntilDone would look like:
void waitUntilDone()
{
    std::unique_lock<std::mutex> lk(doneMutex);
    doneCondition.wait(lk, [this] { return isDone() || wasCanceled(); });
}
And the done function would look like:
void done()
{
    std::lock_guard<std::mutex> lk(doneMutex);
    _done++;
    if (_done == _numTasks)
    {
        doneCondition.notify_one();
    }
}
And the cancel function would become:
void cancel()
{
    std::lock_guard<std::mutex> lk(doneMutex);
    _canceled = true;
    doneCondition.notify_one();
}
As you can see, you only have one mutex now, so you basically eliminate the possibility of a deadlock.
Variable naming
I suggest you not use lock in the name of your mutex, since it is confusing:
std::mutex someMutex;
std::lock_guard<std::mutex> someLock(someMutex); // std::unique_lock when needed
That way, it is far easier to know which variable refers to the mutex and which to the lock on that mutex.
Good reading
If you are serious about multithreading, then you should get this book:
C++ Concurrency in Action: Practical Multithreading, by Anthony Williams
Code Review (added section)
Essentially the same code has been posted to Code Review: https://codereview.stackexchange.com/questions/225863/multithreading-ticket-class-to-wait-for-parallel-task-completion/225901#225901
I have put an answer there that includes some extra points.
You do not need to use a mutex to operate on atomic values.
UPD
My answer to the main question was wrong, so I deleted it.
You can also use a simple (non-atomic) int _numTasks;. And you don't need a shared pointer: just create the Ticket on the stack and pass a pointer,
Ticket ticket(2);
auto task1 = std::make_unique<Query>(&ticket, str + "a");
addTaskToWorker(task1);
or a unique_ptr if you like:
auto ticket = std::make_unique<Ticket>(2);
auto task1 = std::make_unique<Query>(ticket.get(), str + "a");
addTaskToWorker(task1);
because the shared pointer can be cut away by Occam's razor :)

How to correctly use a mutex as a parameter to a member function in a thread?

My problem is that I don't know how to properly use a mutex. I understand how it works theoretically, but I don't know why it doesn't work in my code. I thought that if I used a mutex on a variable, it would be blocked until it was unlocked. Nevertheless, it seems I still have a data race.
I tried defining a mutex in the class, and a mutex in main which I pass by reference. Somehow neither of these works.
class test {
public:
    void dosmth(std::mutex &a);
    int getT() { return t; }
private:
    int t = 0;
};

void test::dosmth(std::mutex &a) {
    for (;;) {
        a.lock();
        t++;
        if (t == 1000) {
            t = 0;
        }
        a.unlock();
    }
}

int main() {
    test te;
    std::mutex help;
    std::thread t(&test::dosmth, std::addressof(te), std::ref(help));
    for (;;) {
        for (int i = 0; i < te.getT(); ++i) {
            std::cout << te.getT() << std::endl;
        }
    }
}
The expected result is just that I get some output, so I know it works.
As Michael mentioned, you should synchronise the reader and the writer to avoid undefined behaviour. Instead of passing the mutex as an argument, a common pattern is to make the mutex a member of the object (te), and lock and unlock every time you enter a member function that reads or modifies the internal state of the object (prefer lock_guard instead of manually locking and unlocking). Here is some pseudo-code:
class Foo {
    std::mutex m; // reader and writer are sync'ed on the same mutex
    int data_to_sync;
public:
    int read() {
        lock_guard<mutex> lg(m); // RAII lock
        return data_to_sync;
        // automagically released upon exit
    }
    void write() {
        lock_guard<mutex> lg(m);
        data_to_sync++;
    }
};
A mutex can only guarantee mutual exclusion if said mutex is used to regulate entry to all critical sections of code where a given object would be accessed concurrently. In your case, you have your second thread modify the value of the object te.t while your main thread is reading the value of the same object. Only one thread, however, is using a mutex to protect the access to te.t. The object te.t is not atomic. Therefore, you have a data race and, thus, undefined behavior [intro.races]/21.
You also have to lock and unlock the mutex in your for(;;) loop in main, e.g.:
for (;;) {
    help.lock();
    for (int i = 0; i < te.getT(); ++i) {
        std::cout << te.getT() << std::endl;
    }
    help.unlock();
}
or better, using std::lock_guard:
for (;;) {
    std::lock_guard lock(help);
    for (int i = 0; i < te.getT(); ++i) {
        std::cout << te.getT() << std::endl;
    }
}
I thought if I use a mutex on a var...
You don't use a mutex "on a var." Locking a mutex prevents other threads from locking the same mutex at the same time, but it does not stop other threads from accessing any particular variable(s).
If you want to use a mutex to protect a variable (or more typically, several variables) from being accessed by more than one thread at the same time, then it's up to you to ensure that you do not write any code that accesses the variables without locking the mutex first.