Mutex as class member using pthreads - c++

I have three classes; let's call them A, B, and HardwareDriver. There is one instance of each class. a and b run in two different threads. They both access the hardware via an instance of HardwareDriver. Something like:
class A {
public:
    static void* startA(void* arg);  // thread entry; arg points to the A instance
    // ...
};

class B {
public:
    static void* startB(void* arg);  // thread entry; arg points to the B instance
    // ...
};

class HardwareDriver {
public:
    int accessHardware();
};

A a;
B b;
HardwareDriver hd;
pthread_t aThread;
pthread_t bThread;

int main() {
    pthread_create(&aThread, NULL, &A::startA, &a);
    pthread_create(&bThread, NULL, &B::startB, &b);
    while (...) { }  // placeholder
    return 0;
}
The hardware can't be accessed by a and b at the same time, so I need to protect the code with a mutex. I'm new to multithreading, but intuitively I would lock the mutex in the methods of A and B right before they request hardware access by calling hd.accessHardware().
Now I'm wondering if it's possible to perform the locking in hd.accessHardware() for more encapsulation. Would this still be thread safe?

Yes, you can have a mutex in your HardwareDriver class and a critical section inside your class method. It would still be safe. Just remember that if you copy the object, you will also get a copy of the mutex.

I would lock the mutex in the methods of A and B right before they request hardware access by calling hd.accessHardware().
This creates a risk of forgetting to lock that mutex prior to calling hd.accessHardware().
Now I'm wondering if it's possible to perform the locking in hd.accessHardware() for more encapsulation. Would this still be thread safe?
That removes the risk of forgetting to lock that mutex and makes your API harder to misuse. And that would still be thread safe.
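For illustration, a sketch of what that encapsulation could look like with a pthread mutex held as a member of the driver (doAccess() here is a hypothetical placeholder for the real hardware I/O):

#include <pthread.h>

class HardwareDriver {
public:
    HardwareDriver()  { pthread_mutex_init(&m_mutex, NULL); }
    ~HardwareDriver() { pthread_mutex_destroy(&m_mutex); }

    int accessHardware() {
        pthread_mutex_lock(&m_mutex);    // serialize all hardware access
        int result = doAccess();         // the actual hardware I/O (hypothetical helper)
        pthread_mutex_unlock(&m_mutex);
        return result;
    }

private:
    int doAccess();                      // hypothetical; stands in for the real work
    pthread_mutex_t m_mutex;

    // Copying would duplicate the mutex, so forbid it.
    HardwareDriver(const HardwareDriver&);
    HardwareDriver& operator=(const HardwareDriver&);
};

Callers in A and B then just call hd.accessHardware() and never touch the mutex themselves.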

When doing multithreaded programming in C/C++, any data that at least one thread WRITES must be locked for every READ or WRITE; data that is only ever read can stay lock-free.
The lock operations should have the smallest scope possible. If TWO objects access a single resource, you need a SINGLE semaphore/mutex; using two exposes you to dangerous deadlocks.
So, in your example you should add a mutex inside the HardwareDriver class and lock/unlock it every time you read or write any class data.
You do not need to lock local data (stack-allocated local variables), and you do not need locking in reentrant methods.
Since you are writing C++ and it is 2017, I suggest you use std::thread and std::mutex instead of pthreads directly. On Linux the native C++ thread support is a thin wrapper over pthreads, so the overhead of using it is negligible, even on embedded targets.
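A minimal sketch of that suggestion, assuming the same names as in the question (the lambda bodies stand in for whatever a and b actually do):

#include <mutex>
#include <thread>

class HardwareDriver {
public:
    int accessHardware() {
        std::lock_guard<std::mutex> lock(m_mutex);  // released automatically on return
        // ... talk to the hardware ...
        return 0;
    }
private:
    std::mutex m_mutex;  // std::mutex is non-copyable, so the class is too
};

HardwareDriver hd;

int main() {
    std::thread aThread([] { /* a's work, calling hd.accessHardware() */ });
    std::thread bThread([] { /* b's work, calling hd.accessHardware() */ });
    aThread.join();
    bThread.join();
    return 0;
}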

Related

Should a critical section or mutex really be a member variable, or when should it be?

I have seen code where a mutex or critical section is declared as a member variable of the class to make it thread safe, something like the following:
#include <mutex>

class ThreadSafeClass
{
public:
    ThreadSafeClass() { x = new int; }
    ~ThreadSafeClass() { delete x; }

    void reallocate()
    {
        std::lock_guard<std::mutex> lock(m);
        delete x;
        x = new int;
    }

    int* x;
    std::mutex m;
};
But doesn't that make it thread safe only if the same object is shared by multiple threads? In other words, if each thread creates its own instance of this class, the instances are completely independent, their member variables never conflict with each other, and synchronization isn't even needed in that case!?
It appears to me that defining the mutex as a member variable really reduces synchronization to the cases where the same object is shared by multiple threads. It doesn't really make the class any thread safer if each thread has its own copy of the class (for example, if the class were to access other global objects). Is this a correct assessment?
If you can guarantee that any given object will only be accessed by one thread, then a mutex is an unnecessary expense. This, however, must be well documented in the class's contract to prevent misuse.
PS: new and delete have their own synchronization mechanisms, so even without a lock they will create contention.
EDIT: The more you keep threads independent from each other, the better, because that eliminates the need for locks. However, if your class works heavily with a shared resource (e.g. a database, file, socket, memory, etc.), then having a per-thread instance is of little advantage, so you might as well share one object between threads. Real independence is achieved by having different threads work with separate memory locations or resources.
If you will have potentially long waits on your locks, it might be a good idea to have a single instance running in its own thread that takes "jobs" from a synchronized queue.
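A minimal sketch of such a synchronized queue, assuming C++11 (the Job type and the worker loop are illustrative, not part of the original answer):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

using Job = std::function<void()>;  // illustrative job type

class JobQueue {
public:
    void push(Job job) {
        {
            std::lock_guard<std::mutex> lock(m_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

    Job pop() {  // blocks until a job is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !jobs_.empty(); });
        Job job = std::move(jobs_.front());
        jobs_.pop();
        return job;
    }

private:
    std::queue<Job> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
};

// Usage: one worker thread owns the shared resource and drains the queue.
// JobQueue queue;
// std::thread worker([&] { for (;;) queue.pop()(); });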

Put all database operations in a specific thread using Qt

I have a console application where, after a timeout signal, a 2D matrix (15*1200) should be parsed element by element and inserted into a database. Since the operation is time-consuming, I perform the insertion in a new thread using QtConcurrent::run.
However, due to the timeout signals, several threads may start before the previous one has finished, so multiple accesses to the database may occur.
As a solution, I was trying to buffer all database operations in a specific thread, in other words, to assign a specific thread to the database class, but I do not know how to do so.
Your problem is a classical concurrent data analysis problem. Have you tried using std::mutex? Here's how you use it:
You create a std::mutex variable (mutex = mutual exclusion) that's accessible by all the relevant threads.
std::mutex myLock;
and then, let's say that the function that processes the data looks like this:
void processData(const Data& myData)
{
    ProcessedData d = parseData(myData);
    insertToDatabase(d);
}
Now from what I understand, you're afraid that multiple threads will call insertToDatabase(d) simultaneously. Now to solve this issue, simply do the following:
void processData(const Data& myData)
{
    ProcessedData d = parseData(myData);
    myLock.lock();
    insertToDatabase(d);
    myLock.unlock();
}
Now with this, if another thread tries to enter the same critical section, it will block until the thread holding the lock is finished. So threads are mutually excluded from making the call together.
More about this:
Caveats:
This mutex object must be the same one that all the threads see, otherwise it is useless. So either make it global (a bad idea, but it will work), or put it in the class that will do the calls.
Mutex objects are non-copyable. So if you include one in a class, you should either make the mutex object a pointer, or define a copy constructor for that class that does not copy the mutex, or make your class non-copyable using delete:
class MyClass
{
    // ... stuff
    MyClass(const MyClass& src) = delete;
    // ... other stuff
};
There are fancier ways to use std::mutex, including std::lock_guard and std::unique_lock, which take ownership of the mutex and do the locking for you. These are good to use if you know that the call insertToDatabase(d); could throw an exception. In that case, the plain lock()/unlock() code above would never unlock the mutex, and the program would deadlock.
In the example I provided, here's how you use lock_guard:
void processData(const Data& myData)
{
    ProcessedData d = parseData(myData);
    std::lock_guard<std::mutex> guard(myLock);
    insertToDatabase(d);
    // it will unlock automatically at the end of this function, when the object "guard" is destroyed
}
Be aware that calling lock() twice by the same thread has undefined behavior.
Everything I did above is C++11.
If you're going to deal with multiple threads, I recommend that you start reading about data management with multiple threads. This is a good book.
If you insist on using Qt stuff, here's the same thing from Qt... QMutex.
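For reference, the same pattern with Qt types might look like this (a sketch using QMutex and the RAII helper QMutexLocker; Data, ProcessedData, parseData and insertToDatabase are the same illustrative names as above):

#include <QMutex>
#include <QMutexLocker>

QMutex myLock;  // shared by all threads that touch the database

void processData(const Data& myData)
{
    ProcessedData d = parseData(myData);
    QMutexLocker locker(&myLock);  // locks here, unlocks when locker goes out of scope
    insertToDatabase(d);
}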

pthread mutex lock and unlock per variable

I'm wondering what the best practice is for locking and unlocking mutexes for variables within an object that is shared between threads.
This is what I have been doing, and it seems to work just fine so far; I'm just wondering whether it is excessive or not:
#include <pthread.h>

class sharedobject
{
private:
    bool m_Var1;
    pthread_mutex_t var1mutex;

public:
    sharedobject()
    {
        m_Var1 = false;
        pthread_mutex_init(&var1mutex, NULL);  // PTHREAD_MUTEX_INITIALIZER only works as an initializer, not in an assignment
    }
    ~sharedobject()
    {
        pthread_mutex_destroy(&var1mutex);
    }
    bool GetVar1()
    {
        pthread_mutex_lock(&var1mutex);
        bool temp = m_Var1;
        pthread_mutex_unlock(&var1mutex);
        return temp;
    }
    void SetVar1(bool status)
    {
        pthread_mutex_lock(&var1mutex);
        m_Var1 = status;
        pthread_mutex_unlock(&var1mutex);
    }
};
This isn't my actual code, but it shows how I am using mutexes for every variable that is shared in an object between threads. The reason I don't have a mutex for the entire object is that one thread might take seconds to complete an operation on part of the object while another thread checks the status of the object, and yet another thread gets data from the object.
My question is: is it good practice to create a mutex for every variable within an object that is shared between threads, and then lock and unlock it whenever that variable is read or written?
I use trylock for variables whose status I'm only checking (so I don't create extra threads while the variable is being processed, and don't make the program wait to acquire a lock).
I haven't had a lot of experience working with threading. I would like to make the program thread safe, but it also needs to perform as well as possible.
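For concreteness, the trylock-style status check mentioned above might look roughly like this as an extra member of the sharedobject class (only a sketch):

// Non-blocking status check: returns immediately if another thread holds the lock.
bool TryGetVar1(bool* out)
{
    if (pthread_mutex_trylock(&var1mutex) != 0)
        return false;               // lock is busy; the caller can try again later
    *out = m_Var1;
    pthread_mutex_unlock(&var1mutex);
    return true;
}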
If the members you're protecting are read-write and may be accessed at any time by more than one thread, then what you're doing is not excessive - it's necessary.
If you can prove that a member will not change (is immutable) then there is no need to protect it with a mutex.
Many people prefer multi-threaded solutions where each thread has an immutable copy of data rather than those in which many threads access the same copy. This eliminates the need for memory barriers and very often improves execution times and code safety.
Your mileage may vary.

C++ Singleton in multithread

I am new to multithreaded programming, but I am studying a big project by someone else. In the code he has a singleton class with some public member variables and a member mutex. He uses this singleton in different threads like:
singleton::instance()->mutex.lock();
singleton::instance()->value = getval();
singleton::instance()->mutex.release();
Is this the safe way to do it?
If not what is the proper way of read/write the value in singleton?
No, it is not safe to do so.
The problem is that the mutex is handed out to the user, so there is no guarantee that the lock will be released. For example, what happens if getval() throws an exception?
The proper way to do so would be to embed mutex use inside the API of your singleton. For example:
void singleton::setvalue(int val) { // example supposing value is an int
    std::lock_guard<std::mutex> mylck(mutex);
    value = val;
}
In this example, a local std::lock_guard is used. This object locks the mutex and unlocks it on destruction. This makes sure that the mutex is unlocked in every case: whenever the function returns, and even if an exception is thrown.
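A matching getter could follow the same pattern (again a sketch, assuming value is an int and mutex is the member std::mutex):

int singleton::getvalue() {
    std::lock_guard<std::mutex> mylck(mutex);
    return value;  // the copy is made while the lock is held
}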
Note: If all you are doing is getting a variable like return variable; then it is safe to do even without the lock.
About the code: assuming the lock is implemented correctly, it is safe to do anything before release is called.

When to use recursive mutex?

I understand that a recursive mutex allows the mutex to be locked more than once without causing a deadlock, and that it should be unlocked the same number of times. But in what specific situations do you need to use a recursive mutex? I'm looking for design/code-level situations.
For example, when you have a function that calls itself recursively and you want synchronized access to it:
void foo() {
    ... mutex_acquire();
    ... foo();
    ... mutex_release();
}
Without a recursive mutex you would have to create an "entry point" function first, and this becomes cumbersome when you have a set of functions that are mutually recursive. Without a recursive mutex:
void foo_entry() {
    mutex_acquire();
    foo();
    mutex_release();
}

void foo() { ... foo(); ... }
Recursive and non-recursive mutexes have different use cases; neither type can easily replace the other. Non-recursive mutexes have less overhead, while recursive mutexes have semantics that are useful or even necessary in some situations, and dangerous or even broken in others. In most cases, a strategy that uses recursive mutexes can be replaced with a different, safer, and more efficient strategy based on non-recursive mutexes.
If you just want to exclude other threads from using your mutex-protected resource, then you could use either mutex type, but you might want to use the non-recursive one because of its smaller overhead.
If you want to call functions recursively that lock the same mutex, then they either
have to use one recursive mutex, or
have to unlock and lock the same non-recursive mutex again and again (beware of concurrent threads!) (assuming this is semantically sound, it could still be a performance issue), or
have to somehow annotate which mutexes they already locked (simulating recursive ownership/mutexes).
If you want to lock several mutex-protected objects from a set of such objects, where the sets could have been built by merging, you can choose
to use per object exactly one mutex, allowing more threads to work in parallel, or
to use per object one reference to any possibly shared recursive mutex, to lower the probability of failing to lock all mutexes together, or
to use per object one comparable reference to any possibly shared non-recursive mutex, circumventing the intent to lock multiple times.
If you want to release a lock in a different thread than the one that locked it, then you have to use non-recursive locks (or recursive locks that explicitly allow this instead of throwing exceptions).
If you want to use synchronization variables, then you need to be able to explicitly unlock the mutex while waiting on any synchronization variable, so that the resource is allowed to be used in other threads. That is only sanely possible with non-recursive mutexes, because recursive mutexes could already have been locked by the caller of the current function.
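To illustrate the last point: std::condition_variable only cooperates with a non-recursive std::mutex via std::unique_lock, and wait() must be able to release that lock while sleeping. A minimal sketch of the usual pattern (the queue and item names are illustrative):

#include <condition_variable>
#include <mutex>
#include <queue>

std::mutex m;                 // must be a plain (non-recursive) std::mutex
std::condition_variable cv;
std::queue<int> items;

int waitForItem() {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return !items.empty(); });  // unlocks m while waiting, relocks before returning
    int item = items.front();
    items.pop();
    return item;
}

If the caller had already locked a recursive mutex further up the call stack, wait() could not release that outer ownership, which is exactly the problem described above.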
I encountered the need for a recursive mutex today, and I think it's maybe the simplest example among the posted answers so far:
This is a class that exposes two API functions, Process(...) and reset().
void Process(...)
{
    acquire_mutex(mMutex);
    // Heavy processing
    ...
    reset();
    ...
    release_mutex(mMutex);
}

void reset()
{
    acquire_mutex(mMutex);
    // Reset
    ...
    release_mutex(mMutex);
}
Both functions must not run concurrently because they modify internals of the class, so I wanted to use a mutex.
The problem is that Process() calls reset() internally, which would create a deadlock because mMutex is already acquired.
Locking them with a recursive lock instead fixes the problem.
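In C++ that fix could look roughly like this (a sketch with std::recursive_mutex; the class and member names are illustrative):

#include <mutex>

class Processor {
public:
    void Process() {
        std::lock_guard<std::recursive_mutex> lock(mMutex);
        // Heavy processing ...
        reset();   // re-locking mMutex on the same thread is fine with a recursive mutex
    }

    void reset() {
        std::lock_guard<std::recursive_mutex> lock(mMutex);
        // Reset internal state ...
    }

private:
    std::recursive_mutex mMutex;
};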
If you want to see an example of code that uses recursive mutexes, look at the sources for "Electric Fence" for Linux/Unix. 'Twas one of the common Unix tools for finding "bounds checking" read/write overruns and underruns as well as using memory that has been freed, before Valgrind came along.
Just compile and link Electric Fence with your sources (option -g with gcc/g++), then link your software with the linker option -lefence, and start stepping through the calls to malloc/free. http://elinux.org/Electric_Fence
It would certainly be a problem if a thread blocked trying to acquire (again) a mutex it already owned...
Is there a reason to not permit a mutex to be acquired multiple times by the same thread?
In general, like everyone here said, it's more about design. A recursive mutex is normally used in recursive functions.
What others fail to tell you here is that there's actually almost no cost overhead in recursive mutexes.
In general, a simple mutex is a 32-bit key with bits 0-30 containing the owner's thread id and bit 31 a flag saying whether the mutex has waiters or not. It has a lock method which is an atomic CAS race to claim the mutex, with a syscall in case of failure. The details are not important here. It looks like this:
#include <cstdint>

class mutex {
public:
    void lock();
    void unlock();
protected:
    uint32_t key{}; // bits 0-30: thread_handle, bit 31: hasWaiters_flag
};
A recursive_mutex is normally implemented as:
class recursive_mutex : public mutex {
public:
    void lock() {
        uint32_t handle = current_thread_native_handle(); // obtained from TLS memory in most OSes
        if ((key & 0x7FFFFFFF) == handle) { // impossible to be true unless you own the mutex
            uses++; // we own the mutex, just increase uses
        } else {
            mutex::lock(); // we don't own the mutex, try to obtain it
            uses = 1;
        }
    }
    void unlock() {
        // asserts for debug: we should own the mutex and uses > 0
        --uses;
        if (uses == 0) {
            mutex::unlock();
        }
    }
private:
    uint32_t uses{}; // no need to be atomic: it can only be modified under exclusion, and the only interesting read happens under exclusion
};
As you can see, it's an entirely user-space construct (the base mutex is not, though: it MAY fall into a syscall if it fails to obtain the key in an atomic compare-and-swap on lock, and it will do a syscall on unlock if the hasWaiters_flag is set).
For a base mutex implementation: https://github.com/switchbrew/libnx/blob/master/nx/source/kernel/mutex.c
If you want to be able to call public methods from different threads inside other public methods of a class, and many of these public methods change the state of the object, you should use a recursive mutex. In fact, I make it a habit of using a recursive mutex by default, unless there is a good reason (e.g. special performance considerations) not to.
It leads to better interfaces, because you don't have to split your implementation into locked and non-locked parts, and you are free to call your public methods from inside other methods with peace of mind.
In my experience, it also leads to interfaces that are easier to get right in terms of locking.
It seems no one has mentioned it yet, but code using recursive_mutex is way easier to debug, since its internal structure contains the identifier of the thread holding it.