Protect class instances from multithreaded access - C++

I have a C++ class with many instances, and I want to make my application thread safe.
This class has a member and a function that handles it, for example:
class MyCls {
    int x;
    void f() { x++; }
};
I need to protect this member, so as far as I can see I have 2 options:
Add a global Critical Section and enter it before touching this member.
Add a Critical Section to the class so each instance protects its own member.
Both solutions are huge overkill:
Two different instances should not be synchronized with each other at all.
The OS would have to handle millions of Critical Sections, even though there are actually very few collisions.
Is there another solution or multithreading design pattern I can use?

Not sure, but I think the problem might be solved by using a software transactional memory mechanism. There are several implementations for C++.

As for your first concern, each instance should have its own member mutex to provide a separate critical section per instance.
As for the second, I'm sure that most pthread implementations use a futex for their mutexes. This means that they're pretty fast when there is no contention and only require OS intervention when there is contention.
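A minimal sketch of that per-instance approach, assuming C++11's std::mutex (the question talks about Critical Sections; std::mutex plays the same role portably):

```cpp
#include <mutex>

class MyCls {
    int x = 0;
    std::mutex m;  // one mutex per instance; uncontended locks are cheap
public:
    void f() {
        std::lock_guard<std::mutex> lock(m);  // unlocks automatically on scope exit
        ++x;
    }
    int get() {
        std::lock_guard<std::mutex> lock(m);
        return x;
    }
};
```

Two different instances never contend with each other, since each locks only its own mutex.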

If you don't want to use locks or critical sections, then the simplest solution is not to keep the state in the object itself. If you change your class as follows, it will be thread-safe:
// Class has no state, only operations on data. So it
// is thread-safe by nature.
class MyCls
{
public:
    int foo(int x = 0)
    {
        return ++x;
    }
};

// Usage
MyCls obj;
int x = obj.foo();  // x = 1
x = obj.foo(x);     // x = 2

If you really only need to increment that member (or perform similar simple arithmetic), atomic operations are all you need. Most (all?) modern CPUs support them with native instructions. I don't know about other compilers, but those from the gcc family have them as builtins. Unfortunately there is no standardized interface to these yet, but the upcoming C standard (C1x) will have them.
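(For reference: C++11 has since standardized these as std::atomic. A minimal sketch of the counter from the question:)

```cpp
#include <atomic>

class MyCls {
    std::atomic<int> x{0};  // atomic increment needs no mutex
public:
    void f() { x.fetch_add(1, std::memory_order_relaxed); }  // lock-free on common platforms
    int get() const { return x.load(); }
};
```

This avoids both a global lock and per-instance mutexes entirely, at the cost of only working for simple operations on a single variable.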

Related

Group member functions to all require implicit mutex lock first?

I have a "Device" class representing the connection of a peripheral hardware device. Scores of member functions ("device functions") are called on each Device object by clients.
class Device {
public:
    std::timed_mutex mutex_;

    void DeviceFunction1();
    void DeviceFunction2();
    void DeviceFunction3();
    void DeviceFunction4();
    // void DeviceFunctionXXX(); lots and lots of device functions

    // other stuff
    // ...
};
The Device class has a member std::timed_mutex mutex_ which must be locked by each of the device functions prior to communicating with the device, to prevent communication with the device simultaneously from concurrent threads.
An obvious but repetitive and cumbersome approach is to copy/paste the mutex_.try_lock() code at the top of each device function.
void Device::DeviceFunction1() {
    mutex_.try_lock(); // this is repeated in ALL functions
    // communicate with device
    // other stuff
    // ...
}
However, I'm wondering if there is a C++ construct or design pattern or paradigm which can be used to "group" these functions in such a way that the mutex_.try_lock() call is "implicit" for all functions in the group.
In other words: in a similar fashion that a derived class can implicitly call common code in a base class constructor, I'd like to do something similar with functions calls (instead of class inheritance).
Any recommendations?
First of all, if the mutex must be locked before you do anything else, then you should call mutex_.lock(), or at least not ignore the fact that try_lock may actually fail to lock the mutex. Also, manually placing calls to lock and unlock a mutex is extremely error-prone and can be much harder to get right than you might think. Don't do it. Use, e.g., an std::lock_guard instead.
The fact that you're using an std::timed_mutex suggests that what's actually going on in your real code may be a bit more involved (why else would you be using an std::timed_mutex?). Assuming that what you're really doing is something more complex than just calling try_lock and ignoring its return value, consider encapsulating your complex locking procedure, whatever it may be, in a custom lock guard type, e.g.:
class the_locking_dance
{
    static std::lock_guard<std::timed_mutex> do_the_locking_dance(std::timed_mutex& mutex)
    {
        using namespace std::chrono_literals;
        while (!mutex.try_lock_for(100ms))
            /* do whatever it is that you wanna do */;
        // returning the guard relies on C++17 guaranteed copy elision
        return std::lock_guard<std::timed_mutex>(mutex, std::adopt_lock);
    }

    std::lock_guard<std::timed_mutex> guard;

public:
    the_locking_dance(std::timed_mutex& mutex)
        : guard(do_the_locking_dance(mutex))
    {
    }
};
and then create a local variable
the_locking_dance guard(mutex_);
to acquire and hold on to your lock. This will also automatically release the lock upon exit from a block.
Apart from all that, note that what you're doing here is, most likely, not a good idea in general. The real question is: why are there so many different methods that all need to be protected by the same mutex to begin with? Do you really have to support an arbitrary number of threads you know nothing about, which may do arbitrary things with the same device object at arbitrary times in arbitrary order? If not, then why are you building your Device abstraction to support this use case? Is there really no better interface you could design for your application scenario, knowing what the threads are actually supposed to be doing? Do you really have to do such fine-grained locking? Consider how inefficient it is with your current abstraction to, e.g., call multiple device functions in a row, as that requires constantly locking and unlocking the same mutex again and again all over the place…
All that being said, there may be a way to improve the locking frequency while, at the same time, addressing your original question:
I'm wondering if there is a C++ construct or design pattern or paradigm which can be used to "group" these functions in such a way that the mutex_.try_lock() call is "implicit" for all functions in the group.
You could group these functions by exposing them not as methods of a Device object directly, but as methods of yet another lock guard type, for example
class Device
{
…
void DeviceFunction1();
void DeviceFunction2();
void DeviceFunction3();
void DeviceFunction4();
public:
class DeviceFunctionSet1
{
Device& device;
the_locking_dance guard;
public:
DeviceFunctionSet1(Device& device)
: device(device), guard(device.mutex_)
{
}
void DeviceFunction1() { device.DeviceFunction1(); }
void DeviceFunction2() { device.DeviceFunction2(); }
};
class DeviceFunctionSet2
{
Device& device;
the_locking_dance guard;
public:
DeviceFunctionSet2(Device& device)
: device(device), guard(device.mutex_)
{
}
void DeviceFunction3() { device.DeviceFunction3(); }
void DeviceFunction4() { device.DeviceFunction4(); }
};
};
Now, to get access to the methods of your device within a given block scope, you first acquire the respective DeviceFunctionSet and then you can call the methods:
{
DeviceFunctionSet1 dev(my_device);
dev.DeviceFunction1();
dev.DeviceFunction2();
}
The nice thing about this is that the locking happens once for an entire group of functions (which will, hopefully, somewhat logically belong together as a group of functions used to achieve a particular task with your Device) automatically and you can also never forget to unlock the mutex…
Even with this, however, the most important thing is to not just build a generic "thread-safe Device". These things are usually neither efficient nor really useful. Build an abstraction that reflects the way multiple threads are supposed to cooperate using a Device in your particular application. Everything else is second to that. But without knowing anything about what your application actually is, there's not really anything more that could be said to that…

Mutex as class member using pthreads

I have three classes, let's call them A, B and HardwareDriver. There is one instance of each of the classes. a and b run in two different threads. They both access the Hardware via an instance of HardwareDriver. Something like:
class A {
    // ...
};

class B {
    // ...
};

class HardwareDriver {
public:
    int accessHardware();
};

A a;
B b;
HardwareDriver hd;

pthread_t aThread;
pthread_t bThread;

int main() {
    pthread_create(&aThread, NULL, &A::startA, &a);
    pthread_create(&bThread, NULL, &B::startB, &b);
    while (...) { }
    return 0;
}
The hardware can't be accessed by a and b at the same time, so I need to protect the code with a mutex. I'm new to multithreading but intuitively I would lock the mutex in the method of A and B right before it requests the hardware access by calling the method hd.accessHardware().
Now I'm wondering if it's possible to perform the locking in hd.accessHardware() for more encapsulation. Would this still be thread safe?
Yes, you can have a mutex in your HardwareDriver class and have a critical section inside your class method. It would still be safe. Just remember that if you copy the object, the copy will have its own mutex.
I would lock the mutex in the method of A and B right before it requests the hardware access by calling the method hd.accessHardware().
This creates a risk of forgetting to lock that mutex prior to calling hd.accessHardware().
Now I'm wondering if it's possible to perform the locking in hd.accessHardware() for more encapsulation. Would this still be thread safe?
That removes the risk of forgetting to lock that mutex and makes your API harder to misuse. And that would still be thread safe.
When doing multithreaded programming in C/C++, any data that is WRITTEN by any of your threads must be locked for every READ or WRITE operation; you can leave read-only data lock-free.
Lock operations should have the smallest possible scope. If TWO objects access a single resource, you need a SINGLE semaphore/mutex; using two will expose you to dangerous deadlocks.
So, in your example, you should add a mutex inside the HardwareDriver class and lock/unlock it every time you read or write any class data.
You do not need to lock local data (stack-allocated local variables), and you do not need locking in reentrant methods.
Since you are writing C++ and it is 2017, I suggest you use std::thread and std::mutex instead of pthreads directly. On Linux the native C++ thread support is a thin wrapper over pthreads, so the overhead of using it is negligible even on embedded targets.
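A minimal sketch of that encapsulated locking (the body of accessHardware is a placeholder; only the locking pattern is the point):

```cpp
#include <mutex>

class HardwareDriver {
    std::mutex m;  // serializes all hardware access internally
public:
    int accessHardware() {
        std::lock_guard<std::mutex> lock(m);  // callers can't forget to lock
        // ... talk to the hardware here ...
        return 0;  // placeholder result
    }
};
```

With the mutex hidden inside the driver, A and B can simply call hd.accessHardware() from their threads without knowing any locking protocol.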

C++ objects in multithreading

I would like to ask about thread safety in C++ (using POSIX threads with a C++ wrapper, for example) when a single instance/object of a class is shared between different threads. For example, the member methods of this single object of class A would be called from different threads. What should/can I do about thread safety?
class A {
private:
    int n;
public:
    void increment()
    {
        ++n;
    }
    void decrement()
    {
        --n;
    }
};
Should I protect class member n within the increment/decrement methods with a lock or something else? Do static (class) members also need a lock?
If a member is immutable, I do not have to worry about it, right?
Anything that I cannot foresee now?
In addition to the scenario of a single object shared among multiple threads, what about multiple objects with multiple threads? Each thread owns an instance of a class. Is anything special needed beyond the static (class) members?
These are the things on my mind, but I believe this is a large topic and I would be glad if you have good resources or can point me to previous discussions about it.
Regards
Suggestion: don't try to do it by hand. Use a good multithreading library like the one from Boost: http://www.boost.org/doc/libs/1_47_0/doc/html/thread.html
This article from Intel will give you a good overview: http://software.intel.com/en-us/articles/multiple-approaches-to-multithreaded-applications/
It's a really large topic and it's probably impossible to cover it completely in this thread.
The golden rule is: "You can't read while somebody else is writing."
So if you have an object that shares a variable between threads, you have to take a lock in every function that accesses the shared variable.
There are very few cases when this is not true.
The first is integer counters, for which you can use the atomic functions shown by c-smile; in this case the CPU uses a hardware lock on the cache line, so other cores can't modify the variable.
The second is lock-free queues: special queues that use compare-and-exchange instructions to guarantee the atomicity of the operation.
All the other cases MUST be locked...
The first approach is to lock everything. This can lead to a lot of problems when more objects are involved (ObjA tries to read from ObjB, but ObjB is using the variable and is also waiting for ObjC, which waits on ObjA), and such circular locking can lead to indefinite waiting (deadlock).
A better approach is to minimize the points where threads share variables.
For example, if you have an array of data and you want to parallelize the computation on it, you can launch two threads: thread one works only on the even indices while thread two works on the odd ones. The threads work on the same set of data, but as long as the data don't overlap you don't have to lock. (This is called data parallelization.)
The other approach is to organize the application as a set of "work" units (functions that run on a thread and produce a result) and make the work units communicate only via messages. You only have to implement a thread-safe message system and a work scheduler and you are done. Or you can use a library like Intel TBB.
Neither approach solves the deadlock problem by itself, but both let you isolate the problem and find bugs more easily. Bugs in multithreaded code are really hard to debug and sometimes hard even to find.
So, if you are studying, I suggest starting with the theory and with pthreads; then, once you have learned the basics, move to a more user-friendly library like Boost or, if you are using GCC 4.6 as your compiler, the C++0x std::thread.
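The even/odd data-parallelization idea described above can be sketched like this (a toy example; the function name and the squaring operation are made up for illustration):

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Two threads square the same vector in place without any locks:
// thread one touches even indices, thread two touches odd ones,
// so their writes never overlap.
void square_all(std::vector<int>& data) {
    auto worker = [&data](std::size_t start) {
        for (std::size_t i = start; i < data.size(); i += 2)
            data[i] *= data[i];
    };
    std::thread even(worker, 0);
    std::thread odd(worker, 1);
    even.join();
    odd.join();
}
```

No mutex is needed because the index sets are disjoint; the joins at the end are what make the results visible to the launching thread.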
Yes, you should protect the functions with a lock if they are used in a multithreaded environment. You can use the Boost libraries.
And yes, immutable members should not be a concern, since such a member cannot be changed once it has been initialized.
Concerning "multiple objects with multiple threads": that depends very much on what you want to do. In some cases you could use a thread pool, which is a mechanism with a defined number of threads standing by for jobs to come in. There's no thread concurrency on each job there, since each thread does one job at a time.
You have to protect counters. No other options.
On Windows you can do this using these functions:
#include <windows.h>

#if defined(PLATFORM_WIN32_GNU)
typedef long counter_t;
inline long _inc(counter_t& v) { return InterlockedIncrement(&v); }
inline long _dec(counter_t& v) { return InterlockedDecrement(&v); }
inline long _set(counter_t& v, long nv) { return InterlockedExchange(&v, nv); }
#elif defined(WINDOWS) && !defined(_WIN32_WCE) // let's keep things for wince as simple as we can
typedef volatile long counter_t;
inline long _inc(counter_t& v) { return InterlockedIncrement((LPLONG)&v); }
inline long _dec(counter_t& v) { return InterlockedDecrement((LPLONG)&v); }
inline long _set(counter_t& v, long nv) { return InterlockedExchange((LPLONG)&v, nv); }
#endif

C++/Boost: Synchronize access to a resource across multiple method (getter) calls

I'm not sure if this is a question regarding programming technique or design but I'm open for suggestions.
The problem: I want to create an abstraction layer between data sources (sensors) and consumers. The idea is that the consumers only "know" the interfaces (abstract base class) of different sensor types. Each of this sensor types usually consists of several individual values which all have their own getter methods.
As an example I will use a simplified GPS sensor.
class IGpsSensor {
public:
virtual float getLongitude() = 0;
virtual float getLatitude() = 0;
virtual float getElevation() = 0;
// Deviations
virtual float getLongitudeDev() = 0;
virtual float getLatitudeDev() = 0;
virtual float getElevationDev() = 0;
virtual int getNumOfSatellites() = 0;
};
Since updates to the sensor are done by a different thread (details are up to the implementation of the interface), synchronizing getters and also the update methods seems like a reasonable approach to ensure consistency.
So far so good. In most cases this level of synchronization should suffice. However, sometimes it might be necessary to acquire more than one value (with consecutive getXXX() calls) and ensure that no update happens in between. Whether this is necessary or not (and which values are important) is up to the consumer.
Sticking to the example, in a lot of cases it is only important to know longitude and latitude (but hopefully both relating to the same update()). I admit that this could be done by grouping them together into a "Position" class or struct. But a consumer might also use the sensor for a more complicated algorithm and require the deviations as well.
Now I was wondering, what would be a proper way to do this.
Solutions I could think of:
Group all possible values into a struct (or class) and add an additional (synchronized) getter returning copies of all values at once - seems like a lot of unnecessary overhead to me in case only 2 or 3 out of maybe 10 values are needed.
Add a method returning a reference to the mutex used within the data source to allow locking by the consumer - this doesn't feel like "good design". And since getters are already synchronized, using a recursive mutex is mandatory. However, I assume that there are multiple readers but only one writer and thus I'd rather go with a shared mutex here.
Thanks for your help.
How about exposing a "Reader" interface? To get the reader object, you would do something like this:
const IGpsSensorReader& gps_reader = gps_sensor.getReader();
The IGpsSensorReader class could have access to protected members of the IGpsSensor class. When constructed, it would acquire the lock. Upon destruction, it would release the lock. An accessor could do something like this:
{ //block that accesses attributes
const IGpsSensorReader& gps_reader = gps_sensor.getReader();
//read whatever values from gps_reader it needs
} //closing the scope will destruct gps_reader, causing an unlock
You could also expose a getWriter method to the thread doing the updates. Internally, you could use boost's shared_mutex to mediate access between the readers and the writers.
A technique I've used in some simple projects is to only provide access to a proxy object. This proxy object holds a lock for the duration of its lifetime, and provides the actual interface to my data. This access does no synchronization itself, because it is only available through the proxy which is already locked appropriately. I've never tried expanding this to a full scale project, but it has seemed to work well for my purposes.
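A rough sketch of that proxy technique, assuming C++17 (returning the proxy by value relies on guaranteed copy elision); all names here are illustrative, not the asker's real interface:

```cpp
#include <mutex>

class GpsSensor {
    mutable std::mutex m;
    float longitude = 0, latitude = 0;
public:
    // The proxy: locks on construction, unlocks on destruction.
    // The values are only reachable through it, so consecutive
    // getters are guaranteed to see the same update.
    class Reader {
        const GpsSensor& s;
        std::lock_guard<std::mutex> lock;  // held for the proxy's lifetime
    public:
        explicit Reader(const GpsSensor& sensor) : s(sensor), lock(sensor.m) {}
        float getLongitude() const { return s.longitude; }
        float getLatitude() const { return s.latitude; }
    };

    Reader getReader() const { return Reader(*this); }

    void update(float lon, float lat) {
        std::lock_guard<std::mutex> lock(m);
        longitude = lon;
        latitude = lat;
    }
};
```

The consumer scopes the proxy tightly, since the update thread is blocked for as long as a Reader is alive:

```cpp
{
    auto r = sensor.getReader();
    use(r.getLongitude(), r.getLatitude());  // consistent pair
}   // lock released here
```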
Possible solution: derive all your source classes from
class Transaction {
    pthread_mutex_t mtx;
    // constructor/destructor
public:
    void beginTransaction() { pthread_mutex_lock(&mtx); }  // ERROR CHECKING MISSING
    void endTransaction() { pthread_mutex_unlock(&mtx); }  // DO ERROR CHECKING
protected:
    // helper method
    int getSingle(int *ptr)
    {
        int v;
        beginTransaction();
        v = *ptr;
        endTransaction();
        return v;
    }
};
If you need to read out multiple values, use the begin/endTransaction methods. To define your getValue functions, just call getSingle with a pointer to the appropriate member [this is just a convenience method so that you don't have to call begin/endTransaction in each getValue function].
You will need to flesh out some details, because if your getValue functions use begin/endTransaction, you won't be able to call them inside a transaction. (A mutex can be locked only once, unless it is configured to be recursive.)

thread safety in third-party libraries

I want to use a library developed by someone else, of which I only have the library file, not the source code. My question is this: the library provides a class with a number of functionalities. The class itself is not thread safe. I wanted to make it thread safe and I was wondering if this code works
// suppose libobj is the class provided by the library
class my_libobj : public libobj {
// code
};
This only inherits from libobj, which may or may not "work" depending on whether the class was designed for inheritance (has at least a virtual destructor).
In any case, it won't buy you thread-safety for free. The easiest way to get that is to add mutexes to the class and lock those when entering a critical section:
class my_obj {
    libobj obj;       // inheritance *might* work too
    boost::mutex mtx;

    void critical_op()
    {
        boost::unique_lock<boost::mutex> lck(mtx);
        obj.critical_op();
    }
};
(This is a very coarse-grained design with a single mutex; you may be able to make it more fine-grained if you know the behavior of the various operations. It's also not fool-proof, as @dribeas explains.)
Retrofitting thread safety -- and BTW, there are different levels of it -- into a library which hasn't been designed for it is probably impossible without knowing how it has been implemented, if you aren't content with just serializing all calls to it; and even then it can be problematic if the interface is bad enough -- see strtok for instance.
This is impossible to answer without knowledge of at least the actual interface of the class. In general the answer would be no.
From the practical C++ point of view, if the class was not designed to be extended, every non-virtual method will not be overridden, and as such you might end up with a mixture of some thread-safe and some non-thread-safe methods.
Even if you decide to wrap (without inheritance) and force delegation only while holding a lock, the approach is still not valid in all cases. Thread safety requires not only locking, but an interface that can be made thread safe.
Consider a stack implementation as in the STL: by just adding a layer of locking (i.e. making every method thread-safe), you will not guarantee thread safety on the container. Consider a few threads adding elements to the stack and two threads pulling information out:
if ( !stack.empty() ) {  // 1
    x = stack.top();     // 2
    stack.pop();         // 3
    // operate on data
}
There are a number of things that can go wrong here. Both threads might perform test [1] when the stack has a single element and then enter sequentially, in which case the second thread will fail in [2] (or obtain the same value and fail in [3]). Even when there are multiple objects in the container, both threads could execute [1] and [2] before either executes [3], in which case both threads would consume the same object, and the second element in the stack would be discarded without processing...
Thread safety requires (in most cases) changes to the API. In the example above: an interface that provides bool try_pop( T& v ); implemented as:
bool stack::try_pop( T& v ) {       // argument by reference, to provide exception safety
    std::lock_guard<std::mutex> l(m);
    if ( s.empty() ) return false;  // someone consumed all data, report failure
    v = s.top();                    // top and pop are executed "atomically"
    s.pop();
    return true;                    // we did consume one datum
}
Of course there are other approaches: you might not return failure, but rather provide a pop operation that is guaranteed to block until a datum is ready, by waiting on a condition variable or something alike...
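That blocking variant can be sketched with a std::condition_variable (a minimal, illustrative wrapper around std::stack, not dribeas's actual code):

```cpp
#include <condition_variable>
#include <mutex>
#include <stack>
#include <utility>

template <typename T>
class safe_stack {
    std::stack<T> s;
    std::mutex m;
    std::condition_variable cv;
public:
    void push(T v) {
        {
            std::lock_guard<std::mutex> l(m);
            s.push(std::move(v));
        }
        cv.notify_one();  // wake one waiting consumer
    }
    // Blocks until an element is available, then pops it atomically,
    // closing the empty/top/pop race described above.
    void wait_and_pop(T& v) {
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [this] { return !s.empty(); });
        v = std::move(s.top());
        s.pop();
    }
};
```

The predicate passed to wait guards against spurious wakeups: the thread only proceeds while holding the lock with a non-empty stack.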
The simplest solution is to create a single thread uniquely for that library, and only access the library from that thread, using message queues to pass requests and return parameters.
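One way to sketch that dedicated-thread approach with the standard library (all names illustrative; the real library calls would go inside the submitted tasks, so they all execute on the one thread the library ever sees):

```cpp
#include <condition_variable>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>

// All work is funneled through one worker thread; callers enqueue
// tasks and wait on futures for the results.
class library_thread {
    std::queue<std::function<void()>> q;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
    std::thread worker;
public:
    library_thread() : worker([this] {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> l(m);
                cv.wait(l, [this] { return done || !q.empty(); });
                if (done && q.empty()) return;
                task = std::move(q.front());
                q.pop();
            }
            task();  // runs on the single dedicated thread
        }
    }) {}

    ~library_thread() {
        { std::lock_guard<std::mutex> l(m); done = true; }
        cv.notify_one();
        worker.join();  // drain remaining tasks, then stop
    }

    template <typename F>
    auto submit(F f) -> std::future<decltype(f())> {
        auto task = std::make_shared<std::packaged_task<decltype(f())()>>(std::move(f));
        auto fut = task->get_future();
        { std::lock_guard<std::mutex> l(m); q.push([task] { (*task)(); }); }
        cv.notify_one();
        return fut;
    }
};
```

Usage: `auto result = lib.submit([&] { return obj.critical_op(); });` followed by `result.get()`. The library object itself needs no locking at all, because only the worker thread ever touches it.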