Can the mutex protect the data behind a specific pointer? - c++

My Qt app uses QMutex and QMutexLocker to ensure thread-safety.
Does the mutex protect the data, or only the scope of the function?
For example:
class Counter
{
public:
Counter() : ptr(nullptr) {}
void setObject(MyClass *pr){ptr = pr;}
void increment() { QMutexLocker locker(&mutex); /* do work on ptr */ }
void decrement() { QMutexLocker locker(&mutex); /* do work on the MyClass that ptr points to */ }
private:
mutable QMutex mutex;
MyClass *ptr;
};
//Thread...
Counter counter;
MyClass *mclass= new MyClass;
//setting... mclass
counter.setObject(mclass);
OtherClass oc; // OtherClass also works on mclass, just like Counter does.
oc.setObject(mclass); // both Counter and OtherClass now work on mclass.
// Does the mutex protect mclass's data?
The pointer to MyClass could also be used by some other class.
Does QMutexLocker protect the data behind ptr, or does it only protect the functions increment and decrement from concurrent calls?
How can I protect the data that ptr points to?

Mutual exclusion is ensured only for threads using the same QMutex instance, i.e. it protects the data that is accessed while holding that mutex. So another class cannot synchronize its access to MyClass, because it cannot access your mutex (unless you can ensure that two threads never touch the same member fields).
You should guarantee that everyone who accesses the MyClass instance uses the same mutex instance. This can be done by:
Moving the mutex into MyClass itself. This is what you most likely need.
Using a single global pool of mutexes and selecting one by the MyClass instance's address.
The latter way is shown below:
const std::size_t SIZE = 47; // prime numbers work better here
static QMutex g_mtx[SIZE];
QMutex &get_mutex(const void *ptr)
{
return g_mtx[std::uintptr_t(ptr) % SIZE];
}
To guard the MyClass instance pointed to by ptr you would then use QMutexLocker locker(&get_mutex(ptr)). This is useful if MyClass is a small object that exists in large numbers, so keeping a separate mutex in each instance would be a problem.
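If you go with the first option instead, a minimal sketch could look like the following (assuming a hypothetical doWork() member; the point is only that the mutex lives inside MyClass, so every user of the same instance locks the same mutex):

class MyClass
{
public:
    void doWork()
    {
        QMutexLocker locker(&mutex); // all callers of this instance lock the same mutex
        // ... touch the protected members here ...
    }
private:
    mutable QMutex mutex; // owned by the object itself
    // ... data that needs protection ...
};

With the mutex inside MyClass, Counter, OtherClass and anyone else holding the same pointer synchronize automatically, without having to share an external mutex.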

Is it possible to make atomic operations on a singleton object thread-safe automatically

I have a singleton object shared between two units operating on separate threads.
For example,
Thread A
Singleton::getInstance().incCounter();
Thread B
Singleton::getInstance().decCounter();
Is it possible to implement these operations to be thread-safe without requiring the consumers to handle the locking themselves?
Something like
static Singleton& GetInstance() {
std::scoped_lock lock(m_mtx);
static Singleton* singleton = new Singleton();
return *singleton;
}
I guess this will not work, as the lock will be released after the return, but incCounter and decCounter will be called without the lock.
Is it somehow possible to keep the lock active until the atomic operation is completed?
Or is putting a lock inside incCounter and decCounter (or in unit A and unit B) the only solution here?
The current lock accomplishes nothing. A static function-local variable is required by the C++ standard (since C++11) to be initialized exactly once, even when multiple threads call the function concurrently. That is, the compiler will ensure that there can be no race condition on its initialization. So the lock is protecting against something that cannot happen.
You need to put a lock in the increment/decrement functions. And they need to lock the same mutex. Though perhaps they could increment/decrement an atomic variable, in which case you don't need a lock at all.
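A minimal sketch of the atomic-counter variant, assuming the counter is the only shared state (the names mirror the question but are otherwise illustrative):

#include <atomic>

class Singleton {
public:
    static Singleton& getInstance() {
        static Singleton instance; // initialized exactly once, thread-safe since C++11
        return instance;
    }
    void incCounter() { counter.fetch_add(1, std::memory_order_relaxed); }
    void decCounter() { counter.fetch_sub(1, std::memory_order_relaxed); }
private:
    Singleton() = default;
    std::atomic<int> counter{0}; // each increment/decrement is atomic, no mutex needed
};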
You could (but probably shouldn't) create a new type LockedSingleton which stores a reference to the singleton and a std::unique_lock. This would be what your GetInstance() returns. LockedSingleton would need to have its own increment/decrement functions which it forwards to its internal singleton reference, as well as any other interface functions.
class LockedSingleton
{
private:
std::unique_lock<std::mutex> lock_;
Singleton &obj_;
private: //Only friends can construct. Also, non-copyable.
LockedSingleton(std::mutex &mtx, Singleton &obj)
: lock_(mtx)
, obj_(obj)
{}
friend LockedSingleton GetInstance();
public:
void incCounter() {obj_.incCounter();}
void decCounter() {obj_.decCounter();}
};
std::mutex m_mtx; // the mutex shared by all LockedSingleton instances

LockedSingleton GetInstance() {
static Singleton* singleton = new Singleton();
return LockedSingleton(m_mtx, *singleton);
}
Note that this only works in C++17 and above, due to guaranteed elision, since LockedSingleton is non-copyable.

What is the best way to lock object by private mutex?

I need to lock an object via its private mutex in some external functions. What is the best way to do this?
I want something like this
#include <thread>
#include <mutex>
class Test
{
public:
std::lock_guard<std::mutex> lockGuard()
{
return std::lock_guard<std::mutex>(mutex);
}
private:
std::mutex mutex;
};
int main()
{
Test test;
std::lock_guard<std::mutex> lock = test.lockGuard();
//...
}
But the lock_guard copy constructor is deleted. How can I do something like this?
Just use std::unique_lock<std::mutex> instead. It is not copyable, but it is movable, allowing the pattern you show.
#include <thread>
#include <mutex>
class Test
{
public:
std::unique_lock<std::mutex> lockGuard()
{
return std::unique_lock<std::mutex>(mutex);
}
private:
std::mutex mutex;
};
int main()
{
Test test;
std::unique_lock<std::mutex> lock = test.lockGuard();
//...
}
std::unique_lock<std::mutex> has a broader API than std::lock_guard, including:
Move constructible.
Move assignable.
Swappable.
lock()
unlock()
try_lock()
try_lock_for()
try_lock_until()
release()
owns_lock()
In other words, since you can unlock and move from a unique_lock, it is not guaranteed to hold the lock on the mutex (you can check that it does with owns_lock()). In contrast an invariant of lock_guard is that it always holds the lock on the mutex.
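For illustration, a small sketch of that extra flexibility (the mutex and function names are arbitrary):

#include <mutex>

std::mutex m;

void example()
{
    std::unique_lock<std::mutex> lock(m); // locked on construction, like lock_guard
    // ... critical section ...
    lock.unlock();                        // release early
    // ... work that does not need the mutex ...
    lock.lock();                          // re-acquire
    if (lock.owns_lock()) {
        // the mutex is held again here
    }
} // unlocked automatically if still owned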
std::unique_lock<std::mutex> has a move constructor and can be used as you describe, but the approach itself is questionable.
You should review your locking granularity: if you cannot provide internal synchronization and instead ask the user to hold a lock while performing operations (or multiple operations) on an object, there is usually no reason to store the mutex inside that object.
If I had to store the mutex inside the object, I would use a wrapper that allows me to do the following:
locking_wrapper<Test> test;
test.do_locked([] (Test & instance) {
/* The following code is guaranteed not to interleave with
* any operations performed on instance from other threads. */
// your code using instance here
});
The locking_wrapper<T> would store an instance of the object inside and provide a reference to it while holding a lock on the internal mutex. Relying on the compiler's ability to inline code, such an approach should add no overhead compared to what you're trying to do in your question.
The general idea on implementing the locking_wrapper is as follows:
template<typename T>
class locking_wrapper
{
mutable std::mutex mutex;
// the object which requires external synchronization on access
T instance;
public:
/* Here we define whatever constructors required to construct the
* locking_wrapper (e.g. value-initialize the instance, take an
* instance passed by user or something different) */
locking_wrapper() = default;
locking_wrapper(const T & instance) : instance{instance} {}
// Takes a functor to be performed on instance while maintaining lock
template<typename Functor>
void do_locked(Functor && f) { // non-const so the functor may modify the wrapped instance
const std::lock_guard<std::mutex> lock{mutex};
f(instance);
}
};
You may pass whatever callable entity to do_locked you see fit; however, calling it with a lambda expression as suggested above gives it the best chance of being inlined without any overhead.
Please note that using this approach with references, movable objects or other cases I have not foreseen would require some modifications to the code.
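As a usage sketch, here is how the wrapper could be exercised from two threads; the Data struct is purely illustrative:

#include <thread>
#include <vector>

struct Data { std::vector<int> values; };

int main()
{
    locking_wrapper<Data> shared;
    std::thread t1([&] {
        shared.do_locked([](Data &d) { d.values.push_back(1); });
    });
    std::thread t2([&] {
        shared.do_locked([](Data &d) { d.values.push_back(2); });
    });
    t1.join();
    t2.join();
}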

Is shared_ptr destruction safe with multiple threads?

I have two classes similar to this:
class Foo {
public:
void bar() {
std::lock_guard<std::mutex> lock(m_mutex);
m_data.push_back('x');
}
private:
std::string m_data;
std::mutex m_mutex;
};
class Pool {
public:
static std::shared_ptr<Foo> Create(int index) {
std::lock_guard<std::mutex> lock(m_mutex);
if (m_pool.size() > 10) {
m_pool.erase(m_pool.begin());
}
std::shared_ptr<Foo>& ptr = m_pool[index];
if (!ptr) ptr.reset(new Foo);
return ptr;
}
private:
static std::mutex m_mutex;
static std::map<int, std::shared_ptr<Foo>> m_pool;
};
and several threads running this code:
void parallel_function(int index) {
// several threads can get the same index
std::shared_ptr<Foo> foo = Pool::Create(index);
foo->bar();
}
Cppreference says
All member functions (including copy constructor and copy assignment) can be called by multiple threads on different instances of shared_ptr without additional synchronization even if these instances are copies and share ownership of the same object. If multiple threads of execution access the same shared_ptr without synchronization and any of those accesses uses a non-const member function of shared_ptr then a data race will occur; the shared_ptr overloads of atomic functions can be used to prevent the data race.
Two questions:
Since Pool::Create always returns copies of the shared_ptr, I assume that the copy and destruction of each shared_ptr is thread-safe, whether it happens in m_pool.erase or at the end of parallel_function. Is this correct?
I call shared_ptr::operator->, which is a const member function, and the function Foo::bar is thread-safe. Is there a data race here?
To sum up my comments:
Yes, this is thread-safe, because each thread operates on its own copy of the shared_ptr. This is one of the few cases where passing around copies of shared_ptrs is actually reasonable.
operator-> is a const member function, so your code is fine as long as Foo::bar remains race-free (which it currently is).
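By contrast, if several threads had to read and update one and the same shared_ptr object (say, a shared global slot), the atomic overloads mentioned in the quote would come into play. A sketch using the pre-C++20 free functions (C++20 offers std::atomic<std::shared_ptr<T>> instead):

#include <memory>

std::shared_ptr<Foo> g_current; // a single shared_ptr object touched by many threads

void writer()
{
    std::atomic_store(&g_current, std::make_shared<Foo>()); // replace safely
}

void reader()
{
    std::shared_ptr<Foo> local = std::atomic_load(&g_current); // take a safe snapshot
    if (local) local->bar();
}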

C++ - function end and local destruction order

I have following code:
class Class
{
public:
std::string Read()
{
std::lock_guard<std::mutex> lock(mutex_);
return data_;
}
private:
std::mutex mutex_;
std::string data_;
};
Which happens first: will a local copy (the temporary return value) of the data_ string be created on the stack, and only then will lock release the mutex, or is it the other way around?
If it is the other way around, does the following line resolve the problem?
return std::string(data_);
The mutex is supposed to protect concurrent reads and writes of data_, so that those operations do not interfere.
The function returns data_ by value, hence the return value is copy-constructed from the data_ member before the destructor of lock runs (as the function exits) and mutex_ is released.
Is a temporary (the return value) calculated before the mutex is released? Yes.
Is return std::string(data_); required? No.
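For completeness, a writer would simply take the same mutex, e.g. a hypothetical Write() member added to the class above:

void Write(const std::string &value)
{
    std::lock_guard<std::mutex> lock(mutex_);
    data_ = value; // cannot interleave with the copy made inside Read()
}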

Singleton Synchronization C++

If I have to write a singleton class in C++, I will use a static variable, a private constructor and a public static function that returns an object of the class. However, in multithreaded environments this code will have problems. To prevent multiple threads from accessing the same variable at the same time, is Boost threads the best mechanism to use for synchronization, i.e. for setting/unsetting a lock/mutex around the resource? Is there anything built into the C++ standard library, so that I don't have to download and build Boost? I have heard of C++0x but don't know much about it.
C++98/03 have nothing to support threads at all. If you're using a C++98 or 03 compiler, you're pretty much stuck with using Boost, or something (more or less) OS-specific, such as pthreads or Win32's threading primitives.
C++11 has a reasonably complete thread support library, with mutexes, locks, thread-local storage, etc.
I feel obliged to point out, however, that it may be better to back up and do a bit more thinking about whether you need/want a Singleton at all. To put it nicely, the singleton pattern has fallen out of favor to a large degree.
Edit: Rereading this, I kind of skipped over one thing I'd intended to say: at least when I've used them, any/all singletons were fully initialized before any secondary thread was started. That renders concern over thread safety in their initialization completely moot. I suppose there could be a singleton that you can't initialize before you start up secondary threads so you'd need to deal with this, but at least right off it strikes me as a rather unusual exception that I'd deal with only when/if absolutely necessary.
For me the best way to implement a singleton using C++11 is:
class Singleton
{
public:
static Singleton & Instance()
{
// Since it's a static variable, if the class has already been created,
// It won't be created again.
// And it **is** thread-safe in C++11.
static Singleton myInstance;
// Return a reference to our instance.
return myInstance;
}
// delete copy and move constructors and assign operators
Singleton(Singleton const&) = delete; // Copy construct
Singleton(Singleton&&) = delete; // Move construct
Singleton& operator=(Singleton const&) = delete; // Copy assign
Singleton& operator=(Singleton &&) = delete; // Move assign
// Any other public methods
protected:
Singleton()
{
// Constructor code goes here.
}
~Singleton()
{
// Destructor code goes here.
}
// And any other protected methods.
};
This relies on a C++11 feature, but this way you can create a thread-safe singleton. According to the new standard there is no need to worry about this problem any more: the object will be initialized by exactly one thread, and other threads will wait until that initialization completes. Alternatively, you can use std::call_once.
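For reference, a sketch of the std::call_once variant, in case you prefer explicit control over the initialization (names are illustrative):

#include <mutex>

class Singleton
{
public:
    static Singleton & Instance()
    {
        std::call_once(initFlag, [] { instance = new Singleton(); }); // runs at most once
        return *instance;
    }
private:
    Singleton() = default;
    static std::once_flag initFlag;
    static Singleton *instance;
};

// definitions (one translation unit)
std::once_flag Singleton::initFlag;
Singleton *Singleton::instance = nullptr;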
If you want exclusive access to the singleton's resources, you have to use a lock inside those functions.
The different types of locks:
Using std::atomic_flag:
class SLock
{
public:
void lock()
{
while (lck.test_and_set(std::memory_order_acquire));
}
void unlock()
{
lck.clear(std::memory_order_release);
}
SLock(){
//lck = ATOMIC_FLAG_INIT;
lck.clear();
}
private:
std::atomic_flag lck;// = ATOMIC_FLAG_INIT;
};
Using std::atomic<bool>:
class SLock
{
public:
void lock()
{
while (lck.exchange(true));
}
void unlock()
{
lck = false; // reset the flag so other threads can acquire the lock
}
SLock(){
//lck = ATOMIC_FLAG_INIT;
lck = false;
}
private:
std::atomic<bool> lck;
};
Using std::mutex:
class SLock
{
public:
void lock()
{
lck.lock();
}
void unlock()
{
lck.unlock();
}
private:
std::mutex lck;
};
Just for Windows:
class SLock
{
public:
void lock()
{
EnterCriticalSection(&g_crit_sec);
}
void unlock()
{
LeaveCriticalSection(&g_crit_sec);
}
SLock(){
InitializeCriticalSectionAndSpinCount(&g_crit_sec, 0x80000400);
}
private:
CRITICAL_SECTION g_crit_sec;
};
The atomic and atomic_flag versions keep the thread spinning. A mutex simply puts the thread to sleep; if the expected wait time is long, it is usually better to sleep the thread. The last one, CRITICAL_SECTION, spins for a while and then puts the thread to sleep.
How do you use these locks?
std::unique_ptr<SLock> raiilock(new SLock());
class Smartlock{
public:
Smartlock(){ raiilock->lock(); }
~Smartlock(){ raiilock->unlock(); }
};
This uses the RAII idiom: the constructor locks the critical section and the destructor unlocks it.
Example
class Singleton {
void synchronizedFunction(){
Smartlock lock;
//.....
}
};
This implementation is thread-safe and exception-safe, because the lock variable lives on the stack, so when the function scope ends (normal return or an exception) its destructor will be called.
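Note that the standard library already provides this RAII behaviour: a std::mutex member together with std::lock_guard gives the same exception-safe locking without a hand-rolled Smartlock. A minimal sketch:

#include <mutex>

class Singleton {
public:
    void synchronizedFunction()
    {
        std::lock_guard<std::mutex> lock(m_mutex); // unlocked automatically on scope exit
        //.....
    }
private:
    std::mutex m_mutex;
};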
I hope that you find this helpful.
Thanks!!