I have a question about Singletons and multithreaded programming in C++.
Below is example code of a Singleton class with a member variable named shared.
I create 1000 threads that each modify (+1) that variable of my global Singleton instance. The final value of shared is 1000, but I would expect it to be under 1000, since I am not protecting this variable against concurrent access.
Is the code really thread safe because the class is a Singleton, or did I just get lucky, so that the value happens to be 1000 but could perfectly well be less?
#include <iostream>
#include <pthread.h>

using namespace std;

class Singleton {
private:
    Singleton() { shared = 0; }
    static Singleton * _instance;
    int shared;
public:
    static Singleton* Instance();
    void increaseShared () { shared++; }
    int getSharedValue () { return shared; }
};

// Global static pointer used to ensure a single instance of the class.
Singleton* Singleton::_instance = NULL;

Singleton * Singleton::Instance() {
    if (!_instance) {
        _instance = new Singleton;
    }
    return _instance;
}

void * myThreadCode (void * param) {
    Singleton * theInstance;
    theInstance = Singleton::Instance();
    theInstance->increaseShared();
    return NULL;
}

int main(int argc, const char * argv[]) {
    pthread_t threads[1000];
    Singleton * theInstance = Singleton::Instance();
    for (int i = 0; i < 1000; i++) {
        pthread_create(&threads[i], NULL, &myThreadCode, NULL);
    }
    cout << "The shared value is: " << theInstance->getSharedValue() << endl;
    return 0;
}
Is the code really thread safe because the class is a Singleton, or did I just get lucky, so that the value happens to be 1000 but could perfectly well be less?
You got lucky...
In reality, the most likely explanation for what you're observing is that the time it takes to increment the value in your singleton on your specific machine is less than the time it takes the operating system to allocate the resources to launch an individual pthread. Thus you never ended up with a scenario where two threads contend for the unprotected resources of the singleton.
A much better test would have been to launch all of your pthreads first, have them block on a barrier or condition variable, and then perform the increment on the singleton once the barrier's condition of all threads being "active" is met. At that point you would have been much more likely to see the sort of data races that occur with non-atomic operations like an increment.
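For illustration, here is a minimal sketch of that kind of stress test (reusing the Singleton class from the question; the barrier and helper names are just for the example), so that all threads hit the unprotected increment at roughly the same moment:
#include <pthread.h>

static pthread_barrier_t barrier;        // every thread waits here before touching the singleton

void * myThreadCode (void * param) {
    Singleton * theInstance = Singleton::Instance();
    pthread_barrier_wait(&barrier);      // release all threads at (roughly) the same time
    theInstance->increaseShared();       // unprotected increment: the race now actually happens
    return NULL;
}

int main() {
    const int kThreads = 1000;
    pthread_t threads[kThreads];
    pthread_barrier_init(&barrier, NULL, kThreads);
    Singleton * theInstance = Singleton::Instance();
    for (int i = 0; i < kThreads; i++)
        pthread_create(&threads[i], NULL, &myThreadCode, NULL);
    for (int i = 0; i < kThreads; i++)
        pthread_join(threads[i], NULL);  // wait for all increments before reading the result
    cout << "The shared value is: " << theInstance->getSharedValue() << endl;
    pthread_barrier_destroy(&barrier);
    return 0;
}
Run repeatedly, this version is far more likely to print a value below 1000.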
If you implement your Singleton like this, creation of the singleton will be thread safe (C++11 guarantees thread-safe initialization of function-local statics):
Singleton & Singleton::Instance() {
    static Singleton instance;
    return instance;
}
Since the instance can never be null and there is no memory for the caller to manage, a reference is returned instead of a pointer.
The increment operation can be made atomic with platform-specific operations (g++ provides built-ins, e.g. __sync_fetch_and_add), with C++11 std::atomic, with Boost.Atomic, or with mutex guards:
std::atomic<int> shared;
void increaseShared () { ++shared; }
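Putting the two pieces together, here is a hedged sketch of what the fixed class could look like in C++11 (illustrative only, not the only way to write it):
#include <atomic>

class Singleton {
private:
    Singleton() : shared(0) {}
    std::atomic<int> shared;                 // atomic counter, no external lock needed
public:
    static Singleton& Instance() {
        static Singleton instance;           // initialization is thread safe in C++11
        return instance;
    }
    void increaseShared () { ++shared; }
    int getSharedValue () { return shared.load(); }
};
Independently of atomicity, note that the original main() prints the counter without joining the threads first, so the value it reads can legitimately be lower than 1000 simply because some threads haven't finished yet.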
Related
I have the following code:
MyClass.h:
static MyMutex instanceMutex;
static MyClass* getInstance();
static void deleteInstance();
MyClass.c:
MyMutex MyClass::instanceMutex;
MyClass* MyClass::getInstance()
{
if (theInstance == 0)
{
instanceMutex.acquire();
if (theInstance == 0)
{
theInstance = new MyClass();
}
instanceMutex.release();
}
return theInstance;
}
void MyClass::deleteInstance()
{
if (theInstance != 0)
{
instanceMutex.acquire();
if (theInstance != 0)
{
theInstance->finalize();
delete theInstance;
theInstance = 0;
}
instanceMutex.release();
}
return;
}
I have 2 questions on this:
Is the above code correct and safe?
After I call 'delete theInstance' in MyClass::deleteInstance(), I then call
theInstance = 0;
instanceMutex.release();
But if the instance is deleted, then how is that even possible? Isn't the memory of the class gone?
A singleton is defined to have exactly one instance; if you delete it, that count drops to zero.
So it seems you should not support deletion at all.
Here is the problem:
if (theInstance == 0)            // <- Some other thread might see a non-null theInstance
{                                //    before the constructor below returns.
    instanceMutex.acquire();
    if (theInstance == 0)
    {
        theInstance = new MyClass();   // <- This might set theInstance to something
                                       //    before the constructor returns.
    }
    instanceMutex.release();
}
You may want to implement some sort of reference-counting system (for example using shared_ptr), initialized in a similar manner, while taking care that the instance pointer is not set before the initialization completes.
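For example, here is a hedged sketch of that idea in C++11, using std::call_once for the one-time initialization and std::shared_ptr for the reference counting (the names mirror the question but are illustrative):
#include <memory>
#include <mutex>

class MyClass {
public:
    static std::shared_ptr<MyClass> getInstance()
    {
        std::call_once(initFlag, [] {
            // Runs exactly once, even if many threads race here;
            // the pointer is only published after construction finishes.
            instance.reset(new MyClass());
        });
        return instance;                 // each caller shares ownership
    }
private:
    MyClass() {}
    static std::once_flag initFlag;
    static std::shared_ptr<MyClass> instance;
};

std::once_flag MyClass::initFlag;
std::shared_ptr<MyClass> MyClass::instance;
A deleteInstance() could then just reset the static shared_ptr (itself under synchronization); any thread still holding a copy keeps the object alive until it lets go.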
If you are feeling adventurous, you could also try:
if (theInstance == 0)
{
    instanceMutex.acquire();
    if (theInstance == 0)
    {
        MyClass * volatile ptr = new MyClass();
        __FuglyCompilerSpecificFenceHintThatMightNotActuallyDoAnything();
        theInstance = ptr;   // <- Much hilarity may ensue if this write is not atomic.
    }
    instanceMutex.release();
}
This might fix it, and it might not. In the second case, it depends on how your compiler handles volatile, and whether or not pointer-sized writes are atomic.
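With C++11 you can skip the compiler-specific guesswork entirely; here is a hedged sketch of double-checked locking done with std::atomic and acquire/release ordering (std::mutex stands in for the MyMutex in the question):
#include <atomic>
#include <mutex>

static std::atomic<MyClass*> theInstance(nullptr);
static std::mutex instanceMutex;

MyClass* getInstance()
{
    MyClass* p = theInstance.load(std::memory_order_acquire);   // fast path, no lock
    if (p == nullptr)
    {
        std::lock_guard<std::mutex> lock(instanceMutex);
        p = theInstance.load(std::memory_order_relaxed);         // re-check under the lock
        if (p == nullptr)
        {
            p = new MyClass();
            theInstance.store(p, std::memory_order_release);      // publish only after construction
        }
    }
    return p;
}
The release store pairs with the acquire load, so no thread can see a non-null pointer to a half-constructed object.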
In one project we inherited from an external company, I saw a whole nightmare class of errors caused by someone deleting the singleton. Perhaps in your rare case deleting the singleton has no side effects for the whole application, because you can be sure no part of the application is using the singleton at that moment, but in general it's an excellent example of bad design. Someone could learn from your example, and it would be a bad, even harmful, lesson. If you need to clean something up in the singleton when the application exits, use the pattern that returns a reference to the singleton, as in this example:
#include <iostream>
using namespace std;

class singleton {
private:
    singleton() {
        cout << "constructor\n";
    }
    ~singleton() {
        cout << "destructor\n";
    }
public:
    static singleton &getInstance() {
        static singleton instance;
        cout << "instance\n";
        return instance;
    }
    void fun() {
        cout << "fun\n";
    }
};

int main() {
    cout << "one\n";
    singleton &s = singleton::getInstance();
    cout << "two\n";
    s.fun();
    return 0;
}
I prefer to implement singletons in C++ in the following manner:
class Singleton
{
public:
    static Singleton& instance()
    {
        static Singleton theInstance;
        return theInstance;
    }

    ~Singleton()
    {
        // Free resources here that live beyond the process's life cycle,
        // i.e. resources that won't automatically be released when the owning
        // process dies (is killed)!
        // Examples you usually don't have to care about are:
        // - freeing virtual memory
        // - freeing file descriptors (of any kind: plain fd, socket fd, whatever)
        // - destroying child processes or threads
    }

private:
    Singleton()
    {
    }

    // Forbid copies and assignment
    Singleton(const Singleton&);
    Singleton& operator=(const Singleton&);
};
You can also add locking to prevent concurrent instantiation from multiple threads (you will need a static mutex, of course!). But deletion is left to the (OS-specific) process context here.
Usually you don't need to care about deleting singleton classes or how their acquired resources are released, because this is all handled by the OS when the process dies.
Still, there might be use cases where you want your singleton classes to provide some fallback cleanup for program-exit or crash situations. Most C++ CRT implementations support calling the destructors of statically allocated instances, but you should take care not to have ordered dependencies between any of those destructors.
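As a small, purely illustrative example of such an ordered dependency (the Logger/Config names are hypothetical): if one static's destructor uses another singleton that was constructed later, it touches an already destroyed object at exit.
#include <iostream>

struct Config {
    static Config& instance() { static Config c; return c; }
    int verbosity = 1;
};

struct Logger {
    static Logger& instance() { static Logger l; return l; }
    ~Logger() {
        // Risky: function-local statics are destroyed in reverse order of
        // construction, so if Config was constructed after Logger, it is
        // already gone when this destructor runs.
        std::cout << "verbosity was " << Config::instance().verbosity << "\n";
    }
};

int main() {
    Logger::instance();   // constructed first, so destroyed last
    Config::instance();   // constructed second, so destroyed first
    return 0;             // at exit: Config is destroyed, then ~Logger() runs and touches it
}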
getInstance and deleteInstance should be static, so they can only work with static members of the class. Static members don't get destroyed with the instance.
If your code is multithreaded, this is not safe: there's no way to make sure no pointer to the instance is still held in some running context.
I have a variable in my function that is static, but I would like it to be static on a per-thread basis.
How can I allocate the memory for my C++ class such that each thread has its own copy of the class instance?
void AnotherClass::threadSpecificAction()
{
    // How to allocate this with thread-local storage?
    static MyClass *instance = new MyClass();
    instance->doSomething();
}
This is on Linux. I'm not using C++0x and this is gcc v3.4.6.
#include <boost/thread/tss.hpp>

static boost::thread_specific_ptr<MyClass> instance;

void AnotherClass::threadSpecificAction()
{
    if (!instance.get()) {
        // First time called by this thread:
        // construct the object to be used in all subsequent calls from this thread.
        instance.reset(new MyClass);
    }
    instance->doSomething();
}
It is worth noting that C++11 introduces the thread_local keyword.
Here is an example from Storage duration specifiers:
#include <iostream>
#include <string>
#include <thread>
#include <mutex>

thread_local unsigned int rage = 1;
std::mutex cout_mutex;

void increase_rage(const std::string& thread_name)
{
    ++rage;
    std::lock_guard<std::mutex> lock(cout_mutex);
    std::cout << "Rage counter for " << thread_name << ": " << rage << '\n';
}

int main()
{
    std::thread a(increase_rage, "a"), b(increase_rage, "b");
    increase_rage("main");
    a.join();
    b.join();
    return 0;
}
Possible output:
Rage counter for a: 2
Rage counter for main: 2
Rage counter for b: 2
boost::thread_specific_ptr is the best way, as it is a portable solution.
On Linux and GCC you may use the __thread modifier instead.
Your instance variable would then look like this (note that __thread permits only constant initializers, so the object has to be allocated lazily):
static __thread MyClass *instance = NULL;   // __thread cannot run a dynamic initializer
if (!instance) instance = new MyClass();
If you're using Pthreads you can do the following:
// Declare static data members.
pthread_key_t AnotherClass::key_value;
pthread_once_t AnotherClass::key_init_once = PTHREAD_ONCE_INIT;

// Declare static function.
void AnotherClass::init_key()
{
    // While you can pass NULL as the second argument, you should pass some
    // valid destructor function that can properly delete a MyClass pointer.
    pthread_key_create(&key_value, NULL);
}

void AnotherClass::threadSpecificAction()
{
    // Initialize the key exactly once.
    pthread_once(&key_init_once, init_key);

    // This is where the thread-specific pointer is obtained.
    // If storage has already been allocated, it won't return NULL.
    MyClass *instance = NULL;
    if ((instance = (MyClass*)pthread_getspecific(key_value)) == NULL)
    {
        instance = new MyClass;
        pthread_setspecific(key_value, (void*)instance);
    }
    instance->doSomething();
}
If you're working with MSVC++, you can read Thread Local Storage (TLS)
And then you can see this example.
Also, be aware of the Rules and Limitations for TLS
C++11 specifies the thread_local storage duration; just use it:
void AnotherClass::threadSpecificAction()
{
    thread_local MyClass *instance = new MyClass();
    instance->doSomething();
}
An optional optimization is to also place the object itself in thread-local storage rather than only the pointer to it.
On Windows you can use TlsAlloc and TlsFree to allocate a slot in a thread's local storage.
To set and retrieve values with TLS, you can use TlsSetValue and TlsGetValue, respectively.
Here you can see an example of how it would be used.
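As a hedged sketch of that Win32 usage (error handling omitted; MyClass and threadSpecificAction are the names from the question):
#include <windows.h>

static DWORD tlsIndex = TLS_OUT_OF_INDEXES;

void initTls()                               // call once at startup
{
    tlsIndex = TlsAlloc();
}

void AnotherClass::threadSpecificAction()
{
    MyClass* instance = static_cast<MyClass*>(TlsGetValue(tlsIndex));
    if (instance == NULL)
    {
        instance = new MyClass();            // first call on this thread
        TlsSetValue(tlsIndex, instance);
    }
    instance->doSomething();
}

// Each thread should delete its MyClass before it exits, and TlsFree(tlsIndex)
// should be called once no thread needs the slot any more.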
Just a side note...
MSVC++ has supported __declspec(thread) since VC++ 2005:
#if (_MSC_VER >= 1400)
#ifndef thread_local
#define thread_local __declspec(thread)
#endif
#endif
The main problem (which boost::thread_specific_ptr solves) is that variables marked with it cannot have a constructor or destructor.
Folly (Facebook Open-source Library) has a portable implementation of Thread Local Storage.
According to its authors:
Improved thread local storage for non-trivial types (similar speed as
pthread_getspecific but only consumes a single pthread_key_t, and 4x faster
than boost::thread_specific_ptr).
If you're looking for a portable implementation of thread-local storage, this library is a good option.
I received code that was not written for a multi-threaded application; now I have to modify it to support multi-threading.
I have a Singleton class (MyCenterSigltonClass) that I made thread-safe based on the instructions at:
http://en.wikipedia.org/wiki/Singleton_pattern
Looking inside the class, I see it contains 10-12 members, some with getter/setter methods.
Some members are declared static and are class pointers, like:
static Class_A* f_static_member_a;
static Class_B* f_static_member_b;
For these members, I defined a mutex (like mutex_a) INSIDE the class (Class_A); I didn't add the mutex directly to MyCenterSigltonClass, the reason being that they have a one-to-one association with MyCenterSigltonClass. I think I have the option of defining the mutex for f_static_member_a either in MyCenterSigltonClass or in Class_A.
1) Am I right?
Also, my Singleton class (MyCenterSigltonClass) contains some other members, like:
Class_C f_classC;
For these kinds of member variables, should I define a mutex for each of them in MyCenterSigltonClass to make them thread-safe? What would be a good way to handle these cases?
I'd appreciate any suggestions.
-Nima
Whether the members are static or not doesn't really matter. How you protect the member variables really depends on how they are accessed from public methods.
You should think about a mutex as a lock that protects some resource from concurrent read/write access. You don't need to think about protecting the internal class objects necessarily, but the resources within them. You also need to consider the scope of the locks you'll be using, especially if the code wasn't originally designed to be multithreaded. Let me give a few simple examples.
class A
{
private:
    int mValuesCount;
    int* mValues;

public:
    A(int count, int* values)
    {
        mValuesCount = count;
        mValues = (count > 0) ? new int[count] : NULL;
        if (mValues)
        {
            memcpy(mValues, values, count * sizeof(int));
        }
    }

    int getValues(int count, int* values) const
    {
        if (mValues && values)
        {
            memcpy(values, mValues,
                   ((count < mValuesCount) ? count : mValuesCount) * sizeof(int));
        }
        return mValuesCount;
    }
};

class B
{
private:
    A* mA;

public:
    B()
    {
        int values[5] = { 1, 2, 3, 4, 5 };
        mA = new A(5, values);
    }

    const A* getA() const { return mA; }
};
In this code, there's no need to protect mA because there's no chance of conflicting access across multiple threads. None of the threads can modify the state of mA, so all concurrent access just reads from mA. However, if we modify class A:
class A
{
private:
    int mValuesCount;
    int* mValues;

public:
    A(int count, int* values)
    {
        mValuesCount = 0;
        mValues = NULL;
        setValues(count, values);
    }

    int getValues(int count, int* values) const
    {
        if (mValues && values)
        {
            memcpy(values, mValues,
                   ((count < mValuesCount) ? count : mValuesCount) * sizeof(int));
        }
        return mValuesCount;
    }

    void setValues(int count, int* values)
    {
        delete [] mValues;
        mValuesCount = count;
        mValues = (count > 0) ? new int[count] : NULL;
        if (mValues)
        {
            memcpy(mValues, values, count * sizeof(int));
        }
    }
};
We can now have multiple threads calling B::getA() and one thread can read from mA while another thread writes to mA. Consider the following thread interaction:
Thread A: a->getValues(maxCount, values);
Thread B: a->setValues(newCount, newValues);
It's possible that Thread B will delete mValues while Thread A is in the middle of copying it. In this case, you would need a mutex within class A to protect access to mValues and mValuesCount:
int getValues(int count, int* values) const
{
    // TODO: Lock mutex.
    if (mValues && values)
    {
        memcpy(values, mValues,
               ((count < mValuesCount) ? count : mValuesCount) * sizeof(int));
    }
    int returnCount = mValuesCount;
    // TODO: Unlock mutex.
    return returnCount;
}

void setValues(int count, int* values)
{
    // TODO: Lock mutex.
    delete [] mValues;
    mValuesCount = count;
    mValues = (count > 0) ? new int[count] : NULL;
    if (mValues)
    {
        memcpy(mValues, values, count * sizeof(int));
    }
    // TODO: Unlock mutex.
}
This will prevent concurrent read/write access to mValues and mValuesCount. Depending on the locking mechanisms available in your environment, you may be able to use a read-only locking mechanism in getValues() so that multiple readers do not block each other.
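As a hedged illustration of that read-mostly locking, here is what class A could look like with C++17's std::shared_mutex (pthread_rwlock_t would be the POSIX equivalent; this is a sketch, not the only option):
#include <cstring>
#include <mutex>
#include <shared_mutex>

class A
{
private:
    mutable std::shared_mutex mLock;   // many readers, or one writer
    int  mValuesCount = 0;
    int* mValues = nullptr;

public:
    int getValues(int count, int* values) const
    {
        std::shared_lock<std::shared_mutex> lock(mLock);   // readers don't block each other
        if (mValues && values)
            std::memcpy(values, mValues,
                        ((count < mValuesCount) ? count : mValuesCount) * sizeof(int));
        return mValuesCount;
    }

    void setValues(int count, int* values)
    {
        std::unique_lock<std::shared_mutex> lock(mLock);   // writers get exclusive access
        delete [] mValues;
        mValuesCount = count;
        mValues = (count > 0) ? new int[count] : nullptr;
        if (mValues)
            std::memcpy(mValues, values, count * sizeof(int));
    }
};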
However, you'll also need to understand the scope of the locking you need to implement if class A is more complex:
class A
{
private:
    int mValuesCount;
    int* mValues;

public:
    A(int count, int* values)
    {
        mValuesCount = 0;
        mValues = NULL;
        setValues(count, values);
    }

    int getValueCount() const { return mValuesCount; }

    int getValues(int count, int* values) const
    {
        if (mValues && values)
        {
            memcpy(values, mValues,
                   ((count < mValuesCount) ? count : mValuesCount) * sizeof(int));
        }
        return mValuesCount;
    }

    void setValues(int count, int* values)
    {
        delete [] mValues;
        mValuesCount = count;
        mValues = (count > 0) ? new int[count] : NULL;
        if (mValues)
        {
            memcpy(mValues, values, count * sizeof(int));
        }
    }
};
In this case, you could have the following thread interaction:
Thread A: int maxCount = a->getValueCount();
Thread A: // allocate memory for "maxCount" int values
Thread B: a->setValues(newCount, newValues);
Thread A: a->getValues(maxCount, values);
Thread A has been written as though calls to getValueCount() and getValues() will be an uninterrupted operation, but Thread B has potentially changed the count in the middle of Thread A's operations. Depending on whether the new count is larger or smaller than the original count, it may take a while before you discover this problem. In this case, class A would need to be redesigned or it would need to provide some kind of transaction support so the thread using class A could block/unblock other threads:
Thread A: a->lockValues();
Thread A: int maxCount = a->getValueCount();
Thread A: // allocate memory for "maxCount" int values
Thread B: a->setValues(newCount, newValues); // Blocks until Thread A calls unlockValues()
Thread A: a->getValues(maxCount, values);
Thread A: a->unlockValues();
Thread B: // Completes call to setValues()
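A hedged sketch of what that transaction-style interface could look like (lockValues/unlockValues are the hypothetical names from the interaction above; in real code a scoped guard object would be less error-prone than raw lock/unlock calls):
#include <mutex>

class A
{
private:
    mutable std::recursive_mutex mLock;   // recursive so a locking caller can still use the getters
    int  mValuesCount = 0;
    int* mValues = nullptr;

public:
    void lockValues()   const { mLock.lock(); }
    void unlockValues() const { mLock.unlock(); }

    int getValueCount() const
    {
        std::lock_guard<std::recursive_mutex> g(mLock);
        return mValuesCount;
    }

    // getValues()/setValues() would take the same mutex, so a thread that has
    // called lockValues() sees a stable count/values pair across several calls
    // while other threads block until unlockValues().
};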
Since the code wasn't initially designed to be multithreaded, it's very likely you'll run into these kinds of issues where one method call uses information from an earlier call, but there was never a concern for the state of the object changing between those calls.
Now, begin to imagine what could happen if there are complex state dependencies among the objects within your singleton and multiple threads can modify the state of those internal objects. It can all become very, very messy with a large number of threads and debugging can become very difficult.
So as you try to make your singleton thread-safe, you need to look at several layers of object interactions. Some good questions to ask:
Do any of the methods on the singleton reveal internal state that may change between method calls (as in the last example I mention)?
Are any of the internal objects revealed to clients of the singleton?
If so, do any of the methods on those internal objects reveal internal state that may change between method calls?
If internal objects are revealed, do they share any resources or state dependencies?
You may not need any locking if you're just reading state from internal objects (first example). You may need to provide simple locking to prevent concurrent read/write access (second example). You may need to redesign the classes or provide clients with the ability to lock object state (third example). Or you may need to implement more complex locking where internal objects share state information across threads (e.g. a lock on a resource in class Foo requires a lock on a resource in class Bar, but locking that resource in class Bar doesn't necessarily require a lock on a resource in class Foo).
Implementing thread-safe code can become a complex task depending on how all your objects interact. It can be much more complicated than the examples I've given here. Just be sure you clearly understand how your classes are used and how they interact (and be prepared to spend some time tracking down difficult to reproduce bugs).
If this is the first time you're doing threading, consider not accessing the singleton from the background thread. You can get it right, but you probably won't get it right the first time.
Realize that if your singleton exposes pointers to other objects, these should be made thread safe as well.
You don't have to define a mutex for each member. For example, you could instead use a single mutex to synchronize access to each member, e.g.:
class foo
{
public:
    ...
    void some_op()
    {
        // Acquire "lock_" and release it automatically via RAII.
        Lock guard(lock_);
        a_++;
    }

    void set_b(bar * b)
    {
        // Acquire "lock_" and release it automatically via RAII.
        Lock guard(lock_);
        b_ = b;
    }

private:
    int a_;
    bar * b_;
    mutex lock_;
};
Of course a "one lock" solution may be not suitable in your case. That's up to you to decide. Regardless, simply introducing locks doesn't make the code thread-safe. You have to use them in the right place in the right way to avoid race conditions, deadlocks, etc. There are lots of concurrency issues you could run in to.
Furthermore you don't always need mutexes, or other threading mechanisms like TSS, to make code thread-safe. For example, the following function "func" is thread-safe:
class Foo;

void func (Foo & f)
{
    f.some_op(); // Foo::some_op() of course needs to be thread-safe.
}

// Thread 1
Foo a;
func(a);

// Thread 2
Foo b;
func(b);
While func above is thread-safe, the operations it invokes may not be. The point is that you don't always need to pepper your code with mutexes and other threading mechanisms to make the code thread-safe; sometimes restructuring the code is sufficient.
There's a lot of literature on multithreaded programming. It's definitely not easy to get right so take your time in understanding the nuances, and take advantage of existing frameworks like Boost.Thread to mitigate some of the inherent and accidental complexities that exist in the lower-level multithreading APIs.
I'd really recommend the Interlocked... functions to increment, decrement, and compare-and-swap values in code that needs to be multi-thread-aware. I don't have first-hand C++ experience, but a quick search for http://www.bing.com/search?q=c%2B%2B+interlocked reveals lots of confirming advice. If you need performance, these will likely be faster than locking.
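For reference, a hedged sketch of the Win32 flavor of those calls for a shared counter (illustrative names; .NET exposes the same idea through its Interlocked class):
#include <windows.h>

static volatile LONG sharedCounter = 0;

void onEvent()
{
    InterlockedIncrement(&sharedCounter);    // atomic ++ without taking a lock
}

bool tryClaim(LONG expected, LONG desired)
{
    // Atomically: if sharedCounter == expected, replace it with desired.
    // The call returns the previous value, so compare to know if the swap happened.
    return InterlockedCompareExchange(&sharedCounter, desired, expected) == expected;
}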
As stated by #Void a mutex alone is not always the solution to a concurrency problem:
Regardless, simply introducing locks doesn't make the code
thread-safe. You have to use them in the right place in the right way
to avoid race conditions, deadlocks, etc. There are lots of
concurrency issues you could run in to.
I want to add another example:
class MyClass
{
    mutex m_mutex;
    AnotherClass m_anotherClass;

public:
    void setObject(AnotherClass& anotherClass)
    {
        m_mutex.lock();
        m_anotherClass = anotherClass;
        m_mutex.unlock();
    }

    AnotherClass getObject()
    {
        AnotherClass anotherClass;
        m_mutex.lock();
        anotherClass = m_anotherClass;
        m_mutex.unlock();
        return anotherClass;
    }
};
In this case the getObject() method is safe because it is protected with a mutex and it returns a copy of the object to the caller, which may be in a different class and thread. This means you are working on a copy that might be stale (in the meantime another thread might have changed m_anotherClass by calling setObject()). Now what if you turn m_anotherClass into a pointer instead of an object variable?
class MyClass
{
    mutex m_mutex;
    AnotherClass *m_anotherClass;

public:
    void setObject(AnotherClass *anotherClass)
    {
        m_mutex.lock();
        m_anotherClass = anotherClass;
        m_mutex.unlock();
    }

    AnotherClass * getObject()
    {
        AnotherClass *anotherClass;
        m_mutex.lock();
        anotherClass = m_anotherClass;
        m_mutex.unlock();
        return anotherClass;
    }
};
This is an example where a mutex is not enough to solve all the problems.
With pointers you only get a copy of the pointer; the pointed-to object is the same in both the caller and the method. So even if the pointer was valid at the time getObject() was called, you have no guarantee that the pointed-to value will still exist during the operation you are performing with it. This is simply because you don't have control over the object's lifetime. That's why you should use object variables as much as possible and avoid pointers (if you can).
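If a pointer member really is needed, one hedged way to get that control over the lifetime back is to hand out std::shared_ptr copies instead of raw pointers (same illustrative class names as above):
#include <memory>
#include <mutex>

class AnotherClass { /* ... */ };

class MyClass
{
    std::mutex m_mutex;
    std::shared_ptr<AnotherClass> m_anotherClass;

public:
    void setObject(std::shared_ptr<AnotherClass> anotherClass)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_anotherClass = anotherClass;   // the old object stays alive while anyone still holds it
    }

    std::shared_ptr<AnotherClass> getObject()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_anotherClass;           // the caller's copy keeps the object alive while it is used
    }
};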