What is the problem with having static variables (especially within functions) in multithreaded programs?
Thanks.
Initialization is not thread-safe. Two threads can enter the function and both may initialize the function-scope static variable. That's not good. There's no telling what the result might be.
In C++0x, initialization of function-scope static variables will be thread-safe; the first thread to call the function will initialize the variable and any other threads calling that function will need to block until that initialization is complete.
I don't think there are currently any compiler + standard library pairs that fully implement the C++0x concurrency memory model and the thread support and atomics libraries.
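To make the initialization race concrete, here is a minimal sketch (buildTable() is a made-up stand-in for an expensive initializer) of the kind of function-scope static that is dangerous before the C++0x guarantee:
#include <vector>

// Stand-in for an expensive, non-trivial initializer.
std::vector<int> buildTable() { return std::vector<int>(1000, 42); }

std::vector<int> &lookupTable()
{
    // Pre-C++0x the compiler typically guards this initialization with an
    // unsynchronized "already initialized?" flag: two threads arriving here
    // together may both run buildTable(), or one may observe a
    // half-constructed table.
    static std::vector<int> table = buildTable();
    return table;
}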
To pick an illustrative example at random, take an interface like asctime in the C library. The prototype looks like this:
char *
asctime(const struct tm *timeptr);
This implicitly must have some global buffer to store the characters in the char* returned. The most common and simple way to accomplish this would be something like:
char *
asctime(const struct tm *timeptr)
{
    static char buf[MAX_SIZE];
    /* TODO: convert timeptr into string */
    return buf;
}
This is totally broken in a multi-threaded environment, because buf will be at the same address for each call to asctime(). If two threads call asctime() at the same time, they run the risk of overwriting each other's results. Implicit in the contract of asctime() is that the characters of the string will stick around until the next call to asctime(), and concurrent calls break this.
There are some language extensions that work around this particular problem in this particular example via thread-local storage (__thread, __declspec(thread)). I believe this idea made it into C++0x as the thread_local keyword.
Even so, I would argue it's a bad design decision to use it this way, for reasons similar to why global variables are bad. Among other things, it is arguably a cleaner interface for the caller, rather than the callee, to own and provide this kind of state. These are subjective arguments, however.
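For illustration, POSIX already offers a reentrant variant, asctime_r, in exactly that caller-provides-the-buffer style. A minimal sketch of using it (26 bytes is the documented minimum buffer size for asctime-style strings):
#include <time.h>

void example(const struct tm *when)
{
    char buf[26];            /* caller-owned storage for the result          */
    asctime_r(when, buf);    /* the string is written into the caller's      */
                             /* buffer, so concurrent calls from different   */
                             /* threads cannot clobber each other's results  */
}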
A static variable usually means multiple invocations of your function would share a state and thus interfere with one another.
Normally you want your functions to be self contained; have local copies of everything they work on and share nothing with the outside world bar parameters and return values. (Which, if you think a certain way, aren't a part of the function anyway.)
Consider:
int add(int x, int y);
definitely thread-safe, local copies of x and y.
void print(const char *text, Printer *printer);
dangerous, someone outside might be doing something with the same printer, e.g. calling another print() on it.
void print(const char *text);
definitely non-thread-safe, two parallel invocations are guaranteed to use the same printer.
Of course, there are ways to secure access to shared resources (search keyword: mutex); this is just how your gut feeling should be.
Unsynchronized parallel writes to a variable are also non-thread-safe most of the time, as are a read and write. (search keywords: synchronization, synchronization primitives [of which mutex is but one], also atomicity/atomic operation for when parallel access is safe.)
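As a minimal sketch of that gut feeling turned into code (assuming C++11's <mutex> and a stand-in Printer type), the shared-printer version can be made safe by serializing access:
#include <cstdio>
#include <mutex>

struct Printer { void write(const char *text) { std::fputs(text, stdout); } };

Printer g_printer;           // the shared resource
std::mutex g_printerMutex;   // guards every access to g_printer

void print(const char *text)
{
    // Without the lock, two concurrent print() calls could interleave their
    // output; with it, access to the shared printer is serialized.
    std::lock_guard<std::mutex> lock(g_printerMutex);
    g_printer.write(text);
}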
Related
I have many overloaded functions in a class. In this case, should I declare the int32_t data as a member variable of the class, so I am not declaring it over and over in each function? The Fill function is always setting a value through reference, so I don't think I should need to declare it every time in every function.
There are about 20 more of these functions not listed here:
void TransmitInfo(TypeA &dp, Id &tc)
{
    //do lots of other work here
    int32_t data;
    while (dp.Fill(data)) //Fill accepts a reference variable, "data" gets populated
    {
        Transmit(tc, data);
    }
}

void TransmitInfo(TypeB &dp, Id &tc)
{
    //do lots of other work here
    int32_t data;
    while (dp.Fill(data))
    {
        Transmit(tc, data);
    }
}

void TransmitInfo(TypeC &dp, Id &tc)
{
    //do lots of other work here
    int32_t data;
    while (dp.Fill(data))
    {
        Transmit(tc, data);
    }
}
Scope is not the only thing to consider when choosing where to declare a variable. Just as important are the lifetime of the variable and when it is created.
When you declare a variable inside a function, it is created whenever that function is called, several times if need be (recursion!), and it's destroyed when that function exits. These creations and destructions are effectively no-ops for the CPU in the case of simple types such as int32_t.
When you declare it inside the class, you get only one copy of the variable per object you create. If one of your functions calls another (or itself), they will both use the same variable. You also increase the size of your objects; your variable will consume memory even when it's not used.
So, the bottom line is: Use the different kinds of variables for the purposes they were designed for.
If a function needs to remember something while it runs, it's a function variable.
If a function needs to remember something across its invocations, it's a static function variable.
If an object needs to remember something across its member invocations, it's a member variable.
If a class needs to remember something across all objects, it's a static class variable.
Anything else leads to chaos.
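As a small sketch of those four categories (all names here are made up for illustration):
class Widget {
public:
    Widget() : m_workCount(0) { ++s_widgetCount; }

    void work()
    {
        int scratch = 0;            // function variable: gone when work() returns
        static int totalCalls = 0;  // static function variable: shared by every call
                                    // to work(), from every Widget, program-wide
        ++totalCalls;
        ++m_workCount;              // member variable: one per Widget object
        scratch += m_workCount;
    }

private:
    int m_workCount;                // remembered across this object's member calls
    static int s_widgetCount;       // static class variable: remembered across all Widgets
};

int Widget::s_widgetCount = 0;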
You should refrain from using a member variable for any type of temporary data. The reason is that it guarantees your code is not thread-safe, and in this day and age of parallel computing, that is a major disadvantage. The cost of allocating an int32_t is so small as to be negligible, so it is usually better to allocate it inside the function and keep the code thread-safe. Before a single int allocation becomes noticeable you will have to allocate it well over a million times, and even then the total loss will be in microseconds.
If you're experiencing such difficulty with optimization that you have to resort to this degree of micro-optimization, then you should probably rework your algorithm so that it scales better, rather than spending massive amounts of time optimizing something that is not a choke point. (You would also be better off using a good concurrent algorithm, as opposed to shaving picoseconds off of a serial algorithm.)
Absolutely do not do this. If it's only a temporary for the life of a function then keep it local.
Otherwise you'll cause more problems than you solve, e.g. multithreading and serialisation issues.
Leave such micro-optimisations to the compiler.
Recently a fellow worker showed me code like this:
void SomeClass::function()
{
    static bool init = false;
    if (!init)
    {
        // hundreds of lines of ugly code
    }
    init = true;
}
He wants to check whether SomeClass is initialized in order to execute some piece of code once per SomeClass instance, but the fact is that only one instance of SomeClass will exist during the lifetime of the program.
His question was about the static init variable and when it is initialized. I answered that the initialization occurs once, so the value will be false at the first call and true for the rest of its lifetime. After answering, I added that such use of static variables is bad practice, but I wasn't able to explain why.
The reasons that I've been thinking so far are the following:
The behaviour of static bool init into SomeClass::function could be achieved with a non-static member variable.
Other functions in SomeClass couldn't check the static bool init value because its visibility is limited to the void SomeClass::function() scope.
The static variables aren't OOPish because they define a global state instead of an object state.
These reasons look weak and not very concrete to me, so I'm asking for more reasons to explain why the use of static variables in function and member-function scope is bad practice.
Thanks!
This is certainly a rare occurrence, at least, in good quality code, because of the narrow case for which it's appropriate. What this basically does is a just-in-time initialization of a global state (to deliver some global functionality). A typical example of this is having a random number generator function that seeds the generator at the first call to it. Another typical use of this is a function that returns the instance of a singleton, initialized on the first call. But other use-case examples are few and far between.
In general terms, global state is not desirable, and having objects that contain self-sufficient states is preferred (for modularity, etc.). But if you need global state (and sometimes you do), you have to implement it somehow. If you need any kind of non-trivial global state, then you should probably go with a singleton class, and one of the preferred ways to deliver that application-wide single instance is through a function that delivers a reference to a local static instance initialized on the first call. If the global state needed is a bit more trivial, then doing the scheme with the local static bool flag is certainly an acceptable way to do it. In other words, I see no fundamental problem with employing that method, but I would naturally question its motivations (requiring a global state) if presented with such code.
As is always the case for global data, multi-threading will cause some problems with a simplistic implementation like this one. Naive introductions of global state are never inherently thread-safe, and this case is no exception; you'd have to take measures to address that specific problem. And that is part of the reason why global states are not desirable.
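As one possible remedy (a minimal C++11 sketch; the doOneTimeInit() helper is hypothetical and stands in for the "hundreds of lines of ugly code"), std::call_once makes the one-time initialization race-free:
#include <mutex>

class SomeClass {
public:
    void function();
private:
    static void doOneTimeInit();        // hypothetical: the one-time setup code
    static std::once_flag s_initFlag;
};

std::once_flag SomeClass::s_initFlag;

void SomeClass::doOneTimeInit() { /* formerly the hundreds of lines of ugly code */ }

void SomeClass::function()
{
    // Runs doOneTimeInit() exactly once, even if many threads enter
    // function() concurrently; later callers block until it has finished.
    std::call_once(s_initFlag, &SomeClass::doOneTimeInit);

    // ... rest of function()
}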
The behaviour of static bool init into SomeClass::function could be achieved with a non-static member variable.
If there is an alternative to achieve the same behavior, then the two alternatives have to be judged on the technical issues (like thread-safety). But in this case, the required behavior is the questionable thing, more so than the implementation details, and the existence of alternative implementations doesn't change that.
Second, I don't see how you can replace a just-in-time initialization of a global state by anything that is based on a non-static data member (a static data member, maybe). And even if you can, it would be wasteful (require per-object storage for a one-time-per-program-execution thing), and on that ground alone, wouldn't make it a better alternative.
Other functions in SomeClass couldn't check the static bool init value because it's visibility is limited to the void SomeClass::function() scope.
I would generally put that in the "Pro" column (as in Pro/Con). This is a good thing. This is information hiding or encapsulation. If you can hide away things that shouldn't be a concern to others, then great! But if there are other functions that would need to know that the global state has already been initialized or not, then you probably need something more along the lines of a singleton class.
The static variables aren't OOPish because they define a global state instead of a object state.
OOPish or not, who cares? But yes, the global state is the concern here, not so much the use of a local static variable to implement its initialization. Global states, especially mutable global states, are bad in general and should be avoided where possible. They hinder modularity (modules are less self-sufficient if they rely on global states), they introduce multi-threading concerns since they are inherently shared data, they make any function that uses them non-reentrant (non-pure), they make debugging difficult, etc... the list goes on. But most of these issues are not tied to how you implement it. On the other hand, using a local static variable is a good way to solve the static-initialization-order fiasco, so statics are good for that reason; it's one less problem to worry about when introducing a (well-justified) global state into your code.
Think multi-threading. This type of code is problematic when function() can be called concurrently by multiple threads. Without locking, you're open to race conditions; with locking, concurrency can suffer for no real gain.
Global state is probably the worst problem here. Other functions don't have to be concerned with it, so that's not an issue. The fact that it could be achieved without a static variable essentially means you have made some form of a singleton, which of course introduces all the problems that singletons have, like being totally unsuitable for a multithreaded environment, for one.
Adding to what others said, you can't have multiple objects of this class at the same time, or at least they would not behave as expected. The first instance would set the static variable and do the initialization. The instances created later, though, would not have their own copy of init but would share it with all other instances. Since the first instance set it to true, none of the following ones would do any initialization, which is most probably not what you want.
What are the C++98 and C++11 memory models for local arrays and interactions with threads?
I'm not referring to the C++11 thread_local keyword, which pertains to global and static variables.
Instead, I'd like to find out what is the guaranteed behavior of threads for arrays that are allocated at compile-time. By compile-time I mean "int array[100]", which is different to allocation using the new[] keyword. I do not mean static variables.
For example, let's say I have the following struct/class:
struct xyz { int array[100]; };
and the following function:
void fn(int x) {
    xyz dog;
    for(int i=0; i<100; ++i) { dog.array[i] = x; }
    // do something else with dog.array, eg. call another function with dog as parameter
}
Is it safe to call fn() from multiple threads?
It seems that the C++ memory model is: all local non-static variables and arrays are allocated on the stack, and each thread has its own stack. Is this true (i.e. is this officially part of the standard)?
Such variables are allocated on the stack, and since each thread has its own stack, it is perfectly safe to use local arrays. They are not different from e.g. local ints.
C++98 didn't say anything about threads. Programs otherwise written to C++98 but which use threads do not have a meaning that is defined by C++98. Of course, it's sensible for threading extensions to provide stable, private local variables to threads, and they usually do. But there can exist threads for which this is not the case: for instance processes created by vfork on some Unix systems, whereby the parent and child will execute in the same stack frame, since the v in vfork means not to clone the address space, and vfork does not redirect the new process to a different function, either.
In C++11, there is threading support. Local variables in separate activation chains in separate C++11 threads do not interfere. But if you go outside the language and whip out vfork or anything resembling it, then all bets are off, like before.
But here is something. C++ has closures now. What if two threads both invoke the same closure? Then you have two threads sharing the same local variables. A closure is like an object and its captured local variables are like members. If two or more threads call the same closure, then you have a de facto multi-threaded object whose members (i.e. captured lexical variables) are shared.
A thread can also simply pass the address of its local variables to another thread, thereby causing them to become shared.
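A minimal sketch of the closure case (assuming C++11 std::thread): two threads invoking the same lambda end up sharing the captured local variable:
#include <iostream>
#include <thread>

void demo()
{
    int counter = 0;                        // a local variable of demo()

    // The closure captures counter by reference, so its "member" really is
    // demo()'s stack variable.
    auto bump = [&counter] { for (int i = 0; i < 100000; ++i) ++counter; };

    // Two threads invoking the same closure now share that local variable;
    // the unsynchronized ++counter is a data race.
    std::thread t1(bump), t2(bump);
    t1.join();
    t2.join();

    std::cout << counter << '\n';           // may print anything up to 200000
}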
What are disadvantages of using static variables like in following code:
namespace XXX
{
    static int i;
    class YYY
    {
        static int m_i;
    };
}
Is using static variables only in .cpp (so they are invisible to other code) file OK?
It totally depends on what you need to do. There is no general aversion to using statics. Note that statics are essentially the singleton pattern, so all the pros/cons about singletons apply.
In terms of threads you have to pay attention, since the same instance could be accessed by multiple threads at the same time. If you only need to read the data then you shouldn't have any problems. If you need to modify the data you have to worry about synchronization.
For code reuse and testing, singletons can often pose a problem. You can't, for example, recreate the object in a test, or have multiple instances running in parallel. In general, when I use singletons/statics, I try to ensure that having a single instance for the entire life of all tests, parallel executions, etc. is genuinely okay.
Invisible statics, as you call them (visible only to their compilation unit), are a good idea. They make it easier to manage synchronization between threads correctly. If they have global visibility, they can be modified from anywhere at any time (well, private variables can't, so those are also good).
Also note that atomic variables can be safely read/written from various threads. For simple counters an atomic increment is often a good solution. In C++0x you can use std::atomic; before that, use whatever atomic intrinsic your OS/compiler provides. As with the singleton pattern, you can design classes where each function is synchronized so the user doesn't have to worry about it. That is to say, statics aren't inherently thread-safe for writes; the synchronization has to be designed in.
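A minimal sketch of the atomic-counter idea, assuming the C++0x/C++11 <atomic> header:
#include <atomic>

static std::atomic<int> g_requestCount(0);   // file-scope, invisible outside this .cpp

// Safe to call from any number of threads: the increment is a single atomic
// read-modify-write, so no mutex is needed for this simple counter.
void noteRequest()
{
    ++g_requestCount;                         // equivalent to fetch_add(1)
}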
Various things allowed by C++ represent choices and compromises in software design. There is nothing inherently, absolutely evil about any of them (except perhaps throw specifications, which Prof. Stroustrup deems a broken facility ;-P).
Whether this particular code is a good choice in a particular situation depends on many factors. For example:
- there'll only be one copy shared by the entire program
  - may be ideal, e.g. you want to have some program-wide modal state
  - terrible: code setting/using the value may find other code has modified it meanwhile (this gets increasingly likely as the program gets larger and more complex)
  - terrible: in multi-threaded code you must use locking and/or atomic operations to avoid data corruption and consequent erroneous behaviour and/or crashes
- being static:
  - other translation units won't have access to it
  - it won't pollute the symbol table exposed by a good IDE etc.
In short, if you realise that namespaces and static just reduce identifier clashes, they've got all the other pros/cons associated with global variables....
You can also have problems with static class members when working with runtime shared libraries. For example, if you have:
class A {
    //...
    static int staticInt;
    //...
};
and you link this class both into the main executable and into a runtime shared library used by that executable (via dlopen), then in several cases you can get segmentation faults due to the loaded shared library reinitializing the main executable's copy of the static member. You can find more details concerning this issue here and there
For one thing, it's not thread-safe. You might want to consider the singleton pattern with some kind of locking to protect it.
One disadvantage you must be aware of is the so-called "static initialization order fiasco." Here is an example:
// foo.cpp
class foo {
public:
    static bar m_bar;
};
bar foo::m_bar;

// baz.cpp
class baz {
public:
    baz() {
        /* some code */
        foo::m_bar.someMethod();
    }
};
baz g_baz; //TROUBLE!!
The C++ standard makes no guarantee about which will be initialized first of m_bar and g_baz. If you're lucky and m_bar gets initialized first, g_baz will be constructed without a hitch. If not, your program will likely segfault, or worse.
Replacing m_bar with a method returning a static pointer constructed on first use circumvents this problem (NB: it's still not thread-safe):
class foo {
public:
    static bar &getBar() { if(!m_barinst) m_barinst = new bar; return *m_barinst; }
private:
    static bar *m_barinst;
};
bar *foo::m_barinst = NULL;
Is there a way to implement a singleton object in C++ that is:
Lazily constructed in a thread safe manner (two threads might simultaneously be the first user of the singleton - it should still only be constructed once).
Doesn't rely on static variables being constructed beforehand (so the singleton object is itself safe to use during the construction of static variables).
(I don't know my C++ well enough, but is it the case that integral and constant static variables are initialized before any code is executed (ie, even before static constructors are executed - their values may already be "initialized" in the program image)? If so - perhaps this can be exploited to implement a singleton mutex - which can in turn be used to guard the creation of the real singleton..)
Excellent, it seems that I have a couple of good answers now (shame I can't mark 2 or 3 as being the answer). There appear to be two broad solutions:
Use static initialisation (as opposed to dynamic initialisation) of a POD static variable, and implement my own mutex with that using the built-in atomic instructions. This was the type of solution I was hinting at in my question, and which I believe I already knew about.
Use some other library function like pthread_once or boost::call_once. These I certainly didn't know about - and am very grateful for the answers posted.
Basically, you're asking for synchronized creation of a singleton, without using any synchronization (previously-constructed variables). In general, no, this is not possible. You need something available for synchronization.
As for your other question, yes, static variables which can be statically initialized (i.e. no runtime code necessary) are guaranteed to be initialized before other code is executed. This makes it possible to use a statically-initialized mutex to synchronize creation of the singleton.
From the 2003 revision of the C++ standard:
Objects with static storage duration (3.7.1) shall be zero-initialized (8.5) before any other initialization takes place. Zero-initialization and initialization with a constant expression are collectively called static initialization; all other initialization is dynamic initialization. Objects of POD types (3.9) with static storage duration initialized with constant expressions (5.19) shall be initialized before any dynamic initialization takes place. Objects with static storage duration defined in namespace scope in the same translation unit and dynamically initialized shall be initialized in the order in which their definition appears in the translation unit.
If you know that you will be using this singleton during the initialization of other static objects, I think you'll find that synchronization is a non-issue. To the best of my knowledge, all major compilers initialize static objects in a single thread, so thread-safety is not a concern during static initialization. You can declare your singleton pointer to be NULL, and then check to see if it's been initialized before you use it.
However, this assumes that you know that you'll use this singleton during static initialization. This is also not guaranteed by the standard, so if you want to be completely safe, use a statically-initialized mutex.
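As a minimal sketch of that approach, using a POSIX mutex with PTHREAD_MUTEX_INITIALIZER (that is static initialization, so the mutex is usable before any dynamic initializers run; the Config type is a made-up stand-in):
#include <cstddef>
#include <pthread.h>

struct Config { int verbosity; };                // stand-in for the real singleton type

static pthread_mutex_t g_configMutex = PTHREAD_MUTEX_INITIALIZER;  // static initialization
static Config *g_config = NULL;                  // zero-initialized before any code runs

Config *getConfig()
{
    pthread_mutex_lock(&g_configMutex);
    if (g_config == NULL)
        g_config = new Config();                 // constructed lazily, under the lock
    pthread_mutex_unlock(&g_configMutex);
    return g_config;
}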
Edit: Chris's suggestion to use an atomic compare-and-swap would certainly work. If portability is not an issue (and creating additional temporary singletons is not a problem), then it is a slightly lower overhead solution.
Unfortunately, Matt's answer features what's called double-checked locking which isn't supported by the C/C++ memory model. (It is supported by the Java 1.5 and later — and I think .NET — memory model.) This means that between the time when the pObj == NULL check takes place and when the lock (mutex) is acquired, pObj may have already been assigned on another thread. Thread switching happens whenever the OS wants it to, not between "lines" of a program (which have no meaning post-compilation in most languages).
Furthermore, as Matt acknowledges, he uses an int as a lock rather than an OS primitive. Don't do that. Proper locks require the use of memory barrier instructions, potentially cache-line flushes, and so on; use your operating system's primitives for locking. This is especially important because the primitives used can change between the individual CPU lines that your operating system runs on; what works on a CPU Foo might not work on CPU Foo2. Most operating systems either natively support POSIX threads (pthreads) or offer them as a wrapper for the OS threading package, so it's often best to illustrate examples using them.
If your operating system offers appropriate primitives, and if you absolutely need it for performance, instead of doing this type of locking/initialization you can use an atomic compare and swap operation to initialize a shared global variable. Essentially, what you write will look like this:
MySingleton *MySingleton::GetSingleton() {
    if (pObj == NULL) {
        // create a temporary instance of the singleton
        MySingleton *temp = new MySingleton();
        if (OSAtomicCompareAndSwapPtrBarrier(NULL, temp, &pObj) == false) {
            // if the swap didn't take place, delete the temporary instance
            delete temp;
        }
    }
    return pObj;
}
This only works if it's safe to create multiple instances of your singleton (one per thread that happens to invoke GetSingleton() simultaneously), and then throw the extras away. The OSAtomicCompareAndSwapPtrBarrier function provided on Mac OS X — most operating systems provide a similar primitive — checks whether pObj is NULL and only actually sets it to temp if it is. This uses hardware support to really, literally perform the swap only once and tell whether it happened.
Another facility to leverage if your OS offers it that's in between these two extremes is pthread_once. This lets you set up a function that's run only once - basically by doing all of the locking/barrier/etc. trickery for you - no matter how many times it's invoked or on how many threads it's invoked.
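A minimal sketch of pthread_once in that role (Registry is a made-up shared type):
#include <pthread.h>

struct Registry { /* hypothetical shared object */ };

static pthread_once_t g_once = PTHREAD_ONCE_INIT;
static Registry *g_registry = 0;

// pthread_once guarantees createRegistry() runs exactly once, even if many
// threads reach getRegistry() at the same time; the others wait until it is done.
static void createRegistry()
{
    g_registry = new Registry();
}

Registry *getRegistry()
{
    pthread_once(&g_once, createRegistry);
    return g_registry;
}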
Here's a very simple lazily constructed singleton getter:
Singleton *Singleton::self() {
    static Singleton instance;
    return &instance;
}
This is lazy, and the next C++ standard (C++0x) requires it to be thread safe. In fact, I believe that at least g++ implements this in a thread safe manner. So if that's your target compiler or if you use a compiler which also implements this in a thread safe manner (maybe newer Visual Studio compilers do? I don't know), then this might be all you need.
Also see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2513.html on this topic.
You can't do it without any static variables, however if you are willing to tolerate one, you can use Boost.Thread for this purpose. Read the "one-time initialisation" section for more info.
Then in your singleton accessor function, use boost::call_once to construct the object, and return it.
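For illustration, a minimal sketch using the flag-first boost::call_once overload (as provided by recent Boost.Thread); the names here are made up:
#include <boost/thread/once.hpp>

class Singleton {
public:
    static Singleton &instance()
    {
        // Runs create() exactly once across all threads; later callers wait
        // until the flag has been set, then just return the existing object.
        boost::call_once(s_flag, &Singleton::create);
        return *s_instance;
    }
private:
    static void create() { s_instance = new Singleton(); }
    static boost::once_flag s_flag;
    static Singleton *s_instance;
};

boost::once_flag Singleton::s_flag = BOOST_ONCE_INIT;
Singleton *Singleton::s_instance = 0;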
For gcc, this is rather easy:
LazyType* GetMyLazyGlobal() {
    static LazyType* instance = new LazyType();
    return instance;
}
GCC will make sure that the initialization is atomic. For VC++, this is not the case. :-(
One major issue with this mechanism is the lack of testability: if you need to reset the LazyType to a new one between tests, or want to change the LazyType* to a MockLazyType*, you won't be able to. Given this, it's usually best to use a static mutex + static pointer.
Also, possibly an aside: It's best to always avoid static non-POD types. (Pointers to PODs are OK.) The reasons for this are many: as you mention, initialization order isn't defined -- neither is the order in which destructors are called though. Because of this, programs will end up crashing when they try to exit; often not a big deal, but sometimes a showstopper when the profiler you are trying to use requires a clean exit.
While this question has already been answered, I think there are some other points to mention:
If you want lazy-instantiation of the singleton while using a pointer to a dynamically allocated instance, you'll have to make sure you clean it up at the right point.
You could use Matt's solution, but you'd need to use a proper mutex/critical section for locking, and check "pObj == NULL" both before and after the lock. Of course, pObj would also have to be static ;). A mutex would be unnecessarily heavy in this case; you'd be better off going with a critical section.
But as already stated, you can't guarantee threadsafe lazy-initialisation without using at least one synchronisation primitive.
Edit: Yup Derek, you're right. My bad. :)
You could use Matt's solution, but you'd need to use a proper mutex/critical section for locking, and check "pObj == NULL" both before and after the lock. Of course, pObj would also have to be static ;). A mutex would be unnecessarily heavy in this case; you'd be better off going with a critical section.
OJ, that doesn't work. As Chris pointed out, that's double-check locking, which is not guaranteed to work in the current C++ standard. See: C++ and the Perils of Double-Checked Locking
Edit: No problem, OJ. It's really nice in languages where it does work. I expect it will work in C++0x (though I'm not certain), because it's such a convenient idiom.
- Read up on weak memory models. They can break double-checked locking and spinlocks. Intel has a strong memory model (so far), so on Intel it's easier.
- Carefully use "volatile" to avoid caching of parts of the object in registers; otherwise you'll have initialized the object pointer but not the object itself, and the other thread will crash.
- The order of static variable initialization versus shared library loading is sometimes not trivial. I've seen cases where the code to destruct an object had already been unloaded, so the program crashed on exit.
- Such objects are hard to destroy properly.
In general singletons are hard to do right and hard to debug. It's better to avoid them altogether.
I suppose saying "don't do this, because it's not safe and will probably break more often than just initializing this stuff in main()" isn't going to be that popular.
(And yes, I know that suggesting that means you shouldn't attempt to do interesting stuff in constructors of global objects. That's the point.)