Do global variable constructors/destructors need thread protection? - c++

If I have a class whose sole purpose is to have global static instances (to ensure the code in its constructor is run before main) and it uses a class static variable, does access to this variable need to be protected via mutex?
An example will help:
class WinSock
{
public:
    WinSock()
    {
        if(!(inst++))
        {
            //winsock init
        }
    }
    ~WinSock()
    {
        if(!--inst)
        {
            //winsock deactivate
        }
    }
private:
    static int inst = 0;
};
static WinSock unusedWinSockVar;
This is all in a header that is included by any file using winsock. Does access to inst need to be protected, or is it impossible for this code to be run from multiple threads since threads will be created only once main runs and destroyed before main returns?

Firstly, I don't think that private: static int inst = 0; is a valid construct; my compiler complains loudly. If you simply omitted, for brevity, that you have something like int WinSock::inst = 0; in some .cpp file in your project, then it's fine. If not, and your project compiles at all, there is a good chance that each translation unit will use a different variable, resulting in incorrect behavior.
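For reference, a minimal sketch of that split (the file names are illustrative):
// WinSock.h
class WinSock
{
public:
    WinSock()  { if (!inst++)  { /* winsock init */ } }
    ~WinSock() { if (!--inst)  { /* winsock deactivate */ } }
private:
    static int inst;          // declaration only; no in-class initializer here
};
static WinSock unusedWinSockVar;

// WinSock.cpp
int WinSock::inst = 0;        // the single definition the linker will use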
Secondly, if any of the static-object constructors creates a new thread, then you need to make your code thread-safe. From the C++ standard, §3.6.2:
If a program starts a thread (30.3), the subsequent initialization of a variable is unsequenced with respect to the initialization of a variable defined in a different translation unit. Otherwise, the initialization of a variable is indeterminately sequenced with respect to the initialization of a variable defined in a different translation unit.
Indeterminate sequencing means that the initializations have no particular order, but they will not overlap, so you don't need any additional safeguards. Being unsequenced means that constructors in different translation units might overlap, and therefore thread safety is required.
Thirdly, do you even need it done like this? Do you have other static objects that use winsock in their constructors? I really cannot think of any other reason to do it like that.

Given the specific scenario that you describe, this is fine without adding synchronization.
Your concern is whether Winsock is initialized before main runs and de-initialized after main returns; this is guaranteed to be the case. The code is also guaranteed to be called from only one thread, and the fact that there is only one thread makes synchronization useless.
If, on the other hand, other static global objects use Winsock (whether or not they spawn threads), that would of course be unsafe, but it wouldn't be any safer with a mutex either. The initialization takes place at an implementation-defined point in time before main.
Therefore, no static global object can use Winsock in a safe, well-defined way using this construct, since either way you don't know whether initialization occurred first. Synchronizing it doesn't change a thing for that detail.
Note: the initialization of inst inside the class declaration isn't allowed as it is.

Do I need a mutex for a static function?

I have a C++ class with a static function:
class Foo
{
public:
    static void bar(int &a)
    {
        a++;
    }
};
EDIT:
The variable passed as an argument is only used within the calling scope. So it is not accessed by another thread.
Do I have to use a mutex when I call this function from a separate thread?
Thanks.
Calling this function requires only thread-local resources: the thread's stack. Therefore the answer is no. If the int variable is accessible by more than the calling thread, you will need a mutex for the variable.
Whether a function is static has no bearing on whether calls to it need to be synchronised.
The determining factor is whether the function is re-entrant, and what you do with the data. In this case, the function is re-entrant (as it has no non-local state of its own, or in fact any state at all), and the data is owned/managed by the calling scope, so you will have to decide within the calling scope whether that integer requires protection.
But this is true whether bar is a static member, a non-static member, a free function, a macro, a cat, a black hole, or Jon Skeet's tumble dryer.
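To make that concrete, here is a minimal sketch (the names are invented here) of the contrasting case: if, unlike in the question's edit, the int were shared between threads, the calling scope would have to protect it:
#include <mutex>
#include <thread>

class Foo
{
public:
    static void bar(int &a) { a++; }    // unchanged: the function itself needs no lock
};

int shared_value = 0;                   // shared between the two threads below
std::mutex shared_value_mutex;          // protects shared_value, not bar()

int main()
{
    auto worker = []
    {
        for (int i = 0; i < 1000; ++i)
        {
            std::lock_guard<std::mutex> lock(shared_value_mutex);
            Foo::bar(shared_value);     // the calling scope decides to protect its own data
        }
    };
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
}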
I'd like to mention that a mutex is not the only thread synchronization primitive available, and in some scenarios it is far from the most appropriate one.
Provided synchronization is needed (see the two other answers on why it might be needed depending on usage), don't jump straight into the mutex world. For something as straightforward as a counter (which is what I see in the code), atomic variables (or, in their absence, atomic operations on non-atomic types) often provide better performance and more straightforward code.
In this particular case, incrementing a variable can easily be done in a thread-safe way with the following C++11 code:
static void bar(std::atomic<int>& a)
{
a.fetch_add(1, std::memory_order_relaxed);
}
The memory_order_relaxed used here is quite an aggressive choice and not necessarily applicable in general (however, it is often fine for counters); it is used here mostly for the sake of the example.
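If it helps, a small usage sketch (thread and iteration counts are arbitrary; both threads hammer the same counter with no mutex involved):
#include <atomic>
#include <thread>

static void bar(std::atomic<int>& a)
{
    a.fetch_add(1, std::memory_order_relaxed);
}

int main()
{
    std::atomic<int> counter{0};
    std::thread t1([&counter] { for (int i = 0; i < 1000; ++i) bar(counter); });
    std::thread t2([&counter] { for (int i = 0; i < 1000; ++i) bar(counter); });
    t1.join();
    t2.join();
    // counter.load() is now exactly 2000
}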

Race conditions in static local variables

I am currently reading Effective C++. There is a section about using static local variables and it says that if multiple threads access a static variable, there may be a race condition during initialization of that variable.
At least that is my interpretation. Is this true? In C# for example, initialization of a class static variable will never have a race condition.
For example, can this code have a race condition during the static variable initialization?
FileSystem& tfs()
{
static FileSystem fs;
return fs;
}
Below is the excerpt from the book.
Here's the technique applied to both tfs and tempDir:
class FileSystem { ... }; // as before
FileSystem& tfs() // this replaces the tfs object; it could be static in the FileSystem class
{
static FileSystem fs; // define and initialize a local static object
return fs; // return a reference to it
}
.
class Directory { ... }; // as before
Directory::Directory( params ) // as before, except references to tfs are now to tfs()
{
...
std::size_t disks = tfs().numDisks();
...
}
Directory& tempDir() // this replaces the tempDir object; it could be static in the Directory class
{
static Directory td; // define/initialize local static object
return td; // return reference to it
}
Clients of this modified system program exactly as they used to,
except they now refer to tfs() and tempDir() instead of tfs and
tempDir. That is, they use functions returning references to objects
instead of using the objects themselves.
The reference-returning functions dictated by this scheme are always
simple: define and initialize a local static object on line 1, return
it on line 2. This simplicity makes them excellent candidates for
inlining, especially if they're called frequently (see Item 30). On
the other hand, the fact that these functions contain static objects
makes them problematic in multithreaded systems. Then again, any kind
of non-const static object — local or non-local — is trouble waiting
to happen in the presence of multiple threads. One way to deal with
such trouble is to manually invoke all the reference-returning
functions during the single-threaded startup portion of the program.
This eliminates initialization-related race conditions.
This section is outdated. The C++03 standard had no mention of threads, so when C++ implementations added them, they did whatever they wanted to with regards to the thread-safety of language constructs. A common choice was to not ensure thread-safe initialisation of static local variables.
In C++11, local static variables are guaranteed to be initialised exactly once, the first time that the program's control flow passes through their declaration, even if this happens concurrently on multiple threads. From 6.7/4:
An implementation is permitted to perform early initialization of other block-scope variables with static or thread storage duration under the same conditions that an implementation is permitted to statically initialize a variable with static or thread storage duration in namespace scope (3.6.2). Otherwise such a variable is initialized the first time control passes through its declaration; such a variable is considered initialized upon the completion of its initialization. If the initialization exits by throwing an exception, the initialization is not complete, so it will be tried again the next time control enters the declaration. If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.
Even so, this only ensures that initialisation is safe. If you plan to use the returned FileSystem from multiple threads simultaneously, FileSystem must itself provide threadsafe operations.
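To make the distinction concrete, a small sketch (FileSystem is stubbed out here): in C++11 the initialisation race is gone, but any real work done through the returned reference still needs whatever synchronisation FileSystem itself provides.
#include <cstddef>
#include <thread>

class FileSystem
{
public:
    std::size_t numDisks() const { return 1; }   // stub; a real FileSystem must make this thread-safe itself
};

FileSystem& tfs()
{
    static FileSystem fs;   // C++11: initialised exactly once, even if both threads arrive here together
    return fs;
}

int main()
{
    std::thread t1([] { tfs().numDisks(); });
    std::thread t2([] { tfs().numDisks(); });
    t1.join();
    t2.join();
}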

Is this way of creating a static instance thread-safe?

I have the following sample C++ code:
class Factory
{
public:
static Factory& createInstance()
{
static Factory fac;
return fac;
}
private:
Factory()
{
//Does something non-trivial
}
};
Let's assume that createInstance is called by two threads at the same time. So will the resulting object be created properly? What happens if the second thread enters the createInstance call when the first thread is in the constructor of Factory?
C++11 and above: local static creation is thread-safe.
The standard guarantees that:
The creation is synchronized.
Should the creation throw an exception, the next time the flow of execution passes the variable definition point, creation will be attempted again.
It is generally implemented with double-checked locking (a rough sketch follows this list):
first, a guard flag is checked with a cheap atomic read, and if it is already set, the variable is simply accessed.
if it is not yet set, a more expensive synchronized path is taken and, once the variable has been created, the guard flag is set.
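Roughly, a hand-written approximation of that double-checked pattern (this is not the actual code a compiler emits, and Factory's constructor is made public here so a free function can build it):
#include <atomic>
#include <mutex>
#include <new>

struct Factory { Factory() { /* something non-trivial */ } };

static std::atomic<bool> constructed{false};                 // guard flag
static std::mutex guard_mutex;
alignas(Factory) static unsigned char storage[sizeof(Factory)];

Factory& createInstance()
{
    if (!constructed.load(std::memory_order_acquire))        // fast path: already built?
    {
        std::lock_guard<std::mutex> lock(guard_mutex);        // slow path: serialise construction
        if (!constructed.load(std::memory_order_relaxed))
        {
            new (storage) Factory();                          // run the constructor exactly once
            constructed.store(true, std::memory_order_release);
        }
    }
    return *reinterpret_cast<Factory*>(storage);              // destruction at exit omitted for brevity
}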
C++03 and C++98: the standard knows no thread.
There are no threads as far as the Standard is concerned, and therefore there is no provision in the Standard regarding synchronization across threads.
However some compilers implement more than the standard mandates, either in the form of extensions or by giving stronger guarantees, so check out for the compilers you're interested in. If they are good quality ones, chances are that they will guarantee it.
Finally, it might not be necessary for it to be thread-safe. If you call this method before creating any thread, then you ensure that it will be correctly initialized before real multi-threading comes into play, and you'll neatly side-step the issue.
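A minimal sketch of that side-step, reusing the Factory class from the question:
int main()
{
    Factory& factory = Factory::createInstance();   // constructed while only the main thread exists
    // ... start worker threads only after this point; they all get the same, fully built instance ...
    (void)factory;
    return 0;
}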
Looking at this page, I'd say that this is not thread-safe, because the constructor could get called multiple times before the variable is finally assigned. An InterlockedCompareExchange() might be needed, where you create a local copy of the variable, then atomically assign the pointer to a static field via the interlocked function, if the static variable is null.
Of course it's thread safe! Unless you are a complete lunatic and spawn threads from constructors of static objects, you won't have any threads until after main() is called, and the createInstance method is just returning a reference to an already constructed object; there's no way this can fail. ISO C++ guarantees that the object will be constructed before its first use after main() is called: there's no assurance that this will be before main is called, but it has to be before the first use, so in practice all systems perform the initialisation before main() is called. Of course ISO C++ doesn't define behaviour in the presence of threads or dynamic loading, but all compilers for hosted machines provide this support and try to preserve the semantics specified for single-threaded, statically linked code where possible.
The instantiation (first call) itself is threadsafe.
However, subsequent access will not be, in general. For instance, suppose that after instantiation one thread calls a mutating Factory method while another calls some accessor method on Factory; then you will be in trouble.
For example, if your factory keeps a count of the number of instances created, you will be in trouble without some kind of mutex around that variable.
However, if Factory is truly a class with no state (no member variables), then you will be okay.

Access violation in a multithreaded application, C++

I am not very good in multithreading programming so I would like to ask for some help/advice.
In my application I have two threads trying to access a shared object.
One can think about two tasks trying to call functions from within another object. For clarity I will show some parts of the program which may not be very relevant but hopefully can help to get my problem better.
Please take a look at the sample code below:
//DataLinkLayer.h
class DataLinkLayer: public iDataLinkLayer {
public:
DataLinkLayer(void);
~DataLinkLayer(void);
};
Where iDataLinkLayer is an interface (abstract class without any implementation) containing pure virtual functions and a declaration of a pointer to the instance of the DataLinkLayer object (dataLinkLayer).
// DataLinkLayer.cpp
#include "DataLinkLayer.h"
DataLinkLayer::DataLinkLayer(void) {
/* In reality task constructors takes bunch of other parameters
but they are not relevant (I believe) at this stage. */
dll_task_1* task1 = new dll_task_1(this);
dll_task_2* task2 = new dll_task_2(this);
/* Start multithreading */
task1->start(); // task1 extends thread class
task2->start(); // task2 also extends thread class
}
/* sample stub functions for testing */
void DataLinkLayer::from_task_1() {
printf("Test data Task 1");
}
void DataLinkLayer::from_task_2() {
printf("Test data Task 2");
}
Implementation of task 1 is below. The dataLinkLayer interface (iDataLinkLayer) pointer is passed to the class constructor in order to be able to access the necessary functions from within the dataLinkLayer instance.
//data_task_1.cpp
#include "iDataLinkLayer.h" // interface to DataLinkLayer
#include "data_task_1.h"
dll_task_1::dll_task_1(iDataLinkLayer* pDataLinkLayer) {
this->dataLinkLayer = pDataLinkLayer; // dataLinkLayer declared in dll_task_1.h
}
// Run method - executes the thread
void dll_task_1::run() {
// program reaches this point and prints the stuff
this->dataLinkLayer->from_task_1();
}
// more stuff following - not relevant to the problem
...
And task 2 looks similar:
//data_task_2.cpp
#include "iDataLinkLayer.h" // interface to DataLinkLayer
#include "data_task_2.h"
dll_task_2::dll_task_2(iDataLinkLayer* pDataLinkLayer){
this->dataLinkLayer = pDataLinkLayer; // dataLinkLayer declared in dll_task_2.h
}
// Run method - executes the thread
void dll_task_2::run() {
// ERROR: 'Access violation reading location 0xcdcdcdd9' is signalled at this point
this->dataLinkLayer->from_task_2();
}
// more stuff following - not relevant to the problem
...
So, if I understand correctly, I access the shared pointer from two different threads (tasks), and that is not allowed.
Frankly, I thought that I would be able to access the object anyway, though the results might be unexpected.
It seems that something goes terribly wrong at the point when dll_task_2 tries to call the function using the pointer to the DataLinkLayer. dll_task_2 has lower priority, hence it is started afterwards. I don't understand why I still cannot at least access the object...
I can use a mutex to lock the variable, but I thought that the primary purpose of a mutex is to protect the variable/object.
I am using Microsoft Visual C++ 2010 Express.
I don't know much about multithreading so maybe you can suggest a better solution to this problem as well as explain the reason of the problem.
The address of the access violation is a very small positive offset from 0xcdcdcdcd
Wikipedia says:
CDCDCDCD Used by Microsoft's C++ debugging runtime library to mark uninitialised heap memory
Here is the relevant MSDN page.
The corresponding value after free is 0xdddddddd, so it's likely to be incomplete initialization rather than use-after-free.
EDIT: James asked how optimization could mess up virtual function calls. Basically, it's because the currently standardized C++ memory model makes no guarantees about threading. The C++ standard defines that virtual calls made from within a constructor will use the declaring type of the constructor currently being run, not the final dynamic type of the object. So this means that, from the perspective of the C++ sequential execution memory model, the virtual call mechanism (practically speaking, a v-table pointer) must be set up before the constructor starts running (I believe the specific point is after base subobject construction in the ctor-initializer-list and before member subobject construction).
Now, two things can happen to make the observable behavior different in a threaded scenario:
First, the compiler is free to perform any optimization that would, in the C++ sequential execution model, act as-if the rules were being followed. For example, if the compiler can prove that no virtual calls are made inside the constructor, it could wait and set the v-table pointer at the end of the constructor body instead of the beginning. If the constructor doesn't give out the this pointer, since the caller of the constructor also hasn't received its copy of the pointer yet, then none of the functions called by the constructor can call back (virtually or statically) to the object under construction. But the constructor DOES give away the this pointer.
We have to look closer. If the function to which the this pointer is given is visible to the compiler (i.e. included in the current compilation unit), then the compiler can include its behavior in the analysis. We weren't given that function in this question (the constructor and member functions of class task), but it seems likely that the only thing that happens is that said pointer is stored in a subobject which is also not reachable from outside the constructor.
"Foul!", you cry, "I passed the address of that task subobject to a library CreateThread function, therefore it is reachable and through it, the main object is reachable." Ah, but you do not comprehend the mysteries of the "strict aliasing rules". That library function does not accept a parameter of type task *, now does it? And being a parameter whose type is perhaps intptr_t, but definitely neither task * nor char *, the compiler is permitted to assume, for purposes of as-if optimization, that it does not point to a task object (even if it clearly does). And if it does not point to a task object, and the only place our this pointer got stored is in a task member subobject, then it cannot be used to make virtual calls to this, so the compiler may legitimately delay setting up the virtual call mechanism.
But that's not all. Even if the compiler does set up the virtual call mechanism on schedule, the CPU memory model only guarantees that the change is visible to the current CPU core. Writes may become visible to other CPU cores in a completely different order. Now, the library's create-thread function ought to introduce a memory barrier that constrains CPU write reordering, but the fact that Koz's answer, which introduces a critical section (and a critical section definitely includes a memory barrier), changes the behavior suggests that perhaps no memory barrier was present in the original code.
And CPU write reordering can delay not only the v-table pointer store but also the storing of the this pointer into the task subobject.
I hope you have enjoyed this guided tour of one small corner of the cave of "multithreaded programming is hard".
printf is not, AFAIK, thread-safe. Try surrounding the printf calls with a critical section.
To do this, InitializeCriticalSection inside the iDataLinkLayer class. Then, around the printfs, you need an EnterCriticalSection and a LeaveCriticalSection. This will prevent both functions from entering printf simultaneously.
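A rough sketch of what that could look like with the Win32 API (trimmed down: the task creation from the original constructor is omitted here, and whether printf is really the culprit is a separate question):
#include <windows.h>
#include <cstdio>

class DataLinkLayer /* : public iDataLinkLayer */
{
public:
    DataLinkLayer()  { InitializeCriticalSection(&printLock); }
    ~DataLinkLayer() { DeleteCriticalSection(&printLock); }

    void from_task_1()
    {
        EnterCriticalSection(&printLock);   // only one thread at a time past this point
        printf("Test data Task 1");
        LeaveCriticalSection(&printLock);
    }

    void from_task_2()
    {
        EnterCriticalSection(&printLock);
        printf("Test data Task 2");
        LeaveCriticalSection(&printLock);
    }

private:
    CRITICAL_SECTION printLock;
};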
Edit: Try changing this code:
dll_task_1* task1 = new task(this);
dll_task_2* task2 = new task(this);
to
dll_task_1* task1 = new dll_task_1(this);
dll_task_2* task2 = new dll_task_2(this);
I'm guessing that task is in fact the base class of dll_task_1 and dll_task_2... so, more than anything, I'm surprised it compiles...
I think it's not always safe to use 'this' (i.e. to call a member function) before the end of the constructor. It could be that the tasks are calling a member function of DataLinkLayer before the end of the DataLinkLayer constructor, especially if that member function is virtual:
http://www.parashift.com/c++-faq-lite/ctors.html#faq-10.7
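One common way to act on that advice, sketched under the assumption that the caller can be changed: finish constructing DataLinkLayer first, then start the threads from a separate member function (startTasks is a name invented here).
class DataLinkLayer : public iDataLinkLayer
{
public:
    DataLinkLayer()
    {
        task1 = new dll_task_1(this);   // create, but do not start, the worker threads
        task2 = new dll_task_2(this);
    }
    void startTasks()                    // called only once construction has finished
    {
        task1->start();
        task2->start();
    }
private:
    dll_task_1* task1;
    dll_task_2* task2;
};

// in main():
//   DataLinkLayer* dataLinkLayer = new DataLinkLayer();  // fully constructed first
//   dataLinkLayer->startTasks();                          // only now can the tasks call back into it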
I wanted to comment on the creation of the DataLinkLayer.
When I call the DataLinkLayer constructor from main:
int main () {
DataLinkLayer* dataLinkLayer = new DataLinkLayer();
while(true); // to keep the main thread running
}
I, of course, do not destruct the object; that is the first thing. Now, inside the DataLinkLayer constructor I initialize many other object instances (not only these two tasks) and pass the dataLinkLayer pointer (using this) to most of them. This is legal, as far as I am concerned. Furthermore, it compiles and runs as expected.
What I became curious about is the overall design idea that I am following (if any :) ).
The DataLinkLayer is a parent class that is accessed by several tasks which try to modify its parameters or perform some other processing. Since I want everything to remain as decoupled as possible, I provide only interfaces for the accessors and encapsulate the data so that I don't have any global variables, friend functions, etc.
It would have been a pretty easy task to do if multithreading were not involved. I believe I will encounter many other pitfalls on my way.
Feel free to discuss it, and thanks for your generous comments!
UPD:
Speaking of passing the iDataLinkLayer interface pointer to the tasks - is this a good way to do it? In Java it would be a pretty usual thing to use containment or the so-called strategy pattern to keep things decoupled. However, I am not 100% sure whether it is a good solution in C++... Any suggestions/comments on it?

What exactly is a reentrant function?

Most of the time, the definition of reentrance is quoted from Wikipedia:
A computer program or routine is described as reentrant if it can be safely called again before its previous invocation has been completed (i.e. it can be safely executed concurrently). To be reentrant, a computer program or routine:
Must hold no static (or global) non-constant data.
Must not return the address to static (or global) non-constant data.
Must work only on the data provided to it by the caller.
Must not rely on locks to singleton resources.
Must not modify its own code (unless executing in its own unique thread storage).
Must not call non-reentrant computer programs or routines.
How is safely defined?
If a program can be safely executed concurrently, does it always mean that it is reentrant?
What exactly is the common thread between the six points mentioned that I should keep in mind while checking my code for reentrant capabilities?
Also,
Are all recursive functions reentrant?
Are all thread-safe functions reentrant?
Are all recursive and thread-safe functions reentrant?
While writing this question, one thing comes to mind:
Are the terms like reentrance and thread safety absolute at all i.e. do they have fixed concrete definitions? For, if they are not, this question is not very meaningful.
1. How is safely defined?
Semantically. In this case, this is not a hard-defined term. It just means "You can do that, without risk".
2. If a program can be safely executed concurrently, does it always mean that it is reentrant?
No.
For example, let's have a C++ function that takes both a lock, and a callback as a parameter:
#include <mutex>
typedef void (*callback)();
std::mutex m;
void foo(callback f)
{
m.lock();
// use the resource protected by the mutex
if (f) {
f();
}
// use the resource protected by the mutex
m.unlock();
}
Another function could well need to lock the same mutex:
void bar()
{
foo(nullptr);
}
At first sight, everything seems ok… But wait:
int main()
{
foo(bar);
return 0;
}
If the lock on mutex is not recursive, then here's what will happen, in the main thread:
main will call foo.
foo will acquire the lock.
foo will call bar, which will call foo.
the 2nd foo will try to acquire the lock, fail and wait for it to be released.
Deadlock.
Oops…
Ok, I cheated, using the callback thing. But it's easy to imagine more complex pieces of code having a similar effect.
3. What exactly is the common thread between the six points mentioned that I should keep in mind while checking my code for reentrant capabilities?
You can smell a problem if your function has/gives access to a modifiable persistent resource, or has/gives access to a function that smells.
(Ok, 99% of our code should smell, then… See last section to handle that…)
So, studying your code, one of those points should alert you:
The function has a state (i.e. access a global variable, or even a class member variable)
This function can be called by multiple threads, or could appear twice in the stack while the process is executing (i.e. the function could call itself, directly or indirectly). Functions taking callbacks as parameters smell a lot.
Note that non-reentrancy is viral : A function that could call a possible non-reentrant function cannot be considered reentrant.
Note, too, that C++ methods smell because they have access to this, so you should study the code to be sure they have no funny interaction.
4.1. Are all recursive functions reentrant?
No.
In multithreaded cases, a recursive function accessing a shared resource could be called by multiple threads at the same moment, resulting in bad/corrupted data.
In singlethreaded cases, a recursive function could use a non-reentrant function (like the infamous strtok), or use global data without handling the fact the data is already in use. So your function is recursive because it calls itself directly or indirectly, but it can still be recursive-unsafe.
4.2. Are all thread-safe functions reentrant?
In the example above, I showed how an apparently threadsafe function was not reentrant. OK, I cheated because of the callback parameter. But then, there are multiple ways to deadlock a thread by having it acquire twice a non-recursive lock.
4.3. Are all recursive and thread-safe functions reentrant?
I would say "yes" if by "recursive" you mean "recursive-safe".
If you can guarantee that a function can be called simultaneously by multiple threads, and can call itself, directly or indirectly, without problems, then it is reentrant.
The problem is evaluating this guarantee… ^_^
5. Are the terms like reentrance and thread safety absolute at all, i.e. do they have fixed concrete definitions?
I believe they do, but then, evaluating whether a function is thread-safe or reentrant can be difficult. This is why I used the term smell above: you can find that a function is not reentrant, but it can be difficult to be sure that a complex piece of code is reentrant.
6. An example
Let's say you have an object, with one method that needs to use a resource:
struct MyStruct
{
P * p;
void foo()
{
if (this->p == nullptr)
{
this->p = new P();
}
// lots of code, some using this->p
if (this->p != nullptr)
{
delete this->p;
this->p = nullptr;
}
}
};
The first problem is that if somehow this function is called recursively (i.e. this function calls itself, directly or indirectly), the code will probably crash, because this->p will be deleted at the end of the last call, and still probably be used before the end of the first call.
Thus, this code is not recursive-safe.
We could use a reference counter to correct this:
struct MyStruct
{
size_t c;
P * p;
void foo()
{
if (c == 0)
{
this->p = new P();
}
++c;
// lots of code, some using this->p
--c;
if (c == 0)
{
delete this->p;
this->p = nullptr;
}
}
};
This way, the code becomes recursive-safe… But it is still not reentrant because of multithreading issues: We must be sure the modifications of c and of p will be done atomically, using a recursive mutex (not all mutexes are recursive):
#include <mutex>
struct MyStruct
{
std::recursive_mutex m;
size_t c;
P * p;
void foo()
{
m.lock();
if (c == 0)
{
this->p = new P();
}
++c;
m.unlock();
// lots of code, some using this->p
m.lock();
--c;
if (c == 0)
{
delete this->p;
this->p = nullptr;
}
m.unlock();
}
};
And of course, this all assumes the lots of code is itself reentrant, including the use of p.
And the code above is not even remotely exception-safe, but this is another story… ^_^
7. Hey 99% of our code is not reentrant!
It is quite true for spaghetti code. But if you partition your code correctly, you will avoid reentrancy problems.
7.1. Make sure all functions have NO state
They must only use the parameters, their own local variables, other functions without state, and return copies of the data if they return at all.
7.2. Make sure your object is "recursive-safe"
An object method has access to this, so it shares a state with all the methods of the same instance of the object.
So, make sure the object can be used at one point in the stack (i.e. calling method A), and then, at another point (i.e. calling method B), without corrupting the whole object. Design your object to make sure that upon exiting a method, the object is stable and correct (no dangling pointers, no contradicting member variables, etc.).
7.3. Make sure all your objects are correctly encapsulated
No one else should have access to their internal data:
// bad
int & MyObject::getCounter()
{
return this->counter;
}
// good
int MyObject::getCounter()
{
return this->counter;
}
// good, too
void MyObject::getCounter(int & p_counter)
{
p_counter = this->counter;
}
Even returning a const reference could be dangerous if the user retrieves the address of the data, as some other portion of the code could modify it without the code holding the const reference being told.
7.4. Make sure the user knows your object is not thread-safe
Thus, the user is responsible to use mutexes to use an object shared between threads.
The objects from the STL are designed to be not thread-safe (because of performance concerns), and thus, if a user wants to share a std::string between two threads, the user must protect access to it with concurrency primitives.
7.5. Make sure your thread-safe code is recursive-safe
This means using recursive mutexes if you believe the same resource can be used twice by the same thread.
"Safely" is defined exactly as the common sense dictates - it means "doing its thing correctly without interfering with other things". The six points you cite quite clearly express the requirements to achieve that.
The answers to your 3 questions are 3× "no".
Are all recursive functions reentrant?
NO!
Two simultaneous invocations of a recursive function can easily screw up each other, if they access the same global/static data, for example.
Are all thread-safe functions reentrant?
NO!
A function is thread-safe if it doesn't malfunction if called concurrently. But this can be achieved e.g. by using a mutex to block the execution of the second invocation until the first finishes, so only one invocation works at a time. Reentrancy means executing concurrently without interfering with other invocations.
Are all recursive and thread-safe functions reentrant?
NO!
See above.
The common thread:
Is the behavior well defined if the routine is called while it is interrupted?
If you have a function like this:
int add( int a , int b ) {
return a + b;
}
Then it is not dependent upon any external state. The behavior is well defined.
If you have a function like this:
int gValue = 0;   // global, mutable state shared by every caller
int add_to_global( int a ) {
return gValue += a;
}
The result is not well defined on multiple threads. Information could be lost if the timing was just wrong.
The simplest form of a reentrant function is something that operates exclusively on the arguments passed and constant values. Anything else takes special handling or, often, is not reentrant. And of course the arguments must not reference mutable globals.
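For contrast, a sketch of a reentrant reworking of add_to_global above: the accumulator is owned by the caller and passed in explicitly (the function name is invented here).
int add_to( int & total , int a ) {   // touches only what the caller hands it
return total += a;
}

// each caller (or each thread) owns its own accumulator:
//   int myTotal = 0;
//   add_to( myTotal , 5 );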
Now I have to elaborate on my previous comment. paercebal's answer is incorrect: in the example code, didn't anyone notice that the mutex, which was supposed to be a parameter, wasn't actually passed in?
I dispute the conclusion, I assert: for a function to be safe in the presence of concurrency it must be re-entrant. Therefore concurrent-safe (usually written thread-safe) implies re-entrant.
Neither thread safe nor re-entrant have anything to say about arguments: we're talking about concurrent execution of the function, which can still be unsafe if inappropriate parameters are used.
For example, memcpy() is thread-safe and re-entrant (usually). Obviously it will not work as expected if called with pointers to the same targets from two different threads. That's the point of the SGI definition, placing the onus on the client to ensure accesses to the same data structure are synchronised by the client.
It is important to understand that in general it is nonsense to have thread-safe operation include the parameters. If you've done any database programming you will understand. The concept of what is "atomic" and might be protected by a mutex or some other technique is necessarily a user concept: processing a transaction on a database can require multiple un-interrupted modifications. Who can say which ones need to be kept in sync but the client programmer?
The point is that "corruption" doesn't have to be messing up the memory on your computer with unserialised writes: corruption can still occur even if all individual operations are serialised. It follows that when you're asking if a function is thread-safe, or re-entrant, the question means for all appropriately separated arguments: using coupled arguments does not constitute a counter-example.
There are many programming systems out there: OCaml is one, and I think Python as well, which have lots of non-reentrant code in them but which use a global lock to interleave thread access. These systems are not re-entrant and they're not thread-safe or concurrent-safe; they operate safely simply because they prevent concurrency globally.
A good example is malloc. It is not re-entrant and not thread-safe. This is because it has to access a global resource (the heap). Using locks doesn't make it safe: it's definitely not re-entrant. If the interface to malloc had been designed properly, it would be possible to make it re-entrant and thread-safe:
malloc(heap*, size_t);
Now it can be safe because it transfers the responsibility for serialising shared access to a single heap to the client. In particular, no work is required if there are separate heap objects. If a common heap is used, the client has to serialise access. Using a lock inside the function is not enough: just consider a malloc locking a heap*, and then a signal comes along and calls malloc on the same pointer: deadlock. The signal handler can't proceed, and the client can't either, because it is interrupted.
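Sketched as a declaration-only interface (the type and function names here are hypothetical, just to make the ownership explicit):
#include <cstddef>

struct heap;                                    // opaque per-heap state

heap* heap_create(void);
void  heap_destroy(heap* h);
void* heap_malloc(heap* h, std::size_t size);   // no hidden global: the caller names the heap
void  heap_free(heap* h, void* p);

// Two threads that each use their own heap* never contend with each other;
// only callers that deliberately share a heap* must serialise their access to it.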
Generally speaking, locks do not make things thread-safe... they actually destroy safety by inappropriately trying to manage a resource that is owned by the client. Locking has to be done by the object manufacturer; that's the only code that knows how many objects are created and how they will be used.
The "common thread" (pun intended!?) amongst the points listed is that the function must not do anything that would affect the behaviour of any recursive or concurrent calls to the same function.
So, for example, static data is an issue because it is owned by all threads; if one call modifies a static variable, then all threads use the modified data, thus affecting their behaviour. Self-modifying code (although rarely encountered, and in some cases prevented) would be a problem because, although there are multiple threads, there is only one copy of the code; the code is essentially static data too.
Essentially to be re-entrant, each thread must be able to use the function as if it were the only user, and that is not the case if one thread can affect the behaviour of another in a non-deterministic manner. Primarily this involves each thread having either separate or constant data that the function works on.
All that said, point (1) is not necessarily true; for example, you might legitimately and by design use a static variable to retain a recursion count to guard against excessive recursion or to profile an algorithm.
A thread-safe function need not be reentrant; it may achieve thread safety by specifically preventing reentrancy with a lock, and point (6) says that such a function is not reentrant. Regarding point (6), a function that calls a thread-safe function that locks is not safe for use in recursion (it will deadlock), and is therefore not said to be reentrant, though it may nonetheless be safe for concurrency, and would still be re-entrant in the sense that multiple threads can have their program counters in such a function simultaneously (just not within the locked region). Maybe this helps to distinguish thread-safety from reentrancy (or maybe adds to your confusion!).
The answers to your "Also" questions are "No", "No" and "No". Just because a function is recursive and/or thread-safe doesn't make it re-entrant.
Each of these types of function can fail on all the points you quote. (Though I'm not 100% certain of point 5.)
A non-reentrant function keeps a static context maintained by the function itself. On the first call, a new context is created for you; on subsequent calls you don't pass that state again, because the function keeps it for convenience, as in token analysis with strtok in C. If you have not cleared that context, there might be errors.
/* strtok example */
#include <stdio.h>
#include <string.h>
int main ()
{
char str[] ="- This, a sample string.";
char * pch;
printf ("Splitting string \"%s\" into tokens:\n",str);
pch = strtok (str," ,.-");
while (pch != NULL)
{
printf ("%s\n",pch);
pch = strtok (NULL, " ,.-");
}
return 0;
}
On the contrary, a reentrant function means that calling the function at any time will give the same result, without side effects, because there is no such context.
From the point of view of thread safety, it just means that only one modification of a shared variable happens at a time in the process, so you should add a lock guard to ensure only one change to a shared field at a time.
So thread safety and reentrancy are two different things seen from different viewpoints: reentrancy says you should not rely on hidden context carried over between calls, while thread safety says you should order accesses to shared fields.
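For comparison, the reentrant counterpart strtok_r (POSIX; Windows offers the similar strtok_s) makes the caller hold that context explicitly:
/* strtok_r example: the saveptr argument is the context, owned by the caller */
#include <stdio.h>
#include <string.h>
int main ()
{
char str[] ="- This, a sample string.";
char * saveptr;
char * pch = strtok_r (str," ,.-", &saveptr);
while (pch != NULL)
{
printf ("%s\n",pch);
pch = strtok_r (NULL, " ,.-", &saveptr);
}
return 0;
}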
The terms "Thread-safe" and "re-entrant" mean only and exactly what their definitions say. "Safe" in this context means only what the definition you quote below it says.
"Safe" here certainly doesn't mean safe in the broader sense that calling a given function in a given context won't totally hose your application. Altogether, a function might reliably produce a desired effect in your multi-threaded application but not qualify as either re-entrant or thread-safe according to the definitions. Oppositely, you can call re-entrant functions in ways that will produce a variety of undesired, unexpected and/or unpredictable effects in your multi-threaded application.
A recursive function can be anything, and re-entrant has a stronger definition than thread-safe, so the answers to your numbered questions are all no.
Reading the definition of re-entrant, one might summarize it as meaning a function which will not modify anything beyond what you call it to modify. But you shouldn't rely on only that summary.
Multi-threaded programming is just extremely difficult in the general case. Knowing which parts of one's code are re-entrant is only a part of this challenge. Thread safety is not additive. Rather than trying to piece together re-entrant functions, it's better to use an overall thread-safe design pattern and use this pattern to guide your use of every thread and shared resource in your program.