When writing multithreaded code, I often need to read/write shared memory. To prevent data races, the go-to solution would be to use something like std::lock_guard. However, recently I came across the concept of "synchronized values", which are usually implemented something along the lines of:
template <typename T>
class SynchronizedValue {
    T value;
    std::mutex lock;
    /* Public helper functions to read/write the value, making sure
       the lock is locked whenever the value is written to */
};
This SynchronizedValue class will have a method SetValueTo which locks the mutex, writes the new value, and unlocks the mutex, making sure that you can write to the value safely without any data races.
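For example, the helpers might look something like this (my sketch; the getter is just the obvious counterpart):

#include <mutex>
#include <utility>

template <typename T>
class SynchronizedValue {
    T value;
    std::mutex lock;
public:
    void SetValueTo(T new_value) {
        std::lock_guard<std::mutex> guard(lock); // held for the whole write
        value = std::move(new_value);
    }                                            // released on scope exit
    T GetValue() {
        std::lock_guard<std::mutex> guard(lock); // reads need the lock too
        return value;
    }
};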
This makes writing multithreaded code so much easier! However, are there any drawbacks or performance overhead to using these synchronized values compared with using mutexes / lock_guard directly?
are there any drawbacks or performance overhead to using these SynchronizedValues...?
Before you ask whether there is any drawback, you first ought to ask whether there is any benefit. The standard C++ library already provides std::atomic<T>. You didn't say what /* public helper functions... */ you had in mind, but if they're just getters and setters for value, then what does your SynchronizedValue<T> class offer that you don't already get from std::atomic<T>?
There's an important reason why "atomic" variables don't eliminate the need for mutexes, by the way. Mutexes aren't just about ensuring "visibility" of memory updates: the most important way to think about mutexes is that they protect relationships between data in a program.
E.g., imagine a program that has multiple containers for some class of object, that needs to move objects from container to container, and in which it is important for some thread to occasionally count all of the objects and be guaranteed an accurate count.
The program can use a mutex to make that possible. It just has to obey two simple rules: (1) no thread may remove an object from any container unless it has the mutex locked, and (2) no thread may release the mutex until every object is in a container. If all of the threads obey those two rules, then the thread that counts the objects is guaranteed to find all of them if it locks the mutex before it starts counting.
The thing is, you can't guarantee that just by making all of the variables atomic, because atomic doesn't protect any relationship between the variable in question and any other variable. At most, it only protects relationships between the value of the variable before and after some "atomic" operation such as an atomic increment.
When there's more than one variable participating in the relationship, then you must have a mutex (or something equivalent to a mutex.)
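A minimal sketch of those two rules (my own example; two vectors stand in for the containers):

#include <cstddef>
#include <mutex>
#include <vector>

std::mutex m;                       // protects the invariant below
std::vector<int> box_a, box_b;      // invariant: every object is in exactly one box

void move_one_from_a_to_b() {
    std::lock_guard<std::mutex> guard(m);   // rule 1: lock before removing
    if (!box_a.empty()) {
        int obj = box_a.back();
        box_a.pop_back();
        box_b.push_back(obj);               // rule 2: object is back in a container
    }
}                                           // only now is the mutex released

std::size_t count_all() {
    std::lock_guard<std::mutex> guard(m);   // invariant holds while we hold the lock
    return box_a.size() + box_b.size();
}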
If you look under the hood at what is actually happening in each case you just find different ways of saying and doing the same thing.
My application's configuration is a const object that is shared between multiple threads. The configuration is stored in a centralized location, and any thread may access it. I've tried building a lock-free implementation that would allow me to load a new configuration while still allowing other threads to read the last known configuration.
My current implementation has a race between updating the shared_ptr and reading from it.
template<typename T>
class ConfigurationHolder
{
public:
    typedef std::shared_ptr<T> SPtr;
    typedef std::shared_ptr<const T> CSPtr;

    ConfigurationHolder() : m_active(new T()) {}

    CSPtr get() const { return m_active; } // RACE - read

    template<typename Reloader>
    bool reload(Reloader reloader)
    {
        SPtr tmp(new T());
        if (!tmp)
            return false;
        if (!reloader(tmp))
            return false;
        m_active = tmp; // RACE - write
        return true;
    }

private:
    CSPtr m_active;
};
I can add a shared_mutex for the problematic read/write access to the shared_ptr, but I'm looking for a solution that will keep the implementation lock-free.
EDIT: My version of GCC does not support atomic_exchange on shared_ptr
EDIT2: Requirements clarification: I have multiple readers and may have multiple reloaders (although this is less common). Readers need to hold a configuration object and be sure that it will not change while they are reading it. Old configuration objects must be freed when the last reader is done with them.
You should just update your compiler to get atomic shared pointer ops.
Failing that, wrap it in a shared_timed_mutex. Then test how much it costs you.
Both of these are going to be less work than correctly writing your own lock-free shared pointer system.
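For reference, with a compiler that provides the C++11 atomic free functions for shared_ptr (declared in <memory>; exactly what your GCC is missing), your class needs only small changes. A sketch:

#include <memory>

template<typename T>
class ConfigurationHolder
{
public:
    typedef std::shared_ptr<const T> CSPtr;

    ConfigurationHolder() : m_active(std::make_shared<T>()) {}

    CSPtr get() const { return std::atomic_load(&m_active); }   // atomic read

    template<typename Reloader>
    bool reload(Reloader reloader)
    {
        std::shared_ptr<T> tmp = std::make_shared<T>();
        if (!reloader(tmp))
            return false;
        std::atomic_store(&m_active, CSPtr(tmp));               // atomic publish
        return true;
    }

private:
    CSPtr m_active;
};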
If you have to roll your own:
This is a hack, but it might work. It is a read-copy-update style on the pointer itself.
Have a std::vector<std::unique_ptr<std::shared_ptr<T>>>. Have a std::atomic<std::shared_ptr<T> const*> "current" pointer, and a std::atomic<std::size_t> active_readers.
The vector stores your still living shared_ptrs. When you want to change, push a new one on the back. Keep a copy of this shared_ptr.
Now swap the "current" pointer for the new one. Busy-wait until active_readers hits zero, or until you get bored.
If active_readers hit zero, filter your vector for shared_ptrs with a use-count of 1. Remove them from the vector.
Regardless, now drop the extra shared_ptr you kept to the state you created. And you're done writing.
If you need more than one writer, lock this process using a separate mutex.
On the reader side, increment active_readers. Now atomically load the "current" pointer, make a local copy of the pointed-to shared_ptr, then decrement active_readers.
However, I just wrote this. So it probably contains bugs. Proving concurrent design correct is hard.
By far the easiest way to make this reliable is to upgrade your compiler and get atomic operations on a shared_ptr.
This is probably overly complex and I think we can set it up so that the T are cleaned up when the last reader goes away, but I aimed for correctness rather than efficiency on recycling T.
With minimal sync on the readers, you could use a condition variable to signal that a reader is done with a given T; but that involves a tiny bit of contention with the writer thread.
Practically, lock-free algorithms are often slower than lock-based ones, because the overhead of a mutex isn't as high as you fear.
A shared_timed_mutex wrapping a shared_ptr, where the writer is simply overwriting the variable, is going to be pretty darn fast. Existing readers are going to keep their old shared_ptr just fine.
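A minimal sketch of that wrapper, assuming C++14's std::shared_timed_mutex:

#include <memory>
#include <shared_mutex>

template<typename T>
class ConfigurationHolder
{
public:
    typedef std::shared_ptr<const T> CSPtr;

    CSPtr get() const
    {
        std::shared_lock<std::shared_timed_mutex> guard(m_mutex); // many readers at once
        return m_active;                                          // copy out under the lock
    }

    template<typename Reloader>
    bool reload(Reloader reloader)
    {
        std::shared_ptr<T> tmp = std::make_shared<T>();
        if (!reloader(tmp))
            return false;
        std::unique_lock<std::shared_timed_mutex> guard(m_mutex); // exclusive for the write
        m_active = std::move(tmp);                                // just overwrite the pointer
        return true;
    }

private:
    mutable std::shared_timed_mutex m_mutex;
    CSPtr m_active{std::make_shared<T>()};
};

Readers that already hold a CSPtr keep their old configuration alive; it is freed automatically when the last of them releases it, which matches the stated requirements.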
I have seen some people hate on recursive_mutex:
http://www.zaval.org/resources/library/butenhof1.html
But when thinking about how to implement a class that is thread-safe (mutex-protected), it seems to me excruciatingly hard to prove that every method that should be mutex-protected is mutex-protected, and that the mutex is locked at most once.
So for object-oriented design, should std::recursive_mutex be the default, and std::mutex considered a performance optimization for the general case, unless it is used in only one place (to protect only one resource)?
To make things clear, I'm talking about one private non-static mutex, so each class instance has only one mutex.
At the beginning of each public method:
{
    std::scoped_lock<std::recursive_mutex> sl(m); // m: the instance's one mutex
    // ... method body ...
}
Most of the time, if you think you need a recursive mutex then your design is wrong, so it definitely should not be the default.
For a class with a single mutex protecting the data members, then the mutex should be locked in all the public member functions, and all the private member functions should assume the mutex is already locked.
If a public member function needs to call another public member function, then split the second one in two: a private implementation function that does the work, and a public member function that just locks the mutex and calls the private one. The first member function can then also call the implementation function without having to worry about recursive locking.
e.g.
class X {
    std::mutex m;
    int data;
    int const max = 50;

    // Assumes m is already locked by the caller.
    void increment_data() {
        if (data >= max)
            throw std::runtime_error("too big");
        ++data;
    }

public:
    X() : data(0) {}

    int fetch_count() {
        std::lock_guard<std::mutex> guard(m);
        return data;
    }

    void increase_count() {
        std::lock_guard<std::mutex> guard(m);
        increment_data();
    }

    int increase_count_and_return() {
        std::lock_guard<std::mutex> guard(m);
        increment_data();
        return data;
    }
};
This is of course a trivial contrived example, but the increment_data function is shared between two public member functions, each of which locks the mutex. In single-threaded code, it could be inlined into increase_count, and increase_count_and_return could call that, but we can't do that in multithreaded code.
This is just an application of good design principles: the public member functions take responsibility for locking the mutex, and delegate the responsibility for doing the work to the private member function.
This has the benefit that the public member functions only have to deal with being called when the class is in a consistent state: the mutex is unlocked, and once it is locked then all invariants hold. If you call public member functions from each other then they have to handle the case that the mutex is already locked, and that the invariants don't necessarily hold.
It also means that things like condition-variable waits will work: if you pass a lock on a recursive mutex to a condition variable, then (a) you need to use std::condition_variable_any, because std::condition_variable won't work, and (b) only one level of the lock is released, so you may still hold it and thus deadlock, because the thread that would satisfy the predicate and do the notify cannot acquire the lock.
I struggle to think of a scenario where a recursive mutex is required.
should std::recursive_mutex be the default and std::mutex considered a performance optimization?
Not really, no. The advantage of using non-recursive locks is not just a performance optimization, it means that your code is self-checking that leaf-level atomic operations really are leaf-level, they aren't calling something else that uses the lock.
There's a reasonably common situation where you have:
a function that implements some operation that needs to be serialized, so it takes the mutex and does it.
another function that implements a larger serialized operation, and wants to call the first function to do one step of it, while it is holding the lock for the larger operation.
For the sake of a concrete example, perhaps the first function atomically removes a node from a list, while the second function atomically removes two nodes from a list (and you never want another thread to see the list with only one of the two nodes taken out).
You don't need recursive mutexes for this. For example you could refactor the first function as a public function that takes the lock and calls a private function that does the operation "unsafely". The second function can then call the same private function.
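A sketch of that refactoring (names and the std::list are mine, for illustration):

#include <list>
#include <mutex>

class NodeList {
    std::mutex m;
    std::list<int> nodes;

    // Private: assumes the mutex is already held by the caller.
    void remove_one_node_unlocked() {
        if (!nodes.empty())
            nodes.pop_front();
    }

public:
    void remove_one_node() {
        std::lock_guard<std::mutex> guard(m);
        remove_one_node_unlocked();
    }

    // No other thread can ever see the list with only one of the
    // two nodes removed: both removals happen under a single lock.
    void remove_two_nodes() {
        std::lock_guard<std::mutex> guard(m);
        remove_one_node_unlocked();
        remove_one_node_unlocked();
    }
};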
However, sometimes it's convenient to use a recursive mutex instead. There's still an issue with this design: remove_two_nodes calls remove_one_node at a point where a class invariant doesn't hold (the second time it calls it, the list is in precisely the state we don't want to expose). But assuming we know that remove_one_node doesn't rely on that invariant this isn't a killer fault in the design, it's just that we've made our rules a little more complex than the ideal "all class invariants always hold whenever any public function is entered".
So the trick is occasionally useful, and I don't hate recursive mutexes to quite the extent that article does. I don't have the historical knowledge to argue that the reason for their inclusion in POSIX is different from what the article says ("to demonstrate mutex attributes and thread extensions"). I certainly don't consider them the default, though.
I think it's safe to say that if in your design you're uncertain whether you need a recursive lock or not, then your design is incomplete. You will later regret the fact that you're writing code and you don't know something so fundamentally important as whether the lock is allowed to be already held or not. So don't put in a recursive lock "just in case".
If you know that you need one, use one. If you know that you don't need one, then using a non-recursive lock isn't just an optimization, it's helping to enforce a constraint of the design. It's more useful for the second lock to fail, than for it to succeed and conceal the fact that you've accidentally done something that your design says should never happen. But if you follow your design, and never double-lock the mutex, then you'll never find out whether it's recursive or not, and so a recursive mutex isn't directly harmful.
This analogy might fail, but here's another way to look at it. Imagine you had a choice between two kinds of pointer: one that aborts the program with a stacktrace when you dereference a null pointer, and another one that returns 0 (or to extend it to more types: behaves as if the pointer refers to a value-initialized object). A non-recursive mutex is a bit like the one that aborts, and a recursive mutex is a bit like the one that returns 0. They both potentially have their uses -- people sometimes go to some lengths to implement a "quiet not-a-value" value. But in the case where your code is designed to never dereference a null pointer, you don't want to use by default the version that silently allows that to happen.
I'm not going to directly weigh in on the mutex versus recursive_mutex debate, but I thought it would be good to share a scenario where recursive_mutexes are absolutely critical to the design.
When working with Boost.Asio and Boost.Coroutine (and probably things like NT fibers, although I'm less familiar with them), it is absolutely essential that your mutexes be recursive, even without the design problem of re-entrancy.
The reason is that the coroutine-based approach by its very design will suspend execution inside a routine and then subsequently resume it. This means that two top-level methods of a class might "be being called at the same time on the same thread" without any sub-calls being made.
I have an unordered map which stores pointers to objects. I am not sure whether I am doing the correct thing to maintain thread safety.
typedef std::unordered_map<std::string, classA*> MAP1;
MAP1 map1;

pthread_mutex_lock(&mutexA);
if (map1.find(id) != map1.end())
{
    pthread_mutex_unlock(&mutexA); // already exists, not adding item
}
else
{
    classA* obj1 = new classA;
    map1[id] = obj1;
    obj1->obtainMutex();  // Should I create a mutex for each object so that I can
                          // hold it while updating the fields of obj1?
    pthread_mutex_unlock(&mutexA); // release the map mutex so other threads
                                   // can access other objects
    obj1->field1 = 1;
    performOperation(obj1); // takes some time
    obj1->releaseMutex();   // release the object mutex after updating obj1
}
Several thoughts.
If you do have one mutex per stored object, then you should create that mutex in the constructor of the stored object. In other words, to maintain encapsulation, you should avoid having external code manipulate that mutex. I would convert field1 into a setter SetField1 that handles the mutex internally (see the sketch below).
Next, I agree with the comment that you could move pthread_mutex_unlock(&mutexA); to occur before obj1->obtainMutex();
Finally, I don't think you need obtainMutex at all. Your code looks as if only one thread will ever be allowed to create an object, and therefore only one thread will manipulate the contents during object creation. So considering only the little code you've shown here, it does not seem that a mutex per object is needed at all.
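For example, the SetField1 suggested above might look like this (a sketch, keeping the question's pthreads):

#include <pthread.h>

class classA {
public:
    classA() { pthread_mutex_init(&m_mutex, NULL); }
    ~classA() { pthread_mutex_destroy(&m_mutex); }

    // The mutex never escapes the class: callers cannot forget to lock it.
    void SetField1(int value) {
        pthread_mutex_lock(&m_mutex);
        field1 = value;
        pthread_mutex_unlock(&m_mutex);
    }

private:
    int field1;
    pthread_mutex_t m_mutex;
};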
One problem I see with the code is that it will lead to problems especially when exceptions occur.
obj1->obtainMutex(); //Should I create a mutex for each object so that I could obtain mutex when I am going to update fields for obj1?
pthread_mutex_unlock(&mutexA); //release mutex for unordered_map so that other threads could access other object
obj1->field1 = 1;
performOperation(obj1);
If performOperation throws an exception, then obj1->releaseMutex(); will never get called, leaving the object locked and potentially leading to deadlocks sometime in the future.
And even if you do not use exceptions yourself, some library code you use in performOperation might. Or you might mistakenly insert a return sometime in the future and forget to unlock all owned locks before it, and so on...
The same goes for the pthread_mutex_lock and pthread_mutex_unlock calls.
I would recommend using RAII for locking / unlocking.
I.e. the code could look like this:
typedef std::unordered_map<std::string, classA*> MAP1;
MAP1 map1;

Lock mapLock(&mutexA); // automatic variable: the destructor of the Lock class
                       // calls pthread_mutex_unlock if it still "owns" the mutex
if (map1.find(id) == map1.end())
{
    classA* obj1 = new classA;
    map1[id] = obj1;
    Lock objLock(obj1);     // lock the new object the same RAII way
    mapLock.release();      // we explicitly release mapLock here
    obj1->field1 = 1;
    performOperation(obj1); // takes some time
}
For a reference on some minimalistic RAII threading support, see "Modern C++ Design: Generic Programming and Design Patterns Applied" by Andrei Alexandrescu. Other resources also exist.
Let me describe one other problem I see with the code: more exactly, the problem with having obtainMutex and releaseMutex as methods and calling them explicitly. Imagine Thread 1 locks the map, creates an object, calls obtainMutex, and unlocks the map. Another thread (call it Thread 2) gets scheduled, locks the map, obtains an iterator to map1[id], and calls releaseMutex() on the object (say, due to a bug, without attempting to call obtainMutex first). Now Thread 1 gets scheduled and at some point also calls releaseMutex(). So the object got locked once but released twice.
What I am trying to say is that it is going to be hard work making sure the calls are always correctly paired in the face of exceptions, potential early returns that do not unlock, and incorrect usage of the object-locking interface. Also, Thread 2 might simply delete the object it obtained from the map and erase it from the map; Thread 1 would then go on to work with an already-deleted object.
When used judiciously, RAII makes the code simpler to understand (even shorter, if you compare our versions) and also helps a lot with some of the problems I enumerated above.
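With standard C++11, std::unique_lock gives you the same early-release behaviour as the hand-written Lock class, with no extra machinery. A sketch, assuming classA exposes a plain std::mutex member in place of obtainMutex/releaseMutex:

#include <mutex>
#include <string>
#include <unordered_map>

struct classA {
    int field1 = 0;
    std::mutex mutex;   // per-object mutex, replacing obtainMutex/releaseMutex
};

void performOperation(classA*) { /* takes some time */ }

std::unordered_map<std::string, classA*> map1;
std::mutex mapMutex;    // standing in for mutexA

void addItem(const std::string& id)
{
    std::unique_lock<std::mutex> mapLock(mapMutex);
    if (map1.find(id) != map1.end())
        return;                         // mapLock unlocks automatically
    classA* obj1 = new classA;
    map1[id] = obj1;
    std::unique_lock<std::mutex> objLock(obj1->mutex);
    mapLock.unlock();                   // explicit early release of the map lock
    obj1->field1 = 1;
    performOperation(obj1);             // objLock unlocks even if this throws
}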
Thought of combining my comments into an answer:
1) When you are adding an entry, and therefore modifying the container, you should not allow read access from other threads, as the container may be in transition between legal states. Conversely, you should not modify the container while other threads are reading it. This calls for a read-write lock. The pseudo-code is something like:
set read lock
search container
if found
    release read lock
    operate on the found object
else
    set write lock
    release read lock
    add entry
    release write lock
endif
(it's been some time since I've done multi-threaded programming, so I may be rusty on details)
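In modern C++ the pseudo-code maps onto std::shared_mutex (C++17) roughly like this; note that since a shared lock cannot be atomically upgraded to an exclusive one, the writer path re-checks (here via emplace, which does nothing if the key appeared in the gap):

#include <mutex>
#include <shared_mutex>
#include <string>
#include <unordered_map>

std::shared_mutex rw;
std::unordered_map<std::string, int> container;

void findOrAdd(const std::string& id)
{
    {
        std::shared_lock<std::shared_mutex> readLock(rw);  // set read lock
        auto it = container.find(id);                      // search container
        if (it != container.end()) {
            // operate on the found object (here, still under the read lock)
            return;
        }
    }                                                      // release read lock
    std::unique_lock<std::shared_mutex> writeLock(rw);     // set write lock
    container.emplace(id, 0);  // no-op if another writer added the entry
                               // between the two locks
}                                                          // release write lock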
2) When I worked with MSVC some years ago, we used the multi-threaded (i.e. thread-safe) version of the standard libraries. It could save you all this trouble. I haven't (yet) checked whether a thread-safe std also exists on gcc/Linux.
Most of the time, the definition of reentrance is quoted from Wikipedia:

A computer program or routine is described as reentrant if it can be safely called again before its previous invocation has been completed (i.e. it can be safely executed concurrently). To be reentrant, a computer program or routine:
(1) Must hold no static (or global) non-constant data.
(2) Must not return the address of static (or global) non-constant data.
(3) Must work only on the data provided to it by the caller.
(4) Must not rely on locks to singleton resources.
(5) Must not modify its own code (unless executing in its own unique thread storage).
(6) Must not call non-reentrant computer programs or routines.
How is safely defined?
If a program can be safely executed concurrently, does it always mean that it is reentrant?
What exactly is the common thread between the six points mentioned that I should keep in mind while checking my code for reentrant capabilities?
Also,
Are all recursive functions reentrant?
Are all thread-safe functions reentrant?
Are all recursive and thread-safe functions reentrant?
While writing this question, one thing came to mind:
Are terms like reentrance and thread safety absolute at all, i.e. do they have fixed concrete definitions? For if they are not, this question is not very meaningful.
1. How is safely defined?
Semantically. In this case, this is not a hard-defined term. It just means "you can do that without risk".
2. If a program can be safely executed concurrently, does it always mean that it is reentrant?
No.
For example, let's have a C++ function that takes both a lock, and a callback as a parameter:
#include <mutex>

typedef void (*callback)();

std::mutex m;

void foo(callback f)
{
    m.lock();
    // use the resource protected by the mutex
    if (f) {
        f();
    }
    // use the resource protected by the mutex
    m.unlock();
}
Another function could well need to lock the same mutex:
void bar()
{
    foo(nullptr);
}
At first sight, everything seems ok… But wait:
int main()
{
    foo(bar);
    return 0;
}
If the lock on the mutex is not recursive, then here's what will happen in the main thread:
(1) main will call foo.
(2) foo will acquire the lock.
(3) foo will call bar, which will call foo.
(4) The second foo will try to acquire the lock, fail, and wait for it to be released.
(5) Deadlock.
Oops…
Ok, I cheated, using the callback thing. But it's easy to imagine more complex pieces of code having a similar effect.
3. What exactly is the common thread between the six points mentioned that I should keep in mind while checking my code for reentrant capabilities?
You can smell a problem if your function has/gives access to a modifiable persistent resource, or has/gives access to a function that smells.
(Ok, 99% of our code should smell, then… See the last section for how to handle that…)
So, studying your code, one of those points should alert you:
The function has state (i.e. it accesses a global variable, or even a class member variable).
The function can be called by multiple threads, or could appear twice in the stack while the process is executing (i.e. the function could call itself, directly or indirectly). Functions taking callbacks as parameters smell a lot.
Note that non-reentrancy is viral: a function that could call a possibly non-reentrant function cannot be considered reentrant.
Note, too, that C++ methods smell because they have access to this, so you should study the code to be sure they have no funny interaction.
4.1. Are all recursive functions reentrant?
No.
In multithreaded cases, a recursive function accessing a shared resource could be called by multiple threads at the same moment, resulting in bad/corrupted data.
In single-threaded cases, a recursive function could use a non-reentrant function (like the infamous strtok), or use global data without handling the fact that the data is already in use. So your function is recursive because it calls itself directly or indirectly, but it can still be recursive-unsafe.
4.2. Are all thread-safe functions reentrant?
In the example above, I showed how an apparently thread-safe function was not reentrant. OK, I cheated, because of the callback parameter. But then, there are multiple ways to deadlock a thread by having it acquire a non-recursive lock twice.
4.3. Are all recursive and thread-safe functions reentrant?
I would say "yes" if by "recursive" you mean "recursive-safe".
If you can guarantee that a function can be called simultaneously by multiple threads, and can call itself, directly or indirectly, without problems, then it is reentrant.
The problem is evaluating this guarantee… ^_^
5. Are the terms like reentrance and thread safety absolute at all, i.e. do they have fixed concrete definitions?
I believe they do, but then, evaluating whether a function is thread-safe or reentrant can be difficult. This is why I used the term smell above: you can find that a function is not reentrant, but it can be difficult to be sure a complex piece of code is reentrant.
6. An example
Let's say you have an object, with one method that needs to use a resource:
struct MyStruct
{
    P * p;

    void foo()
    {
        if (this->p == nullptr)
        {
            this->p = new P();
        }

        // lots of code, some using this->p

        if (this->p != nullptr)
        {
            delete this->p;
            this->p = nullptr;
        }
    }
};
The first problem is that if somehow this function is called recursively (i.e. this function calls itself, directly or indirectly), the code will probably crash, because this->p will be deleted at the end of the last call, and still probably be used before the end of the first call.
Thus, this code is not recursive-safe.
We could use a reference counter to correct this:
struct MyStruct
{
    size_t c;
    P * p;

    void foo()
    {
        if (c == 0)
        {
            this->p = new P();
        }
        ++c;

        // lots of code, some using this->p

        --c;
        if (c == 0)
        {
            delete this->p;
            this->p = nullptr;
        }
    }
};
This way, the code becomes recursive-safe… but it is still not reentrant, because of multithreading issues: we must be sure the modifications of c and of p are done atomically, using a recursive mutex (remember, not all mutexes are recursive):
#include <mutex>

struct MyStruct
{
    std::recursive_mutex m;
    size_t c;
    P * p;

    void foo()
    {
        m.lock();
        if (c == 0)
        {
            this->p = new P();
        }
        ++c;
        m.unlock();

        // lots of code, some using this->p

        m.lock();
        --c;
        if (c == 0)
        {
            delete this->p;
            this->p = nullptr;
        }
        m.unlock();
    }
};
And of course, this all assumes the lots of code is itself reentrant, including the use of p.
And the code above is not even remotely exception-safe, but this is another story… ^_^
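For what it's worth, std::lock_guard fixes the locking half of the exception-safety problem (a sketch; if the "lots of code" in the middle throws, the counter is still leaked, so full safety would need a scope guard as well):

#include <cstddef>
#include <mutex>

struct P { };

struct MyStruct
{
    std::recursive_mutex m;
    std::size_t c = 0;
    P * p = nullptr;

    void foo()
    {
        {
            std::lock_guard<std::recursive_mutex> lock(m); // unlocked even on throw
            if (c == 0)
                p = new P();
            ++c;
        }

        // lots of code, some using this->p

        {
            std::lock_guard<std::recursive_mutex> lock(m);
            --c;
            if (c == 0)
            {
                delete p;
                p = nullptr;
            }
        }
    }
};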
7. Hey 99% of our code is not reentrant!
That is quite true for spaghetti code. But if you partition your code correctly, you will avoid reentrancy problems.
7.1. Make sure all functions have NO state
They must use only their parameters, their own local variables, and other functions without state, and must return copies of the data if they return anything at all.
7.2. Make sure your object is "recursive-safe"
An object method has access to this, so it shares a state with all the methods of the same instance of the object.
So, make sure the object can be used at one point in the stack (i.e. calling method A), and then, at another point (i.e. calling method B), without corrupting the whole object. Design your object to make sure that upon exiting a method, the object is stable and correct (no dangling pointers, no contradicting member variables, etc.).
7.3. Make sure all your objects are correctly encapsulated
No one else should have access to their internal data:
// bad
int & MyObject::getCounter()
{
    return this->counter;
}

// good
int MyObject::getCounter()
{
    return this->counter;
}

// good, too
void MyObject::getCounter(int & p_counter)
{
    p_counter = this->counter;
}
Even returning a const reference could be dangerous if the user retrieves the address of the data, as some other portion of the code could modify it without the code holding the const reference being told.
7.4. Make sure the user knows your object is not thread-safe
Thus, the user is responsible for using mutexes when sharing an object between threads.
The objects from the STL are designed not to be thread-safe (for performance reasons), and thus, if a user wants to share a std::string between two threads, the user must protect its access with concurrency primitives.
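For instance (a minimal sketch):

#include <mutex>
#include <string>

std::string shared_text;       // shared between two threads
std::mutex shared_text_mutex;  // supplied by the user, not by the STL

void append_line(const std::string& line)
{
    std::lock_guard<std::mutex> guard(shared_text_mutex);
    shared_text += line;
    shared_text += '\n';
}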
7.5. Make sure your thread-safe code is recursive-safe
This means using recursive mutexes if you believe the same resource can be used twice by the same thread.
"Safely" is defined exactly as the common sense dictates - it means "doing its thing correctly without interfering with other things". The six points you cite quite clearly express the requirements to achieve that.
The answers to your 3 questions are 3× "no".
Are all recursive functions reentrant?
NO!
Two simultaneous invocations of a recursive function can easily screw each other up, if they access the same global/static data, for example.
Are all thread-safe functions reentrant?
NO!
A function is thread-safe if it doesn't malfunction when called concurrently. But this can be achieved, e.g., by using a mutex to block the execution of the second invocation until the first finishes, so that only one invocation works at a time. Reentrancy means executing concurrently without interfering with other invocations.
Are all recursive and thread-safe functions reentrant?
NO!
See above.
The common thread:
Is the behavior well defined if the routine is called while it is interrupted?
If you have a function like this:
int add( int a , int b ) {
    return a + b;
}
Then it is not dependent upon any external state. The behavior is well defined.
If you have a function like this:
int gValue; // global state shared by every caller

int add_to_global( int a ) {
    return gValue += a;
}
The result is not well defined on multiple threads. Information could be lost if the timing was just wrong.
The simplest form of a reentrant function is something that operates exclusively on the arguments passed and constant values. Anything else takes special handling or, often, is not reentrant. And of course the arguments must not reference mutable globals.
Now I have to elaborate on my previous comment. @paercebal's answer is incorrect. In the example code, didn't anyone notice that the mutex, which was supposed to be a parameter, wasn't actually passed in?
I dispute the conclusion, I assert: for a function to be safe in the presence of concurrency it must be re-entrant. Therefore concurrent-safe (usually written thread-safe) implies re-entrant.
Neither thread safe nor re-entrant have anything to say about arguments: we're talking about concurrent execution of the function, which can still be unsafe if inappropriate parameters are used.
For example, memcpy() is thread-safe and re-entrant (usually). Obviously it will not work as expected if called with pointers to the same targets from two different threads. That's the point of the SGI definition, placing the onus on the client to ensure accesses to the same data structure are synchronised by the client.
It is important to understand that in general it is nonsense to have thread-safe operation include the parameters. If you've done any database programming you will understand. The concept of what is "atomic" and might be protected by a mutex or some other technique is necessarily a user concept: processing a transaction on a database can require multiple un-interrupted modifications. Who can say which ones need to be kept in sync but the client programmer?
The point is that "corruption" doesn't have to be messing up the memory on your computer with unserialised writes: corruption can still occur even if all individual operations are serialised. It follows that when you're asking if a function is thread-safe, or re-entrant, the question means for all appropriately separated arguments: using coupled arguments does not constitute a counter-example.
There are many programming systems out there: OCaml is one, and I think Python as well, which have lots of non-reentrant code in them but use a global lock to interleave thread access. These systems are not re-entrant, and they're not thread-safe or concurrent-safe; they operate safely simply because they prevent concurrency globally.
A good example is malloc. It is not re-entrant and not thread-safe, because it has to access a global resource (the heap). Using locks doesn't make it safe: it's definitely not re-entrant. If the interface to malloc had been designed properly, it would be possible to make it re-entrant and thread-safe:
malloc(heap*, size_t);
Now it can be safe because it transfers the responsibility for serialising access to a shared heap to the client. In particular, no work is required if there are separate heap objects. If a common heap is used, the client has to serialise access. Using a lock inside the function is not enough: just consider a malloc locking a heap* and then a signal coming along and calling malloc on the same pointer: deadlock: the signal can't proceed, and the client can't either, because it is interrupted.
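A toy illustration of that idea (entirely hypothetical; this is not the real malloc interface):

#include <cstddef>

// Each Heap is an independent object with no hidden global state.
struct Heap {
    char buffer[1024];
    std::size_t used = 0;
};

// Re-entrant: it touches nothing but its arguments. Two threads using two
// different Heap objects need no locking at all; two threads sharing one
// Heap must serialise access themselves: the client owns that decision.
void* heap_alloc(Heap* h, std::size_t n) {
    if (h->used + n > sizeof h->buffer)
        return nullptr;                 // this heap is out of space
    void* p = h->buffer + h->used;
    h->used += n;
    return p;
}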
Generally speaking, locks do not make things thread-safe; they actually destroy safety by inappropriately trying to manage a resource that is owned by the client. Locking has to be done by the object manufacturer; that's the only code that knows how many objects are created and how they will be used.
The "common thread" (pun intended!?) amongst the points listed is that the function must not do anything that would affect the behaviour of any recursive or concurrent calls to the same function.
So, for example, static data is an issue because it is owned by all threads; if one call modifies a static variable, then all threads use the modified data, thus affecting their behaviour. Self-modifying code (although rarely encountered, and in some cases prevented) would also be a problem, because although there are multiple threads, there is only one copy of the code; the code is essentially static data too.
Essentially to be re-entrant, each thread must be able to use the function as if it were the only user, and that is not the case if one thread can affect the behaviour of another in a non-deterministic manner. Primarily this involves each thread having either separate or constant data that the function works on.
All that said, point (1) is not necessarily true; for example, you might legitimately and by design use a static variable to retain a recursion count to guard against excessive recursion or to profile an algorithm.
A thread-safe function need not be reentrant; it may achieve thread safety by specifically preventing reentrancy with a lock, and point (6) says that such a function would not be reentrant. Regarding point (6): a function that calls a thread-safe function that locks is not safe for use in recursion (it will deadlock), and is therefore not said to be reentrant, though it may nonetheless be safe for concurrency, and would still be re-entrant in the sense that multiple threads can have their program counters in such a function simultaneously (just not within the locked region). Maybe this helps to distinguish thread safety from reentrancy (or maybe adds to your confusion!).
The answers to your "Also" questions are "No", "No" and "No". Just because a function is recursive and/or thread-safe, that doesn't make it re-entrant.
Each of these types of function can fail on all the points you quote. (Though I'm not 100% certain of point 5.)
A non-reentrant function maintains a static context inside itself. On the first call, a new context is created for you; on subsequent calls you don't pass that state in again, which is convenient for token analysis, e.g. strtok in C. If you have not cleared the context, there might be some errors.
/* strtok example */
#include <stdio.h>
#include <string.h>

int main ()
{
    char str[] = "- This, a sample string.";
    char * pch;
    printf ("Splitting string \"%s\" into tokens:\n", str);
    pch = strtok (str, " ,.-");
    while (pch != NULL)
    {
        printf ("%s\n", pch);
        pch = strtok (NULL, " ,.-");
    }
    return 0;
}
On the contrary, a reentrant function means that calling it at any time gives the same result, with no side effects, because there is no hidden context.
From the thread-safety point of view, the concern is simply that only one modification to shared data happens at a time in the current process, so you should add a lock guard to ensure only one change to a shared field at a time.
So thread safety and reentrancy are two different things seen from different views: reentrancy says you must clear the hidden context before the next analysis can start; thread safety says you must keep accesses to shared fields ordered.
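Compare POSIX's strtok_r, which makes the hidden context an explicit parameter and is therefore reentrant:

/* strtok_r example: the saveptr argument carries the parsing state,
   so there is no static context inside the function */
#include <stdio.h>
#include <string.h>

int main ()
{
    char str[] = "- This, a sample string.";
    char * pch;
    char * saveptr;
    pch = strtok_r (str, " ,.-", &saveptr);
    while (pch != NULL)
    {
        printf ("%s\n", pch);
        pch = strtok_r (NULL, " ,.-", &saveptr);
    }
    return 0;
}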
The terms "Thread-safe" and "re-entrant" mean only and exactly what their definitions say. "Safe" in this context means only what the definition you quote below it says.
"Safe" here certainly doesn't mean safe in the broader sense that calling a given function in a given context won't totally hose your application. Altogether, a function might reliably produce a desired effect in your multi-threaded application but not qualify as either re-entrant or thread-safe according to the definitions. Oppositely, you can call re-entrant functions in ways that will produce a variety of undesired, unexpected and/or unpredictable effects in your multi-threaded application.
A recursive function can be anything, and re-entrant has a stronger definition than thread-safe, so the answers to your numbered questions are all no.
Reading the definition of re-entrant, one might summarize it as meaning a function which will not modify anything beyond what you call it to modify. But you shouldn't rely on only the summary.
Multi-threaded programming is just extremely difficult in the general case. Knowing which parts of one's code are re-entrant is only part of this challenge. Thread safety is not additive. Rather than trying to piece together re-entrant functions, it's better to use an overall thread-safe design pattern and use this pattern to guide your use of every thread and shared resource in your program.