Why do thread creation methods take an argument? - c++

All thread create methods like pthread_create() or CreateThread() in Windows expect the caller to provide a pointer to the arg for the thread. Isn't this inherently unsafe?
This can work 'safely' only if the arg is on the heap, and then creating a heap variable adds the overhead of cleaning the allocated memory up. If a stack variable is provided as the arg, the result is at best unpredictable.
This looks like a half-cooked solution to me, or am I missing some subtle aspect of the APIs?

Context.
Many C APIs provide an extra void * argument so that you can pass context through third-party APIs. Typically you pack some information into a struct and point this argument at the struct, so that when the thread initializes and begins executing it has more information than just the particular function it's started with. There's no necessity to keep this information at the location given. For instance you might have several fields that tell the newly created thread what it will be working on, and where it can find the data it will need. Furthermore, there's no requirement that the void * actually be used as a pointer: it's a typeless argument of the most appropriate width on a given architecture (pointer width), through which anything can be made available to the new thread. For instance you might pass an int directly if sizeof(int) <= sizeof(void *): (void *)3.
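As a minimal sketch of that idea (the struct name, fields and path here are invented for illustration), packing context into a struct and handing its address to pthread_create might look like this:

#include <pthread.h>
#include <stdio.h>

struct WorkerArgs
{
    int         id;          /* which worker this is       */
    const char* input_path;  /* where it can find its data */
};

static void* worker(void* arg)
{
    struct WorkerArgs* args = (struct WorkerArgs*)arg;   /* unpack the context */
    printf("worker %d reading %s\n", args->id, args->input_path);
    return NULL;
}

int main(void)
{
    struct WorkerArgs args = { 1, "/tmp/input.dat" };     /* must outlive the thread */
    pthread_t tid;
    pthread_create(&tid, NULL, worker, &args);
    pthread_join(tid, NULL);   /* join before args goes out of scope */
    return 0;
}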
As a related example of this style: A FUSE filesystem I'm currently working on starts by opening a filesystem instance, say struct MyFS. When running FUSE in multithreaded mode, threads arrive at a series of FUSE-defined calls for handling open, read, stat, etc. Naturally these can have no advance knowledge of the actual specifics of my filesystem, so this is passed via the void * argument of fuse_main, which is intended for this purpose. struct MyFS *blah = myfs_init(); fuse_main(..., blah);. Now when the threads arrive at the FUSE calls mentioned above, the void * received is converted back into struct MyFS * so that the call can be handled within the context of the intended MyFS instance.
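Roughly, the retrieval inside a handler looks like the sketch below. MyFS, myfs_init and myfs_oper are stand-ins for the example above, and most of the fuse_operations wiring is elided; fuse_get_context() is how libfuse hands a callback the private_data pointer that was given to fuse_main:

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <stdlib.h>

struct MyFS { int some_state; };    /* stand-in for the real filesystem state */

static struct MyFS* myfs_init(void)
{
    return (struct MyFS*)calloc(1, sizeof(struct MyFS));
}

static int myfs_open(const char* path, struct fuse_file_info* fi)
{
    /* Convert the void * context back into the filesystem instance. */
    struct MyFS* fs = (struct MyFS*)fuse_get_context()->private_data;
    (void)path; (void)fi; (void)fs;
    /* ... handle the open within this MyFS instance ... */
    return 0;
}

static struct fuse_operations myfs_oper;

int main(int argc, char* argv[])
{
    myfs_oper.open = myfs_open;     /* the real code would fill in the other handlers too */
    struct MyFS* blah = myfs_init();
    return fuse_main(argc, argv, &myfs_oper, blah);   /* void * context argument */
}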

Isn't this inherently unsafe?
No. It is a pointer. Since you (as the developer) have created both the function that will be executed by the thread and the argument that will be passed to the thread you are in full control. Remember this is a C API (not a C++ one) so it is as safe as you can get.
This can work 'safely' only if the arg is in the heap,
No. It is safe as long as its lifetime in the parent thread lasts at least as long as the period during which the child thread can use it. There are many ways to make sure that it lives long enough.
and then again creating a heap variable adds to the overhead of cleaning the allocated memory up.
Seriously. Is that an argument? This is basically how it is done for all threads, unless you are passing something much simpler, like an integer (see below).
If a stack variable is provided as the arg then the result is at best unpredictable.
It's as predictable as you (the developer) make it. You created both the thread and the argument. It is your responsibility to make sure that the lifetime of the argument is appropriate. Nobody said it would be easy.
This looks like a half-cooked solution to me, or am I missing some subtle aspects of the APIs?
You are missing that this is the most basic threading API. It is designed to be as flexible as possible so that safer systems can be developed on top of it with as few strings attached as possible. So we now have boost::thread, which I would guess is built on top of these basic threading facilities but provides a much safer and easier-to-use infrastructure (at some extra cost).
If you want raw, unfettered speed and flexibility, use the C API (with some danger).
If you want something slightly safer, use a higher-level API like boost::thread (but slightly more costly).
Thread-specific storage with no dynamic allocation (example):
#include <pthread.h>
#include <cstdint>
#include <iostream>

struct ThreadData
{
    // Stuff for my thread.
};

ThreadData threadData[5];

extern "C" void* threadStart(void* data);

void* threadStart(void* data)
{
    intptr_t id = reinterpret_cast<intptr_t>(data);
    ThreadData& tData = threadData[id];
    // Do stuff.
    return NULL;
}

int main()
{
    for (intptr_t loop = 0; loop < 5; ++loop)
    {
        pthread_t threadInfo; // Not good; just makes the example quick to write.
        pthread_create(&threadInfo, NULL, threadStart, reinterpret_cast<void*>(loop));
    }
    // You should wait here for the threads to finish before exiting.
}
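For contrast, a sketch of the higher-level style the answer alludes to, shown here with std::thread (which plays the role boost::thread did); the function and variable names are invented:

#include <thread>
#include <string>
#include <iostream>

void worker(int id, const std::string& name)
{
    std::cout << "thread " << id << " working on " << name << '\n';
}

int main()
{
    // Arguments are typed and copied into the thread's own storage:
    // no void* and no lifetime puzzle for simple by-value parameters.
    std::thread t(worker, 1, std::string("job-A"));
    t.join();
}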

Allocation on the heap does not add a lot of overhead.
Besides the heap and the stack, global variable space is another option. Also, it's possible to use a stack frame that will last as long as the child thread. Consider, for example, local variables of main.
I favor putting the arguments to the thread in the same structure as the pthread_t object itself. So wherever you put the pthread record, put its arguments as well. Problem solved :v) .
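For instance (a sketch; the struct and field names are invented here), bundling the thread handle and its arguments keeps their lifetimes tied together:

#include <pthread.h>

struct Worker
{
    pthread_t tid;     // the thread handle...
    int       input;   // ...lives right next to the arguments it uses
    int       result;
};

static void* run(void* arg)
{
    Worker* w = static_cast<Worker*>(arg);
    w->result = w->input * 2;   // whatever the thread is supposed to do
    return NULL;
}

int main()
{
    Worker workers[3] = {};
    for (int i = 0; i < 3; ++i)
    {
        workers[i].input = i + 1;
        pthread_create(&workers[i].tid, NULL, run, &workers[i]);
    }
    for (int i = 0; i < 3; ++i)
        pthread_join(workers[i].tid, NULL);   // arguments stay valid until here
}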

This is a common idiom in all C programs that use function pointers, not just for creating threads.
Think about it. Suppose your function void f(void (*fn)()) simply calls into another function. There's very little you can actually do with that. Typically a function pointer has to operate on some data. Passing in that data as a parameter is a clean way to accomplish this, without, say, the use of global variables. Since the function f() doesn't know what the purpose of that data might be, it uses the ever-generic void * parameter, and relies on you the programmer to make sense of it.
If you're more comfortable with thinking in terms of object-oriented programming, you can also think of it like calling a method on a class. In this analogy, the function pointer is the method and the extra void * parameter is the equivalent of what C++ would call the this pointer: it provides you some instance variables to operate on.
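To make the analogy concrete (a sketch; Counter and the function names are invented), the void * plays the role of this:

struct Counter
{
    int value;
};

// C-style "method": the context pointer is the explicit this.
static void increment(void* ctx)
{
    Counter* self = static_cast<Counter*>(ctx);
    ++self->value;
}

// Generic caller that knows nothing about Counter.
static void call_with_context(void (*fn)(void*), void* ctx)
{
    fn(ctx);
}

int main()
{
    Counter c = { 0 };
    call_with_context(increment, &c);   // roughly equivalent to c.increment()
    return c.value == 1 ? 0 : 1;
}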

The pointer is a pointer to the data that you intend to use in the function. Windows-style APIs require that you give them a static or global function.
Often the argument is a pointer to the class instance you intend to use, a pointer to this (or pThis, if you will), and the intention is that you will delete pThis after the thread ends.
It's a very procedural approach, but it has a very big advantage that is often overlooked: the CreateThread C-style API is binary compatible, so when you wrap this API with a C++ class (or almost any other language) you can actually do so. If the parameter were typed, you wouldn't be able to access it from another language as easily.
So yes, this is unsafe, but there's a good reason for it.
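A rough sketch of that wrapping pattern (the class and member names here are invented for illustration, and error handling is omitted): the static trampoline receives this through the LPVOID parameter and casts it back.

#include <windows.h>

class Task
{
public:
    void start()
    {
        // Pass this through the LPVOID parameter; the static trampoline
        // casts it back and calls the real member function.
        m_handle = CreateThread(NULL, 0, &Task::trampoline, this, 0, NULL);
    }

    void join()
    {
        WaitForSingleObject(m_handle, INFINITE);
        CloseHandle(m_handle);
    }

private:
    static DWORD WINAPI trampoline(LPVOID pThis)
    {
        static_cast<Task*>(pThis)->run();
        return 0;
    }

    void run()
    {
        // the actual work, with full access to the object's members
    }

    HANDLE m_handle;
};

int main()
{
    Task t;
    t.start();
    t.join();
}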

Related

How to make SAFEARRAYs threadsafe if locking is unsafe?

I'm writing a C++ DLL that takes SAFEARRAYs passed in from Excel VBA. The DLL has multiple threads sharing the SAFEARRAYs (both reading from and writing to them). While trying to figure out how to do the sharing safely, I came across this bit of MSDN documentation:
For example, consider an application that uses the SafeArrayLock and SafeArrayUnlock functions. If these functions are called concurrently from different threads on the same SAFEARRAY data type instance, an inconsistent lock count may be created. This will eventually cause the SafeArrayUnlock function to return E_UNEXPECTED. You can prevent this by providing your own synchronization code.
It's confusing me because I thought the whole point of locks was to ensure thread safety and clearly this locking functionality is not intended for that. But why would you need locking in a single-threaded application?
The documentation for SafeArrayLock also says that the function "places a pointer to the array data in pvData of the array descriptor" but by my tests, the pvData pointer is valid even when SafeArrayLock has never been called (and the lock count is 0). For example, this function:
void __declspec(dllexport) __stdcall testfun(VARIANT& vararr)
{
    if (vararr.parray->cLocks != 0) throw -1;
    else
    {
        double* data = (double*)vararr.parray->pvData;
        data[5] = 4.1;
    }
}
effectively writes to the array stored in vararr and the change is visible in the VBA that calls it. What's up with that?
Given the seeming persistence of pvData and the unsafe locking mechanism, my instinct is to just scrap all the array manipulation functions and let my threads reach into pvData as they please (the writes never collide, so what could go wrong?) but others on here caution against manual array manipulation for unclear reasons. What's the right approach? Thanks in advance.

deleting data sent to thread in FLTK using c++

I'm performing a long task, implemented in some function, in a new thread (i.e. not the main GUI thread) in FLTK (in C++).
I achieve this with a callback function that in turn creates the thread so that I have something of the form
void callback(Fl_Widget* widget, void* passed_data)
{
    data_type* data = new data_type;
    data->value = x;                            // populate data structure to send to function
    fl_create_thread(thread1, function, data);
}
where fl_create_thread (at least for my purposes) is just using pthread_create meaning the data variable is passed as a void pointer and so 'function' takes a void pointer too.
I realised that this would actually create a memory leak as I don't delete 'data':
I can't delete it after the line with fl_create_thread as the thread hasn't necessarily (or ever) finished running. I have tried deleting the pointer at the end of 'function' but this raises two issues
1) Deleting a void pointer is undefined and so I am getting warnings to that effect.
2) This almost defeats the point of using a function: is there a better general coding practice?
Can anyone tell me how I should approach this? Thanks.
1) Deleting a void pointer is undefined and so I am getting warnings
to that effect.
You're probably casting to the proper type at the very beginning of your function (or you should be, in order to meaningfully use the parameter). Delete that typed pointer instead of the void * parameter and you will not get a warning.
2) This almost defeats the point of using a function: is there a
better general coding practice?
Not sure I see your point. This thread interface (pthread) is very C-like, so it has to resort to void pointers in order to pass arbitrary data. You can look at the C++ thread interface for a more C++ way of defining what to execute and with what parameters.
With memory you should always be clear about the ownership of allocated memory. This approach transfers the ownership to the just-created thread.
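A sketch of what that looks like in the thread function ('function', data_type and the value field come from the question; the stub definition here is illustrative):

struct data_type { int value; };   // stand-in for the question's data_type

void* function(void* passed_data)
{
    data_type* data = static_cast<data_type*>(passed_data);  // cast to the proper type first...
    // ... perform the long task using data->value ...
    delete data;   // ...then delete the typed pointer: this thread owns it now
    return NULL;
}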

How do you call a function in another address-space in C++

I'm aware of the threading issues etc that this could cause and of its dangers but I need to know how to do this for a security project I am doing at school. I need to know how to call a function in a remote address space of a given calling convention with arguments - preferably recovering the data the remote function has returned though its really not required that I do.
If I can get specifics from the remote function's function prototype at compile time, I will be able to make this method work. I need to know how big the arguments are and if the arguments are explicitly declared as pointers or not (void*, char*, int*, etc...)
I.e if I define a function prototype like:
typedef void (__cdecl *testFunc_t)(int* pData);
I would need to, at compile time, get the size of arguments at least, and if I could, which ones are pointers or not. Here we are assuming the remote function is either an stdcall or _cdecl call.
The IDE I am using is Microsoft Visual Studio 2007 in case the solution is specific to a particular product.
Here is my plan:
Create a thread in the remote process using CreateRemoteThread at the origin of the function I want to call, though I would do so in a suspended state.
I would set up the stack such that the return address was that of a stub of code allocated inside the process that would call ExitThread(eax), as this would exit the thread with the function's return value; I would then recover this by using GetExitCodeThread.
I would also copy the arguments for the function call from my local stack to that of the newly created thread - this is where I need to know if function arguments are pointers and the size of the arguments.
Resume the thread and wait for it to exit, at which point I will return to the caller with the threads exit code.
I know that this should be doable at compile time but whether the compiler has some method I can use to do it, I'm not sure. I'm also aware all this data can be easily recovered from a PDB file created after compiling the code and that the size of arguments might change if the compiler performs optimizations. I don't need to be told how dangerous this is, as I am fully aware of it, but this is not a commercial product but a small project I must do for school.
The question:
If I have a function prototype such as
typedef void (__cdecl *testFunc_t)(int* pData);
Is there any way I can get the size of this prototype's arguments at compile time (i.e. in the above example, the arguments would sum to a total size of sizeof(int*))? If, for example, I have a function like:
template<typename T> unsigned long getPrototypeArgLength()
{
    // would return the size of the arguments described in the prototype T
}
// when called as
getPrototypeArgLength<testFunc_t>()
This seems like quite a school project...
For step 3 you can use ReadProcessMemory / WriteProcessMemory (one of them). For example, during thread creation the new thread could receive the addresses (in the calling process) of where the parameters begin and end. Then it could read that region of the caller's memory and copy it onto its own stack.
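A rough sketch of that copy step (purely illustrative; it is demonstrated here against the current process, whereas in the real plan the handle and addresses would refer to the target process):

#include <windows.h>
#include <stdio.h>

int main()
{
    int args[2] = { 41, 1 };   // pretend these are the caller's parameters
    int copy[2] = { 0, 0 };
    SIZE_T bytesRead = 0;

    if (ReadProcessMemory(GetCurrentProcess(), args, copy, sizeof(args), &bytesRead))
        printf("copied %lu bytes: %d %d\n", (unsigned long)bytesRead, copy[0], copy[1]);
    return 0;
}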
Did you consider using COM for this whole thing? You could probably get things done much more easily if you use a mechanism that was designed especially for that.
Alright, I figured out that I can use the BOOST library to get a lot of type information at compile time. Specifically, I am using boost::function_traits; however, if you look around the boost library, you will find that you can recover quite a bit of information. Here's a bit of code I wrote to demonstrate how to get the number of arguments of a function prototype.
(Actually, I haven't tested the below code; it's just something I'm throwing together from another function I've made and tested.)
template<typename T>
unsigned long getArgCount()
{
    return boost::function_traits<typename boost::remove_pointer<T>::type>::arity;
}

void (*pFunc)(int, int);
// getArgCount<BOOST_TYPEOF(pFunc)>() returns 2
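Along the same lines, a small sketch (using C++17 variadic templates and a fold expression rather than Boost; the names are invented, and the byte count is a plain sum of sizeof, ignoring any padding the calling convention might add) that yields both the argument count and the summed argument size for a function-pointer type such as the one in the question:

#include <cstddef>

// Primary template left undefined; the specialization below matches
// function-pointer types of any arity.
template <typename T>
struct PrototypeInfo;

template <typename R, typename... Args>
struct PrototypeInfo<R (*)(Args...)>
{
    static constexpr std::size_t arg_count = sizeof...(Args);
    static constexpr std::size_t arg_bytes = (std::size_t(0) + ... + sizeof(Args));  // C++17 fold
};

typedef void (*testFunc_t)(int* pData);

static_assert(PrototypeInfo<testFunc_t>::arg_count == 1, "one argument");
static_assert(PrototypeInfo<testFunc_t>::arg_bytes == sizeof(int*), "sizeof(int*) bytes");

int main() { return 0; }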

What exactly is a reentrant function?

Most of the time, the definition of reentrance is quoted from Wikipedia:
A computer program or routine is described as reentrant if it can be safely called again before its previous invocation has been completed (i.e. it can be safely executed concurrently). To be reentrant, a computer program or routine:
1. Must hold no static (or global) non-constant data.
2. Must not return the address to static (or global) non-constant data.
3. Must work only on the data provided to it by the caller.
4. Must not rely on locks to singleton resources.
5. Must not modify its own code (unless executing in its own unique thread storage).
6. Must not call non-reentrant computer programs or routines.
How is safely defined?
If a program can be safely executed concurrently, does it always mean that it is reentrant?
What exactly is the common thread between the six points mentioned that I should keep in mind while checking my code for reentrant capabilities?
Also,
Are all recursive functions reentrant?
Are all thread-safe functions reentrant?
Are all recursive and thread-safe functions reentrant?
While writing this question, one thing comes to mind:
Are the terms like reentrance and thread safety absolute at all i.e. do they have fixed concrete definitions? For, if they are not, this question is not very meaningful.
1. How is safely defined?
Semantically. In this case, this is not a hard-defined term. It just means "you can do that, without risk".
2. If a program can be safely executed concurrently, does it always mean that it is reentrant?
No.
For example, let's have a C++ function that takes both a lock, and a callback as a parameter:
#include <mutex>

typedef void (*callback)();

std::mutex m;

void foo(callback f)
{
    m.lock();
    // use the resource protected by the mutex

    if (f) {
        f();
    }

    // use the resource protected by the mutex
    m.unlock();
}
Another function could well need to lock the same mutex:
void bar()
{
    foo(nullptr);
}
At first sight, everything seems ok… But wait:
int main()
{
    foo(bar);
    return 0;
}
If the lock on mutex is not recursive, then here's what will happen, in the main thread:
1. main will call foo.
2. foo will acquire the lock.
3. foo will call bar, which will call foo.
4. The 2nd foo will try to acquire the lock, fail, and wait for it to be released.
Deadlock.
Oops…
Ok, I cheated, using the callback thing. But it's easy to imagine more complex pieces of code having a similar effect.
3. What exactly is the common thread between the six points mentioned that I should keep in mind while checking my code for reentrant capabilities?
You can smell a problem if your function has/gives access to a modifiable persistent resource, or has/gives access to a function that smells.
(Ok, 99% of our code should smell, then… See last section to handle that…)
So, studying your code, one of those points should alert you:
The function has a state (i.e. accesses a global variable, or even a class member variable).
This function can be called by multiple threads, or could appear twice in the stack while the process is executing (i.e. the function could call itself, directly or indirectly). Functions taking callbacks as parameters smell a lot.
Note that non-reentrancy is viral: a function that could call a possible non-reentrant function cannot be considered reentrant.
Note, too, that C++ methods smell because they have access to this, so you should study the code to be sure they have no funny interaction.
4.1. Are all recursive functions reentrant?
No.
In multithreaded cases, a recursive function accessing a shared resource could be called by multiple threads at the same moment, resulting in bad/corrupted data.
In singlethreaded cases, a recursive function could use a non-reentrant function (like the infamous strtok), or use global data without handling the fact the data is already in use. So your function is recursive because it calls itself directly or indirectly, but it can still be recursive-unsafe.
4.2. Are all thread-safe functions reentrant?
In the example above, I showed how an apparently thread-safe function was not reentrant. OK, I cheated because of the callback parameter. But then, there are multiple ways to deadlock a thread by having it acquire a non-recursive lock twice.
4.3. Are all recursive and thread-safe functions reentrant?
I would say "yes" if by "recursive" you mean "recursive-safe".
If you can guarantee that a function can be called simultaneously by multiple threads, and can call itself, directly or indirectly, without problems, then it is reentrant.
The problem is evaluating this guarantee… ^_^
5. Are the terms like reentrance and thread safety absolute at all, i.e. do they have fixed concrete definitions?
I believe they do, but then, evaluating whether a function is thread-safe or reentrant can be difficult. This is why I used the term smell above: you can find that a function is not reentrant, but it could be difficult to be sure that a complex piece of code is reentrant.
6. An example
Let's say you have an object, with one method that needs to use a resource:
struct MyStruct
{
    P* p;

    void foo()
    {
        if (this->p == nullptr)
        {
            this->p = new P();
        }

        // lots of code, some using this->p

        if (this->p != nullptr)
        {
            delete this->p;
            this->p = nullptr;
        }
    }
};
The first problem is that if somehow this function is called recursively (i.e. this function calls itself, directly or indirectly), the code will probably crash, because this->p will be deleted at the end of the last call, and still probably be used before the end of the first call.
Thus, this code is not recursive-safe.
We could use a reference counter to correct this:
struct MyStruct
{
    size_t c;
    P* p;

    void foo()
    {
        if (c == 0)
        {
            this->p = new P();
        }
        ++c;

        // lots of code, some using this->p

        --c;
        if (c == 0)
        {
            delete this->p;
            this->p = nullptr;
        }
    }
};
This way, the code becomes recursive-safe… But it is still not reentrant because of multithreading issues: We must be sure the modifications of c and of p will be done atomically, using a recursive mutex (not all mutexes are recursive):
#include <mutex>

struct MyStruct
{
    std::recursive_mutex m;
    size_t c;
    P* p;

    void foo()
    {
        m.lock();
        if (c == 0)
        {
            this->p = new P();
        }
        ++c;
        m.unlock();

        // lots of code, some using this->p

        m.lock();
        --c;
        if (c == 0)
        {
            delete this->p;
            this->p = nullptr;
        }
        m.unlock();
    }
};
And of course, this all assumes the lots of code is itself reentrant, including the use of p.
And the code above is not even remotely exception-safe, but this is another story… ^_^
7. Hey 99% of our code is not reentrant!
It is quite true for spaghetti code. But if you partition your code correctly, you will avoid reentrancy problems.
7.1. Make sure all functions have NO state
They must only use the parameters, their own local variables, other functions without state, and return copies of the data if they return at all.
7.2. Make sure your object is "recursive-safe"
An object method has access to this, so it shares a state with all the methods of the same instance of the object.
So, make sure the object can be used at one point in the stack (i.e. calling method A), and then, at another point (i.e. calling method B), without corrupting the whole object. Design your object to make sure that upon exiting a method, the object is stable and correct (no dangling pointers, no contradicting member variables, etc.).
7.3. Make sure all your objects are correctly encapsulated
No one else should have access to their internal data:
// bad
int& MyObject::getCounter()
{
    return this->counter;
}

// good
int MyObject::getCounter()
{
    return this->counter;
}

// good, too
void MyObject::getCounter(int& p_counter)
{
    p_counter = this->counter;
}
Even returning a const reference could be dangerous if the user retrieves the address of the data, as some other portion of the code could modify it without the code holding the const reference being told.
7.4. Make sure the user knows your object is not thread-safe
Thus, the user is responsible to use mutexes to use an object shared between threads.
The objects from the STL are designed not to be thread-safe (because of performance issues), and thus, if a user wants to share a std::string between two threads, the user must protect its access with concurrency primitives.
7.5. Make sure your thread-safe code is recursive-safe
This means using recursive mutexes if you believe the same resource can be used twice by the same thread.
"Safely" is defined exactly as the common sense dictates - it means "doing its thing correctly without interfering with other things". The six points you cite quite clearly express the requirements to achieve that.
The answers to your 3 questions are 3× "no".
Are all recursive functions reentrant?
NO!
Two simultaneous invocations of a recursive function can easily screw each other up if they access the same global/static data, for example.
Are all thread-safe functions reentrant?
NO!
A function is thread-safe if it doesn't malfunction if called concurrently. But this can be achieved e.g. by using a mutex to block the execution of the second invocation until the first finishes, so only one invocation works at a time. Reentrancy means executing concurrently without interfering with other invocations.
Are all recursive and thread-safe functions reentrant?
NO!
See above.
The common thread:
Is the behavior well defined if the routine is called while it is interrupted?
If you have a function like this:
int add(int a, int b) {
    return a + b;
}
Then it is not dependent upon any external state. The behavior is well defined.
If you have a function like this:
int gValue = 0;   // global state the function depends on

int add_to_global(int a) {
    return gValue += a;
}
The result is not well defined on multiple threads. Information could be lost if the timing was just wrong.
The simplest form of a reentrant function is something that operates exclusively on the arguments passed and constant values. Anything else takes special handling or, often, is not reentrant. And of course the arguments must not reference mutable globals.
Now I have to elaborate on my previous comment. #paercebal's answer is incorrect: in the example code, didn't anyone notice that the mutex, which was supposed to be a parameter, wasn't actually passed in?
I dispute the conclusion, I assert: for a function to be safe in the presence of concurrency it must be re-entrant. Therefore concurrent-safe (usually written thread-safe) implies re-entrant.
Neither thread safe nor re-entrant have anything to say about arguments: we're talking about concurrent execution of the function, which can still be unsafe if inappropriate parameters are used.
For example, memcpy() is thread-safe and re-entrant (usually). Obviously it will not work as expected if called with pointers to the same targets from two different threads. That's the point of the SGI definition, placing the onus on the client to ensure accesses to the same data structure are synchronised by the client.
It is important to understand that in general it is nonsense to have thread-safe operation include the parameters. If you've done any database programming you will understand. The concept of what is "atomic" and might be protected by a mutex or some other technique is necessarily a user concept: processing a transaction on a database can require multiple un-interrupted modifications. Who can say which ones need to be kept in sync but the client programmer?
The point is that "corruption" doesn't have to be messing up the memory on your computer with unserialised writes: corruption can still occur even if all individual operations are serialised. It follows that when you're asking if a function is thread-safe, or re-entrant, the question means for all appropriately separated arguments: using coupled arguments does not constitute a counter-example.
There are many programming systems out there: Ocaml is one, and I think Python as well, which have lots of non-reentrant code in them, but which use a global lock to interleave thread access. These systems are not re-entrant and they're not thread-safe or concurrent-safe; they operate safely simply because they prevent concurrency globally.
A good example is malloc. It is not re-entrant and not thread-safe. This is because it has to access a global resource (the heap). Using locks doesn't make it safe: it's definitely not re-entrant. If the interface to malloc had been designed properly, it would be possible to make it re-entrant and thread-safe:
malloc(heap*, size_t);
Now it can be safe because it transfers the responsibility for serialising shared access to a single heap to the client. In particular, no work is required if there are separate heap objects. If a common heap is used, the client has to serialise access. Using a lock inside the function is not enough: just consider malloc locking a heap*, and then a signal comes along and calls malloc on the same pointer: deadlock. The signal handler can't proceed, and the client can't either, because it has been interrupted.
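A minimal sketch of that kind of interface (Heap and heap_alloc are invented names; a real allocator would also handle alignment and freeing): all state lives in the Heap the caller passes in, so callers using separate Heap objects need no synchronisation at all.

#include <cstddef>

struct Heap
{
    unsigned char buffer[4096];
    std::size_t   used;
};

void* heap_alloc(Heap* h, std::size_t n)
{
    if (h->used + n > sizeof(h->buffer))
        return nullptr;                 // this heap is out of space
    void* p = h->buffer + h->used;
    h->used += n;
    return p;
}

int main()
{
    Heap h = {};                        // each caller can own its own heap
    void* block = heap_alloc(&h, 16);   // carve 16 bytes out of this caller's heap
    return block != nullptr ? 0 : 1;
}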
Generally speaking, locks do not make things thread-safe; they actually destroy safety by inappropriately trying to manage a resource that is owned by the client. Locking has to be done by the object manufacturer; that's the only code that knows how many objects are created and how they will be used.
The "common thread" (pun intended!?) amongst the points listed is that the function must not do anything that would affect the behaviour of any recursive or concurrent calls to the same function.
So, for example, static data is an issue because it is owned by all threads; if one call modifies a static variable then all threads use the modified data, thus affecting their behaviour. Self-modifying code (although rarely encountered, and in some cases prevented) would be a problem because, although there are multiple threads, there is only one copy of the code; the code is essentially static data too.
Essentially to be re-entrant, each thread must be able to use the function as if it were the only user, and that is not the case if one thread can affect the behaviour of another in a non-deterministic manner. Primarily this involves each thread having either separate or constant data that the function works on.
All that said, point (1) is not necessarily true; for example, you might legitimately and by design use a static variable to retain a recursion count to guard against excessive recursion or to profile an algorithm.
A thread-safe function need not be reentrant; it may achieve thread safety by specifically preventing reentrancy with a lock, and point (6) says that such a function is not reentrant. Regarding point (6), a function that calls a thread-safe function that locks is not safe for use in recursion (it will deadlock), and is therefore not said to be reentrant, though it may nonetheless be safe for concurrency, and would still be re-entrant in the sense that multiple threads can have their program counters in such a function simultaneously (just not within the locked region). Maybe this helps to distinguish thread-safety from reentrancy (or maybe it adds to your confusion!).
The answers to your "Also" questions are "No", "No" and "No". Just because a function is recursive and/or thread-safe doesn't make it re-entrant.
Each of these types of function can fail on all the points you quote. (Though I'm not 100% certain of point 5.)
A non-reentrant function means that there is a static context, maintained by the function. On the first call a new context is created for you, and on subsequent calls you don't pass that context in again as a parameter; this is convenient for things like token analysis, e.g. strtok in C. If you have not cleared the context, there might be some errors.
/* strtok example */
#include <stdio.h>
#include <string.h>

int main()
{
    char str[] = "- This, a sample string.";
    char* pch;
    printf("Splitting string \"%s\" into tokens:\n", str);
    pch = strtok(str, " ,.-");
    while (pch != NULL)
    {
        printf("%s\n", pch);
        pch = strtok(NULL, " ,.-");
    }
    return 0;
}
By contrast, a reentrant function means that calling the function at any time gives the same result with no side effects, because there is no hidden context.
From the thread-safety point of view, it just means that only one modification of a shared variable happens at a time in the current process, so you should add a lock guard to ensure only one change to a shared field at a time.
So thread safety and reentrancy are two different things seen from different viewpoints: reentrancy says you should clear (or avoid) hidden context between uses; thread safety says you should keep access to shared fields ordered.
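For comparison, a sketch using strtok_r, the POSIX reentrant counterpart of strtok: the caller owns the parsing context, so there is no hidden static state.

/* strtok_r example: the saveptr argument carries the parsing context */
#include <stdio.h>
#include <string.h>

int main()
{
    char str[] = "- This, a sample string.";
    char* saveptr;                                 /* context lives in the caller */
    char* pch = strtok_r(str, " ,.-", &saveptr);
    while (pch != NULL)
    {
        printf("%s\n", pch);
        pch = strtok_r(NULL, " ,.-", &saveptr);
    }
    return 0;
}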
The terms "Thread-safe" and "re-entrant" mean only and exactly what their definitions say. "Safe" in this context means only what the definition you quote below it says.
"Safe" here certainly doesn't mean safe in the broader sense that calling a given function in a given context won't totally hose your application. Altogether, a function might reliably produce a desired effect in your multi-threaded application but not qualify as either re-entrant or thread-safe according to the definitions. Oppositely, you can call re-entrant functions in ways that will produce a variety of undesired, unexpected and/or unpredictable effects in your multi-threaded application.
Recursive function can be anything and Re-entrant has a stronger definition than thread-safe so the answers to your numbered questions are all no.
Reading the definition of re-entrant, one might summarize it as meaning a function which will not modify any anything beyond what you call it to modify. But you shouldn't rely on only the summary.
Multi-threaded programming is just extremely difficult in the general case. Knowing which part of one's code re-entrant is only a part of this challenge. Thread safety is not additive. Rather than trying to piece together re-entrant functions, it's better to use an overall thread-safe design pattern and use this pattern to guide your use of every thread and shared resources in the your program.

Is it possible to use function pointers across processes?

I'm aware that each process creates its own memory address space; however, I was wondering,
If Process A was to have a function like :
int DoStuff() { return 1; }
and a pointer typedef like :
typedef int (*DoStuff_f)();
and a getter function like :
DoStuff_f getDoStuff() { return DoStuff; }
and a magical way to communicate with Process B via... say boost::interprocess
would it be possible to pass the function pointer to process B and call
Process A's DoStuff from Process B directly?
No. All a function pointer is is an address in your process's address space. It has no intrinsic marker that is unique to different processes. So, even if your function pointer just happened to still be valid once you've moved it over to B, it would call that function on behalf of process B.
For example, if you had
////PROCESS A////
int processA_myfun() { return 3; }
// get a pointer to pA_mf and pass it to process B
////PROCESS B////
int processB_myfun() { return 4; } // This happens to be at the same virtual address as pA_myfun
// get address from process A
int x = call_myfun(); // call via the pointer
x == 4; // x is 4, because we called process B's version!
If process A and B are running the same code, you might end up with identical functions at identical addresses - but you'll still be working with B's data structures and global memory! So the short answer is, no, this is not how you want to do this!
Also, security measures such as address space layout randomization could prevent these sort of "tricks" from ever working.
You're confusing IPC and RPC. IPC is for communicating data, such as your objects or a blob of text. RPC is for causing code to be executed in a remote process.
In short, you cannot use a function pointer that was passed from another process.
A function's code is located in protected pages of memory; you cannot write to them. And each process has an isolated virtual address space, so the address of the function is not valid in another process. In Windows you could use the technique described in this article to inject your code into another process, but the latest versions of Windows reject it.
Instead of passing a function pointer, you should consider creating a library which will be used in both processes. In that case you could send a message to the other process when you need that function to be called.
If you tried to use process A's function pointer from process B, you wouldn't be calling process A - you'd call whatever is at the same address in process B. If they are the same program you might get lucky and it will be the same code, but it won't have access to any of the data contained in process A.
A function pointer won't work for this, because it only contains the starting address for the code; if the code in question doesn't exist in the other process, or (due to something like address space randomization) is at a different location, the function pointer will be useless; in the second process, it will point to something, or nothing, but almost certainly not where you want it to.
You could, if you were insane^Wdaring, copy the actual instruction sequence onto the shared memory and then have the second process jump directly to it - but even if you could get this to work, the function would still run in Process B, not Process A.
It sounds like what you want is actually some sort of message-passing or RPC system.
This is why people have invented things like COM, RPC and CORBA. Each of them gives this general kind of capability. As you'd guess, each does the job a bit differently from the others.
Boost IPC doesn't really support remote procedure calls. It will enable putting a variable in shared memory so it's accessible to two processes, but if you want to use a getter/setter to access that variable, you'll have to do that yourself.
Those are all basically wrappers to produce a "palatable" version of something you can do without them though. In Windows, for example, you can put a variable in shared memory on your own. You can do the same in Linux. The Boost library is a fairly "thin" library around those, that lets you write the same code for Windows or Linux, but doesn't try to build a lot on top of that. CORBA (for one example) is a much thicker layer, providing a relatively complete distributed environment.
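For the shared-variable part, a sketch of what the creating side might look like with Boost.Interprocess (the segment and object names are invented; the other process would open the same segment and call find<int>("SharedCounter")):

#include <boost/interprocess/managed_shared_memory.hpp>

int main()
{
    using namespace boost::interprocess;

    // Create (or open) a named shared memory segment...
    managed_shared_memory segment(open_or_create, "MySharedMemory", 65536);

    // ...and construct a named int inside it that another process can look up.
    int* counter = segment.find_or_construct<int>("SharedCounter")(0);
    ++*counter;   // both processes still need their own synchronisation here

    return 0;
}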
If both processes are in the same application, then this should work. If you are trying to send function pointers between applications then you are out of luck.
My original answer was correct if you assume a process and a thread are the same thing, which they're not. The other answers are correct - different processes cannot share function pointers (or any other kind of pointers, for that matter).