Problems regarding Boost::Python and Boost::Thread - C++

A friend and I are developing an application that uses Boost::Python. I have defined an interface in C++ (well, a pure virtual class) and exposed it through Boost::Python to the users, who inherit from it to create a class, which the application takes and uses for a callback mechanism.
Everything up to that point works pretty well. Now, the callback may take some time (the user may have programmed some heavy stuff), but we need to keep repainting the window so it doesn't look "stuck". We wanted to use Boost::Thread for this. Only one callback will be running at a time (no other threads will call into Python simultaneously), so we thought it wouldn't be such a big deal, since we don't use threads inside Python, nor in the C++ code wrapped for Python.
What we do is call PyEval_InitThreads() just after Py_Initialize(); then, before invoking the callback inside its own Boost thread, we use the macro Py_BEGIN_ALLOW_THREADS, and the macro Py_END_ALLOW_THREADS once the thread has ended.
I probably don't need to say that execution never reaches the second macro. It shows several errors every time it runs, but it's always while calling the actual callback. I have googled a lot, and even read some of the PEP docs regarding threads, but they all talk about threading inside the Python module (which I don't do, since it's just a pure virtual class exposed) or threading inside Python itself, not about the main application calling into Python from several threads.
Please help, this has been frustrating me for several hours.
P.S. Help!

Python can be called from multiple threads serially; I don't think that's a problem. It sounds to me like your errors are just coming from bad C++ code, since you said the errors happen after Py_BEGIN_ALLOW_THREADS and before Py_END_ALLOW_THREADS.
If you know that's not true, can you post a little more of your actual code and show exactly where it's erroring and exactly what errors it's giving?
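For reference, here is roughly how such serial cross-thread calls are usually made safe: the main thread releases the GIL, and the worker thread re-acquires it around every touch of Python. A minimal sketch with made-up names, assuming PyEval_InitThreads() has already run; not your actual code:

    #include <Python.h>            // must come before other includes
    #include <boost/thread.hpp>

    // Runs in the worker thread; must hold the GIL while touching Python.
    void run_callback(PyObject* callable)
    {
        PyGILState_STATE state = PyGILState_Ensure();
        PyObject* result = PyObject_CallObject(callable, NULL);
        if (result == NULL)
            PyErr_Print();         // surface any Python exception
        Py_XDECREF(result);
        PyGILState_Release(state);
    }

    void start_callback(PyObject* callable)
    {
        Py_BEGIN_ALLOW_THREADS     // main thread lets go of the GIL
        boost::thread worker(run_callback, callable);
        worker.join();             // a repaint loop would go here instead
        Py_END_ALLOW_THREADS       // main thread takes the GIL back
    }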

Related

C++ calling conventions -- converting between win32 and Linux/GCC?

I'm knee-deep in a project to connect Windows OpenVR applications running in Wine directly to Linux-native SteamVR via a Winelib wrapper, the idea being to sidestep all the problems with trying to make what is effectively a very complicated device driver run inside Wine itself. I pretty much immediately hit a wall. The problem appears to be related to calling conventions, though I've had trouble getting meaningful information out of winedbg, so there's a chance I'm way off.
The OpenVR API is C++ and consists primarily of classes filled with virtual methods. The application calls a getter (VR_GetGenericInterface) to acquire, from the (closed-source) runtime, a derived-class object implementing those methods. It then calls those methods directly on the object given to it.
My current attempt goes like this: my wrapped VR_GetGenericInterface returns a custom wrapper class for the requested interface. This class's methods are defined in their own file, separately compiled with -mabi=ms. They call non-member functions in a separate file that is compiled without -mabi=ms, which finally make the corresponding calls into the actual runtime.
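In sketch form, with a stand-in interface since I can't paste the whole thing, the split looks like this:

    // msabi_side.cpp -- compiled WITH -mabi=ms; this is what the Windows
    // app calls into, so methods here use the MS calling convention.
    #include <cstdint>

    struct IExample {                      // stand-in for an OpenVR interface
        virtual uint32_t GetValue() = 0;   // (shared via a header in reality)
    };

    uint32_t sysv_GetValue(IExample* real);    // defined in the other file

    struct WrappedExample : IExample {
        IExample* real;                    // object from the native runtime
        uint32_t GetValue() override {
            return sysv_GetValue(real);    // hop across the ABI boundary
        }
    };

    // sysv_side.cpp -- compiled WITHOUT -mabi=ms; calls the real runtime
    // using the System V convention it expects.
    uint32_t sysv_GetValue(IExample* real) {
        return real->GetValue();
    }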
This seems to work, until the application calls a method that returns a struct of some description. Then the application segfaults on the line where the call happened, apparently just after the call returned: my code isn't anywhere on the stack at that point, and I verified with printfs that my wrapper class reaches the point of returning to the app.
This leads me to think there's yet another calling-convention gotcha I haven't accounted for, related to structs and similar complex types. From googling, it appears that returning a struct is typically one of the more poorly-defined corners of an ABI, but I found no concrete answers or descriptions of the differences.
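For what it's worth, here is the difference I've pieced together so far; treat the specifics as my own reading of the two x86-64 ABIs rather than verified fact:

    #include <cstdint>

    struct Small  { uint64_t a; };        //  8 bytes
    struct Medium { uint64_t a, b; };     // 16 bytes
    struct Large  { uint64_t a, b, c; };  // 24 bytes

    Small  ret_small();   // both ABIs: returned in RAX
    Medium ret_medium();  // System V: returned in RAX:RDX.
                          // MS x64: returned via hidden pointer argument!
    Large  ret_large();   // both ABIs: hidden pointer, but for member
                          // functions System V passes it before `this`
                          // while MS x64 passes it after, so the two
                          // sides disagree on argument order.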
What is most likely happening, and how can I dig deeper to see what exactly the application is expecting?

How to pass context to a WinEventProc (callback for SetWinEventHook)?

The question is in the title: how can I have the callback for SetWinEventHook receive some context information? Usually functions that take callbacks also take a context parameter that they pass on to the callbacks, but not this one.
Using a global (including thread_local) variable, as suggested in "SetWinEventHook with custom data" for example, is almost certainly unacceptable.
There may be more than one caller of the utility function that sets the WinEvent hook, so a global is out. thread_local may work, but it also may not: what if the callback calls the same utility function and sets yet another hook? They will both get called the next time I pump messages. Using thread_local requires me to define a contract that forbids a single thread from setting more than one WinEvent hook through me, and requires my callers to abide by it. That's a pretty arbitrary and artificial limitation, and I'd prefer not to impose it.
I'm reimplementing in C++ a little PoC I had in C#. There I could simply use capturing lambdas, and they would convert to function pointers that I could pass via P/Invoke to SetWinEventHook. C++ unfortunately doesn't have that feature (probably because it requires allocating writable-executable memory and dynamically generating a little code, which some platforms don't allow), but perhaps there is a platform-specific way to do this?
I seem to remember that ATL did something like that in the past, and I guess I can probably implement something like this too, but maybe there's a better way? If not, perhaps somebody has already written something like that?
(Another valid solution would be a clever-enough data structure and algorithm to solve the reentrancy problem.)
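To make that last option concrete, the direction I'm considering: the callback does receive the HWINEVENTHOOK it was registered under, so a lock-protected map from hook handle to context might be enough. A minimal sketch, under my assumption that with WINEVENT_OUTOFCONTEXT the callback only runs when this thread pumps messages, so inserting into the map before pumping is race-free:

    #include <windows.h>
    #include <mutex>
    #include <unordered_map>

    struct HookContext { void* user; };   // whatever the caller wants carried

    static std::mutex g_lock;
    static std::unordered_map<HWINEVENTHOOK, HookContext*> g_contexts;

    void CALLBACK WinEventProc(HWINEVENTHOOK hook, DWORD event, HWND hwnd,
                               LONG idObject, LONG idChild,
                               DWORD idEventThread, DWORD dwmsEventTime)
    {
        HookContext* ctx = nullptr;
        {
            std::lock_guard<std::mutex> guard(g_lock);
            auto it = g_contexts.find(hook);    // the hook handle is the key
            if (it != g_contexts.end()) ctx = it->second;
        }   // lock released before user code runs, so re-entry is fine
        if (ctx) { /* use ctx->user ... */ }
    }

    HWINEVENTHOOK HookWithContext(DWORD eventMin, DWORD eventMax,
                                  HookContext* ctx)
    {
        HWINEVENTHOOK hook = SetWinEventHook(eventMin, eventMax, nullptr,
                                             WinEventProc, 0, 0,
                                             WINEVENT_OUTOFCONTEXT);
        if (hook) {
            std::lock_guard<std::mutex> guard(g_lock);
            g_contexts[hook] = ctx;   // no events arrive until messages pump
        }
        return hook;
    }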

Using a C++ file in a Swift project

I've added a C++ file to my project, which is written in Swift. It only calculates some random numbers and uses a std::vector. Its wrapper is written in Objective-C. When I try to call functions from the .cpp file, the app crashes after some time. But there's strange behavior, because it doesn't crash while executing the C++ code (that runs as I expect), but when I load a navigation controller, which shouldn't have anything to do with it. The console shows this:
'pthread_mutex_lock(&mutex)' failed with error 'EINVAL'(22)
I googled this error, but I don't really understand the problem in my case.
Because you're using threaded code - the pthreads - the "crashes after some time" makes sense. I suspect it IS running the C++ code: your Swift code calls some Objective-C++ wrapper code, which calls some C++, which spawns a thread, then returns back to you, and you get the data at a later time somehow.
If I were you, I'd look at the C++ threading code. There's a Stack Overflow answer that might be relevant here: EINVAL on pthread_mutex. Maybe there's a bug, or the C++ code fails because it assumes Linux and you're on macOS, or something.
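For what it's worth, EINVAL from pthread_mutex_lock usually means the mutex was never initialized, or had already been destroyed by the time of the lock. A made-up sketch of the failure mode that would match your symptoms, where the object owning the mutex dies while a background thread still uses it:

    #include <pthread.h>

    struct Worker {
        pthread_mutex_t mutex;

        Worker()  { pthread_mutex_init(&mutex, NULL); }
        ~Worker() { pthread_mutex_destroy(&mutex); }

        void work() {
            // EINVAL here if the Worker was destroyed (say, when the Swift
            // side released the wrapper) before the background thread ran.
            pthread_mutex_lock(&mutex);
            // ... critical section ...
            pthread_mutex_unlock(&mutex);
        }
    };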
I also almost hate to suggest this, but depending on the size/complexity of the C++, maybe it makes sense to just rewrite it in Swift. You're going through a lot of bridging layers to call this code, and that feels like it could be fragile (which may explain what you're seeing).
(OR compile the C++ as a separate helper app and use cross-process communication like XPC, or just NSTask, to talk back and forth between your C++ process and your Swift process.)

C# / C++ Asynchronous reverse pinvoke?

I need to call C# code from a native C/C++ .dll asynchronously.
While searching for how to do this, I found that I could create a C# delegate and get a function pointer from it, which I would use inside my native code.
The problem is that my native code needs to run asynchronously, i.e. in a separate thread created from the native code, which means that the native call made from C# will return to C# before the delegate is invoked.
I read in another SO question that someone had trouble with this, despite what MSDN says, because his delegate was garbage-collected before it got called, due to the asynchronous nature of his task.
My question is: is it really possible to call a C# delegate through a function pointer from native code running in a thread that was created inside the native code? Thank you.
No, this is a universal bug and not specific to asynchronous code. It is just a bit more likely to bite in your case, since you never have the machinery behind [DllImport] to keep you out of trouble. I'll explain why this goes wrong; maybe that helps.
A delegate declaration for a callback method is always required; that's how the CLR knows how to make the call to the method. You often declare it explicitly with the delegate keyword, and you might need to apply the [UnmanagedFunctionPointer] attribute if the unmanaged code is 32-bit and assumes the function was written in C or C++. The declaration is important: it's how the CLR knows how the arguments you pass from your native code need to be converted to their managed equivalents. That conversion can be intricate if your native code passes strings, arrays or structures to the callback.
The scenario is heavily optimized in the CLR, which is important because managed code inevitably runs on an unmanaged operating system. There are a lot of these transitions; you can't see them because most of them happen inside .NET Framework code. The optimization involves a thunk, a sliver of auto-generated machine code that takes care of making the call to a foreign method or function. Thunks are created on the fly, whenever you make the interop call that uses the delegate: in your case, when C# code passes the delegate to your C++ code. Your C++ code gets a pointer to the thunk, a function pointer; you store it and make the callback later.
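The native side of that typically looks something like this; a sketch with invented names, not your actual code:

    #include <thread>

    typedef void (*Callback)(int result);  // matches the C# delegate's shape

    static Callback g_callback;            // points at the CLR-generated thunk

    extern "C" __declspec(dllexport)
    void StartWork(Callback cb)
    {
        g_callback = cb;            // store the thunk's function pointer
        std::thread([] {
            // ... long-running native work ...
            g_callback(42);         // kaboom here if the delegate has been
        }).detach();                // garbage-collected in the meantime
    }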
"You store it" is where the problem starts. The CLR is unaware that you stored the pointer to the thunk, the garbage collector cannot see it. Thunks require memory, usually just a few handful of bytes for the machine code. They don't live forever, the CLR automatically releases the memory when the thunk is no longer needed.
"Is no longer needed" is the rub, your C++ code cannot magically tell the CLR that it no longer is going to make a callback. So the simple and obvious rule it uses is that the thunk is destroyed when the delegate object is garbage collected.
Programmers forever get in trouble with that rule. They don't realize that the life-time of the delegate object is important. Especially tricky in C#, it has a lot of syntax sugar that makes it very easy to create delegate objects. You don't even have to use the new keyword or name the delegate type, just using the target method name is enough. The lifetime of such a delegate object is only the pinvoke call. After the call completes and your C++ code has stored the pointer, the delegate object isn't referenced anywhere anymore so is eligible for garbage collection.
Exactly when that happens, and the thunk is destroyed, is unpredictable. The GC runs only when needed. Could be a nanosecond after you made the call, that's unlikely of course, could be seconds. Most tricky, could be never. Happens in a typical unit test that doesn't otherwise calls GC.Collect() explicitly. Unit tests rarely put enough pressure on the GC heap to induce a collection. It is a bit more likely when you make the callback from another thread, implicit is that other code is running on other threads that make it more likely that a GC is triggered. You'll discover the problem quicker. Nevertheless, the thunk is going to get destroyed in a real program sooner or later. Kaboom when you make the callback in your C++ code after that.
So, rock-hard rule, you must store a reference to the delegate to avoid the premature collection problem. Very simple to do, just store it in a variable in your C# program that is declared static. Usually good enough, you might want to set it explicitly back to null when the C# code tells your C++ code to stop making callbacks, unlikely in your case. Very occasionally, you'd want to use GCHandle.Alloc()instead of a static variable.

Detect stage of static initialization?

What I really want to know is: how do I tell when each stage of C++ static initialization is truly DONE?
There is static initialization, where simple things get assigned. Then there's dynamic initialization of statics, where more complicated statics get assigned, and whose order is not defined across translation units. This is kind of horrible, and there are not many easy ways to cope. I use namespaces in places to make an immediate assignment that happens on loading header files, but the flaw here is that this can then be overwritten in one of the initialization phases.
I can make the initialization a function which does "the right thing", but it would be much easier if I could KNOW what "phase" I am in somehow. As far as I can tell, this is not possible in any way at all, but I am hoping someone out there has some good news.
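For clarity, the "function which does the right thing" I mean is the usual construct-on-first-use idiom, sketched here with a made-up global:

    #include <string>

    // Construct-on-first-use: the static is initialized the first time the
    // function is called, so initialization order across translation units
    // stops mattering.
    std::string& global_name()
    {
        static std::string name = "default";
        return name;
    }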
I have worked around the issue that was causing this, which was unused code being unexpectedly linked in because it was in the project. It would still be nice to know the answer to this, but I am guessing the answer is "there is no way to know for sure".
I edited the question; I don't really want to know whether main has started, per se.
I don't get what problem you are trying to solve.
When you build your application, the linker adds the startup code, which is the first code to be executed when the OS loads your program into memory. This code does all the initialization and, when finished, calls your main() function.
If you are talking about replacing this code with your own, you should check the inner details of your compiler/linker (and be very sure you know what you are doing!!).
If your question is about having multiple processes and needing to know whether one of the processes has started, you should use a proper synchronization mechanism (one of those provided by the underlying OS, or one of your own making).
How about something like this:

bool is_started(bool set_started = false)
{
    static bool flag = false;   // false during static initialization,
    if (set_started)            // true once main() has been reached
        flag = true;
    return flag;
}

int main()
{
    is_started(true);   // mark "dynamic initialization is done"
    // ...
}
If your question is about Windows: I know you can detect when the message pump of a process has started. That way you know for sure everything is initialized.
Of course this doesn't fly on *nix.
If you're running on Windows, create a mutex after you're done initializing. You can then WaitForSingleObject on that mutex to detect whether your program is truly initialized.
Many applications do this to detect whether startup is complete and whether another instance of the application is running. This is especially common if you want only one instance of a program running.
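A sketch of that idea; I've swapped in a named event rather than a mutex, since an event is a bit more natural for pure "initialization is done" signaling, and the object name here is made up:

    #include <windows.h>

    // Process A: signal that initialization is finished.
    void announce_initialized()
    {
        HANDLE ev = CreateEventW(NULL, TRUE /*manual reset*/,
                                 FALSE, L"MyApp.Initialized");
        SetEvent(ev);   // leave the handle open for the process lifetime
    }

    // Process B: block until process A reports it is fully initialized.
    bool wait_for_initialized(DWORD timeout_ms)
    {
        HANDLE ev = CreateEventW(NULL, TRUE, FALSE, L"MyApp.Initialized");
        if (ev == NULL)
            return false;
        bool ok = WaitForSingleObject(ev, timeout_ms) == WAIT_OBJECT_0;
        CloseHandle(ev);
        return ok;
    }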