Calling boost::python::object as function in separate thread - c++

I am trying to wrap some C++ functionality into Python with the help of boost::python. I am having trouble getting a particular callback mechanism to work. The following code snippet shows what I am trying to do:
// C++ side
class LoopClass {
public:
    // some class attributes
    void call_once(std::function<void()> const& fun) const;
};

void callOnce(LoopClass& loop, boost::python::object const& function) {
    auto fun = [&]() {
        function();
    };
    loop.call_once(fun);
}

boost::python::class_<LoopClass>("LoopClass")
    .def("call_once", &callOnce);
# Python side
def foo():
    print "foo"

loop = LoopClass()
loop.call_once(foo)
Here is the deal: the function call_once() takes a std::function and puts it in a queue. LoopClass maintains an eternal loop which runs in a separate thread and, at a certain point, processes the queue of stored callback functions. To treat a boost::python::object as a function, its call operator has to be invoked explicitly. This is why I didn't wrap call_once() directly but wrote the little conversion function callOnce(), which forwards the call through a lambda.
Anyhow, when I try to run this code, accessing the boost::python::object fails with a segmentation fault. I guess it's just not that easy to share Python objects between two threads. But how can this be done?
Thanks in advance for any help!
Update
I tried to follow the advice of @JanneKarila:
See Non-Python created threads. – Janne Karila
I guess this is the right point to find a solution, but unfortunately I am not able to figure out how to apply it.
I tried
void callOnce(LoopClass& loop, boost::python::object const& function) {
    auto fun = [&]() {
        PyGILState_STATE gstate;
        gstate = PyGILState_Ensure();
        function();
        PyGILState_Release(gstate);
    };
    loop.call_once(fun);
}
which doesn't work. Am I missing something or just too dumb?

Have you called PyEval_InitThreads()?
If so, maybe this piece can help: http://www.codevate.com/blog/7-concurrency-with-embedded-python-in-a-multi-threaded-c-application
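Either way, here is a rough sketch of how the pieces usually fit together. It is only an illustration under two assumptions: PyEval_InitThreads() is called during module initialisation (the module name my_module below is made up), and the boost::python::object is copied into the lambda by value, since capturing it by reference would leave a dangling reference by the time the queued callback runs on the loop thread.
#include <boost/python.hpp>

void callOnce(LoopClass& loop, boost::python::object function) {
    // Copy `function` into the closure so it outlives callOnce().
    auto fun = [function]() {
        PyGILState_STATE gstate = PyGILState_Ensure();   // acquire the GIL on the loop thread
        try {
            function();
        } catch (boost::python::error_already_set const&) {
            PyErr_Print();   // report the Python exception instead of letting it escape
        }
        PyGILState_Release(gstate);
    };
    loop.call_once(fun);
}

// Hypothetical module initialisation: create the GIL before the loop thread runs Python code.
BOOST_PYTHON_MODULE(my_module) {
    PyEval_InitThreads();   // the interpreter itself is already initialised at this point
    boost::python::class_<LoopClass>("LoopClass")
        .def("call_once", &callOnce);
}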

Related

Resetting a shared pointer captured in a lambda function

(I'm very unsure about the phrasing of the question title. I'm hoping it's not misleading because I really don't know how to summarize this. But I'll try to explain my problem as well as I can.)
In a project, there is something like this (written from memory and simplified):
class A {
private:
    static boost::weak_ptr<SomeClassB> b;
public:
    static boost::shared_ptr<SomeClassB> StopSomeProcesses() {
        boost::shared_ptr<SomeClassB> temp(new SomeClassB());
        b = temp;
        return temp;
    }
};
Now in another project, I need to do something similar to the following:
boost::shared_ptr<SomeClassB> obj;

void someFunction() {
    obj = A::StopSomeProcesses();
    auto callback = [](){
        //some other stuff here
        obj.reset();
    };
    NamespaceFromYetAnotherProject::DoSomething(callback);
}
What this basically does: while b holds a valid object from A::StopSomeProcesses, some processes are stopped, as the name implies. In this case, the processes are stopped while DoSomething executes. At the end, DoSomething calls callback, where obj is reset and the stopped processes can finally continue.
I've done this and it works. However, as much as possible, I'd like to avoid using global variables. I tried doing the following:
void someFunction() {
    boost::shared_ptr<SomeClassB> obj;
    obj = A::StopSomeProcesses();
    auto callback = [&obj](){
        //some other stuff here
        obj.reset();
    };
    NamespaceFromYetAnotherProject::DoSomething(callback);
}
The above code works. But I'm not sure if I was already in "undefined behavior" territory and just got lucky. Doesn't obj's scope end already? Or does the fact that the lambda was passed as an argument help extend its "life"? If this is safe to do, is that safety lost if callback is run on another thread?
I also tried doing this:
void someFunction() {
    boost::shared_ptr<SomeClassB> obj;
    obj = A::StopSomeProcesses();
    auto callback = [obj](){
        //some other stuff here
        boost::shared_ptr<SomeClassB> tempObj(new SomeClassB(*obj));
        tempObj.reset();
    };
    NamespaceFromYetAnotherProject::DoSomething(callback);
}
But this was something I tried randomly. I wrote it while completely focused on just deleting the object held by the shared pointer. It worked, but I'm not even sure if it's just roundabout or even valid.
Are these attempts going anywhere? Or am I completely going the wrong way? Or should I just stick to using a global variable? Would appreciate any help on how to go about this problem. Thanks!
You are using a shared_ptr, and StopSomeProcesses internally allocates the memory it points to. Pointers are passed by value, so the lifetime of obj is irrelevant. Every function call makes a new copy of it, as does the binding in the lambda. What matters is what the pointer points to, and that was allocated with new and lives on.
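Put differently, capturing the shared_ptr by value lets the lambda own its own reference, so no global is needed. A sketch along those lines (the mutable is required because reset() is a non-const operation on the by-value copy; note that any further copies DoSomething makes of the callback each hold their own reference as well):
void someFunction() {
    boost::shared_ptr<SomeClassB> obj = A::StopSomeProcesses();
    auto callback = [obj]() mutable {
        // some other stuff here
        obj.reset();   // drop the lambda's own reference
    };
    obj.reset();       // drop the local reference; only the callback keeps SomeClassB alive now
    NamespaceFromYetAnotherProject::DoSomething(callback);
}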

C++ return value on concurrent queue pushing functions

After receiving answers to a previous question on logging on a different thread, I am currently at the following bit of code (note: the concurrent_queue here is from ppl, but any other concurrent_queue should work):
class concurrentFuncQueue
{
private:
    typedef std::function<void()> LambdaFunction;
    mutable concurrency::concurrent_queue<LambdaFunction> functionQueue;
    mutable std::atomic<bool> endcond;
    LambdaFunction function;
    std::thread thd;
public:
    concurrentFuncQueue() : endcond(false), thd([=]{
        while (endcond != true)
        {
            if (functionQueue.try_pop(function))
            {
                function(); // note: I am popping a function and adding () to execute it
            }
        }
    }) {}
    ~concurrentFuncQueue() { functionQueue.push([=]{ endcond = true; }); thd.join(); }
    void pushFunction(LambdaFunction function) const { functionQueue.push(function); }
};
Basically, the functions I push are run sequentially on a different thread (e.g. a logging function) so as to avoid performance issues on the main thread.
Current usage is along the following:
static concurrentFuncQueue Logger;
vector<char> outstring(256);
Logger.pushFunction([=]{ OutputDebugString(debugString.c_str()); });
Great so far. I can push functions onto a concurrent queue that will run them sequentially on a separate thread.
One thing I also need, but currently don't have, is return values, so that e.g. (pseudo-code):
int x = 3, y = 3;
auto intReturn = Logger.pushFunction([=]()->int { return x * y; });
will push x * y onto the concurrent queue and, after the pop and completion of the function (on the other thread), return the calculated value to the caller thread.
(I understand that I'll be blocking the caller thread until the pushed function is returned. That is exactly what I want)
I get the feeling that I might have to use something along the lines of std::promise, but sadly my currently limited understanding of them prevents me from formulating something codable.
Any ideas? Thoughts on the above C++ code and any other comments are also much welcome (please just ignore the code completely if you feel another implementation is more appropriate or solves the problem).
You should be able to use something along the lines of:
template<typename Foo>
std::future<typename std::result_of<Foo()>::type> pushFunction(Foo&& f) {
    using result_type = typename std::result_of<Foo()>::type; // change to typedef if using is not supported
    std::packaged_task<result_type()> t(f);
    auto ret_fut = t.get_future();
    functionQueue.push(std::move(t));
    return ret_fut;
}
For this to work you need to make your LambdaFunction a type-erased function handler.
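A caveat and a sketch of one way to do that: std::packaged_task is movable but not copyable, so it cannot be stored in a std::function<void()> directly. Keeping LambdaFunction as std::function<void()> and holding the task in a shared_ptr is one workaround (an illustration, not the only option; this would be a member of concurrentFuncQueue and needs <future> and <memory>):
template<typename Foo>
std::future<typename std::result_of<Foo()>::type> pushFunction(Foo&& f) const {
    using result_type = typename std::result_of<Foo()>::type;
    // packaged_task is move-only; hold it in a shared_ptr so the pushed lambda stays copyable
    auto task = std::make_shared<std::packaged_task<result_type()>>(std::forward<Foo>(f));
    auto ret_fut = task->get_future();
    functionQueue.push([task]{ (*task)(); });
    return ret_fut;
}
The caller then blocks exactly as described in the question:
auto intReturn = Logger.pushFunction([=]()->int { return x * y; });
int result = intReturn.get(); // waits until the queue thread has executed the task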

Lambdas and threads

I've recently started using lambdas an awful lot within threads, and want to make sure I'm not setting myself up for thread-safety issues/crashes later. My usual way of using them is:
class SomeClass {
    int someid;
    void NextCommand();
    std::function<void(int, int)> StoreNumbers;
    SomeClass(int id, std::function<void(int, int)> fn); // constructor sets id and StoreNumbers
};

// Called from multiple threads
static void read_callback(int fd, void* ptr)
{
    SomeClass* sc = static_cast<SomeClass*>(ptr);
    // ..
    sc->StoreNumbers(someint, someotherint); // voila, thread-specific storage.
}

static DWORD WINAPI ThreadFn(LPVOID param)
{
    std::list<int> ints1;
    std::list<int> ints2;
    auto storenumbers = [&] (int i, int i2) {
        // thread-specific lambda
        ints1.push_back(i);
        ints2.push_back(i2);
    };
    SomeClass s(id, storenumbers);
    // ...
    // set up something that eventually calls read_callback with s set as the ptr.
}
ThreadFn is used as the thread function for 30-40 threads.
Is this acceptable? I usually have a few of these thread-specific lambdas that operate on a bunch of thread specific data.
Thank you!
There's no problem here. A data access with a lambda is no different to a data access with a named function, through inline code, a traditional functor, one made with bind, or any other way. As long as that lambda is invoked from only one thread at a time, I don't see any evidence of thread-related problems.
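For intuition, the lambda in the question is roughly equivalent to this hand-written functor holding references to the two thread-local lists (a sketch of what the compiler generates, not an exact reproduction):
#include <list>

struct StoreNumbersFunctor {
    std::list<int>& ints1;
    std::list<int>& ints2;
    void operator()(int i, int i2) const {
        ints1.push_back(i);
        ints2.push_back(i2);
    }
};
So the thread-safety question reduces to who touches ints1 and ints2; since each thread owns its own lists and its own SomeClass, there is nothing shared to protect.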

How to maintain a list of functions in C++/STL?

Before asking my question directly, I'm going to describe the nature of my problem.
I'm coding a 2D simulation using C++/OpenGL with the GLFW library. And I need to manage a lot of threads properly. In GLFW we have to call the function:
thread = glfwCreateThread(ThreadFunc, NULL); (the first parameter is the function the thread will execute, and the second is the argument passed to that function).
And glfwCreateThread has to be called every time (i.e. in each cycle). This way of working doesn't really help me, because it breaks the way I'm building my code: I need to create threads outside the main loop scope. So I'm creating a ThreadManager class with the following prototype:
class ThreadManager {
public:
    ThreadManager();
    void AddThread(void*, void GLFWCALL (*pt2Func)(void*));
    void DeleteThread(void GLFWCALL (*pt2Func)(void*));
    void ExecuteAllThreads();
private:
    vector<void GLFWCALL (*pt2Func)(void*)> list_functions;
    // some attributes
};
So, for example, if I want to add a specific thread I'll just need to call AddThread with the specific parameters and the specific function. The goal is simply to be able to call ExecuteAllThreads(); inside the main loop scope. But for this I need to have something like:
void ExecuteAllThreads() {
    vector<void GLFWCALL (*pt2Func)(void*)>::const_iterator iter_end = list_functions.end();
    for(vector<void GLFWCALL (*pt2Func)(void*)>::const_iterator iter = list_functions.begin();
        iter != iter_end; ++iter) {
        thread = glfwCreateThread(&(iter*), param);
    }
}
And inside AddThread, I'll just have to add the function referenced by pt2Func to the vector list_functions.
Alright, this is the general idea of what I want to do. Is it the right way to go? Do you have a better idea? How do I do this, really? (I mean, the problem is the syntax; I'm not sure how to write this.)
Thank you!
You need to create threads in each simulation cycle? That sounds suspicious. Create your threads once, and reuse them.
Thread creation isn't a cheap operation. You definitely don't want to do that in every iteration step.
If possible, I'd recommend you use Boost.Thread for threads instead, to give you type safety and other handy features. Threading is complicated enough without throwing away type safety and working against a primitive C API.
That said, what you're asking is possible, although it gets messy. First, you need to store the arguments for the functions as well, so your class looks something like this:
class ThreadManager {
public:
    typedef void (GLFWCALL *pt2Func)(void*); // Just a convenience typedef
    typedef std::vector<std::pair<pt2Func, void*> > func_vector;
    ThreadManager();
    void AddThread(void*, pt2Func);
    void DeleteThread(pt2Func);
    void ExecuteAllThreads();
private:
    func_vector list_functions;
};
And then ExecuteAllThreads:
void ExecuteAllThreads() {
    func_vector::const_iterator iter_end = list_functions.end();
    for(func_vector::const_iterator iter = list_functions.begin();
        iter != iter_end; ++iter) {
        thread = glfwCreateThread(iter->first, iter->second);
    }
}
And of course inside AddThread you'd have to add a pair of function pointer and argument to the vector.
Note that Boost.Thread would solve most of this a lot cleaner, since it expects a thread to be a functor (which can hold state, and therefore doesn't need explicit arguments).
Your thread function could be defined something like this:
class MyThread {
public:
    MyThread(/* Pass whatever arguments you want in the constructor, and store them in the object as members */);
    void operator()() {
        // The actual thread function
    }
};
And since the operator() doesn't take any parameters, it becomes a lot simpler to start the thread.
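For example, a sketch of starting and joining such a functor-based thread (assuming Boost.Thread is available and MyThread is copyable):
#include <boost/thread.hpp>

MyThread body(/* constructor arguments */);
boost::thread t(body);   // boost::thread copies the functor and calls operator() on the new thread
// ... do other work ...
t.join();                // wait for the thread to finish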
What about storing them using boost::function?
They can stand in for your specific functions, since they behave like real objects but are in fact simple functors.
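A minimal sketch of that idea, assuming the thread bodies can be adapted to nullary callables (for example with boost::bind binding the argument up front):
#include <vector>
#include <boost/function.hpp>
#include <boost/bind.hpp>

void worker(void* param);   // an existing GLFW-style thread function

std::vector<boost::function<void()> > list_functions;

void addThread(void* param) {
    list_functions.push_back(boost::bind(&worker, param)); // store a nullary callable
}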
Consider Boost Thread and Thread Group
I am not familiar with the threading system you use. So bear with me.
Shouldn't you maintain a list of thread identifiers?
class ThreadManager {
private:
    vector<thread_id_t> mThreads;
    // ...
};
and then in ExecuteAllThreads you'd do:
for_each(mThreads.begin(), mThreads.end(), bind(some_fun, _1));
(using Boost Lambda bind and placeholder arguments) where some_fun is the function you call for all threads.
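A hedged sketch of how that might look end to end, assuming the GLFW 2.x thread API (glfwCreateThread returns a GLFWthread id and glfwWaitThread(id, GLFW_WAIT) blocks until that thread finishes):
#include <algorithm>
#include <vector>
#include <GL/glfw.h>
#include <boost/bind.hpp>

class ThreadManager {
public:
    void AddThread(void* arg, void (GLFWCALL *fn)(void*)) {
        mThreads.push_back(glfwCreateThread(fn, arg));   // start once, remember the id
    }
    void WaitForAll() {
        std::for_each(mThreads.begin(), mThreads.end(),
                      boost::bind(glfwWaitThread, _1, GLFW_WAIT));
    }
private:
    std::vector<GLFWthread> mThreads;
};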
Or is it that you want to call a set of functions for a given thread?

mem_fun fails, pthread and class ptr

pthread_create takes as its start routine a void *(*start_routine)(void* userPtr). I was hoping I could use std::mem_fun to solve my problem, but I can't.
I would like to use the member function void* threadFunc() and have userPtr act as the object (userPtr->threadFunc()). Is there a function similar to std::mem_fun that I can use?
One way is to use a global function that calls your main thread function:
class MyThreadClass {
public:
    void main(); // Your real thread function
};

void* thread_starter(void* arg) {
    reinterpret_cast<MyThreadClass*>(arg)->main();
    return NULL; // pthread start routines must return void*
}
Then, when you want to start the thread:
MyThreadClass *th = new MyThreadClass();
pthread_create(..., ..., &thread_starter, (void*)th);
On the other hand, if you don't really need to use pthreads manually, it might be a good idea to have a look at Boost.Thread, a good C++ thread library. There you get classes for threads, locks, mutexes and so on and can do multi-threading in a much more object-oriented way.
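For illustration, a sketch of the Boost.Thread equivalent, reusing the MyThreadClass::main member from above (no wrapper function is needed):
#include <boost/thread.hpp>
#include <boost/bind.hpp>

MyThreadClass worker;
boost::thread t(boost::bind(&MyThreadClass::main, &worker)); // runs worker.main() on a new thread
// ... do other work ...
t.join();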