V8 code in another pthread causes a segfault - c++

Why does this code cause a segfault?
int jack_process(jack_nframes_t nframes, void *arg)
{
    Local<Value> test = Local<Value>::New( Number::New(2) );
    return 0;
}
jack_process runs in another pthread. How can I do this the right way? How can I run V8 code in another pthread?
Note that this code does not segfault:
int jack_process(jack_nframes_t nframes, void *arg)
{
    Local<Value> test;
    return 0;
}
Thanks.

JavaScript and Node are single threaded. By running that code in another thread, you are essentially trying to run two threads of JS at the same time.
V8 allows you to run two JS instances on threads, but they need to be totally independent Isolate instances.
Generally, C++ code running on another thread should use only standard C++ classes and variables. It can then signal the main thread via libuv's uv_async_send, and the async handler on the main thread converts the values into V8 objects for JS processing.

Related

boost test unit can not call mpi function

I have looked thoroughly around but could not find any reference to this problem.
I wrote a C++ program that I am testing with Boost.Test. The serial version works fine and the unit test is working.
Now I have made the program parallel via a function doing embarrassingly parallel work with MPI. If I write my own test calling the parallel function -- let's call it parafunction -- it works well, and MPI runs all right.
Compilation is done with mpic++ and I use mpiexec to run the program.
If I call parafunction in a Boost test case, however, MPI goes all wrong: the tests are launched multiple times and the process crashes when MPI::Init is called several times.
Here is an example of the error I get :
The MPI_Comm_size() function was called after MPI_FINALIZE was invoked.
This is disallowed by the MPI standard.
Your MPI job will now abort.
My test case is in a test_unit, automatically handled by a master_test_suite. As I said, without the parallelisation it works perfectly well.
Parafunction calls MPI::Init and MPI::Finalize, and no other function or file is supposed to do any MPI-related stuff.
Has anyone ever encountered a similar problem before ?
My test runs are quite long, so I could really use the parallel version of my program!
Thanks for your help.
A function that both initialises and then finalises MPI can only be called once, because MPI can only be initialised once during the lifetime of the program and can only be finalised once. To prevent multiple initialisation calls, put the call to MPI_Init() or MPI_Init_thread() in a conditional:
int already_initialised;

MPI_Initialized(&already_initialised);
if (!already_initialised)
    MPI_Init(NULL, NULL);
As for the finalisation, it should be moved outside of your function, probably in an atexit(3) handler if you don't want to pollute the outer scope with MPI calls. For example:
void finalise_mpi(void)
{
    int already_finalised;

    MPI_Finalized(&already_finalised);
    if (!already_finalised)
        MPI_Finalize();
}
...
atexit(finalise_mpi);
...
The atexit() call could be part of the initialisation code, e.g.:
int already_initialised;

MPI_Initialized(&already_initialised);
if (!already_initialised)
{
    MPI_Init(NULL, NULL);
    atexit(finalise_mpi);
}
This would not install the atexit(3) handler if MPI was already initialised. The basic idea is that if MPI was initialised on entry to the function, then it would mean that MPI_Init() was called in the outer scope and one would normally expect that MPI_Finalize() is also called there.
If I were you, I would move MPI initialisation and finalisation out of the parallel processing function. The proper calling sequence would be to initialise MPI, run the tests, then finalise MPI.
I've used the C bindings in the above text as the C++ bindings were deprecated in MPI-2.2 and then deleted in MPI-3.0.

Asynchronous call to MATLAB's engEvalString

Edit 2: Problem solved, see my answer.
I am writing a C++ program that communicates with MATLAB through the Engine API. The C++ application is running on Windows 7, and interacting with MATLAB 2012b (32-bit).
I would like to make a time-consuming call to the MATLAB engine, using engEvalString, but cannot figure out how to make the call asynchronous. No callback is necessary (but would be nice if possible).
The following is a minimum example of what doesn't work.
#include <boost/thread.hpp>

extern "C" {
#include <engine.h>
}

int main()
{
    Engine* eng = engOpen("");
    engEvalString(eng, "x=10");
    boost::thread asyncEval(&engEvalString, eng, "y=5");
    boost::this_thread::sleep(boost::posix_time::seconds(10));
    return 0;
}
After running this program, I switch to the MATLAB engine window and find:
» x
x =
10
» y
Undefined function or variable 'y'.
So it seems that the second call, which should set y=5, is never processed by the MATLAB engine.
The thread definitely runs, you can check this by moving the engEvalString call into a local function and launching this as the thread instead.
I'm really stumped here, and would appreciate any suggestions!
EDIT: As Shafik pointed out in his answer, the engine is not thread-safe. I don't think this should be an issue for my use case, as the calls I need to make are ~5 seconds apart, for a calculation that takes 2 seconds. The reason I cannot wait for this calculation, is that the C++ application is a "medium-hard"-real-time robot controller which should send commands at 50Hz. If this rate drops below 30Hz, the robot will assume network issues and close the connection.
So, I figured out the problem, but would love it if someone could explain why!
The following works:
#include <boost/thread.hpp>

extern "C" {
#include <engine.h>
}

void asyncEvalString()
{
    Engine* eng = engOpen("");
    engEvalString(eng, "y=5");
}

int main()
{
    Engine* eng = engOpen("");
    engEvalString(eng, "x=10");
    boost::thread asyncEvalString(&asyncEvalString);
    boost::this_thread::sleep(boost::posix_time::seconds(1));
    engEvalString(eng, "z=15");
    return 0;
}
As you can see, you need to obtain a new engine pointer in the new thread. The pointer returned by engOpen in asyncEvalString is different from the one returned in main, yet both pointers continue to work without problems:
» x
x =
10
» y
y =
5
» z
z =
15
Finally, to address thread safety, a mutex could be set up around the engEvalString calls to ensure only one thread uses the engine at any one time. The asyncEvalString function could also be modified to trigger a callback once engEvalString has completed.
I would however appreciate someone explaining why the above solution works. Threads share heap allocated memory of the process, and can access memory on other threads' stacks (?), so I fail to understand why the first Engine* was suddenly invalid when used in a separate thread.
According to this MathWorks document, the engine is not thread-safe, so I doubt this will work:
http://www.mathworks.com/help/matlab/matlab_external/using-matlab-engine.html
and according to this document, engOpen forks a new process, which would probably explain the rest of the behavior you are seeing:
http://www.mathworks.com/help/matlab/apiref/engopen.html
Also see this on threads and forks; think twice before mixing them:
http://www.linuxprogrammingblog.com/threads-and-fork-think-twice-before-using-them

c++ simple start a function with its own thread

I once had a very simple one- or two-line snippet that would start a function on its own thread and keep it running until the application closed (a C++ console app). I lost the project it was in, and I remember it was hard to find; I can't find it online now. Most examples cover complicated multithreading situations, but I just need to run this one function on its own thread. Hopefully someone knows what I'm talking about, or a similar solution.
eg.
start void abc in its own thread, no parameters
An example using C++11 thread support:
#include <thread>

void abc(); // function declaration

int main()
{
    std::thread abcThread(abc); // starts abc() on a separate thread
    // ...
    abcThread.join(); // waits until abcThread is done
}
If you have no C++11 support, the same is possible using boost::thread, just by replacing std::thread by boost::thread.

c++ threads - parallel processing

I was wondering how to execute two processes in a dual-core processor in c++.
I know threads (or multi-threading) is not a built-in feature of c++.
There is threading support in Qt, but I did not understand anything from their reference. :(
So, does anyone know a simple way for a beginner to do it. Cross-platform support (like Qt) would be very helpful since I am on Linux.
Try Multithreading in C++0x part 1: Starting Threads as a 101. If your compiler does not have C++0x support, then stay with Boost.Thread.
Take a look at Boost.Thread. This is cross-platform and a very good library to use in your C++ applications.
What specifically would you like to know?
The POSIX thread (pthreads) library is probably your best bet if you just need a simple threading library, it has implementations both on Windows and Linux.
A guide can be found e.g. here. A Win32 implementation of pthreads can be downloaded here.
Edit: Didn't see you were on Linux. In that case I'm not 100% sure but I think the libraries are probably already bundled in with your GCC installation.
I'd recommend using the Boost libraries Boost.Thread instead. This will wrap platform specifics of Win32 and Posix, and give you a solid set of threading and synchronization objects. It's also in very heavy use, so finding help on any issues you encounter on SO and other sites is easy.
You can search for a free PDF book "C++-GUI-Programming-with-Qt-4-1st-ed.zip" and read Chapter 18 about Multi-threading in Qt.
Concurrent programming features supported by Qt includes (not limited to) the following:
Mutex
Read Write Lock
Semaphore
Wait Condition
Thread Specific Storage
However, be aware of the following trade-offs with Qt:
Performance penalties vs. native threading libraries. POSIX threads (pthreads) have been native to Linux since kernel 2.4, and Qt may not substitute for <process.h> in the Win32 API in all situations.
Inter-thread communication in Qt is implemented with SIGNAL and SLOT constructs. These are NOT part of the C++ language; they are implemented as macros and require Qt's own code generator (moc) to be fully compiled.
If you can live with the above limitations, just follow these recipes for using QThread:
#include <QtCore>
Derive your own class from QThread and reimplement the run() method (returning void) to contain the instructions to be executed.
Instantiate your class and call start() to kick off a new thread.
Sample code:
#include <QtCore>

class MyThread : public QThread {
public:
    void run() {
        // do something
    }
};

int main(int argc, char** argv) {
    MyThread t1, t2;
    t1.start(); // default implementation from QThread::start() is fine
    t2.start(); // another thread
    t1.wait();  // wait for thread to finish
    t2.wait();
    return 0;
}
As an important note, since C++11 concurrent threading is available in the standard library:
#include <future>
#include <string>

class Example
{
public:
    auto DoStuff() -> std::string
    {
        return "Doing Stuff";
    }

    auto DoStuff2() -> std::string
    {
        return "Doing Stuff 2";
    }
};

int main()
{
    Example EO;

    std::string (Example::*func_pointer)() = &Example::DoStuff;
    std::future<std::string> thread_one =
        std::async(std::launch::async, func_pointer, &EO); // launches upon declaration

    std::string (Example::*func_pointer_2)() = &Example::DoStuff2;
    std::future<std::string> thread_two =
        std::async(std::launch::deferred, func_pointer_2, &EO);

    thread_two.get(); // deferred: launches upon calling get()
}
Both std::async (std::launch::async, std::launch::deferred) and std::thread are fully compatible with Qt, and in some cases may be better at working in different OS environments.
For parallel processing, see this.

How to wrap multithreaded C++ library using Python C/API?

This is a somewhat long question, but I hope I can express it clearly.
I am trying to wrap a C++ library using the Python/C API. The main library, say, mylib, has its own object system (it is something like an interpreter for another language) and uniquely identifies each object in its environment by an Id. It creates multiple threads in its init() function and does different things on different threads (say, creating objects on one thread and interpreting commands on another thread).
Now I am trying to wrap it in two levels:
I created a Dummy class with the Id of an object in mylib. The Dummy constructor actually calls a function in mylib to create a new object and store its Id. Other methods in Dummy class similarly call equivalent functions in mylib. This does not use Python/C API.
I created mylibmodule.cpp, which uses the Python/C API to provide the functions that will be called from the Python interpreter.
I call the init() function of mylib in PyMODINIT_FUNC init_mylib().
I code functions like :
static PyObject* py_new_Dummy(PyObject* self, PyObject* args) {
    // ... process arguments
    return reinterpret_cast<PyObject*>(new Dummy);
}
Note that the Dummy constructor does call functions in mylib that are executed on threads created by using pthreads.
I compile this into _mylib.so and I have a mylib.py:
import _mylib

class MyClass(object):
    def __init__(self, *args):
        self.__ptr = _mylib.py_new_Dummy()
Now to the actual problem: I can import mylib in the Python interpreter, but as soon as I try:
a = MyClass(some_args)
I get a segmentation fault. A gdb backtrace shows
Program received signal SIGSEGV, Segmentation fault.
__pthread_mutex_lock (mutex=0x0) at pthread_mutex_lock.c:50
Even funnier is that if I disable spawning multiple threads in the mylib code (still linked with pthreads), I can create MyClass instances, but I get a segmentation violation at exit from the Python interpreter.
The "Thin Ice" section in the Python documentation (http://docs.python.org/extending/) did not enlighten me. I am wondering if I should use PyGILState_Ensure and PyGILState_Release around all Python C/API calls in mylibmodule.cpp. Or should it be Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS?
Can anybody help? Is there any definitive documentation on how exactly Python plays with pthreads?
From your description it doesn't really sound like a threading issue at all: you claim you define the Dummy class without using the Python API, but that would mean Dummy instances are not PyObjects, so the reinterpret_cast will do the wrong thing. You can't create PyObjects by just instantiating a C++ class; you need to play along with Python's object system and create a proper PyType struct and a PyObject struct and properly initialize both. You also need to make sure your refcounts are correct.
Once you have that sorted, the main thing to remember about threads is that any call that touches Python objects or that uses any of the Python API (except the functions to grab the GIL) must have the GIL acquired. If any of the threads in your C++ library try to call back to Python code or touch Python objects, the access needs to be wrapped in PyGILState_Ensure/PyGILState_Release.
Thank you Thomas for pointing out the red herring. The problem was in the initialization of the threads in the C++ side.
And yes, it did not need any GIL manipulation, as none of the additional C++ threads were accessing the Python C API.