I am new to embedding Python in a C++ application, so please forgive me if this has been asked before; I did do my homework by searching the web.
Here is my problem.
If I run the embedded Python code in the application's main thread, everything works fine. But if I run it in a child thread (created by the main thread), some Python imports fail.
I made a small program to demonstrate:
#include <iostream>
#include <boost/python.hpp>
#include <boost/thread.hpp>

using namespace boost::python;

int threadFunc()
{
    // Acquire the GIL before touching any Python state from this thread.
    PyGILState_STATE gstate = PyGILState_Ensure();
    {
        std::cout << "Py_GetProgramName: " << Py_GetProgramName() << std::endl;
        std::cout << "Py_GetPath: " << Py_GetPath() << std::endl;
        std::cout << "Py_GetExecPrefix: " << Py_GetExecPrefix() << std::endl;

        object main_module = boost::python::import( "__main__" );
        object main_namespace = main_module.attr( "__dict__" );
        object objc_import = boost::python::import( "objc" );

        exec( "a = 10\n", main_namespace );
        exec( "print a\n", main_namespace );
    } // boost::python objects are destroyed here, while the GIL is still held

    PyGILState_Release( gstate ); // release the GIL before the thread exits
    return 0;
}
int main()
{
    Py_InitializeEx( 0 );
    PyEval_InitThreads();

    // Uncommenting this makes the program work.
    //object objc_import = boost::python::import ( "objc" );

    PyEval_ReleaseLock();

    boost::thread * theChildThread = new boost::thread( threadFunc );
    while( 1 )
    {
        boost::this_thread::sleep( boost::posix_time::seconds(1) );
    }

    delete theChildThread;
    Py_Finalize();
    return 0;
}
The code fails when I try to import objc in threadFunc:
object objc_import = boost::python::import ( "objc" );
By experimenting, I found that if I do the import once in the main thread before starting the child thread, the whole program works (see the comments in the main function):
// Uncommenting this makes the program work.
//object objc_import = boost::python::import ( "objc" );
I am wondering what is happening there. Is there any way to make the child thread work without importing objc in the main thread first?
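For reference, here is the full initialization pattern I'm describing; treat it as a minimal sketch (using PyEval_SaveThread() instead of the deprecated PyEval_ReleaseLock() is my assumption, and I've only tried this with Python 2):

#include <boost/python.hpp>
#include <boost/thread.hpp>

int threadFunc()
{
    // Re-acquire the GIL in the child thread before touching Python.
    PyGILState_STATE gstate = PyGILState_Ensure();
    {
        // Keep boost::python objects in a scope that closes before the GIL
        // is released, so their destructors run while we still hold it.
        boost::python::object objc_module = boost::python::import( "objc" );
    }
    PyGILState_Release( gstate );
    return 0;
}

int main()
{
    Py_InitializeEx( 0 );
    PyEval_InitThreads();            // the main thread now holds the GIL

    // Importing once here, while the main thread holds the GIL, makes
    // the later import in the child thread succeed.
    boost::python::import( "objc" );

    PyThreadState * state = PyEval_SaveThread(); // release the GIL

    boost::thread worker( threadFunc );
    worker.join();

    PyEval_RestoreThread( state );   // re-acquire the GIL before finalizing
    Py_Finalize();
    return 0;
}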
Thanks,
I am new to TensorFlow as well as to embedding Python code in C++, so I would appreciate any tips/comments on the following weird behaviour:
I have a C++ class pythoninterface with the header file pythoninterface.h:
#include <string>
#include <iostream>

class pythoninterface
{
private:
    const char* file;
    const char* funct;
    const char* filepath;

public:
    pythoninterface();
    ~pythoninterface();
    void CallFunction();
};
The source file pythoninterface.cpp:
#include <Python.h>
#include <string>
#include <sstream>
#include <vector>
#include "pythoninterface.h"

pythoninterface::pythoninterface()
{
    file = "TensorflowIncludePy";
    funct = "myTestFunction";
    filepath = "/path/To/TensorflowIncludePy.py";
}

void pythoninterface::CallFunction()
{
    PyObject *pName, *pModule, *pDict, *pFunc, *pValue = NULL, *presult = NULL;

    // Initialize the Python interpreter.
    Py_Initialize();

    // Prepend the location of the custom Python module to sys.path, so it is
    // found alongside Python's system modules/packages.
    std::stringstream changepath;
    changepath << "import sys; sys.path.insert(0, '" << filepath << "')";
    const std::string tmp = changepath.str();
    // Pass the temporary string directly instead of storing tmp.c_str() in
    // the filepath member -- that pointer would dangle once tmp goes away.
    PyRun_SimpleString( tmp.c_str() );

    // Build the name object.
    pName = PyString_FromString( this->file );
    // Load the module object.
    pModule = PyImport_Import( pName );

    if( pModule != NULL )
    {
        // pDict is a borrowed reference.
        pDict = PyModule_GetDict( pModule );
        // pFunc is also a borrowed reference.
        pFunc = PyDict_GetItemString( pDict, this->funct );

        if( PyCallable_Check( pFunc ) )
        {
            pValue = Py_BuildValue( "()" );  // empty argument tuple
            printf( "pValue is empty!\n" );
            PyErr_Print();
            presult = PyObject_CallObject( pFunc, pValue );
            PyErr_Print();
            // Only read the result inside this branch, where it exists.
            printf( "Result is %ld!\n", PyInt_AsLong( presult ) );
        }
        else
        {
            PyErr_Print();
        }

        // Clean up (Py_XDECREF tolerates NULL if the call never happened).
        Py_XDECREF( presult );
        Py_XDECREF( pValue );
        Py_DECREF( pModule );
        Py_DECREF( pName );
    }
    else
    {
        std::cout << "Python returned a null pointer, no file!" << std::endl;
    }

    // Shut down the Python interpreter.
    Py_Finalize();
}
And the Python file from which the function is imported (TensorflowIncludePy.py):
def myTestFunction():
    print 'I am a function without an input!'
    gettingStartedTF()
    return 42

def gettingStartedTF():
    import tensorflow as tf  # At this point the error occurs!
    hello = tf.constant('Hello, TensorFlow!')
    sess = tf.Session()
    print(sess.run(hello))
    return 42
Finally, in my main function I only create a pythoninterface object p and call p.CallFunction(). The communication between the C++ and Python code works fine, but when the line import tensorflow as tf is reached at runtime, I get a *** stack smashing detected *** error message and the program terminates. Can anyone guess what the problem might be, or has anyone had a similar issue before?
I know there is a C++ TensorFlow API, but I feel more comfortable using TensorFlow from Python, so I thought this might be the perfect solution for me (apparently it is not... :P)
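One workaround I have seen suggested for import-time crashes of extension-heavy packages in embedded interpreters is to load libpython with global symbol visibility before Py_Initialize(), so that the shared objects TensorFlow pulls in can resolve the interpreter's symbols. This is an untested sketch; the library name is an assumption for my setup, and I can't promise it addresses this exact crash:

#include <Python.h>
#include <dlfcn.h>
#include <cstdio>

int main()
{
    // Hypothetical workaround: expose libpython's symbols globally so that
    // extension modules loaded later (numpy, tensorflow, ...) can see them.
    // "libpython2.7.so" is an assumption; adjust it to the interpreter in use.
    if ( dlopen( "libpython2.7.so", RTLD_LAZY | RTLD_GLOBAL ) == NULL )
    {
        std::fprintf( stderr, "dlopen failed: %s\n", dlerror( ) );
    }

    Py_Initialize();
    PyRun_SimpleString( "import tensorflow" );  // hopefully no longer crashes
    Py_Finalize();
    return 0;
}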
I'm using Tcl library version 8.6.4 (compiled with Visual Studio 2015, 64-bit) to interpret some Tcl commands from a C/C++ program.
I noticed that if I create interpreters from different threads, the second one ends up in an infinite loop:
#include "tcl.h"
#include <boost/thread.hpp>
#include <boost/filesystem.hpp>
void runScript()
{
Tcl_Interp* pInterp = Tcl_CreateInterp();
std::string sTclPath = boost::filesystem::current_path().string() + "/../../stg/Debug/lib/tcl";
const char* setvalue = Tcl_SetVar( pInterp, "tcl_library", sTclPath.c_str(), TCL_GLOBAL_ONLY );
assert( setvalue != NULL );
int i = Tcl_Init( pInterp );
assert( i == TCL_OK );
int nTclResult = Tcl_Eval( pInterp, "puts \"Hello\"" );
assert( nTclResult == TCL_OK );
Tcl_DeleteInterp( pInterp );
}
int main( int argc, char* argv[] )
{
Tcl_FindExecutable(NULL);
runScript();
runScript();
boost::thread thrd1( runScript );
thrd1.join(); // works OK
boost::thread thrd2( runScript );
thrd2.join(); // never joins
return 1;
}
Infinite loop is here, within Tcl source code:
void
TclInitNotifier(void)
{
    ThreadSpecificData *tsdPtr;
    Tcl_ThreadId threadId = Tcl_GetCurrentThread();

    Tcl_MutexLock(&listLock);
    for (tsdPtr = firstNotifierPtr; tsdPtr && tsdPtr->threadId != threadId;
            tsdPtr = tsdPtr->nextPtr) {
        /* Empty loop body. */
    }
    // I never exit this loop because, after the first thread was joined,
    // at some point tsdPtr == tsdPtr->nextPtr
Am I doing something wrong? Is there some special function call I'm missing?
Note: TCL_THREADS was not set when I compiled Tcl. However, I feel like I'm doing nothing wrong here. Also, adding
/* Empty loop body. */
if ( tsdPtr != NULL && tsdPtr->nextPtr == tsdPtr )
{
    tsdPtr = NULL;
    break;
}
within the loop apparently fixes the issue. But I'm not very confident in modifying 3rd party library source code...
After reporting a bug to the Tcl team, I was asked to try again with a Tcl library compiled with TCL_THREADS enabled. That fixed the issue.
TCL_THREADS was disabled because I had compiled on Windows using a CMakeLists.txt file I found on the web: it was actually written for Linux and disabled thread support because it could not find pthread on my machine. I finally compiled the Tcl libraries using the scripts provided by the Tcl team: threading is enabled by default and the infinite loop is gone!
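If it helps anyone else: a quick way to confirm at runtime which kind of build you linked against is to query the tcl_platform array. In a threaded build the threaded element is set to 1; treating a missing element as "no thread support" is my assumption about unthreaded builds. A small sketch:

#include "tcl.h"
#include <iostream>

// Prints whether the linked Tcl library was built with thread support.
void checkTclThreading()
{
    Tcl_Interp* pInterp = Tcl_CreateInterp();
    const char* threaded =
        Tcl_GetVar2( pInterp, "tcl_platform", "threaded", TCL_GLOBAL_ONLY );
    std::cout << "Tcl thread support: "
              << ( threaded != NULL ? threaded : "not compiled in" )
              << std::endl;
    Tcl_DeleteInterp( pInterp );
}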
Some code in my application uses the Boost.Interprocess scoped lock with timers. When a mutex is acquired in one thread, a second thread trying to acquire it for a few milliseconds should fail and log something to the screen.
I don't know why, but with Boost 1.50 this doesn't work anymore.
With the code below, I can see that thread #2 doesn't print "ERROR" but is completely stuck.
Am I missing something here?
I am using Linux kernel 2.6.32 with g++.
Could it have something to do with UTC? I read in the Boost docs that the time used by such locks is UTC, and in date_time I am reading right now about local_adjustor and conversion from local time to UTC and vice versa.
AFG
#include <iostream>
#include <unistd.h>  // sleep()
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/interprocess/sync/named_mutex.hpp>
#include <boost/thread.hpp>
#include <boost/bind.hpp>

namespace bi = boost::interprocess;

void lock_test( bi::named_mutex& mt, bool long_sleep )
{
    boost::posix_time::ptime pt =
        boost::posix_time::microsec_clock::local_time()
        + boost::posix_time::milliseconds(100);

    bi::scoped_lock<bi::named_mutex> l( mt, pt );
    if( l.owns() )
    {
        std::cout << "Locked" << std::endl;
    }
    else
    {
        std::cout << "ERROR" << std::endl;
        std::cout.flush();
        return;
    }

    if( long_sleep )
    {
        while( true ) { sleep(1); std::cout << "[]"; std::cout.flush(); }
    }
}

int main()
{
    bi::named_mutex m_mutex( bi::open_or_create, "ciao",
                             bi::permissions( 0666 ) );

    boost::thread t1 = boost::thread( &lock_test,
                                      boost::ref( m_mutex ), true );
    sleep(4);
    boost::thread t2 = boost::thread( &lock_test,
                                      boost::ref( m_mutex ), false );
    while( true ) { sleep(1); }
}
It seems that if I switch from boost::posix_time::microsec_clock::local_time() to
boost::posix_time::microsec_clock::universal_time()
everything works fine.
You should use boost::get_system_time(); there are quite a few examples of it around (for instance, "Usage of boost::unique_lock::timed_lock"). Though I can't find the authoritative source, I use microsec_clock exactly as you do and get similar problems. I've just discovered the bug, and will update when I've tested the fix.
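In other words, build the absolute timeout from the same UTC-based clock that the timed lock compares against internally. A minimal sketch of what I mean (I haven't verified this against 1.50 specifically):

#include <iostream>
#include <boost/interprocess/sync/named_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <boost/thread/thread_time.hpp>   // boost::get_system_time
#include <boost/date_time/posix_time/posix_time.hpp>

namespace bi = boost::interprocess;

void lock_test( bi::named_mutex& mt )
{
    // get_system_time() returns a UTC-based boost::system_time, matching
    // the clock the timed lock uses for its deadline.
    boost::system_time timeout = boost::get_system_time()
                               + boost::posix_time::milliseconds(100);
    bi::scoped_lock<bi::named_mutex> l( mt, timeout );
    std::cout << ( l.owns() ? "Locked" : "ERROR" ) << std::endl;
}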
I have an interesting problem that seems to be unresolved by my research on the internet.
I'm trying to load libraries dynamically in my C++ project with the functions from dlfcn.h. The problem is that when I try to reload the plugins at runtime (because I made a change to one of them), the main program crashes (Segmentation fault (core dumped)) when dlclose() is called.
Here is my example that reproduces the error:
main.cpp:
#include <iostream>
#include <dlfcn.h>
#include <time.h>
#include <unistd.h>  // sleep()
#include "IPlugin.h"

int main()
{
    void * lib_handle;
    char * error;

    while( true )
    {
        std::cout << "Updating the .so" << std::endl;

        lib_handle = dlopen( "./test1.so", RTLD_LAZY );
        if ( ! lib_handle )
        {
            std::cerr << dlerror( ) << std::endl;
            return 1;
        }

        create_t fn_create = ( create_t ) dlsym( lib_handle, "create" );
        if ( ( error = dlerror( ) ) != NULL )
        {
            std::cerr << error << std::endl;
            return 1;
        }

        IPlugin * ik = fn_create( );
        ik->exec( );

        destroy_t fn_destroy = ( destroy_t ) dlsym( lib_handle, "destroy" );
        fn_destroy( ik );

        std::cout << "Waiting 5 seconds before unloading..." << std::endl;
        sleep( 5 );

        dlclose( lib_handle );
    }

    return 0;
}
IPlugin.h:
class IPlugin
{
public:
    IPlugin( ) { }
    virtual ~IPlugin( ) { }
    virtual void exec( ) = 0;
};

typedef IPlugin * ( * create_t )( );
typedef void ( * destroy_t )( IPlugin * );
Test1.h:
#include <iostream>
#include "IPlugin.h"

class Test1 : public IPlugin
{
public:
    Test1( );
    virtual ~Test1( );
    void exec( );
};
Test1.cpp:
#include "Test1.h"
Test1::Test1( ) { }
Test1::~Test1( ) { }
void Test1::exec( )
{
std::cout << "void Test1::exec( )" << std::endl;
}
extern "C"
IPlugin * create( )
{
return new Test1( );
}
extern "C"
void destroy( IPlugin * plugin )
{
if( plugin != NULL )
{
delete plugin;
}
}
To compile:
g++ main.cpp -o main -ldl
g++ -shared -fPIC Test1.cpp -o plugin/test1.so
The problem occurs when, for example, I change something in the Test1::exec method (changing the string to be printed or commenting out the line) and, while the main program sleeps, I copy the new test1.so to the main program's working directory (cp). If I use the move command (mv) instead, no error occurs. What makes the difference between using cp and mv? Is there any way to solve this problem or to do it with cp?
I'm using Fedora 14 with g++ (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4).
Thanks in advance.
The difference between cp and mv that is pertinent to this question is as follows:
cp opens the destination file and writes the new contents into it. It therefore replaces the old contents with the new contents.
mv doesn't touch the contents of the original file. Instead, it makes the directory entry point to the new file.
This turns out to be important. While the application is running, the OS keeps open handles to the executable and the shared objects. When it needs to consult one of these files, it uses the relevant handle to access the file's contents.
If you've used cp, the contents have now been corrupted, so anything can happen (a segfault is a pretty likely outcome).
If you've used mv, the open file handle still refers to the original file, which continues to exist on disk even though there's no longer a directory entry for it.
If you've used mv to replace the shared object, you should be able to dlclose the old one and dlopen the new one. However, this is not something that I've done or would recommend.
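So if the goal is to keep overwriting the plugin while the host is running, the usual trick is to build to a temporary name and rename it into place: rename() swaps the directory entry atomically and never touches the old file's contents. A sketch, reusing the build line from the question:

g++ -shared -fPIC Test1.cpp -o test1.so.new
mv test1.so.new test1.so    # atomic replace; the running program keeps its old inode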
Try this:
extern "C"
void destroy( IPlugin * plugin )
{
if( plugin != NULL && dynamic_cast<Test1*>(plugin))
{
delete static_cast<Test1*>(plugin);
}
}
I've been "playing around with" boost threads today as a learning exercise, and I've got a working example I built quite a few months ago (before I was interrupted and had to drop multi-threading for a while) that's showing unusual behaviour.
When I initially wrote it I was using MingW gcc 3.4.5, and it worked. Now I'm using 4.4.0 and it doesn't - incidentally, I've tried again using 3.4.5 (I kept that version it a separate folder when I installed 4.4.0) and it's still working.
The code is at the end of the question; in summary what it does is start two Counter objects off in two child threads (these objects simply increment a variable then sleep for a bit and repeat ad infinitum - they count), the main thread waits for the user via a cin.get() and then interrupts both threads, waits for them to join, then outputs the result of both counters.
Complied with 3.4.5 it runs as expected.
Complied with 4.4.0 it runs until the user input, then dies with a message like the below - it seems the the interrupt exceptions are killing the entire process?
terminate called after throwing an instance of '
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
boost::thread_interrupted'
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
From what I've read, I think that any (?) uncaught exception that is allowed to propagate out of a child thread will kill the process? But I'm catching the interrupts here, aren't I? At least I seem to be when using 3.4.5.
So, firstly, have I understood how interrupting works?
And do you have any suggestions as to what is happening and how to fix it?
Code:
#include <iostream>
#include <cstdlib>  // EXIT_SUCCESS
#include <boost/thread/thread.hpp>
#include <boost/date_time.hpp>

// Fixes a linker error for Boost threads in 4.4.0 (not needed for 3.4.5).
// Found via Google, so not sure of its validity - but it does fix the link error.
extern "C" void tss_cleanup_implemented() { }

class CCounter
{
private:
    int& numberRef;
    int step;

public:
    CCounter( int& number, int setStep ) : numberRef(number), step(setStep) { }

    void operator()()
    {
        try
        {
            while( true )
            {
                boost::posix_time::milliseconds pauseTime(50);
                numberRef += step;
                boost::this_thread::sleep(pauseTime);
            }
        }
        catch( boost::thread_interrupted const& e )
        {
            return;
        }
    }
};

int main( int argc, char *argv[] )
{
    try
    {
        std::cout << "Starting counters in secondary threads.\n";

        int number0 = 0,
            number1 = 0;

        CCounter counter0(number0, 1);
        CCounter counter1(number1, -1);

        boost::thread threadObj0(counter0);
        boost::thread threadObj1(counter1);

        std::cout << "Press enter to stop the counters:\n";
        std::cin.get();

        threadObj0.interrupt();
        threadObj1.interrupt();

        threadObj0.join();
        threadObj1.join();

        std::cout << "Counters stopped. Values:\n"
                  << number0 << '\n'
                  << number1 << '\n';
    }
    catch( boost::thread_interrupted& e )
    {
        std::cout << "\nThread Interrupted Exception caught.\n";
    }
    catch( std::exception& e )
    {
        std::cout << "\nstd::exception thrown.\n";
    }
    catch(...)
    {
        std::cout << "\nUnexpected exception thrown.\n";
    }

    return EXIT_SUCCESS;
}
Solved.
It turns out that adding the compiler flag -static-libgcc removes the problem with 4.4.0 (and has no apparent effect with 3.4.5); or at least, in this case the program returns the expected results.
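For anyone hitting the same thing, the fix amounts to adding the flag to the link line, something like the following (the Boost library name is from my setup and may well differ on yours):

g++ -static-libgcc main.cpp -o counters -lboost_thread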