Python Coverage for C++ PyImport - c++

Situation:
I'm attempting to get coverage reports on all Python code in my current project. I've used Coverage.py with great success for the most part. Currently I'm taking advantage of the sitecustomize.py process for everything that's started from the command line, and it works great.
Issue:
I can't get Python modules that are run from C++ via PyImport_Import()-type calls to actually trace and output coverage data.
Example:
[test.cpp]
#include <stdio.h>
#include <iostream>
#include <Python.h>

int main()
{
    Py_Initialize();

    PyObject* sysPath = PySys_GetObject("path");
    PyList_Append(sysPath, PyString_FromString("."));

    // Load the module
    PyObject *pName = PyString_FromString("test_mod");
    PyObject *pModule = PyImport_Import(pName);

    if (pModule != NULL) {
        std::cout << "Python module found\n";

        // Load all module level attributes as a dictionary
        PyObject *pDict = PyModule_GetDict(pModule);
        PyObject *pFunc = PyObject_GetAttrString(pModule, "getInteger");

        if (pFunc)
        {
            if (PyCallable_Check(pFunc))
            {
                PyObject *pValue = PyObject_CallObject(pFunc, NULL);
                std::cout << PyLong_AsLong(pValue) << std::endl;
            }
            else
            {
                printf("ERROR: function getInteger()\n");
            }
        }
        else
        {
            printf("ERROR: pFunc is NULL\n");
        }
    }
    else
        std::cout << "Python Module not found\n";

    return 0;
}
[test_mod.py]
#!/bin/python

def getInteger():
    print('Python function getInteger() called')
    c = 100*50/30
    return c

print('Randomness')
Output:
If I manually run test_mod.py, coverage data is output as expected. However, if I run the compiled test.cpp binary, no coverage data is produced. I know sitecustomize.py is still being hit, as I added some debugging to make sure I wasn't going insane. I can also see in the coverage debug log that it does indeed want to trace the module.
[cov.log]
New process: executable: /usr/bin/python
New process: cmd: ???
New process: parent pid: 69073
-- config ----------------------------------------------------
_include: None
_omit: None
attempted_config_files: /tmp/.coveragerc
branch: True
concurrency: thread
multiprocessing
config_files: /tmp/.coveragerc
cover_pylib: False
data_file: /tmp/python_data/.coverage
debug: process
trace
sys
config
callers
dataop
dataio
disable_warnings: -none-
exclude_list: #\s*(pragma|PRAGMA)[:\s]?\s*(no|NO)\s*(cover|COVER)
extra_css: None
fail_under: 0.0
html_dir: htmlcov
html_title: Coverage report
ignore_errors: False
note: None
parallel: True
partial_always_list: while (True|1|False|0):
if (True|1|False|0):
partial_list: #\s*(pragma|PRAGMA)[:\s]?\s*(no|NO)\s*(branch|BRANCH)
paths: {'source': ['/tmp/python_source', '/opt/test']}
plugin_options: {}
plugins: -none-
precision: 0
report_include: None
report_omit: None
run_include: None
run_omit: None
show_missing: False
skip_covered: False
source: /opt/test/
timid: False
xml_output: coverage.xml
xml_package_depth: 99
-- sys -------------------------------------------------------
version: 4.5.4
coverage: /usr/lib64/python2.7/site-packages/coverage/__init__.pyc
cover_paths: /usr/lib64/python2.7/site-packages/coverage
pylib_paths: /usr/lib64/python2.7
tracer: PyTracer
plugins.file_tracers: -none-
plugins.configurers: -none-
config_files: /tmp/.coveragerc
configs_read: /tmp/.coveragerc
data_path: /tmp/python_data/.coverage
python: 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
platform: Linux-3.10.0-1062.el7.x86_64-x86_64-with-redhat-7.7-Maipo
implementation: CPython
executable: /usr/bin/python
cwd: /opt/test
path: /usr/lib64/python27.zip
/usr/lib64/python2.7
/usr/lib64/python2.7/plat-linux2
/usr/lib64/python2.7/lib-tk
/usr/lib64/python2.7/lib-old
/usr/lib64/python2.7/lib-dynload
/usr/lib64/python2.7/site-packages
environment: COVERAGE_DEBUG = process,trace,sys,config,callers,dataop,dataio
COVERAGE_DEBUG_FILE = /tmp/cov.log
COVERAGE_PROCESS_START = /tmp/.coveragerc
command_line: ???
source_match: /opt/test
source_pkgs_match: -none-
include_match: -none-
omit_match: -none-
cover_match: -none-
pylib_match: -none-
-- end -------------------------------------------------------
<module> : /usr/lib64/python2.7/site.py #556
main : /usr/lib64/python2.7/site.py #539
addsitepackages : /usr/lib64/python2.7/site.py #317
addsitedir : /usr/lib64/python2.7/site.py #190
addpackage : /usr/lib64/python2.7/site.py #152
<module> : <string> #1
process_startup : /usr/lib64/python2.7/site-packages/coverage/control.py #1289
start : /usr/lib64/python2.7/site-packages/coverage/control.py #690
_init : /usr/lib64/python2.7/site-packages/coverage/control.py #362
_write_startup_debug : /usr/lib64/python2.7/site-packages/coverage/control.py #382
write_formatted_info : /usr/lib64/python2.7/site-packages/coverage/debug.py #120
Not tracing '/usr/lib64/python2.7/threading.py': falls outside the --source trees
<module> : /usr/lib64/python2.7/site.py #556
main : /usr/lib64/python2.7/site.py #539
addsitepackages : /usr/lib64/python2.7/site.py #317
addsitedir : /usr/lib64/python2.7/site.py #190
addpackage : /usr/lib64/python2.7/site.py #152
<module> : <string> #1
process_startup : /usr/lib64/python2.7/site-packages/coverage/control.py #1289
start : /usr/lib64/python2.7/site-packages/coverage/control.py #701
start : /usr/lib64/python2.7/site-packages/coverage/collector.py #318
settrace : /usr/lib64/python2.7/threading.py #99
_trace : /usr/lib64/python2.7/site-packages/coverage/pytracer.py #111
_should_trace : /usr/lib64/python2.7/site-packages/coverage/control.py #593
[... Not tracing a bunch of common python code ...]
Tracing './test_mod.py'
<module> : ./test_mod.py #3
_trace : /usr/lib64/python2.7/site-packages/coverage/pytracer.py #111
_should_trace : /usr/lib64/python2.7/site-packages/coverage/control.py #593

I reproduced the issue using your code; you only forgot to call Py_Finalize(). As a result, the data are collected but the report is never generated: without Py_Finalize(), Python's exit handlers (which coverage relies on to save its data when started via process_startup) never run.
It works with the following piece of code:
#include <stdio.h>
#include <iostream>
#include <Python.h>

int main()
{
    Py_Initialize();
    PyEval_InitThreads();

    PyObject* sysPath = PySys_GetObject("path");
    PyList_Append(sysPath, PyString_FromString("."));

    // Load the module
    PyObject *pName = PyString_FromString("test_mod");
    PyObject *pModule = PyImport_Import(pName);

    if (pModule != NULL) {
        std::cout << "Python module found\n";

        // Load all module level attributes as a dictionary
        PyObject *pDict = PyModule_GetDict(pModule);
        PyObject *pFunc = PyObject_GetAttrString(pModule, "getInteger");

        if (pFunc)
        {
            if (PyCallable_Check(pFunc))
            {
                PyObject *pValue = PyObject_CallObject(pFunc, NULL);
                std::cout << PyLong_AsLong(pValue) << std::endl;
            }
            else
            {
                printf("ERROR: function getInteger()\n");
            }
        }
        else
        {
            printf("ERROR: pFunc is NULL\n");
        }
    }
    else
        std::cout << "Python Module not found\n";

    Py_Finalize();
    return 0;
}

PyObject *PySys_GetObject(char *name) returns a borrowed reference. Isn't it the case that the reference count should be incremented while you use it? What about:
// ...
PyObject* sysPath = PySys_GetObject("path");
Py_INCREF(sysPath);
PyList_Append(sysPath, PyString_FromString("."));
Py_DECREF(sysPath);
// sysPath = NULL;
// ...
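A side note from me (not part of the original question): PyString_FromString(".") returns a new reference, and PyList_Append() does not steal it, so the temporary string in the snippets above is leaked unless it is released. A sketch of how I would handle both references, assuming Python 2.7 as in the question:

PyObject* sysPath = PySys_GetObject("path");   // borrowed reference
if (sysPath != NULL) {
    Py_INCREF(sysPath);                        // keep it alive while we use it
    PyObject* cwd = PyString_FromString(".");  // new reference
    if (cwd != NULL) {
        PyList_Append(sysPath, cwd);           // PyList_Append does not steal the reference
        Py_DECREF(cwd);                        // release the temporary string
    }
    Py_DECREF(sysPath);                        // balance the Py_INCREF above
}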

I'm only just starting with the Python/C API myself, but my understanding is that importing a module doesn't automatically add it to your main module; you need to do that separately. I'm not sure if this will help with your issue, but the approach that's worked for me (minus the error checking) has been as follows:
// Get the main module
PyObject* mainModule = PyImport_AddModule("__main__");

// Import the module to be added
PyObject* moduleName = PyUnicode_DecodeFSDefault("moduleName");
PyObject* module = PyImport_Import(moduleName);

// Add the imported module as an attribute of the main module
PyObject_SetAttrString(mainModule, "moduleName", module);
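For completeness, here is a sketch of the same idea with the error checking put back in (my own addition; "moduleName" is just a placeholder module name):

PyObject* mainModule = PyImport_AddModule("__main__");      // borrowed reference, NULL on failure
PyObject* moduleName = PyUnicode_DecodeFSDefault("moduleName");
PyObject* module = (moduleName != NULL) ? PyImport_Import(moduleName) : NULL;
Py_XDECREF(moduleName);

if (mainModule == NULL || module == NULL) {
    PyErr_Print();                                          // report what went wrong
} else if (PyObject_SetAttrString(mainModule, "moduleName", module) != 0) {
    PyErr_Print();                                          // report the failed attribute assignment
}
Py_XDECREF(module);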

Normally, when importing a module, Python tries to find the module file next to the importing module (the module that contains the import statement). Python then tries the directories in “sys.path”. The current working directory is usually not considered. In our case, the import is performed via the API, so there is no importing module in whose directory Python could search for “test_mod.py”. The plug-in is also not on “sys.path”. One way of enabling Python to find the plug-in is to add the current working directory to the module search path by doing the equivalent of “sys.path.append(‘.’)” via the API.
Py_Initialize();
PyObject* sysPath = PySys_GetObject((char*)"path");
PyObject* programName = PyString_FromString(<DIRECTORY>.c_str());
PyList_Append(sysPath, programName);
Py_DECREF(programName);
If you are using Python 3, change PyString_FromString to PyUnicode_FromString.
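Under Python 3, the same snippet would look roughly like this (my adaptation; <DIRECTORY> is still the directory that holds test_mod.py, as in the original):

Py_Initialize();
PyObject* sysPath = PySys_GetObject("path");                        // borrowed reference
PyObject* programName = PyUnicode_FromString(<DIRECTORY>.c_str());  // Python 3 replacement for PyString_FromString
PyList_Append(sysPath, programName);
Py_DECREF(programName);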
Sources :
https://realmike.org/blog/2012/07/08/embedding-python-tutorial-part-1/
Python Embedding: PyImport_Import not from the current directory

Related

Fails to load Python module with Python 3

#include <Python.h>
#include <cstdio>
#include <cstring>
#include <fstream>
#include <iostream>
#include <string>
#include <filesystem>
#include <sys/types.h>
#include <dirent.h>

static const char * sPythonCode =
    "class Test :\n"
    "    def __init__(self) : \n"
    "        self.Disc_ = 0. \n"
    "    def getset(self) : \n"
    "        self.Disc_ = 7. \n"
    "        return self.Disc_ \n";

std::string writeFile()
{
    static int iFile = 0;
    std::string sFileName(std::string("test") + std::to_string(iFile));
    std::ofstream out("py/" + sFileName + ".py");
    out << sPythonCode;
    out.flush();
    out.close();
    iFile++;
    return sFileName;
}

static bool bPythonOpen = false;
#define PYTHONPATHLEN 501

static void _PyInit()
{
    if (!Py_IsInitialized())
    {
        Py_InitializeEx(0);
    }
}

void openPython(void)
{
    if (!bPythonOpen)
    {
        const size_t szBufferN = 1000;
        char acLoadPath[szBufferN];
        const char *pypath = "./py";
        _PyInit();
        PyRun_SimpleString("import sys");
        PyRun_SimpleString("print('python (%d.%d.%d) initialized' % (sys.version_info.major, sys.version_info.minor, sys.version_info.micro))");
        PyRun_SimpleString("print('--------------------------')");
        snprintf(acLoadPath, szBufferN, "sys.path.append('%s')", pypath);
        PyRun_SimpleString(acLoadPath);
        bPythonOpen = true;
    }
}

PyObject *loadPythonModule(const char *acModule)
{
    PyObject *pyModule = NULL;
    if (bPythonOpen && acModule && strcmp(acModule, ""))
    {
        printf("%s\n", acModule);
        pyModule = PyImport_ImportModule(acModule);
        if (!pyModule)
        {
            PyErr_Print();
        }
    }
    return pyModule;
}

void loadPython()
{
    std::string sFileName = writeFile();
    openPython();
    //sleep(1);
    PyObject *pPythonModule = loadPythonModule(sFileName.c_str());
    if (pPythonModule)
        PyDict_DelItemString(PyImport_GetModuleDict(), PyModule_GetName((PyObject *)pPythonModule));
}

int main(int argc, char **argv)
{
    for (int i = 0; i < 10; i++)
    {
        loadPython();
    }
}
My working env:
gcc version 8.3.1 20190311 (Red Hat 8.3.1-3) (GCC)
Red Hat Enterprise Linux Server release 7.6 (Maipo)
The problem occurs with Python 3.6.10 / 3.8.3.
Command to compile:
g++ pythontest.cpp -I/opt/python/python3.6.10/include/python3.6m -L/opt/python/python3.6.10/lib -lpython3.6m
create py directory:
mkdir py
When I run this code I get random errors on different test files that I load.
Example of output:
python (3.6.10) initialized
--------------------------
test0
test1
test2
test3
ModuleNotFoundError: No module named 'test3'
test4
test5
ModuleNotFoundError: No module named 'test5'
test6
test7
ModuleNotFoundError: No module named 'test7'
test8
test9
ModuleNotFoundError: No module named 'test9'
Good to know:
If I uncomment the line with the sleep, it works fine.
If I remove the iFile++, it also works, since the code then reuses an already created file.
If I relaunch a second time without doing rm -rf on the py directory, it also works.
If I erase the file after each run in the loadPython function and remove the iFile++, it also works.
If I launch the executable under strace, I don't see the problem.
For the moment it seems that the Python loader does not see the file on disk; however, when a failure occurs, listing the directory contents with dirent does show the testX.py file.
Please note that we reproduce the error on different Linux servers and even on Windows (so it is not a hardware problem); with Python 2.7.x it works perfectly well.
You should call __import__('importlib').invalidate_caches() each time you modify the module folders, to let CPython know it must read the directories again.
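For what it's worth, a minimal sketch of how that could be done from the C++ side of the example above (my own illustration, placed after writeFile() and before the import):

// After writing the new py/testN.py file and before PyImport_ImportModule():
// tell the import system to rescan its cached directory listings.
PyRun_SimpleString("import importlib; importlib.invalidate_caches()");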

Torch Vision C++ interface error "Unknown builtin op: torchvision::nms"

I'm trying to run a FasterRCNN model scripted with jit.script in TorchScript.
I installed CUDA 10.1, a compatible cuDNN, LibTorch (C++) 1.7.1 and TorchVision 0.8.2.
I followed the instructions for both TorchScript and torchvision, and I have the following:
--- CMakeLists.txt ---
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(load_and_run_model_proj)
list(APPEND CMAKE_PREFIX_PATH "/home/fstrati/libtorch_shared_cuda_10.1/libtorch")
list(APPEND CMAKE_PREFIX_PATH "/opt/vision_0.8.2")
find_package(Torch REQUIRED)
find_package(TorchVision REQUIRED)
add_executable(load_and_run_model src/load_and_run_model.cpp)
# target_link_libraries(load_and_run_model "${TORCH_LIBRARIES}")
target_link_libraries(load_and_run_model PUBLIC TorchVision::TorchVision)
set_property(TARGET load_and_run_model PROPERTY CXX_STANDARD 14)
--- CMakeLists.txt ---
and
--- src/load_and_run_model.cpp ---
#include <torch/script.h> // One-stop header.
#include <torchvision/vision.h>
#include <torchvision/nms.h>

#include <iostream>
#include <memory>

int main(int argc, const char* argv[])
{
    if (argc != 2)
    {
        std::cerr << "usage: example-app <path-to-exported-script-module>\n";
        return -1;
    }

    torch::jit::script::Module module;
    try
    {
        // Deserialize the ScriptModule from a file using torch::jit::load().
        module = torch::jit::load(argv[1]);
    }
    catch (const c10::Error& e)
    {
        std::cerr << e.what() << std::endl;
        std::cerr << "error loading the model\n";
        return -1;
    }

    std::cout << "ok\n";
    return 0;
}
--- src/load_and_run_model.cpp ---
It compiles and links fine. However, when I try to run it with the TorchScript FasterRCNN module created with jit.script, I get the following error:
terminate called after throwing an instance of 'torch::jit::ErrorReport'
what():
Unknown builtin op: torchvision::nms.
Could not find any similar ops to torchvision::nms. This op may not exist or may not be currently supported in TorchScript.
:
File "C:\Users\andre\anaconda3\envs\pytorch\lib\site-packages\torchvision\ops\boxes.py", line 42
"""
_assert_has_ops()
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
Serialized File "code/__torch__/torchvision/ops/boxes.py", line 93
_42 = __torch__.torchvision.extension._assert_has_ops
_43 = _42()
_44 = ops.torchvision.nms(boxes, scores, iou_threshold)
~~~~~~~~~~~~~~~~~~~ <--- HERE
return _44
'nms' is being compiled since it was called from 'batched_nms'
File "C:\Users\andre\anaconda3\envs\pytorch\lib\site-packages\torchvision\ops\boxes.py", line 88
offsets = idxs.to(boxes) * (max_coordinate + torch.tensor(1).to(boxes))
boxes_for_nms = boxes + offsets[:, None]
keep = nms(boxes_for_nms, scores, iou_threshold)
~~~ <--- HERE
return keep
Serialized File "code/__torch__/torchvision/ops/boxes.py", line 50
_18 = torch.slice(offsets, 0, 0, 9223372036854775807, 1)
boxes_for_nms = torch.add(boxes, torch.unsqueeze(_18, 1), alpha=1)
keep = __torch__.torchvision.ops.boxes.nms(boxes_for_nms, scores, iou_threshold, )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_11 = keep
return _11
'batched_nms' is being compiled since it was called from 'RegionProposalNetwork.filter_proposals'
Serialized File "code/__torch__/torchvision/models/detection/rpn.py", line 64
_11 = __torch__.torchvision.ops.boxes.clip_boxes_to_image
_12 = __torch__.torchvision.ops.boxes.remove_small_boxes
_13 = __torch__.torchvision.ops.boxes.batched_nms
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
num_images = (torch.size(proposals))[0]
device = ops.prim.device(proposals)
'RegionProposalNetwork.filter_proposals' is being compiled since it was called from 'RegionProposalNetwork.forward'
File "C:\Users\andre\anaconda3\envs\pytorch\lib\site-packages\torchvision\models\detection\rpn.py", line 344
proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
proposals = proposals.view(num_images, -1, 4)
boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
~~~~~~~~~~~~~~~~~~~~~ <--- HERE
losses = {}
Serialized File "code/__torch__/torchvision/models/detection/rpn.py", line 37
proposals = (self.box_coder).decode(torch.detach(pred_bbox_deltas0), anchors, )
proposals0 = torch.view(proposals, [num_images, -1, 4])
_8 = (self).filter_proposals(proposals0, objectness0, images.image_sizes, num_anchors_per_level, )
~~~~~~~~~~~~~~~~~~~~~ <--- HERE
boxes, scores, = _8
losses = annotate(Dict[str, Tensor], {})
Any ideas or suggestions on how to cure this error? It seems like the operator torchvision::nms is not registered.
By the way, the master branch of torchvision does not compile with CUDA for me, so I am reporting the error for the v0.8.2 tag of torchvision.
After researching this issue, I came across this commit on master that is not in v0.8.2 of torchvision:
https://github.com/pytorch/vision/pull/2798/commits/fb893e7ba390d1b668efb4b84b3376cf634bd043
Applying the commit to v0.8.2 solved the problem: the operators are now correctly registered in TorchScript.

stack smashing calling python function importing tensorflow c++

I am new to TensorFlow as well as to embedding Python code in C++, so I would appreciate any tips/comments on the following weird behaviour:
I have a C++ class pythoninterface with header file pythoninterface.h:
#include <string>
#include <iostream>

class pythoninterface{
private:
    const char* file;
    const char* funct;
    const char* filepath;

public:
    pythoninterface();
    ~pythoninterface();
    void CallFunction();
};
The source file pythoninterface.cpp:
#include <Python.h>
#include <string>
#include <sstream>
#include <vector>
#include "pythoninterface.h"

pythoninterface::pythoninterface(){
    file = "TensorflowIncludePy";
    funct = "myTestFunction";
    filepath = "/path/To/TensorflowIncludePy.py";
}

void pythoninterface::CallFunction(){
    PyObject *pName, *pModule, *pDict, *pFunc, *pValue, *presult;

    // Initialize the Python Interpreter
    Py_Initialize();

    // Set in path where to find the custom python module other than the path
    // where Python's system modules/packages are found.
    std::stringstream changepath;
    changepath << "import sys; sys.path.insert(0, '" << filepath << "')";
    const std::string tmp = changepath.str();
    filepath = tmp.c_str();
    PyRun_SimpleString(this->filepath);

    // Build the name object
    pName = PyString_FromString(this->file);

    // Load the module object
    pModule = PyImport_Import(pName);
    if(pModule != NULL) {
        // pDict is a borrowed reference
        pDict = PyModule_GetDict(pModule);

        // pFunc is also a borrowed reference
        pFunc = PyDict_GetItemString(pDict, this->funct);
        if (PyCallable_Check(pFunc))
        {
            pValue = Py_BuildValue("()");
            printf("pValue is empty!\n");
            PyErr_Print();
            presult = PyObject_CallObject(pFunc, pValue);
            PyErr_Print();
        } else
        {
            PyErr_Print();
        }
        printf("Result is %d!\n", PyInt_AsLong(presult));
        Py_DECREF(pValue);

        // Clean up
        Py_DECREF(pModule);
        Py_DECREF(pName);
    }
    else{
        std::cout << "Python retuned null pointer, no file!" << std::endl;
    }

    // Finish the Python Interpreter
    Py_Finalize();
}
And the Python File from which the function should be included (TensorflowIncludePy.py):
def myTestFunction():
    print 'I am a function without an input!'
    gettingStartedTF()
    return 42

def gettingStartedTF():
    import tensorflow as tf  # At this point the error occurs!
    hello = tf.constant('Hello, TensorFlow!')
    sess = tf.Session()
    print(sess.run(hello))
    return 42
Finally, in my main function I only create a pythoninterface object p and call p.CallFunction(). The communication between the C++ and Python code works fine, but when (at runtime) the line import tensorflow as tf is reached, I get a *** stack smashing detected *** error message and the program terminates. Can anyone guess what the problem might be, or has anyone had a similar issue before?
I know there is a C++ TensorFlow API, but I feel more comfortable using TensorFlow in Python, so I thought this might be the perfect solution for me (apparently it is not... :P)

Call Python function from C/C++

I am trying to call a simple Python function which is defined in ctest.py as
def square(a):
    return a**2
The following pytest.c (in the same directory) is the C code that I am using to call the function. The issue I am experiencing is that when the C program tries to load the Python module, NULL is returned.
#include <Python.h>

int main(int argc, char* argv[])
{
    printf("Calling Python Function\n");

    Py_Initialize(); // Initialize the Python interpreter.

    // Create some Python objects that will later be assigned values.
    PyObject *pName, *pModule, *pDict, *pFunc, *pArgs, *pValue;

    // Convert the file name to a Python string.
    pName = PyString_FromString("ctest.py");
    if (pName == NULL)
        printf("file not found");
    else
        printf("%s\n", PyString_AsString(pName));

    // Import the file as a Python module.
    pModule = PyImport_Import(pName); // PROBLEM LINE
    if (pModule == NULL)
        printf("no Module\n");

    // Create a dictionary for the contents of the module.
    pDict = PyModule_GetDict(pModule);
    printf("After Dictionary retrieval\n");

    // Get the add method from the dictionary.
    pFunc = PyDict_GetItemString(pDict, "square");
    printf("after function retrieval\n");

    // Convert 2 to a Python integer.
    pValue = PyInt_FromLong(2);

    // Call the function with the arguments.
    PyObject* pResult = PyObject_CallObject(pFunc, pValue);

    // Print a message if calling the method failed.
    if (pResult == NULL)
        printf("Calling the add method failed.\n");

    // Convert the result to a long from a Python object.
    long result = PyInt_AsLong(pResult);

    // Destroy the Python interpreter.
    Py_Finalize();

    // Print the result.
    printf("The result is %d.\n", result);
    return 0;
}
The C code is built with:
gcc -o pytest -lpython2.7 -I/usr/include/python2.7 pytest.c
It looks like you are running into a naming/path issue.
You might have a look at this answer:
Why does PyImport_Import fail to load a module from the current directory?
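To make the linked suggestion concrete, here is a small sketch of the two changes that usually fix this (my own illustration, assuming Python 2.7 as in the question): import by module name without the .py extension, and make sure the current directory is on sys.path first.

// Make sure the current directory is searched for modules.
PyRun_SimpleString("import sys; sys.path.insert(0, '.')");

// Import by module name, not file name: "ctest", not "ctest.py".
PyObject *pName = PyString_FromString("ctest");
PyObject *pModule = PyImport_Import(pName);
if (pModule == NULL) {
    PyErr_Print();  // show why the import failed instead of guessing
}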

python ctypes + OCCI error

I am trying to test Python ctypes with a small C++ program that uses Oracle OCCI, to see if it's possible to use such a combination. It compiles to an .so library file OK, but I am getting what I think is a linker error when I try to use it from Python:
#include <string>
#include <iostream>
#include "occi.h"

using namespace std;
using namespace oracle::occi;

static string userName = "****";
static string passWord = "****";
static string connectString = "****";

class Account{
public:
    bool updateAccount(){
        bool updated = false;
        try{
            Environment *env = Environment::createEnvironment(Environment::DEFAULT);
            Connection *conn = env->createConnection(userName, passWord, connectString);
            Statement *stmt = conn->createStatement("select * from test");
            ResultSet *rs = stmt->executeQuery();
            while(rs->next()){
                cout << rs->getString(1) << endl;
                cout << rs->getString(2) << endl;
                cout << rs->getString(3) << endl;
            }
            conn->terminateStatement(stmt);
            env->terminateConnection(conn);
            Environment::terminateEnvironment(env);
        }catch(...){
        }
        return updated;
    }
};

extern "C" {
    Account* Account_new(){ return new Account(); }
    bool Account_updateAccount(Account* account){ return account->updateAccount(); }
}
#!/usr/local/bin/python2.6
import ctypes
import os

lib = ctypes.cdll.LoadLibrary(os.getcwd() + '/occi.so')

class Account(object):
    def __init__(self):
        self.obj = lib.Account_new()

    def updateAccount(self):
        lib.Account_updateAccount(self.obj)

if __name__ == "__main__":
    a = Account()
    b = a.updateAccount()
    print str(b)
Error when I run ctest.py:
Traceback (most recent call last):
File "./ctest.py", line 7, in <module>
lib = ctypes.cdll.LoadLibrary('/oracle/ctypes/occi.so')
File "/usr/local/lib/python2.6/ctypes/__init__.py", line 431, in LoadLibrary
return self._dlltype(name)
File "/usr/local/lib/python2.6/ctypes/__init__.py", line 353, in __init__
self._handle = _dlopen(self._name, mode)
OSError: ld.so.1: python2.6: fatal: relocation error: file /oracle/ctypes/occi.so: symbol _ZN6oracle4occi11Environment17createEnvironmentENS1_4ModeEPvPFS3_S3_jEPFS3_S3_S3_jEPFvS3_S3_E: referenced symbol not found
Any ideas? Could it be an issue with using the Oracle Instant Client libs? I have seen weird issues with these in the past when trying to compile other 3rd-party libraries against them.
Thanks