How do you wrap C++ OpenCV code with Boost::Python?

I want to wrap my C++ OpenCV code with boost::python, and to learn how to do it, I tried a toy example, in which
I use the Boost.Numpy project to provide me with boost::numpy::ndarray.
The C++ function to be wrapped, square(), takes a boost::numpy::ndarray and modifies it in place by squaring each element.
The exported Python module is called test.
The square() C++ function is exported under the name square in that module.
I am not using bjam because IMO it is too complicated and just doesn't work for me no matter what. I'm using good old make.
Now, here's the code:
// test.cpp
#include <boost/python.hpp>
#include <boost/numpy.hpp>
#include <boost/scoped_array.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
namespace py = boost::python;
namespace np = boost::numpy;
void square(np::ndarray& array)
{
    if (array.get_dtype() != np::dtype::get_builtin<int>())
    {
        PyErr_SetString(PyExc_TypeError, "Incorrect array data type.");
        py::throw_error_already_set();
    }
    size_t rows = array.shape(0), cols = array.shape(1);
    size_t stride_row = array.strides(0) / sizeof(int),
           stride_col = array.strides(1) / sizeof(int);
    cv::Mat mat(rows, cols, CV_32S);
    // Copy the numpy data into the cv::Mat, squaring each element on the way.
    int *row_iter = reinterpret_cast<int*>(array.get_data());
    for (int i = 0; i < rows; i++, row_iter += stride_row)
    {
        int *col_iter = row_iter;
        int *mat_row = (int*)mat.ptr(i);
        for (int j = 0; j < cols; j++, col_iter += stride_col)
        {
            *(mat_row + j) = (*col_iter) * (*col_iter);
        }
    }
    // Copy the results back into the numpy array; the row pointer has to be
    // reset first, otherwise the second pass writes past the buffer.
    row_iter = reinterpret_cast<int*>(array.get_data());
    for (int i = 0; i < rows; i++, row_iter += stride_row)
    {
        int *col_iter = row_iter;
        int *mat_row = (int*)mat.ptr(i);
        for (int j = 0; j < cols; j++, col_iter += stride_col)
        {
            *col_iter = *(mat_row + j);
        }
    }
}
BOOST_PYTHON_MODULE(test)
{
    using namespace boost::python;
    np::initialize();  // Boost.NumPy must be initialized before ndarray is used
    def("square", square);
}
And here's the Makefile:
PYTHON_VERSION = 2.7
PYTHON_INCLUDE = /usr/include/python$(PYTHON_VERSION)
BOOST_INC = /usr/local/include
BOOST_LIB = /usr/local/lib
OPENCV_LIB = $$(pkg-config --libs opencv)
OPENCV_INC = $$(pkg-config --cflags opencv)
TARGET = test
$(TARGET).so: $(TARGET).o
    g++ -shared -Wl,--export-dynamic \
        $(TARGET).o -L$(BOOST_LIB) -lboost_python \
        $(OPENCV_LIB) \
        -L/usr/lib/python$(PYTHON_VERSION)/config -lpython$(PYTHON_VERSION) \
        -o $(TARGET).so
$(TARGET).o: $(TARGET).cpp
    g++ -I$(PYTHON_INCLUDE) $(OPENCV_INC) -I$(BOOST_INC) -fPIC -c $(TARGET).cpp
With this scheme, I can type make and test.so gets created. But when I try to import it,
In [1]: import test
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-73ae3ffe1045> in <module>()
----> 1 import test
ImportError: ./test.so: undefined symbol: _ZN5boost6python9converter21object_manager_traitsINS_5numpy7ndarrayEE10get_pytypeEv
In [2]:
This is a linker error that I can't seem to fix. Can anyone help me figure out what's going on? Do you have (links to) code that already integrates OpenCV, numpy and Boost.Python without tools like Py++ or the like?

Okay, I fixed this. It was a simple issue, but a sleepy brain and too many servings of bjam had made me overlook it: in the Makefile, I had forgotten to add -lboost_numpy, which links the Boost.Numpy library into my module. So, the modified Makefile looks like this:
PYTHON_VERSION = 2.7
PYTHON_INCLUDE = /usr/include/python$(PYTHON_VERSION)
BOOST_INC = /usr/local/include
BOOST_LIB = /usr/local/lib
OPENCV_LIB = $$(pkg-config --libs opencv)
OPENCV_INC = $$(pkg-config --cflags opencv)
TARGET = test
$(TARGET).so: $(TARGET).o
    g++ -shared -Wl,--export-dynamic \
        $(TARGET).o -L$(BOOST_LIB) -lboost_python -lboost_numpy \
        $(OPENCV_LIB) \
        -L/usr/lib/python$(PYTHON_VERSION)/config -lpython$(PYTHON_VERSION) \
        -o $(TARGET).so
$(TARGET).o: $(TARGET).cpp
    g++ -I$(PYTHON_INCLUDE) $(OPENCV_INC) -I$(BOOST_INC) -fPIC -c $(TARGET).cpp
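As for the OpenCV/numpy integration itself: the element-by-element copy in square() isn't strictly needed, because a cv::Mat can borrow the numpy buffer directly. Here is a minimal, untested sketch of that idea, assuming a contiguous, row-major, 2-D int32 array (the name square_nocopy is just for illustration):
// Sketch only: wrap the numpy buffer in a cv::Mat header (no copy)
// and square it in place.
void square_nocopy(np::ndarray& array)
{
    if (array.get_dtype() != np::dtype::get_builtin<int>() || array.get_nd() != 2)
    {
        PyErr_SetString(PyExc_TypeError, "Expected a 2-D int32 array.");
        py::throw_error_already_set();
    }
    int rows = static_cast<int>(array.shape(0));
    int cols = static_cast<int>(array.shape(1));
    // cv::Mat borrows the data; the numpy array keeps ownership.
    cv::Mat mat(rows, cols, CV_32S, array.get_data(),
                static_cast<size_t>(array.strides(0)));
    cv::multiply(mat, mat, mat);  // element-wise square, written back in place
}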

Related

Cannot open a file through fopen()

I'm working on making a simulator with C++, for which I need to read files.
My directory looks something like this:
proj
------>bin #stores the executable
------>include #stores the external library include files
------>lib #stores the lib files of the libraries
------>obj #stores the .o files
------>src #source files
makefile
My makefile looks like this:
CC = g++
OUT = chip8
ODIR = ./obj
SDIR = ./src
OUTDIR = ./bin
IDIR = ./include
LDIR = ./lib
libs = -lmingw32 -lSDL2main -lSDL2 -lSDL2_image
OBJS = $(patsubst $(SDIR)/%.cpp,$(ODIR)/%.o,$(wildcard $(SDIR)/*.cpp))
vpath %.cpp $(SDIR):$(SDIR)/Chip8
$(ODIR)/%.o : %.cpp
    $(CC) -c -I $(IDIR) -o $@ $^
$(OUTDIR)/% : $(wildcard obj/*.o)
    $(CC) -L $(LDIR) -o $@ $^ $(libs)
.PHONY : run
run :
    $(OUTDIR)/$(OUT)
These are my source files:
main.cpp
#include <iostream>
#include "Chip8/RomFileReader.h"

int main(){
    RomReader romReader;
    if(romReader.OpenFile("picture.ch8") == -1){
        std::cout << "could not open file !" << std::endl;
        return EXIT_FAILURE;
    }
    romReader.GetRom();
    uint8_t * rom = romReader.ReturnRom();
    int size = romReader.GetRomSize();
    for(int i = 0; i < size; i++)
        std::cout << rom[i] << std::endl;
    free(rom);
    romReader.FreeRom();
    romReader.CloseReader();
}
RomFileReader.h
#pragma once
#include <iostream>
#include <stdlib.h>
#include <inttypes.h>
#include <stdio.h>

class RomReader{
private :
    FILE * m_Reader;
    uint8_t * m_Rom;
public :
    int OpenFile(const char * fileName);
    void GetRom();
    void FreeRom();
    uint8_t * ReturnRom();
    void CloseReader();
    int GetRomSize();
};
RomFileReader.cpp
#include "RomFileReader.h"
int RomReader :: OpenFile(const char * fileName){
    m_Reader = fopen(fileName, "rb");
    if(m_Reader == NULL){
        return -1;
    } else
        return 1;
}
int RomReader :: GetRomSize(){
    int start = ftell(m_Reader);
    fseek(m_Reader, 0, SEEK_END);
    int end = ftell(m_Reader);
    fseek(m_Reader, 0, SEEK_SET);
    int size = end - start;
    return size;
}
void RomReader :: GetRom(){
    int size = GetRomSize();
    if(m_Rom == NULL){
        m_Rom = new uint8_t[size];
    }
    fread(m_Rom, 1, size, m_Reader);
}
void RomReader :: FreeRom(){
    free(m_Rom);
}
uint8_t * RomReader :: ReturnRom(){
    return m_Rom;
}
void RomReader :: CloseReader(){
    fclose(m_Reader);
}
This is the error I'm getting:
./bin/chip8
could not open file !
make: *** [run] Error 1
I could use fstream, but I'm more comfortable and confident using FILE instead; I had done something similar in C and it worked without any issue.
I'm really not able to pin down what exactly is not working.
My picture.ch8 is in the bin folder along with the executable, yet I get this error. What exactly am I missing?
Your main is calling romReader.FreeRom(); and I think m_Rom is not NULL to begin with, because it is never initialized. That means the NULL check in GetRom() skips the allocation, and FreeRom() then frees a garbage pointer, which is probably where the memory error comes from.
Set it to NULL in a constructor of your class:
class RomReader {
    ...
public :
    RomReader() { m_Rom = NULL; };
    ~RomReader() { if ( m_Rom != NULL ) delete [] m_Rom; };
    ...
};
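Also, since GetRom() allocates the buffer with new[], the matching release should use delete [] rather than free(). A minimal sketch, keeping the question's member names:
// Sketch only: pair new[] with delete[] and reset the pointer so a second
// FreeRom() call (or the destructor above) stays harmless.
void RomReader :: FreeRom(){
    delete [] m_Rom;
    m_Rom = NULL;
}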
The basic problem you have is that wildcard expands to the files that exist when make reads the makefile. So when you build with a clean tree (where the obj directory is empty), it expands to nothing, so nothing gets built.
The upshot is that wildcard can't usefully be used for intermediate files generated as part of your build, because they might not exist yet when make parses the makefile. It is only useful for finding source files.
You need instead something like
$(OUTDIR)/$(OUT): $(patsubst src/%.cpp, obj/%.o, $(wildcard src/*.cpp)) $(patsubst src/Chip8/%.cpp, obj/%.o, $(wildcard src/Chip8/*.cpp))
You also probably want to have your run target depend on the executable
run: $(OUTDIR)/$(OUT)
otherwise it will not (re)build it when you try to run.

Undefined symbol when trying to link with shared library built from CUDA objects

I'm experimenting with building a simple application from a couple of .cu source files and a very simple C++ main that calls a function from one of the .cu files. I'm making a shared library (.so file) from the compiled .cu files. I'm finding that everything builds without trouble, but when I try to run the application, I get a linker undefined symbol error, with the mangled name of the .cu function I'm calling from main(). If I build a static library instead, my application runs just fine. Here's the makefile I've set up:
.PHONY: clean
NVCCFLAGS = -std=c++11 --compiler-options '-fPIC'
CXXFLAGS = -std=c++11
HLIB = libhello.a
SHLIB = libhello.so
CUDA_OBJECTS = bridge.o add.o
all: driver
%.o :: %.cu
    nvcc -o $@ $(NVCCFLAGS) -c -I. $<
%.o :: %.cpp
    c++ $(CXXFLAGS) -o $@ -c -I. $<
$(HLIB): $(CUDA_OBJECTS)
    ar rcs $@ $^
$(SHLIB): $(CUDA_OBJECTS)
    nvcc $(NVCCFLAGS) --shared -o $@ $^
#driver : driver.o $(HLIB)
#    c++ -std=c++11 -fPIC -o $@ driver.o -L. -lhello -L/usr/local/cuda-10.1/targets/x86_64-linux/lib -lcudart
driver : driver.o $(SHLIB)
    c++ -std=c++11 -fPIC -o $@ driver.o -L. -lhello
clean:
    -rm -f driver *.o *.so *.a
Here are the various source files that the makefile takes as fodder.
add.cu:
__global__ void add(int n, int* a, int* b, int* c) {
    int index = threadIdx.x;
    int stride = blockDim.x;
    for (int ii = index; ii < n; ii += stride) {
        c[ii] = a[ii] + b[ii];
    }
}
add.h:
extern __global__ void add(int n, int* a, int* b, int* c);
bridge.cu:
#include <iostream>
#include "add.h"

void bridge() {
    int N = 1 << 16;
    int blockSize = 256;
    int numBlocks = (N + blockSize - 1)/blockSize;
    int* a;
    int* b;
    int* c;
    cudaMallocManaged(&a, N*sizeof(int));
    cudaMallocManaged(&b, N*sizeof(int));
    cudaMallocManaged(&c, N*sizeof(int));
    for (int ii = 0; ii < N; ii++) {
        a[ii] = ii;
        b[ii] = 2*ii;
    }
    add<<<numBlocks, blockSize>>>(N, a, b, c);
    cudaDeviceSynchronize();
    for (int ii = 0; ii < N; ii++) {
        std::cout << a[ii] << " + " << b[ii] << " = " << c[ii] << std::endl;
    }
    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
}
bridge.h:
extern void bridge();
driver.cpp:
#include "bridge.h"
int main() {
bridge();
return 0;
}
I'm very new to cuda, so I expect that's where I'm doing something wrong. I've played a bit with using extern "C" declarations, but that just seems to move the "undefined symbol" error from run time to build time.
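For reference, the extern "C" experiment amounted to roughly the following on the declaration side (a sketch, not the exact code that was tried):
// Hypothetical extern "C" variant of bridge.h: declares bridge() with C
// linkage, so callers look for the unmangled symbol "bridge" instead of
// _Z6bridgev (the definition in bridge.cu would need the same treatment).
#ifdef __cplusplus
extern "C" {
#endif
void bridge();
#ifdef __cplusplus
}
#endif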
I'm familiar with various ways that one can end up with an undefined symbol, and I've mentioned various experiments I've already performed (static linking, extern "C" declarations) that make me think that this problem isn't addressed by the proposed duplicate question.
My unresolved symbol is _Z6bridgev
It looks to me as though the linker should be able to resolve the symbol. If I run nm on driver.o, I see:
0000000000000000 T main
U _Z6bridgev
And if I run nm on libhello.so, I see:
0000000000006e56 T _Z6bridgev
When Robert Crovella was able to get my example to work on his machine, while I wasn't able to get his example to work on mine, I started realizing that my problem had nothing to do with cuda or nvcc. It was the fact that with a shared library, the loader has to resolve symbols at runtime, and my shared library wasn't in a "well-known location". I built a simple test case just now, purely with c++ sources, and repeated my failure. Once I copied libhello.so to /usr/local/lib, I was able to run driver successfully. So, I'm OK with closing my original question, if that's the will of the people.

cublasSasum causing segmentation fault in custom TensorFlow op [duplicate]

The cuBLAS code below gives us a core dump at the call to cublasSnrm2(handle,row,dy,incy,de). Could you give some advice?
main.cu
#include <iostream>
#include "cublas.h"
#include "cublas_v2.h"
#include "helper_cuda.h"
using namespace std;

int main(int argc, char *args[])
{
    float y[10] = {1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0};
    int dev = 0;
    checkCudaErrors(cudaSetDevice(dev));
    //cublas init
    cublasStatus stat;
    cublasInit();
    cublasHandle_t handle;
    stat = cublasCreate(&handle);
    if (stat != CUBLAS_STATUS_SUCCESS)
    {
        printf("cublas handle create failed!\n");
        cublasShutdown();
    }
    float *dy, *de, *e;
    int incy = 1, ONE = 1, row = 10;
    e = (float *)malloc(sizeof(float)*ONE);
    e[0] = 0.0f;
    checkCudaErrors(cudaMalloc(&dy, sizeof(float)*row));
    checkCudaErrors(cudaMalloc(&de, sizeof(float)*ONE));
    checkCudaErrors(cudaMemcpy(dy, y, row*sizeof(float), cudaMemcpyHostToDevice));
    checkCudaErrors(cudaMemcpy(de, e, ONE*sizeof(float), cudaMemcpyHostToDevice));
    stat = cublasSnrm2(handle, row, dy, incy, de);
    if (stat != CUBLAS_STATUS_SUCCESS)
    {
        printf("norm2 compute failed!\n");
        cublasShutdown();
    }
    checkCudaErrors(cudaMemcpy(e, de, ONE*sizeof(float), cudaMemcpyDeviceToHost));
    std::cout << e[0] << endl;
    return 0;
}
The makefile is below:
NVIDIA = $(HOME)/NVIDIA_CUDA-5.0_Samples
CUDA = /usr/local/cuda-5.0
NVIDINCADD = -I$(NVIDIA)/common/inc
CUDAINCADD = -I$(CUDA)/include
CC = -L/usr/lib64/ -lstdc++
GCCOPT = -O2 -fno-rtti -fno-exceptions
INTELOPT = -O3 -fno-rtti -xW -restrict -fno-alias
DEB = -g
NVCC = -G
ARCH = -arch=sm_35
bcg: main.cu
    nvcc $(DEB) $(NVCC) $(ARCH) $(CC) -lm $(NVIDINCADD) $(CUDAINCADD) -lcublas -I./ -o $(@) $(<)
clean:
    rm -f bcg
    rm -f hyb
My OS is Red Hat Linux 6.2, the CUDA version is 5.0, and the GPU is a K20M.
The problem is here:
cublasSnrm2(handle,row,dy,incy,de);
By default, the last parameter is a host pointer. So either pass e to the snrm2 call rather than de or do this:
cublasSetPointerMode(handle,CUBLAS_POINTER_MODE_DEVICE);
stat = cublasSnrm2(handle,row,dy,incy,de);
The pointer mode needs to be set to device if you want to pass a device pointer to store the result.
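For reference, the host-pointer variant would look roughly like this (a sketch reusing the question's variables, with the default CUBLAS_POINTER_MODE_HOST left in place):
// With the default pointer mode the result argument is a host pointer,
// so cuBLAS writes the norm straight into host memory — no cudaMemcpy needed.
stat = cublasSnrm2(handle, row, dy, incy, e);
if (stat != CUBLAS_STATUS_SUCCESS)
{
    printf("norm2 compute failed!\n");
    cublasShutdown();
}
std::cout << e[0] << endl;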

linker error: undefined symbol

What I'm trying to do
I am attempting to create two C++ classes:
One, named Agent, that will be a member of the second class.
Two, named Env, that will be exposed to Python through boost.python (though I suspect this detail is inconsequential to my problem).
The problem
After successful compilation with my makefile, I attempt to run my Python script and I receive an import error on my extension module (the C++ code) that reads "undefined symbol: _ZN5AgentC1Effff". All the boost.python stuff aside, I believe this to be a simple C++ linker error.
Here are my files:
Agent.h
class Agent {
public:
    float xy_pos[2];
    float xy_vel[2];
    float yaw;
    float z_pos;

    Agent(float x_pos, float y_pos, float yaw, float z_pos);
};
Agent.cpp
#include "Agent.h"
Agent::Agent(float x_pos, float y_pos, float yaw, float z_pos)
{
xy_vel[0] = 0;
xy_vel[1] = 0;
xy_pos[0] = x_pos;
xy_pos[1] = y_pos;
z_pos = z_pos;
yaw = yaw;
};
test_ext.cpp (where my Env class lives)
#include "Agent.h"
#include <boost/python.hpp>
class Env{
public:
Agent * agent;
//some other members
Env() {
agent = new Agent(13, 10, 0, 2);
}
np::ndarray get_agent_vel() {
return np::from_data(agent->xy_vel, np::dtype::get_builtin<float>(),
p::make_tuple(2),
p::make_tuple(sizeof(float)),
p::object());
}
void set_agent_vel(np::ndarray vel) {
agent->xy_vel[0] = p::extract<float>(vel[0]);
agent->xy_vel[1] = p::extract<float>(vel[1]);
}
}
BOOST_PYTHON_MODULE(test_ext) {
using namespace boost::python;
class_<Env>("Env")
.def("set_agent_vel", &Env::set_agent_vel)
.def("get_agent_vel", &Env::get_agent_vel)
}
Makefile
PYTHON_VERSION = 3.5
PYTHON_INCLUDE = /usr/include/python$(PYTHON_VERSION)
# location of the Boost Python include files and library
BOOST_INC = /usr/local/include/boost_1_66_0
BOOST_LIB = /usr/local/include/boost_1_66_0/stage/lib/
# compile mesh classes
TARGET = test_ext
CFLAGS = --std=c++11
$(TARGET).so: $(TARGET).o
    g++ -shared -Wl,--export-dynamic $(TARGET).o -L$(BOOST_LIB) -lboost_python3 -lboost_numpy3 -L/usr/lib/python3.5/config-3.5m-x86_64-linux-gnu -lpython3.5 -o $(TARGET).so
$(TARGET).o: $(TARGET).cpp Agent.o
    g++ -I$(PYTHON_INCLUDE) -I$(BOOST_INC) -fPIC -c $(TARGET).cpp $(CFLAGS)
Agent.o: Agent.cpp Agent.h
    g++ -c -Wall Agent.cpp $(CFLAGS)
You never link with Agent.o anywhere.
First of all you need to build it like you build test_ext.o with the same flags. Then you need to actually link with Agent.o when creating the shared library.

When using dyn.load in R I do not get any error messages, but my .dll file does not load

I am following an example from the 'Writing R Extensions' manual, version 3.3.1. Specifically, I am working with the C++ code shown below (from page 115 of that manual), which I have saved in the file out.cpp:
#include <R.h>
#include <Rinternals.h>
SEXP out(SEXP x, SEXP y)
{
    int nx = length(x), ny = length(y);
    SEXP ans = PROTECT(allocMatrix(REALSXP, nx, ny));
    double *rx = REAL(x), *ry = REAL(y), *rans = REAL(ans);
    for(int i = 0; i < nx; i++) {
        double tmp = rx[i];
        for(int j = 0; j < ny; j++)
            rans[i + nx*j] = tmp * ry[j];
    }
    UNPROTECT(1);
    return ans;
}
I have a Windows computer and use the Cygwin command line. There I typed R CMD SHLIB out.cpp and got the following result:
cygwin warning:
MS-DOS style path detected: C:/R-3.2.3/etc/x64/Makeconf
Preferred POSIX equivalent is: /cygdrive/c/R-3.2.3/etc/x64/Makeconf
CYGWIN environment variable option "nodosfilewarning" turns off this warning.
Consult the user's guide for more details about POSIX paths:
http://cygwin.com/cygwin-ug-net/using.html#using-pathnames
g++ -m64 -I"C:/R-3.2.3/include" -DNDEBUG -I"d:/RCompile/r-compiling/local/local323/include" -O2 -Wall -mtune=core2 -c out.cpp -o out.o
g++ -m64 -shared -s -static-libgcc -o out.dll tmp.def out.o -Ld:/RCompile/r-compiling/local/local323/lib/x64 -Ld:/RCompile/r-compiling/local/local323/lib -LC:/R-3.2.3/bin/x64 -lR
I think there is nothing wrong with this, right? Then, when I go into RStudio, I type
dyn.load(paste("C:/Rextensions/out",
.Platform$dynlib.ext, sep = ""),local=FALSE)
which gives me no error message whatsoever. But when I run
is.loaded('out')
I get 'FALSE', so it seems I am never able to load out.dll. Could you help me with this, please?
P.S.: I know how to run functions like this one using Rcpp, so please don't tell me to do that instead. What I am trying to do is understand the Writing R Extensions manual really well, so that I can work through all of the examples in that document.
Thank you.