I have a problem where a function call at runtime ends up in a completely wrong overload of an implemented interface function. I call the function passing a bool parameter, and the implementation that actually runs is the one taking a float parameter...
Here's my use case; it may be a little confusing:
I'm writing a game in which the user writes Arduino code in an Unreal Engine (UE) widget, and that code is saved into a .cpp file.
Behind the scenes, usercode.cpp (the saved file) takes the user-written code and "injects" some useful stuff, so a small piece of code ends up looking like this:
#include "IArduino.h"
extern "C"
{
__declspec(dllexport) void setup();
__declspec(dllexport) void loop();
__declspec(dllexport) void InitArduinoPtrs(IArduino*);
};
IArduino* arduinoPtr = nullptr;
void InitArduinoPtrs(IArduino* InArduinoPtr)
{
arduinoPtr = InArduinoPtr;
}
void setup()
{
arduinoPtr->pinMode(true);
}
** Injected code: the "IArduino.h" include, the extern "C" block, "arduinoPtr" and "InitArduinoPtrs()".
I have an interface called "IArduino" that has an overloaded pure virtual function (for testing only):
class IArduino
{
public:
virtual void pinMode(float) = 0;
virtual void pinMode(bool) = 0;
};
I'm compiling this usercode.cpp as a shared library (usercode.cpp using IArduino.h):
g++ -c -I[IArduino.h_PATH] -Wall -fpic ./usercode.cpp -o ./usercode.o 2>&1
g++ -shared -o ./usercode.so ./usercode.o 2>&1
In my Unreal Engine project, I created an "ArduinoImpl[.h,.cpp]" class that implements the "IArduino" interface:
class ARDUCOMPILATIONTEST_API ArduinoImpl : public IArduino
{
public:
void pinMode(bool portNumBoolTest);
void pinMode(float portNumTest);
};
At runtime in Unreal Engine, I'm using GetProcAddress() to get the setup() and InitArduinoPtrs() functions.
With InitArduinoPtrs() resolved, I call it passing a pointer to a new "ArduinoImpl" object (which implements IArduino).
In another class that runs at runtime:
m_setup_function = (m_setup)FPlatformProcess::GetDllExport(m_dllHandle, TEXT("setup"));
m_init_function = (m_init)FPlatformProcess::GetDllExport(m_dllHandle, TEXT("InitArduinoPtrs"));
ArduinoImpl* arduinoImplPtr = new ArduinoImpl();
if (m_init_function)
m_init_function(arduinoImplPtr);
if (m_setup_function)
m_setup_function();
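For reference, a minimal sketch of the function-pointer types those casts assume; these declarations are my guess based on the snippet above, with the handle coming from FPlatformProcess::GetDllHandle():
typedef void (*m_setup)();            // matches __declspec(dllexport) void setup();
typedef void (*m_init)(IArduino*);    // matches __declspec(dllexport) void InitArduinoPtrs(IArduino*);
m_setup m_setup_function = nullptr;
m_init m_init_function = nullptr;
void* m_dllHandle = nullptr;          // result of FPlatformProcess::GetDllHandle(TEXT("usercode.so"))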
My problem: usercode.cpp's setup() runs, and then:
When "arduinoPtr->pinMode(true)" is called, it ends up in "ArduinoImpl::pinMode(float)" instead of "ArduinoImpl::pinMode(bool)", completely mixing up which overload runs. Moreover, if I write "arduinoPtr->pinMode(2.5f)", it calls the bool overload instead of the float overload... The same thing happens with other parameter types, such as int, string, etc.
Does anybody know why this is happening?
I tried logging the results: logging the float received when "arduinoPtr->pinMode(true)" is called, I get garbage values (such as 0.5, 512.803, 524288.187500, ...). Logging the bool value when "arduinoPtr->pinMode(2.5f)" is called, I get "208".
I tried forcing a cast at the call site, like "arduinoPtr->pinMode(static_cast<bool>(true))", and it still calls the float overload.
I tried different parameter types and the same thing keeps happening.
First I will start with the reason I need name mangling at runtime: I need to create a bridge between a DLL and its wrapper.
namespace Wrapper
{
class __declspec(dllexport) Token
{
public:
virtual void release() {}
};
}
class __declspec(dllexport) Token
{
public:
virtual void release(){}
};
The idea is to use dumpbin to generate all the mangled names of the DLL holding class Token and then demangle them:
?release@Token@@UAEXXZ --> void Token::release(void)
After that I want to convert the result to match the Wrapper, so I will need to change the function name:
void Token::release(void) --> void Wrapper::Token::release(void)
Then I need to mangle it again so I can create a .def file that redirects the old function to the new one:
?release@Token@@UAEXXZ = ?release@Token@Wrapper@@UAEXXZ
All of this needs to happen at runtime.
The first and easiest solution would be to find a function that mangles names, but I couldn't find any...
Is there any other solution?
The Clang compiler is ABI-compatible with MSVC, including name mangling.
The underlying infrastructure is part of the LLVM project, and I found llvm-undname which demangles MSVC names. Perhaps you can rework it to add the Wrapper:: namespace to symbols and re-mangle.
You can find inspiration about mangling names in this test code.
If you are allowed to change the DLL, I'd usually use a different approach: export an extern "C" factory function (whose name is not mangled and thus doesn't need demangling) and use a virtual interface to access the class (note that the virtual interface then doesn't need to be dllexported). Your Token interface seems to be virtual anyway.
Something along those lines (not tested, just to show the idea):
DLL access header:
class Token // notice no dllexport!
{
protected:
// should not be used to delete directly (DLL vs EXE heap issues)
virtual ~Token() {}
virtual void destroyImpl() = 0; // pure virtual
public:
static inline void destroy(Token* token) {
// need to check for NULL otherwise virtual call would segfault
if (token) token->destroyImpl();
}
virtual void doSomething() = 0; // pure virtual
};
extern "C" __declspec(dllexport) Token * createToken();
DLL implementation:
class TokenImpl: public Token
{
public:
virtual void destroyImpl() {
delete this;
}
virtual void doSomething() {
// implement here
}
};
extern "C" __declspec(dllexport) Token * createToken()
{
return new TokenImpl;
}
Usage:
// ideally wrap in RAII to be sure to always release
// (e.g. can use std::shared_ptr with custom deleter)
Token * token = createToken();
// use the token
token->doSomething();
// destroy
Token::destroy(token);
With std::shared_ptr (you can also add a typedef / static inline convenience creator function to the Token interface):
std::shared_ptr<Token> token(createToken(),
// Use the custom destroy function
&Token::destroy);
token->doSomething();
// token->destroy() called automatically when last shared ptr reference removed
This way you only need to export the extern "C" creator function (and a release function, if it is not part of the interface); its name will not be mangled and is thus easy to use via runtime loading.
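If you'd rather keep the release entry point out of the interface, a sketch of such an exported function, reusing the Token::destroy helper from the header above (the name destroyToken is made up), might look like:
extern "C" __declspec(dllexport) void destroyToken(Token* token)
{
    // delegate to the interface's static helper so the object is freed on the DLL's heap
    Token::destroy(token);
}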
I have a C-style macro in my code which prints log messages. I want to alter the macro to also print the this pointer. But some portions of the code are not member functions of a class, and some are static functions. So, in my macro, I want to check whether the current line of code is inside a non-static member function or not. Is that possible?
No. The preprocessor, as the name says, runs first. Interpreting a sequence of tokens as a class definition is done by the compiler, which runs after the preprocessor. Therefore the preprocessor has no idea about classes, or functions, or variables, or any other C++ construct.
BTW, inside a class you still have static member functions, which don't have a this pointer either.
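A tiny illustration of the point: the preprocessor performs pure text substitution, so the same macro body lands unchanged in both contexts and has no way to ask whether a this pointer exists (LOG_LINE is just a made-up name):
#include <cstdio>
// expands to identical text everywhere; the preprocessor cannot branch on context
#define LOG_LINE() std::printf("log at line %d\n", __LINE__)

struct S {
    void member() { LOG_LINE(); }  // same expansion here...
};
void freeFunc()   { LOG_LINE(); }  // ...as here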
Well, this doesn't exactly fit your requirements, but it may be doable to integrate it into a build environment. (Edit: I just realized that it fails for static member functions; hm.)
My idea is to define a function log() twice: one in the global namespace, which is the obvious result of name resolution for calls from within free-standing functions; the other log() is a member function of a base class from which all classes that want to log must inherit. (That's the bad part; for a big existing code base that's hardly doable.) The inheritance, function definitions and calls can be made dependent on a preprocessor define so that they have no influence on production code:
#include<cstdio>
// The following would go in a header which must be included by all source
// files which use one of the macros, i.e. which want to log errors.
#ifdef DEBUG
# define INHERIT_LOG() : virtual protected logT
# define LOG(s) log(s)
/** ::log which will be called by free-standing functions */
static void log(const char *err, const void *thisP = nullptr)
{
if(thisP) { fprintf(stderr, "this=%p: %s\n", thisP, err); }
else { fprintf(stderr, "free func: %s\n", err); }
}
/** A base class to inherit from when logging is required */
class logT
{ // this name "log" will be preferred over ::log
// from within all classes which inherit from logT.
protected: void log(const char *const err){ ::log(err, this); }
};
#else
// define the macros to do nothing
# define INHERIT_LOG()
# define LOG(s)
#endif
////////////// end of log header ///////////////
/** Inherits from logT only when DEBUG is defined */
struct T INHERIT_LOG() { void f(); };
void T::f() { LOG("message from T::f"); }// if LOG is expanded to log, calls logT::log
void f() { LOG("message from ::f"); } // if LOG is expanded to log, calls ::log
int main()
{
T().f();
f();
}
Sample session:
$ g++ -std=c++14 -Wall -o log log.cpp && ./log
$ g++ -DDEBUG -std=c++14 -Wall -o log log.cpp && ./log
this=0xffffcc00: message from T::f
free func: message from ::f
$
AFAIK this is not easily possible. There is no way in the preprocessor to check whether a variable is defined or not.
You could introduce new logic (like inserting something into each function that declares what kind of function it is), but it is probably much easier to have two versions of your macro, one for the case where this is available and one for where it is not.
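A minimal sketch of that two-macro idea (the names here are illustrative, not from the question):
#include <cstdio>
// use LOG in free functions and static member functions, LOG_MEMBER where "this" exists
#define LOG(msg)        std::fprintf(stderr, "%s\n", (msg))
#define LOG_MEMBER(msg) std::fprintf(stderr, "this=%p: %s\n", (const void*)this, (msg))

struct Widget {
    void resize()       { LOG_MEMBER("resize called"); }  // has a this pointer
    static void count() { LOG("count called"); }          // no this pointer here
};
void freeFunc()         { LOG("freeFunc called"); }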
I have a C++ DLL that was previously used from a C# application. Now we want to use the same DLL from Java. I know that we can use JNI for this, but the problem is that we have to keep the same method signatures; we don't want to change them. Please advise.
One option is using JNA instead of JNI. It eliminates the need for the boilerplate native code. An example would look something like this...
import com.sun.jna.Library;
import com.sun.jna.Native;
public class Example {
public interface NativeMath extends Library {
public boolean isPrime(int x);
}
public static void main(String[] args) {
int x = 83;
NativeMath nm = (NativeMath) Native.loadLibrary("nm", NativeMath.class);
System.out.println(x + " is prime: " + nm.isPrime(x));
}
}
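For context, a rough sketch of what the native side of that JNA example might look like; JNA binds to plain exported C symbols, so the DLL only needs an unmangled isPrime (the library name "nm" comes from the example above, and the int return type reflects JNA's default mapping of boolean to a native int):
// built into the "nm" DLL; extern "C" keeps the symbol unmangled so JNA can find it
extern "C" __declspec(dllexport) int isPrime(int x)
{
    if (x < 2) return 0;
    for (int i = 2; i * i <= x; ++i)
        if (x % i == 0) return 0;
    return 1;
}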
You don't have to change the method signature, you simply add a native method which then calls the native C++ code. Here is a simple example:
public class Someclass
{
public native void thisCallsCMethod(Someparams);
}
Now create the JNI wrappers:
javac Someclass.java
javah Someclass
This will create a Someclass.h, then you create Someclass.cpp and include the .h in it.
At this point all you have to do is write the C/C++ code for thisCallsCMethod.
In the .h you'll see a method signature that you have to implement. Something along the lines of:
#include "clibraryHeader.h"
using namespace clibraryNamespace;
JNIEXPORT void JNICALL thisCallsCMethod(JNIEnv *, someparameters)
{
cout<<"Yeah C code is being called"<<endl;
someCfunction();
}
Obviously you have to massage the parameters in the JNI call, but you can create some temporary variables, then copy back the values you get from the C calls into the incoming parameters (if they need to be returned) etc.
Maybe:
#include "clibraryHeader.h"
using namespace clibraryNamespace;
JNIEXPORT void JNICALL thisCallsCMethod(JNIEnv *, someparameters)
{
cout<<"Yeah C code is being called"<<endl;
Cstruct temp;
temp1.somevar = param1.getSomeVal()
someCfunction(temp);
}
I'm using Xcode and C++ to make a simple game.
The problem is the following code:
#include <pthread.h>
void *draw(void *pt) {
// ...
}
void *input(void *pt) {
// ....
}
void Game::create_threads(void) {
pthread_t draw_t, input_t;
pthread_create(&draw_t, NULL, &Game::draw, NULL); // Error
pthread_create(&input_t, NULL, &Game::draw, NULL); // Error
// ...
}
But Xcode gives me the error: "No matching function for call to 'pthread_create'". I have no idea why, since I've already included pthread.h.
What's wrong?
Thanks!
As Ken states, the function passed as the thread callback must have the type void* (*)(void*).
You can still make this function a member of the class, but it must be declared static. You'll potentially need a different one for each thread type (e.g. draw, input).
For example:
#include <pthread.h>

class Game {
public:
    void create_threads(void);
protected:
    void draw(void);
    static void* game_draw_thread_callback(void*);
};

// and in your .cpp file...
void Game::create_threads(void) {
    pthread_t draw_t;
    // pass the Game instance as the thread callback's user data
    pthread_create(&draw_t, NULL, Game::game_draw_thread_callback, this);
}

// note: "static" is written only in the class definition, not on the out-of-class definition
void* Game::game_draw_thread_callback(void *game_ptr) {
    // I'm a C programmer, sorry for the C cast.
    Game * game = (Game*)game_ptr;
    // run the method that does the actual drawing,
    // but now, you're in a thread!
    game->draw();
    return NULL;
}
Compiling code that uses pthreads is done by providing the -pthread option. For example, compiling abc.cpp requires something like g++ -pthread abc.cpp; otherwise you get an error like "undefined reference to `pthread_create' collect2: ld returned 1 exit status". There must be a similar way to provide the pthread option in Xcode.
You're passing a member function pointer (i.e. &Game::draw) where a plain function pointer is required. You need to make the function a static member function.
Edited to add: if you need to invoke member functions (which is likely), make a static member function that interprets its parameter as a Game* and invokes the member functions on that, then pass this as the last parameter of pthread_create().
I have a class interface written in C++. I have a few classes that implement this interface also written in C++. These are called in the context of a larger C++ program, which essentially implements "main". I want to be able to write implementations of this interface in Python, and allow them to be used in the context of the larger C++ program, as if they had been just written in C++.
There's been a lot written about interfacing python and C++ but I cannot quite figure out how to do what I want. The closest I can find is here: http://www.cs.brown.edu/~jwicks/boost/libs/python/doc/tutorial/doc/html/python/exposing.html#python.class_virtual_functions, but this isn't quite right.
To be more concrete, suppose I have an existing C++ interface defined something like:
// myif.h
class myif {
public:
virtual float myfunc(float a);
};
What I want to be able to do is something like:
// mycl.py
... some magic python stuff ...
class MyCl(myif):
def myfunc(a):
return a*2
Then, back in my C++ code, I want to be able to say something like:
// mymain.cc
void main(...) {
... some magic c++ stuff ...
myif c = MyCl(); // get the python class
cout << c.myfunc(5) << endl; // should print 10
}
I hope this is sufficiently clear ;)
There are two parts to this answer. First you need to expose your interface in Python in a way that allows Python implementations to override parts of it at will. Then you need to show your C++ program (in main) how to call the Python code.
Exposing the existing interface to Python:
The first part is pretty easy to do with SWIG. I modified your example scenario slightly to fix a few issues and added an extra function for testing:
// myif.h
class myif {
public:
virtual float myfunc(float a) = 0;
};
inline void runCode(myif *inst) {
std::cout << inst->myfunc(5) << std::endl;
}
For now I'll look at the problem without embedding Python in your application, i.e. execution starts in Python, not in int main() in C++. It's fairly straightforward to add that later though.
First up is getting cross-language polymorphism working:
%module(directors="1") module
// We need to include myif.h in the SWIG generated C++ file
%{
#include <iostream>
#include "myif.h"
%}
// Enable cross-language polymorphism in the SWIG wrapper.
// It's pretty slow, so it is not enabled by default
%feature("director") myif;
// Tell swig to wrap everything in myif.h
%include "myif.h"
To do that we've enabled SWIG's director feature globally and specifically for our interface. The rest of it is pretty standard SWIG though.
I wrote a test Python implementation:
import module
class MyCl(module.myif):
def __init__(self):
module.myif.__init__(self)
def myfunc(self,a):
return a*2.0
cl = MyCl()
print cl.myfunc(100.0)
module.runCode(cl)
With that I was then able to compile and run this:
swig -python -c++ -Wall myif.i
g++ -Wall -Wextra -shared -o _module.so myif_wrap.cxx -I/usr/include/python2.7 -lpython2.7
python mycl.py
200.0
10
Exactly what you'd hope to see from that test.
Embedding the Python in the application:
Next up we need to implement a real version of your mymain.cc. I've put together a sketch of what it might look like:
#include <iostream>
#include "myif.h"
#include <Python.h>
int main()
{
Py_Initialize();
const double input = 5.0;
PyObject *main = PyImport_AddModule("__main__");
PyObject *dict = PyModule_GetDict(main);
PySys_SetPath(".");
PyObject *module = PyImport_Import(PyString_FromString("mycl"));
PyModule_AddObject(main, "mycl", module);
PyObject *instance = PyRun_String("mycl.MyCl()", Py_eval_input, dict, dict);
PyObject *result = PyObject_CallMethod(instance, "myfunc", (char *)"(O)" ,PyFloat_FromDouble(input));
PyObject *error = PyErr_Occurred();
if (error) {
std::cerr << "Error occured in PyRun_String" << std::endl;
PyErr_Print();
}
double ret = PyFloat_AsDouble(result);
std::cout << ret << std::endl;
Py_Finalize();
return 0;
}
It's basically just standard embedding Python in another application. It works and gives exactly what you'd hope to see also:
g++ -Wall -Wextra -I/usr/include/python2.7 main.cc -o main -lpython2.7
./main
200.0
10
10
The final piece of the puzzle is being able to convert the PyObject* that you get from creating the instance in Python into a myif *. SWIG again makes this reasonably straightforward.
First we need to ask SWIG to expose its runtime in a headerfile for us. We do this with an extra call to SWIG:
swig -Wall -c++ -python -external-runtime runtime.h
Next we need to re-compile our SWIG module, explicitly giving the table of types SWIG knows about a name so we can look it up from within our main.cc. We recompile the .so using:
g++ -DSWIG_TYPE_TABLE=myif -Wall -Wextra -shared -o _module.so myif_wrap.cxx -I/usr/include/python2.7 -lpython2.7
Then we add a helper function for converting the PyObject* to myif* in our main.cc:
#include "runtime.h"
// runtime.h was generated by SWIG for us with the second call we made
myif *python2interface(PyObject *obj) {
void *argp1 = 0;
swig_type_info * pTypeInfo = SWIG_TypeQuery("myif *");
const int res = SWIG_ConvertPtr(obj, &argp1,pTypeInfo, 0);
if (!SWIG_IsOK(res)) {
abort();
}
return reinterpret_cast<myif*>(argp1);
}
Now this is in place we can use it from within main():
int main()
{
Py_Initialize();
const double input = 5.5;
PySys_SetPath(".");
PyObject *module = PyImport_ImportModule("mycl");
PyObject *cls = PyObject_GetAttrString(module, "MyCl");
PyObject *instance = PyObject_CallFunctionObjArgs(cls, NULL);
myif *inst = python2interface(instance);
std::cout << inst->myfunc(input) << std::endl;
Py_XDECREF(instance);
Py_XDECREF(cls);
Py_Finalize();
return 0;
}
Finally we have to compile main.cc with -DSWIG_TYPE_TABLE=myif and this gives:
./main
11
Minimal example; note that it is complicated by the fact that Base is not pure virtual. There we go:
baz.cpp:
#include<string>
#include<boost/python.hpp>
using std::string;
namespace py=boost::python;
struct Base{
virtual string foo() const { return "Base.foo"; }
// fooBase is non-virtual, calling it from anywhere (c++ or python)
// will go through c++ dispatch
string fooBase() const { return foo(); }
};
struct BaseWrapper: Base, py::wrapper<Base>{
string foo() const{
// if Base were abstract (non-instantiable in python), then
// there would be only this->get_override("foo")() here
//
// if called on a class which overrides foo in python
if(this->get_override("foo")) return this->get_override("foo")();
// no override in python; happens if Base(Wrapper) is instantiated directly
else return Base::foo();
}
};
BOOST_PYTHON_MODULE(baz){
py::class_<BaseWrapper,boost::noncopyable>("Base")
.def("foo",&Base::foo)
.def("fooBase",&Base::fooBase)
;
}
bar.py
import sys
sys.path.append('.')
import baz
class PyDerived(baz.Base):
def foo(self): return 'PyDerived.foo'
base=baz.Base()
der=PyDerived()
print base.foo(), base.fooBase()
print der.foo(), der.fooBase()
Makefile
default:
g++ -shared -fPIC -o baz.so baz.cpp -lboost_python `pkg-config python --cflags`
And the result is:
Base.foo Base.foo
PyDerived.foo PyDerived.foo
where you can see how fooBase() (the non-virtual c++ function) calls virtual foo(), which resolves to the override regardless whether in c++ or python. You could derive a class from Base in c++ and it would work just the same.
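(To illustrate that last point, a quick sketch of such a C++-side subclass, reusing the Base from baz.cpp above:)
struct CppDerived: Base {
    string foo() const { return "CppDerived.foo"; }
};
// CppDerived().fooBase() then returns "CppDerived.foo" through the same virtual dispatch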
EDIT (extracting c++ object):
PyObject* obj; // given
py::object pyObj(obj); // wrap as boost::python object (cheap)
py::extract<Base> ex(pyObj);
if(ex.check()){ // types are compatible
Base& b=ex(); // get the wrapped object
// ...
} else {
// error
}
// shorter, throws when the conversion is not possible
Base &b=py::extract<Base>(py::object(obj))();
Construct py::object from PyObject* and use py::extract to query whether the python object matches what you are trying to extract: PyObject* obj; py::extract<Base> extractor(py::object(obj)); if(!extractor.check()) /* error */; Base& b=extractor();
Quoting http://wiki.python.org/moin/boost.python/Inheritance
"Boost.Python also allows us to represent C++ inheritance relationships so that wrapped derived classes may be passed where values, pointers, or references to a base class are expected as arguments."
There are examples of virtual functions there, so that solves the first part (the one with class MyCl(myif)).
For specific examples of doing this, see http://wiki.python.org/moin/boost.python/OverridableVirtualFunctions
For the line myif c = MyCl(); you need to expose your Python module to C++. There are examples here: http://wiki.python.org/moin/boost.python/EmbeddingPython
Based upon the (very helpful) answer by Eudoxos I've taken his code and extended it such that there is now an embedded interpreter, with a built-in module.
This answer is the Boost.Python equivalent of my SWIG based answer.
The headerfile myif.h:
class myif {
public:
virtual float myfunc(float a) const { return 0; }
virtual ~myif() {}
};
Is basically as in the question, but with a default implementation of myfunc and a virtual destructor.
For the Python implementation, mycl.py, I have basically the same as in the question:
import myif
class MyCl(myif.myif):
def myfunc(self,a):
return a*2.0
This then leaves mymain.cc, most of which is based upon the answer from Eudoxos:
#include <boost/python.hpp>
#include <iostream>
#include "myif.h"
using namespace boost::python;
// This is basically Eudoxos's answer:
struct MyIfWrapper: myif, wrapper<myif>{
float myfunc(float a) const {
if(this->get_override("myfunc"))
return this->get_override("myfunc")(a);
else
return myif::myfunc(a);
}
};
BOOST_PYTHON_MODULE(myif){
class_<MyIfWrapper,boost::noncopyable>("myif")
.def("myfunc",&myif::myfunc)
;
}
// End answer by Eudoxos
int main( int argc, char ** argv ) {
try {
// Tell python that "myif" is a built-in module
PyImport_AppendInittab("myif", initmyif);
// Set up embedded Python interpreter:
Py_Initialize();
object main_module = import("__main__");
object main_namespace = main_module.attr("__dict__");
PySys_SetPath(".");
main_namespace["mycl"] = import("mycl");
// Create the Python object with an eval()
object obj = eval("mycl.MyCl()", main_namespace);
// Find the base C++ type for the Python object (from Eudoxos)
const myif &b=extract<myif>(obj)();
std::cout << b.myfunc(5) << std::endl;
} catch( error_already_set ) {
PyErr_Print();
}
}
The key part that I've added here, above and beyond the "how do I embed Python using Boost.Python?" and "how do I extend Python using Boost.python?" (which was answered by Eudoxos) is the answer to the question "How do I do both at once in the same program?". The solution to this lies with the PyImport_AppendInittab call, which takes the initialisation function that would normally be called when the module is loaded and registers it as a built-in module. Thus when mycl.py says import myif it ends up importing the built-in Boost.Python module.
Take a look at Boost Python, that is the most versatile and powerful tool to bridge between C++ and Python.
http://www.boost.org/doc/libs/1_48_0/libs/python/doc/
There's no real way to interface C++ code directly with Python.
SWIG does handle this, but it builds its own wrapper.
One alternative I prefer over SWIG is ctypes, but to use this you need to create a C wrapper.
For the example:
// myif.h
class myif {
public:
virtual float myfunc(float a);
};
Build a C wrapper like so:
extern "C" __declspec(dllexport) float myif_myfunc(myif* m, float a) {
return m->myfunc(a);
}
Since you are building with C++, extern "C" gives the function C linkage so its name is not mangled and can easily be called through the DLL, and __declspec(dllexport) makes the function visible to users of the DLL.
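The Python code below also calls a lib.myif_newmyif constructor wrapper that isn't shown above; a sketch of what those extra C wrappers might look like (the concrete class myif_impl here is made up purely for illustration):
// hypothetical concrete implementation; substitute your real class
class myif_impl : public myif {
public:
    virtual float myfunc(float a) { return a * 2; }  // placeholder behaviour
};

extern "C" __declspec(dllexport) myif* myif_newmyif(void* /*unused*/) {
    return new myif_impl();
}

extern "C" __declspec(dllexport) void myif_delmyif(myif* m) {
    // a virtual destructor on myif would make deleting through the base pointer safe
    delete m;
}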
In Python:
from ctypes import *
from os.path import dirname
dlldir = dirname(__file__) # this strips it to the directory only
dlldir = dlldir.replace( '\\', '\\\\' ) # Replaces \ with \\ in dlldir (str.replace returns a new string)
lib = cdll.LoadLibrary(dlldir+'\\myif.dll') # Loads from the full path to your module.
# Just an alias for the void pointer for your class
c_myif = c_void_p
# This tells Python how to interpret the return type and arguments
lib.myif_myfunc.argtypes = [ c_myif, c_float ]
lib.myif_myfunc.restype = c_float
class MyCl(object): # with ctypes there is no Python-side myif class to inherit from
    def __init__(self):
        # Assume you wrapped a constructor for myif in C
        self.obj = lib.myif_newmyif(None)
    def myfunc(self, a):
        return lib.myif_myfunc(self.obj, a)
While SWIG does all this for you, there's little room for you to modify things as you please without getting frustrated at all the changes you have to redo when you regenerate the SWIG wrapper.
One issue with ctypes is that it doesn't handle STL structures, since it's made for C. SWIG does handle this for you, but you may be able to wrap it yourself in the C. It's up to you.
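As a hedged illustration of wrapping an STL type yourself, assuming a hypothetical std::string-returning getName() method on the C++ side, the C wrapper could hand ctypes a plain character buffer instead (all names here are invented):
#include <cstring>
#include <string>

// copies the std::string into a caller-supplied buffer so ctypes only ever sees C types
extern "C" __declspec(dllexport) int myif_getname(myif* m, char* buf, int buflen)
{
    const std::string s = m->getName();  // getName() is hypothetical
    if (buflen <= 0) return 0;
    std::strncpy(buf, s.c_str(), buflen - 1);
    buf[buflen - 1] = '\0';
    return static_cast<int>(s.size());   // lets the caller detect truncation
}
On the Python side you would pass a buffer created with ctypes.create_string_buffer() as buf.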
Here's the Python doc for ctypes:
http://docs.python.org/library/ctypes.html
Also, the built dll should be in the same folder as your Python interface (why wouldn't it be?).
I am curious though, why would you want to call Python from inside C++ instead of calling the C++ implementation directly?