It seems to me that I saw something weird being done in a Boost library, and it ended up being exactly what I'm trying to do now. Can't find it though...
I want to create a macro that takes a signature and turns it into a function pointer:
void f(int,int) {}
...
void (*x)(int,int) = WHAT( (f(int,int)) );
x(2,4); // calls f()
I especially need this to work with member function pointers so that WHAT takes two params:
WHAT(ClassType, (f(int,int))); // results in static_cast<void (ClassType::*)(int,int)>(&ClassType::f)
It's not absolutely necessary in order to solve my problem, but it would make things a touch nicer.
This question has nothing, per se, to do with function pointers. What needs to be done is to use the preprocessor to take "f(int,int)" and turn it into two different parts:
'f'
'(int,int)'
Why:
I've solved the problem brought up here: Generating Qt Q_OBJECT classes pragmatically
I've started a series of articles explaining how to do it:
http://crazyeddiecpp.blogspot.com/2011/01/quest-for-sane-signals-in-qt-step-1.html
http://crazyeddiecpp.blogspot.com/2011/01/quest-for-sane-signals-in-qt-step-2.html
The signature must be evaluated from, and match exactly, the "signal" that the user is attempting to connect with. Qt users are used to expressing this as SIGNAL(fun(param,param)), so something like connect_static(SIGINFO(object,fun(param,param)), [](int,int){}) wouldn't feel too strange.
In order to construct the signature I need to be able to pull it out of the arguments supplied. There's enough information to get the member function address (using C++0x's decltype) and fetch the signature in order to generate the appropriate wrapper but I can't see how to get it out. The closest I can come up with is SIGINFO(object, fun, (param,param)), which is probably good enough but I figured I'd ask here before considering it impossible to get the exact syntax I'd prefer.
What you are trying to do is impossible using the standard preprocessor, unfortunately. There are a couple of reasons:
It is impossible to split parameters passed to a macro on a custom character. They have to be comma-delimited. Otherwise that could solve your problem instantly.
You cannot use the preprocessor to define something that is not an identifier. Otherwise you could use double expansion where ( and ) are defined as ,, split the arguments as if they had been passed as f, int, int, and then process them as variadic arguments.
A function pointer definition in C++ does not let you deduce the name given to the defined type, unfortunately.
Going even further, even if you manage to create a function pointer, the code won't work for methods because in order to invoke a method, you need to have two pointers - pointer to the method and to the class instance. This means you have to have some wrapper around this stuff.
That is why Qt uses its own tools such as moc to generate glue code.
The closest thing you might have seen in Boost is probably the Signals, Bind, and Lambda libraries. It is ironic that those libraries are much more powerful than what you are trying to achieve, yet they won't let you achieve it the way you want. For example, even if you could do what you want with the syntax you want, you would not be able to "connect" a slot to a "signal" if the signal has a different signature. The Boost libraries I mentioned above do allow that. For example, if your "slot" expects more parameters than the "signal" provides, you can bind other objects to be passed when the "slot" is invoked. Those libraries can also drop extra parameters if the "slot" does not expect them.
I'd say the best way from a C++ perspective today is to use the Boost.Signals approach to implement event handling in GUI libraries. Qt doesn't use it for a number of reasons. First, it started back in the '90s, when C++ was not that fancy. Plus, they have to parse your code in order to work with "slots" and "signals" in the graphical designer.
It seems to me that instead of using macros or, even worse, non-standard tools on top of C++ to generate code like the following:
void (*x)(int,int) = WHAT( (f(int,int)) );
It would be much better to do something like this:
void f (int x, int y, int z);
boost::function<void (int, int)> x = boost::bind (&f, _1, _2, 3);
x (1, 2);
The above will work for both functions and methods.
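To illustrate the member-function case, here is a minimal sketch; MyClass and its member f are hypothetical names used only for this example:

#include <boost/function.hpp>
#include <boost/bind.hpp>

struct MyClass {                     // hypothetical class for illustration
    void f(int x, int y, int z) { /* ... */ }
};

int main() {
    MyClass obj;
    // The instance is bound as the implicit first argument; z is fixed to 3.
    boost::function<void (int, int)> x = boost::bind(&MyClass::f, &obj, _1, _2, 3);
    x(1, 2);                         // calls obj.f(1, 2, 3)
}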
Related
I'm looking to make a general, lazy evaluation-esque procedure to streamline my code.
Right now, I have the ability to speed up the execution of mathematical functions - provided that I pre-process it by calling another method first. More concretely, given a function of the type:
const Eigen::Matrix<double, -1, -1> function_name(const Eigen::Matrix<double, -1, -1>& input)
I can pass this into another function, g, which will produce a new version of function_name g_p, which can be executed faster.
I would like to abstract all this busy-work away from the end-user. Ideally, I'd like to make a class such that when any function f matching function_name's method signature is called on any input (say, x), the following happens instead:
The class checks if f has been called before.
If it hasn't, it calls g(f), followed by g_p(x).
If it has, it just calls g_p(x)
This is tricky for two reasons. The first is that I don't know how to get a reference to the current method, or if that's even possible, and pass it to g. There might be a way around this, but passing one function to the other would be simplest/cleanest for me.
The second, bigger issue is how to force the calls to g. I have read about the execute-around pattern, which almost works for this purpose, except that, unless I'm understanding it wrong, it would be impossible to reference f in the surrounding function calls.
Is there any way to cleanly implement my dream class? I ideally want to eventually generalize beyond the type of function_name (perhaps with templates), but can take this one step at a time. I am also open to other solutions that get the same functionality.
I don't think a "perfect" solution is possible in C++, for the following reasons.
If the calling site says:
result = object->f(x);
as compiled, this will call into the unoptimized version. At this point you're pretty much hamstrung, since there's no way in C++ to change where a function call goes: that's determined at compile time for static linkage, and at runtime via vtable lookup for virtual (dynamic) linkage. Whatever the case, it's not something you can directly alter. Other languages do allow this, e.g. Lua, and rather ironically C++'s great-grandfather BCPL also permits it. However, C++ doesn't.
TL;DR: to get a workable solution to this, you need to modify either the called function or every calling site that uses one of these.
Long answer: you'll need to do one of two things. You can either offload the problem to the called class and make all functions look something like this:
const <return_type> myclass::f(<param_type> x)
{
    static auto unoptimized = [](<param_type> x) -> <return_type>
    {
        // Do the optimizable heavy lifting here
        return whatever;
    };
    static auto optimized = g(unoptimized);
    return optimized(x);
}
However I very strongly suspect this is exactly what you don't want to do, because assuming the end-user you're talking about is the author of the class, this fails your requirement to offload this from the end-user.
However, you can also solve it by using a template, but that requires modification to every place you call one of these. In essence you encapsulate the above logic in a template function, replacing unoptimized with the bare class member, and leaving most everything else alone. Then you just call the template function at the calling site, and it should work.
This does have the advantage of a relatively small change at the calling site:
result = object->f(x);
becomes either:
result = optimize(object->f, x);
or:
result = optimize(object->f)(x);
depending on how you set the optimize template up. It also has the advantage of no changes at all to the class.
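A rough sketch of what that optimize template might look like follows; it assumes g() takes a callable and returns an optimized std::function of the same signature, and the names Obj, Arg, and the cache layout are mine, not an established API:

#include <functional>
#include <utility>
#include <vector>

// Assumed from the question: g() takes the slow callable, returns a faster one.
template <class Fn>
Fn g(Fn slow);

template <class Obj, class Arg>
Arg optimize(Obj* object, Arg (Obj::*f)(const Arg&), const Arg& x)
{
    typedef Arg (Obj::*MemFn)(const Arg&);
    typedef std::function<Arg(const Arg&)> Fn;

    // One optimized wrapper per member function seen so far. Member pointers
    // support ==, so a linear scan stands in for a map here.
    static std::vector<std::pair<MemFn, Fn> > cache;
    for (auto& entry : cache)
        if (entry.first == f)
            return entry.second(x);

    // First call: wrap the member so g() sees an ordinary callable.
    // Caveat: the wrapper is bound to this particular object; extend the key
    // with the object pointer if calls on different instances matter.
    Fn unoptimized = [object, f](const Arg& a) { return (object->*f)(a); };
    cache.push_back(std::make_pair(f, g(unoptimized)));
    return cache.back().second(x);
}

A call site would then read result = optimize(object, &MyClass::f, x); the slightly different spelling is needed because object->f on its own is not something C++ lets you pass around.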
So I guess it comes down to where you want to make the changes.
Yet another choice. Would it be an option to take the class as authored by the end user, and pass the cpp and h files through a custom pre-processor? That could go through the class and automatically make the changes outlined above, which then yields the advantage of no change needed at the calling site.
Recently I found the Parameter library in Boost. Honestly, I didn't understand the reason why this is a part of Boost. When there is a need to pass several parameters to a function, you can make a structure from them, like:
struct Parameters
{
    Parameters() : strParam("DEFAULT"), intParam(0) {}
    std::string strParam;
    int intParam;
};

void foo(const Parameters & params)
{
}

Parameters params;
params.intParam = 42;
foo(params);
This is very easy to write and understand.
Now an example using Boost.Parameter:
BOOST_PARAMETER_NAME(param1)
BOOST_PARAMETER_NAME(param2)

BOOST_PARAMETER_FUNCTION(
    (void),              // 1. parenthesized return type
    someCompexFunction,  // 2. name of the function template
    tag,                 // 3. namespace of tag types
    (optional            //    optional parameters, with defaults
        (param1, *, 42)
        (param2, *, std::string("default"))
    )
)
{
    std::cout << param1 << param2;
}
someCompexFunction(param1_=42);
I think it's really complex, and the benefit is not that significant.
But now I see that some of the Boost libraries (Asio) use this technique.
Is it considered a best practice to use this library to pass many arguments?
Or maybe there are real benefits of using this library that I don't see?
Would you recommend using this library in the project?
Your technique requires creating a lot of temporaries (given enough parameters) and will be rather verbose in some cases. Something that is even more tricky is documentation. If you go down the route of configuration structs, you will have two places where you need to explain your parameters. Documenting Boost.Parameter functions is easy in comparison.
It also keeps the verbosity down and allows me to reuse arguments for whole families of functions instead of composing a new configuration carrier over and over again.
If you don't like the library, don't use it. It has several other drawbacks you haven't mentioned (heavy includes, high compile times).
Also, why not just provide the best of two worlds? One function using Boost.Parameters and another using configuration structs, where both dispatch on a common implementation. Manage headers correctly and the "don't pay for what you don't use" promise will be kept. The price is maintainability. But you can always deprecate one interface if your users don't like it.
Well, I don't use this library, but the key is that you can pass parameters by name.
Imagine that you have a function with a lot of parameters, and in most cases you only want to use a few. Maybe not always the same few, so putting these in front of the list (so the others can be supplied as defaults) won't help. That's where the "named parameter" stuff comes in: you just give the names and values of the parameters you want to pass, in any order you like, and the others will be defaulted. You don't even have to know all the possible parameters; a later version of the function can add new parameters without breaking anything (provided the defaults for the new parameters are chosen to mimic the old behavior).
In comparison to structures, you could make a structure and initialize everything with defaults. That's pretty much how this kind of stuff works internally anyway, if I'm not mistaken: a parameter object is passed around and values are set on it before it's passed into the actual function at the end.
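For comparison, the hand-rolled version of "set only what you need, by name" usually ends up looking like the named parameter idiom; a rough sketch, reusing the names from the earlier example purely for illustration:

#include <string>
#include <iostream>

struct Args {
    int         param1;
    std::string param2;
    Args() : param1(42), param2("default") {}
    Args& set_param1(int v)                { param1 = v; return *this; }
    Args& set_param2(const std::string& v) { param2 = v; return *this; }
};

void someCompexFunction(const Args& a) {
    std::cout << a.param1 << a.param2;
}

// The call names only what differs from the defaults, in any order:
// someCompexFunction(Args().set_param2("hello").set_param1(7));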
So I have this huge tree that is basically a big switch/case with string keys and different function calls on one common object depending on the key and one piece of metadata.
Every entry basically looks like this
} else if (strcmp(key, "key_string") == 0) {
    ((class_name*)object)->do_something();
} else if ( ...
where do_something can have different invocations, so I can't just use function pointers. Also, some keys require object to be cast to a subclass.
Now, if I were to code this in a higher level language, I would use a dictionary of lambdas to simplify this.
It occurred to me that I could use macros to simplify this to something like
case_call("key_string", class_name, do_something());
case_call( /* ... */ )
where case_call would be a macro that expands to the code in the first snippet; a sketch of what such a macro could look like follows.
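One possible shape for such a macro (just a sketch; key and object come from the surrounding code, and the class/function names mirror the snippet above):

#define CASE_CALL(key_string, class_name, call) \
    } else if (strcmp(key, key_string) == 0) {  \
        ((class_name*)object)->call;

// Usage: a dummy opening branch lets every entry start with "} else if".
if (0) {
CASE_CALL("key_string", class_name, do_something())
CASE_CALL("other_key",  other_class, do_other(42))
}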
However, I am very much on the fence whether that would be considered good style. I mean, it would reduce typing work and improve the DRYness of the code, but then it really seems to abuse the macro system somewhat.
Would you go down that road, or rather type out the whole thing? And what would be your reasoning for doing so?
Edit
Some clarification:
This code is used as a glue layer between a simplified scripting API which accesses several different aspects of a C++ API as simple key-value properties. The properties are implemented in different ways in C++, though: some have getter/setter methods, some are set in a special struct. Scripting actions reference C++ objects cast to a common base class. However, some actions are only available on certain subclasses and have to be cast down.
Further down the road, I may change the actual C++ API, but for the moment, it has to be regarded as unchangeable. Also, this has to work on an embedded compiler, so boost or C++11 are (sadly) not available.
I would suggest you slightly reverse the roles. You are saying that the object is already some class that knows how to handle a certain situation, so add a virtual void handle(const char * key) to your base class and let the object check in its implementation whether the key applies to it, then do whatever is necessary.
This would not only eliminate the long if-else-if chain, but would also be more type safe and would give you more flexibility in handling those events.
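A minimal sketch of that inversion (the class and function names here are made up for illustration, and it avoids C++11 since the question rules it out):

#include <cstring>

struct handler_base {
    virtual ~handler_base() {}
    // Returns true if this object consumed the key.
    virtual bool handle(const char* key) = 0;
};

struct concrete_handler : handler_base {
    void do_something() { /* ... */ }

    virtual bool handle(const char* key) {
        if (std::strcmp(key, "key_string") == 0) {
            do_something();
            return true;
        }
        return false;
    }
};

The glue layer then simply calls object->handle(key), with no casts and no key-specific branches.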
That seems to me an appropriate use of macros. They are, after all, made for eliding syntactic repetition. However, when you have syntactic repetition, it’s not always the fault of the language—there are probably better design choices out there that would let you avoid this decision altogether.
The general wisdom is to use a table mapping keys to actions:
std::map<std::string, void(Class::*)()> table;
Then look up and invoke the action in one go:
(object->*table[key])();
Or use find to check for failure:
const auto i = table.find(key);
if (i != table.end())
    (object->*(i->second))();
else
    throw std::runtime_error(...);
But if as you say there is no common signature for the functions (i.e., you can’t use member function pointers) then what you actually should do depends on the particulars of your project, which I don’t know. It might be that a macro is the only way to elide the repetition you’re seeing, or it might be that there’s a better way of going about it.
Ask yourself: why do my functions take different arguments? Why am I using casts? If you’re dispatching on the type of an object, chances are you need to introduce a common interface.
I started using OpenGL a while ago, using GLUT. You can't pass member functions to GLUT functions (or pointers to members for that matter, though I did not explore that option very far).
I was wondering if there is a "decent" way, or what is the "most decent" way to solve this? I know you can use static member functions, but isn't there a better way?
I know there are other libraries, like SFML that are written in C++ and provide a C++ class-based interface, but I was wondering what the possibilities are concerning GLUT (freeglut to be exact).
First, GLUT is not for serious application work. It's for simple graphics demos. And for that purpose, it is fine. If you find yourself trying to do serious work in GLUT, you will find yourself spending lots of time working around its limitations. This limitation is only one of many that you will eventually encounter. GLFW, while still having this limitation (though the next version will not), is generally superior for serious application work.
Second, the "most decent" way to solve this depends on what you're doing. If you only have one window, then the correct solution is just a simple static function, which can use global pointers (or functions that return global pointers) to whatever class you're interested in.
If you have multiple windows, then what you need is a global std::map that maps from GLUT's window identifiers to pointers to some object. Then you can get which window a particular function was called from and use the map to forward that call to the particular object that represents that window.
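A minimal sketch of that arrangement (the Window class and the callback name are invented for illustration):

#include <GL/glut.h>
#include <map>

class Window {
public:
    void on_display() { /* draw this particular window */ }
};

std::map<int, Window*> g_windows;   // GLUT window id -> object

// Static trampoline registered with GLUT; forwards to the right object.
static void display_callback() {
    std::map<int, Window*>::iterator it = g_windows.find(glutGetWindow());
    if (it != g_windows.end())
        it->second->on_display();
}

// Registration sketch:
//   int id = glutCreateWindow("title");
//   g_windows[id] = new Window();
//   glutDisplayFunc(display_callback);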
Passing member functions to glut, or any other library, is easy enough. GLUT is looking for a function pointer.
Let Controller be a class with a member function OnKeyPress that we want to send into glutKeyboardFunc. You might first be tempted to try something like
glutKeyboardFunc(&Controller::OnKeyPress);
Here we are passing a member function pointer; however, this is incorrect, since you want to invoke the member function on that class object. In C++11 you can use the new std::bind, or, if you are on an older compiler, I would recommend boost::bind. Either way the syntax is about the same.
using namespace std::placeholders; // for the _1, _2 placeholders
glutKeyboardFunc(std::bind(&Controller::OnKeyPress, &GLInput, _1, _2, _3));
From the documentation it looks like glutKeyboardFunc requires 3 parameters. First we bind the first argument, the memory address of your object, since it's a member function, and then supply 3 placeholders.
For those new to std::bind it feels odd, but for anyone who has done object-oriented code in C it's obvious: the function is really just a C function and needs the "this" pointer to the class. The bind would not be necessary if the callback were a simple function.
I'm in the process of writing a kind of runtime system/interpreter, and one of things that I need to be able to do is call c/c++ functions located in external libraries.
On Linux I'm using the dlfcn.h functions to open a library and call a function located within. The problem is that, when using dlsym(), the function pointer returned needs to be cast to an appropriate type before being called, so that the function arguments and return type are known; however, if I'm calling some arbitrary function in a library, then obviously I will not know this prototype at compile time.
So what I'm asking is: is there a way to call a dynamically loaded function, pass it arguments, and retrieve its return value without knowing its prototype?
So far I've come to the conclusion there is no easy way to do this, but some workarounds that I've found are:
Ensure all the functions I want to load have the same prototype, and provide some sort of mechanism for these functions to retrieve parameters and return values. This is what I am doing currently.
Use inline asm to push the parameters onto the stack, and to read the return value. I really want to steer clear of doing this if possible!
If anyone has any ideas then it would be much appreciated.
Edit:
I have now found exactly what I was looking for:
http://sourceware.org/libffi/
"A Portable Foreign Function Interface Library"
(Although I’ll admit I could have been clearer in the original question!)
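For reference, here is a minimal sketch of what calling through libffi looks like, using pow from libm as the target; the library name libm.so.6 is platform-specific and assumed, error checking is omitted, and it links with -lffi -ldl:

#include <ffi.h>
#include <dlfcn.h>
#include <cstdio>

int main() {
    void* lib = dlopen("libm.so.6", RTLD_LAZY);   // assumed library name
    void* sym = dlsym(lib, "pow");

    // Describe the call at runtime: default ABI, two double args, double return.
    ffi_cif cif;
    ffi_type* arg_types[2] = { &ffi_type_double, &ffi_type_double };
    double base = 2.0, exponent = 10.0, result = 0.0;
    void* arg_values[2] = { &base, &exponent };

    if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2, &ffi_type_double, arg_types) == FFI_OK) {
        ffi_call(&cif, FFI_FN(sym), &result, arg_values);
        std::printf("pow(2, 10) = %f\n", result);  // prints 1024.000000
    }
    dlclose(lib);
    return 0;
}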
What you are asking is whether C/C++ supports reflection for functions (i.e. getting information about their type at runtime). Sadly, the answer is no.
You will have to make the functions conform to a standard contract (as you said you were doing), or start implementing mechanics for trying to call functions at runtime without knowing their arguments.
Since having no knowledge of a function makes it impossible to call it, I assume your interpreter/"runtime system" at least has some user input or similar it can use to deduce that it's trying to call a function that will look like something taking those arguments and returning something not entirely unexpected. That lookup is hard to implement in itself, even with reflection and a decent runtime type system to work with. Mix in calling conventions, linkage styles, and platforms, and things get nasty real soon.
Stick to your plan, enforce a well-defined contract for the functions you load dynamically, and hopefully make do with that.
Can you add a dispatch function to the external libraries, e.g. one that takes a function name and N (optional) parameters of some sort of variant type and returns a variant? That way the dispatch function prototype is known. The dispatch function then does a lookup (or a switch) on the function name and calls the corresponding function.
Obviously it becomes a maintenance problem if there are a lot of functions.
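To make the idea concrete, a sketch with a deliberately tiny variant type (all names here are invented):

#include <cstring>

struct variant {
    enum Type { INT, DOUBLE, STRING } tag;
    union { int i; double d; const char* s; };
};

// The only symbol the interpreter ever resolves with dlsym().
extern "C" variant dispatch(const char* name, const variant* args, int nargs)
{
    variant r;
    r.tag = variant::INT;
    r.i = 0;
    if (std::strcmp(name, "add") == 0 && nargs == 2) {
        r.i = args[0].i + args[1].i;   // one branch (or table entry) per function
    }
    return r;
}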
I believe the Ruby FFI library achieves what you are asking. It can call functions in external dynamically linked libraries without specifically linking them in.
http://wiki.github.com/ffi/ffi/
You probably can't use it directly in your scripting language, but perhaps the ideas are portable.
--
Brad Phelan
http://xtargets.heroku.com
I'm in the process of writing a kind of runtime system/interpreter, and one of things that I need to be able to do is call c/c++ functions located in external libraries.
You can probably check how Tcl and Python do that for examples. If you are familiar with Perl, you can also check Perl XS.
The general approach is to require an extra gateway library sitting between your interpreter and the target C library. From my experience with Perl XS, the main reasons are memory management/garbage collection and the C data types, which are hard or impossible to map directly onto the interpreter's language.
So what I’m asking is, is there a way to call a dynamically loaded function and pass it arguments, and retrieve it’s return value without knowing it’s prototype?
None known to me.
Ensure all the functions I want to load have the same prototype, and provide some sort mechanism for these functions to retrieve parameters and return values. This is what I am doing currently.
This is what another team on my project is doing too. They have standardized the API for external plug-ins on something like this:
typedef std::list< std::string > string_list_t;
string_list_t func1(string_list_t stdin, string_list_t &stderr);
Common tasks for the plug-ins are to perform transformation, mapping, or expansion of the input, often using an RDBMS.
Previous versions of the interface grew unmaintainable over time, causing problems for customers, product developers, and 3rd-party plug-in developers alike. The liberal use of std::string is acceptable because the plug-ins are called relatively seldom (and even then the overhead is peanuts compared to the SQL used all over the place). The argument stdin is populated with input depending on the plug-in type. A plug-in call is considered failed if any string inside the output parameter stderr starts with 'E:' ('W:' is for warnings; the rest is silently ignored and thus can be used for plug-in development/debugging).
dlsym is used only once, on a function with a predefined name, to fetch from the shared library an array with the function table (public function name, type, pointer, etc.).
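That convention might look roughly like this (a sketch with invented names): the library exports a single well-known symbol that hands back its function table.

extern "C" {
    typedef struct {
        const char* name;   // public name of the plug-in function
        int         type;   // plug-in type / calling-convention tag
        void*       ptr;    // pointer to the actual function
    } plugin_entry;

    // Resolved once with dlsym(handle, "get_plugin_table");
    // the returned array is terminated by an entry with a null name.
    const plugin_entry* get_plugin_table(void);
}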
My solution is to define a generic proxy function that converts the dynamic function to a uniform prototype, something like this:
#include <string>
#include <functional>

using result = std::function<std::string(std::string)>;

template <class F>
result proxy(F func) {
    // some type-traits technology based on func's type
}
In the user-defined file, you must add a definition to do the conversion:
double foo(double a) { /*...*/ }
auto local_foo = proxy(foo);
In your runtime system/interpreter, you can then use dlsym to look up this foo function. It is the user-defined function foo's responsibility to do the calculation.
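As one concrete illustration of the conversion (a sketch only, not what the general type-traits version would produce), here is what proxy could do for a double(double) function:

#include <string>
#include <functional>

using result = std::function<std::string(std::string)>;

// Wrap a double(double) function so it reads and writes strings.
result proxy(double (*func)(double)) {
    return [func](std::string in) {
        double value = std::stod(in);           // decode the argument
        return std::to_string(func(value));     // encode the return value
    };
}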