so basically I'm trying to find a way to use C++ functions in Lua that are not lua_CFunctions (they don't return an int and take a lua_State as a parameter). Basically your regular old C++ function. The catch, though, is I'm trying to find a way to do it without writing a dedicated lua_CFunction for each one (so basically imagine I already have a program or a bunch of functions in C++ that I want to use in Lua, and I don't want to have to write a new function for each of them).
So, say I have a very simple C++ function:
static int doubleInt(int a) {
return a*2;
}
(with or without the static, it shouldn't(?) matter).
Say I want to use this function in Lua by calling doubleInt(10) in a lua script. Is there a way to do this without writing a separate
static int callFunc(lua_State *L) {
//do stuff to make the function work in lua
}
for every individual function? So something along the lines of what luaBind does with their def() function (and I know it sucks, but I can't really use a separate dedicated binding library; have to write my own).
I know I have to write a class with templates for this but I don't even have the slightest idea about how to go about getting the function in Lua. I don't think there is a way in C++ to automatically generate a custom function (presumably at compile time) - that would be amazing - so I don't even know where to start.
This is a very open-ended question.
I have been working on a lua binding library recently, so I can explain how I did this, but there are many ways you could do it.
You didn't tag this question C++11. I'm going to assume, however, that you are using C++11. If not, then it is extremely difficult, and I would say not at all practical, to roll your own, especially if you don't already know enough about boost::mpl to have some idea how to do it. You should definitely just use luabind in that case.
The first thing you need to do is create some basic infrastructure that tells you how to convert C++ types to the corresponding Lua types and back.
IMO the best way to do this is using a type trait, and not one massive overloaded function. So, you will define a primary template:
namespace traits {
template <typename T>
struct push;
template <>
struct push<int> {
static void to_stack(lua_State * L, int x) { lua_pushinteger(L, x); }
};
template <>
struct push<double> {
static void to_stack(lua_State * L, double d) { lua_pushnumber(L, d); }
};
...
} // end namespace traits
Etc. You probably also want to specialize it for std::string and things like that.
Then you can make a generic push function like this:
template <typename T>
void push(lua_State * L, const T & t) {
traits::push<T>::to_stack(L, t);
}
The advantage here is that implicit conversions are not considered when you call push. Either the type you passed exactly matches something you defined the trait for, or it fails. And you can't get ambiguous overload problems between double and int etc., which can be a big pain in the butt.
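For instance, assuming only the two specializations above exist, the calls behave like this (purely illustrative):
push(L, 42);       // ok: selects traits::push<int>
push(L, 3.14);     // ok: selects traits::push<double>
// push(L, 3.14f); // compile error: traits::push<float> was never defined,
//                 // rather than silently converting the float to double or int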
Then, you have to do the same thing for read, so you have a trait that tells you how to read values of a given type off the stack. Your read technique needs to signal failures somehow, you can decide if that should be using exceptions or a different technique.
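For example, a read trait mirroring the push trait might look like this; it is only a sketch, and the from_stack name and the exception-based error signalling are choices made here, not requirements:
#include <stdexcept>

namespace traits {
  template <typename T>
  struct read;

  template <>
  struct read<int> {
    static int from_stack(lua_State * L, int index) {
      if (!lua_isnumber(L, index)) {
        throw std::runtime_error("expected a number");
      }
      return static_cast<int>(lua_tointeger(L, index));
    }
  };
  // ... specializations for double, std::string, etc.
} // end namespace traits

template <typename T>
T read(lua_State * L, int index) {
  return traits::read<T>::from_stack(L, index);
}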
Once you have this, you can try to make an adapt template that will take an arbitrary function pointer and try to adapt it into a lua_CFunction that does roughly the same thing.
Basically, you want to use variadic templates so that you can specialize against all the parameters of the function pointer. You pass those types one by one to your read method, and use an index sequence to read from the correct stack positions. You try to read them all, and if you can do it without errors, then you can call the target function, and then you return its results.
If you want to also push generic C++ objects back as the return value, then you can call your push function at the end.
First, to help, you need an "index sequence" facility. If you are in C++14 you can use std::make_integer_sequence, if not then you have to roll your own. Mine looks like this:
namespace detail {
/***
* Utility for manipulating lists of integers
*/
template <std::size_t... Ss>
struct SizeList {
static constexpr std::size_t size = sizeof...(Ss);
};
template <typename L, typename R>
struct Concat;
template <std::size_t... TL, std::size_t... TR>
struct Concat<SizeList<TL...>, SizeList<TR...>> {
typedef SizeList<TL..., TR...> type;
};
/***
* Count_t<n> produces a sizelist containing numbers 0 to n-1.
*/
template <std::size_t n>
struct Count {
typedef
typename Concat<typename Count<n - 1>::type, SizeList<n - 1>>::type type;
};
template <>
struct Count<0> {
typedef SizeList<> type;
};
template <std::size_t n>
using Count_t = typename Count<n>::type;
} // end namespace detail
Here's what your adapt class might look like:
// Primary template
template <typename T, T>
class adapt;
// Specialization for C++ functions: int (lua_State *, ...)
template <typename... Args, int (*target_func)(lua_State * L, Args...)>
class adapt<int (*)(lua_State * L, Args...), target_func> {
template <typename T>
struct impl;
template <std::size_t... indices>
struct impl<detail::SizeList<indices...>> {
static int adapted(lua_State * L) {
try {
return target_func(L, read<Args>(L, 1 + indices)...);
} catch (std::exception & e) {
return luaL_error(L, "Caught an exception: %s", e.what());
}
}
};
public:
static int adapted(lua_State * L) {
using I = detail::Count_t<sizeof...(Args)>;
return impl<I>::adapted(L);
}
};
The real code from my implementation is here. I decided to do it without using exceptions.
This technique also works at compile-time -- since you are passing a function pointer to an arbitrary C++ function as a non-type template parameter, and the adapt template produces a lua_CFunction as a static class member, when you take a pointer to adapt<...>::adapted, it has to all be resolved at compile-time. This means that all the different bits can be inlined by the compiler.
To work around the inability to deduce the type of a non-type template parameter like a function pointer (prior to C++17), I use a macro which looks like this:
#define PRIMER_ADAPT(F) &::primer::adapt<decltype(F), (F)>::adapted
So, I can take a complicated C++ function f, and then use PRIMER_ADAPT(&f) as if it were simply a lua_CFunction.
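As a rough illustration of how this gets used (doubleInt here is a hypothetical function written in the int (lua_State *, Args...) shape that the specialization above expects, and push is the trait-based helper from earlier):
// Hypothetical target function: takes lua_State* plus the deserialized
// arguments, pushes its results, and returns how many values it returned.
int doubleInt(lua_State * L, int a) {
  push(L, a * 2);
  return 1; // one result for Lua
}

void register_doubleInt(lua_State * L) {
  // PRIMER_ADAPT(&doubleInt) resolves at compile time to a plain lua_CFunction
  lua_pushcfunction(L, PRIMER_ADAPT(&doubleInt));
  lua_setglobal(L, "doubleInt");
}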
You should realize though that making all this stuff and testing it takes a really long time. I worked on this library more than a month, and it is refactored out from some code in another project where I had refined it longer. There are also a lot of pitfalls in lua related to "automating" stack operations like this, because it doesn't do any bounds checking for you and you need to call lua_checkstack to be strictly correct.
You should definitely use one of the existing libraries unless you have a really compelling need that prevents it.
If you're not limited to standard Lua lib, you can try LuaJIT. It has ffi support. Calling external function is as simple as:
local ffi = require("ffi")
ffi.cdef[[
int printf(const char *fmt, ...);
]]
ffi.C.printf("Hello %s!", "world")
Related
I'm trying to implement a trait-based policy subsystem and I have an issue which I don't really know how to tackle (if it's even possible). I have a trait that looks like this:
template <typename ValueT, typename TagT = void, typename EnableT = void>
struct TPolicyTraits
{
static void Apply(ValueT& value) { }
};
And this trait can be specialized as such:
struct MyPolicy {};
template <typename ValueT>
struct TPolicyTraits<ValueT, MyPolicy>
{
static void Apply(ValueT& value) { /* Implementation */ }
};
I would like to register policies at compile time in a sort of linked list. The policy system would be used like this:
namespace PolicyTraits
{
template <typename ValueT, typename TagT>
using TPolicyTraitsOf = TPolicyTraits<std::decay_t<ValueT>, TagT>;
template <typename ValueT>
void Apply(ValueT&& value)
{
// todo iterate through constexpr tag list and apply policies
}
template <typename TagT>
constexpr void Enable()
{
// todo add tag to constexpr list
}
}
int main()
{
PolicyTraits::Enable<MyPolicy>();
PolicyTraits::Apply(std::string());
}
Is there any way to achieve this?
Compile time metaprogramming is for the most part pure. This means each expression's result is determined by its arguments.
There are exceptions that can get around it by using argument dependent lookup and friend functions and SFINAE in quite frankly insane ways.
Don't do that.
Build the policy traits class from its policies, don't hack around to get global state.
TL;DR technically possible, but a horrible idea. Don't do it.
Possibly an adjacent problem to the one you describe has a clean and elegant solution.
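As one example of that, here is a minimal sketch of the "build it from its policies" approach, reusing TPolicyTraits and MyPolicy from the question (PolicyList is a name made up here, and the fold expression needs C++17):
#include <string>
#include <type_traits>

template <typename... Tags>
struct PolicyList {
    template <typename ValueT>
    static void Apply(ValueT& value)
    {
        // Apply each tag's trait in order; no global registration involved.
        (TPolicyTraits<std::decay_t<ValueT>, Tags>::Apply(value), ...);
    }
};

int main()
{
    std::string s;
    PolicyList<MyPolicy>::Apply(s); // the enabled policies are spelled out at the call site
}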
No.
constexpr things may not allocate memory with new. This limitation may eventually be removed. But for now, that's how it is.
This precludes the use of any dynamically sized data type.
OTOH, constexpr allows you to create statically sized data types with computed sizes (provided that computation can be done at compile time) relatively easily. This can probably be leveraged to do something like what you want.
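As a tiny sketch of that idea (the names here are made up; the point is that the size comes from a computation, but the computation itself must be doable at compile time):
#include <cstddef>

constexpr std::size_t compute_size(std::size_t n) { return n * n; }

template <std::size_t N>
struct Table {
    int data[compute_size(N)]; // size fixed at compile time from a computation
};

Table<4> t; // data has 16 elements, decided entirely at compile time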
Except that building a type like this across several different compilation units is again not something that can be done. And that limitation is intrinsic to the C++ compile/link chain and will not be able to be removed without significantly changing it.
So, No.
I've recently been working on class member function callbacks. I need to store the callback object and member function pointer, then call the function pointer and fill in the appropriate parameters wherever the callback is needed.
I started out with something of the form typedef void (AAA::*Function)(int a, int b);, but when I need to support member functions with different parameter lists, I obviously need a more dynamic way to implement it.
class AAA
{
public:
int add(int a, int b)
{
return (a + b);
}
};
class BBB
{
public:
void setValue(std::string value)
{
this->value = value;
}
private:
std::string value;
};
class CCC
{
public:
void bind(??? p) // Binding objects and callback functions.
{
this->p = p;
}
template <class... Args>
auto callback(Args&&... args) // Autofill parameter list.
{
return this->p(std::forward<Args>(args)...);
}
private:
??? p; // How is this function pointer implemented?
};
int main()
{
AAA aaa;
BBB bbb;
CCC ccc;
ccc.bind(???(aaa, &AAA::add));
int number = ccc.callback(5, 6);
ccc.bind(???(bbb, &BBB::setValue));
ccc.callback("Hello");
system("pause");
return 0;
}
I don't know how I can implement the function pointer "???".
You basically are asking to have fully dynamically typed and checked function calls.
To have fully dynamic function calls, you basically have to throw out the C++ function call system.
This is a bad idea, but I'll tell you how to do it.
A dynamically callable object looks roughly like this:
using dynamic_function = std::function< std::any( std::vector<std::any> ) >;
where we use
struct nothing_t {};
when we want to return void.
Then you write machinery that takes an object and a specific signature, and wraps it up.
// Headers used by these snippets (and the dynamic_function alias above):
#include <any>
#include <cstddef>
#include <functional>
#include <stdexcept>
#include <type_traits>
#include <utility>
#include <vector>

// Note: in a class template the parameter pack has to come last,
// so F goes before Args here.
template<class R, class F, class...Args>
struct dynamic_function_maker {
  template<std::size_t...Is>
  dynamic_function operator()(std::index_sequence<Is...>, F&& f)const {
    return [f=std::forward<F>(f)](std::vector<std::any> args)->std::any {
      if (sizeof...(Is) != args.size())
        throw std::invalid_argument("Wrong number of arguments");
      if constexpr( std::is_same< std::invoke_result_t<F const&, Args...>, void >{} )
      {
        f( std::any_cast<Args>(args[Is])... );
        return nothing_t{};
      }
      else
      {
        return f( std::any_cast<Args>(args[Is])... );
      }
    };
  }
  dynamic_function operator()(F&& f)const {
    return (*this)(std::make_index_sequence<sizeof...(Args)>{}, std::forward<F>(f));
  }
};

template<class R, class...Args, class F>
dynamic_function make_dynamic_function(F f){
  return dynamic_function_maker<R, F, Args...>{}(std::move(f));
}
next you'll want to deduce signatures of function pointers and the like:
template<class R, class...Args>
dynamic_function make_dynamic_function(R(*f)(Args...)){
  return dynamic_function_maker<R, R(*)(Args...), Args...>{}(std::move(f));
}
template<class T, class R, class...Args>
dynamic_function make_dynamic_function(T* t, R(T::*f)(Args...)){
  auto bound = [t,f](auto&&...args)->decltype(auto){ return (t->*f)(decltype(args)(args)...); };
  return dynamic_function_maker<R, decltype(bound), Args...>{}(std::move(bound));
}
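For a rough, untested illustration of how this could be wired up with the AAA class from the question:
AAA aaa;
dynamic_function f = make_dynamic_function(&aaa, &AAA::add);
std::any result = f({ std::any(5), std::any(6) });
int number = std::any_cast<int>(result); // 11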
With that machinery in place you should be able to solve your original problem.
Again, as someone who can actually write and understand the above code, I strongly advise you not to use it. It is fragile and dangerous.
There is almost never a good reason to store callbacks in places where you don't know what arguments you are going to call them with.
There should be a different type and instance of CCC for each set of arguments you want to call it with. 99/100 times when people ask this question, they are asking the wrong question.
C++ is a type-safe language. This means that you cannot do exactly what you've outlined in your question. A pointer to a function that takes specific parameters is a different type from a pointer to a function that takes different parameters. This is fundamental to C++.
std::bind can be used to type-erase different types to the same type, but you get a single type at the end, which can be called only with a matching set of parameters (if any). It is not possible to invoke the "underlying" bound function with its real parameters. That's because the whole purpose of std::bind is to make them disappear, and inaccessible. That's what std::bind is for.
You only have a limited set of options to make this work while staying within the bounds and constraints of C++'s type-safety:
1. Make use of a void *, in some fashion. Actually, don't. Don't do that. That will just cause more problems and headaches.
2. Have a separate list and class of callbacks, one list for each set of callbacks that take a specific set of parameters. You must know, at the point of invoking a callback, what parameters you intend to pass. So, just get your callback from the appropriate list.
3. Make use of std::variant. The type-safe std::variant is C++17 only (but boost has a similar template that's mostly equivalent, and available with older C++ revisions). All your callbacks take a single std::variant parameter, a variant of every possible set of parameters (designated as a std::tuple of them, or some class/struct instance). Each callback will have to decide what to do if it receives a std::variant containing the wrong parameter value.
Alternatively, the std::variant can be a variant of different std::function types, thus shifting the responsibility of type-checking to the caller, instead of each callback.
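Here's a minimal sketch of that last variation, assuming C++17 (the signatures involved are just examples):
#include <functional>
#include <string>
#include <variant>

using Callback = std::variant<
    std::function<int(int, int)>,
    std::function<void(std::string)>>;

int main()
{
    Callback cb = std::function<int(int, int)>(
        [](int a, int b) { return a + b; });

    // The caller has to pick the alternative matching the arguments it has.
    int n = std::get<std::function<int(int, int)>>(cb)(5, 6);
    (void)n;
}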
The bottom line is that C++ is fundamentally a type-safe language; and this is precisely one of the reasons why one would choose to use C++ instead of a different language that does not have the same kind of type-safety.
But being a type-safe language, that means that you have certain limitations when it comes to juggling different types together. Specifically: you can't. Everything in C++ is always, and must be, a single type.
Jörg's answer to this question nicely delineates between "normal" templates (what the question refers to, perhaps erroneously, as generics), which operate on data, and meta templates, which operate on a program. Jörg then wisely mentions that programs are data, so it's really all one and the same. That said, meta-templates are still a different beast. Where do normal templates end and meta templates begin?
The best test I can come up with is if a template's arguments are exclusively class or typename the template is "normal" and meta otherwise. Is this test correct?
The boundary: Signature with Logical Behaviour
Well, in my opinion the boundary line is to be drawn where a template's signature stops being a simple signature yielding runtime code and becomes a definition of explicit or implicit logic, which will be executed/resolved at compile time.
Some examples and explanation
Regular templates, i.e. those with only typename, class or possibly value-type template parameters, produce executable C++ code once instantiated during compile time.
The code is (importantly) not executed at compile time.
E.g. (very simple and most likely unrealistic example, but explains the concept):
template<typename T>
T add(const T& lhs, const T& rhs) {
return(lhs + rhs);
}
template<>
std::string add<std::string>(
    const std::string& lhs,
    const std::string& rhs) {
    std::string result = lhs; // lhs is const, so append to a copy
    return result.append(rhs);
}
int main() {
    double result = add(1.0, 2.0); // 3.0
    std::string s = add<std::string>("This is ", " the template specialization...");
}
Once compiled, the primary template will be used to instantiate the above code for the type double, but will not execute it.
In addition, the specialization-template will be instantiated for the text-concatenation, but also: not executed at compile time.
This example, however:
#include <iostream>
#include <string>
#include <type_traits>
class INPCWithVoice {
void doSpeak() { ; }
};
class DefaultNPCWithVoice
: public INPCWithVoice {
public:
inline std::string doSpeak() {
return "I'm so default, it hurts... But at least I can speak...";
}
};
class SpecialSnowflake
: public INPCWithVoice {
public:
inline std::string doSpeak() {
return "WEEEEEEEEEEEH~";
}
};
class DefaultNPCWithoutVoice {
public:
inline std::string doSpeak() {
return "[...]";
}
};
template <typename TNPC>
static inline void speak(
typename std::enable_if<std::is_base_of<INPCWithVoice, TNPC>::value, TNPC>::type& npc)
{
std::cout << npc.doSpeak() << std::endl;
};
int main()
{
DefaultNPCWithVoice npc0 = DefaultNPCWithVoice();
SpecialSnowflake npc1 = SpecialSnowflake();
DefaultNPCWithoutVoice npc2 = DefaultNPCWithoutVoice();
speak<DefaultNPCWithVoice>(npc0);
speak<SpecialSnowflake>(npc1);
// speak<DefaultNPCWithoutVoice>(npc2); // Won't compile, since DefaultNPCWithoutVoice does not derive from INPCWithVoice
}
This sample shows template meta programming (and in fact a simple sample...).
What happens here, is that the 'speak'-function has a templated parameter, which is resolved at compile time and decays to TNPC, if the type passed for it is derived from INPCWithVoice.
This in turn means, if it doesn't, the template will not have a candidate for instantiation and the compilation already fails.
Look up SFINAE for this technique: http://eli.thegreenplace.net/2014/sfinae-and-enable_if/
At this point there's some logic executed at compile time, and the entire program will be fully resolved once linked into the executable/library.
Another very good example is: https://akrzemi1.wordpress.com/2012/03/19/meta-functions-in-c11/
Here you can see a template meta programming implementation of the factorial function, demonstrating that the generated code can be entirely equal to using a fixed value, if the meta-template decays to a constant.
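For reference, the compile-time factorial from that article is typically along these lines (a sketch, not the article's exact code):
template <unsigned N>
struct Factorial {
    static constexpr unsigned long long value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {
    static constexpr unsigned long long value = 1;
};

static_assert(Factorial<5>::value == 120, "resolved entirely at compile time");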
Finalizing example: Fibonacci
#include <iostream>
#include <string>
#include <type_traits>
template <intmax_t N>
static unsigned int fibonacci() {
return fibonacci<N - 1>() + fibonacci<N - 2>();
}
template <>
unsigned int fibonacci<1>() {
return 1;
}
template <>
unsigned int fibonacci<2>() {
return fibonacci<1>();
}
template <intmax_t MAX>
static void Loop() {
std::cout << "Fibonacci at " << MAX << ": " << fibonacci<MAX>() << std::endl;
Loop<MAX - 1>();
}
template <>
void Loop<0>() {
std::cout << "End" << std::endl;
}
int main()
{
Loop<10>();
}
This code implements template meta programming with scalar (non-type) template arguments only, computing the Fibonacci sequence at position N.
In addition, it shows a compile-time for loop counting down from 10 to 0!
Finally
I hope this clarifies things a bit.
Remember though: The loop and fibonacci examples instantiate the above templates for each index!!!
Consequently, there's a horrible amount of redundancy and binary bloat!!!
I'm not the expert myself and I'm sure there's a template meta programming kung fu master on stackoverflow, who can append any necessary information missing.
Attempt to differentiate and define the terms
Let's first try to roughly define the terms. I start with a hopefully good enough definition of "programming", and then repeatedly apply the "usual" meaning of meta- to it:
programming
Programming results in a program that transforms some data.
int add(int value) { return value + 42; }
I just wrote code that will result in a program which transforms some data - an integer - to some other data.
templates (meta programming)
Meta programming results in a "program" that transforms some program into another. With C++ templates, there's no tangible "program", it's an implicit part of the compiler's doings.
template<typename T>
std::pair<T,T> two_of_them(T thing) {
return std::make_pair(thing, thing);
}
I just wrote code to instruct the compiler to behave like a program that emits (code for) another program.
meta templates (meta meta programming?)
Writing a meta template results in a ""program"" that results in a "program" which results in a program. Thus, in C++, writing code that results in new templates. (From another answer of me:)
// map :: ([T] -> T) -> (T -> T) -> ([T] -> T)
// "List" "Mapping" result "type" (also a "List")
// --------------------------------------------------------
template<template<typename...> class List,
template<typename> class Mapping>
struct map {
template<typename... Elements>
using type = List<typename Mapping<Elements>::type...>;
};
That's a description of how the compiler can transform two given templates into a new template.
Possible objection
Looking at the other answers, one could argue that my example of meta programming is not "real" meta programming but rather "generic programming" because it does not implement any logic at the "meta" level. But then, can the example given for programming be considered "real" programming? It does not implement any logic either, it's a simple mapping from data to data, just as the meta programming example implements a simple mapping from code (auto p = two_of_them(42);) to code (the template "filled" with the correct type).
Thus, IMO, adding conditionals (via specialization for example) just makes a template more complex, but does not change its nature.
Your test
Definitively no. Consider:
template<typename X>
struct foo {
template<typename Y>
using type = X;
};
foo is a template with a single typename parameter, but "results" in a template (named foo::type ... just for consistency) that "results" - no matter what parameter is given - in the type given to foo (and thus in the behavior, the program implemented by that type).
Let me start answering using a definition from dictionary.com
Definition
meta -
a prefix added to the name of a subject and designating another subject that analyzes the original one but at a more abstract, higher level: metaphilosophy; metalinguistics.
a prefix added to the name of something that consciously references or comments upon its own subject or features: a meta-painting of an
artist painting a canvas.
Template programming is foremost used as a way to express relations in the type system of C++. I would argue it is therefore fair to say that template programming inherently makes use of the type system itself.
From this angle of perspective, we can rather directly apply the definition given above. The difference between template programming and meta (template-)programming lies in the treatment of template arguments and the intended result.
Template code that inspects its arguments clearly falls under the former definition, while the creation of new types from template arguments arguably falls under the latter. Note that this must also be combined with the intent of your code to operate on types.
Examples
Let's take a look at some examples:
Implementation of std::aligned_storage:
template<std::size_t Len, std::size_t Align /* default alignment not implemented */>
struct aligned_storage {
typedef struct {
alignas(Align) unsigned char data[Len];
} type;
};
This code fulfills the second condition: the type std::aligned_storage is used to create another type. We could make this even clearer by creating a wrapper
template<typename T>
using storage_of = typename std::aligned_storage<sizeof(T), alignof(T)>::type;
Now we fulfill both of the above: we inspect the argument type T to extract its size and alignment, then we use that information to construct a new type dependent on our argument. This clearly constitutes meta-programming.
The original std::aligned_storage is less clear but still quite persuasive. We provide a result in the form of a type, and both of the arguments are used to create a new type. The inspection arguably happens when the internal array type of type::data is created.
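To make the intent of storage_of concrete, a hypothetical usage (Widget and demo are just made-up names):
#include <new>

struct Widget { int x; double y; };

void demo() {
    storage_of<Widget> buffer;                // raw bytes with Widget's size and alignment
    Widget* w = new (&buffer) Widget{1, 2.0}; // construct in place
    w->~Widget();                             // destroy manually; the buffer needs no cleanup
}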
A counter-example for completeness of the argument:
template<
class T,
class Container = std::vector<T>,
class Compare = std::less<typename Container::value_type>
> class priority_queue { /*Implementation defined implementation*/ };
Here, you might have the question:
But doesn't priority queue also do type inspection, for example to retrieve the underlying Container, or to assess the type of its iterators?
And yes it does, but the goal is different. The type std::priority_queue itself does not constitute meta template programming, since it doesn't make use of the information to operate within the type system. Meanwhile the following would be meta template programming:
template<typename C>
using PriorityQueue = std::priority_queue<C>;
The intent here is to provide a type, not the operations on the data themselves. This gets clearer when we look at the changes we can make to each code.
We can change the implementation of std::priority_queue maybe to change the permitted operations. For example to support a faster access, additional operations or compact storage of the bits inside the container. But all of that is entirely for the actual runtime-functionality and not concerned with the type system.
In contrast, look at what we can do to PriorityQueue. If we were to choose a different underlying implementation, for example if we found that we like Boost.Heap better or that we link against Qt anyways and want to choose their implementation, that's a single line change. This is what meta programming is for: we make choices within the type system based on arguments formed by other types.
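For instance, that single line change might look like this (boost::heap::priority_queue is the Boost.Heap container; whether the interfaces line up for your use case would still need checking):
#include <boost/heap/priority_queue.hpp>

template<typename C>
using PriorityQueue = boost::heap::priority_queue<C>; // was std::priority_queue<C>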
(Meta-)Template signatures
Regarding your test, as we have seen above, storage_of has exclusively typename arguments but is very clearly meta programming. If you dig deeper, you will find that the type system itself is, with templates, Turing-complete. Without even needing to explicitly state any integral variables, we could for example easily replace them by recursively stacked templates (i.e. a Zermelo construction of the natural numbers)
using Z = void;
template<typename> struct Zermelo;
template<typename N> using Successor = Zermelo<N>;
A better test in my eyes would be to ask if the given implementation has runtime effects. If a template struct or alias does not contain any definition with an effect only happening at runtime, it's probably template meta programming.
Closing words
Of course normal template programming might utilize meta template programming. You can use meta template programming to determine properties of normal template arguments.
For example you might choose different output strategies (assuming some meta-programming implementation of template<class Iterator> struct is_pointer_like;):
template<class It> void generateSomeData(It outputIterator) {
    if constexpr(is_pointer_like<It>::value) {
        generateFastIntoBuffer(static_cast<typename It::pointer>(std::addressof(*outputIterator)));
    } else {
        generateOneByOne(outputIterator);
    }
}
This constitutes template programming employing the feature implemented with meta template programming.
Where do normal templates end and meta templates begin?
When the code generated by templates relies on the fundamental aspects of programming, such as branching and looping, you have crossed the line from normal templates to template meta programming.
Following the description from the article you linked:
A regular function
bool greater(int a, int b)
{
return (a > b);
}
A regular function that works with only one type (ignoring implicit conversions for the time being).
A function template (generic programming)
template <typename T>
bool greater(T a, T b)
{
return (a > b);
}
By using a function template, you have created generic code that can be applied to many types. However, depending on its usage, it may not be correct for null terminated C strings.
Template Metaprogramming
#include <cstring> // for strcmp

// Generic implementation
template <typename T>
struct greater_helper
{
    bool operator()(T a, T b) const
    {
        return (a > b);
    }
};
template <typename T>
bool greater(T a, T b)
{
    return greater_helper<T>()(a, b);
}
// Specialization for char const*
template <>
struct greater_helper<char const*>
{
    bool operator()(char const* a, char const* b) const
    {
        return (strcmp(a, b) > 0);
    }
};
Here, you have written code as if to say:
If T is char const*, use a special function.
For all other values of T, use the generic function.
Now you have crossed the threshold from normal templates to template metaprogramming. You have introduced the notion of if-else branching using templates.
I have this template method:
template <class SomeLhs, class SomeRhs,
ResultType (SomeLhs::*callback)(SomeRhs&)>
void Add() {
struct Local {
static ResultType Trampoline(BaseLhs& lhs, BaseRhs& rhs) {
return (static_cast<SomeLhs&>(lhs).*callback)(static_cast<SomeRhs&>(rhs));
}
};
_back_end.template Add<SomeLhs,SomeRhs>(&Local::Trampoline);
}
Currently I'm calling it like this:
tracker.Add<Quad, Multi, &Quad::track>();
tracker.Add<Quad, Singl, &Quad::track>();
tracker.Add<Sext, Multi, &Sext::track>();
...
It is working fine, but I don't like having to repeat the name of the class SomeLhs twice. Is there a way to avoid that?
For people who may have recognized it: yes, this is related to the BasicFastDispatcher of Alexandrescu, in particular I'm writing a front end to operate with member functions.
I don't think it can be improved much, which is unfortunate as I'd love to find a way to do this.
Template type deduction is only possible for function template arguments and you need to pass in the non-type member function pointer at compile time in order for it to be treated as a name rather than a varying quantity. Which means having to specify all the args.
i.e. you can do this:
template <class SomeLhs, class SomeRhs>
void Add(ResultType (SomeLhs::*callback)(SomeRhs&)) {
...
}
// nice syntax:
tracker.Add(&Sext::track);
// But ugly for overloaded functions, a cast is needed.
// p.s. not sure this is exactly the right syntax without compiling it.
tracker.Add((ResultType (Quad::*)(Multi&)) &Quad::track);
But then you have an actual pointer that cannot subsequently be used as a template parameter.
The only thing I think you could do is to use a macro, though it is arguable if it really improves syntax here. I'd say it probably adds an unnecessary level of obfuscation.
e.g.
#define TMFN_ARGS(C, M, P1) C, P1, &C::M
tracker.Add<TMFN_ARGS(Quad, track, Multi)>();
EDIT:
However, if the name of the function is always 'track', you could do something along the following lines:
template <typename C, typename P1>
void AddTrack() {
Add<C, P1, &C::track>();
}
tracker.AddTrack<Quad, Multi>();
Boost comes with example files in
boost_1_41_0\libs\function_types\example
called interpreter.hpp and interpreter_example.hpp
I am trying to create a situation where I have a bunch of functions of different arguments, return types, etc. all registered and recorded in a single location. Then have the ability to pull out a function and execute it with some params.
After reading a few questions here, and from a few other sources I think the design implemented in this example file is as good as I will be able to get. It takes a function of any type and allows you to call it using a string argument list, which is parsed into the right data types.
Basically it's a console command interpreter, and that's probably what it's meant to illustrate.
I have been studying the code and poking around trying to get the same implementation to accept class member functions, but have been unsuccessful so far.
I was wondering if someone could suggest the modifications needed, or maybe has worked on something similar and has some sample code.
In the example you'll see
interpreter.register_function("echo", & echo);
interpreter.register_function("add", & add);
interpreter.register_function("repeat", & repeat);
I want to do something like
test x;
interpreter.register_function("classFunc", boost::bind( &test::classFunc, &x ) );
But this breaks the any number of arguments feature.
So I am thinking some kind of auto-generated boost::bind( &test::classFunc, &x, _1, _2, _3 ... ) would be the ticket; I'm just unsure of the best way to implement it.
Thanks
I've been working on this issue and I've somewhat succeeded in making the boost interpreter accept member functions such as:
// Registers a function with the interpreter,
// will not compile if it's a member function.
template<typename Function>
typename boost::enable_if< ft::is_nonmember_callable_builtin<Function> >::type
register_function(std::string const& name, Function f);
// Registers a member function with the interpreter.
// Will not compile if it's a non-member function.
template<typename Function, typename TheClass>
typename boost::enable_if< ft::is_member_function_pointer<Function> >::type
register_function(std::string const& name, Function f, TheClass* theclass);
The enable_if statement is used to prevent the use of the wrong method at compile time. Now, what you need to understand:
It uses boost::mpl to parse through the parameter types of the callable builtin (which is basically a function pointer).
Then, it prepares a fusion vector at compile time (which is a vector that can store different objects of different types at the same time).
When the mpl is done parsing all the arguments, the "parsing" apply method will fork into the "invoke" apply method, following the templates.
The main issue is that the first argument of a member callable builtin is the object which holds the called method.
As far as I know, the mpl cannot parse the arguments of anything other than a callable builtin (i.e. a boost::bind result)
So, what needs to be done is simply add one step to the "parsing" apply, which would be to add the concerned object to the apply loop! Here it goes:
template<typename Function, typename ClassT>
typename boost::enable_if< ft::is_member_function_pointer<Function> >::type
interpreter::register_function( std::string const& name,
Function f,
ClassT* theclass)
{
typedef invoker<Function> invoker;
// instantiate and store the invoker by name
map_invokers[name]
= boost::bind(&invoker::template apply_object<fusion::nil,ClassT>
,f,theclass,_1,fusion::nil());
}
in interpreter::invoker
template<typename Args, typename TheClass>
static inline
void
apply_object( Function func,
TheClass* theclass,
parameters_parser & parser,
Args const & args)
{
typedef typename mpl::next<From>::type next_iter_type;
typedef interpreter::invoker<Function, next_iter_type, To> invoker;
invoker::apply( func, parser, fusion::push_back(args, theclass) );
}
This way, it will simply skip the first argument type and parse everything correctly.
The method can be called this way: invoker.register_function("SomeMethod",&TheClass::TheMethod,&my_object);
I am not into fusion and therefore don't see how to fix it in a simple and elegant way (I mainly don't see how member functions are supposed to work), but I worked on something similar that might be an alternative for you.
If you want to take a look at the result, it is in the Firebreath repository.
In short:
MethodConverter.h contains the main functionality
the ugly dispatch_gen.py generates that header
ConverterUtils.h contains the utility functionality like conversion to the target types
TestJSAPIAuto.h and jsapiauto_test.h contain a unit test that shows it in action
The main changes would probably involve to strip the FB-specific types, tokenize the input sequence before invoking the functors and supply your own conversion functions.
One option is to make a set of templates
template <class T, class Ret>
void register_function(const char *name, Ret (T::*fn)()) { /* boost::bind or your way to register here */ }
template <class T, class Ret, class Arg1>
void register_function(const char *name, Ret (T::*fn)(Arg1)) { /*...*/ }
And so on... Until C++0x comes with its variadic templates, you can use Boost.Preprocessor to generate the required number of templates.
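A rough sketch of that Boost.Preprocessor route (the BOOST_PP_* macros are real Boost.Preprocessor; the registration body is left as a placeholder):
#include <boost/preprocessor/repetition/enum_params.hpp>
#include <boost/preprocessor/repetition/enum_trailing_params.hpp>
#include <boost/preprocessor/repetition/repeat_from_to.hpp>

#define REGISTER_FUNCTION_OVERLOAD(z, n, unused)                              \
    template <class T, class Ret BOOST_PP_ENUM_TRAILING_PARAMS(n, class Arg)> \
    void register_function(const char *name,                                  \
                           Ret (T::*fn)(BOOST_PP_ENUM_PARAMS(n, Arg))) {      \
        /* boost::bind / registration logic goes here */                      \
    }

// Generate overloads taking 0..9 arguments.
BOOST_PP_REPEAT_FROM_TO(0, 10, REGISTER_FUNCTION_OVERLOAD, ~)
#undef REGISTER_FUNCTION_OVERLOAD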