I often have some prototype behaviour that generates output based on some design method. I template the design method, which enables a lot of functionality I need. However, sometimes the design method is given at runtime, so I'm usually required to write a huge switch statement. It usually looks like this:
enum class Operation
{
A, B
};
template<Operation O>
void execute();
template<>
void execute<Operation::A>()
{
// ...
}
template<>
void execute<Operation::B>()
{
// ...
}
void execute(Operation o)
{
switch (o)
{
case Operation::A: return execute<Operation::A>();
case Operation::B: return execute<Operation::B>();
}
}
I'm curious as to whether anyone has figured out a nice pattern for this system. The main drawback of this method is that one has to type out all the supported enumerations and do maintenance in several places whenever new enumerations are added.
Edit: I should add that the reason for messing with compile-time templates is to allow the compiler to inline methods in HPC code, as well as to inherit constexpr properties.
Edit 2: in effect, I guess what I'm asking is for the compiler to generate all the possible code paths via an implicit switch structure. Perhaps some recursive template magic?
If you really want to use templates for this task, you can use a technique similar to this one.
// Here the second template argument defaults to the first enum value.
// We use SFINAE: if o is not equal to currentOp, the compiler ignores this overload.
template<Operation o, Operation currentOp = Operation::A>
auto execute() -> typename std::enable_if<o == currentOp, void>::type
{
execute<currentOp>();
}
// Again SFINAE: this overload is discarded when o == currentOp, so the one above
// handles the match; otherwise this one tries the next handler in the enumeration.
template<Operation o, Operation currentOp = Operation::A>
auto execute() -> typename std::enable_if<o != currentOp, void>::type
{
return execute<o, static_cast<Operation>(static_cast<int>(currentOp) + 1)>();
}
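For reference, with C++17 the same recursive idea can be written so the recursion actually terminates. This is a minimal sketch, assuming the Operation enum and the execute<O>() specializations from the question; the name execute_dispatch is mine, to avoid colliding with the existing overloads.
// Minimal C++17 sketch assuming Operation and the execute<O>() specializations above;
// walks the enumerators recursively and calls the one matching the runtime value.
template<Operation currentOp = Operation::A>
void execute_dispatch(Operation o)
{
    if (o == currentOp)
        return execute<currentOp>();
    if constexpr (currentOp != Operation::B)  // Operation::B is the last enumerator here
        return execute_dispatch<static_cast<Operation>(static_cast<int>(currentOp) + 1)>(o);
}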
template<class F, std::size_t...Is>
void magic_switch( std::size_t N, F&& f, std::index_sequence<Is...> ){
auto* pf = std::addressof(f);
using pF=decltype(pf);
using table_ptr = void(*)(pF);
static const table_ptr table[]={
[](pF pf){ std::forward<F>(*pf)( std::integral_constant<std::size_t, Is>{} ); }...
};
return table[N]( pf );
}
template<std::size_t Count, class F>
void magic_switch( std::size_t N, F&& f ){
return magic_switch( N, std::forward<F>(f), std::make_index_sequence<Count>{} );
}
This makes a jump table that invokes a lambda on a compile-time constant, picking the entry based on a runtime value. That is very similar to how a switch statement is sometimes compiled.
void execute(Operation o) {
magic_switch<2>( std::size_t(o), [](auto I){
execute<Operation(I)>();
} );
}
Modifying it to return non-void is possible, but all branches must return the same type.
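For example, one way to get a value back out without modifying magic_switch itself is to capture a result variable; a sketch reusing the Operation enum from the question, with a placeholder computation:
// Sketch: getting a value out of the void-returning magic_switch by capturing a
// result variable; every branch must produce the same type (std::size_t here).
std::size_t operation_cost(Operation o) {
    std::size_t result = 0;
    magic_switch<2>(std::size_t(o), [&](auto I) {
        result = I + 1;  // illustrative per-operation computation on the compile-time index
    });
    return result;
}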
I am attempting to build a clean and neat implementation of recursive-capable lambda self-scoping (which is basically a Y-combinator although I think technically not quite). It's a journey that's taken me to, among many others, this thread and this thread and this thread.
I've boiled down one of my issues as cleanly as I can: how do I pass around templated functors which take lambdas as their template parameters?
#include <string>
#include <iostream>
#define uint unsigned int
template <class F>
class Functor {
public:
F m_f;
template <class... Args>
decltype(auto) operator()(Args&&... args) {
return m_f(*this, std::forward<Args>(args)...);
}
};
template <class F> Functor(F)->Functor<F>;
class B {
private:
uint m_val;
public:
B(uint val) : m_val(val) {}
uint evaluate(Functor<decltype([](auto & self, uint val)->uint {})> func) const {
return func(m_val);
}
};
int main() {
B b = B(5u);
Functor f = Functor{[](auto& self, uint val) -> uint {
return ((2u * val) + 1u);
}};
std::cout << "f applied to b is " << b.evaluate(f) << "." << std::endl;
}
The code above does not work, with Visual Studio claiming that f (in the b.evaluate(f) call) does not match the parameter type.
My assumption is that auto & self is not clever enough to make this work. How do I get around this? How do I store and pass these things around when they are essentially undefinable? Is this why many of the Y-combinator implementations I've seen have the strange double-wrapped thing?
Any help or explanation would be enormously appreciated.
The only way I see is to make evaluate() a template method, if you want to be sure to receive a Functor (but you could simply accept any callable: see Yakk's answer):
template <typename F>
uint evaluate(Functor<F> func) const {
return func(m_val);
}
Take into account that every lambda has a different type, as you can verify with the following trivial code:
auto l1 = []{};
auto l2 = []{};
static_assert( not std::is_same_v<decltype(l1), decltype(l2)> );
so imposing a particular lambda type on evaluate() can't work: if you call the method with an (apparently) identical lambda, the call doesn't match, as you can see in the following example
auto l1 = []{};
auto l2 = []{};
void foo (decltype(l1))
{ }
int main ()
{
foo(l2); // compilation error: no matching function for call to 'foo'
}
The easiest solution is:
uint evaluate(std::function<uint(uint)> func) const {
return func(m_val);
}
a step up would be to write a function_view.
uint evaluate(function_view<uint(uint)> func) const {
return func(m_val);
}
(there are dozens of implementations on the net, should be easy to find).
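A minimal non-owning sketch of such a function_view (the name and exact shape are assumptions, not a standard type; a production version would also constrain the constructor):
#include <memory>
#include <type_traits>
#include <utility>

// Non-owning callable reference: the referenced callable must outlive the view.
template<class Sig> class function_view;

template<class R, class... Args>
class function_view<R(Args...)> {
    void* obj_ = nullptr;
    R (*call_)(void*, Args...) = nullptr;
public:
    template<class F>
    function_view(F&& f)
        : obj_(const_cast<void*>(static_cast<void const*>(std::addressof(f)))),
          call_([](void* o, Args... args) -> R {
              return (*static_cast<std::remove_reference_t<F>*>(o))(std::forward<Args>(args)...);
          })
    {}
    R operator()(Args... args) const { return call_(obj_, std::forward<Args>(args)...); }
};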
The easiest and most runtime efficient is:
template<class F>
uint evaluate(F&& func) const {
return func(m_val);
}
because we don't care what func is, we just want it to quack like a duck. If you want to check it early...
template<class F> requires (std::is_convertible_v< std::invoke_result_t< F&, uint >, uint >)
uint evaluate(F&& func) const {
return func(m_val);
}
using C++20, or using C++17
template<class F,
std::enable_if_t<(std::is_convertible_v< std::invoke_result_t< F&, uint >, uint >), bool> = true
>
uint evaluate(F&& func) const {
return func(m_val);
}
which is similar just more obscure.
You can write a fixed-signature type-erased Functor, but I think it is a bad idea. It looks like:
template<class R, class...Args>
using FixedSignatureFunctor = Functor< std::function<R( std::function<R(Args...)>, Args...) > >;
or slightly more efficient
template<class R, class...Args>
using FixedSignatureFunctor = Functor< function_view<R( std::function<R(Args...)>, Args...) > >;
but this is pretty insane; you'd want to forget what the F is, but not that you can replace the F!
To make this fully "useful", you'd have to add smart copy/move/assign operations to Functor, where it can be copied if the Fs inside each of them can be copied.
template <class F>
class Functor {
public:
// ...
Functor(Functor&&)=default;
Functor& operator=(Functor&&)=default;
Functor(Functor const&)=default;
Functor& operator=(Functor const&)=default;
template<class O> requires (std::is_constructible_v<F, O&&>)
Functor(Functor<O>&& o):m_f(std::move(o.m_f)){}
template<class O> requires (std::is_constructible_v<F, O const&>)
Functor(Functor<O> const& o):m_f(o.m_f){}
template<class O> requires (std::is_assignable_v<F, O&&>)
Functor& operator=(Functor<O>&& o){
m_f = std::move(o.m_f);
return *this;
}
template<class O> requires (std::is_assignable_v<F, O const&>)
Functor& operator=(Functor<O> const& o){
m_f = o.m_f;
return *this;
}
// ...
};
(C++20 version; replace the requires clauses with the std::enable_if_t SFINAE hack in C++17 and before).
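For instance, the converting move constructor above might look roughly like this in C++17 (a sketch, not the only spelling):
// Rough C++17 equivalent of the requires-constrained converting move constructor,
// using the enable_if SFINAE hack mentioned above.
template<class O,
         std::enable_if_t<std::is_constructible_v<F, O&&>, bool> = true>
Functor(Functor<O>&& o) : m_f(std::move(o.m_f)) {}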
How to decide
The core thing to remember here is that C++ has more than one kind of polymorphism, and using the wrong kind will make you waste a lot of time.
There is both compile time polymorphism and runtime polymorphism. Using runtime polymorphism when you only need compile time polymorphism is a waste.
Then in each category, there are even more subtypes.
std::function is a runtime polymorphic type erasure regular object. Inheritance based virtual functions is another runtime polymorphic technique.
Your Y-combinator is doing compile time polymorphism. It changes what it stores and exposes a more uniform interface.
Things talking to that interface don't care about the internal implementation details of your Y-combinator, and including them in their implementation is an abstraction failure.
evaluate takes a callable thing, passes it a uint, and expects a uint in return. That is all it cares about. It doesn't care whether it is passed a Functor<Chicken> or a function pointer.
Making it care is a mistake.
If it takes a std::function, it does runtime polymorphism; if it takes a template<class F> with an argument of type F&&, it is compile time polymorphic. This is a choice, and they are different.
Taking a Functor<F> of any kind is putting contract requirements in its API it fundamentally shouldn't care about.
I haven't found a way to achieve what I want, but I'm not knowledgeable enough to know if it's impossible. Help would be appreciated.
The main data container in our software behaves a bit like a std::variant or std::any: it has a base class BaseContainer that provides a type enum. The derived class DataContainer<T> holds the actual data in a typed tensor member variable. So a simplified example boils down to something like this:
BaseContainer* vContainer = new DataContainer<float>({1000000});
if (vContainer->getType() == DataTypes::FLOAT)
const Tensor<float>& vTensor = dynamic_cast<DataContainer<float>*>(vContainer)->getData();
We have many methods that process data based on the underlying templated type and dimensions:
template<typename T>
void processData(const Tensor<T>& aTensor, ...other arguments...);
The problem is, for every method like processData() that we want to call with a BaseContainer, we need to write a binding method that unravels the possible types to call the typed version of processData():
void processData(BaseContainer* aContainer) {
switch (aContainer->getType()) {
case DataTypes::INT8:
return processData(dynamic_cast<DataContainer<int8_t>*>(aContainer)->getData());
case DataTypes::UINT8:
return processData(dynamic_cast<DataContainer<uint8_t>*>(aContainer)->getData());
case DataTypes::INT16:
return processData(dynamic_cast<DataContainer<int16_t>*>(aContainer)->getData());
case DataTypes::UINT16:
return processData(dynamic_cast<DataContainer<uint16_t>*>(aContainer)->getData());
...
default:
throw(std::runtime_error("Type not supported"));
}
}
My question is: Is it possible to make a single "adapter" method (in any released version of c++) that can take a function (like processData()), a BaseContainer and potentially a list of arguments, and invoke the correct template binding of this function with the arguments?
I failed to bind a template function dynamically because I was not able to pass the name without the template type. Yet the template type would need to be dynamic based on the BaseContainer. But maybe there are other means to achieve what I want to do? I'm very curious about any solution, mostly also to extend my understanding, as long as the complexity of the solution is below writing hundreds of adapter methods.
If nothing else, would it be possible to generate the "adapter" methods using preprocessor macros?
You cannot pass a set of overloads by name, but you can pass a functor with an overloaded operator(), as generic lambdas have.
So
template <typename F>
auto dispatch(BaseContainer& vContainer, F f) {
switch (vContainer.getType()) {
case DataTypes::INT8:
return f(dynamic_cast<DataContainer<int8_t>&>(vContainer).getData());
case DataTypes::UINT8:
return f(dynamic_cast<DataContainer<uint8_t>&>(vContainer).getData());
case DataTypes::INT16:
return f(dynamic_cast<DataContainer<int16_t>&>(vContainer).getData());
case DataTypes::UINT16:
return f(dynamic_cast<DataContainer<uint16_t>&>(vContainer).getData());
...
default:
throw (std::runtime_error("Type not supported"));
}
}
with usage
dispatch(*vContainer, [](auto& data){ return processData(data); });
If you are willing to write a small wrapper class for each processData-like function, you could do something like this:
// One like this for each function.
struct ProcessDataWrapper {
template <typename... Args>
static auto run(Args&&... args) {
return processData(std::forward<Args>(args)...);
}
};
template <typename Wrapper>
auto ProcessGeneric(BaseContainer* aContainer) {
switch (aContainer->getType()) {
case DataTypes::INT8:
return Wrapper::run(dynamic_cast<DataContainer<int8_t>*>(aContainer)->getData());
// ...
}
}
// Called as
ProcessGeneric<ProcessDataWrapper>(myContainer);
It is possible, but as the comments say, it might be worth considering std::visit.
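For comparison, a std::variant/std::visit sketch of that idea, assuming the set of element types is known up front and the Tensor<T> values can be stored directly in a variant instead of behind BaseContainer (extra processData arguments omitted for brevity):
#include <variant>

// Sketch only: a variant of the supported tensor types lets std::visit do the
// dispatch instead of a hand-written switch over the type enum.
using AnyTensor = std::variant<Tensor<int8_t>, Tensor<uint8_t>,
                               Tensor<int16_t>, Tensor<uint16_t>, Tensor<float>>;

void processAny(const AnyTensor& t) {
    std::visit([](const auto& tensor) { processData(tensor); }, t);
}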
Here's a solution requiring C++17 that needs only two lines for each function template you want to wrap. You could use a simple macro to simplify the wrapping further.
The core idea is to have a cast function that maps from a DataType enum value to the corresponding DataContainer, and then to leverage C++17 fold expressions to replace the switch statement in your code.
Here's the cast function, so we have exactly one place to map from DataType to the actual DataContainer:
template<DataType t>
constexpr inline decltype(auto) cast(BaseContainer& c) {
if constexpr(t == INT) return static_cast<DataContainer<int>&>(c);
else if constexpr(t == FLOAT) return static_cast<DataContainer<float>&>(c);
... map all other enum values ...
}
This is mostly a convenience helper to make the following code a bit more readable. The next code block uses a C++17 fold expression to dispatch the function based on the type of the container.
template<DataType... types>
auto dispatcher_impl = [](auto f) {
// NB: capture by value here only for the sake of readability.
return [=](BaseContainer& c, auto... args) {
([&]{ if(c.GetDataType() == types ) { std::invoke(f, cast<types>(c), args...); return true; } return false; }() || ...);
};
};
auto data_type_dispatcher = [](auto f) {
return dispatcher_impl<INT, FLOAT, ... other types here ...>(f);
};
The core idea is to wrap the function into a lambda that checks the DataContainer's DataType and calls the function only if it matches. The fold expression over the || operator is used to unpack the DataTypes.
Usage example:
template<typename T>
void processData(DataContainer<T>& c, int arg) {
if constexpr(std::is_same_v<T, int>) std::cout << "int";
else if constexpr(std::is_same_v<T, float>) std::cout << "float";
std::cout << ", arg: " << arg << '\n';
}
// This needs to be done for each function:
auto pd = data_type_dispatcher([](auto& c, int arg) { processData(c, arg); });
int main() {
DataContainer<float> f;
DataContainer<int> i;
pd(f, 2); // prints float, 2
pd(i, 4); // prints int, 4
}
In order to throw an exception if the type is not supported, simply add a lambda that throws at the end of the fold expression:
([&]{ if(c.GetDataType() == types ) { std::invoke(f, cast<types>(c), args...); return true; } return false; }() || ... || []() -> bool{ throw (std::runtime_error("Type not supported")); }());
I'm investigating possible implementations of dynamic dispatch of unrelated types in modern C++ (C++11/C++14).
By "dynamic dispatch of types" I mean a case when in runtime we need to choose a type from list by its integral index and do something with it (call a static method, use a type trait and so on).
For example, consider stream of serialized data: there are several kinds of data values, which are serialized/deserialized differently; there are several codecs, which do serialization/deserialization; and our code read type marker from stream and then decide which codec it should invoke to read full value.
I'm interested in a case where are many operations, which could be invoked on types (several static methods, type traits...), and where could be different mapping from logical types to C++ classes and not only 1:1 (in example with serialization it means that there could be several data kinds all serialized by the same codec).
I also wish to avoid manual code repetition and to make the code more easily maintainable and less error-prone. Performance also is very important.
Currently I see these possible implementations; am I missing something? Can this be done better?
1. Manually write as many functions with switch-case as there are possible operation invocations on types.
size_t serialize(const Any & any, char * data)
{
switch (any.type) {
case Any::Type::INTEGER:
return IntegerCodec::serialize(any.value, data);
...
}
}
Any deserialize(const char * data, size_t size)
{
Any::Type type = deserialize_type(data, size);
switch (type) {
case Any::Type::INTEGER:
return IntegerCodec::deserialize(data, size);
...
}
}
bool is_trivially_serializable(const Any & any)
{
switch (any.type) {
case Any::Type::INTEGER:
return traits::is_trivially_serializable<IntegerCodec>::value;
...
}
}
Pros: it's simple and understandable; the compiler can inline the dispatched methods.
Cons: it requires a lot of manual repetition (or code generation by external tool).
2. Create a dispatching table like this:
class AnyDispatcher
{
public:
virtual size_t serialize(const Any & any, char * data) const = 0;
virtual Any deserialize(const char * data, size_t size) const = 0;
virtual bool is_trivially_serializable() const = 0;
...
};
class AnyIntegerDispatcher: public AnyDispatcher
{
public:
size_t serialize(const Any & any, char * data) const override
{
return IntegerCodec::serialize(any, data);
}
Any deserialize(const char * data, size_t size) const override
{
return IntegerCodec::deserialize(data, size);
}
bool is_trivially_serializable() const override
{
return traits::is_trivially_serializable<IntegerCodec>::value;
}
...
};
...
// global constant
std::array<AnyDispatcher *, N> dispatch_table = { new AnyIntegerDispatcher(), ... };
size_t serialize(const Any & any, char * data)
{
return dispatch_table[any.type]->serialize(any, data);
}
Any deserialize(const char * data, size_t size)
{
return dispatch_table[any.type]->deserialize(data, size);
}
bool is_trivially_serializable(const Any & any)
{
return dispatch_table[any.type]->is_trivially_serializable();
}
Pros: it's a little more flexible - one needs to write a dispatcher class for each dispatched type, but then one could combine them in different dispatch tables.
Cons: it requires writing a lot of dispatching code. And there is some overhead due to virtual dispatch and the impossibility of inlining the codec's methods into the call site.
3. Use a templated dispatching function:
template <typename F, typename... Args>
auto dispatch(Any::Type type, F f, Args && ...args)
{
switch (type) {
case Any::Type::INTEGER:
return f(IntegerCodec(), std::forward<Args>(args)...);
...
}
}
size_t serialize(const Any & any, char * data)
{
return dispatch(
any.type,
[] (const auto codec, const Any & any, char * data) {
return std::decay_t<decltype(codec)>::serialize(any, data);
},
any,
data
);
}
bool is_trivially_serializable(const Any & any)
{
return dispatch(
any.type,
[] (const auto codec) {
return traits::is_trivially_serializable<std::decay_t<decltype(codec)>>::value;
}
);
}
Pros: it requires just one switch-case dispatching function and a little code in each operation invocation (at least, manually written code). And the compiler may inline what it finds appropriate.
Cons: it's more complicated, requires C++14 (to be this clean and compact) and relies on the compiler's ability to optimize away the unused codec instance (which is used only to choose the right overload).
4. When one set of logical types may have several mappings to implementation classes (codecs in this example), it may be better to generalize solution #3 and write a completely generic dispatch function that receives a compile-time mapping between type values and invoked types. Something like this:
template <typename Mapping, typename F, typename... Args>
auto dispatch(Any::Type type, F f, Args && ...args)
{
switch (type) {
case Any::Type::INTEGER:
return f(mpl::map_find<Mapping, Any::Type::INTEGER>(), std::forward<Args>(args)...);
...
}
}
I'm leaning towards solution #3 (or #4). But I do wonder: is it possible to avoid manually writing the dispatch function? Its switch-case, I mean. The switch-case is completely derived from the compile-time mapping between type values and types; is there any way to hand its generation over to the compiler?
Tag dispatching, where you pass a type to pick an overload, is efficient. std libraries typically use it for algorithms on iterators, so different iterator categories get different implementations.
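A classic iterator-category sketch of tag dispatching, in the style of how std::advance is typically implemented (names my own, not the library's):
#include <iterator>

// Tag dispatching: the third argument's type selects the overload at compile time.
template<class It, class Dist>
void my_advance_impl(It& it, Dist n, std::random_access_iterator_tag) {
    it += n;                       // O(1) for random-access iterators
}

template<class It, class Dist>
void my_advance_impl(It& it, Dist n, std::input_iterator_tag) {
    while (n-- > 0) ++it;          // O(n) fallback for weaker iterator categories
}

template<class It, class Dist>
void my_advance(It& it, Dist n) {
    my_advance_impl(it, n, typename std::iterator_traits<It>::iterator_category{});
}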
When I have a list of type ids, I ensure they are contiguous and write a jump table.
This is an array of pointers to functions that do the task at hand.
You can automate writing this in C++11 or better; I call it the magic switch, as it acts like a runtime switch, and it calls a function with a compile time value based off the runtime one. I make the functions with lambdas, and expand a parameter pack inside them so their bodies differ. They then dispatch to the passed-in function object.
Write that, then you can move your serialization/deserialization code into "type safe" code. Use traits to map from compile-time indexes to type tags, and/or dispatch based on the index to an overloaded function.
Here is a C++14 magic switch:
template<std::size_t I>using index=std::integral_constant<std::size_t, I>;
template<class F, std::size_t...Is>
auto magic_switch( std::size_t I, F&& f, std::index_sequence<Is...> ) {
auto* pf = std::addressof(f);
using PF = decltype(pf);
using R = decltype( (*pf)( index<0>{} ) );
using table_entry = R(*)( PF );
static const table_entry table[] = {
[](PF pf)->R {
return (*pf)( index<Is>{} );
}...
};
return table[I](pf);
}
template<std::size_t N, class F>
auto magic_switch( std::size_t I, F&& f ) {
return magic_switch( I, std::forward<F>(f), std::make_index_sequence<N>{} );
}
use looks like:
std::size_t r = magic_switch<100>( argc, [](auto I){
return sizeof( char[I+1] ); // I is a compile-time size_t equal to argc
});
std::cout << r << "\n";
If you can register your type enum to type map at compile time (via type traits or whatever), you can round trip through a magic switch to turn your runtime enum value into a compile time type tag.
template<class T> struct tag_t {using type=T;};
then you can write your serialize/deserialize like this:
template<class T>
void serialize( serialize_target t, void const* pdata, tag_t<T> ) {
serialize( t, static_cast<T const*>(pdata) );
}
template<class T>
void deserialize( deserialize_source s, void* pdata, tag_t<T> ) {
deserialize( s, static_cast<T*>(pdata) );
}
If we have an enum DataType, we write a traits class:
enum DataType {
Integer,
Real,
VectorOfData,
DataTypeCount, // last
};
template<DataType> struct enum_to_type {};
template<> struct enum_to_type<DataType::Integer> : tag_t<int> {};
// etc
void serialize( serialize_target t, Any const& any ) {
magic_switch<DataType::DataTypeCount>(
any.type_index,
[&](auto type_index) {
serialize( t, any.pdata, enum_to_type<DataType(type_index)>{} );
}
);
}
all the heavy lifting is now done by enum_to_type traits class specializations, the DataType enum, and overloads of the form:
void serialize( serialize_target t, int const* pdata );
which are type safe.
Note that your any is not actually an any, but rather a variant. It contains a bounded list of types, not anything.
This magic_switch ends up being used to reimplement std::visit function, which also gives you type-safe access to the type stored within the variant.
If you want it to contain anything, you have to determine what operations you want to support, write type-erasure code for it that runs when you store it in the any, store the type-erased operations alongside the data, and Bob's your uncle.
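A bare-bones sketch of what that might look like for a single serialize operation (the names here are assumptions, and a real any would also own and copy the stored data rather than just point at it):
// Bare-bones type-erasure sketch: the erased serialize operation is captured as a
// plain function pointer when the value is stored, next to the data pointer.
struct erased_any {
    void const* pdata;
    void (*do_serialize)(serialize_target, void const*);

    template<class T>
    explicit erased_any(T const& value)
        : pdata(&value),
          do_serialize([](serialize_target t, void const* p) {
              serialize(t, static_cast<T const*>(p));  // the type-safe overload above
          })
    {}
};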
Here is a solution somewhere in between your #3 and #4. Maybe it gives some inspiration, not sure if it's really useful.
Instead of using an interface base class and virtual dispatch, you can just put your "codec" code into some unrelated trait structures:
struct AnyFooCodec
{
static size_t serialize(const Any&, char*)
{
// ...
}
static Any deserialize(const char*, size_t)
{
// ...
}
static bool is_trivially_serializable()
{
// ...
}
};
struct AnyBarCodec
{
static size_t serialize(const Any&, char*)
{
// ...
}
static Any deserialize(const char*, size_t)
{
// ...
}
static bool is_trivially_serializable()
{
// ...
}
};
You can then put these trait types into a type list, here I just use a std::tuple for that:
typedef std::tuple<AnyFooCodec, AnyBarCodec> DispatchTable;
Now we can write a generic dispatch function that passes the n'th type trait to a given functor:
template <size_t N>
struct DispatchHelper
{
template <class F, class... Args>
static auto dispatch(size_t type, F f, Args&&... args)
{
if (N == type)
return f(typename std::tuple_element<N, DispatchTable>::type(), std::forward<Args>(args)...);
return DispatchHelper<N + 1>::dispatch(type, f, std::forward<Args>(args)...);
}
};
template <>
struct DispatchHelper<std::tuple_size<DispatchTable>::value>
{
template <class F, class... Args>
static auto dispatch(size_t type, F f, Args&&... args)
{
// TODO: error handling (type index out of bounds)
return decltype(DispatchHelper<0>::dispatch(type, f, args...)){};
}
};
template <class F, class... Args>
auto dispatch(size_t type, F f, Args&&... args)
{
return DispatchHelper<0>::dispatch(type, f, std::forward<Args>(args)...);
}
This uses a linear search to find the proper trait, but with some effort one could at least make it a binary search. Also the compiler should be able to inline all the code as there is no virtual dispatch involved. Maybe the compiler is even smart enough to basically turn it into a switch.
Live example: http://coliru.stacked-crooked.com/a/1c597883896006c4
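Usage would then look roughly like this (a sketch, assuming the Any and codec pieces from the question):
// Hedged usage sketch: dispatch picks the N-th codec trait from DispatchTable
// based on the runtime type index and hands it to the generic lambda.
size_t serialize(const Any& any, char* data)
{
    return dispatch(static_cast<size_t>(any.type),
                    [](auto codec, const Any& a, char* d) {
                        return decltype(codec)::serialize(a, d);
                    },
                    any, data);
}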
I'm dealing with a C system that offers a hook of this form:
typedef int (*EXTENSIONFUNCTION)(NATIVEVALUE args[]);
It's possible to register an EXTENSIONFUNCTION and the number of arguments it takes.
My idea was that I'd make a class Extension to wrap up an extension. It would be able to be constructed from a std::function (or any Callable, ideally, but let's just say it contains a std::function for now). And the extension takes Value parameters, which wrap up NATIVEVALUE (but are larger). I'd automatically take care of the parameter count with sizeof...(Ts), for instance. It might look like this:
Extension<lib::Integer, lib::String> foo =
[](lib::Integer i, lib::String s) -> int {
std::cout << i;
std::cout << s;
return 0;
};
Problem is that in order for the C library to register and call it, it wants that array-based interface. :-/
I set out to try and get the compiler to write a little shim, but I don't see a way to do it. I can have a variadic operator() on Extension, and do a runtime loop over the NATIVEVALUE to get an array of Value[]. But what do I do with that? I can't call the std::function with it.
So it seems I need to make an EXTENSIONFUNCTION instance which calls my std::function, as a member of each Extension instance.
But basically I find myself up against a wall where I have a variadic templated class for the extension... and then a sort of "can't get there from here" in terms of taking this NATIVEVALUE args[] and being able to call the std::function with them. If a std::function could be invoked with a std::array of arguments, that would solve it, but of course that isn't how it works.
Is it possible to build a shim of this type? The "ugly" thing I can do is just proxy to another array, like:
Extension<2> foo =
[](lib::Value args[]) -> int {
lib::Integer i (args[0]);
lib::String s (args[1]);
std::cout << i;
std::cout << s;
return 0;
};
But that's not as ergonomic. It seems impossible, without knowing the calling convention and doing some kind of inline assembly stuff to process the parameters and CALL the function (and even that would work for functions only, not Callables in general). But people here have proven the impossible possible before, usually by way of "that's not what you want, what you actually want is..."
UPDATE: I just found this, which seems promising...I'm still trying to digest its relevance:
"unpacking" a tuple to call a matching function pointer
( Note: There are a few cross-cutting issues in what I aim to do. Another point is type inference from lambdas. Answer here seems to be the best bet on that... it appears to work, but I don't know if it's "kosher": Initialize class containing a std::function with a lambda )
If I managed to reduce the problem to its simplest form, you need a way to call a std::function taking its arguments from a fixed-size C-style array, without having to write a runtime loop. Then, these functions may solve your problem:
template<std::size_t N, typename T, typename F, std::size_t... Indices>
auto apply_from_array_impl(F&& func, T (&arr)[N], std::index_sequence<Indices...>)
-> decltype(std::forward<F>(func)(arr[Indices]...))
{
return std::forward<F>(func)(arr[Indices]...);
}
template<std::size_t N, typename T, typename F,
typename Indices = std::make_index_sequence<N>>
auto apply_from_array(F&& func, T (&arr)[N])
-> decltype(apply_from_array_impl(std::forward<F>(func), arr, Indices()))
{
return apply_from_array_impl(std::forward<F>(func), arr, Indices());
}
Here is an example demonstrating how it can be used:
auto foo = [](int a, int b, int c)
-> int
{
return a + b + c;
};
int main()
{
int arr[] = { 1, 2, 3 };
std::cout << apply_from_array(foo, arr); // prints 6
}
Of course, with the signature int (*)(T args[]), args is just a T* and you don't know its size at compile time. However, if you know the compile time size from somewhere else (from the std::function for example), you can still tweak apply_from_array to manually give the compile-time size information:
template<std::size_t N, typename T, typename F, std::size_t... Indices>
auto apply_from_array_impl(F&& func, T* arr, std::index_sequence<Indices...>)
-> decltype(std::forward<F>(func)(arr[Indices]...))
{
return std::forward<F>(func)(arr[Indices]...);
}
template<std::size_t N, typename T, typename F,
typename Indices = std::make_index_sequence<N>>
auto apply_from_array(F&& func, T* arr)
-> decltype(apply_from_array_impl<N>(std::forward<F>(func), arr, Indices()))
{
return apply_from_array_impl<N>(std::forward<F>(func), arr, Indices());
}
And then use the function like this:
int c_function(NATIVEVALUE args[])
{
return apply_from_array<arity>(f, args);
}
In the example above, consider that f is an std::function and that arity is the arity of f that you managed to get, one way or another, at compile time.
NOTE: I used the C++14 std::index_sequence and std::make_index_sequence but if you need your code to work with C++11, you can still use handcrafted equivalents, like indices and make_indices in the old question of mine that you linked.
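For the arity part, one possible way (a sketch; any equivalent trait works) is a small specialization over std::function:
#include <cstddef>
#include <functional>
#include <type_traits>

// Sketch: recovering the arity of a std::function at compile time.
template<class F> struct function_arity;  // primary template left undefined

template<class R, class... Args>
struct function_arity<std::function<R(Args...)>>
    : std::integral_constant<std::size_t, sizeof...(Args)> {};

// e.g. constexpr std::size_t arity = function_arity<decltype(f)>::value;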
Aftermath: the question being about real code, it was of course a little bit more complicated than above. The extension mechanism is designed so that every time an extension function is called, C++ proxies over the C API (lib::Integer, lib::String, etc...) are created on the fly and then passed to the user-defined function. This required a new method, applyFunc, in Extension:
template<typename Func, std::size_t... Indices>
static auto applyFuncImpl(Func && func,
Engine & engine,
REBVAL * ds,
utility::indices<Indices...>)
-> decltype(auto)
{
return std::forward<Func>(func)(
std::decay_t<typename utility::type_at<Indices, Ts...>::type>{
engine,
*D_ARG(Indices + 1)
}...
);
}
template <
typename Func,
typename Indices = utility::make_indices<sizeof...(Ts)>
>
static auto applyFunc(Func && func, Engine & engine, REBVAL * ds)
-> decltype(auto)
{
return applyFuncImpl(
std::forward<Func>(func),
engine,
ds,
Indices {}
);
}
applyFunc takes the function to call and calls it with instances of the appropriate types (Integer, String, etc...), created on the fly from the underlying C API with an Engine& and a REBVAL*.
If I have a function which is required to produce a hook from an input object, should I try to do that without casting to std::function? With these 2 options, which should I pick and is option 2 a meaningful improvement?
std::function<void()> CreateHook(std::function<void()> f)
{
return [f]()
{
return Hook(f);
};
}
// option 1:
void Hook(std::function<void()> f)
{
// do something
}
// option 2:
template <typename F>
void Hook(F f)
{
// do something
}
Type erase only when you need to. Either:
template<class F>
std::function<void()> CreateHook(F f) { return [f]() { return Hook(f); }; }
or even in C++1y:
template<class F>
auto CreateHook(F f) { return [f=std::move(f)]() { return Hook(f); }; }
but the second is probably overkill (the f=std::move(f) is not, but auto is, as I am guessing you will just store the return value in a std::function anyhow).
And if your Hook function is simple and stable:
template <typename F> void Hook(F f) {
// do something
}
but if it is large and complex:
void Hook(std::function<void()> f) {
// do something
}
because it lets you split interface from implementation.
The downside to this strategy is that some compilers are bad at eliminating identical code in different functions, causing binary size bloat, and it can cause some compile-time bloat.
But if you defer type erasure in both cases, you can eliminate two virtual function calls and allow the compiler to inline the passed-in f within the Hook body.
If however the interface for CreateHook cannot be changed, and Hook is only called from it, the template version of Hook is pointless, as it is only called with a std::function anyhow.
In your example the template is pointless, because it is only ever instantiated for F = std::function<void()>: the template overload is chosen by the static type of f, not the runtime type.