Is it safe to convert a template lambda to a `void *`? - c++

I'm working on implementing fibers on top of coroutines written in assembler. The coroutines switch stacks via a cocall.
I'd like to expose this in C++ through a higher-level interface, since the cocall assembly can only handle a single void* argument.
In order to handle templated lambdas, I've experimented with converting them to a void*, and found that while it compiles and works, I was left wondering whether it is actually safe to do so, assuming the ownership semantics of the stack (which are preserved by fibers).
template <typename FunctionT>
struct Coentry
{
    static void coentry(void * arg)
    {
        // Is this safe?
        FunctionT * function = reinterpret_cast<FunctionT *>(arg);
        (*function)();
    }

    static void invoke(FunctionT function)
    {
        coentry(reinterpret_cast<void *>(&function));
    }
};
template <typename FunctionT>
void coentry(FunctionT function)
{
    Coentry<FunctionT>::invoke(function);
}
#include <iostream>

int main(int argc, const char * argv[]) {
    auto f = [&]{
        std::cerr << "Hello World!" << std::endl;
    };

    coentry(f);
}
Is this safe and, additionally, is it efficient? By converting to a void* am I forcing the compiler to choose a less efficient representation?
Additionally, if coentry(void*) is invoked on a different stack after the original invoke(FunctionT) has returned, is there a chance that the stack might be invalid to resume? (It would be similar to, say, invoking it within a std::thread, I guess.)

Everything done above is defined behaviour. The only performance hit is that inlining something aliased through a void pointer can be slightly harder.
However, the lambda is an actual value, and if stored in automatic storage it only lasts as long as the stack frame it lives in.
You can fix this in a number of ways. std::function is one; another is to store the lambda in a shared_ptr<void> or a unique_ptr<void, void(*)(void*)>. If you do not need type erasure, you can even store the lambda in a struct of deduced type.
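For instance, here is a minimal sketch of the type-erased unique_ptr approach (my illustration rather than code from the answer; start_fiber is a hypothetical stand-in for whatever launches the coroutine with a (void(*)(void*), void*) pair):
#include <memory>
#include <utility>

// Assumed to exist in the fiber layer: takes a plain function pointer plus the
// single void* payload that the cocall can carry.
void start_fiber(void (*entry)(void *), void * arg);

template <typename FunctionT>
void spawn(FunctionT function)
{
    // Move the lambda onto the heap; the custom deleter remembers its real type.
    std::unique_ptr<void, void (*)(void *)> owned(
        new FunctionT(std::move(function)),
        [](void * p) { delete static_cast<FunctionT *>(p); });

    start_fiber([](void * arg) { (*static_cast<FunctionT *>(arg))(); },
                owned.get());

    // `owned` must not be destroyed before the fiber has finished with the
    // lambda; in real code you would move it into whatever tracks the fiber.
}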
That covers the first two options; the third looks like this:
template <typename FunctionT>
struct Coentry {
    FunctionT f;

    static void coentry(void * arg)
    {
        auto* self = reinterpret_cast<Coentry*>(arg);
        (self->f)();
    }

    Coentry(FunctionT fin) : f(std::move(fin)) {}
};

template<class FunctionT>
Coentry<FunctionT> make_coentry( FunctionT f ){ return {std::move(f)}; }
Now keep your Coentry around until the task completes.
The details of how you manage its lifetime depend on the structure of the rest of your problem.
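For example (again a sketch only, with start_fiber standing in for the hypothetical cocall entry point), the object can simply live in a scope that outlives the task:
#include <iostream>

void start_fiber(void (*entry)(void *), void * arg);  // hypothetical launcher

void example()
{
    auto entry = make_coentry([] { std::cerr << "Hello from a fiber!" << std::endl; });

    // `entry` owns the lambda by value; its address is the single void* payload.
    start_fiber(&decltype(entry)::coentry, &entry);

    // `entry` must stay alive here until the fiber has finished with it.
}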

Related

Passing a locally-created lambda for use in a callback, then going out of scope

I'm using some (somewhat C-ish) library which involves a callback mechanism. The callback functions I can provide it take a void* as a parameter so you can pass arbitrary stuff to them. For the sake of this question let's assume the lambda doesn't take any parameters, but it does capture stuff.
Now, I need to have my callback function invoke a lambda - and it must get this lambda somehow via the void *, i.e. we have
void my_callback(void * arbitrary_stuff) {
    /* magic... and somehow the lambda passed */
    /* through `arbitrary_stuff` is invoked.  */
}

// ...

template <typename T>
void adapted_add_callback(MagicTypeInvolvingT actual_callback) {
    /* more magic */
    libFooAddCallback(my_callback, something_based_on_actual_callback);
}

// ...

void baz();

void bar() {
    int x;
    adapted_add_callback([x]() { /* do something with x */ });
    adapted_add_callback(baz);
}
and I want to know what to replace magic, more_magic and MagicTypeInvolvingT with.
Other than the typing challenge here, what I'm worried about, obviously, is how to make sure the data the lambda captures is still available when the lambda is eventually used; otherwise I should probably expect some kind of segmentation fault.
Notes:
my_callback() should be synchronous, in the sense that it will execute the lambda on whatever thread it is called on and return when the lambda returns. It's either the fooLibrary or the lambda itself which handles any asynchronicity.
The most straightforward way might be (assuming the C function is guaranteed to invoke the callback exactly once, and that the lambda is still valid at the point of the callback):
void my_callback(void * arbitrary_stuff) {
    // Take back ownership of the heap-allocated std::function and invoke it;
    // the unique_ptr deletes it as soon as the call returns.
    (*std::unique_ptr<std::function<void()>>{
        static_cast<std::function<void()>*>(arbitrary_stuff) })();
}

void adapted_add_callback( std::function<void()> actual_callback ) {
    // Move the callback onto the heap and pass its address as the void* data.
    libFooAddCallback(my_callback, new auto( std::move(actual_callback) ) );
}
If you don't want the std::function<> overhead, you'll need to implement your own type erasure ...
You have a couple of issues here.
One is that you can't depend on passing the lambda itself as a void *, so you'll pretty much need to pass a pointer to the lambda (well, to the closure created from the lambda, if you want to be precise). That means you'll need to ensure that the lambda remains valid until the callback completes.
The second is a question about how those captures happen - by value, or by reference? If you capture by value, everything's fine. If you capture by reference, you also need to ensure that anything you've captured remains valid until the callback completes. If you capture a global by reference, that should normally be fine - but if you capture a local by reference and the local (even potentially) goes out of scope before the lambda is invoked, using the reference will cause undefined behavior.
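A minimal illustration of the capture-by-reference pitfall (my example, with a global std::function standing in for the library's stored callback):
#include <functional>
#include <iostream>

std::function<void()> stored;   // stands in for the library keeping the callback

void register_callback()
{
    int local = 42;
    stored = [&local] { std::cout << local << '\n'; };  // captures `local` by reference
}   // `local` goes out of scope here, so the stored closure now holds a dangling reference

void fire_later()
{
    stored();   // undefined behavior: reads through the dangling reference
}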
I went a route similar to Massimiliano Janes's, but without the overhead of std::function. You have to ensure that the callback is called only once by the library.
#include <iostream>
#include <memory>

using Callback = void(*)(void*);

// Probes the type of the argument and generates a suitable cast & invoke stub
// Caution: self-destructs after use!
template <class F>
Callback cbkWrap(F &) {
    return [](void *data) {
        std::unique_ptr<F> retrieved(static_cast<F*>(data));
        (*retrieved)();
    };
}

// Moves the functor into a dynamically-allocated one
template <class F>
void *cbkFunc(F &f) {
    return new F{std::move(f)};
}

int main() {
    int x = 42;
    auto lambda = [&x] { std::cout << x << '\n'; };
    libFooAddCallback(cbkWrap(lambda), cbkFunc(lambda));
}
If you can ensure that the lambda outlives the potential calls, you can get rid of the dynamic memory allocations and simply pass a pointer to it as data:
// Probes the type of the argument and generates a suitable cast & invoke stub
template <class F>
Callback cbkWrap(F &) {
    return [](void *data) {
        auto retrieved = static_cast<F*>(data);
        (*retrieved)();
    };
}

int main() {
    int x = 42;
    auto lambda = [&x] { std::cout << x << '\n'; };
    libFooAddCallback(cbkWrap(lambda), &lambda);
}
There is unfortunately no way to give ownership of the lambda to the library without knowing exactly how many times it will be called.

Avoid memory allocation with std::function and member function

This code is just for illustrating the question.
#include <functional>

struct MyCallBack {
    void Fire() {
    }
};

int main()
{
    MyCallBack cb;
    std::function<void(void)> func = std::bind(&MyCallBack::Fire, &cb);
}
Experiments with valgrind show that the line assigning to func dynamically allocates about 24 bytes with gcc 7.1.1 on Linux.
In the real code, I have a few handfuls of different structs all with a void(void) member function that gets stored in ~10 million std::function<void(void)>.
Is there any way I can avoid memory being dynamically allocated when doing std::function<void(void)> func = std::bind(&MyCallBack::Fire, &cb);? (Or otherwise assigning these member functions to a std::function.)
Unfortunately, allocators for std::function have been dropped in C++17.
The now-accepted way to avoid dynamic allocations inside std::function is to use a lambda instead of std::bind. That does work, at least in GCC - it has enough static space to store the lambda in your case, but not enough to store the binder object.
std::function<void()> func = [&cb]{ cb.Fire(); };
// sizeof lambda is sizeof(MyCallBack*), which is small enough
As a general rule, with most implementations, a lambda which captures only a single pointer (or a reference) will avoid dynamic allocations inside std::function with this technique (it is also generally the better approach, as another answer suggests).
Keep in mind that for this to work you need to guarantee that whatever the lambda captures by reference outlives the std::function. Obviously, that is not always possible, and sometimes you have to capture state by (large) copy. If that happens, there is currently no way to eliminate dynamic allocations in std::function, other than tinkering with the STL yourself (obviously not recommended in the general case, but it could be done in some specific situations).
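One workaround when the state is large (my suggestion, not part of the answer above) is to keep the large state alive elsewhere and capture only a pointer to it, so the closure itself stays within the small-object buffer:
#include <array>
#include <functional>
#include <memory>

struct BigState {
    std::array<char, 256> data{};
};

int main()
{
    // The big state lives outside the closure; the lambda captures only a raw
    // pointer, so it is pointer-sized and typically fits the small-object buffer.
    auto state = std::make_shared<BigState>();

    std::function<void()> func = [p = state.get()] { /* use p->data */ };

    // Caveat: `state` must outlive `func`, otherwise `p` dangles.
}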
As an addendum to the already existing and correct answer, consider the following:
MyCallBack cb;
std::cerr << sizeof(std::bind(&MyCallBack::Fire, &cb)) << "\n";
auto a = [&] { cb.Fire(); };
std::cerr << sizeof(a);
This program prints 24 and 8 for me, with both gcc and clang. I don't exactly know what bind is doing here (my understanding is that it's a fantastically complicated beast), but as you can see, it's almost absurdly inefficient here compared to a lambda.
As it happens, std::function is guaranteed to not allocate if constructed from a function pointer, which is also one word in size. So constructing a std::function from this kind of lambda, which only needs to capture a pointer to an object and should also be one word, should in practice never allocate.
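To make the size claim concrete, a quick check of my own (the static_assert reflects mainstream ABIs rather than a standard guarantee):
#include <functional>

struct MyCallBack { void Fire() {} };

int main()
{
    MyCallBack cb;
    auto lam = [&cb] { cb.Fire(); };

    // On mainstream ABIs the closure stores a single pointer.
    static_assert(sizeof(lam) == sizeof(void*), "closure is expected to be pointer-sized");

    // On common implementations (e.g. libstdc++) such a closure fits the
    // internal buffer, so this construction does not allocate.
    std::function<void()> f = lam;
}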
Run this little hack and it will probably print the number of bytes you can capture without allocating memory:
#include <iostream>
#include <functional>
#include <cstring>

void h(std::function<void(void*)>&& f, void* g)
{
    f(g);
}

template<size_t number_of_size_t>
void do_test()
{
    size_t a[number_of_size_t];
    std::memset(a, 0, sizeof(a));
    a[0] = sizeof(a);

    std::function<void(void*)> g = [a](void* ptr) {
        if (&a != ptr)
            std::cout << "malloc was called when capturing " << a[0] << " bytes." << std::endl;
        else
            std::cout << "No allocation took place when capturing " << a[0] << " bytes." << std::endl;
    };

    h(std::move(g), &g);
}

int main()
{
    do_test<1>();
    do_test<2>();
    do_test<3>();
    do_test<4>();
}
With gcc version 8.3.0 this prints
No allocation took place when capturing 8 bytes.
No allocation took place when capturing 16 bytes.
malloc was called when capturing 24 bytes.
malloc was called when capturing 32 bytes.
Many std::function implementations will avoid allocations and use space inside the function class itself rather than allocating if the callback it wraps is "small enough" and has trivial copying. However, the standard does not require this, only suggests it.
On g++, a non-trivial copy constructor on a function object, or data exceeding 16 bytes, is enough to cause it to allocate. But if your function object has no data and uses the builtin copy constructor, then std::function won't allocate.
Also, if you use a function pointer or a member function pointer, it won't allocate.
While not directly part of your question, it is part of your example.
Do not use std::bind. In virtually every case, a lambda is better: smaller, better inlining, can avoid allocations, better error messages, faster compiles, the list goes on. If you want to avoid allocations, you must also avoid bind.
I propose a custom class for your specific usage.
While it's true that you shouldn't normally re-implement existing library functionality, because the library version will be much more tested and optimized, that advice applies to the general case. If you have a particular situation like the one in your example and the standard implementation doesn't suit your needs, you can explore an implementation tailored to your specific use case, which you can measure and tweak as necessary.
So I have created a class akin to std::function<void (void)> that works only for methods and has all the storage in place (no dynamic allocations).
I have lovingly called it Trigger (inspired by your Fire method name). Please do give it a more suited name if you want to.
// helper alias for method
// can be used in user code
template <class T>
using Trigger_method = auto (T::*)() -> void;
namespace detail
{
// Polymorphic classes needed for type erasure
struct Trigger_base
{
virtual ~Trigger_base() noexcept = default;
virtual auto placement_clone(void* buffer) const noexcept -> Trigger_base* = 0;
virtual auto call() -> void = 0;
};
template <class T>
struct Trigger_actual : Trigger_base
{
T& obj;
Trigger_method<T> method;
Trigger_actual(T& obj, Trigger_method<T> method) noexcept : obj{obj}, method{method}
{
}
auto placement_clone(void* buffer) const noexcept -> Trigger_base* override
{
return new (buffer) Trigger_actual{obj, method};
}
auto call() -> void override
{
return (obj.*method)();
}
};
// in Trigger (below) we need to allocate enough storage
// for any Trigger_actual template instantiation
// since all instantiations basically contain 2 pointers
// we assume (and test it with static_asserts)
// that all will have the same size
// we will use Trigger_actual<Trigger_test_size>
// to determine the size of all Trigger_actual templates
struct Trigger_test_size {};
}
struct Trigger
{
std::aligned_storage_t<sizeof(detail::Trigger_actual<detail::Trigger_test_size>)>
trigger_actual_storage_;
// vital. We cannot just cast `&trigger_actual_storage_` to `Trigger_base*`
// because there is no guarantee by the standard that
// the base pointer will point to the start of the derived object
// so we need to store separately the base pointer
detail::Trigger_base* base_ptr = nullptr;
template <class X>
Trigger(X& x, Trigger_method<X> method) noexcept
{
static_assert(sizeof(trigger_actual_storage_) >=
sizeof(detail::Trigger_actual<X>));
static_assert(alignof(decltype(trigger_actual_storage_)) %
alignof(detail::Trigger_actual<X>) == 0);
base_ptr = new (&trigger_actual_storage_) detail::Trigger_actual<X>{x, method};
}
Trigger(const Trigger& other) noexcept
{
if (other.base_ptr)
{
base_ptr = other.base_ptr->placement_clone(&trigger_actual_storage_);
}
}
auto operator=(const Trigger& other) noexcept -> Trigger&
{
destroy_actual();
if (other.base_ptr)
{
base_ptr = other.base_ptr->placement_clone(&trigger_actual_storage_);
}
return *this;
}
~Trigger() noexcept
{
destroy_actual();
}
auto destroy_actual() noexcept -> void
{
if (base_ptr)
{
base_ptr->~Trigger_base();
base_ptr = nullptr;
}
}
auto operator()() const
{
if (!base_ptr)
{
// deal with this situation (error or just ignore and return)
}
base_ptr->call();
}
};
Usage:
struct X
{
    auto foo() -> void;
};

auto test()
{
    X x;
    Trigger f{x, &X::foo};
    f();
}
Warning: only tested for compilation errors.
You need to thoroughly test it for correctness.
You also need to profile it and see whether it performs better than the other solutions. The advantage of this approach is that, because it is cooked in house, you can tweak the implementation to increase performance for your specific scenarios.
As #Quuxplusone mentioned in their answer-as-a-comment, you can use inplace_function here. Include the header in your project, and then use it like this:
#include "inplace_function.h"
struct big { char foo[20]; };
static stdext::inplace_function<void(), 8> inplacefunc;
static std::function<void()> stdfunc;
int main() {
static_assert(sizeof(inplacefunc) == 16);
static_assert(sizeof(stdfunc) == 32);
inplacefunc = []() {};
// fine
struct big a;
inplacefunc = [a]() {};
// test.cpp:15:24: required from here
// inplace_function.h:237:33: error: static assertion failed: inplace_function cannot be constructed from object with this (large) size
// 237 | static_assert(sizeof(C) <= Capacity,
// | ~~~~~~~~~~^~~~~~~~~~~
// inplace_function.h:237:33: note: the comparison reduces to ‘(20 <= 8)’
}

Pointer-to-Function and Pointer-to-Object Semantics

I'm having issues with getting a partially-qualified function object to call later, with variable arguments, in another thread.
In GCC, I've been using a macro and typedef I made, but I'm finishing up my project and trying to clear up warnings.
#define Function_Cast(func_ref) (SubscriptionFunction*) func_ref
typedef void(SubscriptionFunction(void*, std::shared_ptr<void>));
Using the Function_Cast macro like below results in "warning: casting between pointer-to-function and pointer-to-object is conditionally-supported"
Subscriber* init_subscriber = new Subscriber(this, Function_Cast(&BaseLoaderStaticInit::init), false);
All I really need is a pointer that I can make a std::bind<function_type> object of. How is this usually done?
Also, this conditionally-supported thing is really annoying. I know that on x86 my code will work fine, and I'm aware of the limitations of relying on sizeof(void*) == sizeof(this*) holding for every this*.
Also, is there a way to make clang treat function pointers like data pointers so that my code will compile? I'm interested to see how bad it fails (if it does).
Relevant Code:
#define Function_Cast(func_ref) (SubscriptionFunction*) func_ref
typedef void(SubscriptionFunction(void*, std::shared_ptr<void>));
typedef void(CallTypeFunction(std::shared_ptr<void>));
Subscriber(void* owner, SubscriptionFunction* func, bool serialized = true) {
    this->_owner = owner;
    this->_serialized = serialized;
    this->method = func;
    call = std::bind(&Subscriber::_std_call, this, std::placeholders::_1);
}

void _std_call(std::shared_ptr<void> arg) { method(_owner, arg); }
The problem here is that you are trying to use a member-function pointer in place of a function pointer, because you know that, under the hood, it is often implemented as function(this, ...).
struct S {
    void f() {}
};

using fn_ptr = void(*)(S*);

void call(S* s, fn_ptr fn)
{
    fn(s);
    delete s;
}

int main() {
    call(new S, (fn_ptr)&S::f);
}
http://ideone.com/fork/LJiohQ
But there's no guarantee this will actually work, and there are obvious cases (virtual functions) where it probably won't.
Member functions are intended to be passed like this:
void call(S* s, void (S::*fn)())
and invoked like this:
(s->*fn)();
http://ideone.com/bJU5lx
The way people work around this when they want to support different types is to use a trampoline, which is a non-member function. You can do this with either a static [member] function or a lambda:
auto sub = new Subscriber(this, [](auto* s){ s->init(); });
or if you'd like type safety at your call site, a templated constructor:
template<typename T>
Subscriber(T* t, void(T::*fn)(), bool x);
http://ideone.com/lECOp6
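For completeness, the static-member-function form of the same trampoline could look roughly like this (a sketch: it assumes init takes the std::shared_ptr<void> payload and that Subscriber's constructor is the (void*, SubscriptionFunction*, bool) one from the question):
#include <memory>

struct BaseLoaderStaticInit {
    void init(std::shared_ptr<void> arg);   // assumed signature

    // Static trampoline: matches the SubscriptionFunction signature, recovers
    // the real type from the void* owner, and forwards to the member function.
    static void init_trampoline(void * owner, std::shared_ptr<void> arg)
    {
        static_cast<BaseLoaderStaticInit *>(owner)->init(std::move(arg));
    }
};

// Usage sketch: no function-pointer cast is needed any more.
//   Subscriber* sub = new Subscriber(this, &BaseLoaderStaticInit::init_trampoline, false);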
If your Subscriber constructor takes a std::function<void(void)> rather than a function pointer, you can pass a capturing lambda and eliminate the need to take a void*:
new Subscriber([this](){ init(); }, false);
It's normally done something like this:
#include <functional>
#include <memory>

struct subscription
{
    // RAII unsubscribe stuff in destructor here....
};

struct subscribable
{
    subscription subscribe(std::function<void()> closure, std::weak_ptr<void> sentinel)
    {
        // perform the subscription
        return subscription {
            // some id so you can unsubscribe;
        };
    }

    void notify_subscriber(std::function<void()> const& closure,
                           std::weak_ptr<void> const& sentinel)
    {
        if (auto locked = sentinel.lock())
        {
            closure();
        }
    }
};
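A hypothetical usage sketch of the sentinel idea above (the widget type and its members are my invention):
#include <memory>

struct widget
{
    // The sentinel's only job is to signal lifetime; an empty payload is fine.
    std::shared_ptr<char> alive = std::make_shared<char>();
    subscribable& source;
    subscription sub;

    explicit widget(subscribable& s)
        : source(s)
        , sub(s.subscribe([this] { on_event(); }, alive))  // weak_ptr built from `alive`
    {
    }

    void on_event() { /* only reached while the sentinel can still be locked */ }
};
// If a notification arrives after the widget (and thus `alive`) is gone,
// notify_subscriber() fails to lock the sentinel and skips the dangling closure.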

Void Wrappers for Nonvoid Functions With An Argument

So, I have the following situation:
I'm coding for the mbed online compiler, on a low-memory microcontroller.
Real-time performance is very important: I want this to take less than a microsecond; 10 microseconds would be tolerable.
I'm using their Timeout library, which provides an API for calling an ISR after a specified time, but requires that the ISR be a void/void function (including a member function).
void TimeoutCallback(void) { /* do stuff that I want to do on timeout. */ } // ISR

Timeout to;
to.attach_us(&TimeoutCallback, 750); // Call TimeoutCallback in 750 us.
I created a vector of Timeout objects, which all get set at once to the same function, each with a different delay. I want to somehow pass into TimeoutCallback which Timeout object called it.
My initial thought was to overload the Timeout class to allow it to accept int function(int) function pointers, and to accept a number in the overloaded attach function that gets passed to said function pointer. However, I'm unsure whether this is actually practical given the messy (and device-specific) inheritance of the Timeout class.
Now I wonder whether there is a way to programmatically create a void/void function that wraps a void/int function and includes a changeable int which is passed to the wrapped function.
While Tony D's solution is appropriate if using the mbed Ticker class, there is an alternative method using the mbed RtosTimer.
The RtosTimer constructor takes a void* argument that is passed to the handler on timeout. The handler has the signature:
void handler(void const* n)
Where n is the pointer argument passed to the constructor and can be used to ID the specific timeout.
Unlike Ticker, where the timeout function runs in interrupt context, the RtosTimer handler runs as a thread, so it gives greater flexibility but potentially greater latency.
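A rough sketch of that idea follows; the handler signature is the one described above, but the RtosTimer construction shown in the comment is an assumption that should be checked against your mbed library version:
#include <cstdint>

// Handler: the opaque argument identifies which timeout fired.
void handler(void const* n)
{
    std::uintptr_t which = reinterpret_cast<std::uintptr_t>(n);
    // ... handle timeout number `which` ...
    (void)which;
}

// Creation (assumed API shape, one-shot timer, index encoded in the void*):
//   RtosTimer timer(handler, osTimerOnce, reinterpret_cast<void*>(i));
//   timer.start(timeout_ms);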
As your library can call member functions, you can create an adapter along the lines of:
template <typename Func, Func func>
struct Adapter
{
    Adapter(int n) : n_(n) { }
    void f() { func(n_); }
    int n_;
};
To use it:
Adapter<void(*)(int), My_Function_Expecting_An_Int> adapter(the_int);
to.attach_us(&adapter, &decltype(adapter)::f, timeout_us);
Make sure the adapter's lifetime lasts until the callback....
To call a member function:
#include <iostream>
#include <string>
#include <vector>

struct MyObj
{
    void f(int n) { std::cout << "hi " << n << "\n"; }
};

template <typename Class, typename PFunc>
struct Adapter
{
    Adapter(Class& object, PFunc pFunc, int n) : object_(object), pFunc_(pFunc), n_(n) { }
    void f() { (object_.*pFunc_)(n_); }
    Class& object_;
    PFunc pFunc_;
    int n_;
};

int main()
{
    MyObj myObj;
    Adapter<MyObj, void(MyObj::*)(int)> adapter(myObj, &MyObj::f, 43);
    adapter.f();
}

Calling templated function with type unknown until runtime

I have this function to read 1d arrays from an unformatted fortran file:
template <typename T>
void Read1DArray(T* arr)
{
    unsigned pre, post;
    file.read((char*)&pre, PREPOST_DATA);

    for(unsigned n = 0; n < (pre/sizeof(T)); n++)
        file.read((char*)&arr[n], sizeof(T));

    file.read((char*)&post, PREPOST_DATA);

    if(pre!=post)
        std::cout << "Failed read fortran 1d array." << std::endl;
}
I call this like so:
float* new_array = new float[sizeof_fortran_array];
Read1DArray(new_array);
Assume Read1DArray is part of a class, which contains an ifstream named 'file', and sizeof_fortran_array is already known. (And for those not quite so familiar with fortran unformatted writes, the 'pre' data indicates how long the array is in bytes, and the 'post' data is the same)
My issue is that I have a scenario where I may want to call this function with either a float* or a double*, but this will not be known until runtime.
Currently what I do is simply have a flag for which data type to read, and when reading the array I duplicate the code something like this, where datatype is a string set at runtime:
if(datatype=="float")
Read1DArray(my_float_ptr);
else
Read1DArray(my_double_ptr);
Can someone suggest a method of rewriting this so that I don't have to duplicate the function call with the two types? These are the only two types it would be necessary to call it with, but I have to call it a fair few times and I would rather not have this duplication all over the place.
Thanks
EDIT:
In response to the suggestion to wrap it in a call_any_of function, this wouldn't be enough, because at times I do things like this:
if(datatype=="float")
{
Read1DArray(my_float_ptr);
Do_stuff(my_float_ptr);
}
else
{
Read1DArray(my_double_ptr);
Do_stuff(my_double_ptr);
}
// More stuff happening in between
if(datatype=="float")
{
Read1DArray(my_float_ptr);
Do_different_stuff(my_float_ptr);
}
else
{
Read1DArray(my_double_ptr);
Do_different_stuff(my_double_ptr);
}
If you think about the title, you will realize that there is a contradiction: template instantiation is performed at compile time, but you want to dispatch based on information available only at runtime. You cannot instantiate a template at runtime, so that is impossible.
The approach you have taken is actually the right one: instantiate both options at compile time, and decide which one to use at runtime with the available information. That being said, you might want to rethink your design.
I imagine that not only the reading but also the processing will be different based on that runtime value, so you might want to bundle all the processing into a (possibly template) function for each one of the types and move the if further up the call hierarchy, as in the sketch below.
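For instance (a sketch under assumed names: MyReader stands for the class that owns the ifstream and Read1DArray, and Do_stuff/Do_different_stuff are the helpers from the question):
#include <string>

// Assumed shapes, mirroring the question.
struct MyReader { template <typename T> void Read1DArray(T* arr); };
template <typename T> void Do_stuff(T*);
template <typename T> void Do_different_stuff(T*);

// All the type-dependent work for one pass over the file lives here:
template <typename T>
void process_whole_file(MyReader& reader, T* ptr)
{
    reader.Read1DArray(ptr);
    Do_stuff(ptr);

    // ... more stuff happening in between ...

    reader.Read1DArray(ptr);
    Do_different_stuff(ptr);
}

// The float/double decision now happens exactly once, at the top:
void process(MyReader& reader, const std::string& datatype,
             float* my_float_ptr, double* my_double_ptr)
{
    if (datatype == "float")
        process_whole_file(reader, my_float_ptr);
    else
        process_whole_file(reader, my_double_ptr);
}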
Another approach, to avoid having to dispatch to different instantiations of the template based on the type, would be to lose some of the type safety and implement a single function that takes a void* to the allocated memory and a size argument giving the size of the element type in the array. Note that this is more fragile, and it does not solve the overall problem of having to act on the different arrays after the data is read, so I would not suggest following this path. A sketch of what it would look like follows.
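For illustration only, since the answer above argues against it, the type-erased variant might look like this (file and PREPOST_DATA as in the question, so this would live in the same class as the original Read1DArray):
void Read1DArrayRaw(void* arr, std::size_t elem_size)
{
    unsigned pre, post;
    file.read((char*)&pre, PREPOST_DATA);

    // Read element by element, using only the element size, not its type.
    char* bytes = static_cast<char*>(arr);
    for (unsigned n = 0; n < pre / elem_size; n++)
        file.read(bytes + n * elem_size, elem_size);

    file.read((char*)&post, PREPOST_DATA);
    if (pre != post)
        std::cout << "Failed read fortran 1d array." << std::endl;
}

// Usage:
//   Read1DArrayRaw(my_float_ptr,  sizeof(float));
//   Read1DArrayRaw(my_double_ptr, sizeof(double));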
Because you don't know which code path to take until runtime, you'll need to set up some kind of dynamic dispatch. Your current solution does this using an if-else which must be copied and pasted everywhere it is used.
An improvement would be to generate a function that performs the dispatch. One way to achieve this is by wrapping each code path in a member function template, and using an array of member function pointers that point to specialisations of that member function template. [Note: This is functionally equivalent to dynamic dispatch using virtual functions.]
class MyClass
{
public:
template <typename T>
T* AllocateAndRead1DArray(int sizeof_fortran_array)
{
T* ptr = new T[sizeof_fortran_array];
Read1DArray(ptr);
return ptr;
}
template <typename T>
void Read1DArrayAndDoStuff(int sizeof_fortran_array)
{
Do_stuff(AllocateAndRead1DArray<T>(sizeof_fortran_array));
}
template <typename T>
void Read1DArrayAndDoOtherStuff(int sizeof_fortran_array)
{
Do_different_stuff(AllocateAndRead1DArray<T>(sizeof_fortran_array));
}
// map a datatype to a member function that takes an integer parameter
typedef std::pair<std::string, void(MyClass::*)(int)> Action;
static const int DATATYPE_COUNT = 2;
// find the action to perform for the given datatype
void Dispatch(const Action* actions, const std::string& datatype, int size)
{
for(const Action* i = actions; i != actions + DATATYPE_COUNT; ++i)
{
if((*i).first == datatype)
{
// perform the action for the given size
return (this->*(*i).second)(size);
}
}
}
};
// map each datatype to an instantiation of Read1DArrayAndDoStuff
MyClass::Action ReadArrayAndDoStuffMap[MyClass::DATATYPE_COUNT] = {
MyClass::Action("float", &MyClass::Read1DArrayAndDoStuff<float>),
MyClass::Action("double", &MyClass::Read1DArrayAndDoStuff<double>),
};
// map each datatype to an instantiation of Read1DArrayAndDoOtherStuff
MyClass::Action ReadArrayAndDoOtherStuffMap[MyClass::DATATYPE_COUNT] = {
MyClass::Action("float", &MyClass::Read1DArrayAndDoOtherStuff<float>),
MyClass::Action("double", &MyClass::Read1DArrayAndDoOtherStuff<double>),
};
int main()
{
MyClass object;
// call MyClass::Read1DArrayAndDoStuff<float>(33)
object.Dispatch(ReadArrayAndDoStuffMap, "float", 33);
// call MyClass::Read1DArrayAndDoOtherStuff<double>(542)
object.Dispatch(ReadArrayAndDoOtherStuffMap, "double", 542);
}
If performance is important, and the possible set of types is known at compile time, there are a few further optimisations that could be performed:
Change the string to an enumeration that represents all the possible data types and index the array of actions by that enumeration.
Give the Dispatch function template parameters that allow it to generate a switch statement to call the appropriate function.
For example, this can be inlined by the compiler to produce code that is (generally) more optimal than both the above example and the original if-else version in your question.
class MyClass
{
public:
enum DataType
{
DATATYPE_FLOAT,
DATATYPE_DOUBLE,
DATATYPE_COUNT
};
static MyClass::DataType getDataType(const std::string& datatype)
{
if(datatype == "float")
{
return MyClass::DATATYPE_FLOAT;
}
return MyClass::DATATYPE_DOUBLE;
}
// find the action to perform for the given datatype
template<typename Actions>
void Dispatch(const std::string& datatype, int size)
{
switch(getDataType(datatype))
{
case DATATYPE_FLOAT: return Actions::FloatAction::apply(*this, size);
case DATATYPE_DOUBLE: return Actions::DoubleAction::apply(*this, size);
}
}
};
template<void(MyClass::*member)(int)>
struct Action
{
static void apply(MyClass& object, int size)
{
(object.*member)(size);
}
};
struct ReadArrayAndDoStuff
{
typedef Action<&MyClass::Read1DArrayAndDoStuff<float>> FloatAction;
typedef Action<&MyClass::Read1DArrayAndDoStuff<double>> DoubleAction;
};
struct ReadArrayAndDoOtherStuff
{
typedef Action<&MyClass::Read1DArrayAndDoOtherStuff<float>> FloatAction;
typedef Action<&MyClass::Read1DArrayAndDoOtherStuff<double>> DoubleAction;
};
int main()
{
MyClass object;
// call MyClass::Read1DArrayAndDoStuff<float>(33)
object.Dispatch<ReadArrayAndDoStuff>("float", 33);
// call MyClass::Read1DArrayAndDoOtherStuff<double>(542)
object.Dispatch<ReadArrayAndDoOtherStuff>("double", 542);
}