I am writing a little variadic summing function (using C++20, but my question would remain the same with C++17 syntax). I would like to make the following code as short and clear as possible, but without using fold expressions (this is only a toy problem, but in later applications I would also like to avoid fold expressions):
Additive auto sum(Additive auto&& val, Additive auto&&... vals) {
auto add = [](Additive auto&& val1, Additive auto&& val2) {
return val1 + val2;
}; // necessary??
if constexpr(sizeof...(vals) == 1) {
return add(val, std::forward<decltype(vals)>(vals)...); // (1)
//return val + std::forward<decltype(vals)>(vals)...; // (2)
}
else return val + sum(std::forward<decltype(vals)>(vals)...);
}
Using line (1) the above code compiles, but it makes the definition of the 'add' lambda necessary. Line (2), however, does not compile; I get the following error with gcc: parameter packs not expanded with ‘...’. If I add parentheses around the std::forward expression in line (2), I get the following error: expected binary operator before ‘)’ token.
Is there any way to pass a parameter pack with length 1 to an operator?
Embrace the power of negative thinking and start induction with zero instead of one:
auto sum(auto &&val, auto &&...vals) {
if constexpr (sizeof...(vals) == 0)
return val;
else
return val + sum(std::forward<decltype(vals)>(vals)...);
}
The above definition has the side effect that sum(x) will now compile and return x. (In fact, you can even make the function work with no arguments, by having it return zero, but then the question arises: zero of which type? To avoid having to go there, I left this case undefined.) If you insist on sum being defined only from arity 2 upwards, you can use this instead:
auto sum(auto &&val0, auto &&val1, auto &&...vals) {
if constexpr (sizeof...(vals) == 0)
return val0 + val1;
else
return val0 + sum(std::forward<decltype(val1)>(val1),
std::forward<decltype(vals)>(vals)...);
}
However, you should probably allow the ‘vacuous’ case whenever it makes sense to do so: it makes for simpler and more general code. Notice for example how in the latter definition the addition operator appears twice: this is effectively duplicating the folding logic between the two cases (in this case it’s just one addition, so it’s relatively simple, but with more complicated operations it might be more burdensome), whereas handling the degenerate case is usually trivial and doesn’t duplicate anything.
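If one did also want the zero-argument case mentioned earlier, a sketch of my own (not part of the answer) is to pin down "zero of which type?" by making the result type an explicit template parameter of a nullary overload:
// Needs <utility> for std::forward; the nullary overload is my own addition.
template <class T>
constexpr T sum() {
    return T{};  // value-initialized "zero" of the requested type
}

auto sum(auto &&val, auto &&...vals) {
    if constexpr (sizeof...(vals) == 0)
        return val;
    else
        return val + sum(std::forward<decltype(vals)>(vals)...);
}

// usage: sum<int>() == 0, sum(1, 2, 3) == 6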
(I omitted concept annotations, as they do not seem particularly relevant to the main problem.)
template<class... Additive> decltype(auto) sum(Additive &&...val) {
return (std::forward<Additive>(val) + ...);
}
?
Off-topic: unsure about the OP's real needs, I've accidentally quick-designed something I've been thinking of from time to time. :D
#include <iostream>
#include <functional>
#include <type_traits>
template<class... Fs> struct Overloads;
template<class F, class... Fs> struct Overloads<F, Fs...>: Overloads<Fs...> {
using Fallback = Overloads<Fs...>;
constexpr Overloads(F &&f, Fs &&...fs): Fallback(std::forward<Fs>(fs)...), f(std::forward<F>(f)) {}
template<class... Args> constexpr decltype(auto) operator()(Args &&...args) const {
if constexpr(std::is_invocable_v<F, Args...>) return std::invoke(f, std::forward<Args>(args)...);
else return Fallback::operator()(std::forward<Args>(args)...);
}
private:
F f;
};
template<class... Fs> Overloads(Fs &&...fs) -> Overloads<Fs...>;
template<class F> struct Overloads<F> {
constexpr Overloads(F &&f): f(std::forward<F>(f)) {}
template<class... Args> constexpr decltype(auto) operator()(Args &&...args) const {
return std::invoke(f, std::forward<Args>(args)...);
}
private:
F f;
};
template<> struct Overloads<> {
template<class... Args> constexpr void operator()(Args &&...) const noexcept {}
};
constexpr int f(int x, int y) noexcept { return x + y; }
void g(int x) { std::cout << x << '\n'; }
template<class... Vals> decltype(auto) omg(Vals &&...vals) {
static constexpr auto fg = Overloads(f, g);
return fg(std::forward<Vals>(vals)...);
}
int main() {
omg(omg(40, 2));
}
>_<
You can unpack the one item into a variable and use that:
if constexpr (sizeof...(vals) == 1) {
auto&& only_value(std::forward<decltype(vals)>(vals)...);
return val + only_value;
}
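Putting that back into the original function, a complete sketch (assuming the same Additive concept from the question) might look like this:
Additive auto sum(Additive auto&& val, Additive auto&&... vals) {
    // base case: exactly two arguments, i.e. the pack holds one element
    if constexpr (sizeof...(vals) == 1) {
        auto&& only_value(std::forward<decltype(vals)>(vals)...);
        return val + only_value;
    }
    else {
        return val + sum(std::forward<decltype(vals)>(vals)...);
    }
}
As in the question, this version still requires at least two arguments.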
Looking for a way to implement a universal generic memoization function which will take a function and return the memoized version of the same?
Looking for something like the @memo decorator (from Norvig's site) in Python.
def memo(f):
table = {}
def fmemo(*args):
if args not in table:
table[args] = f(*args)
return table[args]
fmemo.memo = table
return fmemo
Going more general, is there a way to express generic and reusable decorators in C++, possibly using the new features of C++11?
A compact one returning a lambda:
template <typename R, typename... Args>
std::function<R (Args...)> memo(R (*fn)(Args...)) {
std::map<std::tuple<Args...>, R> table;
return [fn, table](Args... args) mutable -> R {
auto argt = std::make_tuple(args...);
auto memoized = table.find(argt);
if(memoized == table.end()) {
auto result = fn(args...);
table[argt] = result;
return result;
} else {
return memoized->second;
}
};
}
In C++14, one can use generalized return type deduction to avoid the extra indirection imposed by returning std::function.
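For instance, a C++14 sketch of that variant (mine, not the answer's) could look like this:
// Needs <map> and <tuple>; the return type of memo is now deduced.
template <typename R, typename... Args>
auto memo(R (*fn)(Args...)) {
    std::map<std::tuple<Args...>, R> table;
    return [fn, table](Args... args) mutable -> R {
        auto argt = std::make_tuple(args...);
        auto memoized = table.find(argt);
        if (memoized == table.end()) {
            auto result = fn(args...);
            table[argt] = result;
            return result;
        }
        return memoized->second;
    };
}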
Making this fully general, permitting passing arbitrary function objects without wrapping them in std::function first is left as an exercise for the reader.
The right way to do memoization in C++ is to mix the Y-combinator in.
Your base function needs a modification. Instead of calling itself directly, it takes a templated reference to itself as its first argument (or a std::function<Same_Signature> recursion as its first argument).
We start with a Y-combinator. Then we add in a cache on the operator() and rename it to memoizer, and give it a fixed signature (for the table).
The only thing left is to write a tuple_hash<template<class...>class Hash> that does a hash on a tuple.
The type of the function that can be memoized is (((Args...)->R), Args...) -> R, which makes the memoizer of type ( (((Args...) -> R), Args...) -> R ) -> ((Args...) -> R). Having a Y-combinator around to produce a 'traditional' recursive implementation can also be useful.
Note that if the function memoized modifies its args during a call, the memoizer will cache the results in the wrong spot.
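For reference, here is a minimal sketch of my own (not the hard-coded hash from the linked live example; it uses C++17 for brevity) of what such a tuple_hash could look like:
#include <cstddef>
#include <functional>
#include <tuple>

// Folds the per-element hashes together with a boost::hash_combine-style mixer.
template<template<class...> class Hash = std::hash>
struct tuple_hash {
    template<class... Ts>
    std::size_t operator()(std::tuple<Ts...> const& t) const {
        std::size_t seed = 0;
        std::apply([&](Ts const&... es) {
            ((seed ^= Hash<Ts>{}(es) + 0x9e3779b9 + (seed << 6) + (seed >> 2)), ...);
        }, t);
        return seed;
    }
};
With that in place, the memoizer itself: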
struct wrap {};
template<class Sig, class F, template<class...>class Hash=std::hash>
struct memoizer;
template<class R, class...Args, class F, template<class...>class Hash>
struct memoizer<R(Args...), F, Hash> {
using base_type = F;
private:
F base;
mutable std::unordered_map< std::tuple<std::decay_t<Args>...>, R, tuple_hash<Hash> > cache;
public:
template<class... Ts>
R operator()(Ts&&... ts) const
{
auto args = std::make_tuple(ts...);
auto it = cache.find( args );
if (it != cache.end())
return it->second;
auto&& retval = base(*this, std::forward<Ts>(ts)...);
cache.emplace( std::move(args), retval );
return decltype(retval)(retval);
}
template<class... Ts>
R operator()(Ts&&... ts)
{
auto args = std::tie(ts...);
auto it = cache.find( args );
if (it != cache.end())
return it->second;
auto&& retval = base(*this, std::forward<Ts>(ts)...);
cache.emplace( std::move(args), retval );
return decltype(retval)(retval);
}
memoizer(memoizer const&)=default;
memoizer(memoizer&&)=default;
memoizer& operator=(memoizer const&)=default;
memoizer& operator=(memoizer&&)=default;
memoizer() = delete;
template<typename L>
memoizer( wrap, L&& f ):
base( std::forward<L>(f) )
{}
};
template<class Sig, class F>
memoizer<Sig, std::decay_t<F>> memoize( F&& f ) { return {wrap{}, std::forward<F>(f)}; }
live example with a hard-coded hash function based off this SO post.
auto fib = memoize<size_t(size_t)>(
[](auto&& fib, size_t i)->size_t{
if (i<=1) return 1;
return fib(i-1)+fib(i-2);
}
);
I struggled with the same problem. I created a macro that also supports recursion (with a small modification in the recursive code). Here it is:
#include <map>
#include <tuple>
#define MEMOIZATOR(N, R, ...) \
R _ ## N (__VA_ARGS__); \
std::map<std::tuple<__VA_ARGS__>, R> _memo_ ## N; \
template <typename ... Args> \
R N (Args ... args) { \
auto& _memo = _memo_ ## N; \
auto result = _memo.find(std::make_tuple(args...)); \
if (result != _memo.end()) { \
return result->second; \
} \
else { \
auto result = _ ## N (args...); \
_memo[std::make_tuple(args...)] = result; \
return result; \
} \
}
The usage is really simple:
MEMOIZATOR(fibonacci, long int, int);
long int _fibonacci(int n) { // note the leading underscore
// this makes the recursive calls go through the wrapper
if (n == 1 or n == 2) {
return 1;
}
return fibonacci(n - 1) + fibonacci(n - 2);
}
fibonacci(40) // uses memoizator so it works in linear time
// (try it with and without memoizator)
See it in action: http://ideone.com/C3JEUT :)
Although @KerrekSB posted a link to another answer, I thought I'd throw my answer in the ring as well (it's probably slightly less complicated than the linked answer, although in essence it's very similar):
#include <functional>
#include <map>
#include <tuple>
#include <utility>
/*! \brief A template functor class that can be utilized to memoize any
* given function taking any number of arguments.
*/
template <typename R, typename... Args>
struct memoize_wrapper
{
private:
std::map<std::tuple<Args...>, R> memo_;
std::function<R(Args...)> func_;
public:
/*! \brief Auto memoization constructor.
*
* \param func the std::function to be memoized.
*/
memoize_wrapper(std::function<R(Args...)> func)
: func_(func)
{ }
/*! \brief Memoization functor implementation.
*
* \param a Argument values that match the argument types for the
* (previously) supplied function.
* \return A value of return type R equivalent to calling func(a...).
* If this function has been called with these parameters
* previously, this will take O(log n) time.
*/
R operator()(Args&&... a)
{
auto tup = std::make_tuple(std::forward<Args>(a)...);
auto it = memo_.find(tup);
if(it != memo_.end()) {
return it->second;
}
R val = func_(a...);
memo_.insert(std::make_pair(std::move(tup), val));
return val;
}
}; //end struct memoize_wrapper
Edit: Example usage:
Edit2: As pointed out, this doesn't work with recursive functions.
#include "utility/memoize_wrapper.hpp"
#include <memory>
#include <vector>
#include <algorithm>
#include <iostream>
long factorial(long i)
{
long result = 1;
long current = 2;
while(current <= i) {
result *= current;
++current;
}
return result;
}
int main()
{
std::vector<int> arg {10, 9, 8, 7, 6, 10, 9, 8, 7, 6};
std::transform(arg.begin(), arg.end(), arg.begin(), memoize_wrapper<long, long>(factorial));
for(long i : arg) {
std::cout << i << "\n";
}
}
Below is a (thread safe) C++17 function template that acts like std::invoke but memoizes the result:
#include <functional>
#include <map>
#include <mutex>
#include <tuple>
#include <type_traits>
/**
 * @brief Drop-in replacement for std::invoke which memoizes the return
 *        result.
 *
 * @param[in] function The function whose result needs to be cached
 * @param[in] args The function arguments
 *
 * @tparam Function The function type
 * @tparam Args The argument types
 *
 * @return A value obtained either by evaluating the function, or by
 *         recalling it from a cache.
 *
 * @note The function provided must not be a type-erased function object
 *       like a raw function pointer or std::function, because this
 *       function depends on the uniqueness of the Function template
 *       parameter. If you were to call invoke_memoized(f, a) and
 *       invoke_memoized(g, b) in the same translation unit, where f and g
 *       were function pointers of the same type, and a and b were
 *       arguments of the same type, you'd end up using the same cache for
 *       both functions f and g. A reasonable attempt is made to detect
 *       these misuse cases via static_assert.
 */
template<typename Function, typename... Args>
auto invoke_memoized(Function function, Args... args)
{
using key_type = std::tuple<Args...>;
using value_type = std::invoke_result_t<Function, Args...>;
static_assert(! std::is_same_v<Function, std::function<value_type(Args...)>>,
"cannot memoize on std::function (use a lambda instead)");
static_assert(! std::is_same_v<Function, value_type(*)(Args...)>,
"cannot memoize on function pointer (use a lambda instead)");
static std::mutex mutex;
static std::map<key_type, value_type> cache;
auto key = std::tuple(args...);
auto lock = std::lock_guard<std::mutex>(mutex);
if (cache.count(key))
{
return cache[key];
}
return cache[key] = std::apply(function, key);
}
You can use it like this:
auto c = invoke_memoized(std::plus<>(), 1, 2.3);
A static cache is maintained for each combination of the function object and argument types. As noted above, std::function and raw function pointers are rejected, as type-erased functions would get their caches mixed up. You can easily modify this function to impose limits on the cache size.
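For instance (a rough sketch of my own; the bound and the eviction policy are arbitrary), the tail of invoke_memoized could become:
// Hypothetical modification: evict everything once the cache grows past an
// arbitrary limit before inserting a new entry (an LRU policy would be nicer).
constexpr std::size_t max_cache_size = 100000;  // made-up bound
if (cache.count(key))
{
    return cache[key];
}
if (cache.size() >= max_cache_size)
{
    cache.clear();
}
return cache[key] = std::apply(function, key);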
I am new to C++11. I am writing the following recursive lambda function, but it doesn't compile.
sum.cpp
#include <iostream>
#include <functional>
auto term = [](int a)->int {
return a*a;
};
auto next = [](int a)->int {
return ++a;
};
auto sum = [term,next,&sum](int a, int b)mutable ->int {
if(a>b)
return 0;
else
return term(a) + sum(next(a),b);
};
int main(){
std::cout<<sum(1,10)<<std::endl;
return 0;
}
compilation error:
vimal#linux-718q:~/Study/09C++/c++0x/lambda> g++ -std=c++0x sum.cpp
sum.cpp: In lambda function:
sum.cpp:18:36: error: ‘((<lambda(int, int)>*)this)-><lambda(int, int)>::sum’ cannot be used as a function
gcc version
gcc version 4.5.0 20091231 (experimental) (GCC)
But if I change the declaration of sum() as below, it works:
std::function<int(int,int)> sum = [term,next,&sum](int a, int b)->int {
if(a>b)
return 0;
else
return term(a) + sum(next(a),b);
};
Could someone please throw light on this?
Think about the difference between the auto version and the fully specified type version. The auto keyword infers its type from whatever it's initialized with, but what you're initializing it with needs to know what its type is (in this case, the lambda closure needs to know the types it's capturing). Something of a chicken-and-egg problem.
On the other hand, a fully specified function object's type doesn't need to "know" anything about what is being assigned to it, and so the lambda's closure can likewise be fully informed about the types it's capturing.
Consider this slight modification of your code and it may make more sense:
std::function<int(int, int)> sum;
sum = [term, next, &sum](int a, int b) -> int {
if (a > b)
return 0;
else
return term(a) + sum(next(a), b);
};
Obviously, this wouldn't work with auto. Recursive lambda functions work perfectly well (at least they do in MSVC, where I have experience with them); it's just that they aren't really compatible with type inference.
The trick is to feed in the lambda implementation to itself as a parameter, not by capture.
const auto sum = [term, next](int a, int b) {
auto sum_impl = [term, next](int a, int b, auto& sum_ref) mutable {
if (a > b) {
return 0;
}
return term(a) + sum_ref(next(a), b, sum_ref);
};
return sum_impl(a, b, sum_impl);
};
All problems in computer science can be solved by another level of indirection. I first found this easy trick at http://pedromelendez.com/blog/2015/07/16/recursive-lambdas-in-c14/
It does require C++14 while the question is about C++11, but it is perhaps still interesting to most.
Here's the full example at Godbolt.
Going via std::function is also possible but can result in slower code. But not always. Have a look at the answers to std::function vs template
This is not just a peculiarity of C++; it maps directly to the mathematics of lambda calculus. From Wikipedia:
Lambda calculus cannot express this as directly as some other notations: all functions are anonymous in lambda calculus, so we can't refer to a value which is yet to be defined, inside the lambda term defining that same value. However, recursion can still be achieved by arranging for a lambda expression to receive itself as its argument value.
With C++14, it is now quite easy to make an efficient recursive lambda without having to incur the additional overhead of std::function, in just a few lines of code:
template <class F>
struct y_combinator {
F f; // the lambda will be stored here
// a forwarding operator():
template <class... Args>
decltype(auto) operator()(Args&&... args) const {
// we pass ourselves to f, then the arguments.
return f(*this, std::forward<Args>(args)...);
}
};
// helper function that deduces the type of the lambda:
template <class F>
y_combinator<std::decay_t<F>> make_y_combinator(F&& f) {
return {std::forward<F>(f)};
}
with which your original sum attempt becomes:
auto sum = make_y_combinator([term,next](auto sum, int a, int b) -> int {
if (a>b) {
return 0;
}
else {
return term(a) + sum(next(a),b);
}
});
In C++17, with CTAD, we can add a deduction guide:
template <class F> y_combinator(F) -> y_combinator<F>;
This obviates the need for the helper function. We can just write y_combinator{[](auto self, ...){...}} directly.
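For example (a sketch of mine, reusing the sum from the question):
// C++17: the deduction guide lets us construct the combinator directly.
auto sum = y_combinator{[term, next](auto sum, int a, int b) -> int {
    if (a > b) return 0;
    return term(a) + sum(next(a), b);
}};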
In C++20, with CTAD for aggregates, the deduction guide won't be necessary.
In C++23, with deducing this, you don't need a Y-combinator at all:
auto sum = [term,next](this auto const& sum, int a, int b) -> int {
if (a>b) {
return 0;
}
else {
return term(a) + sum(next(a),b);
}
};
I have another solution, but it works only with stateless lambdas:
void f()
{
static int (*self)(int) = [](int i)->int { return i>0 ? self(i-1)*i : 1; };
std::cout<<self(10);
}
The trick here is that lambdas can access static variables, and you can convert stateless ones to function pointers.
You can use it with standard lambdas:
void g()
{
int sum;
auto rec = [&sum](int i) -> int
{
static int (*inner)(int&, int) = [](int& _sum, int i)->int
{
_sum += i;
return i>0 ? inner(_sum, i-1)*i : 1;
};
return inner(sum, i);
};
}
It works in GCC 4.7.
To make lambda recursive without using external classes and functions (like std::function or fixed-point combinator) one can use the following construction in C++14 (live example):
#include <utility>
#include <list>
#include <memory>
#include <iostream>
int main()
{
struct tree
{
int payload;
std::list< tree > children = {}; // std::list of incomplete type is allowed
};
std::size_t indent = 0;
// indication of result type here is essential
const auto print = [&] (const auto & self, const tree & node) -> void
{
std::cout << std::string(indent, ' ') << node.payload << '\n';
++indent;
for (const tree & t : node.children) {
self(self, t);
}
--indent;
};
print(print, {1, {{2, {{8}}}, {3, {{5, {{7}}}, {6}}}, {4}}});
}
prints:
1
2
8
3
5
7
6
4
Note that the result type of the lambda must be specified explicitly.
You can make a lambda function call itself recursively. The only thing you need to do is to reference it through a function wrapper so that the compiler knows its return and argument types (you can't capture a variable -- the lambda itself -- that hasn't been defined yet).
function<int (int)> f;
f = [&f](int x) {
if (x == 0) return 0;
return x + f(x-1);
};
printf("%d\n", f(10));
Be very careful that nothing outlives the scope of the wrapper f (the lambda captures it by reference).
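For instance (a made-up illustration; make_dangling and its body are mine, not the answer's):
#include <functional>

// The lambda captures the local wrapper f by reference, so any copy of the
// wrapper that outlives this scope ends up calling through a dangling
// reference, which is undefined behaviour.
std::function<int (int)> make_dangling() {
    std::function<int (int)> f;
    f = [&f](int x) {
        if (x == 0) return 0;
        return x + f(x - 1);
    };
    std::function<int (int)> copy = f;  // copies the closure, not the referent
    return copy;  // f is destroyed here; calling the result later is UB
}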
I ran a benchmark comparing a recursive function vs a recursive lambda function using the std::function<> capture method. With full optimizations enabled on clang version 4.1, the lambda version ran significantly slower.
#include <iostream>
#include <functional>
#include <chrono>
uint64_t sum1(int n) {
return (n <= 1) ? 1 : n + sum1(n - 1);
}
std::function<uint64_t(int)> sum2 = [&] (int n) {
return (n <= 1) ? 1 : n + sum2(n - 1);
};
auto const ITERATIONS = 10000;
auto const DEPTH = 100000;
template <class Func, class Input>
void benchmark(Func&& func, Input&& input) {
auto t1 = std::chrono::high_resolution_clock::now();
for (auto i = 0; i != ITERATIONS; ++i) {
func(input);
}
auto t2 = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(t2-t1).count();
std::cout << "Duration: " << duration << std::endl;
}
int main() {
benchmark(sum1, DEPTH);
benchmark(sum2, DEPTH);
}
Produces results:
Duration: 0 // regular function
Duration: 4027 // lambda function
(Note: I also confirmed with a version that took the inputs from cin, so as to eliminate compile time evaluation)
Clang also produces a compiler warning:
main.cc:10:29: warning: variable 'sum2' is uninitialized when used within its own initialization [-Wuninitialized]
Which is expected, and safe, but should be noted.
It's great to have a solution in our toolbelts, but I think the language will need a better way to handle this case if performance is to be comparable to current methods.
Note:
As a commenter pointed out, it seems the latest version of VC++ has found a way to optimize this to the point of equal performance. Maybe we don't need a better way to handle this after all (except for syntactic sugar).
Also, as some other SO posts have outlined in recent weeks, the performance of std::function<> itself may be the cause of the slowdown versus calling the function directly, at least when the lambda capture is too large to fit into the library-optimized buffer that std::function uses for small functors (I guess kind of like the various short-string optimizations?).
Here is a refined version of the Y-combinator solution based on one proposed by @Barry.
template <class F>
struct recursive {
F f;
template <class... Ts>
decltype(auto) operator()(Ts&&... ts) const { return f(std::ref(*this), std::forward<Ts>(ts)...); }
template <class... Ts>
decltype(auto) operator()(Ts&&... ts) { return f(std::ref(*this), std::forward<Ts>(ts)...); }
};
template <class F> recursive(F) -> recursive<F>;
auto const rec = [](auto f){ return recursive{std::move(f)}; };
To use this, one could do the following
auto fib = rec([&](auto&& fib, int i) {
// implementation detail omitted.
});
It is similar to the let rec keyword in OCaml, although not the same.
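For a concrete filling of that omitted detail (the fib body below is mine, not the answer's):
auto fib = rec([](auto&& fib, int i) -> int {
    // classic Fibonacci; the explicit -> int avoids a return-type-deduction cycle
    if (i < 2) return i;
    return fib(i - 1) + fib(i - 2);
});
// fib(10) == 55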
In C++23 deducing this (P0847) will be added:
auto f = [](this auto& self, int i) -> int
{
return i > 0 ? self(i - 1) + i : 0;
};
For now it's only available in EDG eccp and (partially) in MSVC:
https://godbolt.org/z/f3E3xT3fY
This is a slightly simpler implementation of the fixpoint operator which makes it a little more obvious exactly what's going on.
#include <iostream>
#include <functional>
using namespace std;
template<typename T, typename... Args>
struct fixpoint
{
typedef function<T(Args...)> effective_type;
typedef function<T(const effective_type&, Args...)> function_type;
function_type f_nonr;
T operator()(Args... args) const
{
return f_nonr(*this, args...);
}
fixpoint(const function_type& p_f)
: f_nonr(p_f)
{
}
};
int main()
{
auto fib_nonr = [](const function<int(int)>& f, int n) -> int
{
return n < 2 ? n : f(n-1) + f(n-2);
};
auto fib = fixpoint<int,int>(fib_nonr);
for (int i = 0; i < 6; ++i)
{
cout << fib(i) << '\n';
}
}
C++14:
Here is a recursive, anonymous, stateless (no-capture) generic set of lambdas that outputs all the numbers from 1 to 20:
([](auto f, auto n, auto m) {
f(f, n, m);
})(
[](auto f, auto n, auto m) -> void
{
cout << typeid(n).name() << '\n'; // 'el' in the original presumably aliased a newline
cout << n << '\n';
if (n<m)
f(f, ++n, m);
},
1, 20);
If I understand correctly, this is using the Y-combinator solution.
And here is the sum(n, m) version
auto sum = [](auto n, auto m) {
return ([](auto f, auto n, auto m) {
int res = f(f, n, m);
return res;
})(
[](auto f, auto n, auto m) -> int
{
if (n > m)
return 0;
else {
int sum = n + f(f, n + 1, m);
return sum;
}
},
n, m); };
auto result = sum(1, 10); //result == 55
Here's proof that a recursive lambda with a small body has almost the same performance as a usual recursive function which can call itself directly.
#include <iostream>
#include <chrono>
#include <type_traits>
#include <functional>
#include <atomic>
#include <cmath>
using namespace std;
using namespace chrono;
unsigned recursiveFn( unsigned x )
{
if( x ) [[likely]]
return recursiveFn( x - 1 ) + recursiveFn( x - 1 );
else
return 0;
};
atomic_uint result;
int main()
{
auto perf = []( function<void ()> fn ) -> double
{
using dur_t = high_resolution_clock::duration;
using urep_t = make_unsigned_t<dur_t::rep>;
high_resolution_clock::duration durMin( (urep_t)-1 >> 1 );
for( unsigned r = 10; r--; )
{
auto start = high_resolution_clock::now();
fn();
dur_t dur = high_resolution_clock::now() - start;
if( dur < durMin )
durMin = dur;
}
return durMin.count() / 1.0e9;
};
auto recursiveLambda = []( auto &self, unsigned x ) -> unsigned
{
if( x ) [[likely]]
return self( self, x - 1 ) + self( self, x - 1 );
else
return 0;
};
constexpr unsigned DEPTH = 28;
double
tLambda = perf( [&]() { ::result = recursiveLambda( recursiveLambda, DEPTH ); } ),
tFn = perf( [&]() { ::result = recursiveFn( DEPTH ); } );
cout << trunc( 1000.0 * (tLambda / tFn - 1.0) + 0.5 ) / 10.0 << "%" << endl;
}
For my AMD Zen1 CPU with current MSVC the recursiveFn is about 10% faster. For my Phenom II x4 945 with g++ 11.1.x both functions have the same performance.
Keep in mind that this is almost the worst case, since the body of the function is very small; if it were larger, the recursive call itself would make up a smaller share of the total work.
You're trying to capture a variable (sum) you're in the middle of defining. That can't be good.
I don't think truly self-recursive C++0x lambdas are possible. You should be able to capture other lambdas, though.
Here is the final answer for the OP. Anyway, Visual Studio 2010 does not support capturing global variables, and you do not need to capture them, because a global variable is accessible globally by definition. The following answer uses local variables instead.
#include <functional>
#include <iostream>
template<typename T>
struct t2t
{
typedef T t;
};
template<typename R, typename V1, typename V2>
struct fixpoint
{
typedef std::function<R (V1, V2)> func_t;
typedef std::function<func_t (func_t)> tfunc_t;
typedef std::function<func_t (tfunc_t)> yfunc_t;
class loopfunc_t {
public:
func_t operator()(loopfunc_t v)const {
return func(v);
}
template<typename L>
loopfunc_t(const L &l):func(l){}
typedef V1 Parameter1_t;
typedef V2 Parameter2_t;
private:
std::function<func_t (loopfunc_t)> func;
};
static yfunc_t fix;
};
template<typename R, typename V1, typename V2>
typename fixpoint<R, V1, V2>::yfunc_t fixpoint<R, V1, V2>::fix = [](tfunc_t f) -> func_t {
return [f](fixpoint<R, V1, V2>::loopfunc_t x){ return f(x(x)); }
([f](fixpoint<R, V1, V2>::loopfunc_t x) -> fixpoint<R, V1, V2>::func_t{
auto &ff = f;
return [ff, x](t2t<decltype(x)>::t::Parameter1_t v1,
t2t<decltype(x)>::t::Parameter2_t v2){
return ff(x(x))(v1, v2);
};
});
};
int _tmain(int argc, _TCHAR* argv[])
{
auto term = [](int a)->int {
return a*a;
};
auto next = [](int a)->int {
return ++a;
};
auto sum = fixpoint<int, int, int>::fix(
[term,next](std::function<int (int, int)> sum1) -> std::function<int (int, int)>{
auto &term1 = term;
auto &next1 = next;
return [term1, next1, sum1](int a, int b)mutable ->int {
if(a>b)
return 0;
else
return term1(a) + sum1(next1(a),b);
};
});
std::cout<<sum(1,10)<<std::endl; //385
return 0;
}
This answer is inferior to Yankes' one, but still, here it goes:
using dp_type = void (*)();
using fp_type = void (*)(dp_type, unsigned, unsigned);
fp_type fp = [](dp_type dp, unsigned const a, unsigned const b) {
::std::cout << a << ::std::endl;
return reinterpret_cast<fp_type>(dp)(dp, b, a + b);
};
fp(reinterpret_cast<dp_type>(fp), 0, 1);
You need a fixed-point combinator. See this, or look at the following code:
//As decltype(variable)::member_name is invalid currently,
//the following template is a workaround.
//Usage: t2t<decltype(variable)>::t::member_name
template<typename T>
struct t2t
{
typedef T t;
};
template<typename R, typename V>
struct fixpoint
{
typedef std::function<R (V)> func_t;
typedef std::function<func_t (func_t)> tfunc_t;
typedef std::function<func_t (tfunc_t)> yfunc_t;
class loopfunc_t {
public:
func_t operator()(loopfunc_t v)const {
return func(v);
}
template<typename L>
loopfunc_t(const L &l):func(l){}
typedef V Parameter_t;
private:
std::function<func_t (loopfunc_t)> func;
};
static yfunc_t fix;
};
template<typename R, typename V>
typename fixpoint<R, V>::yfunc_t fixpoint<R, V>::fix =
[](fixpoint<R, V>::tfunc_t f) -> fixpoint<R, V>::func_t {
fixpoint<R, V>::loopfunc_t l = [f](fixpoint<R, V>::loopfunc_t x) ->
fixpoint<R, V>::func_t{
//f cannot be captured since it is not a local variable
//of this scope. We need a new reference to it.
auto &ff = f;
//We need struct t2t because template parameter
//V is not accessible at this level.
return [ff, x](t2t<decltype(x)>::t::Parameter_t v){
return ff(x(x))(v);
};
};
return l(l);
};
int _tmain(int argc, _TCHAR* argv[])
{
int v = 0;
std::function<int (int)> fac =
fixpoint<int, int>::fix([](std::function<int (int)> f)
-> std::function<int (int)>{
return [f](int i) -> int{
if(i==0) return 1;
else return i * f(i-1);
};
});
int i = fac(10);
std::cout << i; //3628800
return 0;
}