Taking predicates by value [duplicate] - c++

I'm wondering why functors are passed by copy to the algorithm functions:
template <typename T> struct summatory
{
    summatory() : result(T()) {}
    void operator()(const T& value)
    { result += value; std::cout << value << "; "; }
    T result;
};
std::array<int, 10> a {{ 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 }};
summatory<int> sum;
std::cout << "\nThe summation of: ";
std::for_each(a.begin(), a.end(), sum);
std::cout << "is: " << sum.result;
I was expecting the following output:
The summation of: 1; 1; 2; 3; 5; 8; 13; 21; 34; 55; is: 143
But sum.result contains 0, which is the default value assigned in the ctor. The only way to achieve the desired behaviour is to capture the return value of the for_each:
sum = std::for_each(a.begin(), a.end(), sum);
std::cout << "is: " << sum.result;
This is happening because the functor is passed by copy to the for_each instead of by reference:
template< class InputIt, class UnaryFunction >
UnaryFunction for_each( InputIt first, InputIt last, UnaryFunction f );
So the outer functor remains untouched, while the inner one (which is a copy of the outer) is updated and returned once the algorithm has run, and the result is copied (or moved) again after all the operations are done.
There must be a good reason to do the work this way, but I don't really see the rationale behind this design, so my questions are:
Why are the predicates of the sequence-operation algorithms passed by copy instead of by reference?
What advantages does the pass-by-copy approach offer over the pass-by-reference one?

It's mostly for historical reasons. In '98, when the whole algorithm machinery made it into the standard, references had all kinds of problems; those were eventually resolved through core and library DRs by C++03 and beyond. Also, sensible reference wrappers and an actually working bind only arrived with TR1.
Those who tried to use the algorithms with early C++98, with functions using reference parameters or returns, can recall all kinds of trouble. Self-written algorithms were also prone to hit the dreaded 'reference to reference' problem.
Passing by value at least worked fine and hardly created problems -- and Boost had ref and cref early on to help out where you needed to tweak.
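As a sketch of what that tweak looks like with today's spelling (std::ref is the standard's descendant of boost::ref; summatory here is a trimmed-down version of the questioner's functor): the algorithm still copies, but what it copies is a cheap reference wrapper, so the original object is the one being updated.
#include <algorithm>
#include <array>
#include <functional>
#include <iostream>

template <typename T> struct summatory
{
    void operator()(const T& value) { result += value; }
    T result{};
};

int main()
{
    std::array<int, 10> a{{1, 1, 2, 3, 5, 8, 13, 21, 34, 55}};
    summatory<int> sum;
    // std::ref(sum) wraps sum; for_each copies the wrapper freely,
    // but every call lands on the one original object.
    std::for_each(a.begin(), a.end(), std::ref(sum));
    std::cout << "is: " << sum.result << '\n';   // prints: is: 143
}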

This is purely a guess, but...
...let's assume for a moment it takes by reference to const. This would mean that all of your members would have to be mutable and the operator would have to be const. That just doesn't feel "right".
...let's assume for a moment it takes by reference to non-const. It would call a non-const operator, and members could be worked on just fine. But what if you want to pass an ad-hoc object? Like the result of a bind operation (even C++98 had -- ugly and simple -- bind tools)? Or the type itself does everything you need, you don't need the object afterwards, and you just want to call it like for_each(b, e, my_functor());? That won't work, since temporaries cannot bind to non-const references (there is a small sketch of this right after this answer).
So, maybe not the best, but the least bad option here is to take by value, copy it around in the process as much as needed (hopefully not too often), and when done with it, return it from for_each. This works fine with the rather low complexity of your summatory object, doesn't need added mutable stuff like the reference-to-const approach would, and works with temporaries too.
But YMMV, and so, likely, did that of the committee members; I would guess it was in the end a vote on what they thought was most likely to fit the most use cases.
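A minimal sketch of that last constraint (the names here are made up for illustration, not from the standard): a hand-written algorithm that takes its functor by non-const reference accepts an lvalue but rejects a temporary.
#include <iostream>
#include <iterator>

template <class It, class F>
void for_each_ref(It first, It last, F& f)   // functor taken by non-const reference
{
    for (; first != last; ++first) f(*first);
}

struct print { void operator()(int x) { std::cout << x << ' '; } };

int main()
{
    int a[] = {1, 2, 3};
    print p;
    for_each_ref(std::begin(a), std::end(a), p);          // OK: lvalue binds to F&
    // for_each_ref(std::begin(a), std::end(a), print{}); // error: a temporary
    //                                                     // cannot bind to F&
}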

Maybe this could be a workaround: capture the functor by reference and call it from a lambda.
std::for_each(a.begin(), a.end(), [&sum](const int& value)
{
    sum(value);
});
std::cout << "is: " << sum.result;

Related

Confusion about C++ functor/lambda argument-passing in STL algorithms

I like C++11 and its ability to combine STL algorithms with lambdas; it makes the STL much more approachable and useful to everybody. But one thing that I don't understand is what happens inside an STL algorithm (like std::accumulate) regarding object copying or referencing inside the lambda (or whatever functor you give to it).
My three questions are:
Are there any guidelines regarding whether I should care about pass-by-reference vs. pass-by-value in lambdas/functors?
Does it matter at all if you declare a lambda inside an algorithm that takes its arguments by reference ([](Type &a, Type &b){}), and will it be more optimal than a regular variant; or is it just syntactic sugar, the compiler will optimize it anyway, and I could simply omit the ampersands?
Does the C++ Standard have any provision about this?
As for question #2, a quick experiment on Godbolt's GCC page (using compilation flags -std=c++11 -Os) seems to suggest the latter, as the generated assembly from the code below is identical whether I use [](T i1, T i2) or [](T &i1, T &i2); I don't know if those results can be generalized to more complex types/objects, however.
Example #1:
#include <array>
#include <cstdlib>
#include <numeric>
template <typename T>
T vecSum(std::array<T, 4> &a) {
    return std::accumulate(a.begin(), a.end(), T(0),
        [](T i1, T i2) {
            return std::abs(i1) + std::abs(i2);
        });
}

void results() {
    std::array<int, 4> a = {1, -2, 3, -4};
    std::array<int, 4> b = {1, -2, -3, 4};
    volatile int c = vecSum(a) + vecSum(b);
}
Example #2:
#include <algorithm>
#include <array>
#include <numeric>
#include <string>
struct FatObject {
    std::array<int, 1024 * 1024> garbage;
    std::string string;

    FatObject(const std::string &str) : string(str) {
        std::fill(garbage.begin(), garbage.end(), 0xCAFEDEAD);
    }

    std::string operator+(const FatObject &rhs) const {
        return string + rhs.string;
    }
};

template <typename T>
T vecSum(std::array<T, 4> &a) {
    return std::accumulate(a.begin(), a.end(), T(0),
        [](T i1, T i2) {
            return i1 + i2;
        });
}

void results() {
    std::array<FatObject, 4> a = {
        FatObject("The "),
        FatObject("quick "),
        FatObject("brown "),
        FatObject("fox")
    };
    std::array<FatObject, 4> b = {
        FatObject("jumps "),
        FatObject("over "),
        FatObject("the "),
        FatObject("dog ")
    };
    volatile std::string c = vecSum(a) + vecSum(b);
}
Your question is quite broad, but here is my concise answer.
1) In general, the guidelines for passing by value vs. by reference in lambdas or functors are the same as they are for any regular function or method (a lambda is a functor created on the fly for you, i.e. an object with an operator()). The choice is mostly specific to your case; for example, if the lambda/functor needs read-only access to its arguments, you would typically pass a const reference.
2) Inside an algorithm that accepts a callable object as an argument (and as a template parameter), the compiler is bound to respect the rules of the language. Therefore parameters will be passed by value or by reference internally, as per the signature of the lambda/functor.
Keep in mind that copy elision may come into play, but that is a separate issue, not directly related to the fact that you are calling a lambda inside a standard library algorithm.
The example with int is too simple. I suggest you experiment with actual objects.
3) The C++ Standard provides precise definitions of the conditions under which copy elision occurs, as well as the requirements on the signature of a lambda/functor parameter for a particular Standard Library algorithm.
However, it will not be easy in general to know if the internal implementation of a particular algorithm is going to call the lambda/functor in a way that meets copy elision conditions.
Note that the requirements on the signature have some degree of flexibility; for example, in the std::accumulate documentation we have:
The signature of the function should be equivalent to the following:
Ret fun(const Type1 &a, const Type2 &b);
The signature does not need to have const &.
so you can choose to pass by value or by reference as you see fit.
For lambdas/functors the same rules apply as everywhere else in C++.
You should use a non-const reference if the intent of the function is to modify the object for the caller. The function should use const& if it is just using the object without changing it. And it should take the parameter by value if it is going to copy/move the object into its internal storage.
If you pass a small object like an int, it makes no difference whether you pass by value or by reference.
When you start passing big objects, though, it has a big impact on performance.
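A rough sketch of that last point (the Big type and countAll are made-up names, just for illustration): with a large element type, a by-value lambda parameter copies every element the algorithm visits, while the const-reference version copies nothing.
#include <numeric>
#include <string>
#include <vector>

struct Big { std::string payload = std::string(1 << 20, 'x'); };  // roughly 1 MB per object

int countAll(const std::vector<Big>& v)
{
    // By-value parameter: every call copies a Big (~1 MB) just to count it.
    // return std::accumulate(v.begin(), v.end(), 0,
    //                        [](int acc, Big) { return acc + 1; });

    // By-const-reference parameter: no copies at all.
    return std::accumulate(v.begin(), v.end(), 0,
                           [](int acc, const Big&) { return acc + 1; });
}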

Is There a Reason Standard Algorithms Take Lambdas by Value? [duplicate]

This question already has answers here:
Why the sequence-operation algorithms predicates are passed by copy?
(3 answers)
Closed 6 years ago.
So I asked a question here: Lambda Works on Latest Visual Studio, but Doesn't Work Elsewhere, to which I got the response that my code was implementation-defined, since the standard's 25.1 [algorithms.general]/10 says:
Unless otherwise specified, algorithms that take function objects as arguments are permitted to copy
those function objects freely. Programmers for whom object identity is important should consider using a
wrapper class that points to a noncopied implementation object such as reference_wrapper<T>
I'd just like to know why this is happening. We're told our whole lives to take objects by reference; why then does the standard take function objects by value and, even worse in my linked question, make copies of those objects? Is there some advantage to doing it this way that I don't understand?
The standard library assumes function objects and iterators can be copied freely.
std::ref provides a method to turn a function object into a pseudo-reference with a compatible operator() that uses reference instead of value semantics. So nothing of large value is lost.
If you have been taught all your life to take objects by reference, reconsider. Unless there is a good reason otherwise, take objects by value. Reasoning about values is far easier; references are pointers into any state anywhere in your program.
The conventional use of references, as a pointer to a local object which is not referred to by any other active reference in the context where it is used, is not something that someone reading your code, or the compiler, can presume. If you reason about references this way, they don't add a ridiculous amount of complexity to your code.
But if you reason about them that way, you are going to have bugs when your assumption is violated, and they will be subtle, gross, unexpected and horrible.
A classic example is the number of operator= implementations that break when this and the argument refer to the same object. But any function that takes two references or pointers of the same type has the same issue.
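A sketch of that classic trap (Widget here is hypothetical, deliberately buggy, and not from the answer): the assignment operator below is fine for w1 = w2, but on self-assignment (w = w) it frees the very buffer it is about to copy from, because other aliases *this.
#include <cstddef>
#include <cstring>

struct Widget {
    char* data = nullptr;
    std::size_t size = 0;

    Widget& operator=(const Widget& other) {
        delete[] data;                        // when &other == this, this also destroys other.data
        size = other.size;
        data = new char[size];
        std::memcpy(data, other.data, size);  // reads from freed memory on self-assignment
        return *this;
    }
};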
But even one reference can break your code. Let's look at sort. In pseudo-code:
void sort( Iterator start, Iterator end, Ordering order )
Now, let's make Ordering a reference:
void sort( Iterator start, Iterator end, Ordering const& order )
How about this one?
std::function<bool(int, int)> alice;
std::function<bool(int, int)> bob;
alice = [&](int x, int y) { std::swap(alice, bob); return x < y; };
bob   = [&](int x, int y) { std::swap(alice, bob); return x > y; };
Now, call sort( begin(vector), end(vector), alice ).
Every time < is called, the referred-to order object swaps meaning. Now this is pretty ridiculous, but when you took Ordering by const&, the optimizer had to take that possibility into account and rule it out on every invocation of your ordering code!
You wouldn't do the above (and in fact this particular implementation is UB, as it would violate any reasonable requirements on std::sort); but the compiler has to prove you didn't do something "like that" (change the code in ordering) every time it follows order or invokes it! Which means constantly reloading the state of order, or inlining and proving you did no such insanity.
Doing this when the ordering is taken by value is an order of magnitude harder (and basically requires something like std::ref). The optimizer has a function object, it is local, and its state is local. Anything stored within it is local, and the compiler and optimizer know exactly who can modify it legally.
Every function you write taking a const& that ever leaves its "local scope" (say, by calling a C library function) cannot assume the state behind the const& remained the same after it got back. It must reload the data from wherever the reference points.
Now, I did say pass by value unless there is a good reason. And there are many good reasons; your type being very expensive to move or copy, for example, is a great reason. Or you are writing data to it. Or you actually want it to change as you read it each time. Etc.
But the default behavior should be pass-by-value. Only move to references if you have a good reason, because the costs are distributed and hard to pin down.
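A minimal sketch of the value-semantics pattern the standard algorithms follow (my_for_each is a made-up name, not the standard's implementation): take the function object by value, work on the local copy, and hand it back to the caller.
template <class It, class F>
F my_for_each(It first, It last, F f)   // f is a local copy: nothing outside can alias its state
{
    for (; first != last; ++first)
        f(*first);
    return f;                           // the accumulated state travels back by value
}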
I'm not sure I have an answer for you, but if I have got my object lifetimes correct I think this is portable, safe and adds zero overhead or complexity:
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
// #pre f must be an r-value reference - i.e. a temporary
template<class F>
auto resist_copies(F &&f) {
    return std::reference_wrapper<F>(f);
}
void removeIntervals(std::vector<double> &values, const std::vector<std::pair<int, int>> &intervals) {
    values.resize(distance(
        begin(values),
        std::remove_if(begin(values), end(values),
            resist_copies([i = 0U, it = cbegin(intervals), end = cend(intervals)](const auto &) mutable {
                return it != end && ++i > it->first && (i <= it->second || (++it, true));
            }))));
}
int main(int argc, char **args) {
    // Intervals of indices I have to remove from values
    std::vector<std::pair<int, int>> intervals = {{1,  3},
                                                  {7,  9},
                                                  {13, 13}};
    // Vector of arbitrary values.
    std::vector<double> values = {4.2, 6.4, 2.3, 3.4, 9.1, 2.3, 0.6, 1.2, 0.3, 0.4, 6.4, 3.6, 1.4, 2.5, 7.5};
    removeIntervals(values, intervals);
    // values should now contain 4.2, 9.1, 2.3, 0.6, 6.4, 3.6, 1.4, 7.5
    std::copy(values.begin(), values.end(), std::ostream_iterator<double>(std::cout, ", "));
    std::cout << '\n';
}
}

Can the use of C++11's 'auto' improve performance?

I can see why the auto type in C++11 improves correctness and maintainability. I've read that it can also improve performance (Almost Always Auto by Herb Sutter), but I miss a good explanation.
How can auto improve performance?
Can anyone give an example?
auto can aid performance by avoiding silent implicit conversions. An example I find compelling is the following.
std::map<Key, Val> m;
// ...
for (std::pair<Key, Val> const& item : m) {
// do stuff
}
See the bug? Here we are, thinking we're elegantly taking every item in the map by const reference and using the new range-for expression to make our intent clear, but actually we're copying every element. This is because std::map<Key, Val>::value_type is std::pair<const Key, Val>, not std::pair<Key, Val>. Thus, when we (implicitly) have:
std::pair<Key, Val> const& item = *iter;
Instead of taking a reference to an existing object and leaving it at that, we have to do a type conversion. You are allowed to take a const reference to an object (or temporary) of a different type as long as there is an implicit conversion available, e.g.:
int const& i = 2.0; // perfectly OK
The type conversion is an allowed implicit conversion for the same reason you can convert a const Key to a Key, but we have to construct a temporary of the new type in order to allow for that. Thus, effectively our loop does:
std::pair<Key, Val> __tmp = *iter; // construct a temporary of the correct type
std::pair<Key, Val> const& item = __tmp; // then, take a reference to it
(Of course, there isn't actually a __tmp object; it's just there for illustration - in reality, the unnamed temporary is simply bound to item for its lifetime.)
Just changing to:
for (auto const& item : m) {
// do stuff
}
saves us a ton of copies - now that the referenced type matches the initializer type, no temporary or conversion is necessary; we can just take a direct reference.
Because auto deduces the type of the initializing expression, there is no type conversion involved. Combined with templated algorithms, this means that you can get a more direct computation than if you were to make up a type yourself – especially when you are dealing with expressions whose type you cannot name!
A typical example comes from (ab)using std::function:
std::function<bool(T, T)> cmp1 = std::bind(f, _2, 10, _1); // bad
auto cmp2 = std::bind(f, _2, 10, _1); // good
auto cmp3 = [](T a, T b){ return f(b, 10, a); }; // also good
std::stable_partition(begin(x), end(x), cmp?);
With cmp2 and cmp3, the entire algorithm can inline the comparison call, whereas if you construct a std::function object, not only can the call not be inlined, but you also have to go through the polymorphic lookup in the type-erased interior of the function wrapper.
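To make that concrete, here is a self-contained sketch under a couple of assumptions: f is a hypothetical three-argument function invented for illustration, and std::sort is used (rather than the placeholder call above) so that the binary comparator matches the algorithm's signature.
#include <algorithm>
#include <functional>
#include <vector>

// Hypothetical f, purely for illustration; the middle argument is the bound constant 10.
bool f(int a, int /*bound*/, int b) { return a < b; }

void demo(std::vector<int>& x)
{
    using namespace std::placeholders;

    std::function<bool(int, int)> cmp1 = std::bind(f, _2, 10, _1); // type-erased wrapper
    auto cmp2 = std::bind(f, _2, 10, _1);                          // concrete bind type
    auto cmp3 = [](int a, int b) { return f(b, 10, a); };          // concrete lambda type

    std::sort(x.begin(), x.end(), cmp1);   // call goes through the std::function wrapper
    std::sort(x.begin(), x.end(), cmp2);   // comparison can typically be inlined
    std::sort(x.begin(), x.end(), cmp3);   // comparison can typically be inlined
}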
Another variant on this theme is that you can say:
auto && f = MakeAThing();
This is always a reference, bound to the value of the function call expression, and never constructs any additional objects. If you didn't know the returned value's type, you might be forced to construct a new object (perhaps as a temporary) via something like T && f = MakeAThing(). (Moreover, auto && even works when the return type is not movable and the return value is a prvalue.)
There are two categories.
auto can avoid type erasure. There are unnamable types (like lambdas), and almost unnamable types (like the result of std::bind or other expression-template like things).
Without auto, you end up having to type erase the data down to something like std::function. Type erasure has costs.
std::function<void()> task1 = []{std::cout << "hello";};
auto task2 = []{std::cout << " world\n";};
task1 has type erasure overhead -- a possible heap allocation, difficulty inlining it, and virtual function table invocation overhead. task2 has none. Lambdas need auto or other forms of type deduction to store without type erasure; other types can be so complex that they only need it in practice.
Second, you can get types wrong. In some cases, the wrong type will work seemingly perfectly, but will cause a copy.
Foo const& f = expression();
will compile if expression() returns Bar const& or Bar or even Bar&, where Foo can be constructed from Bar. A temporary Foo will be created, then bound to f, and its lifetime will be extended until f goes away.
The programmer may have meant Bar const& f and not intended to make a copy there, but a copy is made regardless.
The most common example is the type of *std::map<A,B>::const_iterator, which is std::pair<A const, B> const& not std::pair<A,B> const&, but the error is a category of errors that silently cost performance. You can construct a std::pair<A, B> from a std::pair<const A, B>. (The key on a map is const, because editing it is a bad idea)
Both Barry and Kerrek SB first illustrated these two principles in their answers. This is simply an attempt to highlight the two issues in one answer, with wording that aims at the problem rather than being example-centric.
The existing three answers give examples where using auto makes it less likely to unintentionally pessimize, effectively making the code "improve performance".
There is a flip side to the coin, though. Using auto with objects that have operators that don't return the basic object can result in incorrect (yet still compilable and runnable) code. For example, this question asks how using auto gave different (incorrect) results using the Eigen library, i.e. the following lines
const auto resAuto = Ha + Vector3(0.,0.,j * 2.567);
const Vector3 resVector3 = Ha + Vector3(0.,0.,j * 2.567);
std::cout << "resAuto = " << resAuto <<std::endl;
std::cout << "resVector3 = " << resVector3 <<std::endl;
resulted in different output. Admittedly, this is mostly due to Eigen's lazy evaluation, but that code is/should be transparent to the (library) user.
While performance hasn't been greatly affected here, using auto to avoid unintentional pessimization might be classified as premature optimization, or at least wrong ;).

use of boost::ref for passing reference to functions that take values

I am confused about the use of boost::ref. I don't understand why anyone would want to do the following:
void f(int x)
{
    cout << x << endl;
    x++;
}

int main(int argc, char *argv[])
{
    int aaa = 2;
    f(boost::ref(aaa));
    cout << aaa << endl;
    exit(0);
}
What is the use of passing a ref to a function? I can always pass by value instead. Also, it's not as if the reference is actually passed: in the above, the value of aaa in main remains 2.
Where exactly is boost::ref useful?
Is it possible to use boost::ref in this scenario?
I want to pass an iterator reference to the std::sort function. Normally sort works on iterator copies - will boost::ref make it work for references also? (without any changes to std::sort)
I don't understand why anyone would want to do the following
They wouldn't. That's not what boost::ref (or these days std::ref) is for. If a function takes an argument by value, then there's no way to force it to take it by reference instead.
Where exactly is boost ref useful?
It can be used to make a function template act as if it takes an argument by reference, by instantiating the template for the reference (wrapper) type, rather than the value type:
template <typename T>
void f(T x) { ++x; }

int aaa = 0;
f(aaa);      cout << aaa << endl; // increments a copy: prints 0
f(ref(aaa)); cout << aaa << endl; // increments "aaa" itself: prints 1
A common specific use is for binding arguments to functions:
void f(int & x) {++x;}
int aaa = 0;
auto byval = bind(f, aaa); // binds a copy
auto byref = bind(f, ref(aaa)); // binds a reference
byval(); cout << aaa << endl; // increments a copy: prints 0
byref(); cout << aaa << endl; // increments "aaa" itself: prints 1
Is it possible to use boost::ref in this scenario? I want to pass an iterator reference to the std::sort function. Normally sort works on iterator copies - will boost::ref make it work for references also?
No; the reference wrapper doesn't meet the iterator requirements, so you can't use it in standard algorithms. If you could, then many algorithms would go horribly wrong if they needed to make independent copies of iterators (as many, including most sort implementations, must do).
Did you read the documentation?
It says:
The Ref library is a small library that is useful for passing references to function templates (algorithms) that would usually take copies of their arguments.
In your first example, void f(int) isn't a function template, so there is no effect: an anonymous temporary boost::reference_wrapper<int> is created and immediately converted back to an int&, from which an int is then copied to pass to your function. Hopefully the compiler will discard all this nonsense, since it has no effect.
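Spelled out as code (a sketch; std::ref is used here, but boost::ref behaves the same way for this purpose):
#include <functional>
#include <iostream>

void f(int x)            // takes its argument by value
{
    std::cout << x << '\n';
    x++;                 // modifies only the local copy
}

int main()
{
    int aaa = 2;
    // std::ref(aaa) builds a temporary reference_wrapper<int>; it converts
    // back to int&, and f then copies that int, so aaa stays 2.
    f(std::ref(aaa));
    std::cout << aaa << '\n';   // still 2
}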
You say:
... I can always pass by value instead.
which is true, but they do different things. Passing by reference:
avoids a copy, which is probably pointless for integers but is useful for large or expensive-to-copy objects
allows the caller to see what the called function did with its argument. This isn't likely to be useful for std::sort, but in general allowing in-out parameters can be useful
It may allow std::sort to act on iterator references, avoiding a copy (although iterators are typically cheap-to-copy value types anyway), but I haven't tried it. If you think it'll help you with something, why not try it and see?