We had a function that used a non-capturing lambda internal to itself, e.g.:
void foo() {
auto bar = [](int a, int b){ return a + b; };
// code using bar(x,y) a bunch of times
}
Now the functionality implemented by the lambda is needed elsewhere, so I am going to lift the lambda out of foo() into global/namespace scope. I can either leave it as a lambda, making the move a simple copy-and-paste, or change it to a proper function:
auto bar = [](int a, int b){ return a + b; }; // option 1
int bar(int a, int b){ return a + b; } // option 2
void foo() {
// code using bar(x,y) a bunch of times
}
Changing it to a proper function is trivial, but it made me wonder whether there is some reason not to leave it as a lambda. Is there any reason not to just use lambdas everywhere instead of "regular" global functions?
There's one very important reason not to use global lambdas: because it's not normal.
C++'s regular function syntax has been around since the days of C. Programmers have known for decades what that syntax means and how such functions behave (though admittedly that whole function-to-pointer decay thing sometimes bites even seasoned programmers). If a C++ programmer of any skill level beyond "utter newbie" sees a function definition, they know what they're getting.
A global lambda is a different beast altogether, with different behavior from a regular function. A lambda is an object, while a function is not. A lambda has a type, but that type is a class type, not a function type. And so forth.
So now, you've raised the bar in communicating with other programmers. A C++ programmer needs to understand lambdas if they're going to understand what this function is doing. And yes, this is 2019, so a decent C++ programmer should have an idea what a lambda looks like. But it is still a higher bar.
And even if they understand it, the question on that programmer's mind will be... why did the writer of this code write it that way? And if you don't have a good answer for that question (for example, because you explicitly want to forbid overloading and ADL, as in Ranges customization points), then you should use the common mechanism.
Prefer expected solutions to novel ones where appropriate. Use the least complicated method of getting your point across.
I can think of a few reasons you'd want to avoid global lambdas as drop-in replacements for regular functions:
regular functions can be overloaded; lambdas cannot (there are techniques to simulate this, however)
Although they are function-like, even a non-capturing lambda like this is an object and will occupy memory (generally 1 byte when nothing is captured).
as pointed out in the comments, modern compilers will optimize this storage away under the as-if rule
"Why shouldn't I use lambdas to replace stateful functors (classes)?"
classes simply have fewer restrictions than lambdas and should therefore be the first thing you reach for
(public/private data, overloading, helper methods, etc.)
if the lambda has state, then it is all the more difficult to reason about when it becomes global.
We should prefer to create an instance of a class at the narrowest possible scope
a non-capturing lambda can at least be converted to a function pointer (if a bit awkwardly, via unary + or an explicit cast), but it is impossible for a lambda that captures anything.
classes give us a straightforward way to create function pointers, and they're also what many programmers are more comfortable with
Lambdas with any capture cannot be default-constructed, even in C++20 (before C++20, no lambda had a usable default constructor). The sketch after this list illustrates a few of these differences.
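A rough sketch of some of those points (the names add, add_l, and demo are invented for the example):
int add(int a, int b) { return a + b; }            // a regular function...
double add(double a, double b) { return a + b; }   // ...can be overloaded

auto add_l = [](int a, int b) { return a + b; };   // a lambda is one object; it cannot be overloaded

void demo()
{
    // a non-capturing lambda converts to a function pointer (unary + forces the conversion)
    int (*fp)(int, int) = +[](int a, int b) { return a + b; };

    int c = 1;
    // int (*fp2)(int) = [c](int a) { return a + c; };   // error: a capturing lambda does not convert

    static_assert(sizeof(add_l) >= 1, "even a stateless lambda is an object that occupies storage");
    (void)fp; (void)c;
}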
Is there any reason not to just use lambdas everywhere instead of "regular" global functions?
A problem of a certain level of complexity requires a solution of at least the same complexity. But if there is a less complex solution for the same problem, then there is really no justification for using the more complex one. Why introduce complexity you don't need?
Between a lambda and a function, a function is simply the less complex kind of entity of the two. You don't have to justify not using a lambda. You have to justify using one. A lambda expression introduces a closure type, which is an unnamed class type with all the usual special member functions, a function call operator, and, in this case, an implicit conversion operator to function pointer, and creates an object of that type. Copy-initializing a global variable from a lambda expression simply does a lot more than just defining a function. It defines a class type with six implicitly-declared functions, defines two more operator functions, and creates an object. The compiler has to do a lot more. If you don't need any of the features of a lambda, then don't use a lambda…
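To make that concrete, here is roughly (and only roughly) what the compiler has to generate for option 1; the name bar_closure is invented, since the real closure type is unnamed:
// roughly what "auto bar = [](int a, int b){ return a + b; };" amounts to:
struct bar_closure                          // the real closure type has no name
{
    int operator()(int a, int b) const { return a + b; }

    using fn_ptr = int (*)(int, int);
    operator fn_ptr() const { return &invoke; }   // conversion to function pointer
private:
    static int invoke(int a, int b) { return a + b; }
    // ...plus the implicitly declared special member functions
};
bar_closure bar{};                          // and an object of that type

int bar2(int a, int b) { return a + b; }    // option 2 is just this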
After asking, I thought of a reason to not do this: Since these are variables, they are prone to Static Initialization Order Fiasco (https://isocpp.org/wiki/faq/ctors#static-init-order), which could cause bugs down the line.
Is there some reason not to leave it as a lambda? Is there any reason not to just use lambdas everywhere instead of "regular" global functions?
We are used to functions rather than global functors, so a global lambda breaks coherency and the principle of least astonishment.
The main differences are:
functions can be overloaded, whereas functors cannot.
functions can be found by ADL; functors (and hence lambdas) cannot (illustrated in the sketch below).
Lambdas are anonymous functions.
If you are using a named lambda, it means you are basically using a named anonymous function. To avoid this oxymoron, you might as well use a function.
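A small sketch of the ADL point (the namespace lib and the type Widget are invented for illustration):
namespace lib {
    struct Widget {};
    void process(Widget) {}                 // a free function: found via ADL
    // auto process2 = [](Widget) {};       // a lambda object would not be found via ADL
}

int main() {
    lib::Widget w;
    process(w);        // OK: ADL looks in namespace lib because w's type lives there
    // process2(w);    // error: ADL only finds functions, not variables such as lambda objects
}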
Related
The definition and application of boost::bind are clearly outlined on the Boost website, yet I could hardly find what the benefit of using it over a normal function call is. Or, to put it simply: in which scenarios might it come in handy?
Sometimes you have a set of arguments that you are going to pass to the function, but you wish to call the function later without needing to pass the arguments that are already known. One reason to need this may be because the call may need to conform to an interface that doesn't allow those arguments. This is typical in the (functor style) "callback" idiom.
That situation can be solved by defining a class that stores the arguments as members and defines a function call operator that delegates to the original function, passing the stored arguments.
boost::bind is a structured way to represent such "argument binding" without needing to define the class yourself. The standard library used to have std::bind1st and std::bind2nd which were more limited, less generic forms of bind.
boost::bind is rarely needed anymore, since it was introduced into the standard library as std::bind in C++11; furthermore, lambdas (introduced in C++11 and improved in C++14) have largely made bind obsolete.
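A minimal sketch of the idea, using std::bind (the standard counterpart) next to the equivalent lambda; the function add and the callback interface are invented for the example:
#include <functional>
#include <iostream>

int add(int a, int b) { return a + b; }

int main() {
    // bind the first argument to 10; the result is callable with one argument
    auto add10_bind = std::bind(add, 10, std::placeholders::_1);

    // the equivalent lambda, which is what most code uses today
    auto add10_lambda = [](int b) { return add(10, b); };

    // both now conform to an interface that only supplies the remaining argument
    std::function<int(int)> callback = add10_bind;
    std::cout << callback(5) << ' ' << add10_lambda(5) << '\n';   // prints "15 15"
}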
bind provides a way to take a function or a function object with a certain arity and transform it to another function with lesser arity by precisely binding one or more arguments. And you can do it in place.
bind and plain functions are not really comparable.
bind is more comparable to simple lambdas that call a function and fix certain parameters in their implementation.
The big difference between boost::bind and a modern lambda is that the bind object has a certain degree of introspection associated with it that the lambda doesn't have.
For example, you could in principle recover the original function and reconstruct which arguments were bound.
In a lambda everything is private, even the simplest implementation.
In other words, the result of boost::bind is an "expression" whose type follows a well-defined pattern (e.g. something like boost::bind_t<...>), and that pattern can be matched as a template function argument.
Lambdas instead are each their own unknowable sui generis type.
Admittedly, few people may be interested in the difference, but it is there, and I played with it once or twice to implement a symbolic system (for derivatives).
I can't say the same about std::bind because the object returned is unspecified by the standard and it could be more difficult to reconstruct the full bind "expression".
I have lambdas that don't capture anything, like
[](){};
I have a template class that contains such a lambda. Since the lambda contains neither non-static data members nor virtual functions, it should be an empty class and DefaultConstructible. It's only a kind of policy class usable for template metaprogramming. I would like to know why such a class is not default-constructible per the C++ standard.
Sidenote: Understanding how Lambda closure type has deleted default constructor is asking a different question, though the title seems to be very similar. It is asking how a stateless lambda-object is created without usable default constructor. I'm asking why there is no usable default constructor.
Lambdas are intended to be created then used. The standard thus says "no, they don't have a default constructor". The only way to make one is via a lambda expression, or copies of same.
They are not intended for their types to be something you keep around and use. Doing so risks ODR violations, and requiring compilers to avoid ODR violations would make symbol mangling overly complex.
However, in C++17 you can write a stateless wrapper around a function pointer:
#include <type_traits>   // std::result_of_t, std::decay_t
#include <utility>       // std::forward

template<auto fptr>
struct function_pointer_t {
    template<class... Args>
    // or decltype(auto):
    std::result_of_t<std::decay_t<decltype(fptr)>(Args...)>
    operator()(Args&&... args) const {
        return fptr(std::forward<Args>(args)...);
    }
};
And since the conversion of [](){} to a function pointer is constexpr in C++17, function_pointer_t<+[](){}> is a do-nothing function object that is DefaultConstructible (in C++17 you have to stash the pointer in a constexpr variable first, because a lambda expression cannot appear directly in a template argument until C++20).
This doesn't actually wrap the lambda, but rather the pointer-to-function that the lambda produces.
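A usage sketch under those assumptions (C++17; the names noop and f are just for illustration):
constexpr auto noop = +[](){};          // unary + forces conversion to void(*)()

int main() {
    function_pointer_t<noop> f;         // stateless and default-constructible
    f();                                // forwards to the underlying function pointer
}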
With C++20, stateless lambdas are default constructible. So now this kind of operation is valid:
auto func = []{};
decltype(func) another_func;
I'll assume that you're familiar with the difference between types, objects and expressions. In C++, lambda specifically refers to a lambda expression. This is a convenient way to denote a non-trivial object. However, it is a convenience: you could create a similar object yourself by writing out the code.
Now, per the C++ rules, every expression has a type, but that type is not what lambda expressions are intended for. This is why it's an unnamed and unique type - the C++ committee didn't think it worthwhile to define those properties. Similarly, if it were defined to have a default ctor, the Standard would have to define that behavior. With the current rule, there's no need to define the behavior of the default ctor.
As you note, for the special case of [](){} it's trivial to define a default ctor. But there's no point in that. You immediately get to the first hard question: for which lambdas should the default ctor be defined? What subset of lambdas is simple enough to have a decent definition, yet complex enough to be interesting? Without a consensus, you can't expect this to be standardized.
Note that compiler vendors, as an extension, could already offer this. Standardization often follows existing practice, see Boost. But if no compiler vendor individually thinks it worthwhile, why would they think so in unison?
There are situations where I have to choose local classes over lambdas, when overloading operator() is not enough or when I need virtual functions or something else.
um.. for example:
I need an object that captures local variables and holds more than one function, which unfortunately have the same signature.
Overloading lambdas can solve such a problem if the functions have different signatures, and I think this is a common problem, since there is the lambda-overloading trick (see the sketch after this list).
I need an object that captures local variables and inherits from some other class or has member variables.
This is something that happens every day in the Java world. Dynamic polymorphism has its usefulness, at least sometimes.
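For reference, a minimal sketch of the lambda-overloading trick mentioned above, in its usual C++17 form; the overloaded helper is not part of the standard library:
template<class... Ts>
struct overloaded : Ts... { using Ts::operator()...; };        // inherit every lambda's operator()
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;  // deduction guide (implicit in C++20)

void demo() {
    int local = 42;
    auto handler = overloaded{
        [local](int i)    { return i + local; },   // distinct signatures can be overloaded...
        [local](double d) { return d * local; }
    };
    handler(1);     // calls the int overload
    handler(1.5);   // calls the double overload
    // ...but two lambdas taking the same arguments would make such calls ambiguous,
    // which is exactly the situation described above
}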
What I'm doing now is defining some helper macros like these:
#define CAPTURE_VAL(Var) decltype(Var) Var
#define CAPTURE_REF(Var) decltype(Var) &Var
#define INIT_CAPTURE(Var) Var(Var)
And put them into the local class:
struct Closure
{
Closure(CAPTURE_VAL(var1), CAPTURE_REF(var2)) : INIT_CAPTURE(var1), INIT_CAPTURE(var2)
{
}
int foo()
{
return some_expression;
}
private:
CAPTURE_VAL(var1);
CAPTURE_REF(var2);
} closure(var1, var2);
And this is ugly.
I have to refer to the same variable three times.
I have to give the class a name just for the ctor.
I have to give a variable name for the closure, though I think this cannot be helped under the current standard.
At least in VC++11, captured variables of a lambda are private, so I cannot simply inherit from the lambda's class. And inheriting from a lambda's class, at least in VC++11, needs a variable (or maybe some other placeholder for the evaluation) for the lambda, which is ugly.
And I think I don't even know if the standard allows me to capture the type of local variables in a local class this way...
Tested on GCC 4.6: a local variable's type can't be captured as in VC++, and captured variables are not exposed as in VC++. LOL
Ah, my bad. I forgot to turn C++11 on. This works fine on G++. But lambda types can't be inherited, and captured variables are still not exposed.
Not quite fine... I have to leave -fpermissive on, or G++ thinks the member variables conflict with the local variables used in decltype().
I've been wondering why C++ chose such a high-level lambda for closures instead of a more generic local class that can capture local variables.
This is going to be more than fits into a simple comment on your question, so I'll make it an answer.
There are indeed cases where you want something else and more complex than a simple functor or lambda. But these cases are very different and diverse; there is no "one size fits all" solution, especially none that fits into a few lines and that will not blow the scope of a single function.
Creating complex behavior on the fly locally inside a function is not a good idea in terms of readability and simplicity, and most surely will violate the SRP for the function.
In many cases, if you have to write more than a single operator(), that means you will want to reuse the code you have written, which cannot be done if it is inside a local class.
Meaning: In most cases it will be the best solution to write a proper class, outside the function, with proper constructors and so on.
As you probably know, C++11 introduces the constexpr keyword.
C++11 introduced the keyword constexpr, which allows the user to guarantee that a function or object constructor is a compile-time constant.
[...]
This allows the compiler to understand, and verify, that [function name] is a compile-time constant.
My question is why there are such strict restrictions on the form of functions that can be so declared. I understand the desire to guarantee that the function is pure, but consider this:
The use of constexpr on a function imposes some limitations on what that function can do. First, the function must have a non-void return type. Second, the function body cannot declare variables or define new types. Third, the body may only contain declarations, null statements and a single return statement. There must exist argument values such that, after argument substitution, the expression in the return statement produces a constant expression.
That means that this pure function is illegal:
constexpr int maybeInCppC1Y(int a, int b)
{
if (a>0)
return a+b;
else
return a-b;
//can be written as return (a>0) ? (a+b):(a-b); but that isn't the point
}
Also you can't define local variables... :(
So I'm wondering: is this a design decision, or do compilers suck when it comes to proving that a function is pure?
The reason you'd need to write statements instead of expressions is that you want to take advantage of the additional capabilities of statements, particularly the ability to loop. But to be useful, that would require the ability to declare variables (also banned).
If you combine a facility for looping, with mutable variables, with logical branching (as in if statements) then you have the ability to create infinite loops. It is not possible to determine if such a loop will ever terminate (the halting problem). Thus some sources would cause the compiler to hang.
By using recursive pure functions it is possible to cause infinite recursion, which can be shown to be equivalently powerful to the looping capabilities described above. However, C++ already has that problem at compile time - it occurs with template expansion - and so compilers already have to have a switch for "template stack depth" so they know when to give up.
So the restrictions seem designed to ensure that this problem (of determining if a C++ compilation will ever finish) doesn't get any thornier than it already is.
The rules for constexpr functions are designed such that it's impossible to write a constexpr function that has any side-effects.
By requiring constexpr to have no side-effects it becomes impossible for a user to determine where/when it was actually evaluated. This is important since constexpr functions are allowed to happen at both compile time and run time at the discretion of the compiler.
If side-effects were allowed then there would need to be some rules about the order in which they would be observed. That would be incredibly difficult to define - even harder than the static initialisation order problem.
A relatively simple set of rules for guaranteeing these functions to be side-effect free is to require that they be just a single expression (with a few extra restrictions on top of that). This sounds limiting initially and rules out the if statement, as you noted. Whilst that particular case would have no side-effects, it would have introduced extra complexity into the rules, and given that you can write the same things using the ternary operator or recursion, it's not really a huge deal (see the sketch below).
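For instance, a computation that would naturally use an if statement or a loop can still be expressed within the C++11 rules by combining the ternary operator with recursion; a conforming sketch:
// C++11-conforming: a single return statement, no loops, no local variables
constexpr unsigned factorial(unsigned n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

static_assert(factorial(5) == 120, "evaluated at compile time");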
n2235 is the paper that proposed the constexpr addition in C++. It discusses the rationale for the design - the relevant quote seems to be this one from a discussion on destructors, but it is relevant generally:
The reason is that a constant-expression is intended to be evaluated by the compiler at translation time just like any other literal of built-in type; in particular no observable side-effect is permitted.
Interestingly, the paper also mentions that a previous proposal suggested that the compiler figure out automatically which functions were constexpr without the new keyword, but this was found to be unworkably complex, which seems to support my suggestion that the rules were designed to be simple.
(I suspect there will be other quotes in the references cited in the paper, but this covers the key point of my argument about the no-side-effects requirement.)
Actually, the C++ standardization committee is thinking about removing several of these constraints for C++14. See the following working document: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3597.html
The restrictions could certainly be lifted quite a bit without enabling code which cannot be executed during compile time, or which cannot be proven to always halt. However I guess it wasn't done because
it would complicate the compiler for minimal gain. C++ compilers are quite complex as is
specifying exactly how much is allowed without violating the restrictions above would have been time consuming, and given that desired features have been postponed in order to get the standard out of the door, there probably was little incentive to add more work (and further delay of the standard) for little gain
some of the restrictions would have been either rather arbitrary or rather complicated (especially on loops, given that C++ doesn't have the concept of a native incrementing for loop, but both the end condition and the increment code have to be explicitly specified in the for statement, making it possible to use arbitrary expressions for them)
Of course, only a member of the standards committee could give an authoritative answer as to whether my assumptions are correct.
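For what it's worth, the relaxation mentioned above was adopted for C++14, which makes the if-based version from the question legal as a constexpr function; a sketch assuming a C++14 compiler:
// Valid C++14 (and later): if statements and local variables are allowed in constexpr functions
constexpr int maybeInCpp14(int a, int b)
{
    int result = 0;
    if (a > 0)
        result = a + b;
    else
        result = a - b;
    return result;
}

static_assert(maybeInCpp14(1, 2) == 3, "still evaluated at compile time");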
I think constexpr is just for const objects. I mean: you can now have static const objects like String::empty_string that are constructed statically (without hacks!). This may reduce the time spent before main() is called. And static const objects may have functions like .length(), operator==, and so on, which is why the 'expr' part is needed. In C you can create static constant structs like the one below:
static const Foos foo = { .a = 1, .b = 2, };
The Linux kernel has tons of objects of this kind. In C++ you can now do this with constexpr.
Note: I don't know for sure, but the code below should arguably not be accepted either, just like the if version:
constexpr int maybeInCppC1Y(int a, int b) { return (a > 0) ? (a + b) : (a - b); }
We obviously can't make everything constexpr. And if we don't make anything constexpr, well, there won't be any big problems. Lots of code has been written without it so far.
But is it a good idea to slap constexpr in anything that can possibly have it? Is there any potential problem with this?
It won't bother the compiler. The compiler will (or should anyway) give you a diagnostic when/if you use it on code that doesn't fit the requirements of a constexpr.
At the same time, I'd be a bit hesitant to just slap it on there because you could. Even though it doesn't/won't bother the compiler, your primary audience is other people reading the code. At least IMO, you should use constexpr to convey a fairly specific meaning to them, and just slapping it on other expressions because you can will be misleading. I think it would be fair for a reader to wonder what was going on with a function that's marked as a constexpr, but only used as a normal run-time function.
At the same time, if you have a function that you honestly expect to use at compile time, and you just haven't used it that way yet, marking it as constexpr might make considerably more sense.
Why I don't bother to put constexpr at every opportunity, in list form and in no particular order:
I don't write one-liner functions that often
when I write a one-liner it usually delegates to a non-constexpr function (e.g. std::get has come up several times recently)
the types they operate on aren't always literal types; yes, references are literal types, but if the referred-to type is not itself literal I can't really have any instance at compile-time anyway
the types they return aren't always literal
they simply are not all useful or even meaningful at compile-time in terms of their semantics
I like separating implementation from declaration
Constexpr functions have so many restrictions that they are a niche for special use only. Not an optimization, or a desirable super-set of functions in general. When I do write one, it's often because a metafunction or a regular function alone wouldn't have cut it and I have a special mindset for it. Constexpr functions don't taste like other functions.
I don't have a particular opinion or advice on constexpr constructors because I'm not sure I can fully wrap my mind around them and user-defined literals aren't yet available.
I tend to agree with Scott Meyers on this (as for most things): "Use constexpr whenever possible" (from Item 15 of Effective Modern C++), particularly if you are providing an API for others to use. It can be really disappointing when you wish to perform a compile-time initialization using a function, but can't because the library did not declare it constexpr. Furthermore, all classes and functions are part of an API, whether used by the world or just your team. So use it whenever you can, to widen its scope of usage.
// Free cup of coffee to the API author, for using constexpr
// on Rect3 ctor, Point3 ctor, and Point3::operator*
constexpr Rect3 IdealSensorBounds = Rect3(Point3::Zero, MaxSensorRange * 0.8);
That said, constexpr is part of the interface, so if the interface does not naturally fit something that can be constexpr, don't commit to it, lest you have to break the API later. That is, don't commit constexpr to the interface just because the current, only implementation can handle it.
Yes. I believe applying such constness wherever you can is always good practice. For example, in your class, if a given method does not modify any member, then you always tend to put the const keyword at the end.
Apart from the language aspect, mentioning constness is also a good indication to a future programmer/reviewer that the expression has const-ness within that region. It relates to good coding practice and also adds to readability. e.g. (from #Luc):
constexpr int& f(int& i) { return get(i); }
Here, putting constexpr suggests that get() must also be constexpr.
I don't see any problem or implication due to constexpr.
Edit: An added advantage of constexpr is that you can use such functions as template arguments in some situations.
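A minimal sketch of that last point (square and Buffer are invented names):
constexpr int square(int x) { return x * x; }

template<int N>
struct Buffer { char data[N]; };

Buffer<square(4)> buf;    // OK: square(4) is a constant expression, so it can be used as a template argument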