Does using constexpr have diminishing returns? - c++

Let's say I have a constexpr int like this:
constexpr int i = 42;
But so far my program doesn't have any situations in which i can be used at compile time, so it's fairly useless. But in the future, if I want to use i in a compile-time context, I won't have to backtrack and add constexpr. Can this be like const correctness, where you put const in the right places and get it "right the first time"? Or is it just unnecessary? What are the drawbacks of making everything constexpr?

The only drawbacks of marking things as constexpr are one and the same as the drawbacks of using constexpr in the first place.
The two biggest drawbacks that come to mind are that you lose compiler portability (VC++ is very late to the game in terms of supporting constexpr) and that you can't use "non-trivial" (more precisely, non-literal) types such as std::string in a constexpr context, even though you can make them const.
Other than that, though, there is no particular penalty for using constexpr in your code beyond the slightly more verbose source. If you think it might be useful to have constexpr-qualified named constants, by all means go ahead. It's a better approach than using preprocessor constants.
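For illustration (a minimal sketch of my own, not from the answer above): once a constant is constexpr, it becomes usable anywhere a constant expression is required:
#include <array>

constexpr int i = 42;

static_assert(i == 42, "usable in a constant expression");
std::array<double, i> buffer{};  // and as an array bound / template argument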

Related

Is boost::typeindex::ctti_type_index a standard compliant way for compile-time type ids for some cases?

I'm currently evaluating possibilities for changing several classes/structs of a project in order to make them usable within a constexpr context at compile time. A current showstopper is the use of typeid() and std::type_index (both seem to be purely RTTI-based?), which cannot be used within a constexpr context.
So I came across Boost's boost::typeindex::ctti_type_index.
They say:
boost::typeindex::ctti_type_index class can be used as a drop-in
replacement for std::type_index.
So far so good. The only caveat I was able to find so far that one should be aware of when using it is:
With RTTI off different classes with same names in anonymous namespace
may collapse. See 'RTTI emulation limitations'.
which is currently relevant at least for the GCC, Clang, and Intel compilers, and is not really surprising. I could live with that restriction so far. So my first question here is: besides the issue with anonymous namespaces, does Boost rely entirely on standard-compliant mechanisms to achieve that constexpr type-id generation? It's quite hard to analyze from scratch due to the many compiler-dependent switches. Has anybody already gained experience with it in several scenarios and can mention further drawbacks that I don't see a priori?
And my second question, quite directly related to the first one, is about the details: how does that implementation work at the "core level", especially in the comparison context?
For the comparison, they use
BOOST_CXX14_CONSTEXPR inline bool ctti_type_index::equal(const ctti_type_index& rhs) const BOOST_NOEXCEPT {
    const char* const left = raw_name();
    const char* const right = rhs.raw_name();
    return /*left == right ||*/ !boost::typeindex::detail::constexpr_strcmp(left, right);
}
Why did they comment out the raw pointer comparison? The raw-name member (returned inline from raw_name()) is itself simply defined as
const char* data_;
So my guess is that, at least within a fully constexpr context with data_ initialized from a constexpr char*, the plain pointer comparison should be standard-compliant (unique pointer addresses being ensured for inline objects, i.e. for constexpr ones, respectively?). Is that already fully guaranteed by the standard (I focus on C++17 here; are there relevant changes in C++20?) and simply not used yet due to common compiler limitations only? (BTW: I generally struggle with non-trivial, non-self-explanatory commented-out sections in code...) With constexpr_strcmp, they apply a trivial but expensive character-wise comparison, which would have been my custom approach too. It's easy to see that the plain pointer comparison would be the preferred one further on.
Update after rereading my own question: so at least for the comparison case, I currently understand the mechanics of the enabled code but am interested in the commented-out approach.
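For reference, a character-wise constexpr comparison in the spirit of constexpr_strcmp could look like the following sketch (my own illustration, not Boost's actual code; C++11-compatible, hence the single recursive return):
constexpr int constexpr_strcmp_sketch(const char* l, const char* r) {
    return (*l != *r) ? (*l < *r ? -1 : 1)
                      : (*l == '\0') ? 0 : constexpr_strcmp_sketch(l + 1, r + 1);
}
static_assert(constexpr_strcmp_sketch("abc", "abc") == 0, "equal strings compare equal");
static_assert(constexpr_strcmp_sketch("abc", "abd") != 0, "different strings do not");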

Any reason to not use auto& for C++ range-based for-loops?

For example, the loop:
std::vector<int> vec;
...
for (auto& c : vec) { ... }
will iterate over vec, binding a reference to each element instead of copying it.
Would there ever be a reason to do this?
for (int& c : vec) { ... }
The two code snippets will result in the same code being generated: with auto, the compiler will figure out that the underlying type is int, and do exactly the same thing.
However, the option with auto is more "future-proof": if at some later date you decide that int should be replaced with, say, uint8_t to save space, you wouldn't need to go through your code looking for references to the underlying type that may need to be changed, because the compiler will do it for you automatically.
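To make the future-proofing concrete, a small sketch of my own: after the element type changes to uint8_t, the auto& loop compiles unchanged, while an int& loop stops compiling because a non-const int& cannot bind to a uint8_t element.
#include <cstdint>
#include <vector>

void bump(std::vector<std::uint8_t>& vec) {   // element type changed from int
    for (auto& c : vec) { c += 1; }           // still compiles unchanged
    // for (int& c : vec) { c += 1; }         // error: int& cannot bind to a uint8_t element
}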
Use auto wherever it makes the code better, but nowhere else. Understand the impact using auto overly-liberally has on maintainability.
The question here really is whether there is any reason why you shouldn't use auto for anything you can.
Well, let's get one thing out of the way first of all. The fundamental reason why auto was introduced in the first place was twofold.
First, it makes declarations of variables with complex types simpler and easier to read and understand. This is especially true when declaring an iterator in a for loop. Consider the C++03 pseudocode:
for (std::vector<Foo>::const_iterator it = myFoos.begin(); it != myFoos.end(); ++it)
This declaration only gets worse as myFoos's type becomes more complex. Moreover, if the type of myFoos is changed in a subtle way, but in a way that's insignificant to the loop just written, the complex declaration must be revisited. This is a maintenance problem made simpler in C++11:
for (auto it = myFoos.begin(); it != myFoos.end(); ++it)
Second, there are situations which arise that are impossible to deal with without the facilities provided by auto and its sibling, decltype. This comes up in templates. Consider (source):
template<typename T, typename S>
void foo(T lhs, S rhs) {
    auto prod = lhs * rhs;
    //...
}
In C++03 the type of prod cannot always be inferred if the types of lhs and rhs are not the same. In C++11 this is possible using auto. Admittedly it is also possible using decltype, but decltype was likewise only added in C++11, along with auto.
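For completeness, a minimal C++11 sketch of my own (the name multiply is illustrative) showing decltype doing the same job for a return type, where a plain auto local would not help:
template<typename T, typename S>
auto multiply(T lhs, S rhs) -> decltype(lhs * rhs) {
    return lhs * rhs;  // return type inferred from the mixed-type expression
}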
Many of the C++ elite suggest that you should use auto anywhere possible. The reason for this was stated by Herb Sutter in a recent conference:
It's shorter. It's more convenient. It's more future-proof, because
if you change your function's return type, auto just works.
However they also acknowledge the "sharp edges." There are many cases where auto doesn't do what you want or expect. These sharp edges can cut you when you want a type conversion.
So on the one hand we have a highly respected camp telling us "use auto everywhere possible, but nowhere else." This doesn't feel right to me, however. Most of the benefits that auto provides are provided at what I'm going to call "first-write time": the time when you are first writing a piece of code. As you're writing a big chunk of brand-new code, you can go ahead and use auto virtually everywhere and get exactly the behavior you expect. As you're writing the code, you know exactly what's going on with your types and variables. I don't know about you, but as I write code there is a constant stream of thoughts going through my head. How do I want to create this loop? What kind of thing do I want that function to return so I can use it here? How can I write this so that it is fast, correct, and easy to understand 6 months from now? Most of this is never expressed directly in the code that I write, except that the code that I write is a direct result of these thoughts.
At that time, using auto makes writing this code simpler and easier. I don't have to burden my mind with all the little minutiae of signed versus unsigned, reference versus value, 32-bit versus 64-bit, etc. I just write auto and everything works.
But here's my problem with auto. 6 months later, when revisiting this code to add some major new functionality, the buzz of thought that was going through my mind when I first wrote the code has long been extinguished. My buffers were flushed long ago, and none of those thoughts are with me anymore. If I have done my job well, then the essence of those thoughts is expressed directly in the code I wrote. I can reverse-engineer what I was thinking by just looking at the structure of my functions and data types.
If auto is sprinkled everywhere, a big part of that cognizance is lost. I don't know what I was thinking would happen with this datatype because now it's inferred. If there was a subtle relationship taking place with an operator, that relationship is no longer expressed by the datatypes -- it's all just auto.
Maintenance becomes more difficult because now I have to re-create much of that thought. Subtle relationships become more clouded, and everything is just harder to understand.
So I'm not a fan of using auto everywhere possible. I think that makes maintenance harder than it has to be. That's not to say I believe that auto should only be used where it's required. Rather, it's a balancing act. The four criteria I use to judge the quality of my (or anyone's) code are: efficiency, correctness, robustness, and maintainability, in no particular order. If we design a spectrum of auto use where one side is "purely optional" and the other side is "strictly required", I feel that in general the closer to "purely optional" we get, the more maintainability suffers.
All this to say, finally, that my philosophy can be nutshelled as:
Use auto wherever it makes the code better, but nowhere else.
Understand the impact using auto overly-liberally has on
maintainability.
That depends on what you want to do with c (all three options are sketched below):
You want to work with copies? Use auto c
You want to work with original items and possibly modify them? Use auto& c
You want to work with original items and not modify them? Use const auto& c
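A minimal sketch of my own putting the three options side by side:
#include <vector>

void demo() {
    std::vector<int> vec{1, 2, 3};
    for (auto c : vec)        { c += 1; }   // copies: vec is unchanged
    for (auto& c : vec)       { c += 1; }   // references: vec becomes {2, 3, 4}
    for (const auto& c : vec) { (void)c; }  // read-only access, no copies
}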
There are two conflicting interests here. On the one side, it is just simpler to type auto& everywhere than to figure out the exact type in each loop. Some people will claim that it is also more future-proof if the type stored in the container changes in the future -- I don't particularly agree; I'd rather have the compiler complain* and let me figure out whether the assumptions made in the loop about the type still hold.
On the other side, by using auto& (instead of the more explicit int&) you are hiding the type from the reader. The maintainer reading the code will have to think about what auto means in this particular context, while int clearly has a single meaning. The same people, or at least a subset of them, will claim that you don't need to think, that the IDE will tell you the type if you hover the mouse over the variable... but I don't use an IDE, nor do I particularly like the mouse...
Overall, this is mainly a matter of style. I prefer to see the types in the code rather than have the compiler infer them for me, as I find it easier to understand when I go back to the code some time later. But for quick coding, when I don't envision having to maintain this code a week from now, auto is more than sufficient.
* This assumes that you use a standards-compliant compiler, of which Visual Studio is not an example. The assumption is that if the type is wrong, a non-const reference won't bind to the value returned by dereferencing the iterator. In VS, the compiler will gladly convert types and bind a non-const reference to the rvalue! Maybe this is why Sutter, coming from the VS world, suggests using auto everywhere!
I agree with dasblinkenlight's answer, but since you are asking when int& is better than auto&, I can paraphrase it this way:
Use int& when you would like your code to break if/when someone decides to change the type of your vector.
For example: your vectors usually contain int16_t, but this particular one requires greater precision (assuming int has 32-bit or greater precision). You don't want someone to change the type from int to int16_t and get a program that contains a silent overflow in calculations.
Another example: your code looks like this:
namespace joes_lib
{
    int frobnicate(int);
}

for (int& c : vec) { c = frobnicate(c); }
Here, if someone changes vec to something like vector<int16_t> or, worse, vector<unsigned>, auto will silently succeed and lead to loss of precision in Joe's library function. The compiler may or may not generate warnings about this.
These examples look clumsy and obscure, so you may want to comment such usage of non-auto types in loops.

Why is C++11 constexpr so restrictive?

As you probably know, C++11 introduces the constexpr keyword.
C++11 introduced the keyword constexpr, which allows the user to
guarantee that a function or object constructor is a compile-time
constant.
[...]
This allows the compiler to understand, and verify, that [function name] is a
compile-time constant.
My question is why there are such strict restrictions on the form of functions that can be declared constexpr. I understand the desire to guarantee that the function is pure, but consider this:
The use of constexpr on a function imposes some limitations on what
that function can do. First, the function must have a non-void return
type. Second, the function body cannot declare variables or define new
types. Third, the body may only contain declarations, null statements
and a single return statement. There must exist argument values such
that, after argument substitution, the expression in the return
statement produces a constant expression.
That means that this pure function is illegal:
constexpr int maybeInCppC1Y(int a, int b)
{
    if (a > 0)
        return a + b;
    else
        return a - b;
    // can be written as return (a > 0) ? (a + b) : (a - b); but that isn't the point
}
Also, you can't define local variables... :(
So I'm wondering: is this a design decision, or do compilers suck when it comes to proving that a function is pure?
The reason you'd need to write statements instead of expressions is that you want to take advantage of the additional capabilities of statements, particularly the ability to loop. But to be useful, that would require the ability to declare variables (also banned).
If you combine a facility for looping, with mutable variables, with logical branching (as in if statements) then you have the ability to create infinite loops. It is not possible to determine if such a loop will ever terminate (the halting problem). Thus some sources would cause the compiler to hang.
By using recursive pure functions it is possible to cause infinite recursion, which can be shown to be equivalently powerful to the looping capabilities described above. However, C++ already has that problem at compile time - it occurs with template expansion - and so compilers already have to have a switch for "template stack depth" so they know when to give up.
So the restrictions seem designed to ensure that this problem (of determining if a C++ compilation will ever finish) doesn't get any thornier than it already is.
The rules for constexpr functions are designed such that it's impossible to write a constexpr function that has any side-effects.
By requiring constexpr functions to have no side-effects, it becomes impossible for a user to determine where/when one was actually evaluated. This is important since constexpr functions are allowed to run at both compile time and run time, at the discretion of the compiler.
If side-effects were allowed then there would need to be some rules about the order in which they would be observed. That would be incredibly difficult to define - even harder than the static initialisation order problem.
A relatively simple set of rules for guaranteeing that these functions are side-effect free is to require that they be just a single expression (with a few extra restrictions on top of that). This sounds limiting initially and rules out the if statement, as you noted. Whilst that particular case would have no side-effects, it would have introduced extra complexity into the rules, and given that you can write the same things using the ternary operator or recursion, it's not really a huge deal.
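To illustrate with a sketch of my own (the function names are illustrative): the single-expression rule still admits branching via the conditional operator and looping via recursion:
constexpr int maybeInCpp11(int a, int b) {
    return (a > 0) ? (a + b) : (a - b);          // branching via the conditional operator
}
constexpr int factorial(int n) {
    return (n <= 1) ? 1 : n * factorial(n - 1);  // looping via recursion
}
static_assert(maybeInCpp11(1, 2) == 3, "");
static_assert(factorial(5) == 120, "");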
n2235 is the paper that proposed the constexpr addition to C++. It discusses the rationale for the design -- the relevant quote seems to be this one, from a discussion of destructors, but relevant generally:
The reason is that a constant-expression is intended to be evaluated by the compiler
at translation time just like any other literal of built-in type; in particular no
observable side-effect is permitted.
Interestingly, the paper also mentions that a previous proposal suggested that the compiler figure out automatically which functions were constexpr, without the new keyword, but this was found to be unworkably complex, which seems to support my suggestion that the rules were designed to be simple.
(I suspect there will be other quotes in the references cited by the paper, but this covers the key point of my argument about the absence of side-effects.)
Actually, the C++ standardization committee is thinking about removing several of these constraints for C++14. See the following working document: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3597.html
The restrictions could certainly be lifted quite a bit without enabling code which cannot be executed during compile time, or which cannot be proven to always halt. However I guess it wasn't done because
it would complicate the compiler for minimal gain. C++ compilers are quite complex as is
specifying exactly how much is allowed without violating the restrictions above would have been time-consuming, and given that desired features have been postponed in order to get the standard out the door, there probably was little incentive to add more work (and further delay the standard) for little gain
some of the restrictions would have been either rather arbitrary or rather complicated (especially on loops, given that C++ doesn't have the concept of a native incrementing for loop, but both the end condition and the increment code have to be explicitly specified in the for statement, making it possible to use arbitrary expressions for them)
Of course, only a member of the standards committee could give an authoritative answer whether my assumptions are correct.
I think constexpr is just for const objects. I mean: you can now have static const objects like String::empty_string constructed statically (without hacking!). This may reduce the time spent before 'main' is called. And static const objects may have member functions like .length(), operator==, ... which is why the 'expr' part is needed. In C you can create static constant structs like below:
static const Foos foo = { .a = 1, .b = 2, };
The Linux kernel has tons of structs of this kind. In C++ you can now do this with constexpr.
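A minimal sketch of my own of the C++11 equivalent: a literal type with a constexpr constructor gives a true compile-time constant, with no startup initialization:
struct Foos {
    int a;
    int b;
    constexpr Foos(int a_, int b_) : a(a_), b(b_) {}
};
constexpr Foos foo(1, 2);
static_assert(foo.a == 1 && foo.b == 2, "initialized at compile time");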
Note: I'm not sure, but in my view the code below should not be accepted either, just like the if version:
constexpr int maybeInCppC1Y(int a, int b) { return (a > 0) ? (a + b) : (a - b); }

Should we use constexpr everywhere we can?

We obviously can't make everything constexpr. And if we don't make anything constexpr, well, there won't be any big problems. Lots of code has been written without it so far.
But is it a good idea to slap constexpr in anything that can possibly have it? Is there any potential problem with this?
It won't bother the compiler. The compiler will (or should, anyway) give you a diagnostic when/if you use it on code that doesn't meet the requirements of constexpr.
At the same time, I'd be a bit hesitant to just slap it on there because you could. Even though it doesn't/won't bother the compiler, your primary audience is other people reading the code. At least IMO, you should use constexpr to convey a fairly specific meaning to them, and slapping it on other expressions just because you can would be misleading. I think it would be fair for a reader to wonder what was going on with a function that's marked constexpr but only ever used as a normal run-time function.
At the same time, if you have a function that you honestly expect to use at compile time, and you just haven't used it that way yet, marking it as constexpr might make considerably more sense.
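For instance, a small sketch of my own of that dual nature, with the same constexpr function serving a compile-time context and an ordinary run-time call:
#include <cstdio>

constexpr int square(int x) { return x * x; }

int main() {
    constexpr int s = square(6);     // forced compile-time evaluation
    static_assert(s == 36, "");
    int n = 0;
    std::scanf("%d", &n);
    std::printf("%d\n", square(n));  // ordinary run-time evaluation
}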
Why I don't bother trying to put constexpr at every opportunity, in list form and in no particular order:
I don't write one-liner functions that often
when I write a one-liner it usually delegates to a non-constexpr function (e.g. std::get has come up several times recently)
the types they operate on aren't always literal types; yes, references are literal types, but if the referred-to type is not itself literal, I can't really have any instance of it at compile time anyway
the types they return aren't always literal
they simply are not all useful or even meaningful at compile-time in terms of their semantics
I like separating implementation from declaration
Constexpr functions have so many restrictions that they are a niche for special use only. Not an optimization, or a desirable super-set of functions in general. When I do write one, it's often because a metafunction or a regular function alone wouldn't have cut it and I have a special mindset for it. Constexpr functions don't taste like other functions.
I don't have a particular opinion or advice on constexpr constructors because I'm not sure I can fully wrap my mind around them and user-defined literals aren't yet available.
I tend to agree with Scott Meyers on this (as for most things): "Use constexpr whenever possible" (from Item 15 of Effective Modern C++), particularly if you are providing an API for others to use. It can be really disappointing when you wish to perform a compile-time initialization using a function, but can't because the library did not declare it constexpr. Furthermore, all classes and functions are part of an API, whether used by the world or just your team. So use it whenever you can, to widen its scope of usage.
// Free cup of coffee to the API author, for using constexpr
// on Rect3 ctor, Point3 ctor, and Point3::operator*
constexpr Rect3 IdealSensorBounds = Rect3(Point3::Zero, MaxSensorRange * 0.8);
That said, constexpr is part of the interface, so if the interface does not naturally fit something that can be constexpr, don't commit to it, lest you have to break the API later. That is, don't commit constexpr to the interface just because the current, only implementation can handle it.
Yes. I believe adding such constness is always good practice wherever you can. For example, in your class, if a given method does not modify any member, then you always tend to put the const keyword at the end.
Apart from the language aspect, mentioning constness is also a good indication to the future programmer/reviewer that the expression has const-ness within that region. It relates to good coding practice and adds to readability as well. E.g. (from @Luc):
constexpr int& f(int& i) { return get(i); }
Now putting constexpr suggests that get() must also be constexpr.
I don't see any problem or implication due to constexpr.
Edit: An added advantage of constexpr is that you can use such values as template arguments in some situations.
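For example (a small sketch of my own; doubled is an illustrative name):
#include <array>
#include <cstddef>

constexpr std::size_t doubled(std::size_t n) { return n * 2; }

std::array<char, doubled(64)> buffer{};  // size fixed at compile time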

Whyever **not** declare a function to be `constexpr`?

Any function that consists of a return statement only could be declared constexpr and would thus allow itself to be evaluated at compile time, if all arguments are constexpr and only constexpr functions are called in its body. Is there any reason not to declare any such function constexpr?
Example:
constexpr int sum(int x, int y) { return x + y; }
constexpr int i = 10;
static_assert(sum(i, 13) == 23, "sum correct");
Could anyone provide an example where declaring a function constexpr
would do any harm?
Some initial thoughts:
Even if there should be no good reason for ever declaring a function not constexpr, I could imagine that the constexpr keyword has a transitional role: its absence in code that does not need compile-time evaluations would allow compilers that do not implement compile-time evaluations still to compile that code (but to fail reliably on code that needs them, as made explicit by using constexpr).
But what I do not understand: if there should be no good reason for ever declaring a function not constexpr, why is not every function in the standard library declared constexpr? (You cannot argue that it has not been done yet because there was not sufficient time, because doing it for all functions is a no-brainer -- contrary to deciding for every single function whether to make it constexpr or not.)
--- I am aware that N2976 deliberately does not require constexpr constructors for many standard library types, such as the containers, as this would be too limiting for possible implementations. Let's exclude them from the argument and just wonder: once a type in the standard library actually has a constexpr constructor, why is not every function operating on it declared constexpr?
In most cases you also cannot argue that you may prefer not to declare a function constexpr simply because you do not envisage any compile-time usage: because if others end up using your code, they may see a use that you do not. (But granted for type-trait types and the like, of course.)
So I guess there must be a good reason and a good example for deliberately not declaring a function constexpr?
(with "every function" I always mean: every function that meets the
requirements for being constexpr, i.e., is defined as a single
return statement, takes only arguments of types with constexpr
cstrs and calls only constexpr functions. Since C++14, much more is allowed in the body of such function: e.g., C++14 constexpr functions may use local variables and loops, so an even wider class of functions could be declared constexpr.)
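For example, a brief sketch of my own of what the relaxed C++14 rules permit, with a local variable and a loop in the body:
constexpr int sum_up_to(int n) {  // valid constexpr only since C++14
    int total = 0;
    for (int i = 1; i <= n; ++i)
        total += i;
    return total;
}
static_assert(sum_up_to(10) == 55, "evaluated at compile time");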
The question Why does std::forward discard constexpr-ness? is a special case of this one.
Functions can only be declared constexpr if they obey the rules for constexpr --- no dynamic casts, no memory allocation, no calls to non-constexpr functions, etc.
Declaring a function in the standard library as constexpr requires that ALL implementations obey those rules.
Firstly, this requires checking for each function that it can be implemented as constexpr, which is a long job.
Secondly, this is a big constraint on the implementations, and will outlaw many debugging implementations. It is therefore only worth it if the benefits outweigh the costs, or the requirements are sufficiently tight that the implementation pretty much has to obey the constexpr rules anyway. Making this evaluation for each function is again a long job.
I think what you're referring to is called partial evaluation. What you're touching on is that some programs can be split into two parts - a piece that requires runtime information, and a piece that can be done without any runtime information - and that in theory you could just fully evaluate the part of the program that doesn't need any runtime information before you even start running the program. There are some programming languages that do this. For example, the D programming language has an interpreter built into the compiler that lets you execute code at compile-time, provided that it meets certain restrictions.
There are a few main challenges to getting partial evaluation working. First, it dramatically complicates the logic of the compiler, because the compiler will need the ability to simulate, at compile time, all of the operations that you could put into an executable program. This, in the worst case, requires you to have a full interpreter inside the compiler, taking a difficult problem (writing a good C++ compiler) and making it orders of magnitude harder.
I believe that the reason for the current specification about constexpr is simply to limit the complexity of compilers. The cases it's limited to are fairly simple to check. There's no need to implement loops in the compiler (which could cause a whole other slew of problems, like what happens if you get an infinite loop inside the compiler). It also avoids the compiler potentially having to evaluate statements that could cause segfaults at runtime, such as following a bad pointer.
Another consideration to keep in mind is that some functions have side-effects, such as reading from cin or opening a network connection. Functions like these fundamentally can't be optimized at compile-time, since doing so would require knowledge only available at runtime.
To summarize, there's no theoretical reason you couldn't partially evaluate C++ programs at compile-time. In fact, people do this all the time. Optimizing compilers, for example, are essentially programs that try to do this as much as possible. Template metaprogramming is one instance where C++ programmers try to execute code inside the compiler, and it's possible to do some great things with templates partially because the rules for templates form a functional language, which the compiler has an easier time implementing. Moreover, if you think of the tradeoff between compiler author hours and programming hours, template metaprogramming shows that if you're okay making programmers bend over backwards to get what they want, you can build a pretty weak language (the template system) and keep the language complexity simple. (I say "weak" as in "not particularly expressive," not "weak" in the computability theory sense).
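As a concrete sketch of that template-metaprogramming style of compile-time execution (my own example, and a classic one): template recursion plus specialization standing in for a loop:
template<unsigned N>
struct Factorial {
    static const unsigned value = N * Factorial<N - 1>::value;
};
template<>
struct Factorial<0> {
    static const unsigned value = 1;
};
static_assert(Factorial<5>::value == 120, "computed during compilation");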
Hope this helps!
If the function has side effects, you would not want to mark it constexpr. Example
I can't get any unexpected results from that; actually, it looks like GCC 4.5.1 just ignores constexpr.
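For illustration, a hypothetical sketch of my own of the kind of function meant here: it performs I/O, a side effect that cannot happen during constant evaluation, so it is deliberately left non-constexpr. Marking it constexpr would be ill-formed in C++11, which squares with the comment above that GCC 4.5.1 simply ignored the keyword rather than enforcing the rules.
#include <iostream>

// Deliberately NOT constexpr: the body performs I/O, a side effect that
// cannot be evaluated at compile time. A conforming C++11 compiler would
// reject this function if it were marked constexpr.
int log_and_add(int a, int b) {
    std::cout << "adding " << a << " and " << b << '\n';
    return a + b;
}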