C++ Compiler Optimization - Why is constexpr Needed? - c++

I am wondering why the benefits of constexpr cannot simply be obtained automatically by the compiler.
The compiler should be able to deduce that a value is known at compile time much better than a human can (for example, when all inputs to a function call are known), and it should then be able to mark that value as known for subsequent analysis until it becomes ambiguous. So why do we have to tell the compiler manually when that is the case, instead of just using the const keyword and leaving it to the compiler to evaluate the value at compile time?
Is this because compiler technology is not yet capable of doing this, or is there some inherent limitation that prevents the compiler from doing so?

If you omit constexpr, the compiler may well still be able to compute the value at compile time.
The main idea is that you can tell the compiler that you want something evaluated at compile time, so that the compiler can emit an error when you make a mistake and use something that it isn't able to evaluate at compile time.
It obviously also allows use of the values in places where you are only allowed to use compile-time constants, such as array sizes.

Related

c++ why do constexpr functions need to be marked constexpr? [duplicate]

To the best of my knowledge, the inline keyword in C++ can be traced back to old compilers (then known as "optimizing compilers") not being able to optimize as well as modern ones. Marking a function as inline told the compiler that it should be inlined, and as a side effect prevented ODR issues. As compilers got better, someone realized that the compiler can do a much better job of optimizing than the programmer, and so the inlining aspect of the keyword became more of a 'hint' that most (all?) modern compilers ignore.
Enter C++11 and subsequent versions. constexpr seems to me to be in a similar situation, at least for some of its uses, specifically functions and variables. As I understand it, it tells the compiler that a certain function may be evaluated at compile time. But that is something the compiler should be able to figure out on its own. Is this feature also going to become a 'hint' once compilers get better at optimizing?
Note: I am not asking about other uses of constexpr, such as with if statements. I understand those are needed.
As I understand it, it tells the compiler that a certain function may be evaluated at compile time.
Not "may", but "can". The constexpr keyword does not tell the compiler what it is allowed to do (it may evaluate anything it wants at compile time). Rather the keyword tells the compiler a desired quality of the variable or function, specifically that it can be used in constant expressions. The compiler will complain (error or warning) if the program fails to live up to that desire. You get a more relevant error message than you would have gotten otherwise – the compiler can tell you why your entity does not qualify for compile-time evaluation since it knows that your intent was for the entity to be a compile-time constant.
For example, if you defined const unsigned a, it is an error to use std::array<int, a> if the value of a is not known at compile time. The error might be in the initialization of a, or it might be that the template parameter was supposed to be b instead of a. The compiler would have to report the error as "a is not a constant expression" and let the programmer investigate. On the other hand, if a was declared constexpr, the compiler would instead complain about the reason the value of a is not known at compile time, leading to less time debugging.
Without constexpr, the following code produces a possibly weak error message.
{
    const unsigned a = foo();
    const unsigned b = 42;
    std::array<int, a> stuff; // Error: 'a' is not a constant expression.
    // ...
}
After declaring both a and foo() to be constexpr, the error disappears. Why? Because last week when you wrote foo(), the compiler was told that the function had to be usable in constant expressions. As a result, the compiler pointed out why foo() could not be evaluated at compile time, and you fixed the bug right away. That was last week, while the implementation of foo() was still fresh in your mind. Not this week, after doing a dozen other things, including the hour spent arguing with the compiler because you believed a had to be a constant expression since it was initialized with foo().
An ideal compiler could maybe figure out which functions are actually constexpr, and in that sense one could view the keyword as a hint to the compiler.
But I think it makes more sense to compare const and constexpr in terms of what they tell the compiler and the human reader. An ideal compiler could also figure out which variables and member functions should be const. As you probably know, there are other good reasons to mark everything possible const: the compiler finds bugs for you, the code is much easier to read, and it helps the compiler with optimization.
The same is true for constexpr. If you declare constexpr a variable that cannot be computed at compile time, you get an error; otherwise, you have documented that the variable can be computed at compile time, and you help the compiler with optimization.
Also note that constexpr, unlike inline, is not about runtime performance, so treating it as an ignorable hint would not make sense.
But that is something the compiler should be able to figure out on its own. Is this feature also going to become a 'hint' once compilers get better at optimizing?
constexpr is not merely an optimization - without it, the compiler is not allowed to use a function in contexts where a constant expression is required, e.g. in non-type template arguments.
But I am sure you already know that much. The real question is: should a future C++ standard allow using a function in a constant-expression context even though it is not explicitly marked constexpr, provided it satisfies the constexpr requirements?
No, I think that is the opposite of the direction of C++ development. Consider C++20 concepts. One of their major goals is to improve error messages: instead of digging through nested template definitions, the compiler knows early that a template argument does not meet a requirement. The constexpr keyword serves the same goal: instead of walking a function call tree and discovering that a function deep in the call stack cannot be evaluated at compile time, the compiler reports the error early.

Is constexpr the new inline?


c++ function inlining decided at compile time

It is often said a function can be inlined only if it is known at compile time how it will be called, or something to that effect. (Please correct/clarify if I am wrong.)
So, say I have a function like this:
void Calling()
{
    if (m_InputState == State::Fast) // m_InputState is a class member and is set by the user
    {
        CallFastFunction();
    }
    else if (m_InputState == State::Slow)
    {
        CallSlowFunction();
    }
}
As m_InputState is set by the end user, can we say this variable is not known at compile time, and hence Calling() can't be inlined?
Inlining has nothing to do with the function input being known at compile time.
The compiler can (and almost certainly WILL) inline your Calling function. Depending on the availability of the source of CallFastFunction and CallSlowFunction, these may also be inlined.
If the compiler can determine the value of m_InputState, it will remove the if - but only if it can prove the value is a single known value.
For example, thing.m_InputState = State::Slow; thing.Calling(); will compile in only the "slow" call, without any conditional code, whereas std::cin >> thing.m_InputState; thing.Calling(); of course will not.
If, through profile-based optimisation, the compiler knows how often each of the cases is chosen, it may also decide which path ends up "next" in the order of the code, so that the most likely one comes first (and it may also provide a prefix or other indication to the processor that "you're likely going this way").
But inlining itself happens based on:
The code itself being available.
The size and number of calls to the function.
Arbitrary compiler decisions that we can't know (unless we're very familiar with the source-code of the compiler).
Modern compilers also support "link time optimisation", where the object files produced are only "half-done" and the linker actually produces the final code. The linker can then move code around and inline, just like the old-fashioned compiler, across the entire code that makes up the executable (all the code built with link-time optimisation), allowing functions that were not in the same source file or in a header to still be inlined.
The fact that m_InputState is not known does not stop Calling() from being inlined.
Typically, if you want a function to be inlinable, it has to be defined in the same .cpp file in which it is called (a function prototype alone is not enough). This depends a bit on the compiler, though.

Why is constexpr not automatic? [duplicate]

This question already has answers here:
Why do we need to mark functions as constexpr?
As far as I understand it, constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile time, and to do so if possible.
I know that it also imposes some restrictions on the function or initialization declared constexpr, but the final goal is compile-time evaluation, isn't it?
So my question is, why can't we leave that to the compiler? It is obviously capable of checking the preconditions, so why doesn't it do so for every expression and evaluate at compile time where possible?
I have two ideas on why this might be the case, but I am not yet convinced that they hit the point:
a) It might take too long at compile time.
b) Since my code can use constexpr functions in locations where normal functions would not be allowed, the specifier is also kind of part of the declaration. If the compiler did everything by itself, one could use a function in a C-array definition with one version of the function, but with the next version there might be a compiler error, because the preconditions for compile-time evaluation are no longer satisfied.
constexpr is not a "hint" to the compiler about anything; constexpr is a requirement. It doesn't require that an expression actually be executed at compile time; it requires that it could.
What constexpr does (for functions) is restrict what you're allowed to put into the function definition, so that the compiler can easily execute that code at compile time where possible. It's a contract between you, the programmer, and the compiler. If your function violates the contract, the compiler will error immediately.
Once the contract is established, you are now able to use these constexpr functions in places where the language requires a compile time constant expression. The compiler can then check the elements of a constant expression to see that all function calls in the expression call constexpr functions; if they don't, again a compiler error results.
Your attempt to make this implicit would result in two problems. First, without an explicit contract as defined by the language, how would I know what I can and cannot do in a constexpr function? How do I know what will make a function not constexpr?
And second, without the contract being in the compiler, via a declaration of my intent to make the function constexpr, how would the compiler be able to verify that my function conforms to that contract? It couldn't; it would have to wait until I use it in a constant expression before I find that it isn't actually a proper constexpr function.
Contracts are best stated explicitly and up-front.
constexpr can be seen as a hint to the compiler to check whether given expressions can be evaluated at compile-time and do so if possible
No, see below
the final goal is compile-time evaluation
No, see below.
so why doesn't it do for each expression and evaluate at compile-time where possible?
Optimizers do things like that, as allowed under the as-if rule.
constexpr is not used to make things faster, it is used to allow usage of the result in context where a runtime-variable expression is illegal.
This is only my evaluation, but I believe your (b) reason is correct (that it forms part of the interface that the compiler can enforce). The interface requirement serves both for the writer of the code and the client of the code.
The writer may intend something to be usable in a compile-time context but not actually use it that way. If the writer violates the rules for constexpr, they might not find out until after publication, when clients who try to use it in a constexpr context fail. Or, more realistically, the library might use the code in a constexpr sense in version 1, refactor that usage away in version 2, and break constexpr compatibility in version 3 without realizing it. By checking constexpr compliance up front, the breakage in version 3 will be caught before deployment.
The interface benefit for the client is more obvious: an inline function won't silently become constexpr-required just because it happened to work that way and someone used it so.
I don't believe your (a) reason (that it could take too long for the compiler) is applicable because (1) the compiler has to check much of the constexpr constraints anyway when the code is marked, (2) without the annotation, the compiler would only have to do the checking when used in a constexpr way (so most functions wouldn't have to be checked), and (3) IIUC the D programming language actually does allow functions to be compile-time evaluated if they meet requirements without any declaration assistance, so apparently it can be done.
I think I remember watching an early talk by Bjarne Stroustrup where he mentioned that programmers wanted fine-grained control over this "dangerous" feature, from which I understand that they don't want things "accidentally" executed at compile time without them knowing. (Even if that sounds like a good thing.)
There can be many reasons for that, but I think the only valid one is ultimately compilation speed ((a) in your list).
It would be too much of a burden on the compiler to determine for every function whether it could be computed at compile time.
This argument gets weaker as compilation times in general go down.
Like with many other features of C++, what ends up happening is that we end up with the "wrong defaults".
So you have to say when you want constexpr instead of when you don't want it (runtimeexpr); you have to say when you want const instead of where you want mutable, etc.
Admittedly, you can imagine functions that take an absurd amount of time to run at compile time and whose cost cannot be amortized (with other kinds of machine resources) at runtime.
(I am not aware of "time-out" being a criterion in any compiler for constexpr, but it could be.)
Or it could be that one is compiling on a system that is always expected to finish compilation in finite time, while an unbounded runtime is admissible (or debuggable).
I know that this question is old, but time has shown that it actually makes sense to have constexpr as the default:
In C++17, for example, you can declare a lambda constexpr, but more importantly lambdas are constexpr by default whenever they can be.
https://learn.microsoft.com/en-us/cpp/cpp/lambda-expressions-constexpr
Note that lambdas have all the "right" (opposite) defaults: members (captures) are const by default, arguments are generic by default (auto, i.e. templated), and now their call operators are constexpr by default.

Two questions about inline functions in C++

I have some questions about compiling an inline function in C++.
Can a recursive function work with inline? If yes, then please describe how.
I am sure a function with a loop can't be inlined, but I have read somewhere that a recursive one would work if we pass constant values.
My friend sent me a recursive inline function called with a constant parameter and told me it would work, but it does not work on my laptop: no error at compile time, but at run time it displays nothing and I have to terminate it by force.
inline f(int n) {
    if (n <= 1)
        return 1;
    else {
        n = n * f(n - 1);
        return n;
    }
}
How does this work?
I am using Turbo C++ 3.2.
Also, if an inline function's code is too large, can the compiler automatically change it into a normal function?
thanks
This particular function definitely can be inlined. That is because the compiler can figure out that this particular form of recursion (reducible to a tail call via an accumulator) can be trivially turned into a normal loop. And with a normal loop it has no problem inlining it at all.
Not only can the compiler inline it, it can even calculate the result for a compile-time constant without generating any code for the function.
With GCC 4.4
int fac = f(10);
produced this instruction:
movl $3628800, 4(%esp)
You can easily verify when checking assembly output, that the function is indeed inlined for input that is not known at compile-time.
I suppose your friend was trying to say that, given a constant, the compiler could calculate the result entirely at compile time and just inline the answer at the call site. C++0x (now C++11) actually has a mechanism for this called constexpr, with limits on how complex the code is allowed to be. But even with the current version of C++ it is possible; it depends entirely on the compiler.
This function may be a good candidate given that it clearly only references the parameter to calculate the result. Some compilers even have non-portable attributes to help the compiler decide this. For example, gcc has pure and const attributes (listed on that page I just linked) that inform the compiler that this code only operates on the parameters and has no side effects, making it more likely to be calculated at compile time.
Even without this, it will still compile! The reason is that the compiler is allowed to decide not to inline a function. Think of the inline keyword as more of a suggestion than an instruction.
Assuming that the compiler doesn't calculate the whole thing at compile time, inlining is not completely possible without other optimizations applied (see EDIT below) since it must have an actual function to call. However, it may get partially inlined. In that case the compiler will inline the initial call, but also emit a regular version of the function which will get called during recursion.
As for your second question, yes, size is one of the factors that compilers use to decide if it is appropriate to inline something.
If running this code on your laptop takes a very long time, then it is possible that you just gave it very large values and it is simply taking a long time to calculate the answer... The code looks OK, but keep in mind that 13! and above will overflow a 32-bit int. What value did you attempt to pass?
The only way to know what actually happens is to compile it and look at the generated assembly.
PS: you may want to look into a more modern compiler if you are concerned with optimizations. For Windows there are MinGW and free versions of Visual C++. For *NIX there is of course g++.
EDIT: There is also a thing called Tail Recursion Optimization which allows compilers to convert certain types of recursive algorithms to iterative, making them better candidates for inlining. (In addition to making them more stack space efficient).
Recursive function can be inlined to certain limited depth of recursion. Some compilers have an option that lets you to specify how deep you want to go when inlining recursive functions. Basically, the compiler "flattens" several nested levels of recursion. If the execution reaches the end of "flattened" code, the code calls itself in usual recursive fashion and so on. Of course, if the depth of recursion is a run-time value, the compiler has to check the corresponding condition every time before executing each original recursive step inside the "flattened" code. In other words, there's nothing too unusual about inlining a recursive function. It is like unrolling a loop. There's no requirement for the parameters to be constant.
What you mean by "I am sure about loop can't work" is not clear. It doesn't seem to make much sense. Functions with a loop can be easily inlined and there's nothing strange about it.
What are you trying to say about your example that "displays nothing" is not clear either. There is nothing in the code that would "display" anything. No wonder it "displays nothing". On top of that, you posted invalid code. C++ language does not allow function declarations without an explicit return type.
As for your last question, yes, the compiler is completely free to implement an inline function as "normal" function. It has nothing to do with function being "too large" though. It has everything to do with more-or-less complex heuristic criteria used by that specific compiler to make the decision about inlining a function. It can take the size into account. It can take other things into account.
You can inline recursive functions. The compiler normally unrolls them to a certain depth - in VS you can even set a pragma for this - and the compiler can also do partial inlining. It essentially converts the recursion into loops. Also, as @Evan Teran said, the compiler is not forced to inline a function that you suggest at all. It might totally ignore you, and that's perfectly valid.
The problem with the code is not in that inline function. The constantness or not of the argument is pretty irrelevant, I'm sure.
Also, seriously, get a new compiler. There's modern free compilers for whatever OS your laptop runs.
One thing to keep in mind: according to the standard, inline is a suggestion, not an absolute guarantee. In the case of a recursive function, the compiler will not always be able to compute the recursion limit. Modern compilers are getting extremely smart - a previous answer shows the compiler evaluating a constant call and simply generating the result - but consider
bigint fac = factorialOf(userInput)
There is no way the compiler can figure that one out.
As a side note, most compilers tend to ignore inline in debug builds unless specifically instructed otherwise - it makes debugging easier.
Tail recursions can be converted to loops as long as the compiler can satisfactorily rearrange the internal representation to get the recursion's conditional test at the end. In that case it can generate code that re-expresses the recursive function as a simple loop.
As far as issues like tail-recursion rewrites, partial expansion of recursive functions, etc. go, these are usually controlled by the optimization switches - all modern compilers are capable of pretty significant optimization, but sometimes things do go wrong.
Remember that the inline keyword merely sends a request, not a command, to the compiler. The compiler may ignore this request if the function definition is too long or too complicated, and compile the function as a normal function.
Some of the cases where inline expansion may not happen (at least on older compilers) are:
For functions returning values, if a loop, a switch or a goto exists.
For functions not returning values, if a return statement exists.
If the function contains static variables.
If the inline function is recursive.
Hence, in C++, inline recursive functions may not work.