Are arithmetic operations on literals in C++ evaluated at compile time? - c++

Similar questions have been asked for C#:
Are arithmetic operations on literals in C# evaluated at compile time?,
and for Java:
Are arithmetic operations on literals calculated at compile time or run time?.
Considering C++, will the following calculations be evaluated at run time or at compile time? The first initializes a built-in type; the second appears as a function argument.
Please consider all four basic arithmetic operations, as well as other built-in types, e.g. an int instead of the double below.
double testDouble = 2.0 + 2.0;
aUserDefinedType testUserDefinedTypeObject(
    aMemberVariable * std::pow(someOtherVariable, 1.0 / 8.0)
);

It depends on your compiler and its optimization level when building the code.
There is no intrinsic guarantee of compile time evaluation, but most compilers will evaluate constant expressions at compile time when optimizations are turned on.
There is also constexpr (since C++11), which lets you require that an expression be evaluated at compile time wherever a constant expression is needed.
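For instance, a minimal sketch (the names are illustrative, and C++11 is assumed):
constexpr double testDouble = 2.0 + 2.0;        // must be evaluated at compile time
static_assert(testDouble == 4.0, "folded during translation");

constexpr int testInt = (7 + 3) * 2 - 10 / 5;   // likewise for int and the other operators
static_assert(testInt == 18, "folded during translation");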

Related

Are const constant operations evaluated at run time?

I'm doing some chess programming in C++, so there are a lot of bitwise operations that I have to do with some large numbers. I was wondering, for performance's sake, whether constant operations are done at run time or evaluated during compilation. E.g. suppose I have to AND the following two constants:
const unsigned long long FILE_A = ~0x8080808080808080;
const unsigned long long FILE_B = ~0x4040404040404040;
In a function like this
unsigned long long join() {
    return (FILE_A & FILE_B);
}
Is the AND operation on FILE_A and FILE_B done at runtime? Or does the compiler do it?
In general: a C++ compiler is allowed to do any optimization as long as the result of the optimization is "as if" the code was executed literally.
In the example you gave, doing the calculation at compile time is indistinguishable from doing it at run time, so modern C++ compilers will do exactly that. In fact, if join() is defined in a header file (with an inline attribute) and a moderate optimization level is selected, modern compilers will not only do the calculation at compile time but optimize join() away entirely, injecting the computed constant directly wherever join() is used, which in turn enables further compile-time optimizations. That's allowed because the result is indistinguishable from the result if nothing had been optimized away.
From the look of things, it does. I put the code above into this converter, https://assembly.ynh.io/, and for the line return (FILE_A & FILE_B); it outputs the following assembly:
movabsq $4557430888798830399, %rax
And yes, 4557430888798830399 is the bitwise AND of (~0x8080808080808080) and (~0x4040404040404040).
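If you want that folding guaranteed rather than merely likely, constexpr (C++11) is one option; a minimal sketch, not part of the original code:
constexpr unsigned long long FILE_A = ~0x8080808080808080ULL;
constexpr unsigned long long FILE_B = ~0x4040404040404040ULL;

constexpr unsigned long long join() {
    return FILE_A & FILE_B;
}

// Forces the AND to happen during compilation:
static_assert(join() == 4557430888798830399ULL, "computed at compile time");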

Do these macros evaluate to the same code using gcc at compile-time?

Of course this is going to be a function of the compiler you are using, but I figured this would be a simple question to answer.
#define UBRRVAL(baud) (F_CPU/(16*baud)-1)
As compared with
#define UBRRVAL(baud) (F_CPU/16/baud-1)
I know that the latter is going to evaluate to (assuming F_CPU = 20000000):
#define UBRRVAL(baud) (1250000/baud-1)
Considering the precedence forced by the parentheses, I was curious to know whether most compilers (gcc in particular) would evaluate the former expression equivalently to the latter at compile time.
This is code that is going into an embedded system, so if these expressions are not evaluated equivalently at compile time, the latter is more efficient; a single division at run time is cheaper than a division plus a multiplication, of course.
Simple answer, no.
Because neither macro is fully parenthesized, there are cases where the two are very different.
Consider UBRRVAL(2+1). The first would expand to (F_CPU/(16*2+1)-1), which is equivalent to F_CPU/33 - 1. The second would expand to (F_CPU/16/2+1-1), which is equivalent to F_CPU/32. Not the same at all.
Of course, it probably isn't meant to be called with an expression, just with a single constant value, but there's nothing to prevent it, and as such, someone will do it sometime in the future. One of the many evils of macros. I would recommend using a short (static) inline function (or constexpr as suggested in comments, if this is using a recent enough C++ compiler) instead...
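A sketch of the suggested replacement, assuming C++11 and the F_CPU value given in the question:
#define F_CPU 20000000UL   // value assumed in the question

constexpr unsigned long ubrrval(unsigned long baud) {
    return F_CPU / 16 / baud - 1;
}

// The argument is evaluated exactly once, so ubrrval(2 + 1) behaves as
// expected, and constant arguments still fold at compile time:
static_assert(ubrrval(9600) == 129, "folded at compile time");
(In plain C, a static inline function gives the same single-evaluation behavior, though without the compile-time guarantee.)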
Simple answer, yes. Within the specific constraints given both will be fully evaluated at compile time.
Parentheses force precedence, but they do not force order of evaluation except to the extent defined by the "as if" rule. If the expression were slightly more complicated, so that it was not fully evaluated at compile time, you could not be sure what code would be emitted; that may well depend on the specific processor.
As a side point, on most processors a 4-bit shift left and a 4-bit shift right cost the same, and if the baud rate is a power of two the compiler is likely to generate shift operations.
[And be careful about parenthesising macro arguments. You got away with it this time, but only just.]
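To illustrate the shift remark (a minimal sketch; actual code generation depends on the target):
// For unsigned operands, division by 16 can be compiled as a right
// shift by 4 bits, so no divide instruction is needed at run time:
unsigned long divide_by_16(unsigned long x) {
    return x / 16;   // typically emitted as x >> 4
}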

Does the product of two constants get computed every time it is executed?

For example, if I have:
if(x < 2*0.025) { ... }
Does the 2*0.025 get computed every time? Or does a 0.05 get substituted in so that the multiplication operation doesn't have to run every time?
In other words, is it more efficient to use 0.05 instead of 2*0.025?
Every compiler I know implements constant folding, i.e. calculates constant expressions at compile time, so there is no difference. The standard, however, does not mandate it:
A constant expression can be evaluated during translation rather than runtime, and accordingly may be used in any place that a constant may be.
You can explicitly disable this optimization with some compilers. For example, -frounding-math disables constant folding for floating point expressions in gcc.
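To see the folding in this particular case (a sketch; the exact assembly varies by compiler and flags):
// Compiled with optimizations (e.g. g++ -O2 -S), both functions load the
// same precomputed constant; no multiplication survives to run time.
// Here 2 * 0.025 and 0.05 even round to the identical double value.
bool check_folded(double x)  { return x < 2 * 0.025; }
bool check_literal(double x) { return x < 0.05; }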
Constant expressions are precomputed.

Can I guarantee the C++ compiler will not reorder my calculations?

I'm currently reading through the excellent Library for Double-Double and Quad-Double Arithmetic paper, and in the first few lines I notice they perform a sum in the following way:
std::pair<double, double> TwoSum(double a, double b)
{
    double s = a + b;                    // rounded sum
    double v = s - a;                    // the portion of b captured in s
    double e = (a - (s - v)) + (b - v);  // exact rounding error of a + b
    return std::make_pair(s, e);
}
The calculation of the error, e, relies on the fact that the calculation follows that order of operations exactly because of the non-associative properties of IEEE-754 floating point math.
If I compile this with a modern optimizing C++ compiler (e.g. MSVC or gcc), can I be sure that the compiler won't alter the order in which this calculation is done?
Secondly, is this guaranteed anywhere within the C++ standard?
You might like to look at the g++ manual page: http://gcc.gnu.org/onlinedocs/gcc-4.6.1/gcc/Optimize-Options.html#Optimize-Options
Particularly -fassociative-math, -ffast-math and -ffloat-store
According to the g++ manual it will not reorder your expression unless you specifically request it.
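A quick way to convince yourself (a sketch; the flag behavior is as documented in the gcc manual linked above):
#include <cstdio>
#include <utility>

std::pair<double, double> TwoSum(double a, double b) {
    double s = a + b;
    double v = s - a;
    double e = (a - (s - v)) + (b - v);
    return std::make_pair(s, e);
}

int main() {
    // Built with default FP options this prints e = 1e-17, the exact
    // rounding error of the sum. Built with -ffast-math (which implies
    // -fassociative-math), the compiler is licensed to re-associate
    // and may fold e to 0.
    std::pair<double, double> r = TwoSum(1.0, 1e-17);
    std::printf("s = %.17g, e = %.17g\n", r.first, r.second);
    return 0;
}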
Yes, that is safe (at least in this case). You only use two kinds of "operators" there: the parenthesized primary expression, (something), and the binary additive operator, something +/- something.
Section 1.9 Program execution (of C++0x N3092) states:
Operators can be regrouped according to the usual mathematical rules only where the operators really are associative or commutative.
In terms of the grouping, 5.1 Primary expressions states:
A parenthesized expression is a primary expression whose type and value are identical to those of the enclosed expression. ... The parenthesized expression can be used in exactly the same contexts as those where the enclosed expression can be used, and with the same meaning, except as otherwise indicated.
I believe the use of the word "identical" in that quote requires a conforming implementation to guarantee that it will be executed in the specified order unless another order can give the exact same results.
And for adding and subtracting, section 5.7 Additive operators has:
The additive operators + and - group left-to-right.
So the standard dictates the results. If the compiler can ascertain that the same results can be obtained with different ordering of the operations then it may re-arrange them. But whether this happens or not, you will not be able to discern a difference.
This is a very valid concern, because Intel's C++ compiler, which is very widely used, defaults to performing optimizations that can change the result.
See http://software.intel.com/sites/products/documentation/hpc/compilerpro/en-us/cpp/lin/compiler_c/copts/common_options/option_fp_model.htm#option_fp_model
I would be quite surprised if any compiler wrongly assumed associativity of arithmetic operators with default optimising options.
But be wary of extended precision of FP registers.
Consult compiler documentation on how to ensure that FP values do not have extended precision.
If you really need to, I think you can make a noinline function, no_reorder(float x) { return x; }, and then use it instead of parentheses. Obviously, it's not a particularly efficient solution, though.
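A sketch of that barrier (using double to match TwoSum above, where the answer wrote float; the attribute spelling is compiler-specific, and NOINLINE is just an illustrative macro name):
#if defined(__GNUC__)
#define NOINLINE __attribute__((noinline))
#elif defined(_MSC_VER)
#define NOINLINE __declspec(noinline)
#else
#define NOINLINE
#endif

// The call is opaque to the optimizer, so its argument must be fully
// evaluated before the surrounding expression can be simplified:
NOINLINE double no_reorder(double x) { return x; }

// e.g.  double e = (a - no_reorder(s - v)) + (b - v);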
In general, you should be able to -- the optimizer should be aware of the properties of the real operations.
That said, I'd test the hell out of the compiler I was using.
Yes. The compiler will not change the order of your calculations within a block like that.
Between compiler optimizations and out-of-order execution on the processor, it is almost a guarantee that things will not happen exactly as you ordered them.
HOWEVER, it is also guaranteed that this will NEVER change the result. C++ follows standard order of operations and all optimizations preserve this behavior.
Bottom line: Don't worry about it. Write your C++ code to be mathematically correct and trust the compiler. If anything goes wrong, the problem was almost certainly not the compiler.
As per the other answers, you should be able to rely on the compiler doing the right thing. Most compilers let you compile to assembler and inspect it (use -S for gcc); you may want to do that to make sure you get the order of operations you expect.
Different optimization levels (in gcc: -O, -O2, etc.) allow code to be rearranged, although sequential code like this is unlikely to be affected. If in doubt, you could isolate that particular part of the code into a separate file, so that you can control the optimization level for just that calculation.
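With gcc specifically, a per-function alternative to a separate file is the optimize attribute (a sketch; this is a GCC extension, not standard C++):
#include <utility>

// Compiles just this function at -O0 while the rest of the translation
// unit keeps its normal optimization level.
__attribute__((optimize("O0")))
std::pair<double, double> TwoSumUnoptimized(double a, double b) {
    double s = a + b;
    double v = s - a;
    double e = (a - (s - v)) + (b - v);
    return std::make_pair(s, e);
}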
The short answer is: the compiler will probably change the order of your calculations, but it will never change the behavior of your program (unless your code makes use of an expression with undefined behavior: http://blog.regehr.org/archives/213).
However, you can still influence this behavior by deactivating all compiler optimizations (option "-O0" with gcc). If you still need the compiler to optimize the rest of your code, you may put this function in a separate ".c" file which you compile with "-O0".
Additionally, you can use some hacks. For instance, if you interleave your code with calls to extern functions, the compiler may consider it unsafe to reorder your code, since such a function may have unknown side effects. Calling "printf" to print your intermediate results leads to similar behavior.
Anyway, unless you have a very good reason (e.g. debugging), you typically don't want to care about this, and you should trust the compiler.
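For completeness, a sketch of the opaque-call hack mentioned above (sink() is hypothetical, assumed to be defined in another translation unit; this is a hack, not a guarantee):
// Because the compiler cannot see the body of sink(), it must assume
// the call has arbitrary side effects, so each intermediate value has
// to be materialized in order to be passed to it.
extern void sink(double);  // hypothetical; defined elsewhere

double two_sum_error(double a, double b) {
    double s = a + b;
    sink(s);
    double v = s - a;
    sink(v);
    return (a - (s - v)) + (b - v);
}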

Efficiency of repeated arithmetic between two macros

In an ANSI C project I am working on, I have two macros defined: PERIOD_IN_MS and CYCLES_PER_MS. In the actual period-handling logic, I do many comparisons between a counter that is incremented every "cycle" and PERIOD_IN_MS * CYCLES_PER_MS. I'm concerned that this arithmetic operation is repeatedly evaluated during each comparison.
Does anyone know if this is true, or if the compiler will evaluate the product of the two integer literals at compile time and use that instead?
I realize that this particular example would probably only remove one instruction out of the generated assembly code, but now I'm curious about this.
The standard doesn't impose any requirement to do this, but any sensible compiler will fold these constants down into one at compile-time. See e.g. http://en.wikipedia.org/wiki/Constant_propagation.
If you're curious to know whether this has actually happened, you can always take a look at the assembler generated by the compiler.
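For instance (a sketch with illustrative values; the macro names are from the question, the numbers are not):
#define PERIOD_IN_MS 10      /* illustrative value, not from the question */
#define CYCLES_PER_MS 1000   /* illustrative value, not from the question */

int period_elapsed(unsigned long counter) {
    /* With optimization enabled, the product below is folded to the
       single literal 10000; "gcc -O2 -S" shows no multiply instruction. */
    return counter >= PERIOD_IN_MS * CYCLES_PER_MS;
}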
The compiler should evaluate the constant expression at compile time (though I believe C does not require it to). A good compiler will almost certainly do it when optimization is turned on.
If you want to avoid multiple evaluation, perhaps just to speed up compilation, and your constants fit into an int, you could enforce single evaluation by using an enumeration constant instead:
enum { cycles_per_period = PERIOD_IN_MS * CYCLES_PER_MS };
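The product is then computed once, where the enumeration is defined, and the comparison can simply read if (counter >= cycles_per_period). As a side benefit, an enumeration constant, unlike a macro, is typically visible by name in a debugger.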