Is there any speed difference between the following two cases? - C++

Will judge_function_2 be faster than judge_function_1? I think judge_function_1's OR condition needs to evaluate two comparisons, while judge_function_2 only needs to evaluate one before returning.
#include <iostream>
using namespace std;

bool judge_function_1()
{
    if (100 == 100 || 100 == 200)
    {
        return true;
    }
    return false;
}

bool judge_function_2()
{
    if (100 == 100)
    {
        return true;
    }
    if (100 == 200)
    {
        return true;
    }
    return false;
}

int main()
{
    cout << judge_function_1() << endl;
    cout << judge_function_2() << endl;
    return 0;
}

Compiling with gcc with optimizations enabled on godbolt results in the following assembly code (https://godbolt.org/z/YEfYGv5vh):
judge_function_1():
        mov eax, 1
        ret
judge_function_2():
        mov eax, 1
        ret
The assembly code of the two functions is identical and both simply return true, so they will be exactly equally fast.

Compilers know how to read and understand code. They cannot guess your intentions, but that's only an issue when your code does not express your intentions.
A compiler "knows" that 100 == 100 is always true. It cannot be something else.
A compiler "knows" that true || whatever is always true. It cannot be something else.
A compiler "knows" that after a return no other statements are executed in a function.
These transformations make it straightforward to prove that both functions are equivalent to
bool same_thing_just_with_less_fluff() { return true; }
The code in both functions is just a very complicated way to say return true; / "this function returns true".
Compilers optimize code when you ask them to. If you turn on optimizations, there is no reason to expect any difference between the two functions. Calling them has exactly the same effect; there is no observable difference between the two. And as shown above, the way to prove this is rather simple. It is quite safe to assume that any compiler can optimize both functions down to a simple return true;.
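If you want to convince yourself without reading assembly, here is a minimal sketch (marking the functions constexpr is my addition, assuming C++14 or later): the compiler is then required to be able to evaluate both functions at compile time, and the static_asserts confirm the result.

#include <iostream>

// Same logic as above, marked constexpr (my addition) so the compiler
// must be able to evaluate both functions at compile time.
constexpr bool judge_function_1()
{
    if (100 == 100 || 100 == 200)
        return true;
    return false;
}

constexpr bool judge_function_2()
{
    if (100 == 100)
        return true;
    if (100 == 200)
        return true;
    return false;
}

// Both are compile-time constants equal to true: no runtime work is left.
static_assert(judge_function_1(), "always true");
static_assert(judge_function_2(), "always true");

int main()
{
    std::cout << judge_function_1() << '\n';
    std::cout << judge_function_2() << '\n';
}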
In my experience, a misunderstanding that is common among beginners is that your code is a set of instructions for your CPU: that when you write 100 == 100, at runtime there must be 100 in one register and 100 in another register, and the CPU needs to carry out an operation to check whether the values are the same. This picture is totally wrong.
Your code is an abstract description of the observable behavior of your program. It's the compiler's job to translate this abstract description into something your CPU understands and can execute to exhibit the observable behavior you described in your code, in accordance with the definitions provided by the C++ standard.
What your code describes is, in plain English: write 1 followed by a newline to stdout and flush it, twice.

From your guess:
I think judge_function_1's OR condition needs to evaluate two comparisons, while judge_function_2 only needs to evaluate one before returning.
I deduce that you expect some comparisons to actually be performed at runtime, as would be the case if you passed two parameters to your function and compared them with constants:
bool judge_function_1(int x, int y)
{
    if (x == 100 || y == 200)
    {
        return true;
    }
    return false;
}
Even in that case, if the first condition is true, the function will return immediately, without comparing y to 200.
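You can observe the short-circuit yourself with a small test (a sketch; the tracing helpers first and second are hypothetical, not from the question):

#include <iostream>

// Hypothetical helpers that log when they are evaluated.
bool first()  { std::cout << "first evaluated\n";  return true;  }
bool second() { std::cout << "second evaluated\n"; return false; }

int main()
{
    // Prints only "first evaluated": since first() returns true,
    // || short-circuits and second() is never called.
    if (first() || second())
        std::cout << "condition is true\n";
}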

Related

Can I replace an if-statement with AND?

My prof once said that if-statements are rather slow and should be avoided as much as possible. I'm making a game in OpenGL, where I need a lot of them.
In my tests replacing an if-statement with AND via short-circuiting worked, but is it faster?
#include <cstdlib>
#include <iostream>

bool doSomething();

int main()
{
    int randomNumber = std::rand() % 10;
    randomNumber == 5 && doSomething();
    return 0;
}

bool doSomething()
{
    std::cout << "function executed" << std::endl;
    return true;
}
My intention is to use this inside the draw function of my renderer. My models are supposed to have flags; if a flag is true, a certain function should execute.
if-statements are rather slow and should be avoided as much as possible.
This is wrong and/or misleading. Most simplified statements about slowness of a program are wrong. There's probably something wrong with this answer too.
C++ statements don't have a speed that can be attributed to them. It's the speed of the compiled program that matters. And that consists of assembly language instructions; not of C++ statements.
What would probably be more correct is to say that branch instructions can be relatively slow (on modern, superscalar CPU architectures) (when the branch cannot be predicted well) (depending on what you are comparing to; there are many things that are much more expensive).
randomNumber == 5 && doSomething();
An if-statement is often compiled into a program that uses a branch instruction. A short-circuiting logical-and operation is also often compiled into a program that uses a branch instruction. Replacing if-statement with a logical-and operator is not a magic bullet that makes the program faster.
If you were to compare the program produced by the logical-and and the corresponding program where it is replaced with if (randomNumber == 5), you would find that the optimiser sees through your trick and produces the same assembly in both cases.
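For illustration, these are the two spellings being compared (a sketch around the question's doSomething):

bool doSomething(); // the function from the question

// With optimisations on, both spellings typically compile to the same code;
// either way the program must decide, at run time, whether doSomething() runs.
void with_if(int randomNumber)
{
    if (randomNumber == 5)
        doSomething();
}

void with_and(int randomNumber)
{
    randomNumber == 5 && doSomething();
}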
My models are supposed to have flags; if a flag is true, a certain function should execute.
In order to avoid the branch, you must change the premise. Instead of iterating through a sequence of all models, checking the flag, and conditionally calling a function, you could create a sequence of only those models for which the function should be called, iterate over that, and call the function unconditionally: no branching (see the sketch below). Is this alternative faster? There is certainly some overhead in maintaining the data structure, and the branch predictor may have made it unnecessary anyway. The only way to know for sure is to measure the program.
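A sketch of that restructuring (the names Model, drawSpecial, and drawFlagged are hypothetical):

#include <vector>

struct Model { bool flag = false; /* ... */ };

void drawSpecial(Model& m); // the work previously guarded by the flag

// Instead of: for each model { if (m.flag) drawSpecial(m); }
// keep a list of exactly those models that need the call, and iterate that:
void drawFlagged(std::vector<Model*>& flagged)
{
    for (Model* m : flagged)  // every element needs the call,
        drawSpecial(*m);      // so the loop body is branch-free
}

// The cost moves elsewhere: the flagged list must be kept up to date
// whenever a model's flag changes. Measure to see which is cheaper.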
I agree with the comments above that in almost all practical cases, it's OK to use ifs as much as you need without hesitation.
I also agree that this is not an issue important enough for a beginner to spend energy optimizing, and that using logical operators will likely emit code similar to ifs.
However - there is a valid issue here related to branching in general, so those who are interested are welcome to read on.
Modern CPUs use what we call instruction pipelining.
Without getting too deep into the technical details:
Within each CPU core there is a level of parallelism.
Each assembly instruction is composed of several stages, and while the current instruction is executing, the next instructions are already being prepared to a certain degree.
This is called instruction pipelining.
This mechanism is disrupted by any kind of branching in general, and by conditionals (ifs) in particular.
It's true that there is a branch-prediction mechanism, but it works only to some extent.
So although in most cases ifs are totally OK, there are cases where their cost should be taken into account.
As always when it comes to optimizations, one should carefully profile.
Take the following piece of code as an example (similar things are common in image processing and other implementations):
unsigned char * pData = ...; // get data from somewhere
int dataSize = 100000000;    // something big
bool cond = ...;             // initialize some condition relevant for all the data

for (int i = 0; i < dataSize; ++i, ++pData)
{
    if (cond)
    {
        *pData = 2; // imagine some small calculation
    }
    else
    {
        *pData = 3; // imagine some other small calculation
    }
}
It might be better to do it like this (even though it contains duplication, which is evil from a software engineering point of view):
if (cond)
{
    for (int i = 0; i < dataSize; ++i, ++pData)
    {
        *pData = 2; // imagine some small calculation
    }
}
else
{
    for (int i = 0; i < dataSize; ++i, ++pData)
    {
        *pData = 3; // imagine some other small calculation
    }
}
We still have an if, but now it causes at most one branch per call rather than one per iteration.
In certain [rare] cases (requires profiling as mentioned above) it will be more efficient to do even something like this:
for (int i = 0; i < dataSize; ++i, ++pData)
{
    *pData = (2 * cond + 3 * (!cond));
}
I know it's not common, but I encountered specific hardware some years ago on which the cost of 2 multiplications and 1 addition with negation was lower than the cost of branching (due to the instruction pipeline being flushed). Also, this "trick" supports using different condition values for different parts of the data.
Bottom line: ifs are usually OK, but it's good to be aware that sometimes there is a cost.
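If you want to check this on your own hardware, here is a minimal timing sketch (the buffer, the use of std::chrono, and the volatile trick are my assumptions, not from the original code):

#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    const int dataSize = 100000000; // something big, as above
    std::vector<unsigned char> data(dataSize);
    volatile bool cond = true; // volatile: stop the compiler hoisting the test for us

    unsigned char* pData = data.data();
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < dataSize; ++i, ++pData)
    {
        if (cond)
            *pData = 2;
        else
            *pData = 3;
    }
    auto t1 = std::chrono::steady_clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    std::printf("branch inside loop: %lld ms\n", static_cast<long long>(ms));
    // Repeat with the hoisted-if and the arithmetic versions and compare;
    // results differ per CPU and compiler, so only your own numbers count.
}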

Can (a==1 && a==2 && a==3) ever evaluate to true in C or C++?

We know it can in Java and JavaScript.
But the question is, can the condition below ever evaluate to true in C or C++?
if (a == 1 && a == 2 && a == 3)
    printf("SUCCESS");
EDIT
If a was an integer.
Depends on your definition of "a is an integer":
int a__() { static int r; return ++r; }
#define a a__() // a is now an expression of type `int`

int main()
{
    return a == 1 && a == 2 && a == 3; // returns 1
}
Of course:
int f(int b) { return b == 1 && b == 2 && b == 3; }
will always return 0; and optimizers will generally replace the check with exactly that.
If we put macro magic aside, I can see one way to answer this question positively, but it requires a bit more than standard C. Let's assume we have an extension allowing the __attribute__((at(ADDRESS))) attribute, which places a variable at a specific memory location (available in some ARM compilers, for example ARM GCC). Let's assume we have a hardware counter register at the address ADDRESS which increments on each read. Then we could do something like this:
volatile int a __attribute__((at(ADDRESS)));
The volatile forces the compiler to generate a register read each time the comparison is performed, so the counter will increment 3 times. If the initial value of the counter is 1, the statement will evaluate to true.
P.S. If you don't like the at attribute, the same effect can be achieved with a linker script, by placing a into a specific memory section.
If a is of a primitive type (i.e. all == and && operators are built in), you are in defined behavior, there is no way for another thread to modify a in the middle of execution (this is technically a case of undefined behavior - see comments - but I left it here anyway because it's the example given in the Java question), and there is no preprocessor magic involved (see the chosen answer), then I don't believe there is any way for this to evaluate to true. However, as you can see from that list of conditions, there are many scenarios in which the expression could evaluate to true, depending on the types used and the context of the code.
In C, yes it can. If a is uninitialised then (even if there is no UB, as discussed here), its value is indeterminate, reading it gives indeterminate results, and comparing it to other numbers therefore also gives indeterminate results.
As a direct consequence, a could compare equal to 1 at one moment, then compare equal to 2 the next. It can't hold both of those values simultaneously, but that doesn't matter, because its value is indeterminate.
In practice I'd be surprised to see the behaviour you describe, though, because there's no real reason for the actual storage to change in memory in the time between the two comparisons.
In C++, sort of. The above is still true there, but reading an indeterminate value is always an undefined operation in C++ so really all bets are off.
Optimisations are allowed to aggressively bastardise your code, and when you do undefined things this can quite easily result in all manner of chaos.
So I'd be less surprised to see this result in C++ than in C but, if I did, it would be an observation without purpose or meaning because a program with undefined behaviour should be entirely ignored anyway.
And, naturally, in both languages there are "tricks" you can do, like #define a (x++), though these do not seem to be in the spirit of your question.
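For completeness, here is that #define trick written out as a full program (a sketch; the variable x and its starting value are mine):

#include <stdio.h>

int x = 1;
#define a (x++) /* every mention of a reads x and then increments it */

int main(void)
{
    /* expands to (x++) == 1 && (x++) == 2 && (x++) == 3, which reads 1, 2, 3 */
    if (a == 1 && a == 2 && a == 3)
        printf("SUCCESS\n"); /* this line is reached */
    return 0;
}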
The following program randomly prints seen: yes or seen: no, depending on whether at some point in the execution of the main thread (a == 0 && a == 1 && a == 2) evaluated to true.
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

_Atomic int a = 0;
_Atomic int relse = 0;

void *writer(void *arg)
{
    ++relse;
    while (relse != 2);

    for (int i = 100; i > 0; --i)
    {
        a = 0;
        a = 1;
        a = 2;
    }

    return NULL;
}

int main(void)
{
    int seen = 0;
    pthread_t pt;

    if (pthread_create(&pt, NULL, writer, NULL)) exit(EXIT_FAILURE);

    ++relse;
    while (relse != 2);

    for (int i = 100; i > 0; --i)
        seen |= (a == 0 && a == 1 && a == 2);

    printf("seen: %s\n", seen ? "yes" : "no");

    pthread_join(pt, NULL);
    return 0;
}
As far as I am aware, this does not contain undefined behavior at any point, and a is of an integer type, as required by the question.
Obviously this is a race condition, and so whether seen: yes or seen: no is printed depends on the platform the program is run on. On Linux, x86_64, gcc 8.2.1 both answers appear regularly. If it doesn't work, try increasing the loop counters.

breaking up conditional expression

I'm trying to evaluate if there's a performance hit in writing this expression
bool func() {
    if (expr1 || expr2 ... || exprN)
        return 1;
    return 0;
}
as
bool func() {
    if (expr1)
        return 1;
    if (expr2)
        return 1;
    ...
    if (exprN)
        return 1;
    return 0;
}
The reason I'm trying to do the latter is simply to improve readability/maintainability (e.g. the latter can be written in terms of a helper macro, making it easier to add/remove expressions; there are about 50 expressions in this case).
A similar scenario is writing
bool func() {
    if (expr1 && expr2 && ... && exprN) {
        return 1;
    }
    return 0;
}
as
bool func() {
    if (!expr1) {
        return 0;
    }
    if (!expr2) {
        return 0;
    }
    ...
    if (!exprN) {
        return 0;
    }
    return 1;
}
Is there a performance hit in this case, and if so, do compilers try to optimize it? I'd be interested to know if gcc does that.
(To give some context: the expressions themselves are functions, and let's say we want to determine if all, or at least one, of them returns true. The functions take arguments of different types.)
Your two versions are functionally equivalent.
In C/C++ (and many other languages) the logical operators || and && perform "short-circuit" evaluation. They evaluate the sub-expressions from left to right, stopping as soon as the result is known. In the case of ||, this means it stops as soon as it gets a true (non-zero) result (because true || anything is true); in the case of &&, it stops as soon as it gets a false (zero) result (because false && anything is false).
If you were performing some other action in your if bodies besides just returning from the function, that action would be duplicated for every expression that's true. But since you're returning from the function as soon as one of the expressions is true, you'll never test any of the remaining expressions. So it's doing the same short-circuiting.
If you did have some action in the body, you could get the same kind of short-circuiting by using else if rather than if. However, this would violate the DRY principle. But if it's in the output of a macro, rather than the original source code, this wouldn't be a significant concern (although it would likely increase the size of the generated code -- if taken to extremes this could have memory performance implications).
Your current code
bool func() {
    if (expr1 || expr2 ... || exprN)
        return 1;
    return 0;
}
involving umpteen logical expressions, can be written as the possibly more clear & maintainable
auto func()
    -> bool
{
    return false
        || expr1
        || expr2 ...
        || exprN;
}
This allows the compiler the best opportunity to optimize, because it's presented with complete information about the desired effect.
In contrast, a series of individual if statements would have to be analyzed by the compiler to determine that they implement an OR-chain.
As always when in doubt about micro-performance, MEASURE.
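As an aside, if C++17 is available, a fold expression gives the same maintainability as a helper macro without hiding the control flow (a sketch; any_true and the predicate names are hypothetical):

// Sketch, assuming C++17: each predicate is a callable returning bool
// (the callables can wrap functions taking arguments of different types).
template <typename... Preds>
bool any_true(Preds&&... preds)
{
    return (... || preds()); // fold over ||; short-circuits left to right
}

// Hypothetical usage with two of the ~50 checks wrapped in lambdas:
// bool func() {
//     return any_true([&] { return check1(x); },
//                     [&] { return check2(y, z); });
// }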
C and C++ code is compiled down to its assembly equivalent.
Hence, I think the generated code should not differ unless you have multiple function exits or jump statements (including return, as in your second example).
So there should be no performance hit unless your more readable version introduces too many jumps.

Passing a flag to a function as an "int" or as a "bool": which is better in terms of performance?

Say, I have a function like shown below
void callee_1(int flag);
void callee_2(bool flag);

void caller()
{
    int flag = _getFlagFromConfig();
    // this is a flag which, according to the implementation,
    // is supposed to have only two values, 0 and 1 (as of now)

    callee_1(flag);
    callee_2(1 == flag);
}

void callee_1(int flag)
{
    if (1 == flag)
    {
        //do operation X
    }
}

void callee_2(bool flag)
{
    if (flag)
    {
        //do operation X
    }
}
Which of the callee functions will be a better implementation?
I have gone through this link and I'm pretty convinced that there is not much of a performance impact from using a bool for the comparison in an if-condition. But in my case, I have the flag as an integer. In this case, is it worth going for the second callee?
It won't make any difference in terms of performance. However, in terms of readability, if there are only 2 values then a bool makes more sense, especially if you name your flag something sensible like isFlagSet.
In terms of efficiency, they should be the same.
Note however that they don't do the same thing: you can pass something other than 1 to the first function, and the condition will evaluate to false even if the parameter itself is not zero. The extra comparison could in principle add some overhead, but probably won't.
So let's assume the following case:
void callee_1(int flag)
{
    if (flag)
    {
        //do operation X
    }
}

void callee_2(bool flag)
{
    if (flag)
    {
        //do operation X
    }
}
In this case, technically the first variant would be faster, since bool values aren't checked directly for true or false, but promoted to a word-sized type and then checked for 0. Although the assembly generated might be the same, the processor theoretically does more work on the bool option.
If the value or argument is being used as a boolean, declare it bool. The probability of it making any difference in performance is almost 0, and the use of bool documents your intent, both to the reader and to the compiler.
Also, if you have an int which is being used as a flag (due to an existing interface): either use the implicit conversion (if the interface documents it as a boolean), or compare it with 0 (not with 1). This conforms to the older definition of how int served as a boolean (before the days when C++ had bool).
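A small sketch of that advice (the names are mine):

void callee(int flag) // an int flag from an existing interface
{
    if (flag)      // implicit conversion: any non-zero value counts as set
    {
        // do operation X
    }

    if (flag != 0) // the equivalent explicit spelling
    {
        // do operation X
    }

    // By contrast, (1 == flag) silently treats e.g. flag == 2 as "not set".
}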
One case where the difference between bool and int results in different (optimized) asm is the negation operator ("!").
For "!b", If b is a bool, the compiler can assume that the integer value is either 1 or 0, and the negation can thus be a simple "b XOR 1". OTOH, if b is an integer, the compiler, barring data-flow analysis, must assume that the variable may contain any integer value, and thus to implement the negation it must generate code such as
(b != 0) ? 0: 1
That being said, code where negation is a performance-critical operation is quite rare, I'm sure.
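In code, the two negations look like this (a sketch; the exact instructions depend on compiler and target):

// For a bool, the compiler may assume the value is exactly 0 or 1,
// so !b can compile to a single "XOR with 1".
int not_bool(bool b) { return !b; }

// For an int, any value is possible, so !i must behave like
// (i != 0) ? 0 : 1, typically a compare-and-set sequence.
int not_int(int i) { return !i; }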

Can't recursive functions be inlined? [duplicate]

Possible Duplicate:
Can a recursive function be inline?
What are the trade-offs of making recursive functions inline?
Recursive functions that can be optimised by tail-recursion elimination can certainly be inlined: if the last thing a function does is call itself, then it can be converted into a plain loop.
Arbitrary recursive functions can't be inlined for the same reason a snake can't swallow its own tail.
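For instance, here is a tail-recursive function next to the loop a compiler can turn it into (a sketch; gcd is my choice of example):

// Tail recursive: the recursive call is the very last thing done...
int gcd_rec(int a, int b)
{
    if (b == 0) return a;
    return gcd_rec(b, a % b); // tail call
}

// ...so the compiler can rewrite it as a plain loop, which inlines trivially:
int gcd_loop(int a, int b)
{
    while (b != 0)
    {
        int t = a % b;
        a = b;
        b = t;
    }
    return a;
}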
[Edit: just noticed that although your title says "be inlined", your actual question says "making functions inline". The two effectively have nothing to do with one another, they just have confusingly similar names. In modern compilers, the primary effect of inline is the thing that originally in C99 was (I think) just a necessary detail to make inline work at all: to permit multiple definitions of a symbol with external linkage. That's because modern compilers don't pay a whole lot of attention to the programmer's opinion of whether a function should be inlined. They do pay some, though, so the confusion of concepts persists. I've answered the question in the title, which is the decision the compiler makes, not the question in the body, which is the decision the programmer makes.]
Inlining is not necessarily an all-or-nothing deal. One strategy which compilers use to decide whether to inline, is to keep inlining function calls until the resulting code is "too big". "Big" is defined by some hopefully sensible heuristic.
So consider the following recursive function (which deliberately is not simply tail-recursive):
int triangle(int n) {
    if (n == 1) return 1;
    return n + triangle(n-1);
}
If it's called like this:
int t100() {
    return triangle(100);
}
Then there's no particular reason in principle that the usual rules that the compiler uses for inlining shouldn't result in this:
int t100() {
    // inline call to triangle(100)
    int result;
    if (100 == 1) { result = 1; } else {
        // inline call to triangle(99)
        int t99;
        if (100 - 1 == 1) { t99 = 1; } else {
            // inline call to triangle(98)
            int t98;
            if (100 - 1 - 1 == 1) { t98 = 1; } else {
                // oops, "too big", no more inlining
                t98 = triangle(100 - 1 - 1 - 1) + 98;
            }
            t99 = t98 + 99;
        }
        result = t99 + 100;
    }
    return result;
}
Obviously the optimiser will have a field day with that, so it's much "smaller" than it looks:
int t100() {
    return triangle(97) + 297;
}
The code in triangle itself could be "unrolled" a few steps by a few levels of inlining, in exactly the same way, except that it doesn't have the benefits of constants:
int triangle(int n) {
    if (n == 1) return 1;
    if (n == 2) return 3;
    if (n == 3) return 6;
    return triangle(n-3) + 3*n - 3;
}
I doubt whether compilers actually do this, though; I don't think I've ever noticed it. [Edit: MSVC does if you tell it to, thanks peterchen.]
There's an obvious potential benefit in saving call overhead, but as against that people don't really expect recursive functions to get inlined, and there's no particular guarantee that the usual inlining heuristics will perform well with recursive functions (where there are two different places, the call site and the recursive call, that might be inlined, with different benefits in each case). Furthermore, it's difficult at compile time to estimate how deep the recursion will go, and the inline heuristics might like to take account of the call depth to make decisions. So it may be that the compiler just doesn't bother.
Functional language compilers are typically a lot more aggressive dealing with recursion than C or C++ compilers. The relevant trade-off there is that so many functions written in functional languages are recursive, that performance might be hopeless if the compiler couldn't optimise tail-recursion. So Lisp programmers typically rely on good optimisation of recursive functions, whereas C and C++ programmers typically don't.
If your compiler does not support it, you can try manually inlining instead...
int factorial(int n) {
    int result = 1;
    if (n-- == 0) {
        return result;
    } else {
        result *= 1;
        if (n-- == 0) {
            return result;
        } else {
            result *= 2;
            if (n-- == 0) {
                return result;
            } else {
                result *= 3;
                if (n-- == 0) {
                    return result;
                } else {
                    result *= 4;
                    if (n-- == 0) {
                        return result;
                    } else {
                        // ...
                    }
                }
            }
        }
    }
}
See the problem yet?
Tail recursion (a special case of recursion) can be inlined by smart compilers.
Now, hold on. A tail-recursive function could be unrolled and inlined pretty easily. Apparently there are compilers that do this, but I am not aware of specifics.
Of course. Any function can be inlined if it makes sense to do it:
int f(int i)
{
    if (i <= 0) return 1;
    else return i * f(i - 1);
}

int main()
{
    return f(10);
}
pseudo assembly (f is inlined in main):
main:
    mov r0, #10     ; pass 10 to f
f:
    cmp r0, #0      ; arg <= 0? ...
    bgt 1f          ; (take the recursive path when arg > 0)
    mov r0, #1      ; ... if so, return 1
    ret
1:
    mov r0, -(sp)   ; if not, save arg.
    dec r0          ; pass arg - 1 to f
    call f          ; just because it's inlined doesn't mean I can't call it.
    mul r0, (sp)+   ; compute the result
    ret             ; done.
;-)
When you call an ordinary function, you change the sequential execution order and jump (call or jmp) to the address where the function resides. Inlining means that you place the commands of the function at every place it is called, so there is no single place to jump to; other kinds of optimisations become possible too, like eliminating the pushing/popping of function parameters.
When you know that the recursive chain will normally not be very long, you could inline up to a predefined level (I don't know if any existing compiler is intelligent enough for this today).
Inlining a recursive function is much like unrolling a loop. You will end up with much duplicate code -- but in some cases it could be worthwhile:
The number of recursive calls (the length of the chain) is normally short (in cases where it gets longer than predefined, just fall back to normal recursion).
The overhead of the function calls is relatively big compared to the logic, so doing some "unrolling", for example five instances before ending up doing a recursive call again, would save about 80% of the call overhead.
Of course there is the tail-recursive special case, but this was mentioned by others.
Of course it can be declared inline. The inline keyword is just a hint to the compiler; in many cases the compiler simply ignores it, and depending on the compiler one of the following situations can apply.
Some compilers can turn tail recursion into plain loops, and thus inline them normally.
Non-tail recursion could be inlined up to a given depth, usually decided by the compiler.
I've never encountered a practical application for that, as the cost of call isn't high enough anymore to offset the increase in code size.
[edit] To clarify: even though I like to toy with these things, and often check what code my compiler generates for "funny stuff" just out of curiosity, I haven't encountered a use case where any such unrolling helped significantly. This doesn't mean they don't exist or couldn't be constructed.
The only place where it would help is precalculating low iteration counts at compile time. However, in my experience this immensely increases compile times for often negligible runtime performance benefits.
Note that Visual Studio 2008 (and earlier) gives you quite some control over this:
#pragma inline_recursion(on)
#pragma inline_depth(N)
__forceinline
Be careful with the latter, it can easily overload the compiler :)
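For example, the three switches might be combined like this (an MSVC-specific sketch; the depth of 8 is an arbitrary choice, and the function is the triangle example from earlier in the thread):

#pragma inline_recursion(on) // allow inline expansion of recursive calls
#pragma inline_depth(8)      // cap the expansion depth; 8 is an arbitrary pick

__forceinline int triangle(int n)
{
    return n <= 1 ? 1 : n + triangle(n - 1);
}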
Inline means that at each place where a call to a function marked inline is made, the compiler places a copy of the said function's code there. This avoids the function-calling mechanism and its usual argument stack pushing/popping, saving time in gazillion-calls-per-second situations. You see the consequences for static variables and stuff like that? All gone...
So, if you had an inlined recursive call, either your compiler is super smart and figures out whether the number of copies is deterministic, or it will say "cannot make it inline", because it wouldn't know when to stop.