In many languages you're allowed to declare a variable and use it before initializing it.
For example, in C++, you can write a snippet such as:
int x;
cout << x;
This would of course produce unpredictable results (well, unless you knew how your program laid out memory), but my question is: why do compilers allow this behavior?
Is there some application for, or efficiency gain from, allowing the use of uninitialized memory?
edit: It occurred to me that leaving initialization up to the user would minimize writes on memory media with limited lifespans (write cycles). Just a specific example under the aforementioned heading of 'performance'. Thanks.
My thoughts (and I've been wrong before, just ask my wife) are that it's simply a holdover from earlier incarnations of the language.
Early versions of C did not allow you to declare variables anywhere you wanted in a function; they had to be at the top (or maybe at the start of a block; it's hard to remember off the top of my head since I rarely do that nowadays).
In addition, you have the understandable desire to set a variable only when you know what it should be. There's no point in initialising a variable to something if the next thing you're going to do with it is simply overwrite that value (that's where the performance people are coming from here).
That's why it's necessary to allow uninitialised variables, though you still shouldn't use them before you initialise them, and good compilers will warn you about it.
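For example, with warnings enabled (-Wall on GCC or Clang), a compiler will typically flag the pattern from the question:

int fn() {
    int x;          // declared but never initialized
    return x + 1;   // warning: 'x' is used uninitialized
}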
In C++ (and later incarnations of C) where you can create your variable anywhere in a function, you really should create it and initialise it at the same time. But that wasn't possible early on. You had to use something like:
int fn(void) {
    int x, y;
    /* Do some stuff to set y */
    x = y + 2;
    /* Do some more stuff */
    return x;
}
Nowadays, I'd opt for:
int fn(void) {
    int y;
    /* Do some stuff to set y */
    int x = y + 2;
    /* Do some more stuff */
    return x;
}
The oldest excuse in programming: it improves performance!
edit: I read your comments and I agree - years ago the focus of performance was the number of CPU cycles. My first C compiler was traditional C (the one prior to ANSI C) and it allowed all sorts of abominations to compile. In these modern times, performance is about the number of customer complaints. As I tell the new graduates we hire: 'I don't care how quickly the program gives a wrong answer.' Use all the tools of modern compilers and development, write fewer bugs, and everyone can go home on time.
Some APIs are designed to return data via variables passed in, e.g.:
bool ok;
int x = convert_to_int(some_string, &ok);
The function may set the value of ok itself, so initializing it at the call site is a waste.
(I'm not advocating this style of API.)
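For what it's worth, a sketch of how such a function might be implemented (convert_to_int is the hypothetical API from the snippet above):

#include <cstdlib>
#include <string>

// 'ok' is written on every path, so initializing it at the call site
// really would be redundant.
int convert_to_int(const std::string& s, bool* ok) {
    char* end = nullptr;
    long value = std::strtol(s.c_str(), &end, 10);
    *ok = (end != s.c_str() && *end == '\0');
    return *ok ? static_cast<int>(value) : 0;
}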
The short answer is that for more complicated cases, the compiler may not be able to determine whether a variable is used before initialization or not.
e.g.

int x;
if (external_function() == 2) {
    x = 42;
} else if (another_function() == 3) {
    x = 49;
}
yet_another_function(&x);
cout << x; // Is this a use-before-initialization?
Good compilers will give a warning message if they can spot a probable use-before-initialize error, but for complex cases - especially involving multiple compilation units - there's no way for the compiler to tell.
As to whether a language should allow the concept of uninitialized variables, that is another matter. C# is slightly unusual here: fields get a default value, and local variables must be definitely assigned before use, so you can never read garbage. Most languages (C++/C/BCPL/FORTRAN/Assembler/...) leave it up to the programmer as to whether initialization is appropriate. Good compilers can sometimes spot unnecessary initializations and eliminate them, but this isn't a given. Compilers for more obscure hardware tend to have less effort put into optimization (which is the hard part of compiler writing), so languages targeting such hardware tend not to require unnecessary code generation.
Perhaps in some cases it is faster to leave the memory uninitialised until it is needed (for example, if you return from a function before using a variable). I typically initialise everything anyway; I doubt it makes any real difference in performance. The compiler will have its own way of optimising away useless initialisations, I'm sure.
Some languages have default values for some variable types. That being said, I doubt there are performance benefits in any language to not explicitly initializing them. However, the downsides of leaving variables uninitialized are:
The possibility that the variable must be initialized; if it isn't, you risk a crash
Unanticipated values
Lack of clarity and purpose for other programmers
My suggestion is always initialize your variables and the consistency will pay for itself.
Depending on the size of the variable, leaving the value uninitialized in the name of performance might be regarded as micro-optimization. Relatively few programs (when compared to the broad array of software types out there) would be negatively affected by the extra two-or-three cycles necessary to load a double; however, supposing the variable were quite large, delaying initialization until it is abundantly clear initialization is required is probably a good idea.
for loop style:

int i;
for (i = 0; i < something; ++i) {
    .......
}
// do something with i after the loop

and you would prefer the for loop to look like for (init; condition; inc)
here's one with an absolute necessity:

bool b;
do {
    ....
    b = g();
    ....
} while (!b);
horizontal screen real estate with long nested names
longer lived scoping for debugging visibility
very occasionally performance
It used to be a good practice to write code like this:
int var = 0, var1 = 0; //declare variables outside the loop so they aren't created every time we enter it
auto var2 = somefunc(/*some params*/);
for (int i = 0, size = vec.size(); i < size; ++i)
{
    //do some calculation, use var, var1, var2
}
But now all(?) modern compilers will optimize it for you, and if you write:
for (int i = 0; i < vec.size(); ++i)
{
    //do something
    int var = /*some usage*/; //declare where you need it for better readability
    //do something
    int var1 = somefunc(/*some params*/); //declare where you need it for better readability
    //do something
}
The compiler will optimize it to be the same as the first snippet (or even better).
So, should we always rely on the compiler to optimize variable allocation, etc, to write code that is easier to read for other programmers?
Disclaimer: this is not an opinion-based question. I don't have experience with many compilers, and I don't have experience in using compiler optimization options, so I expect people with experience to answer here, something like "I've worked with so many compilers, yes, nowadays they are all smart and we don't need to think about where to declare variables", or "oh, I've run into some compilers that didn't do those kind of optimizations, so we still need to think about it", or something about experience of usage of optimization options.
Variables don't get "created" like you think they do. In practice they're either positions on the stack, or registers reserved for a particular purpose. In most cases there's absolutely zero cost to scoping them inside the loop.
As always, look at the assembly output if you ever want to be sure.
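For example, you can compare the two styles directly with g++ -O2 -S or an online compiler explorer; use() here is a hypothetical stand-in for real work:

void use(int);   // stand-in for real work, defined elsewhere

void outside(int n) {        // variable declared outside the loop
    int var;
    for (int i = 0; i < n; ++i) {
        var = i * 2;
        use(var);
    }
}

void inside(int n) {         // variable scoped inside the loop
    for (int i = 0; i < n; ++i) {
        int var = i * 2;
        use(var);
    }
}

With optimizations on, both functions typically compile to identical loops.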
If you're used to languages like Python, Ruby, or JavaScript, where creating a variable is an operation with an actual cost, this might be why you're thinking this way; but in an optimized C++ build it's a whole different game, as it is with any language that goes through a compiler pass, even a JIT.
The hoisting of the call to somefunc out of the loop, if it happens at all, is going to be a fragile optimization.
But in general, write for readability, avoid premature pessimization, determine where performance bottlenecks are after writing it, and expend effort on measured improvements where the performance bottleneck is.
Code that is hard to read is hard to debug and hard to make correct and hard to improve. All of which you'll often spend more time on than writing the code in the first place.
Variables of trivial type, like int or double, do not exist outside of debug builds the way you might imagine. More complex types can have a more fundamental existence, in that where you create/destroy them can matter. But compilers continue to improve, so even there, worrying too much without knowing that the code is a bottleneck is a bad plan.
Compiler optimizations are performance safety nets. In other words, they aren't something you rely on. They are a fallback for programmers unaware of penalties in their code.
That being said, you shouldn't trust optimizations by default, because not all compilers share the same behavior - in the same sense that no two implementations of anything share the same behavior. If you work in embedded systems, especially on exotic hardware, you might get stuck with a bare-minimum C compiler that does very little to help you.
Also, just because something is optimized, doesn't necessarily mean you have to sacrifice readability.
By using the const qualifier, a variable is supposed to be made read-only. As an example, an int marked const cannot be assigned to:
const int number = 5; //fine to initialize
number = 3; //error, number is const
At first glance it looks like this makes it impossible to modify the contents of number. Unfortunately, it can actually be done, for example with const_cast (*const_cast<int*>(&number) = 3). This is undefined behavior, so there is no guarantee of what happens: the program could crash, but it could also simply modify the value and continue.
Is it possible to make it actually impossible to modify a variable?
A possible need for this might be security concerns. It might be of the highest importance that some very valuable data not be changed, or that a piece of data being sent not be modified.
No, this is not the concern of a programming language. Any "access" protection is only superficial and only exists at compile-time.
The memory of a computer can always be modified at runtime if you have the corresponding rights. Your OS might provide you with facilities to secure pages of memory, though, e.g. VirtualProtect() under Windows.
(Notice that an "attacker" could use the same facilities to restore the access if he has the privilege to do so)
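On POSIX systems the analogous facility is mprotect(). A minimal sketch, assuming Linux and omitting error handling:

#include <sys/mman.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main() {
    long page = sysconf(_SC_PAGESIZE);
    // Get one page of page-aligned, writable memory.
    void* buf = mmap(nullptr, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    std::strcpy(static_cast<char*>(buf), "precious data");
    // Revoke write access: any further write raises SIGSEGV.
    mprotect(buf, page, PROT_READ);
    std::printf("%s\n", static_cast<char*>(buf));  // reading still works
    // static_cast<char*>(buf)[0] = 'X';           // would crash the process
    munmap(buf, page);
}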
Also I assume that there might be hardware solutions for this.
There is also the option of encrypting the data in question. Yet it appears to be a chicken-and-egg situation as the private key for the encryption and decryption has to be stored somewhere in memory as well (with a software-only solution).
Most of the answers in this thread are correct, but they are about const, while the OP is asking for a way to have a constant value defined and used in the source code. My crystal ball says the OP is looking for symbolic constants (preprocessor #define statements).
#define NUMBER 3
//... some other code
std::cout << NUMBER;
This way, the developer is able to parametrize values and maintain them easily, while there's virtually no (easy) way to alter it once the program is compiled and launched.
Just keep in mind that const variables are visible to debuggers, while symbolic constants are not, though symbolic constants require no additional memory. Another criterion is type checking, which is absent for symbolic constants, as it is for macros in general.
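For completeness: modern C++ offers constexpr, which gives a typed, scoped compile-time constant without the preprocessor's drawbacks:

constexpr int number = 3;   // typed and scoped, unlike a macro
std::cout << number;        // usable just like NUMBER above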
const is not intended to make a variable read-only.
The meaning of const x is basically:
Hey compiler, please prevent me from casually writing code in this scope which changes x.
That's very different from:
Hey compiler, please prevent any changes to x in this scope.
Even if you don't write any const_casts yourself, the compiler will still not assume that const'ed entities won't change. Specifically, if you use the function
int foo(const int* x);
the compiler cannot assume that foo() doesn't change the memory pointed to by x.
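A small illustration of why (g and foo are hypothetical names; foo's body is in another translation unit):

int g = 0;               // a non-const object

int foo(const int* x);   // defined elsewhere

int bar() {
    int before = g;
    foo(&g);             // foo receives a const int*, but it may legally
                         // cast the const away and write through it,
                         // because the object g itself is not const
    return g - before;   // the compiler must re-read g; may be nonzero
}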
You could use your value without a variable
Variables vary... so, naturally, a way to prevent that is using values which aren't stored in variables. You can achieve that by using...
an enumeration with a single value: enum : int { number = 1 }.
the preprocessor: #define NUMBER 1 <- Not recommended
a function: inline int get_number() { return 1; }
You could use implementation/platform-specific features
As #SebastianHoffman suggests, typical platforms allow marking parts of a process' virtual memory space as read-only, so that attempts to change it result in an access-violation signal to the process and the suspension of its execution. This is not a solution within the language itself, but it is often useful. Example: when you use string literals, e.g.:
const char* my_str = "Hello world";
const_cast<char*>(my_str)[0] = 'Y';
Your process will likely fail, with a message such as:
Segmentation fault (core dumped)
If you know the value at compile time, you can place the data in read-only memory. Sure, someone could get around this, but security is about layers rather than absolutes; this makes it harder. C++ has no concept of this itself, so you'll have to inspect the resulting binary to see whether it has happened (this could be scripted as a post-build check).
If you don't have the value at compile-time, your program depends on being able to change / set it at runtime, so you fundamentally cannot stop that from happening.
Of course, you can make it harder through things like const, so the code is compiled assuming it won't change and programmers have a harder time accidentally changing it.
You may also find constexpr an interesting tool to explore here.
There is no way for the language to specify what code that does not adhere to the specification will do.
In your example, number is truly constant. You correctly note that modifying it after a const_cast would be undefined behavior. And indeed, it is impossible to modify it in a correct program.
I read this line in a book:
It is provably impossible to build a compiler that can actually determine whether or not a C++ function will change the value of a particular variable.
The paragraph was talking about why the compiler is conservative when checking for const-ness.
Why is it impossible to build such a compiler?
The compiler can always check whether a variable is reassigned, whether a non-const function is invoked on it, or whether it is passed in as a non-const parameter...
Why is it impossible to build such a compiler?
For the same reason that you can't write a program that will determine whether any given program will terminate. This is known as the halting problem, and it's one of those things that's not computable.
To be clear, you can write a compiler that can determine that a function does change the variable in some cases, but you can't write one that reliably tells you that the function will or won't change the variable (or halt) for every possible function.
Here's an easy example:
int a;
int bar();   // defined elsewhere

void foo() {
    if (bar() == 0) a = 1;
}
How can a compiler determine, just from looking at that code, whether foo will ever change a? Whether it does or doesn't depends on conditions external to the function, namely the implementation of bar. There's more than that to the proof that the halting problem isn't computable, but it's already nicely explained at the linked Wikipedia article (and in every computation theory textbook), so I'll not attempt to explain it correctly here.
Imagine such a compiler exists. Let's also assume that for convenience it provides a library function that returns 1 if the passed function modifies a given variable and 0 when it doesn't. Then what should this program print?
#include <stdio.h>

int variable = 0;

void f() {
    if (modifies_variable(f, variable)) {
        /* do nothing */
    } else {
        /* modify variable */
        variable = 1;
    }
}

int main(int argc, char **argv) {
    if (modifies_variable(f, variable)) {
        printf("Modifies variable\n");
    } else {
        printf("Does not modify variable\n");
    }
    return 0;
}
Don't confuse "will or will not modify a variable given these inputs" for "has an execution path which modifies a variable."
The former is called opaque predicate determination, and is trivially impossible to decide: aside from a reduction from the halting problem, you could just point out that the inputs might come from an unknown source (e.g. the user). This is true of all languages, not just C++.
The latter statement, however, can be determined by looking at the parse tree, which is something that all optimizing compilers do. The reason they do is that pure functions (and referentially transparent functions, for some definition of referentially transparent) have all sorts of nice optimizations that can be applied, like being easily inlinable or having their values determined at compile-time; but to know if a function is pure, we need to know if it can ever modify a variable.
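For instance, a compiler that can see the whole body of a function can classify it as pure and fold calls with constant arguments; a sketch of the kind of optimization meant here:

// Touches no non-local state and calls nothing opaque: a pure function.
int square(int n) { return n * n; }

int f() {
    return square(7);   // typically inlined and constant-folded to: return 49;
}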
So, what appears to be a surprising statement about C++ is actually a trivial statement about all languages.
I think the key word in "whether or not a C++ function will change the value of a particular variable" is "will". It is certainly possible to build a compiler that checks whether or not a C++ function is allowed to change the value of a particular variable; what you cannot say with certainty is whether the change is going to happen:
#include <iostream>
#include <string>
using namespace std;

void maybe(int& val) {
    cout << "Should I change value? [Y/N] >";
    string reply;
    cin >> reply;
    if (reply == "Y") {
        val = 42;
    }
}
I don't think it's necessary to invoke the halting problem to explain that you can't algorithmically know at compile time whether a given function will modify a certain variable or not.
Instead, it's sufficient to point out that a function's behavior often depends on run-time conditions, which the compiler can't know about in advance. E.g.
int y;

int main(int argc, char *argv[]) {
    if (argc > 2) y++;
}
How could the compiler predict with certainty whether y will be modified?
It can be done, and compilers are doing it all the time for some functions; it is, for instance, a trivial optimisation for simple inline accessors or for many pure functions.
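For example, an inline accessor like this one (a hypothetical class) is trivially analysable:

class Point {
    int x_ = 0;
public:
    int x() const { return x_; }   // no writes, no calls out:
                                   // provably modifies nothing
};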
What is impossible is to know it in the general case.
Whenever there is a system call, a call to a function coming from another module, or a call to a potentially overridden method, anything could happen, including a hostile takeover by some hacker exploiting a stack overflow to change an unrelated variable.
However, you should use const, avoid globals, prefer references to pointers, avoid reusing variables for unrelated tasks, etc.; all of that will make the compiler's life easier when it performs aggressive optimisations.
There are multiple avenues to explaining this, one of which is the Halting Problem:
In computability theory, the halting problem can be stated as follows: "Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever". This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.
If I write a program that looks like this:
do tons of complex stuff
if (condition on result of complex stuff)
{
change value of x
}
else
{
do not change value of x
}
Does the value of x change? To determine this, you would first have to determine whether the do tons of complex stuff part causes the condition to fire - or even more basic, whether it halts. That's something the compiler can't do.
I'm really surprised that there isn't an answer that uses the halting problem directly! There's a very straightforward reduction from this problem to the halting problem.
Imagine that the compiler could tell whether or not a function changed the value of a variable. Then it would certainly be able to tell whether the following function changes the value of y or not, assuming that the value of x can be tracked in all the calls throughout the rest of the program:
void foo(int x) {
    if (x)
        y = 1;
}
Now, for any program we like, let's rewrite it as:
int y;

int main() {
    int x;
    ...
    run the program normally
    ...
    foo(x);
}
Notice that our program changes the value of y if, and only if, it then terminates - foo() is the last thing it does before exiting. This means we've solved the halting problem!
What the above reduction shows us is that the problem of determining whether a variable's value changes is at least as hard as the halting problem. The halting problem is known to be incomputable, so this one must be also.
As soon as a function calls another function that the compiler doesn't "see" the source of, it either has to assume that the variable is changed, or things may well go wrong further down the line. For example, say we have this in "foo.cpp":
#include <fstream>
using std::ifstream;

void foo(int& x)
{
    ifstream f("f.dat", ifstream::binary);
    f.read((char *)&x, sizeof(x));
}
and we have this in "bar.cpp":
void foo(int& x);   // declared in a header, defined in foo.cpp

void bar(int& x)
{
    foo(x);
}
How can the compiler "know" that x is not changing (or IS changing, more appropriately) in bar?
I'm sure we can come up with something more complex, if this isn't complex enough.
It is in general impossible for the compiler to determine whether the variable will be changed, as has been pointed out.
When checking const-ness, the question of interest seems to be if the variable can be changed by a function. Even this is hard in languages that support pointers. You can't control what other code does with a pointer, it could even be read from an external source (though unlikely). In languages that restrict access to memory, these types of guarantees can be possible and allows for more aggressive optimization than C++ does.
To make the question more specific, I suggest that the following set of constraints may be what the author of the book had in mind:
Assume the compiler is examining the behavior of a specific function with respect to the const-ness of a variable. For correctness, a compiler would have to assume (because of aliasing, as explained below) that if the function calls another function, the variable may be changed; so assumption #1 only applies to code fragments that don't make function calls.
Assume the variable isn't modified by an asynchronous or concurrent activity.
Assume the compiler is only determining if the variable can be modified, not whether it will be modified. In other words the compiler is only performing static analysis.
Assume the compiler is only considering correctly functioning code (not considering array overruns/underruns, bad pointers, etc.)
In the context of compiler design, I think assumptions 1,3,4 make perfect sense in the view of a compiler writer in the context of code gen correctness and/or code optimization. Assumption 2 makes sense in the absence of the volatile keyword. And these assumptions also focus the question enough to make judging a proposed answer much more definitive :-)
Given those assumptions, a key reason why const-ness can't be assumed is due to variable aliasing. The compiler can't know whether another variable points to the const variable. Aliasing could be due to another function in the same compilation unit, in which case the compiler could look across functions and use a call tree to statically determine that aliasing could occur. But if the aliasing is due to a library or other foreign code, then the compiler has no way to know upon function entry whether variables are aliased.
You could argue that if a variable/argument is marked const then it shouldn't be subject to change via aliasing, but for a compiler writer that's pretty risky. It can even be risky for a human programmer to declare a variable const as part of, say a large project where he doesn't know the behavior of the whole system, or the OS, or a library, to really know a variable won't change.
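A minimal illustration of the aliasing hazard (hypothetical names; a compiler compiling use() alone cannot know whether p and c refer to the same int):

void external_write(int* p);   // opaque: defined in some library

int use(const int& c, int* p) {
    int before = c;
    external_write(p);   // if p happens to alias c, the value seen
                         // through the const reference just changed
    return c - before;   // the compiler must re-read c; may be nonzero
}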
Even if a variable is declared const, that doesn't mean some badly written code can't overwrite it:
// g++ -o foo foo.cc
#include <iostream>

void const_func(const int& a, int* b)
{
    // Undefined behavior: b points to a single int, so b[1] writes
    // past it - on this particular stack layout it clobbers a.
    b[0] = 2;
    b[1] = 2;
}

int main() {
    int a = 1;
    int b = 3;
    std::cout << a << std::endl;
    const_func(a, &b);
    std::cout << a << std::endl;
}
output:
1
2
To expand on my comments: that book's text is unclear, which obfuscates the issue.
As I commented, that book is trying to say, "let's get an infinite number of monkeys to write every conceivable C++ function which could ever be written. There will be cases where if we pick a variable that (some particular function the monkeys wrote) uses, we can't work out whether the function will change that variable."
Of course for some (even many) functions in any given application, this can be determined by the compiler, and very easily. But not for all (or necessarily most).
This function can be easily so analysed:
static int global;
void foo()
{
}
"foo" clearly does not modify "global". It doesn't modify anything at all, and a compiler can work this out very easily.
This function cannot be so analysed:
#include <cstdlib>

static int global;

int foo()
{
    if ((rand() % 100) > 50)
    {
        global = 1;
    }
    return 1;
}
Since "foo"'s actions depends on a value which can change at runtime, it patently cannot be determined at compile time whether it will modify "global".
This whole concept is far simpler to understand than computer scientists make it out to be. If the function can do something different based on things that can change at runtime, then you can't work out what it'll do until it runs, and each time it runs it may do something different. Whether or not it's provably impossible, it's obviously impossible.
I often use const for local variables that are not being modified, like this:
const float height = person.getHeight();
I think it can make the compiled code potentially faster, allowing the compiler to do some more optimization. Or am I wrong, and compilers can figure out by themselves that the local variable is never modified?
Or am I wrong, and compilers can figure out by themselves that the local variable is never modified?
Most compilers are smart enough to figure this out by themselves.
You should rather use const for ensuring const-correctness, not for micro-optimization.
Const-correctness lets the compiler help guard you against making honest mistakes, so use const wherever possible, but for maintainability reasons and to prevent yourself from making silly mistakes - not for performance.
It is good to understand the performance implications of the code we write, but excessive micro-optimization should be avoided. With regard to performance, one should follow the
80-20 Rule:
Identify the 20% of your code which uses 80% of your resources through profiling on representative data sets, and only then attempt to optimize those bottlenecks.
This performance difference will almost certainly be negligible; however, you should be using const whenever possible for code-documentation reasons. Oftentimes, compilers can figure this out for you anyway and make the optimizations automatically. const is really more about code readability and clarity than performance.
If there is a value type on the left-hand side, you may safely assume that const will have a negligible effect, or none at all. It's not going to influence overload resolution, and what is actually const can easily be deduced from the scope.
It's an entirely different matter with reference types:
std::vector<int> v(1);
const auto& a = v[0];
auto& b = v[0];
These two declarations can resolve to two entirely different operators, and similar overload pairs are found in many libraries besides the STL. Even in this simple example, optimizations which depend on v having been immutable for the scope of b are no longer trivial and are less likely to be found.
The STL is still quite tame in these terms, though, in that at least the behavior doesn't change based on whether the const_reference overload is chosen or not. For most of the STL, the const_reference overload is tied only to the object itself being const.
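Concretely, for std::vector the overload picked depends on the constness of the container, not on the reference you bind the result to; a small illustration:

#include <vector>

std::vector<int> v(1);
const std::vector<int>& cv = v;

auto& r  = v[0];    // int&: non-const operator[], because v is non-const
auto& cr = cv[0];   // const int&: const operator[], because cv is const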
Some other libraries (e.g. Qt) make heavy use of copy-on-write semantics. In these const-correctness with references is no longer optional, but necessary:
QVector<int> v1(1);
auto v2 = v1; // Backing storage of v2 and v1 is still linked
const auto& a = v1[0]; // Still linked
const auto& b = v2[0]; // Still linked
auto& c = v2[0]; // Deep copy from v1 to v2 is happening now :(
// Even worse, &b != &c
Copy-on-write semantics are something commonly found in large-matrix or image manipulation libraries, and something to watch out for.
It's also something where the compiler is no longer able to save you: the overload resolution is mandated by the C++ standard, and there is no leeway for eliminating the costly side effects.
I don't think it is a good practice to make local variables, including function parameters, constant by default.
The main reason is brevity. Good coding practices help you keep your code short; this one doesn't.
Quite similarly, you can write void foo(void) in your function declarations, and you can justify it by increased clarity, being explicit about not intending to pass a parameter to the function, etc., but it is essentially a waste of space, and it eventually almost fell out of use. I think the same thing will happen to the trend of using const everywhere.
Marking local variables with a const qualifier is not very useful for most of the collaborators working with the code you create. Unlike class members, global variables, or the data pointed to by a pointer, a local variable doesn't have any external effects, and no one would ever be restricted by the qualifier of a local variable or learn anything useful from it (unless they are going to change the particular function where the local variable lives).
If someone needs to change your function, the task should not normally require them to deduce valuable information from the const qualifier of a variable there. The function should not be so large or difficult to understand; if it is, you probably have more serious problems with your design, so consider refactoring. One exception is functions that implement some hardcore math calculations, but for those you would need to put some details or a link to the paper in your comments.
You might ask why not still put the const qualifier if it doesn't cost you much effort. Unfortunately, it does. If I had to put all const qualifiers, I would most likely have to go through my function after I am done and put the qualifiers in place - a waste of time. The reason for that is that you don't have to plan the use of local variables carefully, unlike with the members or the data pointed to by pointers.
Local variables are mostly a convenience tool; technically, most of them can be avoided by either factoring them into expressions or reusing variables. So, since they are a convenience tool, the very existence of a particular local variable is merely a matter of taste.
Particularly, I can write:
int d = foo(b) + c;
const int a = foo(b); int d = a + c;
int a = foo(b); a += c;
Each of the variants is identical in every respect, except that the variable a is either constant or not, or doesn't exist at all. It is hard to commit to some of the choices early.
There is one major problem with local constant values - as shown in the code below:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

// const uint32_t const_dummy = 0;

void func1(const uint32_t *ptr);

int main(void) {
    const uint32_t const_dummy = 0;
    func1(&const_dummy);
    printf("0x%x\n", const_dummy);
    return EXIT_SUCCESS;
}

void func1(const uint32_t *ptr) {
    uint32_t *tmp = (uint32_t *)ptr;
    *tmp = 1;  // undefined behavior: writing to a const object
}
This code was compiled on Ubuntu 18.04.
As you can see, const_dummy's value can be modified in this case!
But if you modify the code and give const_dummy global scope - by commenting out the local definition and removing the comment from the global definition - you will get an exception and your program will crash. Which is good, because you can debug it and find the problem.
What is the reason? Well, global const values are located in the read-only section of the program, and the OS protects this area using the MMU.
It is not possible to do this with constants defined on the stack.
On systems that don't use an MMU, you will not even "feel" that there is a problem.
I've seen statements like this
if (SomeBoolReturningFunc())
{
    //do some stuff
    //do some more stuff
}
and am wondering if putting a function in an if statement is efficient, or if there are cases when it would be better to leave them separate, like this
bool AwesomeResult = SomeBoolReturningFunc();
if (AwesomeResult)
{
    //do some other, more important stuff
}
...?
I'm not sure what makes you think that assigning the result of the expression to a variable first would be more efficient than evaluating the expression itself, but it's never going to matter, so choose the option that enhances the readability of your code. If you really want to know, look at the output of your compiler and see if there is any difference. On the vast majority of systems out there this will likely result in identical machine code.
Either way, it should not matter. The basic idea is that the result will be stored in a temporary no matter what, whether you name it or not. Readability is more important nowadays, because computers are normally so fast that small tweaks like this don't matter much.
I've certainly seen if (f()) {blah;} produce more efficient code than bool r = f(); if (r) {blah;}, but that was many years ago on a 68000.
These days, I'd definitely pick the code that's simpler to debug. Your inefficiencies are far more likely to be your algorithm vs. the code your compiler generates.
As others have said, it basically makes no real difference in performance. Generally in C++, most performance gains at a pure code level (as opposed to the algorithm) are made by loop unrolling.
Also, avoiding branching altogether can give way more performance if the condition is in a loop, for instance by having separate loops for each conditional case, or by having statements that inherently take the condition into account (perhaps by multiplying a term by 0 if it's not wanted); you can then get more by unrolling that loop.
Templating your code can help a lot with this in a "semi" clean way.
There is also the possibility of
if (bool AwesomeResult = SomeBoolReturningFunc()) { ... }
:)
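Since C++17 there is also the init-statement form, which keeps the tight scope while letting the condition differ from the declaration:

if (auto result = SomeBoolReturningFunc(); result) {
    // result is scoped to this if (and its else) only
}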
IMO, the kind of statements you have seen, with the call inside the if, are clearer to read and less error prone than the named-variable form:
bool result = Foo();
if (result) {
    //some stuff
}

bool other_result = Bar();
if (result) { //should be other_result? Hopefully caught by an "unused variable" warning...
    //more stuff
}
Both variants usually produce the same machine code and run exactly the same way. Very rarely will there be a performance difference, and even in those cases it is unlikely to be a bottleneck (which translates into: don't bother prior to profiling).
The significant difference is in debugging and readability. With a temporary variable it's easier to debug. Without the variable the code is shorter and perhaps more readable.
If you want code that is both easy to debug and easy to read, you had better declare the variable as const:
const bool AwesomeResult = SomeBoolReturningFunc();
if (AwesomeResult)
{
    //do some other, more important stuff
}
This way it's clearer that the variable is never assigned to again and that there's no other logic behind its declaration.
Putting aside any debugging ease or readability problems, and as long as the function's returned value is not used again in the if-block, it seems to me that assigning the returned value to a variable only causes an extra use of the = operator and an extra bool stored in stack space. I might speculate further that an extra variable in the stack space will cause latency in further stack accesses (though I'm not sure).
The thing is, these are really minor issues, and as long as the compiler has an optimization flag on, they should cause no inefficiency. A different case I'd consider would be an embedded system - then again, how much damage could a single 8-bit variable cause? (I have absolutely no knowledge concerning embedded systems, so maybe someone else could elaborate on this?)
The AwesomeResult version can be faster if SomeBoolReturningFunc() is fairly slow and you are able to use AwesomeResult more than once rather than calling SomeBoolReturningFunc() again.
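A sketch of that reuse (ExpensiveCheck is a hypothetical stand-in for a slow predicate):

const bool ok = ExpensiveCheck();   // call the slow function once
if (ok) {
    // ...first use...
}
// ...later in the same scope...
if (ok) {
    // reuse the cached result instead of calling ExpensiveCheck() again
}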
Placing the function call inside or outside the if statement doesn't matter; there is no performance gain or loss either way. The compiler will automatically create a place on the stack for the return value, whether or not you've explicitly named a variable.
In the performance tuning I've done, this would only be thought about in the final stage of cycle-shaving after cleaning out a series of significant performance issues.
I seem to recall that one statement per line was the recommendation of the book Code Complete, where it was argued that such code is easier to understand. Make each statement do one and only one thing so that it is very easy to see very quickly at a glance what is happening in the code.
I personally like having the return types in variables to make them easier to inspect (or even change) in the debugger.
One answer stated that a difference was observed in generated code. I sincerely doubt that with an optimizing compiler.
A disadvantage, prior to C++11, of declaring the variable on its own line is that you have to know the return type in order to write the declaration. For example, if the return type is changed from bool to int then, depending on the types involved, you could end up with a truncated value in the local variable (which could make the if malfunction). If compiling with C++11 enabled, this can be dealt with by using the auto keyword, as in:
auto AwesomeResult = SomeBoolReturningFunc();
if (AwesomeResult)
{
    //do some other, more important stuff
}
Stroustrup's C++ 4th edition recommends putting the variable declaration in the if statement itself, as in:
if (auto AwesomeResult = SomeBoolReturningFunc())
{
    //do some other, more important stuff
}
His argument is that this limits the scope of the variable to the greatest extent possible. Whether or not that is more readable (or debuggable) is a judgement call.