Do references always create an implicit pointer? - c++

If I create a reference to a variable inside the scope of a function like this:
{
    int x = 5;
    int & ref = x;
}
Will it always create an implicit pointer? Creating a pointer is needed when the reference is a function parameter, but in this case it is the same as using x directly.

Not necessarily. How your compiler implements references is down to it, so long as it follows the C++ standard.
Remember that the compiler operates under the as-if rule. You program the intended behaviour; the compiler generates the code. A good compiler will omit your code snippet entirely, since it has no observable effect.
See What exactly is the "as-if" rule?

That's an unspecified implementation detail. (Function parameters might be passed in registers, which would mean no pointer either.)
But in this (automatic) scope, ref is just an alias for x, so no pointer is needed for the compiler.
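Here is a minimal sketch (the function names are hypothetical) that you can feed to a compiler to check this yourself; with optimizations enabled, both functions typically compile to identical machine code:
int direct() {
    int x = 5;
    return x + 1;
}

int via_ref() {
    int x = 5;
    int& ref = x; // ref needs no storage of its own here
    return ref + 1;
}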

Others have already given the formally correct answer; I am trying a more practical perspective. In principle, yes, the compiler will "always" create an implicit pointer, because a pointer is frequently the only way a reference can be implemented.
However, the compiler employs many optimization strategies and hence, frequently, the implicit pointer can and will be optimized away.
Some examples:
In your example above, since the variables are never used, everything, even the variable x, will be optimized away.
If you pass the reference to a function that cannot be inlined, the reference most likely will be kept. If the function can be inlined, the references probably can be optimized away as well.
void swap(int &a, int &b) {
    int c = a; a = b; b = c;
}
The above function is typically equivalent to using pointers. If you ask your compiler to produce the assembly code, it will produce the same code except for some minor differences. In many cases the function can be inlined, which means your call to swap is replaced by what the function does. As a consequence, the references will probably be optimized away (the same would be the case had you been using pointers).
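For comparison, here is a pointer version of the same swap; this sketch is added for illustration and is not part of the original answer. Compilers typically emit the same assembly for both:
void swap_ptr(int *a, int *b) {
    int c = *a; *a = *b; *b = c;
}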
If your question goes deeper and is whether there is a difference between using pointers and references: they are equally expensive. A reference cannot magically remove the need for a pointer. On the other hand, even though they cost the same, references are not redundant from a code readability point of view.
In the end, as the others have explained, use whatever makes your program more readable and don't worry about the difference.
Edit: removed vector<int&> sample - thanks idclev 463035818

Related

Compiler deduction of rvalue-references for variables going out of scope

Why won't the compiler automatically deduce that a variable is about to go out of scope, and therefore let it be considered an rvalue-reference?
Take for example this code:
#include <string>
int foo(std::string && bob);
int foo(const std::string & bob);
int main()
{
    std::string bob(" ");
    return foo(bob);
}
Inspecting the assembly code clearly shows that the const & version of "foo" is called at the end of the function.
Compiler Explorer link here: https://godbolt.org/g/mVi9y6
Edit: To clarify, I'm not looking for suggestions for alternative ways to move the variable. Nor am I trying to understand why the compiler chooses the const& version of foo. Those are things that I understand fine.
I'm interested in knowing of a counter example where the compiler converting the last usage of a variable before it goes out of scope into an rvalue-reference would introduce a serious bug into the resulting code. I'm unable to think of code that breaks if a compiler implements this "optimization".
If there's no code that breaks when the compiler automatically makes the last usage of a variable about to go out of scope an rvalue-reference, then why wouldn't compilers implement that as an optimization?
My assumption is that there is some code that would break were compilers to implement that "optimization", and I'd like to know what that code looks like.
The code that I detail above is an example of code that I believe would benefit from an optimization like this.
The order of evaluation of function arguments, as in operator+(foo(bob), foo(bob)), is unspecified. As such, code such as
return foo(bob) + foo(std::move(bob));
is dangerous, because the compiler you're using may evaluate the right-hand side of the + operator first. That could leave the string bob moved from, in a valid but indeterminate state, and foo(bob) would then be called with the resulting, modified string.
On another implementation, the non-move version might be evaluated first, and the code would behave the way a non-expert would expect.
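To make the hazard concrete, here is a hedged sketch (the foo bodies are hypothetical stand-ins, not from the question); the value bar() returns depends on which operand of + is evaluated first:
#include <string>
#include <utility>

int foo(std::string&& s) {            // may steal the buffer
    std::string stolen = std::move(s);
    return static_cast<int>(stolen.size());
}

int foo(const std::string& s) {       // only reads
    return static_cast<int>(s.size());
}

int bar() {
    std::string bob("bob");
    // If the right operand runs first, the left call sees a moved-from
    // (valid but indeterminate) bob.
    return foo(bob) + foo(std::move(bob));
}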
If we assume that some future version of the C++ standard implements an optimization that allows the compiler to treat the last usage of a variable as an rvalue reference, then
return foo(bob) + foo(bob);
would work with no surprises (assuming appropriate implementations of foo, anyway).
Such a compiler, no matter what order of evaluation it uses for function arguments, would always evaluate the second (and thus last) usage of bob in this context as an rvalue reference, whether that was the left-hand side or the right-hand side of the operator+.
Here's a piece of perfectly valid existing code that would be broken by your change:
// launch a thread that does the calculation, moving v to the thread, and
// returns a future for the result
std::future<Foo> run_some_async_calculation_on_vector(std::pmr::vector<int> v);
std::future<Foo> run_some_async_calculation() {
    char buffer[2000];
    std::pmr::monotonic_buffer_resource rsrc(buffer, 2000);
    std::pmr::vector<int> vec(&rsrc);
    // fill vec
    return run_some_async_calculation_on_vector(vec);
}
Move constructing a container always propagates its allocator, but copy constructing one doesn't have to, and polymorphic_allocator is an allocator that doesn't propagate on container copy construction. Instead, it always reverts to the default memory resource.
This code is safe with copying because run_some_async_calculation_on_vector receives a copy allocated from the default memory resource (which hopefully persists throughout the thread's lifetime), but is completely broken by a move, because then it would have kept rsrc as the memory resource, which will disappear once run_some_async_calculation returns.
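Here is a hedged, self-contained sketch of that allocator behavior (C++17 <memory_resource>; the variable names are illustrative):
#include <memory_resource>
#include <utility>
#include <vector>

int main() {
    char buf[256];
    std::pmr::monotonic_buffer_resource rsrc(buf, sizeof buf);
    std::pmr::vector<int> a({1, 2, 3}, &rsrc);

    std::pmr::vector<int> copied = a;            // copy: the allocator does not
                                                 // propagate; falls back to the
                                                 // default memory resource
    std::pmr::vector<int> moved = std::move(a);  // move: keeps &rsrc
}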
The answer to your question is that the standard says it's not allowed. The compiler can only do that optimization under the as-if rule, and std::string has a large constructor, so the compiler isn't going to do the verification it would need to.
To build on this point a bit: all that it takes to write code that "breaks" under this optimization is to have the two different versions of foo print different things. That's it. The compiler produces a program that prints something different than the standard says that it should. That's a compiler bug. Note that RVO does not fall under this category because it is specifically addressed by the standard.
It might make more sense to ask why the standard doesn't say so, e.g. why not extend the rule governing returning a local at the end of a function, where it is implicitly treated as an rvalue. The answer is most likely that it rapidly becomes complicated to define correct behavior. What do you do if the last line were return foo(bob) + foo(bob)? And so on.
Because the fact that it will go out of scope does not make it not-an-lvalue while it is in scope. So the (very reasonable) assumption is that the programmer wants the second version of foo() for it. And the standard mandates this behavior, AFAIK.
So just write:
int main()
{
    std::string bob(" ");
    return foo(std::move(bob));
}
... however, it's possible that the compiler will be able to optimize the code further if it can inline foo(), to get about the same effect as your rvalue-reference version. Maybe.
Why won't the compiler automatically deduce that a variable is about to go out of scope, and can therefore be considered an rvalue-reference?
At the time the function gets called, the variable is still in scope. If the compiler changes the logic of which function is a better fit based on what happens after the function call, it will be violating the standard.

why not allow common subexpression elimination on const nonvolatile member functions?

One of the goals of C++ is to allow user-defined types to behave as nicely as built-in types. One place where this seems to fail is in compiler optimization. If we assume that a const nonvolatile member function is the moral equivalent of a read (for a user-defined type), then why not allow a compiler to eliminate repeated calls to such a function? For example
class C {
    ...
public:
    int get() const;
};
int main() {
    C c;
    int x{c.get()};
    x = c.get(); // why not allow the compiler to eliminate this call
}
The argument for allowing this is the same as the argument for copy elision: while it changes the operational semantics, it should work for code that follows good semantic practice, and provides substantial improvement in efficiency/modularity. (In this example it is obviously silly, but it becomes quite valuable in, say, eliminating redundant iterative safety checks when functions are inlined.)
Of course it wouldn't make sense to allow this for functions that return non-const references, only for functions that return values or const references.
My question is whether there is a fundamental technical argument against this that doesn't equally apply to copy elision.
Note: just to be clear, I am not suggesting the compiler look inside of the definition of get(). I'm saying that the declaration of get() by itself should allow the compiler to elide the extra call. I'm not claiming that it preserves the as-if rule; I'm claiming that, just as in copy elision, this is a case where we want to allow the compiler to violate the as-if rule. If you are writing code where you want a side effect to be semantically visible, and don't want redundant calls to be eliminated, you shouldn't declare your method as const.
New answer based on clarification of the question
C::get would need a stronger annotation than const. As it stands today, const is a promise that the method doesn't (conceptually) modify the object. It makes no guarantees about interaction with global state or side effects.
Thus if the new version of the C++ standard carved out another exception to the as-if rule, as it did for copy elision, based solely on the fact that a method is marked const, it would break a lot of existing code. The standards committee seems to try pretty hard not to break existing code.
(Copy elision probably broke some code, too, but I think it's actually a pretty narrow exception compared to what you're proposing.)
You might argue that we should re-specify what const means on a method declaration, giving it this stronger meaning. That would mean you could no longer have a C::print method that's const, so it seems this approach would also break a lot of existing code.
So we would have to invent a new annotation, say pure_function. To get that into the standard, you'd have to propose it and probably convince at least one compiler maker to implement it as an extension to illustrate that it's feasible and useful.
I suspect that the incremental utility is pretty low. If your C::get were trivial (no interaction with global state and no observable side effects), then you may as well define it in the class definition, thus making it available for inlining. I believe inlining would allow the compiler to generate code as optimal as a pure_function tag on the declaration (and maybe even more so), so I wouldn't expect the incremental benefit of a pure_function tag to be significant enough to convince the standards committee, compiler makers, and language users to adopt it.
Original answer
C::get could depend on global state and it might have observable side effects, either of which would make it a mistake to elide the second call. It would violate the as-if rule.
The question is whether the compiler knows this at the time it's optimizing at the call site. As your example is written, only the declaration of C::get is in scope. The definition is elsewhere, presumably in another compilation unit. Thus the compiler must assume the worst when it compiles and optimizes the calling code.
Now if the definition of C::get were both trivial and in view, then I suppose it's theoretically possible for the compiler to realize there are no side effects or non-deterministic behavior, but I doubt most optimizers get that aggressive. Unless C::get were inlined, I imagine there would be an exponential growth in the paths to analyze.
And if you want to skip the entire assignment statement (as opposed to just the second call of C::get), then the compiler would also have to examine the assignment operator for side effects and reliance on global state in order to ensure the optimization wouldn't violate the as-if rule.
First of all, const-ness of methods (or of references) is totally irrelevant to the optimizer, because constness can legally be cast away (using const_cast) and because, in the case of references, there could be aliasing. Const correctness was designed to help programmers, not the optimizer (whether it actually helps is a separate, unrelated discussion).
Moreover, to elide a call to a function the optimizer would also need to be sure that the result doesn't depend on and doesn't influence global state.
Compilers sometimes have a way to declare that a function is "pure", i.e. that the result depends only on the arguments and doesn't influence global state (like sin(x)), but how you declare this is implementation-dependent because the C++ standard doesn't cover this semantic concept.
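For example, GCC and Clang accept the non-standard [[gnu::pure]] attribute (the C++11 spelling of __attribute__((pure))); a hedged sketch with a hypothetical get_value:
[[gnu::pure]] int get_value(const int* p); // result depends only on the arguments
                                           // and readable global state; no side effects

int twice(const int* p) {
    return get_value(p) + get_value(p); // the compiler may emit a single call
}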
Note also that the word const in a const reference describes a property of the reference, not of the referenced object. Nothing is known about the const-ness of the object a const reference refers to, and the object can indeed change, or even go out of existence, while you still hold the reference. A const reference simply means that you cannot change the object through that reference, not that the object is constant or that it will stay constant for a while.
For a description of why a const reference and a value are two very different semantic concepts and of the subtle bugs you can meet if you confuse them see this more detailed answer.
The first answer to your question from Adrian McCarthy was just about as clear as possible:
The const-ness of a member function is a promise that no modification of externally visible state will be made (barring mutable variables in an object instance, for example).
You would expect a const member function which just reported the internal state of an object to always return the same answer. However, it could also interact with the ever changing real world and return a different answer every time.
What if it is a function to return the current time?
Let us put this into an example.
This is a class which converts a timestamp (double) into a human readable string.
class time_str {
    // time and its format
    double time;
    string time_format;
public:
    void set_format(const string& time_format);
    void set_time(double time);
    string get_time() const;
    string get_current_time() const;
};
And it is used (clumsily) like so:
time_str a;
a.set_format("hh:mm:ss");
a.set_time(89.432);
cout << a.get_time() << endl;
So far so good. Each invocation to a.get_time(); will return the same result.
However, at some point, we decide to introduce a convenience function which returns the current time in the same format:
cout << a.get_time() << " is different from " << a.get_current_time() << endl;
It is const because it doesn't change the state of the object in any way (though it does access the time format). However, each call to get_current_time() must obviously return a different answer.

Does const-correctness give the compiler more room for optimization?

I know that it improves readability and makes the program less error-prone, but how much does it improve the performance?
And on a side note, what's the major difference between a reference and a const pointer? I would assume they're stored in the memory differently, but how so?
[Edit: OK so this question is more subtle than I thought at first.]
Declaring a pointer-to-const or reference-of-const never helps any compiler to optimize anything. (Although see the Update at the bottom of this answer.)
The const declaration only indicates how an identifier will be used within the scope of its declaration; it does not say that the underlying object can not change.
Example:
int foo(const int *p) {
    int x = *p;
    bar(x);
    x = *p;
    return x;
}
The compiler cannot assume that *p is not modified by the call to bar(), because p could be (e.g.) a pointer to a global int and bar() might modify it.
If the compiler knows enough about the caller of foo() and the contents of bar() that it can prove bar() does not modify *p, then it can also perform that proof without the const declaration.
But this is true in general. Because const only has an effect within the scope of the declaration, the compiler can already see how you are treating the pointer or reference within that scope; it already knows that you are not modifying the underlying object.
So in short, all const does in this context is prevent you from making mistakes. It does not tell the compiler anything it does not already know, and therefore it is irrelevant for optimization.
What about functions that call foo()? Like:
int x = 37;
foo(&x);
printf("%d\n", x);
Can the compiler prove that this prints 37, since foo() takes a const int *?
No. Even though foo() takes a pointer-to-const, it might cast the const-ness away and modify the int. (This is not undefined behavior.) Here again, the compiler cannot make any assumptions in general; and if it knows enough about foo() to make such an optimization, it will know that even without the const.
The only time const might allow optimizations is cases like this:
const int x = 37;
foo(&x);
printf("%d\n", x);
Here, to modify x through any mechanism whatsoever (e.g., by taking a pointer to it and casting away the const) is to invoke Undefined Behavior. So the compiler is free to assume you do not do that, and it can propagate the constant 37 into the printf(). This sort of optimization is legal for any object you declare const. (In practice, a local variable to which you never take a reference will not benefit, because the compiler can already see whether you modify it within its scope.)
To answer your "side note" question, (a) a const pointer is a pointer; and (b) a const pointer can equal NULL. You are correct that the internal representation (i.e. an address) is most likely the same.
[update]
As Christoph points out in the comments, my answer is incomplete because it does not mention restrict.
Section 6.7.3.1 (4) of the C99 standard says:
During each execution of B, let L be any lvalue that has &L based on P. If L is used to access the value of the object X that it designates, and X is also modified (by any means), then the following requirements apply: T shall not be const-qualified. ...
(Here B is a basic block over which P, a restrict-pointer-to-T, is in scope.)
So if a C function foo() is declared like this:
foo(const int * restrict p)
...then the compiler may assume that no modifications to *p occur during the lifetime of p -- i.e., during the execution of foo() -- because otherwise the Behavior would be Undefined.
So in principle, combining restrict with a pointer-to-const could enable both of the optimizations that are dismissed above. Do any compilers actually implement such an optimization, I wonder? (GCC 4.5.2, at least, does not.)
Note that restrict only exists in C, not C++ (not even C++0x), except as a compiler-specific extension.
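For completeness, a hedged C++ sketch using the non-standard __restrict extension that GCC, Clang, and MSVC accept; bar() is a hypothetical opaque function:
void bar(); // the compiler cannot see its body

int read_twice(const int* __restrict p) {
    int a = *p;
    bar();          // modifying *p by any means here would be undefined behavior
    return a + *p;  // so the compiler may reuse the first load
}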
Off the top of my head, I can think of two cases where proper const-qualification allows additional optimizations (in cases where whole-program analysis is unavailable):
const int foo = 42;
bar(&foo);
printf("%i", foo);
Here, the compiler knows to print 42 without having to examine the body of bar() (which might not be visible in the current translation unit) because all modifications to foo are illegal (this is the same as Nemo's example).
However, this is also possible without marking foo as const by declaring bar() as
extern void bar(const int *restrict p);
In many cases, the programmer actually wants restrict-qualified pointers-to-const and not plain pointers-to-const as function parameters, as only the former make any guarantees about the mutability of the pointed-to objects.
As to the second part of your question: For all practical purposes, a C++ reference can be thought of as a constant pointer (not a pointer to a constant value!) with automatic indirection - it is not any 'safer' or 'faster' than a pointer, just more convenient.
There are two issues with const in C++ (as far as optimization is concerned):
const_cast
mutable
const_cast means that even though you pass an object by const reference or const pointer, the function might cast the const-ness away and modify the object (allowed only if the object is not const to begin with).
mutable means that even though an object is const, some of its parts may be modified (caching behavior). Also, objects pointed to (rather than owned) can be modified in const methods, even when they logically are part of the object's state. And finally, global variables can be modified too...
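A hedged illustration (hypothetical class) of the mutable point: a const method may legally write to mutable members, e.g. to cache a result, so the optimizer cannot treat two const calls as interchangeable:
class Widget {
public:
    int value() const {
        if (!cached_) {          // first call computes and stores
            value_ = compute();  // legal: these members are mutable
            cached_ = true;
        }
        return value_;
    }
private:
    int compute() const { return 42; } // stand-in for real work
    mutable bool cached_ = false;
    mutable int value_ = 0;
};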
const is here to help the developer catch logical mistakes early.
const-correctness generally doesn't help performance; most compilers don't even bother to track constness beyond the frontend. Marking variables as const can help, depending on the situation.
References and pointers are stored exactly the same way in memory.
This really depends on the compiler/platform (it may help optimisation on some compiler that has not yet been written, or on some platform that you never use). The C and C++ standards say nothing about performance other than giving complexity requirements for some functions.
Pointers and references to const usually do not help optimisation, as the const qualification can legally be cast away in some situations, and it is possible that the object can be modified by a different non-const reference. On the other hand, declaring an object to be const can be helpful, as it guarantees that the object cannot be modified (even when passed to functions that the compiler does not know the definition of). This allows the compiler to store the const object in read-only memory, or cache its value in a centralised place, reducing the need for copies and checks for modifications.
Pointers and references are usually implemented in exactly the same way, but again this is totally platform-dependent. If you are really interested, you should look at the machine code generated for your program by your platform and compiler (if indeed you are using a compiler that generates machine code).
One thing is, if you declare a global variable const, it may be possible to put it in the read-only portion of a library or executable and thus share it among multiple processes with a read-only mmap. This can be a big memory win on Linux at least if you have a lot of data declared in global variables.
Another situation: if you declare a constant global integer, float, or enum, the compiler may be able to put the constant inline rather than using a variable reference. That's a bit faster, I believe, though I'm not a compiler expert.
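A hedged sketch of that constant-inlining point (kBufferSize is a hypothetical name); a const integral at namespace scope is a compile-time constant, so uses of it are typically folded into the instruction stream as an immediate:
const int kBufferSize = 4096;

int scaled(int n) { return n * kBufferSize; } // typically compiles to n * 4096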
References are just pointers underneath, implementation-wise.
It can help performance a little bit, but only if you are accessing the object directly through its declaration. Reference parameters and such cannot be optimized, since there might be other paths to an object not originally declared const, and the compiler generally can't tell if the object you are referencing was actually declared const or not unless that's the declaration you are using.
If you are using a const declaration, the compiler will know that externally-compiled function bodies, etc. cannot modify it, so you get a benefit there. And of course things like const int's are propagated at compile time, so that's a huge win (compared to just an int).
References and pointers are stored exactly the same, they just behave differently syntactically. References are basically renamings, and so are relatively safe, whereas pointers can point to lots of different things, and are thus more powerful and error-prone.
I guess a const pointer would be architecturally identical to a reference, so the machine code and efficiency would be the same. The real difference is syntax: references are a cleaner, easier-to-read syntax, and since you don't need the extra machinery provided by a pointer, a reference would be stylistically preferred.

Const variables in C++

AFAIK we cannot change the value of a constant variable in C.
But I faced this interview question:
In C++, we have procedure to change the value of a constant variable.
Could anybody tell me how could we do it?
You can modify mutable data members of a const-qualified class-type object:
struct awesome_struct {
    awesome_struct() : x(0) { }
    mutable int x;
};
int main() {
    const awesome_struct a;
    a.x = 42;
}
The behavior here is well-defined.
Under the circumstances, I think I'd have explained the situation: attempting to change the value of a variable that's const gives undefined behavior. What he's probably asking about is how to change a variable that isn't itself const, but to which you've received a pointer or reference to const. In this case, when you're sure the variable itself is not const-qualified, you can cast away the const-ness with const_cast, and proceed to modify.
If you do the same when the variable itself is const-qualified the compiler will allow the code to compile, but the result will be undefined behavior. The attempt at modifying the variable might succeed -- or it might throw an exception, abort the program, re-format your NAS appliance's hard drives, or pretty much anything else.
It's probably also worth mentioning that when/if a variable is likely to need to be used this way, you can specify that the variable itself is mutable. This basically means that the variable in question is never const qualified, even if the object of which it's a part is const qualified.
There is no way to cast away the const-ness of a variable (which is const by definition) and modify it without invoking UB.
Most probably the interviewers have no idea what they are talking about. ;-)
You cannot change the value of a constant variable. Trying to do so by cheating the compiler, invokes undefined behavior. The compiler may not be smart enough to tell you what you're doing is illegal, though!
Or maybe, you mean this:
const char *ptr = "Nawaz";
ptr = "Sarfaraz";
??
If so, then it's changing the value of the pointer itself, which is not constant; rather, the data ptr points to is constant, and you cannot change that, for example by writing ptr[2]='W';.
The interviewer was looking for const_cast. As an added bonus, you can explain scenarios where const_cast makes sense, such as const_cast<MyClass *>(this)->doSomething() (which can be perfectly valid, although IMO not very clean), and scenarios where it can be dangerous, such as casting away the constness of a passed const reference and thereby breaking the contract your function's signature provides.
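A hedged sketch of the distinction the interviewer was presumably after; const_cast is only safe when the underlying object was not itself declared const:
#include <iostream>

int main() {
    int x = 10;                    // the object itself is not const
    const int& ref = x;
    const_cast<int&>(ref) = 20;    // OK: the referenced object is modifiable
    std::cout << x << '\n';        // prints 20

    const int y = 10;              // the object itself IS const
    const int& ref2 = y;
    // const_cast<int&>(ref2) = 20; // compiles, but undefined behavior
}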
If a const variable requires a change in value, then you have discovered a flaw in the design (under most, if not all, situations). If the design is yours, great! - you have an opportunity to improve your code.
If the design is not yours, good luck, because the UB comment above is spot-on. In embedded systems, const variables may be placed in ROM. Performing tricks in an attempt to make such a variable writable leads to some spectacular failures. Furthermore, the problem extends beyond simple member variables. Methods and/or functions that are marked const may be optimized in such a way that "lack of const-ness" becomes nonsensical.
(Take this "comment-as-answer" as a statement on the poor interview question.)

Constants and compiler optimization in C++

I've read all the advice on const-correctness in C++ and that it is important (in part) because it helps the compiler to optimize your code. What I've never seen is a good explanation on how the compiler uses this information to optimize the code, not even the good books go on explaining what happens behind the curtains.
For example, how does the compiler optimize a method that is declared const vs one that isn't but should be. What happens when you introduce mutable variables? Do they affect these optimizations of const methods?
I think that the const keyword was primarily introduced for compile-time checking of program semantics, not for optimization.
Herb Sutter, in the GotW #81 article, explains very well why the compiler can't optimize anything when passing parameters by const reference, or when declaring a const return value. The reason is that the compiler has no way to be sure that the referenced object won't be changed, even if it is declared const: one could use a const_cast, or some other code can have a non-const reference to the same object.
However, quoting Herb Sutter's article:
There is [only] one case where saying "const" can really mean something, and that is when objects are made const at the point they are defined. In that case, the compiler can often successfully put such "really const" objects into read-only memory[...].
There is a lot more in this article, so I encourage you reading it: you'll have a better understanding of constant optimization after that.
Let's disregard methods and look only at const objects; the compiler has much more opportunity for optimization here. If an object is declared const, then (ISO/IEC 14882:2003 7.1.5.1(4)):
Except that any class member declared mutable (7.1.1) can be modified, any attempt to modify a const object during its lifetime (3.8) results in undefined behavior.
Let's disregard objects that may have mutable members; the compiler is free to assume that the object will not be modified, therefore it can produce significant optimizations. These optimizations can include things like:
incorporating the object's value directly into the machines instruction opcodes
complete elimination of code that can never be reached because the const object is used in a conditional expression that is known at compile time
loop unrolling if the const object is controlling the number of iterations of a loop
Note that this applies only if the actual object is const; it does not apply to objects that are accessed through const pointers or references, because those access paths can lead to objects that are not const (it is even well-defined to change objects through const pointers/references as long as the actual object is non-const and you cast away the constness of the access path to the object).
In practice, I don't think there are compilers out there that perform significant optimizations for all kinds of const objects, but for objects of primitive types (ints, chars, etc.) compilers can be quite aggressive in optimizing their use.
handwaving begins
Essentially, the earlier the data is fixed, the more the compiler can move around the actual assignment of the data, ensuring that the pipeline doesn't stall out
end handwaving
Meh. Const-correctness is more of a style / error-checking thing than an optimisation. A full-on optimising compiler will follow variable usage and can detect when a variable is effectively const or not.
Added to that, the compiler cannot rely on you telling it the truth - you could be casting away the const inside a library function it doesn't know about.
So yes, const-correctness is a worthy thing to aim for, but it doesn't tell the compiler anything it won't figure out for itself, assuming a good optimising compiler.
It does not optimize the function that is declared const.
It can optimize functions that call the function that is declared const.
struct someType {
    explicit someType(int val);
    void someFunc() const;
    int m_val;
};
void Fling(int);
void Flong(int);

void MyFunc()
{
    someType A(4);
    Fling(A.m_val);
    A.someFunc();
    Flong(A.m_val);
}
Here, to call Fling(), the value A.m_val had to be loaded into a CPU register. If someFunc() is not const, the value would have to be reloaded before calling Flong(). If someFunc() is const, then we can call Flong() with the value that's still in the register.
The main reason for having methods as const is for const correctness, not for possible compilation optimization of the method itself.
If variables are const they can (in theory) be optimized away, but only if their scope can be seen by the compiler. After all, the compiler must allow for them being modified with a const_cast elsewhere.
Those are all true answers, but the answers and the question seem to presume one thing: that compiler optimization actually matters.
There is only one kind of code where compiler optimization matters, and that is code that is:
a tight inner loop,
in code that you compile, as opposed to a 3rd-party library,
not containing function or method calls (even hidden ones),
where the program counter spends a noticeable fraction of its time
If the other 99% of the code is optimized to the Nth degree, it won't make a hoot of difference, because it only matters in code where the program counter actually spends time (which you can find by sampling).
I would be surprised if the optimizer actually puts much stock into a const declaration. There is a lot of code that will end up casting const-ness away, it would be a very reckless optimizer that relied on the programmer declaration to assume when the state may change.
const-correctness is also useful as documentation. If a function or parameter is listed as const, I don't need to worry about the value changing out from under my code (unless somebody else on the team is being very naughty). I'm not sure it would be actually worth it if it wasn't built into the library, though.
The most obvious point where const is a direct optimization is in passing arguments to a function. It's often important to ensure that the function doesn't modify the data so the only real choices for the function signature are these:
void f(Type dont_modify); // or
void f(Type const& dont_modify);
Of course, the real magic here is passing a reference rather than creating an (expensive) copy of the object. But if the reference weren't marked as const, this would weaken the semantics of this function and have negative effects (such as making error-tracking harder). Therefore, const enables an optimization here.
/EDIT: actually, a good compiler can analyze the control flow of the function, determine that it doesn't modify the argument, and make the optimization (passing a reference rather than a copy) itself. const here is merely a help for the compiler. However, since C++ has some pretty complicated semantics and such control-flow analysis can be very expensive for big functions, we probably shouldn't rely on compilers for this. Does anybody have any data to back me up / prove me wrong?
/EDIT2: and yes, as soon as custom copy constructors come into play, it gets even trickier because compilers unfortunately aren't allowed to omit calling them in this situation.
This code,
#include <iostream>
using namespace std;

class Test
{
public:
    Test (int value) : m_value (value)
    {
    }
    void SetValue (int value) const
    {
        const_cast<Test&>(*this).MySetValue (value);
    }
    int Value () const
    {
        return m_value;
    }
private:
    void MySetValue (int value)
    {
        m_value = value;
    }
    int m_value;
};

void modify (const Test &test, int value)
{
    test.SetValue (value);
}

int main ()
{
    const Test test (100);
    cout << test.Value () << endl;
    modify (test, 50);
    cout << test.Value () << endl;
}
outputs:
100
50
which means the const-declared object has been altered in a const member function. The presence of const_cast (and the mutable keyword) in the C++ language means that the const keyword can't aid the compiler in generating optimised code. And as I pointed out in my previous posts, it can even produce unexpected results.
As a general rule:
const != optimisation
In fact, this is a legal C++ modifier:
volatile const
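A hedged example of where that combination appears (the address is hypothetical); memory-mapped hardware registers are often read-only to the program yet change asynchronously:
volatile const unsigned* const status_reg =
    reinterpret_cast<volatile const unsigned*>(0x40000000);

bool device_ready() { return (*status_reg & 0x1u) != 0; } // must re-read on every call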
const helps compilers optimize mainly because it makes you write optimizable code. Unless you throw in const_cast.