As the title implies, in C++ we cannot have containers that hold references, since objects inside containers have to be assignable, and we cannot reassign a reference after it has been initialized.
However, in my program I have a static const std::map that holds const references as values, and it compiles fine. I am wondering if the reason is that the map is declared const and initialized at its declaration, which tells the compiler "this object is const and its contents will not change, so it's OK for it to hold const references as values".
I couldn't find answers anywhere else. The code works but I don't want it to confuse other developers.
Edit:
Sorry, I didn't include the code. Here it goes:
const glm::dvec4& GetObjectColor(const msg::ObjectType type) {
static const std::map<msg::ObjectType, const glm::dvec4&> kObjectColorMap = {
{msg::ObjectType::PERSON, kWhite},
{msg::ObjectType::ANIMAL, kSilver},
{msg::ObjectType::SEDAN, kGray},
{msg::ObjectType::SUV, kRed},
{msg::ObjectType::VAN, kMaroon},
{msg::ObjectType::BICYCLE, kYellow},
{msg::ObjectType::TRICYCLE, kOlive},
{msg::ObjectType::MOTORCYCLE, kLime},
{msg::ObjectType::TRUCK, kGreen},
{msg::ObjectType::BUS, kAqua},
{msg::ObjectType::PICKUP, kTeal},
{msg::ObjectType::UNKNOWN, kBlue}};
return kObjectColorMap.at(type);
}
No. You cannot.
Maybe you have seen this question already: Why does storing references (not pointers) in containers in C++ not work?
The premise of the question is correct. You cannot store references in containers.
... and it compiles fine.
Code that does not cause a compiler error cannot safely be assumed to be correct. In fact, "compiles without errors" is the lowest bar you can set for code. Consider this horribly broken code:
int* dont_do_this_at_home;
*dont_do_this_at_home = 42; // seriously: DON'T DO THIS
No compiler I know of will issue an error or warning for this code, irrespective of the fact that it could not be more broken. The best that can happen here is that you get a segmentation fault. The worst: demons fly out of your nose. Read here about undefined behavior: https://en.cppreference.com/w/cpp/language/ub
As explained in the Q&A linked above, the language specs say: you cannot store references in containers. If you do it anyway and your compiler does not generate an error, you should not assume that you have found a way around the rule.
It is similar to not being allowed to touch the ball with your hands in soccer. You can touch the ball with your hands, but that does not disprove the rule.
Consider this:
int a = 5;
std::map<int,int&> x{ { 1,a} }; // WRONG !!!
No compiler error, but it is still wrong. C++ isn't soccer; if you break the rules, there is no referee to tell you about it.
PS: there is std::reference_wrapper to store references in containers.
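For illustration, here is a minimal sketch of how a map like the one in the question could be written with std::reference_wrapper; the enum and the int colour constants below are simplified stand-ins for msg::ObjectType and the glm colour values:
#include <functional>
#include <iostream>
#include <map>

// Simplified stand-ins for the colour constants and the enum from the question.
const int kWhite = 0xFFFFFF;
const int kBlue = 0x0000FF;

enum class ObjectType { PERSON, UNKNOWN };

const int& GetObjectColor(const ObjectType type) {
    // std::reference_wrapper is copyable and assignable, so it satisfies the
    // container requirements that a plain reference cannot.
    static const std::map<ObjectType, std::reference_wrapper<const int>> kObjectColorMap = {
        {ObjectType::PERSON, std::cref(kWhite)},
        {ObjectType::UNKNOWN, std::cref(kBlue)}};
    return kObjectColorMap.at(type).get();
}

int main() {
    std::cout << std::hex << GetObjectColor(ObjectType::PERSON) << '\n';  // prints ffffff
}
The .get() call (or the implicit conversion to const int&) hands back the underlying reference, so callers see the same interface as before.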
Related
Why does the following code compile?
class Demo
{
public:
Demo() : a(this->a){}
int& a;
};
int main()
{
Demo d;
}
In this case, a is a reference to an integer. However, when I initialize Demo, I pass a reference to a reference of an integer which has not yet been initialized. Why does this compile?
This still compiles even if instead of int, I use a reference to a class which has a private default constructor. Why is this allowed?
Why does this compile?
Because it is syntactically valid.
C++ is not a safe programming language. There are several features that make it easy to do the right thing, but preventing someone from doing the wrong thing is not a priority. If you are determined to do something foolish, nothing will stop you. As long as you follow the syntax, you can try to do whatever you want, no matter how ludicrous the semantics. Keep that in mind: compiling is about syntax, not semantics.*
That being said, the people who write compilers are not without pity. They know the common mistakes (probably from personal experience), and they recognize that your compiler is in a good position to spot certain kinds of semantic mistakes. Hence, most compilers will emit warnings when you do certain things (not all things) that do not make sense. That is why you should always enable compiler warnings.
Warnings do not catch all logical errors, but for the ones they do catch (such as warning: 'Demo::a' is initialized with itself and warning: '*this.Demo::a' is used uninitialized), you've saved yourself a ton of debugging time.
* OK, there are some semantics involved in compiling, such as giving a meaning to identifiers. When I say compiling is not about semantics, I am referring to a higher level of semantics, such as the intended behavior.
Why does this compile?
Because there is no rule that would make the program ill-formed.
Why is this allowed?
To be clear, the program is well-formed, so it compiles. But the behaviour of the program is undefined, so from that perspective, the premise of your question is flawed. This isn't allowed.
It isn't possible to prove all cases where an indeterminate value is used, and it isn't easy to specify which of the easy cases should be detected by the compiler, and which would be considered to be too difficult. As such, the standard doesn't attempt to specify it, and leaves it up to the compiler to warn when it is able to detect it. For what it's worth, GCC is able to detect it in this case for example.
C++ allows you to pass a reference to a reference to uninitialized data because you might want to use the called function as the initializer.
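As a minimal sketch of that idea (the init function and its names are purely illustrative):
#include <iostream>

// The callee acts as the initializer of the referenced object.
void init(int& out) { out = 42; }

int main() {
    int value;                   // deliberately not initialized yet
    init(value);                 // fine: we only bind a reference, we never read the value
    std::cout << value << '\n';  // prints 42
}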
When declaring a variable as const and trying to modify its value later, you will get a compiler error. For example, this code:
void func(){
const int a = 5;
a = 4;
}
will generate error C3892 on MSVS. However, in the opposite case, no error or warning will be thrown. For example, this code:
void func(){
int a = 5;
std::cout << a;
}
won't produce any warning, even with the /Wall configuration. I know that this code is not buggy; it just does not follow best practice.
Why is there no warning for such a thing? Is it only on MSVS? Does the standard have anything to say about that? Are there other compilers that produce a warning for this?
When declaring a variable as const and trying to modify its value later, you will get a compiler error.
That's right; that's because you are misleading your compiler by telling it that you are not going to modify something, and then modifying it later.
Why there is no warning for such thing?
Because there is nothing misleading about that. Declaring a variable non-const does not tell the compiler that you are going to modify it, only that you want to have an option to modify it later.
Does the standard have anything to say about that?
No. Although the standard talks about the other case (modifying a const), it does not say anything about not modifying a non-const.
One reason for this is that finding a modification of a const is a trivial task, because it happens in one place. However, finding that there are no modifications of a non-const requires the compiler to look through the entire scope of the variable from the point of its declaration on. That is why finding potential const applications is left for program verification and refactoring tools.
I know for a function this simple it will be inlined:
int foo(int a, int b){
return a + b;
}
But my question is, can't the compiler just auto-detect that this is the same as:
int foo(const int a, const int b){
return a + b;
}
And since that could be detected, why would I need to type const anywhere? I know that the inline keyword has become obsolete because of compiler advances. Isn't it time that const do the same?
You don't add const as a result of not modifying a variable. You use const to enforce that you do not modify it. Without const, you are allowed to modify the value; with const, the compiler will complain.
It's a matter of semantics. If the value should not be mutable, then use const, and the compiler will enforce that intention.
Yes, the compiler can prove constness in your example.
No, it would be of no use :-).
Update: Herb Sutter dedicated one of his Guru of the Week articles to the topic (http://www.gotw.ca/gotw/081.htm). Summary:
const helps most by making the compiler and linker choose const overloads for const objects, including const member functions, which can be coded to be more efficient.
const doesn't help with the usual translation-unit model [differs from what I supposed]; the compiler needs to see the whole program to verify factual constness (which the mere declaration does not guarantee) and to exploit it, as well as to prove the absence of aliasing ...
... and when the compiler can see the whole program and can prove factual constness, it of course doesn't need the const declaration any longer! It can prove it. Duh.
The one place where const makes a big difference is a definition because the compiler may store the object in read-only memory.
The article is, of course, worth reading.
With respect to whole program optimization/translation which usually is necessary to exploit constness cf. the comments below from amdn and Angew.
can't the compiler just auto-detect that this is the same as...
If by that you mean whether the compiler can detect that the variables are not modified in the second case, most likely yes. The compiler is likely to produce the same output for both code samples. However, const might help the compiler in more complex situations. But the most important point is that it keeps you from inadvertently modifying one of the variables.
The compiler will always know what you did and will infer internal constness from that in order to optimize the code.
What the compiler can never know is what you wanted to do.
If you wanted a variable to remain constant but accidentally change it later in the code, the compiler can only trap this error if you tell it what you wanted.
This is what the const keyword is for.
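A tiny illustration of that (the apply function and its names are hypothetical):
#include <iostream>

// The intent is that 'factor' stays fixed inside the function body.
double apply(const double factor, double value) {
    // factor = 1.0;        // with const this is a compile error, so the slip is trapped
    return value * factor;  // without const, the accidental assignment would compile silently
}

int main() {
    std::cout << apply(2.0, 21.0) << '\n';  // prints 42
}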
struct bar {
const int* x;
};
bar make_bar(const int& x){
return {&x};
}
std::map<int,bar> data;
shuffle(data);
Knowing that bar will never modify x (or cause it to be modified) during its lifetime requires understanding every use of bar in the program; the alternative is to simply make x a pointer to const, as is done here.
Even with perfect whole-program optimization (which cannot exist: Turing machines are not perfectly understandable), dynamic linking means you cannot know at compile time how data will be used. const is a promise, and breaking that promise (in certain contexts) can be UB. The compiler can use that UB to optimize in ways that ignore the possibility of the promise being broken.
inline is not obsolete: it means the same thing it always did, namely that linker collisions of this symbol are to be ignored, and it mildly suggests injecting the code into the calling scope.
const simplifies certain optimizations (which may make them possible), and enforces things on the programmer (which helps the programmer), and can change what code means (const overloading).
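For the const-overloading point, here is a small self-contained sketch (the Buffer type is just an illustration):
#include <iostream>

struct Buffer {
    // Const overloading: which overload is chosen depends on whether the object
    // the member function is called on is const.
    const char* data() const { std::cout << "const overload\n"; return buf_; }
    char* data() { std::cout << "non-const overload\n"; return buf_; }

private:
    char buf_[16] = "hello";
};

int main() {
    Buffer b;
    const Buffer cb{};
    b.data();   // picks the non-const overload
    cb.data();  // picks the const overload
}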
Maybe it could, but const is also there for you. If you declare a variable const and try to assign a new value to it later, you will get an error. If the compiler silently decided the constness on its own, this check would not work.
The const qualifier is a way to enforce the behaviour of variables inside your scope. It only gives the compiler the means to scream at you if you try to modify them inside the scope where they are declared const.
A variable might be truly const (meaning it is placed in a read-only location, hence the compiler optimizations) if it is const at the time of its declaration.
You can pass non-const variables to your 2nd function; they will become "const" inside the function scope.
Alternatively, you can bypass the const by casting, so the compiler cannot parse your whole code in an attempt to figure out whether the values will be changed inside the function scope.
Considering that const qualifiers are mainly there to enforce behaviour in code, and that compilers will generate the same code in 99% of cases whether a variable is const or non-const, then NO, the compiler shouldn't auto-detect constness.
Short answer: because not all problems are that simple.
Longer answer: you cannot assume that an approach which works for a simple problem also works for a complex one.
Exact answer: const is an intent. The main goal of const is to prevent you from doing anything accidentally. If the compiler added const automatically, it would just see that your code is NOT const and leave it at that. Using the const keyword will raise an error instead.
This question already has answers here:
Does const-correctness give the compiler more room for optimization?
Do const declarations help the compiler (GCC) produce faster code or are they only useful for readability and correctness?
Zed Shaw has argued that const is useless or is overused in C/C++:
Next is all the bizarre fascination with const. For some odd reason
C++ loves to make you slap const on every part of a declaration, and
yet I get the same end result as C: a function is called. (...)
(From: http://librelist.com/browser//mongrel2/2010/7/15/c-verses-c++/#770d94bcfc6ddf1d8510199996b607dd )
Yes. Here’s one concrete example. const makes it possible to pass arguments by const& rather than by value (which might require a costly copy). It’s important to realise that the alternative to pass-by-const& is not pass-by-& because the latter doesn’t allow temporaries to be bound. So, for instance, this code:
auto result = foo{1} + foo{2} + foo{3};
may call foo operator +(foo const&, foo const&) but it may not call foo operator +(foo&, foo&).
That way, const helps avoid copies.
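A minimal, self-contained version of that example (the foo type here is only a stand-in):
#include <iostream>

struct foo {
    int v;
};

// The const& overload lets temporaries such as foo{1} bind to the parameters.
foo operator+(foo const& a, foo const& b) { return foo{a.v + b.v}; }

// An overload taking non-const references could not be used here:
// foo operator+(foo& a, foo& b);   // foo{1} + foo{2} would not compile with only this

int main() {
    auto result = foo{1} + foo{2} + foo{3};  // every temporary binds to const&
    std::cout << result.v << '\n';           // prints 6
}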
But generally, const is a tool to ensure correctness, not to aid optimisations.
Either way, Zed Shaw has no idea what he’s talking about. The rest of his rant is similarly misinformed, by the way.
No, const does not help the compiler make faster code. Const is for const-correctness, not optimizations.
The C++ standard says that const objects can't be modified, but it also provides const_cast to strip the const qualifier and make the object writable (writing this way to an object that was actually defined const is undefined behavior, for example if it is located in read-only memory); as such, const cannot mean, in general, that the target variable will not change.
I can only think of these two very narrow scenarios where having const produces faster code than not having it:
the variable is global with internal linkage (static) and is passed by reference or pointer to a function defined in a different translation unit (different file). In this case, the compiler cannot elide reads to it if it is not marked const;
the variable is global with external linkage (extern). Reads to a const extern can be elided inside the file that defines it (but nowhere else).
When const is applied to a global variable, the compiler is allowed to assume that the value will never change and may place it in read-only memory; any attempt by the program to modify it is then undefined behavior, and compiler authors love to rely on the threat of undefined behavior to make code faster.
Note that both scenarios apply only to global variables, which probably make for a very minor portion of the variables in your program. To its merit, however, const implies static at the global level in C++ (this is not the case in C).
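A small illustration of that last point about linkage (the variable names are arbitrary):
#include <iostream>

// At namespace scope in C++, const implies internal linkage:
const int kLimit = 10;          // behaves as if it were: static const int kLimit = 10;

// To give a const variable external linkage (visible to other translation units),
// you have to say so explicitly:
extern const int kShared = 20;  // definition with external linkage

int main() {
    std::cout << kLimit + kShared << '\n';  // prints 30
}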
Someone above said that using const can make code faster because it's possible to use const references. I would argue that what makes the code faster is the use of a reference, not the use of const.
That said, I still believe const is a very sharp knife with which you can't cut yourself and I would advise that you use it whenever it's appropriate, but don't do it for performance reasons.
Yes, const can (though it is not guaranteed to) help the compiler produce faster or more correct code. More often than not, it is just a modifier on data that tells both the compiler and the people reading your code that some data is not supposed to change. This helps the type system help you write more correct software.
More important than any optimization, it prevents your own code, and people using your code, from writing to data you assume to be invariant.
In general, the const modifier on methods, references and pointers can't be used to optimize code, for a couple of reasons. The primary one is that const, in those contexts, doesn't make any guarantee that the underlying data won't change; it just makes it harder to modify. Here is a classic example:
void M(const C& p1, C& p2) {
cout << p1.field << endl;
p2.Mutate();
cout << p1.field << endl;
}
In this case it's very possible that p1.field is modified in this code. The most obvious case is that p1 and p2 refer to the same value.
C local;
M(local, local);
Hence there is no real optimization the compiler can do here. The const parameter is just as dangerous as the non-const one.
The other reason why it can't really optimize is that anyone can cheat in C++ with const_cast.
class C {
public:
int field;
int GetField() const {
C* pEvil = const_cast<C*>(this);
pEvil->field++;
return field;
}
};
So even if you are dealing with a single const reference the values are still free to change under the hood.
I'm using the UnitTest++ framework to implement unit tests on some C code I'm responsible for. The end product is embedded and uses const structures to hold configuration information. Since the target host can modify the configuration asynchronously the members of the structure are all volatile. Some of the structures are also declared as volatile.
I'm getting segmentation faults when I use const_cast to try to modify the structure instances lacking the volatile keyword on the UnitTest Windows 7 host. This makes sense to me. However, if the structure instance was declared with the volatile keyword then the test passes. This does not make sense to me.
Here's a quick code example that shows the problem with gcc on Win7. Switching the define value causes the segfault to appear or not, depending on if the volatile instance of the struct is used or not.
typedef struct
{
volatile int foo;
volatile int bar;
} TestStruct;
const TestStruct constStruct = { 1, 2};
volatile const TestStruct volatileConstStruct = { 3, 4};
#define SEG_FAULT 0
int main(void)
{
TestStruct * constPtr = const_cast<TestStruct*>(&constStruct);
TestStruct * constVolPtr = const_cast<TestStruct*>(&volatileConstStruct);
#if(SEG_FAULT == 0)
constVolPtr->foo = 10;
#else
constPtr->foo = 20;
#endif
}
Can anyone help me understand why the volatile keyword presents a workaround for the segfault? Also, can anyone suggest a method to allow me to modify the values in the structure for unit test without adding the volatile keyword to all the structure instances?
EDIT:
I've just discovered that you can do this in C:
#define const
Including the effective "const undefine" above only in the test fixture means my target compiler still sees the const keyword and correctly places the structures into flash memory, while the preprocessor on the UnitTest++ compiler strips out the const keyword, so my test fixture is able to modify the struct.
The drawback to this solution is that I cannot add unit tests that verify correct const operation of function calls. However, since removing the const from the struct instances is not an option (need the data to be placed in flash) this appears to be a drawback I will have to live with.
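For what it's worth, a sketch of how such a test-only "const undefine" might be isolated; the header name and the UNIT_TEST guard are purely hypothetical:
// test_overrides.h -- included only by the unit-test build, never by the target build
#ifdef UNIT_TEST
// Strip const so the test fixture can write to the configuration structures.
// Redefining a keyword like this is formally not allowed, so keep it strictly
// out of the production build.
#define const
#endif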
Why this strange behavior?
Modifying a const object using const_cast is undefined behavior.
const_cast is meant for the case where you have a pointer (or reference) to const that actually refers to a non-const object, and you need a non-const pointer to it.
Why does it work with volatile?
Not sure. However, it is still undefined behavior and you are just lucky that it appears to work.
The problem with undefined behavior is that all bets are off and the program might show any behavior: it might appear to work, it might not work, it might crash, or it might do something completely weird.
It is best not to write any code exhibiting undefined behavior; that saves you from having to explain situations like this one.
How to solve this?
Don't declare the objects you modify as const. Since you intend to modify them during the course of your program/test, they should not be const. Currently, you are making a promise to the compiler that your structure objects are immutable (const), but later you break that contract by modifying them. Make this promise only if you can keep it.
I believe a footnote in the standard gives you the answer. (Note that footnotes are not normative.)
In §6.7.3 of the C standard draft N1570:
132) The implementation may place a const object that is not volatile
in a read-only region of storage.
This means that a structure defined with the volatile keyword will be placed in read-write memory, despite the fact that it is defined const.
One could argue that the compiler is not allowed to place either of the structures in read-only memory, as they both contain volatile members. I would send in a compiler bug report, if I were you.
Can anyone help me understand why the volatile keyword presents a
workaround for the segfault? Also, can anyone suggest a method to
allow me to modify the values in the structure for unit test without
adding the volatile keyword to all the structure instances?
You can't. A const object may be placed in read-only memory, and you will trigger a segfault if you write to it. Either drop the const or add volatile -- I would strongly recommend dropping const.