NULL vs nullptr (Why was it replaced?) [duplicate]

I know that in C++11 (formerly C++0x), 0 and NULL were replaced by nullptr in pointer-based code. I'm just curious about the exact reason why they made this replacement.
In what scenario is using nullptr over NULL beneficial when dealing with pointers?

nullptr has type std::nullptr_t. It's implicitly convertible to any pointer type. Thus, it'll match std::nullptr_t or pointer types in overload resolution, but not other types such as int.
0 (aka. C's NULL bridged over into C++) could cause ambiguity in overloaded function resolution, among other things:
void f(int);
void f(foo *);
(Thanks to Caleth pointing this out in the comments.)
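A minimal sketch of the difference (the struct name foo and the printed messages are illustrative):
#include <iostream>

struct foo {};

void f(int)   { std::cout << "f(int)\n"; }
void f(foo *) { std::cout << "f(foo *)\n"; }

int main()
{
    f(0);       // prints "f(int)": 0 is an int first, even though it is
                // also a null pointer constant
    f(nullptr); // prints "f(foo *)": std::nullptr_t converts to foo *,
                // but not to int
}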

You can find a good explanation of why nullptr was introduced by reading the proposal A name for the null pointer: nullptr. To quote the paper:
This problem falls into the following categories:
Improve support for library building, by providing a way for users to write less ambiguous code, so that over time library writers will not need to worry about overloading on integral and pointer types.
Improve support for generic programming, by making it easier to express both integer 0 and nullptr unambiguously.
Make C++ easier to teach and learn.

Here is Bjarne Stroustrup's wording:
In C++, the definition of NULL is 0, so there is only an aesthetic difference. I prefer to avoid macros, so I use 0. Another problem with NULL is that people sometimes mistakenly believe that it is different from 0 and/or not an integer. In pre-standard code, NULL was/is sometimes defined to something unsuitable and therefore had/has to be avoided. That's less common these days.
If you have to name the null pointer, call it nullptr; that's what it's called in C++11. Then, "nullptr" will be a keyword.

One reason: the literal 0 has a bad tendency to acquire the type int, e.g. in perfect argument forwarding or, more generally, as an argument of templated type (see the sketch below).
Another reason: readability and clarity of code.
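A sketch of the first reason: once a literal 0 is deduced as int, it can no longer act as a null pointer (the names g and call_g are illustrative):
#include <utility>

void g(int *) {}

template <typename T>
void call_g(T &&arg)
{
    g(std::forward<T>(arg)); // forwards arg with its deduced type
}

int main()
{
    // call_g(0);    // error: T deduces to int, and the forwarded int
    //               // no longer converts to int *
    call_g(nullptr); // OK: T deduces to std::nullptr_t, which converts to int *
}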


Changing/punning types [duplicate]

Consider:
double a = 42.5;
int b;
b = (int) a; // C-like cast notation
b = int (a); // Functional notation
Apparently I was wrong in my initial cut at an answer. They are roughly equivalent. And while compound type names like long long or void * can't use functional syntax directly (i.e. long long(val) doesn't work), using typedef can get around this issue.
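For example, a minimal sketch (the typedef name ll_t is just for illustration):
typedef long long ll_t;

int main()
{
    double val = 1.5;
    // long long x = long long(val); // does not compile: functional notation
    //                               // needs a single-word type name
    long long x = ll_t(val);         // fine through the typedef
    (void)x;
}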
Both cast notations are very bad and should be avoided. For example:
const char c = 'a';
void *fred = (void *)(&c); // silently casts away const
works, and it shouldn't.
The C-style cast notation will sometimes behave like static_cast, sometimes like const_cast, sometimes like reinterpret_cast, or even a combination of them, depending on the exact situation in which it's used. These semantics are rather complex, and it's not always easy to tell exactly what's happening in any given situation.
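A sketch of that mixed behaviour (the class names are illustrative); a single C-style cast can silently combine const_cast and static_cast:
struct B {};
struct D : B {};

int main()
{
    D d;
    const B *pb = &d;

    D *pd = (D *)pb; // one C-style cast both removes const and downcasts

    // The named casts force each step to be spelled out:
    D *pd2 = const_cast<D *>(static_cast<const D *>(pb));

    (void)pd; (void)pd2;
}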
I have gone to using mostly C++ static_cast<type>(val) style casts, and never use C-style casts. Based on my research for this question I'm going to also stop using function-style casts for anything. The question "C++ cast syntax styles" has an excellent answer (the accepted one) that details why.
There's hardly any difference. For the built-in types, both notations mean exactly the same thing and generate the same code; despite its appearance, the functional notation does not involve an actual function call, so they really are the same.
There isn't any difference. It is a matter of preference. These are old-style casts.
It depends where you use it and how, i.e. whether you have values or pointers (or pointers to pointers).
With C++ you should read up on the *_cast<> operators and use them instead.

C++ Operator Overloading??? Isn't it more like override? [duplicate]

In C++, why is operator overloading called "overloading"?
To me, this seems more like "override."
Because you're not changing the meaning of +, -, *, / etc. with respect to the fundamental data types. You can't change what those mean for char, short, int, float, etc. Therefore, you're not truly overriding anything.
You are instead expanding their meaning to new contexts, which seems to fit with the term "overloading": you've loaded new meanings onto the symbols that they did not previously have.
This is pretty subjective, and not easy to answer in specific terms.
But generally we use "override" to mean "replacing the behaviour of a function with another behaviour", as in when you have a polymorphic class hierarchy, and a call to a function whose various implementations are virtual can result in totally different behaviour. Certainly that's what the standard means by the term.
Isn't this also what happens with overloading? Kind of. But usually when you overload a function, you do so to give it different parameter lists, but would still expect each implementation to perform the same job. It doesn't have to, but one expects it to.
Similarly with overloaded operators: if you're overloading, say, operator+, then generally we expect that it still just does the normal, conventional "addition" logic, but overloaded so that it can take an argument of your new class type alongside the existing overloads that take built-in types.
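For instance, a minimal sketch (the Money class is illustrative):
#include <iostream>

struct Money { long cents; };

// A new overload of operator+ for Money; built-in addition is untouched.
Money operator+(Money lhs, Money rhs)
{
    return Money{lhs.cents + rhs.cents};
}

int main()
{
    Money a{150}, b{275};
    std::cout << (a + b).cents << '\n'; // 425, via the overload above
    std::cout << (1 + 2) << '\n';       // 3, via built-in int addition
}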
In practice, that breaks down a bit, because even the standard library makes operator<< mean something completely different (among other examples).
Still, the actual task of creating these new operators is accomplished by what the language considers to be function overloading (particularly as no virtual calls are involved at all).
In short, you're arguably not entirely wrong, but it's pretty arbitrary and this is what we ended up with.

Why is p2 not a pointer type in declaration int* p1, p2;? [duplicate]

int* p1, p2;
According to the C++ standard, p1 is a pointer yet p2 is not.
I just wonder why the C++ standard doesn't also define p2 as a pointer?
I think it is reasonable to do so. Because:
C++ is a strongly typed language. That is to say, given any type T, the declaration T t1, t2; always guarantees that t1 and t2 have the same type.
However, the fact that p1 and p2 don't have the same type breaks the rule, and seems counter-intuitive.
So, my question is: What's the rationale to make such a counter-intuitive rule as is? Just for backward compatibility?
int is the base type; * and & are prefix specifiers, also called declarator operators. That means the * operates on the variable after it; it is not a modifier for int. I think that is why.
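To illustrate how the * binds:
int *p1, p2;  // p1 has type int *, p2 is a plain int
int *q1, *q2; // both are int *

// One declaration per line sidesteps the trap entirely:
int *r1;
int  r2;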
The simple answer that won't help you much is that this is done for backwards compatibility with C, that has exactly the same syntax to define variables. The rationale for that design would have to come from the creators of C, and I don't really know it.
What I do know is that most coding guidelines I have used at different companies prohibited the definition of multiple variables in the same statement, and once you avoid that, everything becomes simple to read and maintain. The syntax for declarations in C and C++ is not really one of their strengths, but it is what it is.

What are the uses of the type `std::nullptr_t`?

I learned that nullptr, in addition to being convertible to any pointer type (but not to any integral type), also has its own type, std::nullptr_t. So it is possible to have a function overload that accepts std::nullptr_t.
Exactly why is such an overload required?
If more than one overload accepts a pointer type, an overload for std::nullptr_t is necessary to accept a nullptr argument. Without the std::nullptr_t overload, it would be ambiguous which pointer overload should be selected when passed nullptr.
Example:
#include <cstddef>

void f(int *intp)
{
    // Passed an int pointer
}

void f(char *charp)
{
    // Passed a char pointer
}

void f(std::nullptr_t nullp)
{
    // Passed a null pointer
}

int main()
{
    f(nullptr);                     // unambiguous: selects the nullptr_t overload
    f(static_cast<int *>(nullptr)); // selects f(int *)
}
There are some special cases where comparison with the nullptr_t type is useful to indicate whether an object is valid.
For example, the operator== and operator!= overloads of std::function take only nullptr_t as the parameter to tell whether the function object is empty. For more details you could read this question.
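A short sketch of that use with std::function:
#include <functional>
#include <iostream>

int main()
{
    std::function<int(int)> fn; // empty: holds no callable

    if (fn == nullptr)          // the overload taking std::nullptr_t
        std::cout << "empty\n";

    fn = [](int x) { return x + 1; };

    if (fn != nullptr)          // now holds a callable
        std::cout << "not empty: " << fn(41) << '\n';
}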
Also, what other type would you give it that doesn't simply re-introduce the problems we had with NULL? The whole point is to get rid of the nasty implicit conversions, but we can't actually change the behaviour of old programs, so here we are.
The type was introduced to avoid confusion between integer zero and the null pointer. And as always, C++ gives you access to the type, whereas Java only gives you access to the value. It really doesn't matter what purpose you find for it. I normally use it as a token in function overloading.
But I have some issues with the implementation of C++'s null pointer constant.
Why didn't they just continue with NULL or null? That name was already being used for the purpose. And what about code that was already using nullptr for something else?
Not to mention that nullptr is just too long: annoying to type and ugly to look at most times, six characters just to default-initialize a variable.
With the introduction of nullptr, you would think zero would no longer be both an integer and a null pointer constant. However, zero still holds that annoying ambiguity, so I don't see the sense of the new nullptr value. If you define a function that can accept either an integer or a char pointer, and pass zero to that call, the compiler will complain that it is ambiguous. And I don't think casting to an integer will help.
Finally, it sucks that nullptr_t is part of the std namespace and not simply a keyword. In fact, I am only just learning this, after how long I have been using nullptr_t in my functions. The MinGW32 that comes with Code::Blocks lets you get away with using nullptr_t without the std namespace; in fact, MinGW32 allows void* increment and a whole lot of other things.
Which leads me to this: C++ has too many dialects and too much confusion, to the point where code compatible with one compiler is not compatible with another on the same C++ version. A static library built with one compiler cannot work with a different compiler. There is no reason why it has to be this way, and I think this is just one more thing that will help kill C++.

C++ cast syntax styles

A question related to Regular cast vs. static_cast vs. dynamic_cast:
What cast syntax style do you prefer in C++?
C-style cast syntax: (int)foo
C++-style cast syntax: static_cast<int>(foo)
constructor syntax: int(foo)
They may not translate to exactly the same instructions (do they?) but their effect should be the same (right?).
If you're just casting between the built-in numeric types, I find C++-style cast syntax too verbose. As a former Java coder I tend to use C-style cast syntax instead, but my local C++ guru insists on using constructor syntax.
What do you think?
It's best practice never to use C-style casts for three main reasons:
as already mentioned, no checking is performed. The programmer simply cannot know which of the various casts is used, which weakens strong typing
the new casts are intentionally visually striking. Since casts often reveal a weakness in the code, it's argued that making casts visible in the code is a good thing.
this is especially true if searching for casts with an automated tool. Finding C-style casts reliably is nearly impossible (see the sketch below).
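A short sketch of the checking and greppability points (the variable names are illustrative):
int main()
{
    const int c = 42;
    double d = 3.9;

    int *p = (int *)&c;                 // compiles: silently drops const
    // int *q = static_cast<int *>(&c); // rejected: static_cast won't drop const
    int *r = const_cast<int *>(&c);     // compiles, and the intent is explicit

    int i = (int)d;                     // invisible to a search for casts
    int j = static_cast<int>(d);        // trivially found by grepping "_cast<"
    (void)p; (void)r; (void)i; (void)j;
}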
As palm3D noted:
I find C++-style cast syntax too verbose.
This is intentional, for the reasons given above.
The constructor syntax (official name: function-style cast) is semantically the same as the C-style cast and should be avoided as well (except for variable initializations on declaration), for the same reasons. It is debatable whether this should be true even for types that define custom constructors, but in Effective C++, Meyers argues that even in those cases you should refrain from using them. To illustrate:
#include <memory> // for std::auto_ptr (deprecated in C++11, removed in C++17)
using std::auto_ptr;

void f(auto_ptr<int> x);

f(static_cast<auto_ptr<int> >(new int(5))); // GOOD
f(auto_ptr<int>(new int(5)));               // BAD
The static_cast here will actually call the auto_ptr constructor.
According to Stroustrup:
The "new-style casts" were introduced to give programmers a chance to state their intentions more clearly and for the compiler to catch more errors.
So really, it's for safety, as it does extra compile-time checking.
Regarding this subject, I'm following the recommendations made by Scott Meyers (More Effective C++, Item 2: Prefer C++-style casts).
I agree that C++-style casts are verbose, but that's what I like about them: they are very easy to spot, and they make the code easier to read (which is more important than writing).
They also force you to think about what kind of cast you need, and to choose the right one, reducing the risk of mistakes. They will also help you detect errors at compile time instead of at runtime.
Definitely C++-style. The extra typing will help prevent you from casting when you shouldn't :-)
I use static_cast for two reasons.
It's explicitly clear what's taking place. I can't read over it without realizing there's a cast going on. With C-style casts your eye can pass right over it without pause.
It's easy to search for every place in my code where I'm casting.
The constructor syntax. C++ is OO, constructors exist, I use them.
If you feel the need to annotate these conversion ctors, you should do it for every type, not just the built-in ones. Maybe you use the explicit keyword for conversion ctors, but the client syntax mimics exactly what the ctor syntax for built-in types does.
As for being greppable, that may be true, but what a big surprise that typing more characters makes searches easier. Why treat these casts as special?
If you are writing math formulas with lots of conversions between int/unsigned and double/float, as in graphics code, and you need to write a static_cast every time, the formula gets cluttered and is very hard to read.
And it's an uphill battle anyway as a lot of times you will convert without even noticing that you are.
For downcasting pointers I do use the static_cast as of course no ctor exists by default that would do that.
C-style cast syntax does not error-check.
C++-style cast syntax does some checking.
When using static_cast, even if it doesn't do any checking, at least you know you should be careful here.
C-style cast is the worst way to go. It's harder to see, ungreppable, conflates different actions that should not be conflated, and can't do everything that C++-style casts can do. They really should have removed C-style casts from the language.
We currently use C-style casts everywhere. I asked the other casting question, and I now see the advantage of using static_cast instead, if for no other reason than it's "greppable" (I like that term). I will probably start using that.
I don't like the C++ style; it looks too much like a function call.
Go for C++ style and, at worst, the ugly, verbose code snippets that make up C++'s explicit typecasts will be a constant reminder of what we all know (i.e. explicit casting is bad -- bad enough to lead to the coining of expletives).
Do not go with C++ style if you want to master the art of tracking runtime errors.