Optimized code: is ++p faster than p++? [duplicate] - c++

Possible Duplicate:
Is there a performance difference between i++ and ++i in C++?
In C++, I see people frequently use ++p in a for loop or elsewhere, where you want to increment a value but not use the return value. I've heard this is more efficient because p++ returns the value before incrementing, and thus requires temporary storage.
But it feels like even a very ignorant compiler will pull out that return value as dead code (as long as the increment code is inlined, so the compiler can see that the return value isn't needed).
I'm trying to come up with a good example where, using some sort of iterator, iter++ will actually create the copy (even though the return value of iter++ is not used).
After all, we don't really frequently consider register allocation either when we write code with iterators.
I learned to use p++ simply because that's what the book I learned from did. Is preferring ++p when no return value is used an archaic practice, or simply part of elegant coding? And why is the language not called ++C then?

p++ is slower only when ++p won't do the job - i.e., when you actually use the return value. Otherwise, the compiler will optimize the two to the same thing.
People prefer using ++i instead of i++ because it better describes what you intend to do: increment i, rather than "increment i and return the old value". Of course, you tend to stick to old habits. If you're used to writing i++, that's OK, unless the coding standards of the company you work for mandate that you use ++i.
Chapter 3 of D&E: "I picked C++ because it was short, had nice interpretations, and wasn't of the form "adjective C." In C, ++ can, depending on context, be read as "next," "successor," or "increment," though it is always pronounced "plus plus." The name C++ and its runner up ++C are fertile sources for jokes and puns -- almost all of which were known and appreciated before the name was chosen. The name C++ was suggested by Rick Mascitti. It was first used in December of 1983 when it was edited into the final copies of [Stroustrup,1984] and [Stroustrup,1984c]."
Chapter 1 of TC++PL: "The name C++ (pronounced "see plus plus") was coined by Rick Mascitti in the summer of 1983. The name signifies the evolutionary nature of the changes from C; "++" is the C increment operator. The slightly shorter name "C+" is a syntax error; it has also been used as the name of an unrelated language. Connoisseurs of C semantics find C++ inferior to ++C. The language is not called D, because it is an extension of C, and it does not attempt to remedy problems by removing features. For yet another interpretation of the name C++, see the appendix of [Orwell,1949]."
Sauce.

Depending on the iterator, creating a copy can be an extremely time-consuming task (think about a naive, stack-based iterator for a binary search tree). I suppose you realize that though, and that's not your real question :). Anyway, as far as I'm aware, the compiler is not required to optimize i++ into ++i.
As it's not required to optimize it, I would think it's better to err on the side of caution and stick with ++i.
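To make the worry concrete, here is a rough sketch (names and details invented for illustration, not taken from any real library) of an in-order iterator over a binary search tree whose state is a whole stack of nodes; the canonical postfix form has to copy that stack just to hand back the old position, while the prefix form does not:
#include <stack>
struct Node { int value; Node* left; Node* right; };
class InorderIterator {                       // illustrative sketch, not a complete iterator
    std::stack<Node*> path_;                  // every ancestor down to the current node
    void pushLeftChain(Node* n) { for (; n != nullptr; n = n->left) path_.push(n); }
public:
    explicit InorderIterator(Node* root) { pushLeftChain(root); }
    int operator*() const { return path_.top()->value; }
    InorderIterator& operator++() {           // prefix: advance in place, nothing copied
        Node* n = path_.top(); path_.pop();
        pushLeftChain(n->right);
        return *this;
    }
    InorderIterator operator++(int) {         // postfix: must copy the whole stack first
        InorderIterator old = *this;          // this copy is the cost iter++ pays
        ++*this;
        return old;
    }
};
Whether an optimizer can throw that copy away when the result of iter++ is unused depends on it proving the copy has no observable side effects; with a heap-allocating member like std::stack that isn't guaranteed, which is exactly why the "prefer ++iter" habit persists for non-trivial iterators.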


Is it a good style to write constants on the left of equal to == in If statement in C++? [duplicate]

Okay, we know that the following two lines are equivalent -
(0 == i)
(i == 0)
Also, the first method was encouraged in the past because that would have allowed the compiler to give an error message if you accidentally used '=' instead of '=='.
My question is - in today's generation of pretty slick IDEs and intelligent compilers, do you still recommend the first method?
In particular, this question popped into my mind when I saw the following code -
if(DialogResult.OK == MessageBox.Show("Message")) ...
In my opinion, I would never recommend the above. Any second opinions?
I prefer the second one, (i == 0), because it feels much more natural when reading it. You ask people, "Are you 21 or older?", not, "Is 21 less than or equal to your age?"
It doesn't matter in C# if you put the variable first or last, because assignments don't evaluate to a bool (or something castable to bool) so the compiler catches any errors like "if (i = 0) EntireCompanyData.Delete()"
So, in the C# world at least, it's a matter of style rather than desperation. And putting the variable last is unnatural to English speakers. Therefore, for more readable code, variable first.
If you have a list of ifs that can't be represented well by a switch (because of a language limitation, maybe), then I'd rather see:
if (InterestingValue1 == foo) { } else
if (InterestingValue2 == foo) { } else
if (InterestingValue3 == foo) { }
because it allows you to quickly see which are the important values you need to check.
In particular, in Java I find it useful to do:
if ("SomeValue".equals(someString)) {
}
because someString may be null, and in this way you'll never get a NullPointerException. The same applies if you are comparing constants that you know will never be null against objects that may be null.
(0 == i)
I will always pick this one. It is true that most compilers today do not allow the assignment of a variable in a conditional statement, but the truth is that some do. In programming for the web today, I have to use a myriad of languages on a system. By using 0 == i, I always know that the conditional statement will be correct, and I am not relying on the compiler/interpreter to catch my mistake for me. Now if I have to jump from C# to C++, or JavaScript, I know that I am not going to have to track down assignment errors in conditional statements in my code. For something this small and to have it save that amount of time, it's a no-brainer.
I used to be convinced that the more readable option (i == 0) was the better way to go.
Then we had a production bug slip through (not mine thankfully), where the problem was a ($var = SOME_CONSTANT) type bug. Clients started getting email that was meant for other clients. Sensitive type data as well.
You can argue that Q/A should have caught it, but they didn't, that's a different story.
Since that day I've always pushed for the (0 == i) version. It basically removes the problem. It feels unnatural, so you pay attention, so you don't make the mistake. There's simply no way to get it wrong here.
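A minimal C++ sketch of the failure mode this style guards against (getStatus() is a made-up helper, and the middle line is intentionally ill-formed):
int i = getStatus();   // getStatus() is hypothetical
if (i = 0) { }         // typo: assigns 0 to i, condition is always false; compiles, perhaps with a warning
if (0 = i) { }         // same typo with the constant first: the compiler rejects it outright
if (0 == i) { }        // the comparison that was actually intended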
It's also a lot easier to catch that someone didn't reverse the if statement in a code review than it is that someone accidentally assigned a value in an if. If the format is part of the coding standards, people look for it. People don't typically debug code during code reviews, and the eye seems to scan over a (i = 0) vs an (i == 0).
I'm also a much bigger fan of the java "Constant String".equals(dynamicString), no null pointer exceptions is a good thing.
You know, I always use the if (i == 0) format of the conditional and my reason for doing this is that I write most of my code in C# (which would flag the other one anyway) and I do a test-first approach to my development and my tests would generally catch this mistake anyhow.
I've worked in shops where they tried to enforce the 0==i format but I found it awkward to write, awkward to remember and it simply ended up being fodder for the code reviewers who were looking for low-hanging fruit.
Actually, the DialogResult example is a place where I WOULD recommend that style. It places the important part of the if() toward the left, where it can be seen. If it's on the right and the MessageBox call has more parameters (which is likely), you might have to scroll right to see it.
OTOH, I never saw much use in the "(0 == i)" style. If you can remember to put the constant first, you can remember to use two equals signs.
I always try to use the first form (0 == i), and it has saved my life a few times!
I think it's just a matter of style. And it does help with accidentally using assignment operator.
I absolutely wouldn't ask the programmer to grow up though.
I prefer (i == 0), but I still sort of make a "rule" for myself to do (0 == i), and then break it every time.
"Eh?", you think.
Well, if I'm making a conscious decision to put an lvalue on the left, then I'm paying enough attention to what I'm typing to notice if I type "=" for "==". I hope. In C/C++ I generally use -Wall for my own code, which generates a warning on gcc for most "=" for "==" errors anyway. I don't recall seeing that warning recently, perhaps because the longer I program the more reflexively paranoid I am about errors I've made before...
if(DialogResult.OK == MessageBox.Show("Message"))
seems misguided to me. The point of the trick is to avoid accidentally assigning to something.
But who is to say whether DialogResult.OK is more, or less likely to evaluate to an assignable type than MessageBox.Show("Message")? In Java a method call can't possibly be assignable, whereas a field might not be final. So if you're worried about typing = for ==, it should actually be the other way around in Java for this example. In C++ either, neither or both could be assignable.
(0==i) is only useful because you know for absolute certain that a numeric literal is never assignable, whereas i just might be.
When both sides of your comparison are assignable you can't protect yourself from accidental assignment in this way, and that goes for when you don't know which is assignable without looking it up. There's no magic trick that says "if you put them the counter-intuitive way around, you'll be safe". Although I suppose it draws attention to the issue, in the same way as my "always break the rule" rule.
I use (i == 0) for the simple reason that it reads better. It makes a very smooth flow in my head. When you read through the code back to yourself for debugging or other purposes, it simply flows like reading a book and just makes more sense.
My company has just dropped the requirement to do if (0 == i) from its coding standards. I can see how it makes a lot of sense but in practice it just seems backwards. It is a bit of a shame that by default a C compiler probably won't give you a warning about if (i = 0).
Third option - disallow assignment inside conditionals entirely:
In high-reliability situations, you are not allowed (without a good explanation in the comments preceding) to assign a variable in a conditional statement - it eliminates this question entirely, because you either turn it off at the compiler or with LINT, and only under very controlled situations are you allowed to use it.
Keep in mind that generally the same code is generated whether the assignment occurs inside the conditional or outside - it's simply a shortcut to reduce the number of lines of code. There are always exceptions to the rule, but it never has to be in the conditional - you can always write your way out of that if you need to.
So another option is merely to disallow such statements, and where needed use the comments to turn off the LINT checking for this common error.
-Adam
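For instance, the classic character-echo loop can be written either way; a small C++ sketch of Adam's point:
#include <cstdio>
int main() {
    int c = std::getchar();        // assignment kept out of the conditional, as such a rule requires
    while (c != EOF) {
        std::putchar(c);
        c = std::getchar();
    }
    return 0;
}
// The compact form it replaces, with the assignment inside the conditional:
//     while ((c = std::getchar()) != EOF) std::putchar(c);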
I'd say that (i == 0) would sound more natural if you attempted to phrase a line in plain (and ambiguous) english. It really depends on the coding style of the programmer or the standards they are required to adhere to though.
Personally I don't like (1) and always do (2); however, that reverses for readability when dealing with dialog boxes and other methods that can be extra long. It doesn't look bad as it is now, but if you expand the MessageBox call to its full length, you have to scroll all the way right to figure out what kind of result you are returning.
So while I agree with your assertions of the simplistic comparison of value types, I don't necessarily think it should be the rule for things like message boxes.
Both are equal, though I would prefer the 0 == i variant slightly.
When comparing strings, the dynamic-value-first order getDynamicString().equals("MyString") is the more error-prone one, since getDynamicString() might return null; "MyString".equals(getDynamicString()) avoids that.
To be more consistent, write 0 == i.
Well, it depends on the language and the compiler in question. Context is everything.
In Java and C#, the "assignment instead of comparison" typo ends up with invalid code apart from the very rare situation where you're comparing two Boolean values.
I can understand why one might want to use the "safe" form in C/C++ - but frankly, most C/C++ compilers will warn you if you make the typo anyway. If you're using a compiler which doesn't, you should ask yourself why :)
The second form (variable then constant) is more readable in my view - so anywhere that it's definitely not going to cause a problem, I use it.
Rule 0 for all coding standards should be "write code that can be read easily by another human." For that reason I go with (most-rapidly-changing value) test-against (less-rapidly-changing-value, or constant), i.e "i == 0" in this case.
Even where this technique is useful, the rule should be "avoid putting an lvalue on the left of the comparison", rather than the "always put any constant on the left", which is how it's usually interpreted - for example, there is nothing to be gained from writing
if (DateClass.SATURDAY == dateObject.getDayOfWeek())
if getDayOfWeek() is returning a constant (and therefore not an lvalue) anyway!
I'm lucky (in this respect, at least) in that these days I'm mostly coding in Java and, as has been mentioned, if (someInt = 0) won't compile.
The caveat about comparing two booleans is a bit of a red-herring, as most of the time you're either comparing two boolean variables (in which case swapping them round doesn't help) or testing whether a flag is set, and woe-betide-you if I catch you comparing anything explicitly with true or false in your conditionals! Grrrr!
In C, yes, but you should already have turned on all warnings and be compiling warning-free, and many C compilers will help you avoid the problem.
I rarely see much benefit from a readability POV.
Code readability is one of the most important things for code larger than a few hundred lines, and definitely i == 0 reads much easier than the reverse
Maybe not an answer to your question.
I try to always use === (checking for identical values) instead of plain equality. This way no type conversion is done, and it forces the programmer to make sure the right type is passed.
You are right that placing the important component first helps readability, as readers tend to browse the left column primarily, and putting important information there helps ensure it will be noticed.
However, never talk down to a co-worker, and implying that would be your action even in jest will not get you high marks here.
I always go with the second method. In C#, writing
if (i = 0) {
}
results in a compiler error (cannot convert int to bool) anyway, so the possibility of making that mistake is not really an issue. If you test a bool, the compiler still issues a warning, and you shouldn't be comparing a bool to true or false in the first place. Now you know why.
I personally prefer the use of the variable-operand-value format, in part because I have been using it so long that it feels "natural" and in part because it seems to be the predominant convention. There are some languages that make use of assignment statements such as the following:
:1 -> x
So in the context of those languages it can become quite confusing to see the following even if it is valid:
:if(1=x)
So that is something to consider as well. I do agree with the message box response being one scenario where using a value-operand-variable format works better from a readability standpoint, but if you are looking for consistency then you should forgo its use.
This is one of my biggest pet peeves. There is no reason to decrease code readability (if (0 == i), what? how can the value of 0 change?) to catch something that any C compiler written in the last twenty years can catch automatically.
Yes, I know, most C and C++ compilers don't turn this on by default. Look up the proper switch to turn it on. There is no excuse for not knowing your tools.
It really gets on my nerves when I see it creeping into other languages (C#,Python) which would normally flag it anyway!
I believe the only factor to ever force one over the other is if the tool chain does not provide warnings to catch assignments in expressions. My preference as a developer is irrelevant. An expression is better served by presenting business logic clearly. If (0 == i) is more suitable than (i == 0) I will choose it. If not I will choose the other.
Many constants in expressions are represented by symbolic names. Some style guides also limit the parts of speech that can be used for identifiers. I use these as a guide to help shape how the expression reads. If the resulting expression reads loosely like pseudocode then I'm usually satisfied. I just let the expression express itself, and if I'm wrong it'll usually get caught in a peer review.
We might go on and on about how good our IDEs have gotten, but I'm still shocked by the number of people who turn the warning levels on their IDE down.
Hence, for me, it's always better to ask people to use (0 == i), as you never know which programmer is doing what.
It's better to be "safe than sorry"
if(DialogResult.OK == MessageBox.Show("Message")) ...
I would always recommend writing the comparison this way. If the result of MessageBox.Show("Message") can possibly be null, then you risk a NPE/NRE if the comparison is the other way around.
Mathematical and logical operations aren't reflexive in a world that includes NULLs.

What are the historical reasons C languages have pre-increments and post-increments?

(Note: I am not asking about the definitions of pre-increment vs. post-increment, or how they are used in C/C++. Therefore, I do not think this is a duplicate question.)
Developers of C (Dennis Ritchie et al) created increment and decrement operators for very good reasons. What I don't understand is why they decided to create the distinction of pre- vs post- increments/decrements?
My sense is that these operators were far more useful when C was being developed than today. Most C/C++ programmers use one or the other, and programmers from other languages find the distinction today bizarre and confusing (NB: this is based solely on anecdotal evidence).
Why did they decide to do this, and what has changed in computation that this distinction isn't so useful today?
For the record, the difference between the two can be seen in C++ code:
int x = 3;
cout << "x = 3; x++ == " << x++ << endl;
cout << "++x == " << ++x << endl;
cout << "x-- == " << x-- << endl;
cout << "--x == " << --x << endl;
will give as an output
x++ == 3
++x == 5
x-- == 5
--x == 3
Incrementing and decrementing by 1 were widely supported in hardware at the time: a single opcode, and fast. That is because "incrementing by 1" and "decrementing by 1" are very common operations in code (true to this day).
The prefix and postfix forms only affect the place where this opcode gets inserted in the generated machine code. Conceptually, this mimics "increase/decrease before or after using the result". In a single statement
i++;
the 'before/after' concept is not used (and so it does the same as ++i;), but in
printf ("%d", ++i);
it is. That distinction is as important nowadays as it was when the language C was designed (this particular idiom was copied from its precursor named "B").
From The Development of the C Language
This feature [PDP-7's "`auto-increment' memory cells"] probably suggested such operators to Thompson [Ken Thompson, who designed "B", the precursor of C]; the generalization to make them both prefix and postfix was his own. Indeed, the auto-increment cells were not used directly in implementation of the operators, and a stronger motivation for the innovation was probably his observation that the translation of ++x was smaller than that of x=x+1.
Thanks to @dyp for mentioning this document.
When you count down from n, it is very important whether it is pre-decrement or post-decrement:
#include <stdio.h>
void foopre(int n) {
    printf("pre");
    while (--n) printf(" %d", n);
    puts("");
}
void foopost(int n) {
    printf("post");
    while (n--) printf(" %d", n);
    puts("");
}
int main(void) {
    foopre(5);
    foopost(5);
    return 0;
}
See the code running at ideone.
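For anyone who can't follow that link, running the program should print something like:
pre 4 3 2 1
post 4 3 2 1 0
because --n stops the loop as soon as n reaches 0, while n-- tests the old value first and therefore lets the loop body see 0 as well.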
To get an answer that goes beyond speculation, you would most probably have to ask Dennis Ritchie et al. personally.
Adding to the answer already given, I'd like to offer two possible reasons I came up with:
laziness / conserving space:
you might be able to save a few keystrokes / bytes in the input file by using the appropriate version in constructs like while(--i) vs. while(i--) (take a look at pmg's answer to see why the two differ, if you didn't spot it on first reading).
aesthetics:
for reasons of symmetry, having just one version, either pre- or post-increment/decrement, might feel like something is missing.
EDIT: added "sparing a few bytes in the input file" to the speculation section, which now provides a pretty nice "historic" reason as well.
Anyway, the main point in putting together the list was to give examples of possible explanations that are not too historic, but still hold today.
Of course I am not sure, but I think asking for a "historic" reason other than personal taste starts from a presumption that is not necessarily true.
For C
Let's look at Kernighan & Ritchie's original justification (original K&R, pages 42 and 43):
"The unusual aspect is that ++ and -- may be used either as prefix or as postfix. (...) In the context where no value is wanted (...) choose prefix or postfix according to taste. But there are situations where one or the other is specifically called for."
The text continues with some examples that use increments within index, with the explicit goal of writing "more compact" code. So the reason behind these operators is convenience of more compact code.
The three examples given (squeeze(), getline() and strcat() ) use only postfix within expressions using indexing. The authors compare the code with a longer version that doesn't use embedded increments. This confirms that focus is on compactness.
K&R highlight, on page 102, the use of these operators in combination with pointer dereferencing (e.g. *--p and *p--). No further example is given, but again, they make clear that the benefit is compactness.
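To see the kind of compactness they meant, compare a string-copy loop written with and without the embedded post-increments (my own sketch, not the book's exact example; dst and src are assumed to point at a large-enough buffer and a NUL-terminated string):
// with embedded post-increments: copy src into dst, including the terminating '\0'
while ((*dst++ = *src++) != '\0')
    ;
// the longer equivalent without them
while (*src != '\0') {
    *dst = *src;
    dst = dst + 1;
    src = src + 1;
}
*dst = *src;   // copy the final '\0' as well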
For C++
Bjarne Stroustrup wanted to have C compatibility, so C++ inherited prefix and postfix increment and decrement.
But there's more to it: in his book "The Design and Evolution of C++", Stroustrup explains that initially he planned to have only one overload covering both postfix and prefix in user-defined classes:
"Several people, notably Brian Kernighan, pointed out that this restriction was unnatural from a C perspective and prevented users from defining a class that could be used as replacement for an ordinary pointer."
This led him to the current signature difference (the postfix form takes a dummy int parameter) that differentiates prefix from postfix.
By the way, without these operators C++ would not be C++ but C_plus_1 ;-)
Consider the following loop:
for (unsigned int i = 5; i-- > 0; )
{
    // do something with i,
    // e.g. call a function that _requires_ an unsigned parameter.
}
You can't replicate this loop with a pre-decrement operation without moving the decrement outside of the for(...) construct, and it's just better to have your initialization, iteration and check all in one place.
A much larger issue is this: one can overload the increment operators (all four) for a class. But then the operators are critically different: the postfix operators usually result in a temporary copy of the class instance being made, whereas the prefix operators do not. That is a huge difference in semantics.
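The conventional C++ signatures make that difference visible; a sketch of the canonical pattern (the Counter class is invented purely for illustration):
class Counter {                    // illustrative only
    int value_ = 0;
public:
    Counter& operator++() {        // prefix: modify in place, return *this by reference
        ++value_;
        return *this;
    }
    Counter operator++(int) {      // postfix: the dummy int parameter selects this overload
        Counter old = *this;       // copy taken only so the old value can be returned
        ++value_;
        return old;                // returned by value
    }
};
For a type whose whole state is one int the copy is trivial; for an iterator that drags a stack or other bulky state around, it is not.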
The PDP-11 had a single instruction that corresponded to *p++, and another for *--p (or possibly the other way round).

Why to avoid postfix operator in C++? [duplicate]

Possible Duplicate:
Prefix/Postfix increment operators
I heard a professor say, "Avoid the postfix operator where the context allows you to choose prefix." I searched but didn't find related posts on Stack Overflow explaining this.
Why prefer the prefix operator++ over the postfix operator++ when we have the ability to choose either one?
The prefix operator++ does a single operation -- increment the value.
The postfix operator++ does three operations -- save the current value, increment the value, return the old value.
The prefix version is conceptually simpler, and is always (up to bizarre operator overloads) at least as efficient as the postfix version.
I'm pretty sure your professor is talking about the old speed difference between the prefix and postfix ++ operator. I'm also pretty sure it no longer matters which you choose as modern compilers usually are smart enough to recognize if it can be optimized out.
Also, depending on your code you might be required to use one or the other for correctness.
The prefix operator is potentially faster than the postfix operator, depending on the type on which it's operating. It should never be slower.
For most intrinsic types, the speed should be identical. However, many custom iterators need to make an extra copy of some state object in order to properly implement the postfix operator.
In order to implement the postfix operator, a copy of the original object has to be taken because that's what gets returned back to you.
For the prefix operator, you get the new object back, saving the copy overhead.
Some folk (rightly) will tell you that the compiler will optimise out unintentional postfix copies; for example in code like for (int n = 0; n < large; n++)
I'd always prefer to see ++n.
In fact, I'd rather the language be called ++C, not C++!
There's no real reason, except for stylistic issues. One noted specialist recommended it once, and everyone blindly followed, although the measurements I did indicated that it made no difference.
If you're starting on a greenfield project, I'd use prefix, but the motivation is just to avoid stupid discussions about the issue. If I'm working on existing code, I'll continue to use whatever was most common, because in real code, it makes absolutely no difference, despite claims to the contrary.

Where does the k prefix for constants come from?

It's a pretty common practice that constants are prefixed with k (e.g. k_pi). But what does the k mean?
Is it simply that c already meant char?
It's a historical oddity, still common practice among teams who like to blindly apply coding standards that they don't understand.
Long ago, most commercial programming languages were weakly typed; automatic type checking, which we take for granted now, was still mostly an academic topic. This meant that it was easy to write code with category errors; it would compile and run, but go wrong in ways that were hard to diagnose. To reduce these errors, a chap called Simonyi suggested that you begin each variable name with a tag to indicate its (conceptual) type, making it easier to spot when they were misused. Since he was Hungarian, the practice became known as "Hungarian notation".
Some time later, as typed languages (particularly C) became more popular, some idiots heard that this was a good idea, but didn't understand its purpose. They proposed adding redundant tags to each variable, to indicate its declared type. The only use for them is to make it easier to check the type of a variable; unless someone has changed the type and forgotten to update the tag, in which case they are actively harmful.
The second (useless) form was easier to describe and enforce, so it was blindly adopted by many, many teams; decades later, you still see it used, and even advocated, from time to time.
"c" was the tag for type "char", so it couldn't also be used for "const"; so "k" was chosen, since that's the first letter of "konstant" in German, and is widely used for constants in mathematics.
I haven't seen it that much, but maybe it comes from certain languages' (the germanic ones in particular) spelling of the word constant - konstant.
Don't use Hungarian Notation. If you want constants to stand out, make them all caps.
As a side note: there are a lot of things in the Google Coding Standards that are poor practice (in terms of code readability). That is what happens when you design a coding standard by committee.
It means the value is k-onstant.
I think mathematical convention was the precedent. k is used in maths all the time as just some constant.
K stands for konstant, a wordplay on constant. It relates to Coding Styles.
It's just a matter of preference, some people and projects use them which means they also embrace the Hungarian notation, many don't. That's not that important.
If you're unsure what a prefix or style might mean, always check if the project has a coding style reference and read that.
Actually, whenever I define constants in TypeScript, I do something like this -
const NODE_ENV = 'production';
But recently, I saw that the k prefix is being used in the Flutter SDK. It makes sense to me to keep using the k prefix because it helps your editor/IDE when searching for constants in your codebase.
It's a convention, probably from math. But there are other suggestions for constants too; for example, Kernighan and Ritchie in their book "The C Programming Language" suggest writing constants' names in capital letters (e.g. #define MAX 55).
I think it means coefficient (as k often means in math).

Why/When to use (!!p) instead of (p != NULL)

In the following code, what is the benefit of using (!!p) instead of (p != NULL)?
AClass *p = getInstanceOfAClass();
if( !!p )
// do something
else
// do something without having valid pointer
It is pretty much the same, although I consider the !!p to be bad style, and usually indicates a coder trying to be clever.
That's a matter of style, in fact they are equivalent. See this very similar question for discussion.
IMO comparing against null pointer is clearer.
I think GMan's original comment should be the accepted answer:
I wonder what's wrong with just if (p)
The point is: nothing is wrong with it, and this should be the preferred way. First off, !!p is “too clever”; it’s also completely unnecessary and thus bad (notice: we’re talking about pointers in an if statement here, so Anacrolix’ comment, while generally valid, doesn’t apply here!).
The same goes for p != NULL. While this is possible, it’s just not needed. It’s more code, it’s completely redundant code and hence it makes the code worse. The truest thing Jeff Atwood ever said was that “the best code is no code at all.” Avoid redundant syntax. Stick to the minimum (that still conveys the complete meaning; if (p) is complete).
Finally, if (p) is arguably the most idiomatic way to write this in C++. C++ bends over backwards to get this same behaviour for other types in the language (e.g. data streams), at the cost of some very weird quirks. The next version of the standard even introduces a new syntax to achieve this behaviour in user-defined types.
For pointers, we get the same for free. So use it.
/EDIT: About clarity: sharptooth writes that
IMO comparing against null pointer is clearer.
I claim that this is objectively wrong: if (p) is clearer. There is no possible way that this statement could mean anything else, neither in this context nor in any other, in C++.
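For reference, the "new syntax" alluded to above is presumably C++11's explicit conversion operator, which gives user-defined types the same if (x) behaviour that pointers get for free; a small sketch (the FileHandle class is invented):
#include <cstdio>
class FileHandle {                 // illustrative only
    int fd_ = -1;
public:
    explicit operator bool() const { return fd_ != -1; }   // lets "if (handle)" work
};
int main() {
    FileHandle handle;
    if (handle) std::puts("open"); // contextual conversion to bool: allowed
    // bool ok = handle;           // would not compile: the conversion is explicit
    return 0;
}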
As far as I can see, it's just a shorter way to convert it into a boolean value. It applies the ! twice, though, whereas p != NULL does one comparison. So I guess the benefit is just shorter code, albeit more cryptic if you don't know what !!p is supposed to mean.
They are the same, but I recommend to use
NULL != p
It is more readable.
There is no difference in the given example.
However the assumption that this applies to all cases is incorrect. a = not not b is not the same as a = b, as far as integer types are concerned.
In C, 0 is false. Anything but 0 is true. But not 0 is 1, and nothing else. In C++, true casts to 1 as an integer, not only for backward compatibility with C, but because 1 is not 0, and 1 is the most common value used to denote true in C bool types, including the official C bool type, and BOOL used in Win32.
While for the example code given, !!p is unnecessary because the result is cast to a bool for evaluation of the if condition, that doesn't rule out the use of !! for purposes of casting booleans to expected integer values. Personally in this example, to maximize the probability that type changes and semantics are clear, I would use NULL != p or p != NULL to make it absolutely clear what is meant.
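A small sketch of the distinction being drawn, showing both the if-condition case (where the three spellings agree) and the assign-to-an-int case (where !! actually does something):
#include <cstdio>
int main() {
    int* p = nullptr;
    if (p)          std::puts("non-null");   // these three conditions
    if (!!p)        std::puts("non-null");   // behave identically
    if (p != NULL)  std::puts("non-null");   // for a pointer
    int flags = 42;
    int normalized = !!flags;                    // any non-zero value collapses to exactly 1
    std::printf("%d %d\n", flags, normalized);   // prints: 42 1
    return 0;
}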
This technique is known as the double-bang idiom, and this guy provides some good justifications.
Do !!NOT use double negation. A simple argument is that since C++ is a limited English subset, and English just does not have double negation, English speakers will have a lot of difficulty parsing what is going on.