What's the difference between i++ and ++i in a for loop? [duplicate] - c++

Perhaps it doesn't matter to the compiler once it optimizes, but in C/C++, I see most people make a for loop in the form of:
for (i = 0; i < arr.length; i++)
where the incrementing is done with the post fix ++. I get the difference between the two forms. i++ returns the current value of i, but then adds 1 to i on the quiet. ++i first adds 1 to i, and returns the new value (being 1 more than i was).
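Concretely, a minimal sketch of those semantics (values are illustrative):

#include <cstdio>

int main() {
    int i = 5;
    int a = i++;   // postfix: a gets the old value (5), then i becomes 6
    int b = ++i;   // prefix: i becomes 7 first, then b gets 7
    std::printf("i=%d a=%d b=%d\n", i, a, b);   // prints: i=7 a=5 b=7
    return 0;
}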
I would think that i++ takes a little more work, since a previous value needs to be stored in addition to a next value: Push *(&i) to stack (or load to register); increment *(&i). Versus ++i: Increment *(&i); then use *(&i) as needed.
(I get that the "Increment *(&i)" operation may involve a register load, depending on CPU design. In which case, i++ would need either another register or a stack push.)
Anyway, at what point, and why, did i++ become more fashionable?
I'm inclined to believe azheglov: It's a pedagogic thing, and since most of us do C/C++ on a Windows or *nix system where the compilers are of high quality, nobody gets hurt.
If you're using a low-quality compiler or an interpreted environment, you may need to be sensitive to this. Certainly, if you're doing advanced C++ or device driver or embedded work, hopefully you're well seasoned enough for this to be not a big deal at all. (Do dogs have Buddha-nature? Who really needs to know?)

It doesn't matter which you use. On some extremely obsolete machines, and in certain instances with C++, ++i is more efficient, but modern compilers don't compute the old value if it isn't used. As to when it became popular to post-increment in for loops, my copy of K&R 2nd edition uses i++ on page 65 (the first for loop I found while flipping through).

For some reason, i++ became more idiomatic in C, even though it creates a needless copy. (I thought that was through K&R, but I see this debated in other answers.) But I don't think there's a performance difference in C, where it's only used on built-ins, for which the compiler can optimize away the copy operation.
It does make a difference in C++, however, where i might be a user-defined type for which operator++() is overloaded. The compiler might not be able to assert that the copy operation has no visible side-effects and might thus not be able to eliminate it.
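A sketch of the kind of user-defined type the answer means (the class is hypothetical):

// A hypothetical counter class; the postfix form must materialize a copy
// that the compiler may not be able to prove is free of visible side effects.
struct Counter {
    int value = 0;
    Counter& operator++() { ++value; return *this; }   // prefix: no copy
    Counter operator++(int) {                          // postfix: copies
        Counter old = *this;
        ++value;
        return old;
    }
};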

As for the reason why, here is what K&R had to say on the subject:
Brian Kernighan
you'll have to ask dennis (and it might be in the HOPL paper). i have a
dim memory that it was related to the post-increment operation in the
pdp-11, though beyond that i don't know, so don't quote me.
in c++ the preferred style for iterators is actually ++i for some subtle
implementation reason.
Dennis Ritchie
No particular reason, it just became fashionable. The code produced
is identical on the PDP-11, just an inc instruction, no autoincrement.
HOPL Paper
Thompson went a step further by inventing the ++ and -- operators, which increment or decrement; their prefix or postfix position determines whether the alteration occurs before or after noting the value of the operand. They were not in the earliest versions of B, but appeared along the way. People often guess that they were created to use the auto-increment and auto-decrement address modes provided by the DEC PDP-11 on which C and Unix first became popular. This is historically impossible, since there was no PDP-11 when B was developed. The PDP-7, however, did have a few ‘auto-increment’ memory cells, with the property that an indirect memory reference through them incremented the cell. This feature probably suggested such operators to Thompson; the generalization to make them both prefix and postfix was his own. Indeed, the auto-increment cells were not used directly in implementation of the operators, and a stronger motivation for the innovation was probably his observation that the translation of ++x was smaller than that of x=x+1.

For integer types the two forms should be equivalent when you don't use the value of the expression. This is no longer true in the C++ world with more complicated types, but the postfix form is preserved in the language's very name.
I suspect that "i++" became more popular in the early days because that's the style used in the original K&R "The C Programming Language" book. You'd have to ask them why they chose that variant.

Because as soon as you start using "++i", people will be confused and curious. They will halt their everyday work and start googling for explanations. 12 minutes later they will enter Stack Overflow and create a question like this. And voila, your employer just spent yet another $10

Going a little further back than K&R, I looked at its predecessor: Kernighan's C tutorial (~1975). Here the first few while examples use ++n, but each and every for loop uses i++. So to answer your question: almost right from the beginning, i++ was the more fashionable form.

My theory (why i++ is more fashionable) is that when people learn C (or C++) they eventually learn to code iterations like this:
while (*p++) {
    ...
}
Note that the postfix form is important here (using the prefix form would create an off-by-one type of bug).
When the time comes to write a for loop where ++i or i++ doesn't really matter, it may feel more natural to use the postfix form.
ADDED: What I wrote above applies to primitive types, really. When coding something with primitive types, you tend to do things quickly and do what comes naturally. That's the important caveat that I need to attach to my theory.
If ++ is an overloaded operator on a C++ class (the possibility Rich K. suggested in the comments) then of course you need to code loops involving such classes with extreme care as opposed to doing simple things that come naturally.
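For example, the classic string-copy idiom that drills in the postfix habit (a sketch; the function name is mine):

// Copies the bytes of src, including the terminating '\0', into dst.
void copy_cstring(char *dst, const char *src) {
    while ((*dst++ = *src++))   // assign the current byte, then advance both
        ;                       // prefix ++ here would skip the first byte
}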

At some level it's idiomatic C code. It's just the way things are usually done. If that's your big performance bottleneck you're likely working on a unique problem.
However, looking at my K&R The C Programming Language, 1st edition, the first instance I find of i in a loop (p. 38) does use ++i rather than i++.

In my opinion it became more fashionable with the creation of C++, as C++ enables you to call ++ on non-trivial objects.
OK, I'll elaborate: if you call i++ and i is a non-trivial object, then storing a copy containing the value of i before the increment will be more expensive than for, say, a pointer or an integer.

I think my predecessors are right regarding the side effects of choosing post-increment over pre-increment.
As for its fashionability, it may be as simple as that you start all three expressions within the for statement the same repetitive way, something the human brain seems to lean towards.

I would add to what other people have told you that the main rule is: be consistent. Pick one, and do not use the other one unless it is a specific case.

If the loop body is long, the old value of i may need to be reloaded into a register so it can be incremented before the jump back to the beginning.
With ++i you don't need that extra reload.

In C, all operators that result in a variable having a new value, besides prefix inc/dec, modify the left-hand variable (i = 2, i += 5, etc.). So in situations where ++i and i++ can be interchanged, many people are more comfortable with i++ because the operator is on the right-hand side, modifying the left-hand variable.
Please tell me if that first sentence is incorrect; I'm not an expert with C.

Related

Code style in C (and C++) loops: postfix and prefix increments

These days I see this very often:
for (int i = 0; i < NUMBER; ++i)
Rather than - as was very much the fashion (?) 20 years ago - this:
for (int i = 0; i < NUMBER; i++)
Is there a reason for this change or is it just fashion?
(There is a similar question about Java here but I don't think the answers get to the heart of the matter)
C and C++
Post-increment involves a temporary that can be avoided by using pre-increment (Post-increment and Pre-increment concept?).
C++
If only integers are involved, the issue is not that critical and the compiler can probably optimize both versions to do essentially the same. However, in C++ basically any object of class type can have a ++ (both post- and pre-) and that can be overloaded to do basically anything. Post-incrementing a big object has a cost, and depending on how the operator is implemented the compiler might not be able to optimize both post- and pre-increment to do essentially the same thing. To be on the safe side it is preferred to use ++i.
20 years ago
Actually the only thing that changed with respect to this issue is that compiler optimizations got much better over time. I didn't check any old compiler, but I would say that 20 years ago missing compiler optimizations were a much stronger reason to use ++i instead of i++. Maybe what you observed is just style of code getting better.
It's mostly fashion, but also a combination of consistency and misunderstanding.
When iterators came along, and more and more people started realising that ++it could be cheaper than it++, for some more-complex-than-a-pointer iterator it, they started getting into the habit of pre-incrementing everywhere, to save what they perceived to be unnecessary copies in the semantics of a post-increment (the original value would have to be stored temporarily for returning after the increment occurs). They'd then do that for integers too, because why not?
In reality, there's nothing to save. Your compiler is perfectly capable of spotting that the evaluated result is never used here, so ++it and it++ in this context have exactly identical semantics (unless there is some non-idiomatic side effect in the definition of the operator). This is particularly the case for a simple integer i.
It doesn't hurt, of course, and it does eliminate the risk that you've missed some weird angle that does result in an unnecessary copy. It's a good practice. It's just that it hasn't come about out of necessity.
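The habit in question, sketched with a standard container whose iterator is a real class:

#include <cstdio>
#include <list>

int main() {
    std::list<int> xs = {1, 2, 3};
    // ++it is the idiomatic form; for this iterator type a decent compiler
    // would almost certainly make it++ equally cheap here, as the answer says
    for (std::list<int>::iterator it = xs.begin(); it != xs.end(); ++it)
        std::printf("%d\n", *it);
    return 0;
}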

Is ++(a = b); faster than a = b + 1;?

Is it faster to use ++(a = b); instead of a = b + 1;?
To my understanding, the first approach consists of the operations:
move the value of b to a
increment a in memory
while the second approach does:
push b and 1 to the stack
call add
pop the result to a register
move the register to a
Does it actually take less cycles? Or does the compiler (gcc for example) do an optimization so it does not make a difference?
edit: TIL that ++(a=b) is not just wrong or illegal but UB, at least in pre-C++11. Nevertheless, I'll discuss this assuming it's either legal or the compiler does what you expect.
Generally speaking, a = b + 1; is faster.
The optimizer will most surely make the same code of both. If not, it is more likely to optimize the second version, because it is a very common thing to write, and optimizers are more likely to recognize common things than weird corner cases.
Why do I say it should be the same after optimization, but the second is faster? Because of the fellow developers. Everyone recognizes a = b + 1; immediately. No one really has to think about it. The other case is more likely to trigger a reaction in the likes of "wtf is he doing there, and why?". Many people will figure out eventually what you did there. Some will not. Some might even introduce bugs because of it. Few people will find out why you did it and nevertheless stumble each time they have to read that line. Everyone will lose time wondering while reading that line. That's why the other is faster.
Caveat: all this is written silently assuming that you are talking of builtin types, like ints or pointers. Your interpretation of what the two do supports that. If we're talking of UDTs, the two lines are not even guaranteed to do the same. It then depends completely on how operator=, operator++ and operator+ (and maybe the conversion from int) are implemented. Nevertheless, if the implementations make you consider writing ++(a=b), they are most likely bad implementations and should be improved rather than hacked around.
tl;dr: if I'd catch you doing ++(a=b) in any codebase I work on, we'd have to have a serious talk ;-)
There is no simple answer to this question. The question has been flagged with C++ so we have no way of knowing what this code is actually doing without knowing the precise type of all the operands. Also, the context within which the code appears will make a difference to the way the optimiser generates code - the compiler could alias the variables and move the increment into instructions further down the program, for example, into effective address calculations for the two variables.
But the real question is, why do you care? As Arne said above, readability is far more important and you've not posted a scenario whereby any difference would have a measurable effect.
Only worry about it if it is actually causing a problem.
With optimizations on, they generate exactly the same code for me so they will perform exactly the same. This shouldn't be a surprise as the effects of both statements are exactly the same.
++(a = b); is undefined behaviour (pre-C++11) because there are two unsequenced modifications to a.
Although the value computation of a in a = b is sequenced before the modification of a due to ++, the side-effect of a = b (storage to a) is unsequenced relative to the side-effect of ++ (again, storage to a). (C++11 tightened the sequencing rules for assignment expressions, which is why the question's edit says pre-C++11.)
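If the intent is simply "a becomes b + 1", unambiguous spellings exist (a sketch):

int main() {
    int a = 0, b = 41;
    a = b + 1;   // idiomatic and well-defined: a == 42
    // or, as two fully sequenced statements:
    a = b;       // the assignment's side effect completes here
    ++a;         // then the increment; same result, no sequencing question
    return a;    // 42 either way
}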

Why worry about 'undefined behavior' in >> of signed type?

My question is related to this one and will contain a few questions.
For me the most obvious solution to the above problem (meaning I would use it in my code) is just this:
uint8_t x = some value;  /* "some value" left unspecified by the asker */
x = (int8_t)x >> 7;      /* arithmetic right shift smears bit 7 across the byte */
Yes, yes I hear you all .... undefined behavior and this is why I've not posted my 'solution'.
I have a feeling (maybe it is only my sick mind) that the term 'undefined behavior' is overused on SO just to justify downvoting someone if a question is tagged C/C++.
So - let's (for a while) put aside C/C++ standards and think about everyday life/programming, real compiler implementations and code they generate for contemporary hardware.
Taking into account the following:
As far as I remember all the hardware I had encountered had distinct instructions for arithmetic and logical shift.
All compilers that I know translate >> into arithmetic shift for signed types and logical shift for unsigned types.
I cannot recall any compiler ever emitting div-like low level instruction when >> was used in c/c++ code (and we are not talking about operator overloading here).
All the hardware I know uses two's complement (U2).
So ... is there anything (any contemporary compiler, hardware) that behaves differently than mentioned above? Put simply, should I ever be worried about a right shift of a signed value not being translated to an arithmetic shift?
My 'solution' compiles to just one low level instruction on many platforms while others require multiple low level instructions. What would you use in your code?
Truth please ;-)
Why worry about 'undefined behavior' in >> of signed type?
Because it doesn't really matter how well defined any particular undefined behaviour is now; the point is that it may break at any point in the future. You're relying on a side-effect that may be optimized (or un-optimized) away at any point for any reason or no reason.
Also, I don't want to have to ask somebody with detailed knowledge of many different compiler's implementations before I use something I shouldn't use in the first place, so I skip it.
Yes, there are compilers which behave differently from what you assume.
In particular, optimization phases within compilers. These take advantage of the known possible values of variables, and will derive those possible values from the absence of UB. A pointer must be non-null if it's been dereferenced, an integer must be non-zero if it's been used as a divisor, and a right-shifted signed value must be non-negative. (Strictly, right-shifting a negative signed value is implementation-defined rather than undefined, so this last derivation is more aggressive than the standard actually licenses; the first two, though, are genuine UB.)
And that works back in time:
if (x < 0) {
    printf("This is dead code\n");  /* an optimizer reasoning from UB could
                                       delete this branch, because the shift
                                       below lets it assume x is non-negative */
}
x >> 3;
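For what it's worth, the questioner's bit-7 smear can be written with only well-defined unsigned arithmetic (a sketch; the function name is mine):

#include <cstdint>

// Yields 0xFF if bit 7 of x is set, 0x00 otherwise; converting -1 to
// uint8_t is well-defined (modulo 2^8), so no signed shift is needed.
uint8_t smear_bit7(uint8_t x) {
    return static_cast<uint8_t>(-static_cast<int>(x >> 7));
}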
What it really comes down to is, are you willing to take the risk?
"The standard doesn't guarantee yada yada" is nice and all, but let's be honest now, the risk isn't big. If you're going to run your code on some crazy platform, you generally know in advance. And if it takes you by surprise, well, that's the risk you took.
Also, the workaround is horrible. If you're not going to need it, it's just polluting your codebase with pointless "function calls instead of right shifts" that will be harder to maintain (and thus carry a cost). And you'll never be able to "paste and forget" code from other places into the project - you'd always have to check the code for the possibility of right shifting negative signed integers.

Programming Myth? Pre vs. post increment and decrement speed [duplicate]

Is there a performance difference between i++ and ++i if the resulting value is not used?
Executive summary: No.
i++ could potentially be slower than ++i, since the old value of i might need to be saved for later use, but in practice all modern compilers will optimize this away.
We can demonstrate this by looking at the code for this function, both with ++i and i++.
$ cat i++.c
extern void g(int i);
void f()
{
int i;
for (i = 0; i < 100; i++)
g(i);
}
The files are the same, except for ++i and i++:
$ diff i++.c ++i.c
6c6
< for (i = 0; i < 100; i++)
---
> for (i = 0; i < 100; ++i)
We'll compile them, and also get the generated assembler:
$ gcc -c i++.c ++i.c
$ gcc -S i++.c ++i.c
And we can see that both the generated object and assembler files are the same.
$ md5 i++.s ++i.s
MD5 (i++.s) = 90f620dda862cd0205cd5db1f2c8c06e
MD5 (++i.s) = 90f620dda862cd0205cd5db1f2c8c06e
$ md5 *.o
MD5 (++i.o) = dd3ef1408d3a9e4287facccec53f7d22
MD5 (i++.o) = dd3ef1408d3a9e4287facccec53f7d22
From Efficiency versus intent by Andrew Koenig:
First, it is far from obvious that ++i is more efficient than i++, at least where integer variables are concerned.
And :
So the question one should be asking is not which of these two operations is faster, it is which of these two operations expresses more accurately what you are trying to accomplish. I submit that if you are not using the value of the expression, there is never a reason to use i++ instead of ++i, because there is never a reason to copy the value of a variable, increment the variable, and then throw the copy away.
So, if the resulting value is not used, I would use ++i. But not because it is more efficient: because it correctly states my intent.
A better answer is that ++i will sometimes be faster but never slower.
Everyone seems to be assuming that i is a regular built-in type such as int. In this case there will be no measurable difference.
However if i is a complex type then you may well find a measurable difference. For i++ you must make a copy of your class before incrementing it. Depending on what's involved in a copy it could indeed be slower, since with ++i you can just return the final value.
Foo Foo::operator++(int)   // postfix: note the dummy int parameter
{
    Foo oldFoo = *this; // copy existing value - could be slow
    // yadda yadda, do increment
    return oldFoo;
}
Another difference is that with ++i you have the option of returning a reference instead of a value. Again, depending on what's involved in making a copy of your object this could be slower.
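For contrast, a sketch of the prefix form for the same illustrative Foo:

Foo& Foo::operator++()   // prefix: no temporary needed
{
    // yadda yadda, do increment
    return *this;        // returns a reference rather than a copy
}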
A real-world example of where this can occur would be the use of iterators. Copying an iterator is unlikely to be a bottleneck in your application, but it's still good practice to get into the habit of using ++i instead of i++ where the outcome is not affected.
Short answer:
There is never any difference between i++ and ++i in terms of speed. A good compiler should not generate different code in the two cases.
Long answer:
What every other answer fails to mention is that the difference between ++i and i++ only makes sense within the expression in which it is found.
In the case of for(i=0; i<n; i++), the i++ is alone in its own expression: there is a sequence point before the i++ and there is one after it. Thus the only machine code generated is "increase i by 1" and it is well-defined how this is sequenced in relation to the rest of the program. So if you would change it to prefix ++, it wouldn't matter in the slightest, you would still just get the machine code "increase i by 1".
The difference between ++i and i++ only matters in expressions such as array[i++] = x; versus array[++i] = x;. Some may argue that the postfix will be slower in such operations because the register where i resides has to be reloaded later. But note that the compiler is free to order your instructions in any way it pleases, as long as it doesn't "break the behavior of the abstract machine" as the C standard calls it.
So while you may assume that array[i++] = x; gets translated to machine code as:
Store value of i in register A.
Store address of array in register B.
Add A and B, store results in A.
At this new address represented by A, store the value of x.
Store value of i in register A // inefficient because extra instruction here, we already did this once.
Increment register A.
Store register A in i.
the compiler might as well produce the code more efficiently, such as:
Store value of i in register A.
Store address of array in register B.
Add A and B, store results in B.
Increment register A.
Store register A in i.
... // rest of the code.
Just because you as a C programmer are trained to think that the postfix ++ happens at the end, the machine code doesn't have to be ordered in that way.
So there is no difference between prefix and postfix ++ in C. Now what you as a C programmer should be wary of is people who inconsistently use prefix in some cases and postfix in other cases, without any rationale why. This suggests that they are uncertain about how C works or that they have incorrect knowledge of the language. This is always a bad sign; it does in turn suggest that they are making other questionable decisions in their program, based on superstition or "religious dogmas".
"Prefix ++ is always faster" is indeed one such false dogma that is common among would-be C programmers.
Taking a leaf from Scott Meyers, More Effective C++, Item 6: Distinguish between prefix and postfix forms of increment and decrement operations.
The prefix version is always preferred over the postfix in regards to objects, especially in regards to iterators.
The reason for this becomes clear if you look at the call pattern of the operators.
// Prefix
Integer& Integer::operator++()
{
    *this += 1;
    return *this;
}

// Postfix
const Integer Integer::operator++(int)
{
    Integer oldValue = *this;
    ++(*this);
    return oldValue;
}
Looking at this example it is easy to see how the prefix operator will always be more efficient than the postfix, because of the need for a temporary object in the postfix form.
This is why when you see examples using iterators they always use the prefix version.
But as you point out, for ints there is effectively no difference because of compiler optimisation that can take place.
Here's an additional observation if you're worried about micro optimisation. Decrementing loops can 'possibly' be more efficient than incrementing loops (depending on instruction set architecture e.g. ARM), given:
for (i = 0; i < 100; i++)
On each loop iteration you will have one instruction each for:
Adding 1 to i.
Compare whether i is less than 100.
A conditional branch if i is less than 100.
Whereas a decrementing loop:
for (i = 100; i != 0; i--)
The loop will have an instruction for each of:
Decrement i, setting the CPU register status flag.
A conditional branch depending on CPU register status (Z==0).
Of course this works only when decrementing to zero!
Remembered from the ARM System Developer's Guide.
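A sketch of the counting-down idiom when the index still matters (array and size are illustrative):

// Processes arr[n-1] down to arr[0]; the i != 0 test lets the compiler
// branch on the flags the decrement already set (on ISAs such as ARM).
void double_all(int *arr, unsigned n) {
    for (unsigned i = n; i != 0; i--)
        arr[i - 1] *= 2;
}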
First of all: the difference between i++ and ++i is negligible in C.
To the details.
1. The well known C++ issue: ++i is faster
In C++, ++i is more efficient iff i is some kind of object with an overloaded increment operator.
Why?
In ++i, the object is first incremented, and can subsequently be passed as a const reference to any other function. This is not possible if the expression is foo(i++) because now the increment needs to be done before foo() is called, but the old value needs to be passed to foo(). Consequently, the compiler is forced to make a copy of i before it executes the increment operator on the original. The additional constructor/destructor calls are the bad part.
As noted above, this does not apply to fundamental types.
2. The little known fact: i++ may be faster
If no constructor/destructor needs to be called, which is always the case in C, ++i and i++ should be equally fast, right? No. They are virtually equally fast, but there may be small differences, which most other answerers got the wrong way around.
How can i++ be faster?
The point is data dependencies. If the value needs to be loaded from memory, two subsequent operations need to be done with it: incrementing it, and using it. With ++i, the incrementing needs to be done before the value can be used. With i++, the use does not depend on the increment, and the CPU may perform the use operation in parallel with the increment operation. The difference is at most one CPU cycle, so it is really negligible, but it is there. And it is the other way round than many would expect.
Please don't let the question of "which one is faster" be the deciding factor of which to use. Chances are you're never going to care that much, and besides, programmer reading time is far more expensive than machine time.
Use whichever makes most sense to the human reading the code.
#Mark
Even though the compiler is allowed to optimize away the (stack-based) temporary copy of the variable, and gcc (in recent versions) is doing so, that doesn't mean all compilers will always do so.
I just tested it with the compilers we use in our current project and 3 out of 4 do not optimize it.
Never assume the compiler gets it right, especially when the possibly faster, but never slower, code is just as easy to read.
If you don't have a really stupid implementation of one of the operators in your code:
Always prefer ++i over i++.
I have been reading through most of the answers here and many of the comments, and I didn't see any reference to the one instance that I could think of where i++ is more efficient than ++i (and perhaps surprisingly --i was more efficient than i--). That is for C compilers for the DEC PDP-11!
The PDP-11 had addressing modes for post-increment and pre-decrement of a register, but not the other way around. These modes allowed any "general-purpose" register to be used as a stack pointer. So if you used something like *(i++) it could be compiled into a single assembly instruction, while *(++i) could not.
This is obviously a very esoteric example, but it does provide the exception where post-increment is more efficient (or I should say was, since there isn't much demand for PDP-11 C code these days).
In C, the compiler can generally optimize them to be the same if the result is unused.
However, in C++ if using other types that provide their own ++ operators, the prefix version is likely to be faster than the postfix version. So, if you don't need the postfix semantics, it is better to use the prefix operator.
I can think of a situation where postfix is slower than prefix increment:
Imagine a processor where register A is used as the accumulator and it's the only register used in many instructions (some small microcontrollers are actually like this).
Now imagine the following program and their translation into a hypothetical assembly:
Prefix increment:
a = ++b + c;
; increment b
LD A, [&b]
INC A
ST A, [&b]
; add with c
ADD A, [&c]
; store in a
ST A, [&a]
Postfix increment:
a = b++ + c;
; load b
LD A, [&b]
; add with c
ADD A, [&c]
; store in a
ST A, [&a]
; increment b
LD A, [&b]
INC A
ST A, [&b]
Note how the value of b was forced to be reloaded. With prefix increment, the compiler can just increment the value and go ahead with using it, possibly avoiding a reload since the desired value is already in the register after the increment. However, with postfix increment, the compiler has to deal with two values, one the old and one the incremented value, which as I show above results in one more memory access.
Of course, if the value of the increment is not used, such as a single i++; statement, the compiler can (and does) simply generate an increment instruction regardless of postfix or prefix usage.
As a side note, I'd like to mention that an expression in which there is a b++ cannot simply be converted to one with ++b without any additional effort (for example, by adding a - 1). So comparing the two when they are part of some expression is not really valid. Often, where you use b++ inside an expression you cannot use ++b, so even if ++b were potentially more efficient, it would simply be wrong. The exception is of course if the expression is begging for it (for example a = b++ + 1;, which can be changed to a = ++b;, as spelled out below).
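The rewrite mentioned at the end, spelled out (a sketch):

int main() {
    int a, b = 10;
    a = b++ + 1;   // uses the old b: a == 11, then b == 11
    b = 10;        // reset for comparison
    a = ++b;       // same end state: b == 11, a == 11
    return a;      // 11 either way
}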
I always prefer pre-increment, however ...
I wanted to point out that even in the case of calling the operator++ function, the compiler will be able to optimize away the temporary if the function gets inlined. Since the operator++ is usually short and often implemented in the header, it is likely to get inlined.
So, for practical purposes, there likely isn't much of a difference between the performance of the two forms. However, I always prefer pre-increment since it seems better to directly express what I'm trying to say, rather than relying on the optimizer to figure it out.
Also, giving the optimizer less to do likely means the compiler runs faster.
My C is a little rusty, so I apologize in advance. Speedwise, I can understand the results. But, I am confused as to how both files came out to the same MD5 hash. Maybe a for loop runs the same, but wouldn't the following 2 lines of code generate different assembly?
myArray[i++] = "hello";
vs
myArray[++i] = "hello";
The first one writes the value to the array, then increments i. The second increments i then writes to the array. I'm no assembly expert, but I just don't see how the same executable would be generated by these 2 different lines of code.
Just my two cents.

rate ++a, a++, a=a+1 and a+=1 in terms of execution efficiency in C. Assume gcc to be the compiler [duplicate]

Possible Duplicate:
Is there a performance difference between i++ and ++i in C++?
Please rate the following in terms of execution time in C.
In some interviews I was asked which of these variations I should use, and why.
a++
++a
a=a+1
a+=1
Here is what g++ -S produces:
void irrelevant_low_level_worries()
{
    int a = 0;    // movl $0, -4(%ebp)
    a++;          // incl -4(%ebp)
    ++a;          // incl -4(%ebp)
    a = a + 1;    // incl -4(%ebp)
    a += 1;       // incl -4(%ebp)
}
So even without any optimizer switches, all four statements compile to the exact same machine code.
You can't rate the execution time in C, because it's not the C code that is executed. You have to profile the executable code compiled with a specific compiler running on a specific computer to get a rating.
Also, rating a single operation doesn't give you something that you can really use. Todays processors execute several instructions in parallel, so the efficiency of an operation relies very much on how well it can be paired with the instructions in the surrounding code.
So, if you really need to use the one that has the best performance, you have to profile the code. Otherwise (which is about 98% of the time) you should use the one that is most readable and best conveys what the code is doing.
The circumstances where these kinds of things actually matter are rare and far between. Most of the time, it doesn't matter at all. In fact I'm willing to bet that this is the case for you.
What is true for one language/compiler/architecture may not be true for others. And really, the fact is irrelevant in the bigger picture anyway. Knowing these things does not make you a better programmer.
You should study algorithms, data structures, asymptotic analysis, clean and readable coding style, programming paradigms, etc. Those skills are a lot more important in producing performant and manageable code than knowing these kinds of low-level details.
Do not optimize prematurely, but also, do not micro-optimize. Look for the big picture optimizations.
This depends on the type of a as well as on the context of execution. If a is of a primitive type and if all four statements have the same identical effect then these should all be equivalent and identical in terms of efficiency. That is, the compiler should be smart enough to translate them into the same optimized machine code. Granted, that is not a requirement, but if it's not the case with your compiler then that is a good sign to start looking for a better compiler.
For most compilers it should compile to the same ASM code.
Same.
For more details see http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
I can't see why there should be any difference in execution time, but feel free to prove me wrong.
a++
and
++a
are not the same however, but this is not related to efficiency.
When it comes to performance of individual lines, context is always important, and guessing is not a good idea. Testing and measuring is better.
In an interview, I would go with two answers:
At first glance, the generated code should be very similar, especially if a is an integer.
If execution time was definitely a known problem - you have to measure it using some kind of profiler.
Well, you could argue that a++ is short and to the point. It can only increment a by one, but the notation is very well understood. a=a+1 is a little more verbose (not a big deal, unless you have variablesWithGratuitouslyLongNames), but some might argue it's more "flexible" because you can replace the 1 or either of the a's to change the expression. a+=1 is maybe not as flexible as the other two but is a little more clear, in the sense that you can change the increment amount. ++a is different from a++ and some would argue against it because it's not always clear to people who don't use it often.
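That "flexibility" point, in code (a sketch):

int main() {
    int a = 0, step = 3;
    a++;            // can only ever add exactly 1; a == 1
    a += step;      // += generalizes to any increment; a == 4
    a = a + step;   // the most general form; a == 7
    return a;
}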
In terms of efficiency, I think most modern compilers will produce the same code for all of these but I could be mistaken. Really, you'd have to run your code with all variations and measure which performs best.
(assuming that a is an integer)
It depends on the context, and on whether we are in C or C++. In C the code you posted (except for a-- :-) will cause a modern C compiler to produce exactly the same code. But there is a very high chance that the expected answer is that a++ is the fastest one and a=a+1 the slowest, since ancient compilers relied on the user to perform such optimizations.
In C++ it depends on the type of a. When a is a numeric type, it acts the same way as in C, which means a++, a+=1 and a=a+1 generate the same code. When a is an object, it depends on whether any operator (++, + and =) is overloaded, since then the overloaded operator of the a object is called.
Also when you work in a field with very special compilers (like microcontrollers or embedded systems) these compilers can behave very differently on each of these input variations.