Let's say that you want to make a for loop in C++, but in addition to the usual loop condition you also know when, or if, the loop should stop early. Note that such a condition does not necessarily have to be hard-coded, nor does the loop have to have such a terminating value at all. This is just an example, perhaps useful for random loops, or for ones whose contents are determined at run time. Let's keep this example simple and focus on a solution that could be applied to any such related task in programming.
I would like to compare these two solutions. The first one uses a break statement and the second one uses a combined condition. Which one is the "best" or "right" way to accomplish the task, in terms of efficiency and reliability? Which one is less prone to bugs? Feel free to suggest another solution.
const int MAX_SIZE = 10;
int myArray[MAX_SIZE] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
// two ways to stop a loop
for (int x = 0; x < MAX_SIZE; x++) {
    if (myArray[x] == 7) {
        break;
    }
    cout << myArray[x] << " "; // fixed
}

for (int x = 0; (x < MAX_SIZE) && (myArray[x] != 7); x++) {
    cout << myArray[x] << " ";
}
First, the loops behave slightly differently - the first one (as originally posted, before the fix) still prints the number 7 while the other does not.
If you swapped the lines, it wouldn't really make a difference to the compiler; both solutions would compile to pretty much the same native code.
About readability (and proneness to errors): it depends. For such a simple rule it's probably better and more readable to make it part of the condition. But when it gets more complicated, it may be worth introducing some temporary variable, and then the break variant is easier to read. It's actually quite common for more complex algorithms to use break inside the loop, often even infinite loops with a break inside.
As others have also noted, the two methods are not equivalent, since one produces more output than the other. However, I guess it's just a typo, and you could definitely have identified that by running the code.
For comparison purposes, the first one is easier to generalize in the event you need more stopping conditions (it will be more readable than having a very long if) or need to do something else before stopping the loop. For instance, you might want to check first whether myArray[x] == 7 and then output something before breaking the loop, perhaps the reason for doing so. Therefore, the first method is better when your code is complex.
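For example, a minimal sketch of that idea (the printed message is just an illustrative assumption) could look like this:

for (int x = 0; x < MAX_SIZE; x++) {
    if (myArray[x] == 7) {
        cout << "stopping: found 7 at index " << x << endl; // report the reason, then stop
        break;
    }
    cout << myArray[x] << " ";
}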
Performance-wise, these methods are quite similar.
The two loops are not equivalent. To make the first loop produce the same output as the second (at least, for the contents of the array you have specified) you'd need to change it to
for (int x = 0; x < MAX_SIZE; x++)
{
    if (myArray[x] == 7)
    {
        break;
    }
    cout << myArray[x] << " ";
}
In practice, most people will find your second form more readable, because there is a single obvious condition for ending the loop rather than two separate places where the loop can be terminated. That is the one I would normally use.
The fact that you thought the two forms were equivalent suggests the readability concern was already affecting you. So the second form would definitely be preferable.
I almost never see a for loop like this:
for (int i = 0; 5 != i; ++i)
{}
Is there a technical reason to use > or < instead of != when incrementing by 1 in a for loop? Or is this more of a convention?
while (time != 6:30pm) {
    Work();
}
It is 6:31pm... Damn, now my next chance to go home is tomorrow! :)
This is to show that the stronger restriction mitigates risks and is probably more intuitive to understand.
There is no technical reason. But there is mitigation of risk, maintainability and better understanding of code.
< or > are stronger restrictions than != and fulfill the exact same purpose in most cases (I'd even say in all practical cases).
There is a duplicate question here, and one interesting answer.
Yes there is a reason. If you write a (plain old index based) for loop like this
for (int i = a; i < b; ++i){}
then it works as expected for any values of a and b (i.e. zero iterations when a > b, instead of infinite if you had used i != b).
On the other hand, for iterators you'd write
for (auto it = begin; it != end; ++it)
because any iterator should implement an operator!=, but not every iterator can provide an operator<.
Also range-based for loops
for (auto e : v)
are not just fancy sugar, but they measurably reduce the chances to write wrong code.
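As a small illustration (assuming a std::vector<int> named v), the index-based form makes you spell out the index type, the bound, and the subscript by hand, while the range-based form leaves none of that to get wrong:

std::vector<int> v = {1, 2, 3};

for (std::size_t i = 0; i < v.size(); ++i)  // index type, bound, and subscript all written by hand
    std::cout << v[i] << ' ';

for (int e : v)                             // no index to get wrong
    std::cout << e << ' ';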
You can have something like
for (int i = 0; i < 5; ++i) {
    ...
    if (...) i++;
    ...
}
If your loop variable is written to by the inner code, i != 5 might never terminate that loop; it is safer to check with an inequality like <.
Edit about readability.
The inequality (<) form is far more frequently used, and is therefore very fast to read: there is nothing special to understand, and brain load is reduced because the construct is familiar. So it's kind to your readers to make use of these habits.
And last but not least, this is called defensive programming, meaning to always take the strongest case to avoid current and future errors influencing the program.
The only case where defensive programming is not needed is where states have been proven by pre- and post-conditions (but then, proving this is the most defensive of all programming).
I would argue that an expression like
for ( int i = 0 ; i < 100 ; ++i )
{
...
}
is more expressive of intent than is
for ( int i = 0 ; i != 100 ; ++i )
{
...
}
The former clearly calls out that the condition is a test for an exclusive upper bound on a range; the latter is a binary test of an exit condition. And if the body of the loop is non-trivial, it may not be apparent that the index is only modified in the for statement itself.
Iterators are an important case when you most often use the != notation:
for(auto it = vector.begin(); it != vector.end(); ++it) {
    // do stuff
}
Granted: in practice I would write the same relying on a range-for:
for(auto & item : vector) {
    // do stuff
}
but the point remains: one normally compares iterators using == or !=.
The loop condition is an enforced loop invariant.
Suppose you don't look at the body of the loop:
for (int i = 0; i != 5; ++i)
{
    // ?
}
In this case, you know at the start of the loop iteration that i does not equal 5.
for (int i = 0; i < 5; ++i)
{
    // ?
}
In this case, you know at the start of the loop iteration that i is less than 5.
The second is much, much more information than the first, no? Now, the programmer intent is (almost certainly) the same, but if you are looking for bugs, having confidence from reading a line of code is a good thing. And the second enforces that invariant, which means some bugs that would bite you in the first case just cannot happen (or don't cause memory corruption, say) in the second case.
You know more about the state of the program, from reading less code, with < than with !=. And on modern CPUs the two comparisons take the same amount of time, so there is no performance difference.
If your i was not manipulated in the loop body, was always increased by 1, and started at a value less than 5, there would be no difference. But in order to know whether it was manipulated, you'd have to confirm each of these facts.
Some of these facts are relatively easy to confirm, but you can still get them wrong. Checking the entire body of the loop is, however, a pain.
In C++ you can write an indexes type such that:
for (const int i : indexes(0, 5))
{
    // ?
}
does the same thing as either of the two above for loops, even down to the compiler optimizing it down to the same code. Here, however, you know that i cannot be manipulated in the body of the loop, as it is declared const, without the code corrupting memory.
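A minimal sketch of such an indexes type (the name and interface are just illustrative assumptions, not a standard facility):

class indexes {
public:
    class iterator {
        int value_;
    public:
        explicit iterator(int value) : value_(value) {}
        int operator*() const { return value_; }
        iterator& operator++() { ++value_; return *this; }
        bool operator!=(const iterator& other) const { return value_ != other.value_; }
    };

    indexes(int first, int last) : first_(first), last_(last) {}
    iterator begin() const { return iterator(first_); }
    iterator end() const { return iterator(last_); }

private:
    int first_;
    int last_;
};

With that, for (const int i : indexes(0, 5)) visits 0 through 4, and because i is declared const in the loop header, the body cannot alter it.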
The more information you can get out of a line of code without having to understand the context, the easier it is to track down what is going wrong. < in the case of integer loops gives you more information about the state of the code at that line than != does.
As already said by Ian Newson, you can't reliably loop over a floating-point variable and exit with !=. For instance,
for (double x=0; x!=1; x+=0.1) {}
will actually loop forever, because 0.1 can't be exactly represented in floating point, so the counter narrowly misses 1. With < it terminates.
(Note however that it's essentially implementation-dependent whether you get 0.9999... as the last accepted number – which kind of violates the less-than assumption – or already exit at 1.0000000000000001.)
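A quick sketch that makes the rounding error visible (the exact digits printed depend on the platform's double representation):

#include <cstdio>

int main() {
    double x = 0;
    for (int i = 0; i < 10; ++i)
        x += 0.1;                      // ten steps of 0.1
    std::printf("%.17g\n", x);         // typically prints 0.99999999999999989, not 1
    std::printf("%d\n", x != 1.0);     // typically 1: "x != 1" stays true, so that loop keeps running
}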
Yes; OpenMP doesn't parallelize loops with the != condition.
It may happen that the variable i gets set to some large value, and if you just use the != operator you will end up in an endless loop.
As you can see from the other numerous answers, there are reasons to use < instead of != which will help in edge cases, initial conditions, unintended loop counter modification, etc...
Honestly though, I don't think you can stress the importance of convention enough. For this example it will be easy enough for other programmers to see what you are trying to do, but it will cause a double-take. One of the jobs while programming is making it as readable and familiar to everyone as possible, so inevitably when someone has to update/change your code, it doesn't take a lot of effort to figure out what you were doing in different code blocks. If I saw someone use !=, I'd assume there was a reason they used it instead of < and if it was a large loop I'd look through the whole thing trying to figure out what you did that made that necessary... and that's wasted time.
I take the adjectival "technical" to mean language behavior/quirks and compiler side effects such as performance of generated code.
To this end, the answer is: no(*). The (*) is "please consult your processor manual". If you are working with some edge-case RISC or FPGA system, you may need to check what instructions are generated and what they cost. But if you're using pretty much any conventional modern architecture, then there is no significant processor level difference in cost between lt, eq, ne and gt.
If you are using an edge case you could find that != requires three operations (cmp, not, beq) vs two (cmp, blt). Again, RTM in that case.
For the most part, the reasons are defensive/hardening, especially when working with pointers or complex loops. Consider
// highly contrived example
size_t count_chars(char c, const char* str, size_t len) {
    size_t count = 0;
    bool quoted = false;
    const char* p = str;
    while (p != str + len) {
        if (*p == '"') {
            quoted = !quoted;
            ++p;
        }
        if (*(p++) == c && !quoted)
            ++count;
    }
    return count;
}
A less contrived example would be where you are using return values to perform increments, accepting data from a user:
#include <iostream>

int main() {
    size_t len = 5, step;
    for (size_t i = 0; i != len; ) {
        std::cout << "i = " << i << ", step? " << std::flush;
        std::cin >> step;
        i += step; // here for emphasis; it could go in the for(;;)
    }
}
Try this and input the values 1, 2, 10, 999.
You could prevent this:
#include <iostream>

int main() {
    size_t len = 5, step;
    for (size_t i = 0; i != len; ) {
        std::cout << "i = " << i << ", step? " << std::flush;
        std::cin >> step;
        if (step + i > len)
            std::cout << "too much.\n";
        else
            i += step;
    }
}
But what you probably wanted was
#include <iostream>

int main() {
    size_t len = 5, step;
    for (size_t i = 0; i < len; ) {
        std::cout << "i = " << i << ", step? " << std::flush;
        std::cin >> step;
        i += step;
    }
}
There is also something of a convention bias towards <, because ordering in standard containers often relies on operator<. For instance, the ordered STL containers determine equivalence by saying

if (lhs < rhs)      // T.operator<
    lessthan
else if (rhs < lhs) // T.operator< again
    greaterthan
else
    equal

If lhs and rhs are a user-defined class, writing this code as

if (lhs < rhs)      // requires T.operator<
    lessthan
else if (lhs > rhs) // requires T.operator>
    greaterthan
else
    equal

requires the implementor to provide two comparison functions. So < has become the favored operator.
There are several ways to write any kind of code (usually), there just happens to be two ways in this case (three if you count <= and >=).
In this case, people prefer > and < to make sure that even if something unexpected happens in the loop (like a bug), it won't loop infinitely (BAD). Consider the following code, for example.
for (int i = 1; i != 3; i++) {
    // More Code
    i = 5; // OOPS! MISTAKE!
    // More Code
}
If we used (i < 3), we would be safe from an infinite loop, because it places a stronger restriction.
It's really your choice whether you want a mistake in your program to shut the whole thing down or to keep functioning with the bug there.
Hope this helped!
The most common reason to use < is convention. More programmers think of loops like this as "while the index is in range" rather than "until the index reaches the end." There's value in sticking to convention when you can.
On the other hand, many answers here are claiming that using the < form helps avoid bugs. I'd argue that in many cases this just helps hide bugs. If the loop index is supposed to reach the end value, and, instead, it actually goes beyond it, then there's something happening you didn't expect which may cause a malfunction (or be a side effect of another bug). The < will likely delay discovery of the bug. The != is more likely to lead to a stall, hang, or even a crash, which will help you spot the bug sooner. The sooner a bug is found, the cheaper it is to fix.
Note that this convention is peculiar to array and vector indexing. When traversing nearly any other type of data structure, you'd use an iterator (or pointer) and check directly for an end value. In those cases you have to be sure the iterator will reach and not overshoot the actual end value.
For example, if you're stepping through a plain C string, it's generally more common to write:
for (char *p = foo; *p != '\0'; ++p) {
    // do something with *p
}
than
int length = strlen(foo);
for (int i = 0; i < length; ++i) {
    // do something with foo[i]
}
For one thing, if the string is very long, the second form will be slower because the strlen is another pass through the string.
With a C++ std::string, you'd use a range-based for loop, a standard algorithm, or iterators, even though the length is readily available. If you're using iterators, the convention is to use != rather than <, as in:
for (auto it = foo.begin(); it != foo.end(); ++it) { ... }
Similarly, iterating a tree or a list or a deque usually involves watching for a null pointer or other sentinel rather than checking if an index remains within a range.
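A minimal sketch of that style of traversal (the Node type and sum_list function are illustrative assumptions, not from the question):

struct Node {
    int value;
    Node* next;
};

int sum_list(const Node* head) {
    int sum = 0;
    for (const Node* p = head; p != nullptr; p = p->next)  // watch for the null sentinel
        sum += p->value;
    return sum;
}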
One reason not to use this construct is floating-point numbers. != is a very dangerous comparison to use with floats, since two values that look the same will rarely compare exactly equal, so the loop may never terminate. < or > removes this risk.
There are two related reasons for following this practice that both have to do with the fact that a programming language is, after all, a language that will be read by humans (among others).
(1) A bit of redundancy. In natural language we usually provide more information than is strictly necessary, much like an error correcting code. Here the extra information is that the loop variable i (see how I used redundancy here? If you didn't know what 'loop variable' means, or if you forgot the name of the variable, after reading "loop variable i" you have the full information) is less than 5 during the loop, not just different from 5. Redundancy enhances readability.
(2) Convention. Languages have specific standard ways of expressing certain situations. If you don't follow the established way of saying something, you will still be understood, but the effort for the recipient of your message is greater because certain optimisations won't work. Example:
Don't talk around the hot mash. Just illuminate the difficulty!
The first sentence is a literal translation of a German idiom. The second is a common English idiom with the main words replaced by synonyms. The result is comprehensible but takes a lot longer to understand than this:
Don't beat around the bush. Just explain the problem!
This is true even in case the synonyms used in the first version happen to fit the situation better than the conventional words in the English idiom. Similar forces are in effect when programmers read code. This is also why 5 != i and 5 > i are weird ways of putting it unless you are working in an environment in which it is standard to swap the more normal i != 5 and i < 5 in this way. Such dialect communities do exist, probably because consistency makes it easier to remember to write 5 == i instead of the natural but error prone i == 5.
Using relational comparisons in such cases is more of a popular habit than anything else. It gained its popularity back in the times when such conceptual considerations as iterator categories and their comparability were not considered high priority.
I'd say that one should prefer to use equality comparisons instead of relational comparisons whenever possible, since equality comparisons impose less requirements on the values being compared. Being EqualityComparable is a lesser requirement than being LessThanComparable.
Another example that demonstrates the wider applicability of equality comparison in such contexts is the popular conundrum with implementing unsigned iteration down to 0. It can be done as
for (unsigned i = 42; i != -1; --i)
...
Note that the above is equally applicable to both signed and unsigned iteration, while the relational version breaks down with unsigned types.
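As a small sketch of why the relational form breaks down for unsigned counters (counting down from 5 here just to keep the output short):

#include <iostream>

int main() {
    for (unsigned i = 5; i != -1u; --i)   // visits 5 4 3 2 1 0, then stops once i wraps around
        std::cout << i << ' ';
    std::cout << '\n';
    // for (unsigned i = 5; i >= 0; --i)  // would never terminate: i >= 0 is always true for unsigned
}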
Besides the examples where the loop variable will (unintentionally) change inside the body, there are other reasons to use the less-than or greater-than operators:
Negations make code harder to understand
< or > is only one character, but != is two
In addition to the various people who have mentioned that it mitigates risk, it also reduces the number of function overloads necessary to interact with various standard library components. As an example, if you want your type to be storable in a std::set, or used as a key for std::map, or used with some of the searching and sorting algorithms, the standard library usually uses std::less to compare objects as most algorithms only need a strict weak ordering. Thus it becomes a good habit to use the < comparisons instead of != comparisons (where it makes sense, of course).
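A small sketch (the Point type is just an illustrative assumption): a single operator< is all std::set needs, because it derives equivalence from !(a < b) && !(b < a).

#include <iostream>
#include <set>

struct Point {
    int x, y;
};

bool operator<(const Point& a, const Point& b) {
    if (a.x != b.x) return a.x < b.x;   // order by x first,
    return a.y < b.y;                   // then by y
}

int main() {
    std::set<Point> points{{1, 2}, {0, 3}, {1, 2}};  // the duplicate {1, 2} is dropped
    for (const Point& p : points)
        std::cout << '(' << p.x << ',' << p.y << ") ";
}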
There is no problem from a syntax perspective, but the logic behind that expression 5!=i is not sound.
In my opinion, using != to set the bounds of a for loop is not logically sound because a for loop either increments or decrements the iteration index, so setting the loop to iterate until the iteration index becomes out of bounds (!= to something) is not a proper implementation.
It will work, but it is prone to misbehavior, since the boundary handling is lost when using != for an incremental problem (meaning that you know from the start whether it increments or decrements); that's why <, >, <=, or >= are used instead of !=.
Recently, I've been thinking about all the ways that one could iterate through an array and wondered which of these is the most (and least) efficient. I've written a hypothetical problem and five possible solutions.
Problem
Given an int array arr with len number of elements, what would be the most efficient way of assigning an arbitrary number 42 to every element?
Solution 0: The Obvious
for (unsigned i = 0; i < len; ++i)
    arr[i] = 42;
Solution 1: The Obvious in Reverse
// note: the counter must be signed here; with unsigned i the condition i >= 0 is always true
for (int i = len - 1; i >= 0; --i)
    arr[i] = 42;
Solution 2: Address and Iterator
for (unsigned i = 0; i < len; ++i)
{
    *arr = 42;
    ++arr;
}
Solution 3: Address and Iterator in Reverse
for (unsigned i = len; i; --i)
{
    *arr = 42;
    ++arr;
}
Solution 4: Address Madness
int* end = arr + len;
for (; arr < end; ++arr)
    *arr = 42;
Conjecture
The obvious solutions are almost always used, but I wonder whether the subscript operator could result in a multiplication instruction, as if it compiled to the machine-level equivalent of *(arr + i * sizeof(int)) = 42.
The reverse solutions try to take advantage of the fact that comparing i to 0 instead of len might save a comparison (subtraction) instruction. Because of this, I prefer Solution 3 over Solution 2. Also, I've read that arrays are optimized for forward access because of how they're stored in the cache, which could present an issue with Solution 1.
I don't see why Solution 4 would be any less efficient than Solution 2. Solution 2 increments the address and the iterator, while Solution 4 only increments the address.
In the end, I'm not sure which of these solutions I prefer. I think the answer also varies with the target architecture and the optimization settings of your compiler.
Which of these do you prefer, if any?
Just use std::fill.
std::fill(arr, arr + len, 42);
Out of your proposed solutions, on a good compiler, none should be faster than the others.
The ISO standard doesn't mandate the efficiency of the different ways of doing things in code (other than certain big-O type stuff for some collection algorithms), it simply mandates how it functions.
Unless your arrays are billions of elements in size, or you're wanting to set them millions of times per minute, it generally won't make the slightest difference which method you use.
If you really want to know (and I still maintain it's almost certainly unnecessary), you should benchmark the various methods in the target environment. Measure, don't guess!
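If you do measure, a crude timing sketch along these lines (shown here for Solution 0 on a heap-allocated array; only a rough starting point, not a proper benchmark harness) is enough to get first numbers:

#include <chrono>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> arr(10000000);                 // large enough that the work is measurable
    const unsigned len = static_cast<unsigned>(arr.size());

    const auto start = std::chrono::steady_clock::now();
    for (unsigned i = 0; i < len; ++i)              // Solution 0
        arr[i] = 42;
    const auto stop = std::chrono::steady_clock::now();

    std::cout << std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count()
              << " us\n";
}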
As to which I prefer, my first inclination is to optimise for readability. Only if there's a specific performance problem do I then consider other possibilities. That would be simply something like:
for (size_t idx = 0; idx < len; idx++)
    arr[idx] = 42;
I don't think that performance is an issue here - those are micro-optimizations that are hardly ever necessary, if they matter at all (I could imagine the compiler producing identical assembly for most of them).
Go with the solution that is most readable; the standard library provides you with std::fill, or for more complex assignments
for(unsigned k = 0; k < len; ++k)
{
    // whatever
}
so it is obvious to other people looking at your code what you are doing. With C++11 you could also
for(auto & elem : arr)
{
    // whatever
}
just don't try to obfuscate your code without any necessity.
For nearly all meaningful cases, the compiler will optimize all of the suggested ones to the same thing, and it's very unlikely to make any difference.
There used to be a trick where you could avoid the automatic prefetching of data if you ran the loop backwards, which under some bizarre set of circumstances actually made it more efficient. I can't recall the exact circumstances, but I expect modern processors will identify backwards loops as well as forwards loops for automatic prefetching anyway.
If it's REALLY important for your application to do this over a large number of elements, then looking at blocked access and using non-temporal storage will be the most efficient. But before you do that, make sure you have identified the filling of the array as an important performance point, and then make measurements for the current code and the improved code.
I may come back with some actual benchmarks to prove that "it makes little difference" in a bit, but I've got an errand to run before it gets too late in the day...
If I have a myVector which is an STL vector and execute a loop like this:
for(int i=0;i<myVector.size();++i) { ... }
does the C++ compiler play some trick to call size() only once, or will it be called size()+1 times?
I am a little confused; can anyone help?
Logically, myVector.size() will be called each time the loop is iterated - or at least the compiler must produce code as if it's called each time.
If the optimizer can determine that the size of the vector will not change in the body of the loop, it could hoist the call to size() outside the loop. Note that usually, vector::size() is an inline that's just a simple difference between pointers to the end and beginning of the vector (or something similar - maybe a simple load of a member that keeps track of the number of elements).
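A rough sketch of the usual shape of the data structure (this is not the actual library source, just an illustration of why the call is trivial to inline):

#include <cstddef>

template <typename T>
struct toy_vector {
    T* first;   // start of the elements
    T* last;    // one past the last element
    std::size_t size() const { return static_cast<std::size_t>(last - first); }
};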
So there's actually probably little reason for concern about what happens for vector::size().
Note that list::size() could be a different story - the C++03 standard permits it to be linear complexity (though I think this is rare, and the C++0x standard changes list::size() requirements to be constant complexity).
I'm assuming that the vector doesn't change size in the loop. If it does change size, it's impossible to tell without knowing how it changes size.
On the C++ abstract machine it will be called exactly size()+1 times. And on a concrete implementation it will have an observable behaviour equivalent to it having been called size()+1 times (this is called the as if rule).
This means that the compiler can choose to call it just once, because the observable behaviour is the same. In fact, by following the as if rule, if the body of the loop is empty, the compiler can even choose to not call it at all and just skip the whole thing altogether. The observable behaviour is the same, because making your code run faster is not considered different observable behaviour.
It will be called size + 1 times. Changing the size of the vector will affect the number of iterations.
It may be called once, may be called size+1 times, or it may never be called at all. Assuming that the vector size doesn't change, your program will behave as if it had been called size+1 times.
There are two optimizations at play here: first, std::vector::size() is probably inlined, so it may never be "called" at all in the traditional sense. Second, the compiler may determine that it needs to evaluate size() only once, or perhaps never:
For example, this code might never evaluate std::vector::size():
for(int i = 0; i < myVector.size(); ++i) { ; }
Either of these loops might evaluate std::vector::size() only once:
for(int i = 0; i < myVector.size(); ++i) { std::cout << "Hello, world.\n"; }
for(int i = 0; i < myVector.size(); ++i) { sum += myVector[i]; }
While this loop might evaluate std::vector::size() many times:
for(int i = 0; i < myVector.size(); ++i) { ExternalFunction(&myVector); }
In the final analysis, the key questions are:
Why do you care?, and
How would you know?
Why do you care how many times size() is invoked? Are you trying to make your program go faster?
How would you even know? Since size() has no visible side effects, how would you know how many times it was called (or otherwise evaluated)?
It will be called size() + 1 times (it may be that the compiler can recognize it as invariant in the loop, but you shouldn't count on it)
It will be evaluated on each iteration until the condition becomes false (size() could change each time, for example). If size() remains constant, that's size() + 1 times.
From the MSDN page about for:
for ( init-expression ; cond-expression ; loop-expression )
    statement

cond-expression: Evaluated before execution of each iteration of statement, including the first iteration; statement is executed only if cond-expression evaluates to true (nonzero). An expression that evaluates to an integral type or a class type that has an unambiguous conversion to an integral type. Normally used to test for loop-termination criteria.
It will be called size + 1 times, as Ernest has mentioned. However, if you are sure that the size is not changing, you can apply an optimisation and make your code look like this:
for (unsigned int i = 0, e = myVector.size (); i < e; ++i)
... in which case size() will be called only once.
Actually myVector.size() will be inlined, so there will be no call at all - just a comparison of a register value with a memory location. Of course, I am talking about a release build.
In a debug build it will be called size()+1 times.
EDIT: No doubt there exists some compiler or STL implementation that cannot optimize myVector.size(), but the chances of encountering one are very low.
Which is better: for(int i = 0; i != 5; ++i) or for(int i = 0; i <= 5; i++)?
Please explain the rationale, if possible.
I read somewhere that the != operator is better than the relational operators, and also that the pre-increment operator is better than the post-increment operator, since it doesn't require a temporary variable to store the intermediate value.
Is there any better form of for loop than these two?
P.S.: I use the former, as suggested by one of the sources, which I don't remember now.
There are at least two differences:
the first one will iterate 5 times (from 0 to 4), while the second one will iterate 6 times (from 0 to 5).
This is a logic difference, and it depends on what you need to do.
If what you meant for the second example was i<=4 (versus i!=5) you shouldn't bother: any compiler will always be able to optimize such a trivial expression even with optimizations turned off.
the second difference is the use of operator ++: in one case you use the prefix version, while in the other the postfix version.
This doesn't make a difference for native types, but it could make some difference with user-defined types (i.e. classes, structs, enums, ...), since the user can overload the operator, and the postfix one (var++) can be a little slower sometimes.
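A minimal sketch of why (the Counter type here is just an illustrative assumption): the canonical postfix overload has to copy the old value before incrementing, while the prefix one does not.

struct Counter {
    int value = 0;

    Counter& operator++() {     // prefix: increment in place, no copy
        ++value;
        return *this;
    }

    Counter operator++(int) {   // postfix: copy the old state, increment, return the copy
        Counter old = *this;
        ++value;
        return old;
    }
};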
Using "i < 5" would be the form most people would expect to see on a for loop based on common usage. There is nothing wrong of course with "i != 5" or "i <= 5" (expect they'll give different loop counts) but if I see either of those forms I have to stop and think because it's not the most common form.
So I would always write "i < 5" unless there was a strong reason to do otherwise
Generally, you should write for PEOPLE not the computer.
The "for(i = 0; i < 5; i++)" form makes it very clear that the valid range is "0 through 4" and nothing else.
And as other people have said, this form makes sure that funny code in the loop is much less likely to cause an infinite loop. Use the form that reflects what you mean.
Note that "less than" is the idiom commonly used in C (which means more programmers will expect it). That's another reason to prefer the "for(i = 0; i < 5; i++)" form.
My C++ teacher told me to use the canonical form: for(int x = 0; x != 5; ++x)
Though the other works just fine, suppose you want to use the loop with an iterator. Then <= may not be properly defined, and a postfix increment might make your program spend a lot of time copying objects.
So he made us use the forms
for(int i = begin; i != end; ++i) and for(int i = end; begin != i--; )
Which is better: for(int i = 0; i != 5; ++i) or for(int i = 0; i <= 5; i++)?
The second one, since it uses a relational operator; the for(int i = _; i <= __; i++) form is widely used nowadays, even by beginners in programming.
You should use whichever of <, <=, != etc. best expresses your application logic, but if there's no reason to prefer any particular style, then be consistent. In informing that decision, the < and != operators have the advantage of allowing comparisons between 0-based indices and sizeof or STL-style size / length values, as in:
for (size_t i = 0; i < my_vector.size(); ++i)
...
The < and <= operators sometimes guard against incrementing past the terminating value (if you've got another condition inside the loop under which you increment/add-to i, or there's some floating point rounding/representation disparity). Not often actually significant, but saving an occasional bug is better than nothing.
Given that '<' covers both of these sets of benefits, it's a popular choice. To my mind, < also communicates the states under which the loop runs... a more positive assertion rather than the != style. In general, negative assertions are discouraged in programming (e.g. bool invalid is more complicated to get your head around reliably than bool valid, especially in complex boolean expressions and when double negatives are introduced).
This is of course for numeric types. Other types may imply other styles... for example, use of != sentinel. I prefer to have the choice of operator help document these usage implications of the type, though that's a personal choice and arguably redundant.
Otherwise, preincrement can be more efficient for complicated types than post- as it avoids a temporary.
Any performance difference between !=, <, and <= is negligible on modern hardware, so it is mostly a question of taste, though one needs to be careful of possible infinite loops.