Why is the != operator not allowed with OpenMP? - c++

I was trying to compile the following code:
#pragma omp parallel shared (j)
{
    #pragma omp for schedule(dynamic)
    for (i = 0; i != j; i++)
    {
        // do something
    }
}
but I got the following error: error: invalid controlling predicate.
The OpenMP standard states that for the parallel for construct it "only" allows one of the following operators: <, <=, >, >=.
I do not understand the rationale for not allowing i != j. I could understand it in the case of the static schedule, since the compiler needs to pre-compute the number of iterations assigned to each thread. But I can't understand why the limitation applies in a case like this one. Any clues?
EDIT: the error appears even if I write for(i = 0; i != 100; i++), although I could just as well have put "<" or "<=" there.

I sent an email to the OpenMP developers about this subject, and this is the answer I got:
For a signed int, the wrap-around behavior is undefined. If we allow !=, programmers may get an unexpected trip count. The problem is whether the compiler can generate code to compute a trip count for the loop.
For a simple loop, like:
for( i = 0; i < n; ++i )
the compiler can determine that there are 'n' iterations, if n>=0, and zero iterations if n < 0.
For a loop like:
for( i = 0; i != n; ++i )
again, a compiler should be able to determine that there are 'n' iterations, if n >= 0; if n < 0, we don't know how many iterations it has.
For a loop like:
for( i = 0; i < n; i += 2 )
the compiler can generate code to compute the trip count (loop iteration count) as floor((n+1)/2) if n >= 0, and 0 if n < 0.
For a loop like:
for( i = 0; i != n; i += 2 )
the compiler can't determine whether 'i' will ever hit 'n'. What if 'n' is an odd number?
For a loop like:
for( i = 0; i < n; i += k )
the compiler can generate code to compute the trip count as floor((n+k-1)/k) if n >= 0, and 0 if n < 0, because the compiler knows that the loop must count up; in this case, if k < 0, it's not a legal OpenMP program.
For a loop like:
for( i = 0; i != n; i += k )
the compiler doesn't even know if i is counting up or down. It doesn't know if 'i' will ever hit 'n'. It may be an infinite loop.
Credits: OpenMP ARB
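To make the trip-count argument above concrete, here is a small illustration (not part of the ARB reply) of why the canonical forms are easy to handle: for i = lb; i < ub; i += step with step > 0, the count can be computed up front and split among the threads, which is impossible in general for a != test.
// Sketch of the trip-count computation for a canonical OpenMP loop
// "for (i = lb; i < ub; i += step)" with step > 0.
long trip_count(long lb, long ub, long step)
{
    if (ub <= lb) return 0;                 // zero iterations if the range is empty
    return (ub - lb + step - 1) / step;     // ceil((ub - lb) / step)
}
For lb = 0, ub = n and step = 2 this gives floor((n + 1) / 2), matching the example in the reply.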

Contrary to what it may look like, schedule(dynamic) does not work with a dynamic number of elements. Rather, it is the assignment of iteration blocks to threads that is dynamic. With static scheduling this assignment is precomputed at the beginning of the worksharing construct. With dynamic scheduling, iteration blocks are handed out to threads on a first-come, first-served basis.
The OpenMP standard is pretty clear that the number of iterations is precomputed once the worksharing construct is encountered, hence the loop counter may not be modified inside the body of the loop (OpenMP 3.1 specification, §2.5.1 - Loop Construct):
The iteration count for each associated loop is computed before entry to the outermost loop. If execution of any associated loop changes any of the values used to compute any of the iteration counts, then the behavior is unspecified.
The integer type (or kind, for Fortran) used to compute the iteration count for the collapsed loop is implementation defined.
A worksharing loop has logical iterations numbered 0, 1, ..., N-1 where N is the number of loop iterations, and the logical numbering denotes the sequence in which the iterations would be executed if the associated loop(s) were executed by a single thread. The schedule clause specifies how iterations of the associated loops are divided into contiguous non-empty subsets, called chunks, and how these chunks are distributed among threads of the team. Each thread executes its assigned chunk(s) in the context of its implicit task. The chunk_size expression is evaluated using the original list items of any variables that are made private in the loop construct. It is unspecified whether, in what order, or how many times, any side-effects of the evaluation of this expression occur. The use of a variable in a schedule clause expression of a loop construct causes an implicit reference to the variable in all enclosing constructs.
The rationale behind this relational operator restriction is quite simple: it gives a clear indication of the direction of the loop, it allows easy computation of the number of iterations, and it provides similar semantics of the OpenMP worksharing directive in C/C++ and Fortran. Other relational operations would require close inspection of the loop body in order to understand how the loop goes, which would be unacceptable in many cases and would make the implementation cumbersome.
OpenMP 3.0 introduced the explicit task construct, which allows for parallelisation of loops with an unknown number of iterations. There is a catch though: tasks introduce some severe overhead, and one task per loop iteration only makes sense if these iterations take quite some time to execute. Otherwise the overhead would dominate the execution time.
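For illustration, here is a minimal sketch of that tasking alternative for a !=-style traversal whose trip count is unknown up front. The node type and the process() function are hypothetical placeholders; tasks only pay off if process() does a substantial amount of work per call.
struct node { int value; node* next; };
void process(node* p);              // assumed to be defined elsewhere

void walk(node* head)
{
    #pragma omp parallel
    #pragma omp single              // one thread creates the tasks...
    for (node* p = head; p != nullptr; p = p->next)
    {
        #pragma omp task firstprivate(p)
        process(p);                 // ...any thread in the team executes them
    }
}                                   // implicit barrier at the end of the parallel region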

The answer is simple.
OpenMP does not allow premature termination of a team of threads.
With == or !=, OpenMP has no way of determining when the loop stops.
1. One or more threads could hit the termination condition, which might not be unique.
2. OpenMP has no way to shut down the other threads that might never detect the condition.
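To illustrate the point, even the usual "early exit" idiom of a shared flag does not terminate the team; all iterations are still scheduled and executed, and the flag only makes the remaining ones cheap. This is a hedged sketch in which matches() and n are hypothetical placeholders.
bool found = false;
#pragma omp parallel for shared(found)
for (int i = 0; i < n; i++)
{
    bool stop;
    #pragma omp atomic read
    stop = found;                   // every iteration still runs; it just becomes cheap
    if (stop) continue;             // you cannot 'break' out of an OpenMP for loop
    if (matches(i))
    {
        #pragma omp atomic write
        found = true;
    }
}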

If I were to see the statement
for(i = 0; i != j; i++)
used instead of the statement
for(i = 0; i < j; i++)
I would be left wondering why the programmer had made that choice, never mind that it can mean the same thing. It may be that OpenMP is making a hard syntactic choice in order to force a certain clarity of code.
Here's code which raises challenges for the use of != and may help explain why it isn't allowed.
#include <cstdio>
int main(){
    int j=10;
    #pragma omp parallel for
    for(int i = 0; i < j; i++){
        printf("%d\n",i++);
    }
}
Notice that i is incremented both in the for statement and within the loop itself, leading to the possibility (but not the guarantee) of an infinite loop.
If the predicate is < then the loop's behavior can still be well-defined in a parallel context without the compiler having to check within the loop for changes to i and determining how those changes will affect the loop's bounds.
If the predicate is != then the loop's behavior is no longer well-defined and it may be infinite in extent, preventing easy parallel subdivision.

I think there is perhaps no good reason other than having extended existing functionality to get this far.
IIRC, originally these had to be static so that the compiler could determine at compile time how to generate the loop code... it could just be a hangover from that.

Related

Increasing array index in OpenMP

I am new to using OpenMP. I am trying to parallelize a nested loop, and so far I have something of this form...
#pragma omp parallel for
for (j = 0; j < m; j++) {
    // some work
    for (i = 0; i < n; i++) {
        p = b[i];
        if (p < 0 && k < m) {
            a[k] = c[i];
            k++;
        } else {
            x = c[i];
        }
    }
    // some work
}
The outer loop is in parallel, and the inner loop updates k. The current value of k is needed for the other threads to update a[k] correctly. The problem is that all of the threads are updating a[k], but the proper order of k is not kept.
Some threads will update k and a[k], and some will not. How do I communicate the latest k between threads to update a[k] properly, since c[i] will have different values for each thread?
For example, when it runs serially, the program might set the first seven values of a to {1,3,5,7,3,9,13} and terminate with k equal to 7, but when done parallel, produces different results, or results in a different (therefore wrong) order.
How do I keep the same order and ensure parallelism at the same time?
Note: this answer was completely rewritten in light of OP clarifications. The original answer text is at the end.
How do I keep the same order and ensure parallelism at the same time?
Order dependency is antithetical to parallelism, as running operations in parallel inherently entails relaxing the relative order in which they are performed. Not all computations can be effectively parallelized.
Your case is not an exception. The second and each subsequent iteration of your outer loop needs to use the final value of k (among other things) computed by the previous iteration. How can it get that? Only by performing the previous iteration first. What room does that leave for concurrent operation? None. Concurrency is not the same thing as parallelism, but it is one of the main motivations for parallelism, because that's how parallelism yields improvements in elapsed time.
With no scope for concurrency, parallelism is actively counterproductive for you. Suppose you made the whole body of the outer loop a critical section, so that there was no concurrency in fact (as your present code requires) and no data races involving k. Then you would still pay the overhead for parallelism, get no speedup in return, and probably still get the wrong results because of evaluations of the outer-loop body being performed in the wrong order.
It may be that the whole thing can be rewritten to reduce or remove the data dependencies that prevent effective parallelization of the computation, or it may not. We don't have enough information to tell, as it depends in part on the details of "some work" and on the significance of the data. Probably you would need an altogether different algorithm for producing the desired results.
> Instead of giving a[n]={0,1,2,3,.......n} , it gives me garbage values for a when I use the reduction clause. I need the total sum of K, hence the reduction clause.
There is a closed-form equation for the sum of consecutive integers, and it has especially simple form when the first integer in the list is 0 or 1. In particular, the sum of the integers from 0 to n, inclusive, is n * (n + 1) / 2. You do not need a reduction for this.
If you wanted to use a reduction anyway, then you need to understand that it doesn't work the way you seem to think it does. What you get is a separate, private copy of the reduction variable for each thread executing the parallel construct, with the per-thread (not per-iteration) final values of those independent variables combined according to the reduction operator. Thus, if you really want to do the computation via an OpenMP reduction, then you would need to restructure the loop something like this:
#pragma omp parallel for reduction (+:k)
for (i = 0; i < 10; i++) {
    a[i] = i;
    k += i;
}
That assumes that the value of k is 0 immediately prior to the loop, as you indeed seem to be doing. If that were not a safe assumption then you would need something like
type_of_k k0 = k;
k = 0;
#pragma omp parallel for reduction (+:k)
for (i = 0; i < 10; i++) {
    a[k0 + i] = i;
    k += k0 + i;
}
Note that in either case, not only does that set up the reduction correctly, but it also breaks the data dependency between loop iterations that was previously carried by the expression k++.
It sounds like you're essentially filling in a with a filter of entries from c, and want to preserve their order. If this is the only use k has, some other methods spring to mind:
Always write a[i], but use a marker indicating unused entries where the predicate wasn't satisfied. This preserves order, but requires a larger a that you compact in a second pass.
Write an a_i array storing which index each entry belonged to. This still requires an atomically captured update to k (something like k_local = k++ under #pragma omp atomic capture), and a second sort to restore order. And you'd need both a and a_i to be the full size again, or you might miss entries, so all in all a terrible workaround.
Even with some sequential dependencies you can do optimizations, e.g. a scan to calculate what k would be for each i can be done in O(log n) parallel steps rather than O(n) (see the discussions of parallel prefix sums here on Stack Overflow). This sort of thing is what OpenMP's ordered depend machinery is for, I believe. Anyhow, this leads to the third solution:
Generate a k array holding the value k will have for each iteration, so that the threads that do write, write to the correct places (sketched after this list). This requires scanning the predicate.
It is useful to have higher level constructs like map, scan and reduce when planning out algorithms.
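Here is a hedged sketch of that third suggestion, assuming the goal is to keep, in order, the elements of c whose counterpart in b is negative (the names a, b and c follow the question). The output position of every element ("the k array") is precomputed with an exclusive prefix sum over the predicate; the scan is written serially here for clarity, but it could itself be replaced by a parallel scan.
#include <vector>

void filter_preserving_order(const std::vector<int>& b,
                             const std::vector<int>& c,
                             std::vector<int>& a)
{
    const int n = (int)c.size();
    std::vector<int> pos(n + 1, 0);

    // Exclusive prefix sum of the predicate: pos[i] is the value k would
    // have when iteration i runs serially.
    for (int i = 0; i < n; i++)
        pos[i + 1] = pos[i] + (b[i] < 0 ? 1 : 0);

    a.resize(pos[n]);

    // Every iteration now knows its own destination, so the writes are
    // independent and can safely run in parallel.
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        if (b[i] < 0)
            a[pos[i]] = c[i];
}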

Are there workarounds for using OpenMP reduction on floating point addition?

I am currently working on parallelizing a nested for loop using C++ and OpenMP. Without going into the actual details of the program, I have constructed a basic example on the concepts I am using below:
float var = 0.f;
float distance = /* some float array */;
float temp[] = /* some float array */;
for(int i=0; i < distance.size; i++){
    // some work
    for(int j=0; j < temp.size; j++){
        var += temp[i]/distance[j];
    }
}
I attempted to parallelize the above code in the following way:
float var = 0.f;
float distance = /* some float array */;
float temp[] = /* some float array */;
#pragma omp parallel for default(shared)
for(int i=0; i < distance.size; i++){
    // some work
    #pragma omp parallel for reduction(+:var)
    for(int j=0; j < temp.size; j++){
        var += temp[i]/distance[j];
    }
}
I then compared the serial program output with the parallel program output and I got incorrect result. I know that this is mainly due to the fact that floating point arithmetic is not associative. But are there any workarounds to this that give exact results?
Although the lack of associativity of floating point arithmetic might be an issue in some cases, the code you show here exposes a much more essential problem which you need to address first: the status of the var variable in the outer loop.
Indeed, since var is modified inside the i loop, even if only in the j part of the i loop, it needs to be "privatized" somehow. Now the exact status it needs to get depends on the value you expect it to store upon exit of the enclosing parallel region:
If you don't care about its value at all, just declare it private (or better, declare it inside the parallel region).
If you need its final value at the end of the i loop, and considering that it accumulates a sum of values, most likely you'll need to declare it reduction(+:), although lastprivate might also be what you want (impossible to say without further details).
If private or lastprivate was all you needed, but you also need its initial value upon entrance of the parallel region, then you'll have to consider adding firstprivate too (no need for that if you went for reduction, as it has already been taken care of).
That should be enough for fixing your issue.
Now, in your snippet, you also parallelized the inner loop. Going for nested parallelism like that is usually a bad idea. So unless you have a very compelling reason for doing so, you will likely get much better performance by only parallelizing the outer loop and leaving the inner loop alone. That doesn't mean the inner loop won't benefit from the parallelization, but rather that several instances of the inner loop will be computed in parallel (each one being sequential admittedly, but the whole process is parallel).
A nice side effect of removing the inner loop's parallelization (in addition to making the code faster) is that now all accumulations inside the private var variables are done in the same order as when not in parallel. Therefore, your (hypothetical) floating point arithmetic issues inside the outer loop will now have disappeared, and only if you needed the final reduction upon exit of the parallel region might you still face them there.
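Putting those pieces together, a minimal sketch of the suggested fix might look like this, assuming distance and temp are std::vector<float> (the question used placeholder declarations) and that the accumulated sum is all you need after the region: only the outer loop is parallelized and var is a reduction variable.
#include <cstddef>
#include <vector>

float accumulate(const std::vector<float>& distance,
                 const std::vector<float>& temp)
{
    float var = 0.f;
    #pragma omp parallel for reduction(+:var)
    for (int i = 0; i < (int)distance.size(); i++)
    {
        // some work
        for (std::size_t j = 0; j < temp.size(); j++)
            var += temp[i] / distance[j];   // index pattern kept from the question
    }
    return var;
}
Each thread accumulates into its own private copy of var, and the per-thread partial sums are combined once at the end of the loop, which is the only place where any remaining floating point ordering difference would show up.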

Is it possible to multiply initialize using open mp in c++?

I have a for loop which accesses many memory pointers in each iteration. For each of these memory pointers, I created an index. My problem is that when I try to use OpenMP to parallelize this loop, I get the following error:
error: expected iteration declaration or initialization
I thought that this error would be due to one of the following:
-OpenMP does not accept an increment different from ++ or --
-OpenMP does not accept multiple initializations in a loop
For reasons regarding performance, it is important to me to use these multiple indexes. Does anybody know the answer for my problem?
Here it is the code:
#pragma omp parallel default(shared)
{
int tID = omp_get_thread_num();
int i, iCF, iPF, iNF, iPJG, iCJG, iNJG, iPRJG, iCRJG;
#pragma omp for nowait
for(i=0, iCF=0, iPF=0, iNF=sqrBcksDim, iPJG=0, iCJG=0, iNJG=sqrBcksSize, iPRJG=0, iCRJG=0 ; iCF<RHSArraySize ; iPF=iCF, iCF=iNF, iNF+=sqrBcksDim, iPJG=iCJG, iCJG=iNJG, iNJG+=sqrBcksSize, iPRJG=iCRJG, iCRJG+=rectBcksSize, ++i)
{
}
}
Well, looking at that third clause, you’re doing a lot of inherently sequential computations that depend on the program state at the end of the previous iteration of the loop. You could move all of those operations but the += and ++ updates inside the body of the loop, and from the look of things possibly make the loop condition depend on iNF, correct? But some of them look like they still might be ordered. For a parallel algorithm, are there closed-form initializers you could use inside the loop body that depend only on i or something loop-invariant?
If not, and the inputs to each iteration really do depend on the results of previous iterations of the loop, then it's not a parallel algorithm.
One suggestion:
Here’s how I would try to fix this. You can only initialize i and increment it by a constant within the loop; however, you can equivalently move all the rest of those operations inside the loop. For example, I don’t know what else goes on inside the loop body, but if iCF is initialized to 0, iNF to sqrBcksDim and at the end of each iteration, iCF is set to the previous value of iNF and iNF is incremented by sqrBcksDim, it looks like you could rewrite the loop into something like:
int i;
#pragma omp for nowait
for ( i=0; i < RHSArraySize/sqrBcksDim; ++i )
{
    const int iCF = i*sqrBcksDim;
    const int iNF = iCF + sqrBcksDim;
    // ...
}
Can you do that for your other variables? If you really have a parallel algorithm here, you should be able to, because each run of the loop should only depend on i and loop invariants, which you can use in your initializers. You’ll need to declare a variable outside the loop if you’re going to refer to it outside the body of the loop, but for the time being, just declare a new local variable and don’t read any variable outside the loop that you also write to inside the loop. If there are no implicit sequential dependencies, you should be able to initialize them all at the start of the loop body.
You might not end up doing it that way, but it might help you think about how to refactor.
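As a hedged extension of the same idea to the remaining indexes, assuming the *Dim/*Size strides are loop invariants and RHSArraySize is a multiple of sqrBcksDim, each index can be computed from i alone; the "previous" variants clamp to 0 on the first iteration, matching the original initializers. This would sit inside the same parallel region as in the question.
#pragma omp for nowait
for (int i = 0; i < RHSArraySize / sqrBcksDim; ++i)
{
    const int iCF   = i * sqrBcksDim;
    const int iNF   = iCF + sqrBcksDim;
    const int iPF   = (i > 0) ? iCF - sqrBcksDim : 0;

    const int iCJG  = i * sqrBcksSize;
    const int iNJG  = iCJG + sqrBcksSize;
    const int iPJG  = (i > 0) ? iCJG - sqrBcksSize : 0;

    const int iCRJG = i * rectBcksSize;
    const int iPRJG = (i > 0) ? iCRJG - rectBcksSize : 0;

    // ... loop body using the per-iteration indexes ...
}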

Which one is more optimized for accessing array?

Solving the following exercise:
Write three different versions of a program to print the elements of
ia. One version should use a range for to manage the iteration, the
other two should use an ordinary for loop in one case using subscripts
and in the other using pointers. In all three programs write all the
types directly. That is, do not use a type alias, auto, or decltype to
simplify the code.[C++ Primer]
a question came up: Which of these methods for accessing array is optimized in terms of speed and why?
My Solutions:
Foreach Loop:
int ia[3][4]={{1,2,3,4},{5,6,7,8},{9,10,11,12}};
for (int (&i)[4] : ia)      // 1st method using for each loop
    for (int j : i)
        cout << j << " ";
Nested for loops:
for (int i = 0; i < 3; i++)      // 2nd method normal for loop
    for (int j = 0; j < 4; j++)
        cout << ia[i][j] << " ";
Using pointers:
int (*i)[4] = ia;
for (int t = 0; t < 3; i++, t++) {   // 3rd method, using pointers
    for (int x = 0; x < 4; x++)
        cout << (*i)[x] << " ";
}
Using auto:
for (auto &i : ia)      // 4th one using auto, but I think it is similar to the 1st
    for (auto j : i)
        cout << j << " ";
Benchmark result using clock()
1st: 3.6 (6,4,4,3,2,3)
2nd: 3.3 (6,3,4,2,3,2)
3rd: 3.1 (4,2,4,2,3,4)
4th: 3.6 (4,2,4,5,3,4)
Simulating each method 1000 times:
1st: 2.29375 2nd: 2.17592 3rd: 2.14383 4th: 2.33333
Process returned 0 (0x0) execution time : 13.568 s
Compiler used: MinGW 3.2 with the C++11 flag enabled. IDE: Code::Blocks.
I have some observations and points to make and I hope you get your answer from this.
The fourth version, as you mention yourself, is basically the same as the first version. auto can be thought of as only a coding shortcut (this is of course not strictly true, as using auto can result in getting different types than you'd expected and therefore result in different runtime behavior. But most of the time this is true.)
Your solution using pointers is probably not what people mean when they say that they are using pointers! One solution might be something like this:
for (int i = 0, *p = &(ia[0][0]); i < 3 * 4; ++i, ++p)
cout << *p << " ";
or to use two nested loops (which is probably pointless):
for (int i = 0, *p = &(ia[0][0]); i < 3; ++i)
for (int j = 0; j < 4; ++j, ++p)
cout << *p << " ";
from now on, I'm assuming this is the pointer solution you've written.
In such a trivial case as this, the part that will absolutely dominate your running time is the cout. The time spent in bookkeeping and checks for the loop(s) will be completely negligible comparing to doing I/O. Therefore, it won't matter which loop technique you use.
Modern compilers are great at optimizing such ubiquitous tasks and access patterns (iterating over an array.) Therefore, chances are that all these methods will generate exactly the same code (with the possible exception of the pointer version, which I will talk about later.)
The performance of most code like this depends more on the memory access pattern than on exactly how the compiler generates the branch instructions (and the rest of the operations). This is because if a required memory block is not in the CPU cache, it's going to take roughly several hundred CPU cycles (this is just a ballpark number) to fetch those bytes from RAM. Since all the examples access memory in exactly the same order, their behavior with respect to memory and cache will be the same and they will have roughly the same running time.
As a side note, the way these examples access memory is the best way for it to be accessed: linear, consecutive, and from start to finish. Again, there are problems with the cout in there, which can be a very complicated operation and may even call into the OS on every invocation, which might result in, among other things, an almost complete eviction of everything useful from the CPU cache.
On 32-bit systems and programs, the size of an int and a pointer are usually equal (both are 32 bits!) Which means that it doesn't matter much whether you pass around and use index values or pointers into arrays. On 64-bit systems however, a pointer is 64 bits but an int will still usually be 32 bits. This suggests that it is usually better to use indexes into arrays instead of pointers (or even iterators) on 64-bit systems and programs.
In this particular example, this is not significant at all though.
Your code is very specific and simple, but in the general case it is almost always better to give the compiler as much information about your code as possible. This means that you should use the narrowest, most specific device available to do the job. This in turn means that a generic for loop (i.e. for (int i = 0; i < n; ++i)) is worse for the compiler than a range-based for loop (i.e. for (auto i : v)), because in the latter case the compiler simply knows that you are going to iterate over the whole range without going outside of it or breaking out of the loop, while with the generic for loop, especially if your code is more complex, the compiler cannot be sure of this and has to insert extra checks and tests to make sure the code executes as the C++ standard says it should.
In many (most?) cases, although you might think performance matters, it does not. And most of the time you rewrite something to gain performance, you don't gain much. And most of the time the performance gain you get is not worth the loss in readability and maintainability that you sustain. So, design your code and data structures right (and keep performance in mind) but avoid this kind of "micro-optimization" because it's almost always not worth it and even harms the quality of the code too.
Generally, performance in terms of speed is very hard to reason about. Ideally you have to measure the time with real data on real hardware in real working conditions using sound scientific measuring and statistical methods. Even measuring the time it takes a piece of code to run is not at all trivial. Measuring performance is hard, and reasoning about it is harder, but these days it is the only way of recognizing bottlenecks and optimizing the code.
I hope I have answered your question.
EDIT: I wrote a very simple benchmark for what you are trying to do. The code is here. It's written for Windows and should be compilable on Visual Studio 2012 (because of the range-based for loops.) And here are the timing results:
Simple iteration (nested loops): min:0.002140, avg:0.002160, max:0.002739
Simple iteration (one loop): min:0.002140, avg:0.002160, max:0.002625
Pointer iteration (one loop): min:0.002140, avg:0.002160, max:0.003149
Range-based for (nested loops): min:0.002140, avg:0.002159, max:0.002862
Range(const ref)(nested loops): min:0.002140, avg:0.002155, max:0.002906
The relevant numbers are the "min" times (over 2000 runs of each test, for 1000x1000 arrays.) As you see, there is absolutely no difference between the tests. Note that you should turn on compiler optimizations or test 2 will be a disaster and cases 4 and 5 will be a little worse than 1 and 3.
And here are the code for the tests:
// 1. Simple iteration (nested loops)
unsigned sum = 0;
for (unsigned i = 0; i < gc_Rows; ++i)
    for (unsigned j = 0; j < gc_Cols; ++j)
        sum += g_Data[i][j];
// 2. Simple iteration (one loop)
unsigned sum = 0;
for (unsigned i = 0; i < gc_Rows * gc_Cols; ++i)
    sum += g_Data[i / gc_Cols][i % gc_Cols];
// 3. Pointer iteration (one loop)
unsigned sum = 0;
unsigned * p = &(g_Data[0][0]);
for (unsigned i = 0; i < gc_Rows * gc_Cols; ++i)
    sum += *p++;
// 4. Range-based for (nested loops)
unsigned sum = 0;
for (auto & i : g_Data)
    for (auto j : i)
        sum += j;
// 5. Range(const ref)(nested loops)
unsigned sum = 0;
for (auto const & i : g_Data)
    for (auto const & j : i)
        sum += j;
There are many factors affecting it:
It depends on the compiler
It depends on the compiler flags used
It depends on the computer used
There is only one way to know the exact answer: measuring the time used when dealing with huge arrays (maybe filled from a random number generator), which is the same method you have already used, except that the array size should be at least 1000x1000.
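As a small aid for such measurements, here is a minimal timing sketch, assuming some run_test() callable that walks the large array; std::chrono::steady_clock is generally preferable to clock() for this kind of wall-time measurement.
#include <chrono>

template <class F>
double average_seconds(F run_test, int repetitions = 1000)
{
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    for (int r = 0; r < repetitions; r++)
        run_test();                           // the code being measured
    const std::chrono::duration<double> elapsed = clock::now() - start;
    return elapsed.count() / repetitions;     // average seconds per run
}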

How do I put two increment statements in a C++ 'for' loop?

I would like to increment two variables in a for-loop condition instead of one.
So something like:
for (int i = 0; i != 5; ++i and ++j)
do_something(i, j);
What is the syntax for this?
A common idiom is to use the comma operator which evaluates both operands, and returns the second operand. Thus:
for(int i = 0; i != 5; ++i,++j)
do_something(i,j);
But is it really a comma operator?
Now, having written that, a commenter suggested it was actually some special syntactic sugar in the for statement, and not a comma operator at all. I checked that in GCC as follows:
int i=0;
int a=5;
int x=0;
for(i; i<5; x=i++,a++){
    printf("i=%d a=%d x=%d\n",i,a,x);
}
I was expecting x to pick up the original value of a, so it should have displayed 5,6,7.. for x. What I got was this
i=0 a=5 x=0
i=1 a=6 x=0
i=2 a=7 x=1
i=3 a=8 x=2
i=4 a=9 x=3
However, if I bracketed the expression to force the parser into really seeing a comma operator, I get this
int main(){
    int i=0;
    int a=5;
    int x=0;
    for(i=0; i<5; x=(i++,a++)){
        printf("i=%d a=%d x=%d\n",i,a,x);
    }
}
i=0 a=5 x=0
i=1 a=6 x=5
i=2 a=7 x=6
i=3 a=8 x=7
i=4 a=9 x=8
Initially I thought that this showed it wasn't behaving as a comma operator at all, but as it turns out, this is simply a precedence issue - the comma operator has the lowest possible precedence, so the expression x=i++,a++ is effectively parsed as (x=i++),a++
Thanks for all the comments, it was an interesting learning experience, and I've been using C for many years!
Try this
for(int i = 0; i != 5; ++i, ++j)
do_something(i,j);
Try not to do it!
From http://www.research.att.com/~bs/JSF-AV-rules.pdf:
AV Rule 199
The increment expression in a for loop will perform no action other than to change a single
loop parameter to the next value for the loop.
Rationale: Readability.
for (int i = 0; i != 5; ++i, ++j)
do_something(i, j);
I came here to remind myself how to code a second index into the increment clause of a FOR loop, something I knew could be done mainly from having observed it in a sample that I incorporated into another project, which was written in C++.
Today I am working in C#, but I felt sure that it would obey the same rules in this regard, since the FOR statement is one of the oldest control structures in all of programming. Thankfully, I had recently spent several days precisely documenting the behavior of a FOR loop in one of my older C programs, and I quickly realized that those studies held lessons that applied to today's C# problem, in particular to the behavior of the second index variable.
For the unwary, following is a summary of my observations. Everything I saw happening today, by carefully observing variables in the Locals window, confirmed my expectation that a C# FOR statement behaves exactly like a C or C++ FOR statement.
The first time a FOR loop executes, the increment clause (the 3rd of its three) is skipped. In Visual C and C++, the increment is generated as three machine instructions in the middle of the block that implements the loop, so that the initial pass runs the initialization code once only, then jumps over the increment block to execute the termination test. This implements the feature that a FOR loop executes zero or more times, depending on the state of its index and limit variables.
If the body of the loop executes, its last statement is a jump to the first of the three increment instructions that were skipped by the first iteration. After these execute, control falls naturally into the limit test code that implements the middle clause. The outcome of that test determines whether the body of the FOR loop executes, or whether control transfers to the next instruction past the jump at the bottom of its scope.
Since control transfers from the bottom of the FOR loop block to the increment block, the index variable is incremented before the test is executed. Not only does this behavior explain why you must code your limit clauses the way you learned, but it also affects any secondary increment that you add via the comma operator, because it becomes part of the third clause. Hence, it is not applied before the first iteration, but it is applied before the final test, the one after which the body never executes.
If either of your index variables remains in scope when the loop ends, the true index variable's value will be one step past the threshold that stops the loop. Likewise, if, for example, the second variable is initialized to zero before the loop is entered, its value at the end will be the iteration count, assuming that it is incremented (++), not decremented, and that nothing in the body of the loop changes its value.
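A small check of that ordering (initialize once, test, run the body, then increment before the next test): after the loop below, both counters have been stepped past the last value the body saw.
#include <cstdio>

int main()
{
    int i, j = 0;
    for (i = 0; i < 5; ++i, ++j)
        printf("body sees i=%d j=%d\n", i, j);   // prints 0..4 for both
    printf("after the loop: i=%d j=%d\n", i, j); // both are 5
    return 0;
}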
I agree with squelart. Incrementing two variables is bug prone, especially if you only test for one of them.
This is the readable way to do this:
int j = 0;
for(int i = 0; i < 5; ++i) {
    do_something(i, j);
    ++j;
}
For loops are meant for cases where your loop runs on one increasing/decreasing variable. For any other variable, change it in the loop.
If you need j to be tied to i, why not leave the original variable as is and add i?
for(int i = 0; i < 5; ++i) {
    do_something(i, a+i);
}
If your logic is more complex (for example, you need to actually monitor more than one variable), I'd use a while loop.
#include <cstdio>
int main(){
    int i=0;
    int a=0;
    for(i; i<5; i++,a++){
        printf("%d %d\n",a,i);
    }
}
Use Maths. If the two operations mathematically depend on the loop iteration, why not do the math?
int i, j;   // that have some meaningful values in them?
for( int counter = 0; counter < count_max; ++counter )
    do_something (counter+i, counter+j);
Or, more specifically referring to the OP's example:
for(int i = 0; i != 5; ++i)
do_something(i, j+i);
Especially if you're passing the values into a function by value, you should get something that does exactly what you want.