Updated, see below!
I have heard and read that C++0x allows a compiler to print "Hello" for the following snippet:
#include <iostream>

int main() {
    while (1)
        ;
    std::cout << "Hello" << std::endl;
}
It apparently has something to do with threads and optimization capabilities. It seems to me that this could surprise many people, though.
Does someone have a good explanation of why this was necessary to allow? For reference, the most recent C++0x draft says at 6.5/5
A loop that, outside of the for-init-statement in the case of a for statement,
makes no calls to library I/O functions, and
does not access or modify volatile objects, and
performs no synchronization operations (1.10) or atomic operations (Clause 29)
may be assumed by the implementation to terminate. [ Note: This is intended to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven. — end note ]
Edit:
This insightful article says about that Standard's text:
Unfortunately, the words "undefined behavior" are not used. However, anytime the standard says "the compiler may assume P," it is implied that a program which has the property not-P has undefined semantics.
Is that correct, and is the compiler allowed to print "Bye" for the above program?
There is an even more insightful thread here, about an analogous change to C, started by the guy who wrote the above-linked article. Among other useful facts, they present a solution that seems to also apply to C++0x (Update: this won't work anymore with n3225 - see below!):
endless:
goto endless;
A compiler is not allowed to optimize that away, it seems, because it's not a loop, but a jump. Another guy summarizes the proposed change in C++0x and C201X
By writing a loop, the programmer is asserting either that the
loop does something with visible behavior (performs I/O, accesses
volatile objects, or performs synchronization or atomic operations),
or that it eventually terminates. If I violate that assumption
by writing an infinite loop with no side effects, I am lying to the
compiler, and my program's behavior is undefined. (If I'm lucky,
the compiler might warn me about it.) The language doesn't provide
(no longer provides?) a way to express an infinite loop without
visible behavior.
Update on 3.1.2011 with n3225: the committee moved the text to 1.10/24, which says:
The implementation may assume that any thread will eventually do one of the following:
terminate,
make a call to a library I/O function,
access or modify a volatile object, or
perform a synchronization operation or an atomic operation.
The goto trick will not work anymore!
To me, the relevant justification is:
This is intended to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven.
Presumably, this is because proving termination mechanically is difficult, and the inability to prove termination hampers compilers which could otherwise make useful transformations, such as moving nondependent operations from before the loop to after or vice versa, performing post-loop operations in one thread while the loop executes in another, and so on. Without these transformations, a loop might block all other threads while they wait for the one thread to finish said loop. (I use "thread" loosely to mean any form of parallel processing, including separate VLIW instruction streams.)
EDIT: Dumb example:
while (complicated_condition()) {
    x = complicated_but_externally_invisible_operation(x);
}
complex_io_operation();
cout << "Results:" << endl;
cout << x << endl;
Here, it would be faster for one thread to do the complex_io_operation while the other is doing all the complex calculations in the loop. But without the clause you have quoted, the compiler has to prove two things before it can make the optimisation: 1) that complex_io_operation() doesn't depend on the results of the loop, and 2) that the loop will terminate. Proving 1) is pretty easy, proving 2) is the halting problem. With the clause, it may assume the loop terminates and get a parallelisation win.
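To make that concrete, here is a hand-written sketch (not actual compiler output) of the transformation the clause licenses, using C++11 threads; the helper functions are the stand-ins from the example above, with assumed signatures:

#include <iostream>
#include <thread>

// Assumed signatures for the stand-ins from the example above.
bool complicated_condition();
int complicated_but_externally_invisible_operation(int);
void complex_io_operation();

void transformed(int x) {
    std::thread io(complex_io_operation);   // start the I/O concurrently
    while (complicated_condition())
        x = complicated_but_externally_invisible_operation(x);
    io.join();                              // rejoin before printing the results
    std::cout << "Results:" << std::endl;
    std::cout << x << std::endl;
}

If the loop never terminated, complex_io_operation would now run even though the original program never reached it - which is exactly the reordering the termination assumption makes legal.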
I also imagine that the designers considered that the cases where infinite loops occur in production code are very rare and are usually things like event-driven loops which access I/O in some manner. As a result, they have pessimised the rare case (infinite loops) in favour of optimising the more common case (noninfinite, but difficult to mechanically prove noninfinite, loops).
It does, however, mean that infinite loops used in learning examples will suffer as a result, and will raise gotchas in beginner code. I can't say this is entirely a good thing.
EDIT: with respect to the insightful article you now link, I would say that "the compiler may assume X about the program" is logically equivalent to "if the program doesn't satisfy X, the behaviour is undefined". We can show this as follows: suppose there exists a program which does not satisfy property X. Where would the behaviour of this program be defined? The Standard only defines behaviour assuming property X is true. Although the Standard does not explicitly declare the behaviour undefined, it has declared it undefined by omission.
Consider a similar argument: "the compiler may assume a variable x is only assigned to at most once between sequence points" is equivalent to "assigning to x more than once between sequence points is undefined".
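The classic concrete instance of that second phrasing:

int i = 0;
i = i++ + ++i;   // i is modified twice between sequence points: undefined behaviour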
Does someone have a good explanation of why this was necessary to allow?
Yes, Hans Boehm provides a rationale for this in N1528: Why undefined behavior for infinite loops?. Although this is a WG14 document, the rationale applies to C++ as well, and the document refers to both WG14 and WG21:
As N1509 correctly points out, the current draft essentially gives
undefined behavior to infinite loops in 6.8.5p6. A major issue for
doing so is that it allows code to move across a potentially
non-terminating loop. For example, assume we have the following loops,
where count and count2 are global variables (or have had their address
taken), and p is a local variable, whose address has not been taken:
for (p = q; p != 0; p = p -> next) {
    ++count;
}

for (p = q; p != 0; p = p -> next) {
    ++count2;
}
Could these two loops be merged and replaced by the following loop?
for (p = q; p != 0; p = p -> next) {
    ++count;
    ++count2;
}
Without the special dispensation in 6.8.5p6 for infinite loops, this
would be disallowed: If the first loop doesn't terminate because q
points to a circular list, the original never writes to count2. Thus
it could be run in parallel with another thread that accesses or
updates count2. This is no longer safe with the transformed version
which does access count2 in spite of the infinite loop. Thus the
transformation potentially introduces a data race.
In cases like this, it is very unlikely that a compiler would be able
to prove loop termination; it would have to understand that q points
to an acyclic list, which I believe is beyond the ability of most
mainstream compilers, and often impossible without whole program
information.
The restrictions imposed by non-terminating loops are a restriction on
the optimization of terminating loops for which the compiler cannot
prove termination, as well as on the optimization of actually
non-terminating loops. The former are much more common than the
latter, and often more interesting to optimize.
There are clearly also for-loops with an integer loop variable in
which it would be difficult for a compiler to prove termination, and
it would thus be difficult for the compiler to restructure loops
without 6.8.5p6. Even something like
for (i = 1; i != 15; i += 2)
or
for (i = 1; i <= 10; i += j)
seems nontrivial to handle. (In the former case, some basic number
theory is required to prove termination, in the latter case, we need
to know something about the possible values of j to do so. Wrap-around
for unsigned integers may complicate some of this reasoning further.)
This issue seems to apply to almost all loop restructuring
transformations, including compiler parallelization and
cache-optimization transformations, both of which are likely to gain
in importance, and are already often important for numerical code.
This appears likely to turn into a substantial cost for the benefit of
being able to write infinite loops in the most natural way possible,
especially since most of us rarely write intentionally infinite loops.
One major difference with C is that C11 provides an exception for controlling expressions that are constant expressions; this differs from C++ and makes your specific example well-defined in C11.
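A minimal sketch of that difference (the C11 carve-out applies to controlling expressions that are constant expressions):

extern int keep_going;

void spin_forever(void) {
    while (1) { }          /* C11: constant controlling expression, exempt from the
                              termination assumption; C++: may be assumed to terminate */
}

void spin_until_flag(void) {
    while (keep_going) { } /* not a constant expression: both C11 and C++
                              may assume this loop terminates */
}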
I think the correct interpretation is the one from your edit: empty infinite loops are undefined behavior.
I wouldn't say it's particularly intuitive behavior, but this interpretation makes more sense than the alternative one, that the compiler is arbitrarily allowed to ignore infinite loops without invoking UB.
If infinite loops are UB, it just means that non-terminating programs aren't considered meaningful: according to C++0x, they have no semantics.
That does make a certain amount of sense too. They are a special case, where a number of side effects just no longer occur (for example, nothing is ever returned from main), and a number of compiler optimizations are hampered by having to preserve infinite loops. For example, moving computations across the loop is perfectly valid if the loop has no side effects, because eventually, the computation will be performed in any case.
But if the loop never terminates, we can't safely rearrange code across it, because we might just be changing which operations actually get executed before the program hangs. Unless we treat a hanging program as UB, that is.
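A small sketch of the kind of rearrangement being described (the helper names are invented for illustration and assumed side-effect-free):

bool cond(int);
int step(int);
int expensive_computation();   // independent of the loop's result

int f(int n) {
    while (cond(n))            // no side effects inside the loop
        n = step(n);
    // Under the termination assumption, expensive_computation() may be started
    // before or during the loop; if the loop could hang, that would change which
    // operations execute before the program stops making progress.
    return n + expensive_computation();
}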
The relevant issue is that the compiler is allowed to reorder code whose side effects do not conflict. The surprising order of execution could occur even if the compiler produced non-terminating machine code for the infinite loop.
I believe this is the right approach. The language spec defines ways to enforce order of execution. If you want an infinite loop that cannot be reordered around, write this:
volatile int dummy_side_effect;

while (1) {
    dummy_side_effect = 0;
}
printf("Never prints.\n");
I think this is along the lines of this type of question, which references another thread. Optimization can occasionally remove empty loops.
I think the issue could perhaps best be stated as: "If a later piece of code does not depend on an earlier piece of code, and the earlier piece of code has no side-effects on any other part of the system, the compiler's output may execute the later piece of code before, after, or intermixed with, the execution of the former, even if the former contains loops, without regard for when or whether the former code would actually complete." For example, the compiler could rewrite:
void testfermat(int n)
{
    int a=1, b=1, c=1;
    while (pow(a,n) + pow(b,n) != pow(c,n))
    {
        if (b > a) a++;
        else if (c > b) { a=1; b++; }
        else { a=1; b=1; c++; }
    }
    printf("The result is ");
    printf("%d/%d/%d", a, b, c);
}
as
void testfermat(int n)
{
    int a=1, b=1, c=1;
    if (fork_is_first_thread())
    {
        while (pow(a,n) + pow(b,n) != pow(c,n))
        {
            if (b > a) a++;
            else if (c > b) { a=1; b++; }
            else { a=1; b=1; c++; }
        }
        signal_other_thread_and_die();
    }
    else // Second thread
    {
        printf("The result is ");
        wait_for_other_thread();
    }
    printf("%d/%d/%d", a, b, c);
}
Generally not unreasonable, though I might worry that:
int total = 0;
for (i = 0; num_reps > i; i++)
{
    update_progress_bar(i);
    total += do_something_slow_with_no_side_effects(i);
}
show_result(total);
would become
int total = 0;
if (fork_is_first_thread())
{
    for (i = 0; num_reps > i; i++)
        total += do_something_slow_with_no_side_effects(i);
    signal_other_thread_and_die();
}
else
{
    for (i = 0; num_reps > i; i++)
        update_progress_bar(i);
    wait_for_other_thread();
}
show_result(total);
By having one CPU handle the calculations and another handle the progress bar updates, the rewrite would improve efficiency. Unfortunately, it would make the progress bar updates rather less useful than they should be.
Does someone have a good explanation of why this was necessary to allow?
I have an explanation of why it is necessary not to allow, assuming that C++ still has the ambition to be a general-purpose language suitable for performance-critical, low-level programming.
This used to be a working, strictly conforming freestanding C++ program:
int main()
{
    setup_interrupts();
    for (;;)
    {}
}
The above is a perfectly fine and common way to write main() in many low-end microcontroller systems. The whole program execution is done inside interrupt service routines (or in case of RTOS, it could be processes). And main() may absolutely not be allowed to return since these are bare metal systems and there is nobody to return to.
Typical real-world examples of where the above design can be used are PWM/motor driver microcontrollers, lighting control applications, simple regulator systems, sensor applications, simple household electronics and so on.
Changes to C++ have unfortunately made the language impossible to use for this kind of embedded systems programming. Existing real-world applications like the ones above will break in dangerous ways if ported to newer C++ compilers.
C++20 6.9.2.3 Forward progress [intro.progress]
The implementation may assume that any thread will eventually do one of the following:
(1.1) — terminate,
(1.2) — make a call to a library I/O function,
(1.3) — perform an access through a volatile glvalue, or
(1.4) — perform a synchronization operation or an atomic operation.
Let's address each bullet for the above, previously strictly conforming freestanding C++ program:
1.1. As any embedded systems beginner can tell the committee, bare metal/RTOS microcontroller systems never terminate. Therefore the loop cannot terminate either, or main() would turn into runaway code, which would be a severe and dangerous error condition.
1.2 N/A.
1.3 Not necessarily. One may argue that the for(;;) loop is the proper place to feed the watchdog, which in turn is a side effect as it writes to a volatile register. But there may be implementation reasons why you don't want to use a watchdog. At any rate, it is not C++'s business to dictate that a watchdog should be used.
1.4 N/A.
Because of this, C++ is made even more unsuitable for embedded systems applications than it already was before.
Another problem here is that the standard speaks of "threads", which are higher-level concepts. On real-world computers, threads are implemented through a low-level concept known as interrupts. Interrupts are similar to threads in terms of race conditions and concurrent execution, but they are not threads. On low-level systems there is just one core, and therefore only one interrupt at a time is typically executed (kind of like threads used to work on single-core PCs back in the day).
And you can't have threads if you can't have interrupts. So threads would have to be implemented in a lower-level language more suitable for embedded systems than C++, the options being assembler or C.
By writing a loop, the programmer is asserting either that the loop does something with visible behavior (performs I/O, accesses volatile objects, or performs synchronization or atomic operations), or that it eventually terminates.
This is completely misguided and clearly written by someone who has never worked with microcontroller programming.
So what should those few remaining C++ embedded programmers who refuse to port their code to C "because of reasons" do? You have to introduce a side effect inside the for(;;) loop:
int main()
{
    setup_interrupts();
    for (volatile int i = 0; i == 0;)
    {}
}
Or if you have the watchdog enabled as you ought to, feed it inside the for(;;) loop.
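For instance (a hedged sketch: the register name and address are hypothetical, as every part defines its own watchdog interface):

void setup_interrupts(void);

// Hypothetical memory-mapped watchdog "feed" register.
#define WDT_FEED (*(volatile unsigned*)0x40001000u)

int main()
{
    setup_interrupts();
    for (;;)
    {
        WDT_FEED = 0xA5u;   // volatile write: a side effect the compiler must keep
    }
}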
Whether a loop is infinite at all is not decidable for the compiler in non-trivial cases.
In some cases, the optimiser can even reach a better complexity class for your code (e.g. it was O(n^2) and you get O(n) or O(1) after optimisation). So, including a rule in the C++ standard that disallows removing an infinite loop would make many optimisations impossible. And most people don't want this. I think this quite answers your question.
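As a concrete illustration of the complexity-class point (a sketch; mainstream optimisers really do this kind of reduction):

unsigned sum_below(unsigned n) {
    unsigned s = 0;
    for (unsigned i = 0; i < n; ++i)   // side-effect-free and provably terminating
        s += i;
    return s;   // typically reduced to the closed form n*(n-1)/2: O(n) -> O(1)
}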
Another thing: I have never seen any valid example where you need an infinite loop which does nothing.
The one example I have heard about was an ugly hack that really should be solved otherwise: It was about embedded systems where the only way to trigger a reset was to freeze the device so that the watchdog restarts it automatically.
If you know any valid/good example where you need an infinite loop which does nothing, please tell me.
I think it's worth pointing out that loops which would be infinite except for the fact that they interact with other threads via non-volatile, non-synchronised variables can now yield incorrect behaviour with a new compiler.
In other words, make your globals volatile -- as well as arguments passed into such a loop via pointer/reference.
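For example, a sketch using C++11 atomics; with a plain non-volatile, non-atomic flag, the compiler could hoist the load out of the loop, leaving an empty, side-effect-free loop behind:

#include <atomic>

std::atomic<bool> done{false};   // set to true by another thread

void wait_for_done() {
    while (!done.load(std::memory_order_acquire)) {
        // the atomic load on each iteration satisfies the forward-progress rule
    }
}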
In VHDL, there are two types of signal assignment:

concurrent ----> when...else, select...when...else
sequential ----> if...else, case...when
Problem is that some say that when...else conditions are checked line by line (kind of sequential), while select...when...else conditionals are checked once. See this reference for example.
I say that when...else is also a sequential assignment because you are checking line by line. In other words, I see no need to say that if...else within a process is equivalent to when...else. Why do they assume when...else is a concurrent assignment?
What you are hinting at in your problem has nothing to do with concurrent assignments or sequential statements. It has more to do with the difference between if and case. Before we get to that, let's first understand a few equivalents. The concurrent conditional assignment:
Y <= A when ASel = '1' else B when BSel = '1' else C ;
Is exactly equivalent to a process with the following code:
process(A, ASel, B, BSel, C)
begin
    if ASel = '1' then
        Y <= A;
    elsif BSel = '1' then
        Y <= B;
    else
        Y <= C;
    end if;
end process;
Likewise the concurrent selected assignment:
with MuxSel select
    Y <= A when "00",
         B when "01",
         C when others;
Is equivalent to a process with the following:
process(MuxSel, A, B, C)
begin
    case MuxSel is
        when "00"   => Y <= A;
        when "01"   => Y <= B;
        when others => Y <= C;
    end case;
end process;
From a coding perspective, the sequential forms above have a little more coding capability than the assignment form because case and if allow blocks of code, where the assignment form only assigns to one signal. However other than that, they have the same language restrictions and produce the same hardware (as much as synthesis tools do that). In addition for many simple hardware problems, the assignment form works well and is a concise capture of the problem.
So where your thoughts are leading really comes down to the difference between if and case. If statements (and their equivalent conditional assignments) that have multiple "elsif" in (or implied in) them tend to create priority logic, or at least cascaded logic. Whereas case (and their equivalent selected assignments) tend to be well suited for things like multiplexers, and their logic structure tends to be more of a balanced tree structure.
Sometimes tools will refactor an if statement to allow it to be equivalent to a case statement. Also, for some targets (particularly LUT-based logic like Xilinx and Altera), the difference between them in terms of hardware efficiency does not show up until there are enough "elsif" branches.
With VHDL-2008, the assignment forms are also allowed in sequential code. The transformation is the same except without the process wrapper.
Concurrent vs Sequential is about independence of execution.
A concurrent statement is simply a statement that is evaluated and/or executed independently of the code that surrounds it. Processes are concurrent. Component/Entity Instances are concurrent. Signal assignments and procedure calls that are done in the architecture are concurrent.
Sequential statements (other than wait) run when the code around it also runs.
Interesting note, while a process is concurrent (because it runs independently of other processes and concurrent assignments), it contains sequential statements.
Often when we write RTL code, the processes that we write are simple enough that it is hard to see the sequential nature of them. It really takes a statemachine or a testbench to see the true sequential nature of a process.
Everyone is aware of Dijkstra's Letters to the editor: go to statement considered harmful (also here .html transcript and here .pdf) and there has been a formidable push since that time to eschew the goto statement whenever possible. While it's possible to use goto to produce unmaintainable, sprawling code, it nevertheless remains in modern programming languages. Even the advanced continuation control structure in Scheme can be described as a sophisticated goto.
What circumstances warrant the use of goto? When is it best to avoid?
As a follow-up question: C provides a pair of functions, setjmp() and longjmp(), that provide the ability to goto not just within the current stack frame but within any of the calling frames. Should these be considered as dangerous as goto? More dangerous?
Dijkstra himself regretted that title, for which he was not responsible. At the end of EWD1308 (also here .pdf) he wrote:
Finally a short story for the record. In 1968, the Communications of the ACM published a text of mine under the title "The goto statement considered harmful", which in later years would be most frequently referenced, regrettably, however, often by authors who had seen no more of it than its title, which became a cornerstone of my fame by becoming a template: we would see all sorts of articles under the title "X considered harmful" for almost any X, including one titled "Dijkstra considered harmful". But what had happened? I had submitted a paper under the title "A case against the goto statement", which, in order to speed up its publication, the editor had changed into a "letter to the Editor", and in the process he had given it a new title of his own invention! The editor was Niklaus Wirth.
A well-thought-out classic paper about this topic, to be matched with that of Dijkstra, is Structured Programming with go to Statements, by Donald E. Knuth. Reading both helps to reestablish context and a non-dogmatic understanding of the subject. In this paper, Dijkstra's opinion on this case is reported and is even stronger:
Donald E. Knuth: I believe that by presenting such a view I am not in fact disagreeing sharply with Dijkstra's ideas, since he recently wrote the following: "Please don't fall into the trap of believing that I am terribly dogmatical about [the go to statement]. I have the uncomfortable feeling that others are making a religion out of it, as if the conceptual problems of programming could be solved by a single trick, by a simple form of coding discipline!"
A coworker of mine said the only reason to use a GOTO is if you programmed yourself so far into a corner that it is the only way out. In other words, proper design ahead of time and you won't need to use a GOTO later.
I thought this comic illustrates that beautifully "I could restructure the program's flow, or use one little 'GOTO' instead." A GOTO is a weak way out when you have weak design. Velociraptors prey on the weak.
The following statements are generalizations; while it is always possible to plead exception, it usually (in my experience and humble opinion) isn't worth the risks.
Unconstrained use of memory addresses (either GOTO or raw pointers) provides too many opportunities to make easily avoidable mistakes.
The more ways there are to arrive at a particular "location" in the code, the less confident one can be about what the state of the system is at that point. (See below.)
Structured programming IMHO is less about "avoiding GOTOs" and more about making the structure of the code match the structure of the data. For example, a repeating data structure (e.g. array, sequential file, etc.) is naturally processed by a repeated unit of code. Having built-in structures (e.g. while, for, until, for-each, etc.) allows the programmer to avoid the tedium of repeating the same cliched code patterns.
Even if GOTO is a low-level implementation detail (not always the case!), it's below the level that the programmer should be thinking at. How many programmers balance their personal checkbooks in raw binary? How many programmers worry about which sector on the disk contains a particular record, instead of just providing a key to a database engine (and how many ways could things go wrong if we really wrote all of our programs in terms of physical disk sectors)?
Footnotes to the above:
Regarding point 2, consider the following code:
a = b + 1
/* do something with a */
At the "do something" point in the code, we can state with high confidence that a is greater than b. (Yes, I'm ignoring the possibility of untrapped integer overflow. Let's not bog down a simple example.)
On the other hand, if the code had read this way:
...
goto 10
...
a = b + 1
10: /* do something with a */
...
goto 10
...
The multiplicity of ways to get to label 10 means that we have to work much harder to be confident about the relationships between a and b at that point. (In fact, in the general case it's undecidable!)
Regarding point 4, the whole notion of "going someplace" in the code is just a metaphor. Nothing is really "going" anywhere inside the CPU except electrons and photons (for the waste heat). Sometimes we give up a metaphor for another, more useful, one. I recall encountering (a few decades ago!) a language where
if (some condition) {
    action-1
} else {
    action-2
}
was implemented on a virtual machine by compiling action-1 and action-2 as out-of-line parameterless routines, then using a single two-argument VM opcode which used the boolean value of the condition to invoke one or the other. The concept was simply "choose what to invoke now" rather than "go here or go there". Again, just a change of metaphor.
Sometimes it is valid to use GOTO as an alternative to exception handling within a single function:
if (f() == false) goto err_cleanup;
if (g() == false) goto err_cleanup;
if (h() == false) goto err_cleanup;
return;
err_cleanup:
...
COM code seems to fall into this pattern fairly often.
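A common variant of the same idiom is the staged cleanup ladder, with one label per acquired resource (a sketch: process_file and the buffer size are made up, and all declarations precede the first goto so no jump crosses an initialization):

#include <stdio.h>
#include <stdlib.h>

int process_file(const char *path)
{
    int rc = -1;
    char *buf = NULL;
    FILE *f = fopen(path, "rb");
    if (!f) goto out;
    buf = (char *)malloc(4096);
    if (!buf) goto close_f;
    /* ... work with f and buf ... */
    rc = 0;
    free(buf);
close_f:
    fclose(f);
out:
    return rc;
}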
I can only recall using a goto once. I had a series of five nested counted loops and I needed to be able to break out of the entire structure from the inside early based on certain conditions:
for {
    for {
        for {
            for {
                for {
                    if (stuff) {
                        GOTO ENDOFLOOPS;
                    }
                }
            }
        }
    }
}
ENDOFLOOPS:
I could just have easily declared a boolean break variable and used it as part of the conditional for each loop, but in this instance I decided a GOTO was just as practical and just as readable.
No velociraptors attacked me.
Goto is extremely low on my list of things to include in a program just for the sake of it. That doesn't mean it's unacceptable.
Goto can be nice for state machines. A switch statement in a loop is (in order of typical importance): (a) not actually representative of the control flow, (b) ugly, (c) potentially inefficient depending on language and compiler. So you end up writing one function per state, and doing things like "return NEXT_STATE;" which even look like goto.
Granted, it is difficult to code state machines in a way which make them easy to understand. However, none of that difficulty is to do with using goto, and none of it can be reduced by using alternative control structures. Unless your language has a 'state machine' construct. Mine doesn't.
On those rare occasions when your algorithm really is most comprehensible in terms of a path through a sequence of nodes (states) connected by a limited set of permissible transitions (gotos), rather than by any more specific control flow (loops, conditionals, whatnot), then that should be explicit in the code. And you ought to draw a pretty diagram.
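A sketch of what that looks like when the states are spelled out with labels (get_event and the event set are invented for the example):

enum Event { GO, STOP, QUIT };
Event get_event();   // assumed input source

void run_machine() {
idle:
    switch (get_event()) {
        case GO:   goto running;
        case QUIT: return;
        default:   goto idle;
    }
running:
    switch (get_event()) {
        case STOP: goto idle;
        case QUIT: return;
        default:   goto running;
    }
}

Each label is a state and each goto a transition, which maps one-to-one onto the state diagram.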
setjmp/longjmp can be nice for implementing exceptions or exception-like behaviour. While not universally praised, exceptions are generally considered a "valid" control structure.
setjmp/longjmp are 'more dangerous' than goto in the sense that they're harder to use correctly, never mind comprehensibly.
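For reference, a minimal sketch of the exception-like usage (note that in C++, a longjmp that would skip non-trivial destructors is undefined behaviour):

#include <csetjmp>
#include <cstdio>

static std::jmp_buf on_error;

static void deep_work(bool fail) {
    if (fail)
        std::longjmp(on_error, 42);   // "throw": control re-enters setjmp below
    std::puts("deep_work succeeded");
}

int main() {
    if (setjmp(on_error) != 0) {      // "catch": returns non-zero via longjmp
        std::puts("recovered from failure");
        return 1;
    }
    deep_work(true);                  // "try"
    return 0;
}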
There never has been, nor will there ever be, any language in which it is the least bit difficult to write bad code. -- Donald Knuth
Taking goto out of C would not make it any easier to write good code in C. In fact, it would rather miss the point that C is supposed to be capable of acting as a glorified assembler language.
Next it'll be "pointers considered harmful", then "duck typing considered harmful". Then who will be left to defend you when they come to take away your unsafe programming construct? Eh?
We already had this discussion and I stand by my point.
Furthermore, I'm fed up with people describing higher-level language structures as “goto in disguise” because they clearly haven't got the point at all. For example:
Even the advanced continuation control structure in Scheme can be described as a sophisticated goto.
That is complete nonsense. Every control structure can be implemented in terms of goto but this observation is utterly trivial and useless. goto isn't considered harmful because of its positive effects but because of its negative consequences and these have been eliminated by structured programming.
Similarly, saying “GOTO is a tool, and as all tools, it can be used and abused” is completely off the mark. No modern construction worker would use a rock and claim it “is a tool.” Rocks have been replaced by hammers. goto has been replaced by control structures. If the construction worker were stranded in the wild without a hammer, of course he would use a rock instead. If a programmer has to use an inferior programming language that doesn't have feature X, well, of course she may have to use goto instead. But if she uses it anywhere else instead of the appropriate language feature she clearly hasn't understood the language properly and uses it wrongly. It's really as simple as that.
In Linux: Using goto In Kernel Code on Kernel Trap, there's a discussion with Linus Torvalds and a "new guy" about the use of GOTOs in Linux code. There are some very good points there, and Linus showing his usual arrogance :)
Some passages:
Linus: "No, you've been brainwashed by
CS people who thought that Niklaus
Wirth actually knew what he was
talking about. He didn't. He doesn't
have a frigging clue."
-
Linus: "I think goto's are fine, and
they are often more readable than
large amounts of indentation."
-
Linus: "Of course, in stupid languages
like Pascal, where labels cannot be
descriptive, goto's can be bad."
In C, goto only works within the scope of the current function, which tends to localise any potential bugs. setjmp and longjmp are far more dangerous, being non-local, complicated and implementation-dependent. In practice however, they're too obscure and uncommon to cause many problems.
I believe that the danger of goto in C is greatly exaggerated. Remember that the original goto arguments took place back in the days of languages like old-fashioned BASIC, where beginners would write spaghetti code like this:
3420 IF A > 2 THEN GOTO 1430
Here Linus describes an appropriate use of goto: http://www.kernel.org/doc/Documentation/CodingStyle (chapter 7).
Today, it's hard to see the big deal about the GOTO statement because the "structured programming" people mostly won the debate and today's languages have sufficient control flow structures to avoid GOTO.
Count the number of gotos in a modern C program. Now add the number of break, continue, and return statements. Furthermore, add the number of times you use if, else, while, switch or case. That's about how many GOTOs your program would have had if you were writing in FORTRAN or BASIC in 1968 when Dijkstra wrote his letter.
Programming languages at the time were lacking in control flow. For example, in the original Dartmouth BASIC:
IF statements had no ELSE. If you wanted one, you had to write:
100 IF NOT condition THEN GOTO 200
...stuff to do if condition is true...
190 GOTO 300
200 REM else
...stuff to do if condition is false...
300 REM end if
Even if your IF statement didn't need an ELSE, it was still limited to a single line, which usually consisted of a GOTO.
There was no DO...LOOP statement. For non-FOR loops, you had to end the loop with an explicit GOTO or IF...GOTO back to the beginning.
There was no SELECT CASE. You had to use ON...GOTO.
So, you ended up with a lot of GOTOs in your program. And you couldn't depend on the restriction of GOTOs to within a single subroutine (because GOSUB...RETURN was such a weak concept of subroutines), so these GOTOs could go anywhere. Obviously, this made control flow hard to follow.
This is where the anti-GOTO movement came from.
Go To can provide a sort of stand-in for "real" exception handling in certain cases. Consider:
ptr = malloc(size);
if (!ptr) goto label_fail;
bytes_in = read(f_in, ptr, size);
if (bytes_in <= 0) goto label_fail;
bytes_out = write(f_out, ptr, bytes_in);
if (bytes_out != bytes_in) goto label_fail;
Obviously this code was simplified to take up less space, so don't get too hung up on the details. But consider an alternative I've seen all too many times in production code by coders going to absurd lengths to avoid using goto:
success = false;
do {
    ptr = malloc(size);
    if (!ptr) break;
    bytes_in = read(f_in, ptr, size);
    if (bytes_in <= 0) break;
    bytes_out = write(f_out, ptr, bytes_in);
    if (bytes_out != bytes_in) break;
    success = true;
} while (false);
Now functionally this code does the exact same thing. In fact, the code generated by the compiler is nearly identical. However, in the programmer's zeal to appease Nogoto (the dreaded god of academic rebuke), this programmer has completely broken the underlying idiom that the while loop represents, and did a real number on the readability of the code. This is not better.
So, the moral of the story is, if you find yourself resorting to something really stupid in order to avoid using goto, then don't.
Donald E. Knuth answered this question in the book "Literate Programming", 1992 CSLI. On p. 17 there is an essay "Structured Programming with goto Statements" (PDF). I think the article might have been published in other books as well.
The article describes Dijkstra's suggestion and describes the circumstances where this is valid. But he also gives a number of counter examples (problems and algorithms) which cannot be easily reproduced using structured loops only.
The article contains a complete description of the problem, the history, examples and counter examples.
Goto considered helpful.
I started programming in 1975. To 1970s-era programmers, the words "goto considered harmful" said more or less that new programming languages with modern control structures were worth trying. We did try the new languages. We quickly converted. We never went back.
We never went back, but, if you are younger, then you have never been there in the first place.
Now, a background in ancient programming languages may not be very useful except as an indicator of the programmer's age. Notwithstanding, younger programmers lack this background, so they no longer understand the message the slogan "goto considered harmful" conveyed to its intended audience at the time it was introduced.
Slogans one does not understand are not very illuminating. It is probably best to forget such slogans. Such slogans do not help.
This particular slogan however, "Goto considered harmful," has taken on an undead life of its own.
Can goto not be abused? Answer: sure, but so what? Practically every programming element can be abused. The humble bool for example is abused more often than some of us would like to believe.
By contrast, I cannot remember meeting a single, actual instance of goto abuse since 1990.
The biggest problem with goto is probably not technical but social. Programmers who do not know very much sometimes seem to feel that deprecating goto makes them sound smart. You might have to satisfy such programmers from time to time. Such is life.
The worst thing about goto today is that it is not used enough.
Attracted by Jay Ballou adding an answer, I'll add my £0.02. If Bruno Ranschaert had not already done so, I'd have mentioned Knuth's "Structured Programming with GOTO Statements" article.
One thing that I've not seen discussed is the sort of code that, while not exactly common, was taught in Fortran text books. Things like the extended range of a DO loop and open-coded subroutines (remember, this would be Fortran II, or Fortran IV, or Fortran 66 - not Fortran 77 or 90). There's at least a chance that the syntactic details are inexact, but the concepts should be accurate enough. The snippets in each case are inside a single function.
Note that the excellent but dated (and out of print) book 'The Elements of Programming Style, 2nd Edn' by Kernighan & Plauger includes some real-life examples of abuse of GOTO from programming text books of its era (late-70s). The material below is not from that book, however.
Extended range for a DO loop
      do 10 i = 1,30
        ...blah...
        ...blah...
        if (k.gt.4) goto 37
   91   ...blah...
        ...blah...
   10 continue
      ...blah...
      return
   37 ...some computation...
      goto 91
One reason for such nonsense was the good old-fashioned punch-card. You might notice that the labels (nicely out of sequence because that was canonical style!) are in column 1 (actually, they had to be in columns 1-5) and the code is in columns 7-72 (column 6 was the continuation marker column). Columns 73-80 would be given a sequence number, and there were machines that would sort punch card decks into sequence number order. If you had your program on sequenced cards and needed to add a few cards (lines) into the middle of a loop, you'd have to repunch everything after those extra lines. However, if you replaced one card with the GOTO stuff, you could avoid resequencing all the cards - you just tucked the new cards at the end of the routine with new sequence numbers. Consider it to be the first attempt at 'green computing' - a saving of punch cards (or, more specifically, a saving of retyping labour - and a saving of consequential rekeying errors).
Oh, you might also note that I'm cheating and not shouting - Fortran IV was written in all upper-case normally.
Open-coded subroutine
      ...blah...
      i = 1
      goto 76
  123 ...blah...
      ...blah...
      i = 2
      goto 76
   79 ...blah...
      ...blah...
      goto 54
      ...blah...
   12 continue
      return
   76 ...calculate something...
      ...blah...
      goto (123, 79) i
   54 ...more calculation...
      goto 12
The GOTO between labels 76 and 54 is a version of computed goto. If the variable i has the value 1, goto the first label in the list (123); if it has the value 2, goto the second, and so on. The fragment from 76 to the computed goto is the open-coded subroutine. It was a piece of code executed rather like a subroutine, but written out in the body of a function. (Fortran also had statement functions - which were embedded functions that fitted on a single line.)
There were worse constructs than the computed goto - you could assign labels to variables and then use an assigned goto. Googling assigned goto tells me it was deleted from Fortran 95. Chalk one up for the structured programming revolution which could fairly be said to have started in public with Dijkstra's "GOTO Considered Harmful" letter or article.
Without some knowledge of the sorts of things that were done in Fortran (and in other languages, most of which have rightly fallen by the wayside), it is hard for us newcomers to understand the scope of the problem which Dijkstra was dealing with. Heck, I didn't start programming until ten years after that letter was published (but I did have the misfortune to program in Fortran IV for a while).
There is no such thing as GOTO considered harmful.
GOTO is a tool and, like all tools, it can be used and abused.
There are, however, many tools in the programming world that have a tendency to be abused more than used, and GOTO is one of them. The WITH statement of Delphi is another.
Personally I don't use either in typical code, but I've had the odd usage of both GOTO and WITH that were warranted, and an alternative solution would've contained more code.
The best solution would be for the compiler to just warn you that the keyword was tainted, and you'd have to stuff a couple of pragma directives around the statement to get rid of the warnings.
It's like telling your kids not to run with scissors. Scissors are not bad, but some uses of them are perhaps not the best way to keep your health.
Since I began doing a few things in the linux kernel, gotos don't bother me so much as they once did. At first I was sort of horrified to see they (kernel guys) added gotos into my code. I've since become accustomed to the use of gotos, in some limited contexts, and will now occasionally use them myself. Typically, it's a goto that jumps to the end of a function to do some kind of cleanup and bail out, rather than duplicating that same cleanup and bailout in several places in the function. And typically, it's not something large enough to hand off to another function -- e.g. freeing some locally (k)malloc'ed variables is a typical case.
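To make the pattern concrete, here is a minimal sketch of that cleanup-and-bail style (a toy example of my own, not actual kernel code; the function and buffer names are invented):
#include <stdlib.h>

int process(size_t n)
{
    int ret = -1;                 /* pessimistic default */
    char *buf1 = malloc(n);
    if (!buf1)
        goto out;
    char *buf2 = malloc(n);
    if (!buf2)
        goto out_free_buf1;

    /* ... the real work with buf1 and buf2 goes here ... */
    ret = 0;

    free(buf2);
out_free_buf1:
    free(buf1);
out:
    return ret;
}
Each error path releases exactly what was acquired so far, and there is a single exit point instead of duplicated cleanup code.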
I've written code that used setjmp/longjmp only once. It was in a MIDI drum sequencer program. Playback happened in a separate process from all user interaction, and the playback process used shared memory with the UI process to get the limited info it needed to do the playback. When the user wanted to stop playback, the playback process just did a longjmp "back to the beginning" to start over, rather than some complicated unwinding of wherever it happened to be executing when the user wanted it to stop. It worked great, was simple, and I never had any problems or bugs related to it in that instance.
setjmp/longjmp have their place -- but that place is one you'll not likely visit but once in a very long while.
Edit: I just looked at the code. It was actually siglongjmp() that I used, not longjmp (not that it's a big deal, but I had forgotten that siglongjmp even existed.)
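For the curious, the shape of it was roughly this (a from-memory sketch, not the original code; play_events and wait_for_start are hypothetical stand-ins):
#include <setjmp.h>
#include <signal.h>

static sigjmp_buf restart_point;

static void on_stop_request(int sig)
{
    (void)sig;
    siglongjmp(restart_point, 1);   /* abandon playback, wherever it was */
}

int main(void)
{
    signal(SIGUSR1, on_stop_request);
    for (;;) {
        if (sigsetjmp(restart_point, 1) == 0) {
            /* play_events();  -- hypothetical: run playback to completion */
        }
        /* reached normally or via siglongjmp: back at the beginning */
        /* wait_for_start(); -- hypothetical: block until the next start */
    }
}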
It never was, as long as you were able to think for yourself.
Because goto can be used for confusing metaprogramming
Goto is both a high-level and a low-level control expression, and as a result it just doesn't have an appropriate design pattern suitable for most problems.
It's low-level in the sense that a goto is a primitive operation that implements something higher-level like while or foreach.
It's high-level in the sense that, when used in certain ways, it takes code that executes in a clear, uninterrupted sequence, except for structured loops, and changes it into pieces of logic that are, with enough gotos, a grab-bag of logic being dynamically reassembled.
So, there is a prosaic and an evil side to goto.
The prosaic side is that an upward pointing goto can implement a perfectly reasonable loop and a downward-pointing goto can do a perfectly reasonable break or return. Of course, an actual while, break, or return would be a lot more readable, as the poor human wouldn't have to simulate the effect of the goto in order to get the big picture. So, a bad idea in general.
The evil side involves a routine not using goto for while, break, or return, but using it for what's called spaghetti logic. In this case the goto-happy developer is constructing pieces of code out of a maze of goto's, and the only way to understand it is to simulate it mentally as a whole, a terribly tiring task when there are many goto's. I mean, imagine the trouble of evaluating code where the else is not precisely an inverse of the if, where nested ifs might allow in some things that were rejected by the outer if, etc, etc.
Finally, to really cover the subject, we should note that essentially all early languages except Algol initially made only single statements subject to their versions of if-then-else. So, the only way to do a conditional block was to goto around it using an inverse conditional. Insane, I know, but I've read some old specs. Remember that the first computers were programmed in binary machine code so I suppose any kind of an HLL was a lifesaver; I guess they weren't too picky about exactly what HLL features they got.
Having said all that, I used to stick one goto into every program I wrote, "just to annoy the purists".
If you're writing a VM in C, it turns out that using (gcc's) computed gotos like this:
char acc;                /* accumulator */
char ram[256];           /* VM memory */

char run(char *pc) {
    /* GCC extension: "labels as values" -- &&label yields the label's address */
    static void *opcodes[3] = {&&op_inc, &&op_lda_direct, &&op_hlt};
#define NEXT_INSTR(stride) goto *(opcodes[(unsigned char)*(pc += stride)])
    NEXT_INSTR(0);                        /* dispatch the first opcode */
op_inc:
    ++acc;
    NEXT_INSTR(1);
op_lda_direct:
    acc = ram[(unsigned char)*++pc];      /* operand byte is a direct address */
    NEXT_INSTR(1);                        /* skip past the operand */
op_hlt:
    return acc;
}
works much faster than the conventional switch inside a loop, since each opcode handler ends in its own indirect jump, which branch predictors handle better than a single shared dispatch branch.
Denying the use of the GOTO statement to programmers is like telling a carpenter not to use a hammer because it might damage the wall while he is hammering in a nail. A real programmer knows how and when to use a GOTO. I’ve followed behind some of these so-called ‘structured programs’, and I’ve seen such horrid code written just to avoid using a GOTO that I could shoot the programmer. OK, in defense of the other side, I’ve seen some real spaghetti code too, and again, those programmers should be shot as well.
Here is just one small example of code I’ve found.
YORN = ''
LOOP
UNTIL YORN = 'Y' OR YORN = 'N' DO
CRT 'Is this correct? (Y/N) : ':
INPUT YORN
REPEAT
IF YORN = 'N' THEN
CRT 'Aborted!'
STOP
END
-----------------------OR----------------------
10: CRT 'Is this Correct (Y)es/(N)o ':
INPUT YORN
IF YORN='N' THEN
CRT 'Aborted!'
STOP
ENDIF
IF YORN<>'Y' THEN GOTO 10
"In this link http://kerneltrap.org/node/553/2131"
Ironically, eliminating the goto introduced a bug: the spinlock call was omitted.
The original paper should be thought of as "Unconditional GOTO Considered Harmful". It was in particular advocating a form of programming based on conditional (if) and iterative (while) constructs, rather than the test-and-jump common to early code. goto is still useful in some languages or circumstances, where no appropriate control structure exists.
About the only place I agree Goto could be used is when you need to deal with errors, and each particular point an error occurs requires special handling.
For instance, if you're grabbing resources and using semaphores or mutexes, you have to grab them in order and you should always release them in the opposite manner.
Some code requires a very odd pattern of grabbing these resources, and you can't just write an easily maintained and understood control structure to correctly handle both the grabbing and releasing of these resources to avoid deadlock.
It's always possible to do it right without goto, but in this case and a few others Goto is actually the better solution primarily for readability and maintainability.
-Adam
One modern GOTO usage is by the C# compiler to create state machines for enumerators defined by yield return.
GOTO is something that should be used by compilers and not programmers.
Until C and C++ (amongst other culprits) have labelled breaks and continues, goto will continue to have a role.
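For instance, here is a sketch of the usual stand-in (the grid-search function is invented for illustration):
int find(int grid[10][10], int target, int *row, int *col)
{
    for (int r = 0; r < 10; r++) {
        for (int c = 0; c < 10; c++) {
            if (grid[r][c] == target) {
                *row = r;
                *col = c;
                goto found;    /* what "break outer;" would say elsewhere */
            }
        }
    }
    return 0;                  /* not found */
found:
    return 1;
}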
If GOTO itself were evil, compilers would be evil, because they generate JMPs. If jumping into a block of code, especially following a pointer, were inherently evil, the RETurn instruction would be evil. Rather, the evil is in the potential for abuse.
At times I have had to write apps that had to keep track of a number of objects where each object had to follow an intricate sequence of states in response to events, but the whole thing was definitely single-thread. A typical sequence of states, if represented in pseudo-code would be:
request something
wait for it to be done
while some condition
request something
wait for it
if one response
while another condition
request something
wait for it
do something
endwhile
request one more thing
wait for it
else if some other response
... some other similar sequence ...
... etc, etc.
endwhile
I'm sure this is not new, but the way I handled it in C(++) was to define some macros:
#define WAIT(n) do{state=(n); enque(this); return; L##n:;}while(0)
#define DONE state = -1
#define DISPATCH0 if (state < 0) return;
#define DISPATCH1 if(state==1) goto L1; DISPATCH0
#define DISPATCH2 if(state==2) goto L2; DISPATCH1
#define DISPATCH3 if(state==3) goto L3; DISPATCH2
#define DISPATCH4 if(state==4) goto L4; DISPATCH3
... as needed ...
Then (assuming state is initially 0) the structured state machine above turns into the structured code:
{
DISPATCH4; // or as high a number as needed
request something;
WAIT(1); // each WAIT has a different number
while (some condition){
request something;
WAIT(2);
if (one response){
while (another condition){
request something;
WAIT(3);
do something;
}
request one more thing;
WAIT(4);
}
else if (some other response){
... some other similar sequence ...
}
... etc, etc.
}
DONE;
}
With a variation on this, there can be CALL and RETURN, so some state machines can act like subroutines of other state machines.
Is it unusual? Yes. Does it take some learning on the part of the maintainer? Yes. Does that learning pay off? I think so. Could it be done without GOTOs that jump into blocks? Nope.
I actually found myself forced to use a goto, because I literally couldn't think of a better (faster) way to write this code:
I had a complex object, and I needed to do some operation on it. If the object was in one state, then I could do a quick version of the operation, otherwise I had to do a slow version of the operation. The thing was that in some cases, in the middle of the slow operation, it was possible to realise that this could have been done with the fast operation.
SomeObject someObject;
if (someObject.IsComplex()) // this test is trivial
{
// begin slow calculations here
if (result of calculations)
{
// just discovered that I could use the fast calculation !
goto Fast_Calculations;
}
// do the rest of the slow calculations here
return;
}
if (someObject.IsMediumComplex()) // this test is slightly less trivial
{
Fast_Calculations:
// Do fast calculations
return;
}
// object is simple, no calculations needed.
This was in a speed-critical piece of real-time UI code, so I honestly think that a GOTO was justified here.
Hugo
One thing I've not seen in any of the answers here is that a 'goto' solution is often more efficient than the structured programming solutions commonly mentioned.
Consider the many-nested-loops case, where using 'goto' instead of a bunch of if(breakVariable) sections is obviously more efficient. The solution "Put your loops in a function and use return" is often totally unreasonable. In the likely case that the loops are using local variables, you now have to pass them all through function parameters, potentially handling loads of extra headaches that arise from that.
Now consider the cleanup case, which I've used myself quite often, and which is so common as to have presumably motivated the try{} catch{} structure that is not available in many languages. The number of checks and extra variables required to accomplish the same thing is far worse than the one or two instructions needed to make the jump, and again, the extra-function solution is not a solution at all. You can't tell me that's more manageable or more readable.
Now code space, stack usage, and execution time may not matter enough in many situations to many programmers, but when you're in an embedded environment with only 2KB of code space to work with, 50 bytes of extra instructions to avoid one clearly defined 'goto' is just laughable, and this is not as rare a situation as many high-level programmers believe.
The statement that 'goto is harmful' was very helpful in moving towards structured programming, even if it was always an over-generalization. At this point, we've all heard it enough to be wary of using it (as we should). When it's obviously the right tool for the job, we don't need to be scared of it.
I avoid it since a coworker/manager will undoubtedly question its use either in a code review or when they stumble across it. While I think it has uses (the error handling case for example) - you'll run afoul of some other developer who will have some type of problem with it.
It’s not worth it.
In almost all situations where a goto could be used, you can do the same thing using other constructs. Goto is used by the compiler anyway.
I personally never use it explicitly; I don't ever need to.
You can use it for breaking from a deeply nested loop, but most of the time your code can be refactored to be cleaner without deeply nested loops.
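As a sketch of that refactoring (my own toy example): move the nested loops into a helper so a plain return does the multi-level exit:
/* Instead of a goto (or flag variables) to escape two loops at once,
   let `return` do it: */
static int contains(const int *data, int rows, int cols, int target)
{
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            if (data[r * cols + c] == target)
                return 1;      /* exits both loops in one step */
    return 0;
}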
Consider the example from earlier of a loop with no externally visible effects followed by a complex_io_operation(): it would be faster for one thread to do the complex_io_operation while the other is doing all the complex calculations in the loop. But without the clause you have quoted, the compiler has to prove two things before it can make the optimisation: 1) that complex_io_operation() doesn't depend on the results of the loop, and 2) that the loop will terminate. Proving 1) is pretty easy, proving 2) is the halting problem. With the clause, it may assume the loop terminates and get a parallelisation win.
I also imagine that the designers considered that the cases where infinite loops occur in production code are very rare and are usually things like event-driven loops which access I/O in some manner. As a result, they have pessimised the rare case (infinite loops) in favour of optimising the more common case (noninfinite, but difficult to mechanically prove noninfinite, loops).
It does, however, mean that infinite loops used in learning examples will suffer as a result, and will raise gotchas in beginner code. I can't say this is entirely a good thing.
EDIT: with respect to the insightful article you now link, I would say that "the compiler may assume X about the program" is logically equivalent to "if the program doesn't satisfy X, the behaviour is undefined". We can show this as follows: suppose there exists a program which does not satisfy property X. Where would the behaviour of this program be defined? The Standard only defines behaviour assuming property X is true. Although the Standard does not explicitly declare the behaviour undefined, it has declared it undefined by omission.
Consider a similar argument: "the compiler may assume a variable x is only assigned to at most once between sequence points" is equivalent to "assigning to x more than once between sequence points is undefined".
Does someone have a good explanation of why this was necessary to allow?
Yes, Hans Boehm provides a rationale for this in N1528: Why undefined behavior for infinite loops?. Although this is a WG14 document, the rationale applies to C++ as well, and the document refers to both WG14 and WG21:
As N1509 correctly points out, the current draft essentially gives
undefined behavior to infinite loops in 6.8.5p6. A major issue for
doing so is that it allows code to move across a potentially
non-terminating loop. For example, assume we have the following loops,
where count and count2 are global variables (or have had their address
taken), and p is a local variable, whose address has not been taken:
for (p = q; p != 0; p = p -> next) {
++count;
}
for (p = q; p != 0; p = p -> next) {
++count2;
}
Could these two loops be merged and replaced by the following loop?
for (p = q; p != 0; p = p -> next) {
++count;
++count2;
}
Without the special dispensation in 6.8.5p6 for infinite loops, this
would be disallowed: If the first loop doesn't terminate because q
points to a circular list, the original never writes to count2. Thus
it could be run in parallel with another thread that accesses or
updates count2. This is no longer safe with the transformed version
which does access count2 in spite of the infinite loop. Thus the
transformation potentially introduces a data race.
In cases like this, it is very unlikely that a compiler would be able
to prove loop termination; it would have to understand that q points
to an acyclic list, which I believe is beyond the ability of most
mainstream compilers, and often impossible without whole program
information.
The restrictions imposed by non-terminating loops are a restriction on
the optimization of terminating loops for which the compiler cannot
prove termination, as well as on the optimization of actually
non-terminating loops. The former are much more common than the
latter, and often more interesting to optimize.
There are clearly also for-loops with an integer loop variable in
which it would be difficult for a compiler to prove termination, and
it would thus be difficult for the compiler to restructure loops
without 6.8.5p6. Even something like
for (i = 1; i != 15; i += 2)
or
for (i = 1; i <= 10; i += j)
seems nontrivial to handle. (In the former case, some basic number
theory is required to prove termination, in the latter case, we need
to know something about the possible values of j to do so. Wrap-around
for unsigned integers may complicate some of this reasoning further.)
This issue seems to apply to almost all loop restructuring
transformations, including compiler parallelization and
cache-optimization transformations, both of which are likely to gain
in importance, and are already often important for numerical code.
This appears likely to turn into a substantial cost for the benefit of
being able to write infinite loops in the most natural way possible,
especially since most of us rarely write intentionally infinite loops.
The one major difference from C is that C11 provides an exception for controlling expressions that are constant expressions; this differs from C++ and makes your specific example well-defined in C11.
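As a sketch of that carve-out (my reading of C11 6.8.5p6):
int main(void)
{
    while (1)   /* the controlling expression is a constant expression, */
        ;       /* so C11 may not assume this loop terminates;          */
}               /* C++11 has no such exception for this form            */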
I think the correct interpretation is the one from your edit: empty infinite loops are undefined behavior.
I wouldn't say it's particularly intuitive behavior, but this interpretation makes more sense than the alternative one, that the compiler is arbitrarily allowed to ignore infinite loops without invoking UB.
If infinite loops are UB, it just means that non-terminating programs aren't considered meaningful: according to C++0x, they have no semantics.
That does make a certain amount of sense too. They are a special case, where a number of side effects just no longer occur (for example, nothing is ever returned from main), and a number of compiler optimizations are hampered by having to preserve infinite loops. For example, moving computations across the loop is perfectly valid if the loop has no side effects, because eventually, the computation will be performed in any case.
But if the loop never terminates, we can't safely rearrange code across it, because we might just be changing which operations actually get executed before the program hangs. Unless we treat a hanging program as UB, that is.
The relevant issue is that the compiler is allowed to reorder code whose side effects do not conflict. The surprising order of execution could occur even if the compiler produced non-terminating machine code for the infinite loop.
I believe this is the right approach. The language spec defines ways to enforce order of execution. If you want an infinite loop that cannot be reordered around, write this:
volatile int dummy_side_effect;
while (1) {
dummy_side_effect = 0;
}
printf("Never prints.\n");
I think this is along the lines of this type of question, which references another thread. Optimization can occasionally remove empty loops.
I think the issue could perhaps best be stated as: "If a later piece of code does not depend on an earlier piece of code, and the earlier piece of code has no side-effects on any other part of the system, the compiler's output may execute the later piece of code before, after, or intermixed with, the execution of the former, even if the former contains loops, without regard for when or whether the former code would actually complete." For example, the compiler could rewrite:
void testfermat(int n)
{
int a=1,b=1,c=1;
while(pow(a,n)+pow(b,n) != pow(c,n))
{
if (b > a) a++; else if (c > b) {a=1; b++;} else {a=1; b=1; c++;}
}
printf("The result is ");
printf("%d/%d/%d", a,b,c);
}
as
void testfermat(int n)
{
if (fork_is_first_thread())
{
int a=1,b=1,c=1;
while(pow(a,n)+pow(b,n) != pow(c,n))
{
if (b > a) a++; else if (c > b) {a=1; b++;} else {a=1; b=1; c++;}
}
signal_other_thread_and_die();
}
else // Second thread
{
printf("The result is ");
wait_for_other_thread();
}
printf("%d/%d/%d", a,b,c);
}
Generally not unreasonable, though I might worry that:
int total=0;
for (i=0; num_reps > i; i++)
{
update_progress_bar(i);
total+=do_something_slow_with_no_side_effects(i);
}
show_result(total);
would become
int total=0;
if (fork_is_first_thread())
{
for (i=0; num_reps > i; i++)
total+=do_something_slow_with_no_side_effects(i);
signal_other_thread_and_die();
}
else
{
for (i=0; num_reps > i; i++)
update_progress_bar(i);
wait_for_other_thread();
}
show_result(total);
By having one CPU handle the calculations and another handle the progress bar updates, the rewrite would improve efficiency. Unfortunately, it would make the progress bar updates rather less useful than they should be.
Does someone have a good explanation of why this was necessary to allow?
I have an explanation of why it is necessary not to allow, assuming that C++ still has the ambition to be a general-purpose language suitable for performance-critical, low-level programming.
This used to be a working, strictly conforming freestanding C++ program:
int main()
{
setup_interrupts();
for(;;)
{}
}
The above is a perfectly fine and common way to write main() in many low-end microcontroller systems. The whole program execution is done inside interrupt service routines (or, in the case of an RTOS, it could be processes). And main() must absolutely not be allowed to return, since these are bare-metal systems and there is nobody to return to.
Typical real-world examples of where the above design can be used are PWM/motor driver microcontrollers, lighting control applications, simple regulator systems, sensor applications, simple household electronics and so on.
Changes to C++ have unfortunately made the language impossible to use for this kind of embedded systems programming. Existing real-world applications like the ones above will break in dangerous ways if ported to newer C++ compilers.
C++20 6.9.2.3 Forward progress [intro.progress]
The implementation may assume that any thread will eventually do one of the following:
(1.1) — terminate,
(1.2) — make a call to a library I/O function,
(1.3) — perform an access through a volatile glvalue, or
(1.4) — perform a synchronization operation or an atomic operation.
Let's address each bullet for the above, previously strictly conforming freestanding C++ program:
1.1. As any embedded systems beginner can tell the committee, bare metal/RTOS microcontroller systems never terminate. Therefore the loop cannot terminate either or main() would turn into runaway code, which would be a severe and dangerous error condition.
1.2 N/A.
1.3 Not necessarily. One may argue that the for(;;) loop is the proper place to feed the watchdog, which in turn is a side effect as it writes to a volatile register. But there may be implementation reasons why you don't want to use a watchdog. At any rate, it is not C++'s business to dictate that a watchdog should be used.
1.4 N/A.
Because of this, C++ is made even more unsuitable for embedded systems applications than it already was before.
Another problem here is that the standard speaks of "threads", which are higher-level concepts. On real-world computers, threads are implemented through a low-level concept known as interrupts. Interrupts are similar to threads in terms of race conditions and concurrent execution, but they are not threads. On low-level systems there is just one core, and therefore typically only one interrupt executes at a time (kind of like how threads used to work on single-core PCs back in the day).
And you can't have threads if you can't have interrupts. So threads would have to be implemented by a lower-level language more suitable for embedded systems than C++. The options being assembler or C.
By writing a loop, the programmer is asserting either that the loop does something with visible behavior (performs I/O, accesses volatile objects, or performs synchronization or atomic operations), or that it eventually terminates.
This is completely misguided and clearly written by someone who has never worked with microcontroller programming.
So what should those few remaining C++ embedded programmers who refuse to port their code to C "because of reasons" do? You have to introduce a side effect inside the for(;;) loop:
int main()
{
setup_interrupts();
for(volatile int i=0; i==0;)
{}
}
Or if you have the watchdog enabled as you ought to, feed it inside the for(;;) loop.
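A sketch of the watchdog variant (the register address and magic value are made up for illustration; real parts differ):
#define WDT_REFRESH (*(volatile unsigned int *)0x40001000u)  /* hypothetical register */

int main(void)
{
    setup_interrupts();
    for (;;) {
        WDT_REFRESH = 0xA5u;  /* volatile write: a side effect the compiler
                                 may not assume away */
    }
}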
For non-trivial cases, it is not decidable by the compiler whether a loop is infinite at all.
In other cases, your optimiser may move your code into a better complexity class (e.g., code that was O(n^2) becomes O(n) or O(1) after optimisation), as in the sketch below.
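A sketch of the kind of thing I mean (mainstream compilers really do rewrite simple counting loops like this):
unsigned sum_to(unsigned n)
{
    unsigned total = 0;
    for (unsigned i = 0; i < n; ++i)   /* written as an O(n) loop...      */
        total += i + 1;
    return total;  /* ...often compiled to the O(1) closed form n*(n+1)/2 */
}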
So including a rule in the C++ standard that disallows removing infinite loops would make many such optimisations impossible. And most people don't want that. I think this quite answers your question.
Another thing: I never have seen any valid example where you need an infinite loop which does nothing.
The one example I have heard about was an ugly hack that really should be solved otherwise: It was about embedded systems where the only way to trigger a reset was to freeze the device so that the watchdog restarts it automatically.
If you know any valid/good example where you need an infinite loop which does nothing, please tell me.
I think it's worth pointing out that loops which would be infinite except for the fact that they interact with other threads via non-volatile, non-synchronised variables can now yield incorrect behaviour with a new compiler.
In other words, make your globals volatile - as well as arguments passed into such a loop via pointer/reference.
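A sketch of the failure mode and the fix (toy names; in modern code, atomics are the properly synchronised solution rather than volatile):
volatile int done = 0;   /* set to 1 by another thread */

void wait_for_done(void)
{
    while (!done)
        ;    /* volatile read each iteration: the compiler may neither
                cache the load nor assume the loop terminates */
}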