How could this manner of writing code be called? - c++

I'm reviewing a quite old project and see code like this for the second time already (C++ - like pseudocode):
if( conditionA && conditionB ) {
    actionA();
    actionB();
} else {
    if( conditionA ) {
        actionA();
    }
    if( conditionB ) {
        actionB();
    }
}
In this code, conditionA evaluates to the same result on both evaluations, and the same goes for conditionB. So the code is equivalent to just:
if( conditionA ) {
    actionA();
}
if( conditionB ) {
    actionB();
}
So the former variant is just twice as much code for the same effect. What could such a manner of writing code (I mean the former variant) be called?

This is indeed bad coding practice, but be warned that if the conditionA and conditionB evaluations have any side effects (variable increments, etc.), the two fragments are not equivalent.
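A minimal sketch of that pitfall (the condition functions and the counter are made up purely for illustration): when the conditions have a side effect, the two fragments leave the program in different states, here simply because they evaluate the conditions a different number of times.

#include <iostream>

int calls = 0;

bool conditionA() { ++calls; return true;  }   // side effect: counts evaluations
bool conditionB() { ++calls; return false; }
void actionA() {}
void actionB() {}

int main() {
    calls = 0;
    if (conditionA() && conditionB()) { actionA(); actionB(); }
    else {
        if (conditionA()) actionA();
        if (conditionB()) actionB();
    }
    std::cout << "first variant:  " << calls << " evaluations\n";   // prints 4

    calls = 0;
    if (conditionA()) actionA();
    if (conditionB()) actionB();
    std::cout << "second variant: " << calls << " evaluations\n";   // prints 2
}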

I would call it bad code. Though I've tended to find similar constructs in projects that grew without any code review being done (or with other lax development practices).

Guys? Look at this part: ( conditionA && conditionB )
Basically, if conditionA happens to be false, then it won't evaluate conditionB.
Now, it would be bad coding style, but if conditionA and conditionB aren't just reading data, and there's also some code behind these conditions that changes data, there can be a huge difference between the two notations!
If conditionA is false, then conditionA is evaluated twice and conditionB is evaluated just once.
If conditionA is true and conditionB is false, then both conditions are evaluated twice.
If both conditions are true, both are evaluated just once.
In the second variant, both conditions are evaluated just once... Thus, the two variants only perform the same evaluations when both conditions are true.
To make things more complex, if conditionB is false, actionA could change something that would change that evaluation! The else branch would then execute actionB too. And if both conditions evaluate to true but actionA changes the evaluation of conditionB to false, the first variant would still execute actionB.
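A small sketch of that last interaction, with made-up conditions and actions: actionA invalidates conditionB, so the first variant still runs actionB while the second one skips it.

#include <iostream>

int energy = 5;
int mass = 5;

bool conditionA() { return energy > 0; }           // "we have energy"
bool conditionB() { return mass > 0; }             // "we have mass"
void actionA() { mass = 0; }                       // consumes all the mass
void actionB() { std::cout << "actionB ran\n"; }

int main() {
    // First variant: both conditions are true when the && is evaluated,
    // so actionB runs even though actionA has just made conditionB false.
    if (conditionA() && conditionB()) { actionA(); actionB(); }
    else {
        if (conditionA()) actionA();
        if (conditionB()) actionB();
    }

    mass = 5;  // reset for the second variant

    // Second variant: conditionB is re-checked after actionA has run,
    // so actionB is skipped this time.
    if (conditionA()) actionA();
    if (conditionB()) actionB();
}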
I tend to refer to this kind of code as "Why make things easy when you can do it the hard way?" and consider this a design pattern. Actually, it's "Mortgage-Driven Development", where code is made more complex so the main developer will be the only one to understand it, while other developers will just become confused and hopefully give up on redesigning the whole thing. As a result, the original developer is required to stay just to maintain this code, which is called "job security", and is thus able to pay his mortgage for many, many years.
I wondered why something like this would be used, then realized that I use a similar structure in my own code. Similar, but not the same:
if (A && B) {
    action1;
} else if (A) {
    action2;
} else if (B) {
    action3;
} else {
    action4;
}
In this case, every action would be the display of a different message, one that could not be generated by just concatenating two strings. Say it's part of a game: A checks if you have enough energy while B checks if you have enough mass. If you don't have enough mass AND energy, you can't build anything anymore and a critical warning needs to be displayed. If you only have energy, a warning that you have to find more mass would be enough. With only mass, your builders would need to recharge. And with both you can continue to build. Four different actions, thus this weird construction.
However, the sample in the Q shows something completely different. Basically, you'd get one message that you're out of mass and another that you're out of energy. These two messages aren't combined into a single message.
Now, in the example, if conditionA detected energy levels and conditionB detected mass levels, then both solutions would work just fine. But suppose actionA tells your builders to drop their mass and start recharging: you'd suddenly gain a little bit of mass again in your game. If conditionB had indicated that you ran out of mass, that would not be true anymore, simply because actionA released mass again. If actionB is the command to tell builders to start collecting mass as soon as they're able, then the first solution gives all builders this command and they start collecting mass first, then continue their other actions. In the second solution, no such command is given. The builders are recharged again and start using the little mass that was just released. If this check is done every 5 minutes, those builders might recharge in one minute and then be idle for 4 more minutes because they ran out of mass. In the first solution, they would immediately start collecting mass.
Yeah, it's a stupid example. Been playing Supreme Commander again. :-) Just wanted to come up with a possible scenario, but it could be improved a lot!...

It's code written by someone who doesn't know how to use a Karnaugh Map.
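For what it's worth, a quick truth table (assuming, as the question states, that the conditions have no side effects) shows why the two variants run exactly the same actions:

conditionA   conditionB   actions run (both variants)
true         true         actionA, actionB
true         false        actionA
false        true         actionB
false        false        (none)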

This is very close to the 'fizzbuzz' Design Pattern:
if( fizz && buzz ) {
printFizz();
printBuzz();
} else {
if( fizz ) {
printFizz();
}
else if( buzz ) {
printBuzz();
}
else {
printValue();
}
}
Maybe the code started life as an instance of fizzbuzz (maybe copied-n-pasted), then was refactored slightly into what you see today due to slightly different requirements, but the refactoring didn't go as far as it probably should have (boolean logic can sometimes be a bit trickier than one might think - hence fizzbuzz as an interview weed-out technique).
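For reference, a minimal sketch of the usual fizzbuzz the answer alludes to, written without the redundant nesting (the 1..100 range and the modulo tests are the standard interview formulation, not something taken from the question):

#include <iostream>

int main() {
    for (int i = 1; i <= 100; ++i) {
        bool fizz = (i % 3 == 0);
        bool buzz = (i % 5 == 0);
        if (fizz) std::cout << "Fizz";
        if (buzz) std::cout << "Buzz";
        if (!fizz && !buzz) std::cout << i;
        std::cout << '\n';
    }
}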

I would call it bad code too.
There is no best way of indenting, but there is one golden rule: choose one and stick with it.

This is "redundant" code, and yes it is bad. If some precondition must be added to the calls to actionA (assuming that the precondition can't be put into actionA itself), we now have to modify the code in 2 places, and therefore run the risk of overlooking one of them.
This is one of those times where you can feel better about deleting some lines of code, than in writing new ones.

inefficient code?
Also, could be called Paid per line

I would call it 'twice is better'. It's made like that to be sure that the runtime really understood the question ;).
(although in multi-threaded, not-safe environment, the result may differ between the two variants.)

I might call it "code I wrote last month while I was in a hurry / not focused / tired". It happens; we all make or have made this kind of mistake. Just change it. If you want to, you can try to find out who did this, hope it is not you, and give him/her feedback.

Since you said you've seen this more than once, it seems that it's more than a one-time error due to being tired. I see several reasons for someone to repeatedly come up with such code:
The code was originally different, got refactored, but whoever did this overlooked the fact that the result is redundant.
Whoever did this didn't have a good grasp of boolean logic.
(Also, there's the slight possibility that there might be more to this than what your simplified snippet shows.)

As pgast has said in a comment, there is nothing wrong with this code if actionA affects conditionB (note that this is not a condition with a side effect but an action with a side effect, which you kind of expect).

Related

Read and write variable in an IF statement

I'm hoping to perform the following steps in a single IF statement to save on code writing:
If ret is TRUE, set ret to the result of function lookup(). If ret is now FALSE, print error message.
The code I've written to do this is as follows:
BOOLEAN ret = TRUE;
// ... functions assigning to `ret`
if ( ret && !(ret = lookup()) )
{
fprintf(stderr, "Error in lookup()\n");
}
I've got a feeling that this isn't as simple as it looks. Reading from, assigning to and reading again from the same variable in an IF statement. As far as I'm aware, the compiler will always split statements like this up into their constituent operations according to precedence and evaluates conjuncts one at a time, failing immediately when evaluating an operand to false rather than evaluating them all. If so, then I expect the code to follow the steps I wrote above.
I've used assignments in IF statements a lot and I know they work, but not with another read beforehand.
Is there any reason why this isn't good code? Personally, I think it's easy to read and the meaning is clear, I'm just concerned about the compiler maybe not producing the equivalent logic for whatever reason. Perhaps compiler vendor disparities, optimisations or platform dependencies could be an issue, though I doubt this.
...to save on code writing This is almost never a valid argument. Don't do this. Particularly, don't obfuscate your code into a buggy, unreadable mess to "save typing". That is very bad programming.
I've got a feeling that this isn't as simple as it looks. Reading from, assigning to and reading again from the same variable in an IF statement.
Correct. It has little to do with the if statement in itself though, and everything to do with the operators involved.
As far as I'm aware, the compiler will always split statements like this up into their constituent operations according to precedence and evaluates conjuncts one at a time
Well, yes... but there is operator precedence and there is order of evaluation of subexpressions, they are different things. To make things even more complicated, there are sequence points.
If you don't know the difference between operator precedence and order of evaluation, or if you don't know what sequence points are, you need to instantly stop stuffing as many operators as you can into a single line, because in that case, you are going to write horrible bugs all over the place.
In your specific case, you get away with the bad programming just because, as a special case, there happens to be a sequence point between the evaluation of the left and right operands of the && operator. Had you written a similar mess with a different operator, for example ret + !(ret = lookup()), your code would have undefined behavior. A bug which could take hours, days or weeks to find. Well, at least you saved 10 seconds of typing!
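A tiny sketch of the precedence-versus-evaluation-order distinction, using made-up functions f, g and h: precedence decides how the expression groups, but not the order in which the calls happen.

#include <cstdio>

int f() { std::puts("f"); return 1; }
int g() { std::puts("g"); return 2; }
int h() { std::puts("h"); return 3; }

int main() {
    // Precedence says this groups as f() + (g() * h()).  Order of evaluation
    // is a separate question: the compiler may call f, g and h in any order,
    // so the printed lines can legally differ between compilers.
    int result = f() + g() * h();
    std::printf("%d\n", result);   // always 7, whatever the call order was
}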
Also, in both C and C++ use the standard bool type and not some home-brewed version.
You need to correct your code into something more readable and safe:
bool ret = true;
if(ret)
{
ret = lookup();
}
if(!ret)
{
fprintf(stderr, "Error in lookup()\n");
}
Is there any reason why this isn't good code?
Yes, there are a lot of issues with such dirty code fragments!
1)
Nobody can read it and it is not maintainable. A lot of coding guidelines contain a rule which tells you: "One statement per line".
2) If you combine multiple expressions in one if statement, only as many of them are evaluated as are needed to decide the result: with AND combinations, the first expression that evaluates to false is the last one executed; with OR combinations, the first one that evaluates to true is the last one executed. You already wrote this and you know it, but this is a bit of tricky programming. If all your colleagues write code that way, it is maybe OK, but as far as I know, my colleagues would not understand at first glance what you are doing. (A short demonstration of this short-circuiting follows at the end of this answer.)
3) You should never compare and assign in one statement. It is simply ugly!
4) If YOU already think "I'm just concerned about the compiler maybe not producing the equivalent logic", you should think again about whether you are sure of what you are doing! I believe that everybody who has to work with such dirty code will think twice about such combinations.
Hint: Don't do that! Never!
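A minimal demonstration of the short-circuiting mentioned in point 2, with made-up functions whose only side effect is printing:

#include <iostream>

bool yes() { std::cout << "yes() called\n"; return true;  }
bool no()  { std::cout << "no() called\n";  return false; }

int main() {
    // && stops at the first operand that yields false:
    if (no() && yes()) {}    // prints only "no() called"
    // || stops at the first operand that yields true:
    if (yes() || no()) {}    // prints only "yes() called"
}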

IF statements and code efficiency

This is not related to any problem in particular, just me thinking: does the presence of lots of IF statements in code signify bad code design and reduce efficiency, or not?
If you really want to optimize the code, be aware of this:
if (complexCalculation(someVariable) > 10)
{
}
else if (complexCalculation(someVariable) > 5)
{
}
the point is, if you are trying to optimize some code, try to "cache" the result of calculations in variables, instead of redoing many times the same calculation
int cached = complexCalculation(someVariable);
if (cached > 10)
{
}
else if (cached > 5)
{
}
Why this? Now... if complexCalculation is deterministic based on its parameters (so complexCalculation(N) == complexCalculation(N) always; in simple words, you call it twice with the same parameters and you will receive the same result both times) and is without side effects (so it doesn't modify anything else), then the compiler could optimize it freely. The problem is that quite often the compiler isn't able to verify whether a function is deterministic and without side effects, and very, very few languages (primarily the functional languages like F#, Haskell...) make it easy to tell the compiler (technically, in the functional languages all functions should be deterministic and without side effects :-) ).
Theoretically, lots of 'if' statements would not significantly reduce code efficiency. The code simply determines the boolean value of the expression and decides whether or not to continue. If there are many 'if' statements within an iterative loop, however, that could cause a larger problem in terms of efficiency.
Bad design is a whole other issue that I'm not going to mention (Ziminji has covered it better than I could).
In general, the presence of a lot of "if" statements is considered bad design. Consider replacing conditionals with polymorphism. This is one of the topics in Martin Fowler's book "Refactoring: Improving the Design of Existing Code" on page 255. Check out the following article if you don't have the book: http://sourcemaking.com/refactoring/replace-conditional-with-polymorphism
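A minimal sketch of what "replace conditional with polymorphism" can look like in C++, using a made-up Shape hierarchy rather than anything from the question: the if/else chain on a type tag becomes a virtual call.

#include <iostream>
#include <memory>

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159265 * r * r; }
};

struct Square : Shape {
    double side;
    explicit Square(double side) : side(side) {}
    double area() const override { return side * side; }
};

int main() {
    std::unique_ptr<Shape> s = std::make_unique<Circle>(2.0);
    // No "if (type == CIRCLE) ... else if (type == SQUARE) ..." needed:
    std::cout << s->area() << '\n';
}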

Another way to use continue keyword in C++

Recently we found a "good way" to comment out lines of code by using continue:
for(int i=0; i<MAX_NUM; i++){
....
.... //--> about 30 lines of code
continue;
....//--> there is about 30 lines of code after continue
....
}
I scratched my head asking why the previous developer put the continue keyword inside the intensive loop. Most probably he/she felt it was easier to put a "continue" keyword there instead of removing all the unwanted code...
It triggered another question for me. Look at the scenarios below:
Scenario A:
for(int i=0; i<MAX_NUM; i++){
....
if(bFlag)
continue;
....//--> there is about 100 lines of code after continue
....
}
Scenario B:
for(int i=0; i<MAX_NUM; i++){
....
if(!bFlag){
....//--> there is about 100 lines of code after continue
....
}
}
Which do you think is the best? Why?
How about break keyword?
Using continue in this case reduces nesting greatly and often makes code more readable.
For example:
for(...) {
    if( condition1 ) {
        Object* pointer = getObject();
        if( pointer != 0 ) {
            ObjectProperty* property = pointer->GetProperty();
            if( property != 0 ) {
                ///blahblahblah...
            }
        }
    }
}
becomes just
for(...) {
    if( !condition1 ) {
        continue;
    }
    Object* pointer = getObject();
    if( pointer == 0 ) {
        continue;
    }
    ObjectProperty* property = pointer->GetProperty();
    if( property == 0 ) {
        continue;
    }
    ///blahblahblah...
}
You see - code becomes linear instead of nested.
You might also find answers to this closely related question helpful.
For your first question, it may be a way of skipping the code without commenting it out or deleting it. I wouldn't recommend doing this. If you don't want your code to be executed, don't precede it with a continue/break/return, as this will raise confusion when you/others are reviewing the code and may be seen as a bug.
As for your second question, they are basically identical performance-wise (it depends on the assembly output), and the choice greatly depends on design. It depends on the way you want the readers of the code to "translate" it into English, as most do when reading back code.
So, the first example may read "Do blah, blah, blah. If (expression), continue on to the next iteration."
While the second may read "Do blah, blah, blah. If (expression), do blah, blah, blah"
So, using continue in an if statement may undermine the importance of the code that follows it.
In my opinion, I would prefer the continue if I could, because it would reduce nesting.
I hate commenting out unused code. What I do is remove it completely and then check the change in to version control.
Who still needs to comment out unused code after the invention of source code control?
That "comment" use of continue is about as abusive as a goto :-). It's so easy to put an #if 0/#endif or /*...*/, and many editors will then colour-code the commented code so it's immediately obvious that it's not in use. (I sometimes like e.g. #ifdef USE_OLD_VERSION_WITH_LINEAR_SEARCH so I know what's left there, given it's immediately obvious to me that I'd never have such a stupid macro name if I actually expected someone to define it during the compile... guess I'd have to explain that to the team if I shared the code in that state though.) Other answers point out source control systems allow you to simply remove the commented code, and while that's my practice before commit - there's often a "working" stage where you want it around for maximally convenient cross-reference, copy-paste etc..
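A tiny illustration of the #if 0 technique mentioned above; the disabled block never reaches the compiler, and most editors grey it out:

#include <iostream>

int main() {
    std::cout << "new code path\n";
#if 0
    // Disabled block: the preprocessor removes everything up to the #endif,
    // so it reads unambiguously as "not in use".
    std::cout << "old code path\n";
#endif
}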
For scenarios: practically, it doesn't matter which one you use unless your project has a consistent approach that you need to fit in with, so I suggest using whichever seems more readable/expressive in the circumstances. In longer code blocks, a single continue may be less visible and hence less intuitive, while a group of them - or many scattered throughout the loop - are harder to miss. Overly nested code can get ugly too. So choose either if unsure then change it if the alternative starts to look appealing.
They communicate subtly different information to the reader too: continue means "hey, rule out all these circumstances and then look at the code below", whereas the if block means you have to "push" a context but still keep them all in your mind as you try to understand the rest of the loop internals (here, only to find the if immediately followed by the loop termination, so all that mental effort was wasted). Countering this, continue statements tend to trigger a mental check to ensure all necessary steps have been completed before the next loop iteration - that it's all just as valid as whatever follows might be - and if someone, say, adds an extra increment or debug statement at the bottom of the loop, they have to know there are continue statements they may also want to handle.
You may even decide which to use based on how trivial the test is, much as some programmers will use early return statements for exceptional error conditions but will use a "result" variable and structured programming for anticipated flows. It can all get messy - programming has to be at least as complex as the problems - your job is to make it minimally messier / more-complex than that.
To be productive, it's important to remember "Don't sweat the small stuff", but in IT it can be a right pain learning what's small :-).
Aside: you may find it useful to do some background reading on the pros/cons of structured programming, which involves single entry/exit points, gotos etc..
I agree with other answerers that the first use of continue is BAD. Unused code should be removed (should you still need it later, you can always find it from your SCM - you do use an SCM, right? :-)
For the second, some answers have emphasized readability, but I miss one important thing: IMO the first move should be to extract that 100 lines of code into one or more separate methods. After that, the loop becomes much shorter and simpler, and the flow of execution becomes obvious. If I can extract the code into a single method, I personally prefer an if:
for(int i=0; i<MAX_NUM; i++){
....
if(!bFlag){
doIntricateCalculation(...);
}
}
But a continue would be almost equally fine to me. In fact, if there are multiple continues / returns / breaks within that 100 lines of code, it is impossible to extract it into a single method, so then the refactoring might end up with a series of continues and method calls:
for(int i=0; i<MAX_NUM; i++){
....
if(bFlag){
continue;
}
SomeClass* someObject = doIntricateCalculation(...);
if(!someObject){
continue;
}
SomeOtherClass* otherObject = doAnotherIntricateCalculation(someObject);
if(!otherObject){
continue;
}
// blah blah
}
continue is useful in a high-complexity for loop. It's bad practice to use it to comment out the remaining code of a loop, even for temporary debugging, since people tend to forget...
Think on readability first, which is what is going to make your code more maintainable. Using a continue statement is clear to the user: under this condition there is nothing else I can/want to do with this element, forget about it and try the next one. On the other hand, the if is only telling that the next block of code does not apply to those for which the condition is not met, but if the block is big enough, you might not know whether there is actually any further code that will apply to this particular element.
I tend to prefer the continue over the if for this particular reason. It more explicitly states the intent.

When should I use do-while instead of while loops? [duplicate]

When I was taking CS in college (mid 80's), one of the ideas that was constantly repeated was to always write loops which test at the top (while...) rather than at the bottom (do ... while) of the loop. These notions were often backed up with references to studies which showed that loops which tested at the top were statistically much more likely to be correct than their bottom-testing counterparts.
As a result, I almost always write loops which test at the top. I don't do it if it introduces extra complexity in the code, but that case seems rare. I notice that some programmers tend to almost exclusively write loops that test at the bottom. When I see constructs like:
if (condition)
{
do
{
...
} while (same condition);
}
or the inverse (if inside the while), it makes me wonder if they actually wrote it that way or if they added the if statement when they realized the loop didn't handle the null case.
I've done some googling, but haven't been able to find any literature on this subject. How do you guys (and gals) write your loops?
I always follow the rule that if it should run zero or more times, test at the beginning, if it must run once or more, test at the end. I do not see any logical reason to use the code you listed in your example. It only adds complexity.
Use while loops when you want to test a condition before the first iteration of the loop.
Use do-while loops when you want to test a condition after running the first iteration of the loop.
For example, if you find yourself doing something like either of these snippets:
func();
while (condition) {
func();
}
//or:
while (true){
func();
if (!condition) break;
}
You should rewrite it as:
do{
func();
} while(condition);
The difference is that the do loop executes "do something" once and then checks the condition to see if it should repeat the "do something", while the while loop checks the condition before doing anything.
Does avoiding do/while really help make my code more readable?
No.
If it makes more sense to use a do/while loop, then do so. If you need to execute the body of a loop once before testing the condition, then a do/while loop is probably the most straightforward implementation.
The first one may not execute at all if the condition is false. The other one will execute at least once, then check the condition.
For the sake of readability it seems sensible to test at the top. The fact it is a loop is important; the person reading the code should be aware of the loop conditions before trying to comprehend the body of the loop.
Here's a good real-world example I came across recently. Suppose you have a number of processing tasks (like processing elements in an array) and you wish to split the work between one thread per CPU core present. There must be at least one core to be running the current code! So you can use a do... while something like:
do {
get_tasks_for_core();
launch_thread();
} while (cores_remaining());
It's almost negligible, but it might be worth considering the performance benefit: it could equally be written as a standard while loop, but that would always make an unnecessary initial comparison that would always evaluate true - and on single-core, the do-while condition branches more predictably (always false, versus alternating true/false for a standard while).
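A more concrete sketch of that idea (the summing workload and helper names are made up; only the do...while shape mirrors the answer): std::thread::hardware_concurrency() may report 0, which is why clamping to one core and testing at the bottom fit together naturally.

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

void sum_in_parallel(const std::vector<int>& data) {
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;                     // always at least one worker
    std::vector<std::thread> workers;
    std::vector<long long> partial(cores, 0);
    unsigned core = 0;
    do {
        workers.emplace_back([&, core] {
            // Each thread sums an interleaved slice of the data.
            for (std::size_t i = core; i < data.size(); i += cores)
                partial[core] += data[i];
        });
        ++core;
    } while (core < cores);                        // body runs at least once
    for (auto& w : workers) w.join();
    std::cout << std::accumulate(partial.begin(), partial.end(), 0LL) << '\n';
}

int main() {
    sum_in_parallel(std::vector<int>(1000, 1));    // prints 1000
}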
Yes, it's true: do-while will run at least one time.
That's the only difference; nothing else to debate on this.
The first tests the condition before performing so it's possible your code won't ever enter the code underneath. The second will perform the code within before testing the condition.
The while loop will check "condition" first; if it's false, it will never "do something." But the do...while loop will "do something" first, then check "condition".
Yes, just like using for instead of while, or foreach instead of for, improves readability. That said, some circumstances need do-while, and I agree you would be silly to force those situations into a while loop.
It's more helpful to think in terms of common usage. The vast majority of while loops work quite naturally with while, even if they could be made to work with do...while, so basically you should use it when the difference doesn't matter. I would thus use do...while for the rare scenarios where it provides a noticeable improvement in readability.
The use cases are different for the two. This isn't a "best practices" question.
If you want a loop to execute based on the condition exclusively, then use
for or while
If you want to do something once regardless of the condition, and then continue doing it based on the condition evaluation, use
do..while
For anyone who can't think of a reason to have a one-or-more times loop:
try {
someOperation();
} catch (Exception e) {
do {
if (e instanceof ExceptionIHandleInAWierdWay) {
HandleWierdException((ExceptionIHandleInAWierdWay)e);
}
} while ((e = e.getInnerException())!= null);
}
The same could be used for any sort of hierarchical structure.
in class Node:
public Node findSelfOrParentWithText(string text) {
Node node = this;
do {
if(node.containsText(text)) {
break;
}
} while((node = node.getParent()) != null);
return node;
}
A while() checks the condition before each execution of the loop body and a do...while() checks the condition after each execution of the loop body.
Thus, do...while() loops will always execute the loop body at least once.
Functionally, a while() is equivalent to
startOfLoop:
if (!condition)
goto endOfLoop;
//loop body goes here
goto startOfLoop;
endOfLoop:
and a do...while() is equivalent to
startOfLoop:
//loop body
//goes here
if (condition)
goto startOfLoop;
Note that the implementation is probably more efficient than this. However, a do...while() does involve one less comparison than a while() so it is slightly faster. Use a do...while() if:
you know that the condition will always be true the first time around, or
you want the loop to execute once even if the condition is false to begin with.
Here is the translation:
do { y; } while(x);
Same as
{ y; } while(x) { y; }
Note that the extra set of braces is for the case where you have variable definitions in y; their scope must be kept local, just as in the do-loop case. So a do-while loop simply executes its body at least once; apart from that, the two loops are identical. If we apply this rule to your code
do {
// do something
} while (condition is true);
The corresponding while loop for your do-loop looks like
{
// do something
}
while (condition is true) {
// do something
}
Yes, you see the corresponding while for your do loop differs from your while :)
As noted by Piemasons, the difference is whether the loop executes once before doing the test, or if the test is done first so that the body of the loop might never execute.
The key question is which makes sense for your application.
To take two simple examples:
Say you're looping through the elements of an array. If the array has no elements, you don't want to process element number one of zero. So you should use WHILE.
You want to display a message, accept a response, and if the response is invalid, ask again until you get a valid response. So you always want to ask once. You can't test if the response is valid until you get a response, so you have to go through the body of the loop once before you can test the condition. You should use DO/WHILE.
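A minimal sketch of that second case (ask at least once, repeat until the response is valid); the prompt and range are made up:

#include <iostream>

int main() {
    int answer = 0;
    // The response cannot be validated before it has been read, so the
    // loop body must run at least once: a natural fit for do-while.
    do {
        std::cout << "Enter a number between 1 and 10: ";
        std::cin >> answer;
    } while (std::cin && (answer < 1 || answer > 10));   // stop on valid input or stream failure
}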
I tend to prefer do-while loops, myself. If the condition will always be true at the start of the loop, I prefer to test it at the end. To my eye, the whole point of testing conditions (other than assertions) is that one doesn't know the result of the test. If I see a while loop with the condition test at the top, my inclination is to consider the case that the loop executes zero times. If that can never happen, why not code in a way that clearly shows that?
They're actually meant for different things. In C, you can use the do-while construct to achieve both scenarios (runs at least once, and runs while true). But PASCAL has repeat-until and while for each scenario, and if I remember correctly, ADA has another construct that lets you quit in the middle, but of course that's not what you're asking.
My answer to your question: I like my loops with the test on top.
Both conventions are correct if you know how to write the code correctly :)
Usually the second convention ( do {} while() ) is used to avoid having a duplicated statement outside the loop. Consider the following (oversimplified) example:
a++;
while (a < n) {
a++;
}
can be written more concisely using
do {
a++;
} while (a < n)
Of course, this particular example can be written in an even more concise way as (assuming C syntax)
while (++a < n) {}
But I think you can see the point here.
while( someConditionMayBeFalse ){
// this will never run...
}
// then the alternative
do{
    // this will run once even if the condition is false
} while( someConditionMayBeFalse );
The difference is obvious: the do-while lets you run code and then evaluate the result to see if you have to "do it again", whereas the plain while lets you skip a block of code entirely if the conditional is not met.
I write mine pretty much exclusively testing at the top. It's less code, so for me at least, it's less potential to screw something up (e.g., copy-pasting the condition makes two places you always have to update it)
It really depends: there are situations when you want to test at the top, others when you want to test at the bottom, and still others when you want to test in the middle.
However, the example given seems absurd. If you are going to test at the top, don't use an if statement and test at the bottom; just use a while statement, that's what it is made for.
You should first think of the test as part of the loop code. If the test logically belongs at the start of the loop processing, then it's a top-of-the-loop test. If the test logically belongs at the end of the loop (i.e. it decides if the loop should continue to run), then it's probably a bottom-of-the-loop test.
You will have to do something fancy if the test logically belongs in the middle. :-)
I guess some people test at the bottom because you could save one or a few machine cycles by doing that 30 years ago.
To write code that is correct, one basically needs to perform a mental, perhaps informal proof of correctness.
To prove a loop correct, the standard way is to choose a loop invariant and an induction proof. But skip the complicated words: what you do, informally, is figure out something that is true of each iteration of the loop, and that, when the loop is done, gives you what you wanted accomplished. The loop condition has to be false at the end for the loop to terminate.
If the loop conditions map fairly easily to the invariant, and the invariant is at the top of the loop, and one infers that the invariant is true at the next iteration of the loop by working through the code of the loop, then it is easy to figure out that the loop is correct.
However, if the test is at the bottom of the loop, then unless you have an assertion just prior to the loop (a good practice), it becomes more difficult, because you have to essentially infer what the invariant should be and check that any code that ran before the loop makes the invariant true (since there is no loop precondition, the body will execute at least once). It just becomes that much more difficult to prove correct, even if only with an informal in-your-head proof.
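A small worked example of that informal reasoning, using a running-sum loop with its invariant written out as a comment:

#include <vector>

long long sum(const std::vector<int>& v) {
    long long total = 0;
    std::size_t i = 0;
    // Invariant, checked at the top before each iteration:
    //   total == v[0] + v[1] + ... + v[i-1]
    // It holds trivially on entry (i == 0, total == 0), each iteration
    // preserves it, and when the condition i < v.size() finally fails,
    // the invariant gives exactly what we wanted: total is the sum of
    // the whole vector.
    while (i < v.size()) {
        total += v[i];
        ++i;
    }
    return total;
}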
This isn't really an answer but a reiteration of something one of my lecturers said and it interested me at the time.
The two types of loop while..do and do..while are actually instances of a third more generic loop, which has the test somewhere in the middle.
begin loop
<Code block A>
loop condition
<Code block B>
end loop
Code block A is executed at least once and B is executed zero or more times, but B isn't run on the very last (failing) iteration. A while loop is when code block A is empty, and a do..while is when code block B is empty. But if you're writing a compiler, you might be interested in generalizing both cases to a loop like this.
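A minimal sketch of that generic "test in the middle" loop, written as the usual while (true) + break idiom (the echo-until-quit behaviour is made up for illustration):

#include <iostream>
#include <string>

int main() {
    while (true) {
        std::string line;
        std::cout << "> ";                          // block A: prompt and read
        std::getline(std::cin, line);
        if (!std::cin || line == "quit")            // the loop condition, mid-body
            break;
        std::cout << "you typed: " << line << '\n'; // block B: process the line
    }
}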
In a typical Discrete Structures class in computer science, it's an easy proof that there is an equivalence mapping between the two.
Stylistically, I prefer while (easy-expr) { } when easy-expr is known up front and ready to go, and the loop doesn't have a lot of repeated overhead/initialization. I prefer do { } while (somewhat-less-easy-expr); when there is more repeated overhead and the condition may not be quite so simple to set up ahead of time. If I write an infinite loop, I always use while (true) { }. I can't explain why, but I just don't like writing for (;;) { }.
I would say it is bad practice to write if..do..while loops, for the simple reason that this increases the size of the code and causes code duplication. Code duplication is error-prone and should be avoided, as any change to one part must be performed on the duplicate as well, which isn't always done. Also, bigger code means a harder time for the CPU cache. Finally, testing at the top handles the null case and saves headaches.
Only when the first iteration is fundamentally different should one use do..while, say, if the code that makes you pass the loop condition (like initialization) is performed in the loop. Otherwise, if it is certain that the loop will never fail on the first iteration, then yes, a do..while is appropriate.
From my limited knowledge of code generation I think it may be a good idea to write bottom test loops since they enable the compiler to perform loop optimizations better. For bottom test loops it is guaranteed that the loop executes at least once. This means loop invariant code "dominates" the exit node. And thus can be safely moved just before the loop starts.

Is returning early from a function more elegant than an if statement?

Myself and a colleague have a dispute about which of the following is more elegant. I won't say who's who, so it is impartial. Which is more elegant?
public function set hitZone(target:DisplayObject):void
{
if(_hitZone != target)
{
_hitZone.removeEventListener(MouseEvent.ROLL_OVER, onBtOver);
_hitZone.removeEventListener(MouseEvent.ROLL_OUT, onBtOut);
_hitZone.removeEventListener(MouseEvent.MOUSE_DOWN, onBtDown);
_hitZone = target;
_hitZone.addEventListener(MouseEvent.ROLL_OVER, onBtOver, false, 0, true);
_hitZone.addEventListener(MouseEvent.ROLL_OUT, onBtOut, false, 0, true);
_hitZone.addEventListener(MouseEvent.MOUSE_DOWN, onBtDown, false, 0, true);
}
}
...or...
public function set hitZone(target:DisplayObject):void
{
if(_hitZone == target)return;
_hitZone.removeEventListener(MouseEvent.ROLL_OVER, onBtOver);
_hitZone.removeEventListener(MouseEvent.ROLL_OUT, onBtOut);
_hitZone.removeEventListener(MouseEvent.MOUSE_DOWN, onBtDown);
_hitZone = target;
_hitZone.addEventListener(MouseEvent.ROLL_OVER, onBtOver, false, 0, true);
_hitZone.addEventListener(MouseEvent.ROLL_OUT, onBtOut, false, 0, true);
_hitZone.addEventListener(MouseEvent.MOUSE_DOWN, onBtDown, false, 0, true);
}
In most cases, returning early reduces the complexity and makes the code more readable.
It's also one of the techniques applied in Spartan programming:
Minimal use of Control
Minimizing the use of conditionals by using specialized constructs such as ternarization, inheritance, and classes such as Class Defaults, Class Once and Class Separator.
Simplifying conditionals with early return.
Minimizing the use of looping constructs, by using action applicator classes such as Class Separate and Class FileSystemVisitor.
Simplifying logic of iteration with early exits (via return, continue and break statements).
In your example, I would choose option 2, as it makes the code more readable. I use the same technique when checking function parameters.
This is one of those cases where it's OK to break the rules (i.e. best practices). In general you want to have as few return points in a function as possible. The practical reason for this is that it simplifies your reading of the code, since you can just always assume that each and every function will take its arguments, do its logic, and return its result. Putting in extra returns for various cases tends to complicate the logic and increase the amount of time necessary to read and fully grok the code. Once your code reaches the maintenance stage, multiple returns can have a huge impact on the productivity of new programmers as they try to decipher the logic (it's especially bad when comments are sparse and the code unclear). The problem grows exponentially with respect to the length of the function.
So then why in this case does everyone prefer option 2? It's because you're setting up a contract that the function enforces through validating incoming data, or other invariants that might need to be checked. The prettiest syntax for constructing the validation is to check each condition, returning immediately if the condition fails validity. That way you don't have to maintain some kind of isValid boolean through all of your checks.
To sum things up: we're really looking at how to write validation code and not general logic; option 2 is better for validation code.
As long as the early returns are organized as a block at the top of the function/method body, then I think they're much more readable than adding another layer of nesting.
I try to avoid early returns in the middle of the body. Sometimes they're the best way, but most of the time I think they complicate.
Also, as a general rule I try to minimize nesting control structures. Obviously you can take this one too far, so you have to use some discretion. Converting nested if's to a single switch/case is much clearer to me, even if the predicates repeat some sub-expressions (and assuming this isn't a performance critical loop in a language too dumb to do subexpression elimination). Particularly I dislike the combination of nested ifs in long function/method bodies, since if you jump into the middle of the code for some reason you end up scrolling up and down to mentally reconstruct the context of a given line.
In my experience, the issue with using early returns in a project is that if others on the project aren't used to them, they won't look for them. So early returns or not - if there are multiple programmers involved, make sure everyone's at least aware of their presence.
I personally write code to return as soon as it can, as delaying a return often introduces extra complexity eg trying to safely exit a bunch of nested loops and conditions.
So when I look at an unfamiliar function, the very first thing I do is look for all the returns. What really helps there is to set up your syntax colouring to give return a different colour from anything else. (I go for red.) That way, the returns become a useful tool for determining what the function does, rather than hidden stumbling blocks for the unwary.
Ah the guardian.
Imho, yes - the logic of it is clearer because the return is explicit and right next to the condition, and it can be nicely grouped with similar structures. This is even more applicable where "return" is replaced with "throw new Exception".
As said before, early return is more readable, especially if the body of a function is long; you may find that deleting a } by mistake in a 3-page function (which in itself is not very elegant) and trying to compile it can take several minutes of non-automatable debugging.
It also makes the code more declarative, because that's the way you would describe it to another human, and a developer is probably close enough to one to understand it.
If the complexity of the function increases later, and you have good tests, you can simply wrap each alternative in a new function and call them in the branches; that way you maintain the declarative style.
In this case (one test, no else clause) I like the test-and-return. It makes it clear that in that case, there's nothing to do, without having to read the rest of the function.
However, this is splitting the finest of hairs. I'm sure you must have bigger issues to worry about :)
Option 2 is more readable, but the maintainability of the code suffers when an else may need to be added.
So if you are sure there is no else, go for option 2, but if there could be scope for an else condition then I would prefer option 1.
Option 1 is better, because you should have a minimal number of return points in a procedure.
There are exceptions like
if (a) {
return x;
}
return y;
because of the way a language works, but in general it's better to have as few exit points as is feasible.
I prefer to avoid an immediate return at the beginning of a function, and whenever possible put the qualifying logic to prevent entry to the method prior to it being called. Of course, this varies depending on the purpose of the method.
However, I do not mind returning in the middle of the method, provided the method is short and readable. In the event that the method is large, in my opinion, it is already not very readable, so it will either be refactored into multiple functions with inline returns, or I will explicitly break from the control structure with a single return at the end.
I am tempted to close it as an exact duplicate, as I saw some similar threads already, including Invert "if" statement to reduce nesting, which has good answers.
I will let it live for now... ^_^
To make that an answer: I am a believer that an early return as a guard clause is better than deeply nested ifs.
I have seen both types of code and I prefer the first one, as it looks more readable and understandable to me, but I have read in many places that early exit is the better way to go.
There's at least one other alternative. Separate the details of the actual work from the decision about whether to perform the work. Something like the following:
public function setHitZone(target:DisplayObject):void
{
if(_hitZone != target)
setHitZoneUnconditionally(target);
}
public function setHitZoneUnconditionally(target:DisplayObject):void
{
_hitZone.removeEventListener(MouseEvent.ROLL_OVER, onBtOver);
_hitZone.removeEventListener(MouseEvent.ROLL_OUT, onBtOut);
_hitZone.removeEventListener(MouseEvent.MOUSE_DOWN, onBtDown);
_hitZone = target;
_hitZone.addEventListener(MouseEvent.ROLL_OVER, onBtOver, false, 0, true);
_hitZone.addEventListener(MouseEvent.ROLL_OUT, onBtOut, false, 0, true);
_hitZone.addEventListener(MouseEvent.MOUSE_DOWN, onBtDown, false, 0, true);
}
Any of these three (your two plus the third above) are reasonable for cases as small as this. However, it would be A Bad Thing to have a function hundreds of lines long with multiple "bail-out points" sprinkled throughout.
I've had this debate with my own code over the years. I started life favoring one return and slowly have lapsed.
In this case, I prefer option 1 (one return) simply because we're only talking about 7 lines of code wrapped by an if() with no other complexity. It's far more readable and function-like. It flows top to bottom. You know you start at the top and end at the bottom.
That being said, as others have said, if there were more guards at the beginning, more complexity, or if the function grows, then I would prefer option 2: return immediately at the beginning for a simple validation.