C/C++: an int value that isn't a number?

Can this ever happen?
Three asserts, of which exactly one should fire:
int nr = perform_calc();
assert( nr == 0);
assert( nr > 0);
assert( nr < 0);
Can there be a case where the program doesn't trigger any of the asserts, on g++ 3.4.4?
And no, I don't have the possibility to change the code in order to print the number out in case the asserts don't fire.
Any ideas?
Edit: After reading several comments I was forced to edit. "Show the code? Why are you doing this stupid thing? I don't believe it! Where is it used?"
From my question it should have been obvious that I will not post/change the code because of several possible reasons:
I'm a total beginner and am ashamed of the code (no crime there; sure, it would make answering the question much easier if I did post it).
I was asked to help out a friend with only a little information (and no, I did not ask him why he can't check the number returned, or why he can't just add a breakpoint).
I am writing my code in Emacs without any compiler and am sending it to a remote server that compiles it, runs it, and can only report failed asserts if something goes wrong.
If you believed that I was playing a prank or a hoax, you should have voted to close the thread instead. I would have been perfectly fine with that. But adding unnecessary comments like this only made me want an "attitude" flag to be implemented.
I want to thank others for their comments and answers that actually tried to explain and answered my question.

assert compiles to nothing if the macro NDEBUG is defined. Make sure you #undef NDEBUG (before <cassert>/<assert.h> is included) when compiling this translation unit.
You can invoke gcc with the -E switch to verify that your assert statements are still in the code.
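For illustration, a minimal sketch of how NDEBUG controls this (perform_calc() here is just a hypothetical stand-in, as in the question); the #undef has to appear before <cassert> is included:
// If NDEBUG is defined when <cassert> is included, assert() expands to
// nothing and none of the three checks below can ever fire.
#undef NDEBUG        // force assertions on for this translation unit
#include <cassert>

int perform_calc() { return 1; }   // hypothetical stand-in

int main() {
    int nr = perform_calc();
    assert(nr == 0);   // with assertions enabled, at least one of these must fire
    assert(nr > 0);
    assert(nr < 0);
}
Running something like g++ -E prog.cpp (optionally with -DNDEBUG) shows the preprocessed output, so you can see whether the assert calls survive.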

As I've seen some ugly things in my life, it could be explained if perform_calc() has a buffer overrun that overwrites the return address on the stack. When the function returns, the overwritten address is recovered from the stack and loaded into the program counter, leading to a jump, possibly into another area of the program, apparently past the assertion calls.
This is a very remote possibility, but it would at least explain what you are seeing.
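Purely for illustration, a minimal (deliberately broken) sketch of the kind of bug being described; buf and the loop bound are invented for the example:
int perform_calc() {
    int buf[4];
    for (int i = 0; i <= 8; ++i)   // off by a lot: writes past buf[3]
        buf[i] = 0;                // undefined behaviour: may clobber the
                                   // saved return address of this frame
    return 1;
}
// When perform_calc() returns, execution can resume at a corrupted address,
// so the caller's assert() lines might never be reached at all.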
Another possibility is that someone did an ugly macro trick. Check if you have things like
#define assert
or some colleague put something like this in a header while you were at the restroom (not that the preprocessor would actually accept these, but you get the idea):
#define < ==
#define > ==
As suggested in another answer, check with gcc -E to see what code is actually compiled.

It doesn't look like that code is correct in the first place. If assertions are enabled (i.e. NDEBUG is not defined):
assert( nr == 0);
The above line will call abort() if nr != 0. Therefore, if this line passes, the second assert will execute:
assert( nr > 0);
... and call abort(), because nr == 0 and therefore !(nr > 0).
assert( nr < 0);
And this third line will never run at all.
What, precisely, is the point of this code? And why, if these asserts could be added, could you not instead add a printf()?

Is this code multithreaded? Maybe you have a race condition.

"And no I don't have the possibility to change the code in order to print the number out..."
Strange. You obviously have the ability to insert the assert() statements, because if they were actually in real code you couldn't touch, the code could not possibly work. So why can't you print the value the assert() calls test?

I suspect you've accidentally eliminated the problem while sanitizing the code fragment. There's either more code (and nr is getting changed between asserts) or it doesn't actually look like that (or, per rlbond, you don't have assert turned on).
Try posting a less sanitized code segment, and let's see if we can't work it out.

Could it be a NaN? In that case, the following assert would fail:
assert( nr == nr );
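A minimal sketch of that idea; note that it only applies if a floating-point value is involved somewhere, since the nr in the question is an int (so this is purely hypothetical):
#include <cassert>
#include <cmath>

int main() {
    double nr = std::nan("");   // hypothetical NaN result of a calculation
    assert(!(nr == 0));         // ==, < and > are all false when an operand is NaN
    assert(!(nr > 0));
    assert(!(nr < 0));
    assert(!(nr == nr));        // the self-comparison test suggested above
}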

Related

Read and write variable in an IF statement

I'm hoping to perform the following steps in a single IF statement to save on code writing:
If ret is TRUE, set ret to the result of function lookup(). If ret is now FALSE, print error message.
The code I've written to do this is as follows:
BOOLEAN ret = TRUE;
// ... functions assigning to `ret`
if ( ret && !(ret = lookup()) )
{
fprintf(stderr, "Error in lookup()\n");
}
I've got a feeling that this isn't as simple as it looks: reading from, assigning to, and reading again from the same variable in one IF statement. As far as I'm aware, the compiler will always split statements like this into their constituent operations according to precedence and evaluate the conjuncts one at a time, stopping as soon as an operand evaluates to false rather than evaluating them all. If so, then I expect the code to follow the steps I wrote above.
I've used assignments in IF statements a lot and I know they work, but not with another read beforehand.
Is there any reason why this isn't good code? Personally, I think it's easy to read and the meaning is clear, I'm just concerned about the compiler maybe not producing the equivalent logic for whatever reason. Perhaps compiler vendor disparities, optimisations or platform dependencies could be an issue, though I doubt this.
...to save on code writing
This is almost never a valid argument. Don't do this. Particularly, don't obfuscate your code into a buggy, unreadable mess to "save typing". That is very bad programming.
I've got a feeling that this isn't as simple as it looks. Reading from, assigning to and reading again from the same variable in an IF statement.
Correct. It has little to do with the if statement in itself though, and everything to do with the operators involved.
As far as I'm aware, the compiler will always split statements like this up into their constituent operations according to precedence and evaluates conjuncts one at a time
Well, yes... but there is operator precedence and there is order of evaluation of subexpressions; they are different things. To make things even more complicated, there are sequence points.
If you don't know the difference between operator precedence and order of evaluation, or if you don't know what sequence points are, you need to instantly stop stuffing as many operators as you can into a single line, because in that case, you are going to write horrible bugs all over the place.
In your specific case, you get away with the bad programming only because, as a special case, there happens to be a sequence point between the evaluation of the left and right operands of the && operator. Had you written a similar mess with a different operator, for example ret + !(ret = lookup()), your code would have undefined behavior: a bug which could take hours, days or weeks to find. Well, at least you saved 10 seconds of typing!
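To make the difference concrete, here is a minimal sketch (lookup() is a hypothetical stand-in that always fails):
#include <cstdio>

static bool lookup() { return false; }   // hypothetical stand-in

int main() {
    bool ret = true;

    // Well-defined: && sequences its left operand before the right one,
    // so ret is read before the assignment in the right operand happens.
    if (ret && !(ret = lookup()))
        std::fprintf(stderr, "Error in lookup()\n");

    // NOT well-defined: + imposes no ordering between its operands, so
    // ret would be read and written without sequencing.
    // if (ret + !(ret = lookup())) { ... }
}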
Also, in both C and C++ use the standard bool type and not some home-brewed version.
You need to correct your code into something more readable and safe:
bool ret = true;
if(ret)
{
ret = lookup();
}
if(!ret)
{
fprintf(stderr, "Error in lookup()\n");
}
Is there any reason why this isn't good code?
Yes, there are a lot of issues with such dirty code fragments!
1) Nobody can read it and it is not maintainable. A lot of coding guidelines contain a rule which tells you: "One statement per line".
2) If you combine multiple expressions in one if statement, operands are only evaluated until the result of the whole expression is determined. This means: if you have multiple expressions combined with AND, the first one that evaluates to false is the last one that gets executed. The same goes for OR combinations: the first one that evaluates to true is the last one executed. You already wrote this and you know it, but it is a tricky bit of programming. If all your colleagues write code that way it may be OK, but as far as I know, my colleagues would not understand at first glance what you are doing!
3) You should never compare and assign in one statement. It is simply ugly!
4) If you yourself already think "I'm just concerned about the compiler maybe not producing the equivalent logic", you should ask yourself why you are not sure about what you are doing! I believe that everybody who has to work with such dirty code will think twice about such combinations.
Hint: Don't do that! Never!

Is it preferred to use else or else-if for the final branch of a conditional

What's preferred
if n > 0
# do something
elsif n == 0
# do something
elsif n < 0
# do something
end
or
if n > 0
# do something
elsif n == 0
# do something
else
# do something
end
I was using else for a while. I recently switched to using elsif. My conclusion is that the first option adds readability at the cost of more typing, but could confuse some people if they expect an else. I don't have enough experience to know whether the first option creates more readability or more confusion, and whether there are other pros/cons I've missed.
For my specific example, where the scope of else is comparable to the previous conditions and what it does catch can be expressed as a simple condition, is using else preferable for the final branch? Also, would a different conditional influence the answer? I wrote the code in Ruby but I assume the answer is language agnostic. If it isn't, I would also like to know why.
You should know every execution path through your conditional logic regardless. That being said, a simple else for the remaining case is usually the clearest. If you only have an else if, it makes the next guy scratch his head and wonder if you are missing something. If it makes you feel better to say "else what", then put an inline comment: else # n < 0
edit after your edit:
For my specific example, where the scope of else is comparable to the previous conditions and what it does catch can be expressed as a simple condition, is using else preferable for the final branch? Also, would a different conditional influence the answer? I wrote the code in Ruby but I assume the answer is language agnostic. If it isn't, I would also like to know why.
Yes, else is still preferable. I can't think of any scenario that changes this answer. Having the else implies completeness in your logic. Not having the else is distracting: others (or you at a later date) will have to spend more time scrutinizing the code to double-check that all conditions are being handled. In fact, the coding standard I abide by, the Embedded C Coding Standard by the Barr Group, says this:
"Any if statement with an else if clause shall end with an else clause. ... This is the equivalent of requiring a default case in every switch."
This echoes MISRA rule 14.10:
All if ... else if constructs shall be terminated with an else clause.
*All of my examples pertain to C, but as you said in the post, this is somewhat agnostic to the language being used.
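For reference, a minimal sketch of what those rules ask for; the trailing comment documents the case the final else covers:
#include <cstdio>

void classify(int n) {
    if (n > 0) {
        std::printf("positive\n");
    } else if (n == 0) {
        std::printf("zero\n");
    } else {                       // n < 0: the only remaining case
        std::printf("negative\n");
    }
}

int main() { classify(-3); }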
I always prefer the plain else - it makes it obvious to anyone that your intention is "if all else fails, do this" (no pun intended).
In your example you only have 3 possible states, but what about if there were more?
If you want the final conditional statement to be a catch-all, then you should use just else. If you would like, for clarity, you could do something like this:
else # n < 0
# do something
(Note that the n < 0 is just a comment.) A couple of reasons for this:
Depending on the language/compiler, there could be a speed improvement
Less likely to make a programming mistake. Your example above is fairly simple, but imagine a situation in which there is a fourth option that you did not consider at the time of programming? In that case, no action would be taken, when you probably meant for it to fall into the else statement.
else creates a catch-all; with only elsifs, it is possible that none of the branches is executed. Each specific situation needs its own solution, so I would use what is necessary.
else makes clear that it is the only remaining branch at that point: if execution makes it there, it goes through the else no matter what. If that's your intention, use else, not elsif, because although you can build the exact same program with elsif as the last branch, it linguistically implies that the block will only conditionally execute.

Is there any reason for using if(1 || !Foo())?

I read some legacy code:
if ( 1 || !Foo() )
Is there any seen reason why not to write:
if ( !Foo() )
The two are not the same. The first will never evaluate Foo() because the 1 short-circuits the ||.
Why it's done - probably someone wanted to force entry in the then branch for debugging purposes and left it there. It could also be that this was written before source control, so they didn't want the code to be lost, rather just bypassed for now.
if (1 || !Foo()) will always be satisfied. !Foo() will not even be reached, because of short-circuit evaluation.
This happens when you want to make sure that the code below the if will be executed, but you don't want to remove the real condition in it, probably for debug purposes.
Additional information that might help you:
if(a && b) - if a is false, b won't be checked.
if(a && b) - if a is true, b will be checked, because if it's false, the expression will be false.
if(a || b) - if a is true, b won't be checked, because this is true anyway.
if(a || b) - if a is false, b will be checked, because if b is true then it'll be true.
It's highly recommended to have a macro for this purpose, say DEBUG_ON 1; that makes it easier to understand what the programmer means, and avoids magic numbers in the code (thanks @grigeshchauhan).
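A minimal sketch of that suggestion (DEBUG_ON and Foo() are hypothetical names here):
#include <cstdio>

#define DEBUG_ON 1   // set back to 0 to restore the real condition

static bool Foo() { std::puts("Foo() called"); return true; }   // hypothetical

int main() {
    if (DEBUG_ON || !Foo()) {   // while DEBUG_ON is 1, Foo() is never called
        std::puts("forced into the branch");
    }
}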
1 || condition
is always true, regardless of whether the condition is true or not. In this case, the condition is never even evaluated. The following code:
int c = 5;
if (1 || c++){}
printf("%d", c);
outputs 5, since c is never incremented; however, if you changed 1 to 0, the c++ would actually be evaluated, making the output 6.
A usual practical use of this is when you want to test some piece of code that is only invoked when a rarely satisfied condition is met:
if (1 || condition ) {
// code I want to test
}
This way condition will never be evaluated and therefore // code I want to test is always invoked. However, it is definitely not the same as:
if (condition) { ...
which is a statement where condition will actually be evaluated (and in your case Foo will be called)
The question was answered properly - the difference is the right side of the or operation is short-circuited, suggesting this is debug code to force entry into the if block.
But in the interest of best practices, at least my rough stab at a best practice, I'd suggest alternatives, in order of increasing preference (best is last):
Note: I noticed after coding the examples that this was a C++ question; the examples are C#. Hopefully you can translate. If anyone needs me to, just post a comment.
In-line comment:
if (1 /*condition*/) //temporary debug
Out-of-line comment:
//if(condition)
if(true) //temporary debug
Name-Indicative Function
//in some general-use container
bool ForceConditionForDebug(bool forcedResult, string IgnoredResult)
{
#if DEBUG
Debug.WriteLine(
string.Format(
"Conditional {0} forced to {1} for debug purposes",
IgnoredResult,
forcedResult));
return forcedResult;
#else
#if ALLOW_DEBUG_CODE_IN_RELEASE
return forcedResult;
#else
throw new ApplicationException("Debug code detected in release mode");
#endif
#endif
}
//Where used
if(ForceConditionForDebug(true, "condition"))...
//Our case
if(ForceConditionForDebug(true, "!Foo()"))...
And if you wanted a really robust solution, you could add a repository rule to source control to reject any checked-in code that calls ForceConditionForDebug. This code should never have been written that way, because it obviously doesn't communicate intent. It should never have been checked in (or been allowed to be checked in - source control? peer review?), and it should definitely never be allowed to execute in production in its current form.

Another way to use continue keyword in C++

Recently we found a "good way" to comment out lines of code by using continue:
for(int i=0; i<MAX_NUM; i++){
....
.... //--> about 30 lines of code
continue;
....//--> there is about 30 lines of code after continue
....
}
I scratched my head, asking myself why the previous developer put the continue keyword inside the intensive loop. Most probably he/she felt it was easier to put in a continue keyword instead of removing all the unwanted code...
That triggered another question for me, looking at the scenarios below:
Scenario A:
for(int i=0; i<MAX_NUM; i++){
....
if(bFlag)
continue;
....//--> there is about 100 lines of code after continue
....
}
Scenario B:
for(int i=0; i<MAX_NUM; i++){
....
if(!bFlag){
....//--> there is about 100 lines of code after continue
....
}
}
Which do you think is the best? Why?
How about break keyword?
Using continue in this case reduces nesting greatly and often makes code more readable.
For example:
for(...) {
    if( condition1 ) {
        Object* pointer = getObject();
        if( pointer != 0 ) {
            ObjectProperty* property = pointer->GetProperty();
            if( property != 0 ) {
                ///blahblahblah...
            }
        }
    }
}
becomes just
for(...) {
    if( !condition1 ) {
        continue;
    }
    Object* pointer = getObject();
    if( pointer == 0 ) {
        continue;
    }
    ObjectProperty* property = pointer->GetProperty();
    if( property == 0 ) {
        continue;
    }
    ///blahblahblah...
}
You see - code becomes linear instead of nested.
You might also find answers to this closely related question helpful.
For your first question, it may be a way of skipping the code without commenting it out or deleting it. I wouldn't recommend doing this. If you don't want your code to be executed, don't precede it with a continue/break/return, as this will raise confusion when you/others are reviewing the code and may be seen as a bug.
As for your second question, they are basically identical performance-wise (it depends on the assembly output), and the choice depends greatly on design. It depends on the way you want readers of the code to "translate" it into English, as most do when reading code back.
So, the first example may read "Do blah, blah, blah. If (expression), continue on to the next iteration."
While the second may read "Do blah, blah, blah. If (expression), do blah, blah, blah"
So, using continue in an if statement may undermine the importance of the code that follows it.
In my opinion, I would prefer the continue if I could, because it would reduce nesting.
I hate commenting out unused code. What I do is remove it completely and then check in to version control.
Who still needs to comment out unused code after the invention of source control?
That "comment" use of continue is about as abusive as a goto :-). It's so easy to put an #if 0/#endif or /*...*/, and many editors will then colour-code the commented code so it's immediately obvious that it's not in use. (I sometimes like e.g. #ifdef USE_OLD_VERSION_WITH_LINEAR_SEARCH so I know what's left there, given it's immediately obvious to me that I'd never have such a stupid macro name if I actually expected someone to define it during the compile... guess I'd have to explain that to the team if I shared the code in that state though.) Other answers point out source control systems allow you to simply remove the commented code, and while that's my practice before commit - there's often a "working" stage where you want it around for maximally convenient cross-reference, copy-paste etc..
For scenarios: practically, it doesn't matter which one you use unless your project has a consistent approach that you need to fit in with, so I suggest using whichever seems more readable/expressive in the circumstances. In longer code blocks, a single continue may be less visible and hence less intuitive, while a group of them - or many scattered throughout the loop - are harder to miss. Overly nested code can get ugly too. So choose either if unsure then change it if the alternative starts to look appealing.
They communicate subtly different information to the reader too: continue means "hey, rule out all these circumstances and then look at the code below", whereas the if block means you have to "push" a context but still keep them all in mind as you try to understand the rest of the loop internals (here, only to find the if immediately followed by the loop termination, so all that mental effort was wasted). Countering this, continue statements tend to trigger a mental check to ensure all necessary steps have been completed before the next loop iteration - that it's all just as valid as whatever follows might be - and if someone, say, adds an extra increment or debug statement at the bottom of the loop, they have to know there are continue statements they may also want to handle.
You may even decide which to use based on how trivial the test is, much as some programmers will use early return statements for exceptional error conditions but will use a "result" variable and structured programming for anticipated flows. It can all get messy - programming has to be at least as complex as the problems - your job is to make it minimally messier / more-complex than that.
To be productive, it's important to remember "Don't sweat the small stuff", but in IT it can be a right pain learning what's small :-).
Aside: you may find it useful to do some background reading on the pros/cons of structured programming, which involves single entry/exit points, gotos etc..
I agree with other answerers that the first use of continue is BAD. Unused code should be removed (should you still need it later, you can always find it in your SCM - you do use an SCM, right? :-))
For the second, some answers have emphasized readability, but one important thing is missing for me: IMO the first move should be to extract those 100 lines of code into one or more separate methods. After that, the loop becomes much shorter and simpler, and the flow of execution becomes obvious. If I can extract the code into a single method, I personally prefer an if:
for(int i=0; i<MAX_NUM; i++){
....
if(!bFlag){
doIntricateCalculation(...);
}
}
But a continue would be almost equally fine to me. In fact, if there are multiple continues / returns / breaks within that 100 lines of code, it is impossible to extract it into a single method, so then the refactoring might end up with a series of continues and method calls:
for(int i=0; i<MAX_NUM; i++){
....
if(bFlag){
continue;
}
SomeClass* someObject = doIntricateCalculation(...);
if(!someObject){
continue;
}
SomeOtherClass* otherObject = doAnotherIntricateCalculation(someObject);
if(!otherObject){
continue;
}
// blah blah
}
continue is useful in a high-complexity for loop. It's bad practice to use it to comment out the remaining code of a loop, even for temporary debugging, since people tend to forget...
Think on readability first, which is what is going to make your code more maintainable. Using a continue statement is clear to the user: under this condition there is nothing else I can/want to do with this element, forget about it and try the next one. On the other hand, the if is only telling that the next block of code does not apply to those for which the condition is not met, but if the block is big enough, you might not know whether there is actually any further code that will apply to this particular element.
I tend to prefer the continue over the if for this particular reason. It more explicitly states the intent.

How could this manner of writing code be called?

I'm reviewing a quite old project and see code like this for the second time already (C++ - like pseudocode):
if( conditionA && conditionB ) {
    actionA();
    actionB();
} else {
    if( conditionA ) {
        actionA();
    }
    if( conditionB ) {
        actionB();
    }
}
In this code conditionA evaluates to the same result on both evaluations, and the same goes for conditionB. So the code is equivalent to just:
if( conditionA ) {
    actionA();
}
if( conditionB ) {
    actionB();
}
So the former variant is just twice as much code for the same effect. What could such a manner of writing code (I mean the former variant) be called?
This is indeed bad coding practice, but be warned that if condition A and B evaluations have any side effects (var increments, etc.) the two fragments are not equivalent.
I would call it bad code. Though I've tended to find similar constructs in projects that grew without any code review being done (or other lax development practices).
Guys? Look at this part: ( conditionA && conditionB )
Basically, if conditionA happens to be false, then it won't evaluate conditionB.
Now, it would be bad coding style, but if conditionA and conditionB don't just read data, and there is also some code behind these conditions that changes data, there can be a huge difference between the two notations!
If conditionA is false, then conditionA is evaluated twice and conditionB is evaluated just once.
If conditionA is true and conditionB is false, then both conditions are evaluated twice.
If both conditions are true, both are evaluated just once.
In the second version, both conditions are evaluated exactly once. Thus, the two fragments evaluate the conditions the same number of times only when both conditions are true.
To make things more complex: if conditionB is false, actionA could change something that changes that evaluation, so the else branch would then execute actionB too. But if both conditions evaluate to true and actionA changes the evaluation of conditionB to false, actionB is still executed.
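A minimal sketch (with hypothetical conditions that count their own evaluations) makes the difference in evaluation counts visible:
#include <cstdio>

static int evalsA = 0, evalsB = 0;

// Hypothetical conditions whose only side effect is counting their calls.
static bool conditionA() { ++evalsA; return false; }
static bool conditionB() { ++evalsB; return true; }

int main() {
    // Former variant
    if (conditionA() && conditionB()) {
        /* actionA(); actionB(); */
    } else {
        if (conditionA()) { /* actionA(); */ }
        if (conditionB()) { /* actionB(); */ }
    }
    std::printf("former: A evaluated %d times, B %d times\n", evalsA, evalsB);      // 2 and 1

    evalsA = evalsB = 0;

    // Simplified variant
    if (conditionA()) { /* actionA(); */ }
    if (conditionB()) { /* actionB(); */ }
    std::printf("simplified: A evaluated %d times, B %d times\n", evalsA, evalsB);  // 1 and 1
}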
I tend to refer to this kind of code as "Why make things easy when you can do it the hard way?" and consider it a design pattern. Actually, it's "Mortgage-Driven Development": code is made more complex so that the main developer is the only one who understands it, while other developers just get confused and hopefully give up on redesigning the whole thing. As a result, the original developer has to stay around just to maintain this code, which is called "job security", and is thus able to pay his mortgage for many, many years.
I wondered why something like this would be used, then realized that I use a similar structure in my own code. Similar, but not the same:
if (A && B) {
    action1;
} else if (A) {
    action2;
} else if (B) {
    action3;
} else {
    action4;
}
In this case, every action would be the display of a different message: a message that could not be generated by just concatenating two strings. Say it's part of a game, and A checks whether you have enough energy while B checks whether you have enough mass. If you don't have enough mass AND energy, you can't build anything anymore and a critical warning needs to be displayed. If you only have energy, a warning that you have to find more mass is enough. With only mass, your builders would need to recharge. And with both you can continue to build. Four different actions, thus this weird construction.
However, the sample in the Q shows something completely different. Basically, you'd get one message that you're out of mass and another that you're out of energy. These two messages aren't combined into a single message.
Now, in the example, if conditionA detects energy levels and conditionB detects mass levels, then both solutions would work just fine. But if actionA tells your builders to drop their mass and start recharging, you'd suddenly gain a little bit of mass again in your game, and if conditionB had indicated that you ran out of mass, that would no longer be true - simply because actionA released mass again. If actionB is the command that tells builders to start collecting mass as soon as they're able, then the first solution gives all builders this command: they start collecting mass first and then continue their other actions. In the second solution, no such command is given; the builders recharge and start using the little mass that was just released. If this check is done every 5 minutes, those builders might recharge in one minute and then sit idle for 4 more minutes because they ran out of mass. In the first solution, they would immediately start collecting mass.
Yeah, it's a stupid example. Been playing Supreme Commander again. :-) Just wanted to come up with a possible scenario, but it could be improved a lot!...
It's code written by someone who doesn't know how to use a Karnaugh Map.
This is very close to the 'fizzbuzz' Design Pattern:
if( fizz && buzz ) {
printFizz();
printBuzz();
} else {
if( fizz ) {
printFizz();
}
else if( buzz ) {
printBuzz();
}
else {
printValue();
}
}
Maybe the code started life as an instance of fizzbuzz (maybe copied-n-pasted), then was refactored slightly into what you see today due to slightly different requirements, but the refactoring didn't go as far as it probably should have (boolean logic can sometimes be a bit trickier than one might think - hence fizzbuzz as an interview weed-out technique).
I would call it bad code too.
There is no best way of indenting, but there is one golden rule: choose one and stick with it.
This is "redundant" code, and yes it is bad. If some precondition must be added to the calls to actionA (assuming that the precondition can't be put into actionA itself), we now have to modify the code in 2 places, and therefore run the risk of overlooking one of them.
This is one of those times where you can feel better about deleting some lines of code, than in writing new ones.
inefficient code?
Also, it could be called "paid per line".
I would call it 'twice is better'. It's made like that to be sure that the runtime really understood the question ;).
(although in multi-threaded, not-safe environment, the result may differ between the two variants.)
I might call it "code I wrote last month while I was in a hurry / not focused / tired". It happens; we all make or have made these kinds of mistakes. Just change it. If you want to, you can try to find out who did this (hope it is not you) and give him/her feedback.
Since you said you've seen this more than once, it seems that it's more than a one-time error due to being tired. I see several reasons for someone to repeatedly come up with such code:
The code was originally different and got refactored, but whoever did it overlooked that this part is now redundant.
Whoever did this didn't have a good grasp of boolean logic.
(Also, there's the slight possibility that there might be more to this than what your simplified snippet shows.)
As pgast has said in a comment, there is nothing wrong with this code if actionA affects conditionB (note that this is not a condition with a side effect but an action with a side effect, which you kind of expect).