In the source code for nanodns, there is an atypical use of the ternary operator in an attempt to reduce the size of the code:
/* If the incoming packet has an AR record (such as in an EDNS request),
* mark the reply as "NOT IMPLEMENTED"; using a?b:c form to save one byte*/
q[11]?q[3]|=4:1;
It’s not obvious what this line does. At first glance, it looks like it is assigning a value to one of two array elements, but it is not. Rather, it seems to be either OR'ing an array element or else doing nothing (evaluating the “command” 1).
It looks like it is supposed to be a replacement for this line of code (which is indeed one byte longer):
if(q[11])q[3]|=4;
The literal equivalent would be this:
if (q[11])
q[3]|=4;
else
1;
The ternary operator is typically used as part of an expression, so seeing it used as a standalone command seems odd. Coupled with the seemingly out of place 1, this line almost qualifies as obfuscated code.
I did a quick test and was able to compile and run a C(++) program with data constants as “commands”, such as void main() {0; 'a'; "foobar"; false;}. It seems to be a sort of no-op, but I cannot find any information about such usage (Google isn’t very amenable to this type of search query).
Can anyone explain exactly what it is and how it works?
In C and C++ any expression can be made into a statement by putting ; at the end.
Another example is that the expression x = 5 can be made into a statement: x = 5;. Hopefully you agree that this is a good idea.
It would needlessly complicate the language to try and "ban" some subset of expressions from having ; come after them. This code isn't very useful but it is legal.
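For illustration, here is a minimal sketch of a few expression statements, including the ternary form from the question (the array here is made up, not taken from nanodns):
int main(void)
{
    int q[12] = {0};
    42;                      /* a constant as a statement: legal, does nothing */
    q[0] + 1;                /* computed and then discarded */
    q[11] ? (q[3] |= 4) : 1; /* the ternary used purely for its side effect */
    return 0;
}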
Please note that the code you linked to is awful and written by a really bad programmer. Particularly, the statement
"It is common practice in tiny C programs to define reused expressions
to make the code smaller"
is complete b***s***. That statement is where things started to go terribly wrong.
The size of the source code has no relation to the size of the compiled executable, nor any relation to that executable's memory consumption, nor any relation to program performance. The only thing it affects is the size of the source code files on the programmer's computer, expressed in bytes.
Unless you are programming on some 8086 computer from the mid-80s with very limited hard drive space, you never need to "reduce the size of the code". Instead, write readable code.
That being said, since q is an array of characters, the code you linked to is equivalent to
if(q[11])
{
(int)(q[3] |= 4);
}
else
{
1;
}
Here 1 is an expression with no side effects, so it will get optimized away. It was only placed there because the ?: operator demands a 3rd operand.
The only difference between if statements and the ?: operator is subtle: ?: implicitly balances the types of the 2nd and 3rd operands.
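A small sketch (not from the linked code) that makes that balancing visible; sizeof reports the type of the whole conditional expression:
#include <stdio.h>

int main(void)
{
    int cond = 1;
    /* int and double balance to double, no matter which branch would be taken */
    printf("%zu\n", sizeof(cond ? 1 : 2.0));   /* sizeof(double), typically 8 */
    return 0;
}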
To increase readability and produce self-documenting code, the code should get rewritten to something like
if (q[AR_INDEX] != 0)
{
q[REPLY_INDEX] |= NOT_IMPLEMENTED;
}
As a side note, there is a bug here: q[2]|=128;. q is of type char, which has implementation-defined signedness, so this line is potentially disastrous. The core problem is that you should never use the char type for bit-wise operations or any form of arithmetic, which is a classic beginner mistake. It must be replaced with uint8_t or unsigned char.
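A hedged sketch of why the type matters here (the value 128 comes from that line; the rest is illustrative):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    char c = 1;      /* signedness is implementation-defined */
    c |= 128;        /* if char is a signed 8-bit type, converting 129 back to
                        char gives an implementation-defined result (often -127) */
    printf("%d\n", (int)c);

    uint8_t u = 1;   /* unsigned 8-bit type: well defined */
    u |= 128;
    printf("%u\n", (unsigned)u);   /* always prints 129 */
    return 0;
}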
I'm hoping to perform the following steps in a single IF statement to save on code writing:
If ret is TRUE, set ret to the result of function lookup(). If ret is now FALSE, print an error message.
The code I've written to do this is as follows:
BOOLEAN ret = TRUE;
// ... functions assigning to `ret`
if ( ret && !(ret = lookup()) )
{
fprintf(stderr, "Error in lookup()\n");
}
I've got a feeling that this isn't as simple as it looks: reading from, assigning to and reading again from the same variable in an IF statement. As far as I'm aware, the compiler will always split statements like this up into their constituent operations according to precedence and evaluate the conjuncts one at a time, short-circuiting as soon as an operand evaluates to false rather than evaluating them all. If so, then I expect the code to follow the steps I wrote above.
I've used assignments in IF statements a lot and I know they work, but not with another read beforehand.
Is there any reason why this isn't good code? Personally, I think it's easy to read and the meaning is clear, I'm just concerned about the compiler maybe not producing the equivalent logic for whatever reason. Perhaps compiler vendor disparities, optimisations or platform dependencies could be an issue, though I doubt this.
"...to save on code writing"
This is almost never a valid argument. Don't do this. Particularly, don't obfuscate your code into a buggy, unreadable mess to "save typing". That is very bad programming.
I've got a feeling that this isn't as simple as it looks. Reading from, assigning to and reading again from the same variable in an IF statement.
Correct. It has little to do with the if statement in itself though, and everything to do with the operators involved.
As far as I'm aware, the compiler will always split statements like this up into their constituent operations according to precedence and evaluates conjuncts one at a time
Well, yes... but there is operator precedence and there is order of evaluation of subexpressions, they are different things. To make things even more complicated, there are sequence points.
If you don't know the difference between operator precedence and order of evaluation, or if you don't know what sequence points are, you need to instantly stop stuffing as many operators as you can into a single line, because in that case, you are going to write horrible bugs all over the place.
In your specific case, you get away with it only because, as a special case, there happens to be a sequence point between the evaluation of the left and right operands of the && operator. Had you written a similar mess with a different operator, for example ret + !(ret = lookup()), your code would have undefined behavior. A bug which will take hours, days or weeks to find. Well, at least you saved 10 seconds of typing!
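To make the difference concrete, here is a minimal compilable sketch (lookup here is just a hypothetical stub, not the asker's real function):
#include <stdbool.h>
#include <stdio.h>

static bool lookup(void) { return false; }   /* hypothetical stub for illustration */

int main(void)
{
    bool ret = true;

    /* Legal: && introduces a sequence point after its left operand, so the
       read of ret on the left is complete before the assignment on the right. */
    if (ret && !(ret = lookup()))
        fprintf(stderr, "Error in lookup()\n");

    /* Undefined behavior: + has no sequence point between its operands, so
       the read of ret and the write to ret are unsequenced. */
    /* if (ret + !(ret = lookup())) { ... } */

    return 0;
}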
Also, in both C and C++ use the standard bool type and not some home-brewed version.
You need to correct your code into something more readable and safe:
bool ret = true;
if(ret)
{
ret = lookup();
}
if(!ret)
{
fprintf(stderr, "Error in lookup()\n");
}
Is there any reason why this isn't good code?
Yes, there are a lot of issues with such dirty code fragments!
1) Nobody can read it, and it is not maintainable. A lot of coding guidelines contain a rule which tells you: "One statement per line".
2) If you combine multiple expressions in one if statement, evaluation stops as soon as the overall result is known. This means: if you have multiple expressions combined with AND, the first expression which evaluates to false is the last one which is executed. Same with OR combinations: the first one which evaluates to true is the last one which is executed. You already wrote this and you know it, but it is a bit of tricky programming. If all your colleagues write code that way, it is maybe OK, but as far as I know, my colleagues would not understand at first glance what you are doing!
3) You should never compare and assign in one statement. It is simply ugly!
4) If you yourself are already thinking "I'm just concerned about the compiler maybe not producing the equivalent logic", you should ask yourself why you are not sure about what you are doing! I believe that everybody who has to work with such dirty code will have to stop and puzzle over such combinations as well.
Hint: Don't do that! Never!
Lately, I have seen a lot of questions being asked about the output of some crazy yet syntactically allowed code statements like i = ++i + 1 and i = (i, i++, i) + 1;.
Frankly, realistically speaking, hardly anyone writes such code in actual programming. I have never encountered any such code in my professional experience, so I usually end up skipping such questions here on SO. But lately the sheer volume of such Q's being asked makes me wonder if I am missing out on some important theory by skipping them. I gather that such Q's revolve around sequence points. I hardly know anything about sequence points, and I am wondering whether not knowing about them is a handicap in some way. So can someone please explain the theory/concept of sequence points, or, if possible, point to a resource which explains the concept? Also, is it worth investing time in knowing about this concept/theory?
The simplest answer I can think of is:
C++ is defined in terms of an abstract machine. The output of a program executed on the abstract machine is defined ONLY in terms of the order in which "side effects" are performed. And side effects are defined as calls into I/O library functions and changes to variables marked volatile.
C++ compilers are allowed to do whatever they want internally to optimize code, but they cannot change the order of writes to volatile variables and I/O calls.
Sequence points define the C/C++ program's heartbeat: side effects before the sequence point are "complete", and side effects after the sequence point have not yet taken place. But side effects (or code that can bring about a side effect indirectly) between sequence points can be re-ordered.
Which is why understanding them is important. Without that understanding, your fundamental understanding of what a C++ program is (and how it might be optimized by an aggressive compiler) is flawed.
See http://en.wikipedia.org/wiki/Sequence_point.
It's quite a simple concept, so you don't need to invest much time :)
The exact technical details of sequence points can get hairy, yes. But following these guidelines solves almost all the practical issues:
If an expression modifies a value, there must be a sequence point between the modification and any other use of that value.
If you're not sure whether two uses of a value are separated by a sequence point or not, break up your code into more statements.
Here "modification" includes assignment operations on the left-hand value in =, +=, etc., and also the ++x, x++, --x, and x-- syntaxes. (It's usually these increment/decrement expressions where some people try to be clever and end up getting into trouble.)
Luckily, there are sequence points in most of the "expected" places:
At the end of every statement or declaration.
At the beginning and end of every function call.
At the built-in && and || operators.
At the ? in a ternary expression.
At the built-in , comma operator. (Most commonly seen in for conditions, e.g. for (a=0, b=0; a<m && b<n; ++a, ++b).) A comma which separates function arguments is not the comma operator and is not a sequence point.
Overloaded operator&&, operator||, and operator, do not cause sequence points. The potential for surprises from that fact is one reason overloading them is usually discouraged.
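As a quick, hedged illustration of the guideline above:
#include <stdio.h>

int main(void)
{
    int i = 0;
    int a[3] = {0};

    /* a[i] = i++;   undefined: i is modified and also read for another
                     purpose with no sequence point in between */

    a[i] = i;        /* fine: the end of each full statement is a sequence point */
    i++;

    printf("%d %d\n", a[0], i);   /* prints "0 1" */
    return 0;
}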
It is worth knowing that sequence points exist, because if you don't know about them you can easily write code which seems to run fine in testing but actually is undefined and might fail when you run it on another computer or with different compile options. In particular, if you write, for example, x++ as part of a larger expression that also includes x, you can easily run into problems.
I don't think it is necessary to learn all the rules fully, but you need to know when to check the specification, or, perhaps better, when to rewrite your code so that you aren't relying on sequence-point rules when a simpler design would work too. For example:
int n,n_squared;
for(n=n_squared=0;n<100;n_squared+=n+ ++n)
printf("%i squared might or might not be %i\n",n,n_squared);
... doesn't always do what you think it will do. This can make debugging painful.
The reason is that ++n reads, modifies, and stores the value of n, and this can happen before or after the other read of n in the same expression; there is no sequence point between the two accesses, so the behavior is undefined and the value of n_squared isn't well defined even after the first iteration. Sequence points are what provide that kind of ordering guarantee, and this expression has none between the accesses to n.
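For contrast, here is one possible well-defined rewrite, assuming the intent was to print each n together with its square:
#include <stdio.h>

int main(void)
{
    int n_squared = 0;
    for (int n = 0; n < 100; ++n)
    {
        printf("%i squared is %i\n", n, n_squared);
        n_squared += 2 * n + 1;   /* (n+1)^2 = n^2 + 2n + 1 */
    }
    return 0;
}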
Is the "missing semicolon" error really required? Why not treat it as a warning?
When I compile this code
int f = 1
int h=2;
the compiler intelligently tells me where I am missing it. But to me it's like: "If you know it, just treat it as if it's there and go ahead. (Later I can fix the warning.)"
int sdf = 1, df=2;
sdf=1 df =2
Even for this code, it behaves the same. That is, even if multiple statements (without ;) are on the same line, the compiler knows.
So, why not just remove this requirement? Why not behave like Python, Visual Basic, etc.?
Summary of discussion
Two examples/instances were given where automatically inserting a semicolon would actually cause a problem.
1.
return
(a+b)
This was presented as one of the worst aspects of JavaScript. But in this scenario, semicolon insertion is a problem for JavaScript, not for C++: in C++, if a ; were inserted after return, you would simply get another error, namely a missing return value.
2.
int *y;
int f = 1
*y = 2;
For this, I guess there is no better way than to introduce a statement separator, that is, a semicolon.
It's very good that the C++ compiler doesn't do this. One of the worst aspects of JavaScript is the semicolon insertion. Picture this:
return
(a + b);
The C++ compiler will happily continue on the next line as expected, while a language that "inserts" semicolons, like JavaScript, will treat it as "return;" and miss out the "(a + b);".
Instead of relying on compiler error-fixing, make it a habit to use semicolons.
There are many cases where a semicolon is needed.
What if you had:
int *y;
int f = 1
*y = 2;
This would be parsed as
int *y;
int f = 1 * y = 2;
So without the semicolons it is ambiguous.
First, this is only a small example; are you sure the compiler can intelligently tell you what's wrong for more complex code? For any piece of code? Could all compilers intelligently recognize this in the same way, so that a piece of C++ code could be guaranteed portable with missing semicolons?
Second, C++ was created more than a decade ago, when computing resources weren't nearly what they are now. Even today, builds can take a considerable amount of time. Semicolons help to clearly demarcate different commands (for the user and for the compiler!) and assist both the programmer and the compiler in understanding what's happening.
; is for the programmer's convenience. If a line of code is very long, we can press Enter and continue on a second line, because we have ; as the statement separator. It is a programming convention: there must be some kind of separator.
Having semi-colons (or line breaks, pick one) makes the compiler vastly simpler and error messages more readable.
But contrary to what other people have said, neither form of delimiters (as an absolute) is strictly necessary.
Consider, for example, Haskell, which doesn’t have either. Even the current version of VB allows line breaks in many places inside a statement, as does Python, and neither requires line continuations in those places.
For example, VB now allows the following code:
Dim result = From element in collection
Where element < threshold
Select element
No statement delimiters, no line continuations, and yet no ambiguities whatsoever.
Theoretically, this could be driven much further. All ambiguities can be eliminated (again, look at Haskell) by introducing some rules. But again, this makes the parser much more complicated (it has to be context-sensitive in a lot of places, e.g. your return example, which cannot be resolved without first knowing the return type of the function). And again, it makes it much harder to output meaningful diagnostics, since an erroneous line break could mean any of several things, so the compiler cannot know which error the user has made, or even where the error was made.
In C programs semicolons are statement terminators, not separators. You might want to read this fun article.
The semicolon is a statement delimiter; unlike VB, Python, etc., C and C++ ignore whitespace within lines of code, including carriage returns! This was originally because, at the inception of C, computer monitors could only cope with 80 characters of text, and as C++ is based on the C specification, it followed suit.
I could post the question "Why must I keep getting errors about missing _ characters in VB when I try to write code over several lines? Surely if VB knows of the problem it can insert it?"
Auto-insertion, as has already been pointed out, could be a nightmare, especially with code that wraps onto a second line.
I won't expand much on the need for semicolons vs. line-continuation characters; both have advantages and disadvantages, and in the end it's a simple language-design choice (even though it affects all the users).
I am more worried about the suggestion for the compiler to fix the code.
If you have ever seen a marvelous tool (such as... hmm, let's pick a merge tool) and the way it does its automated work, you would be very glad indeed that the compiler does not modify the code. Ultimately, if the compiler knew how to fix the code, it would mean it knew your intent, and thought transmission has not been implemented yet.
As for the warning? Any programmer worth their salt knows that warnings should be treated as errors (and compilation stopped), so what would be the advantage?
int sdf = 1,df=2;
sdf=1 df =2
I think the general problem is that without the semicolon there's no telling what the programmer could actually have meant (e.g., maybe the second line was intended as sdf = 1 + df - 2; with serious typos). Something like this might well result from completely arbitrary typos and have any intended meaning, which is why it might not be such a good idea after all to have the compiler silently "correct" errors.
You may also have noticed that you often get "expected semicolon" where the real problem is not a lack of a semicolon but something completely different instead. Imagine a malformed expression that the compiler could make sense out of by silently going and inserting semicolons.
The semicolon may seem redundant but it is a simple way for the programmer to confirm "yes, that was my intention".
Also, warnings instead of compiler errors are too weak. People compile code with warnings off, ignore warnings they get, and AFAIK the standard never prescribes what the compiler must warn about.
This question is inspired by this question, which features the following code snippet.
int s;
if((s = foo()) == ERROR)
print_error();
I find this style hard to read and prone to error (as the original question demonstrates -- it was prompted by missing parentheses around the assignment). I would instead write the following, which is actually shorter in terms of characters.
int s = foo();
if(s == ERROR)
print_error();
This is not the first time I've seen this idiom though, and I'm guessing there are reasons (perhaps historical) for it being so often used. What are those reasons?
I think it's for hysterical reasons, that early compilers were not so smart at optimizing. By putting it on one line as a single expression, it gives the compiler a hint that the same value fetched from foo() can be tested rather than specifically loading the value from s.
I prefer the clarity of your second example, with the assignment and test done later. A modern compiler will have no trouble optimizing this into registers, avoiding unnecessary loads from memory store.
When you are writing a loop, it is sometimes desirable to use the first form, as in this famous example from K&R:
int c;
while ((c = getchar()) != EOF) {
/* stuff */
}
There is no elegant "second-form" way of writing this without a repetition:
int c = getchar();
while (c != EOF) {
/* stuff */
c = getchar();
}
Or:
int c;
for (c = getchar(); c != EOF; c = getchar()) {
/* stuff */
}
Now that the assignment to c is repeated, the code is more error-prone, because one has to keep both the statements in sync.
So one has to be able to learn to read and write the first form easily. And given that, it seems logical to use the same form in if conditions as well.
I tend to use the first form mostly because I find it easy to read—as someone else said, it couples the function call and the return value test much more closely.
I make a conscious attempt at combining the two whenever possible. The "penalty" in size isn't enough to overcome the advantage in clarity, IMO.
The advantage in clarity comes from one fact: for a function like this, you should always think of calling the function and testing the return value as a single action that cannot be broken into two parts ("atomic", if you will). You should never call such a function without immediately testing its return value.
Separating the two (at all) leads to a much greater likelihood that you'll sometimes skip checking the return value completely. Other times, you'll accidentally insert some code between the call and the test of the return value that actually depends on that function having succeeded. If you always combine it all into a single statement, it (nearly) eliminates any possibility of falling into these traps.
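A small sketch of the trap being described; the function names here are made up purely for illustration:
#include <stdio.h>

#define ERROR (-1)

static int do_request(void) { return ERROR; }            /* hypothetical stub */
static void use_result(void) { puts("using result"); }   /* hypothetical stub */

int main(void)
{
    int rc = do_request();
    use_result();              /* oops: slipped in before the check, and it
                                  silently assumes do_request() succeeded */
    if (rc == ERROR)
        fprintf(stderr, "request failed\n");
    return 0;
}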
I would always go for the second. It's easier to read, there's no danger of omitting the parentheses around the assignment and it is easier to step through with a debugger.
I often find the separation of the assignment out into a different line makes debugger watch or "locals" windows behave better vis-a-vis the presence and correct value of "s", at least in non-optimized builds.
It also allows the use of step-over separately on the assignment and test lines (again, in non-optimized builds), which can be helpful if you don't want to go mucking around in disassembly or mixed view.
YMMV per compiler and debugger and for optimized builds, of course.
I personally prefer for assignments and tests to be on different lines. It is less syntactically complicated, less error prone, and more easily understood. It also allows the compiler to give you more precise error/warning locations and often makes debugging easier.
It also allows me to more easily do things like:
int rc = function();
DEBUG_PRINT(rc);
if (rc == ERROR) {
recover_from_error();
} else {
keep_on_going(rc);
}
I prefer this style so much that in the case of loops I would rather:
while (1) {
int rc = function();
if (rc == ERROR) {
break;
}
keep_on_going(rc);
}
than do the assignment in the while conditional. I really don't like for my tests to have side-effects.
I often prefer the first form. I couldn't say exactly why, but it has something to do with the semantic involved.
The second style feels to me more like 2 separate operations: call the function and then do something with the result, 2 different things. In the first style it's one logical unit: call the function, save the temporary result and eventually handle the error case.
I know it's pretty vague and far from being completely rational, so I will use one or the other depending on the importance of the saved variable or the test case.
I believe that clarity should always take priority over optimizations or "simplifications" based only on the amount of characters typed. This belief has stopped me from making many silly mistakes.
Separating the assignment and the comparison makes both clearer and so less error-prone, even if the duplication of the comparison might introduce a mistake once in a while. Among other things, parentheses quickly become hard to distinguish, and keeping everything on one line introduces more parentheses. Also, splitting it up limits statements to doing only one of either fetching a value or assigning one.
However, if you expect the people who will read your code to be more comfortable with the one-line idiom, then it is widespread enough not to cause any problems for most programmers. C programmers will definitely be aware of it, even those who might find it awkward.
The C++ comma operator is used to chain individual expressions, yielding the value of the last executed expression as the result.
For example the skeleton code (6 statements, 6 expressions):
step1;
step2;
if (condition)
{
    step3;
    return step4;
}
else
    return step5;
May be rewritten to: (1 statement, 6 expressions)
return step1,
step2,
condition?
step3, step4 :
step5;
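(For reference, here is a concrete, compilable version of that rewrite; the stepN functions below are hypothetical stand-ins.)
#include <stdio.h>

static int step1(void) { return printf("step1\n"); }
static int step2(void) { return printf("step2\n"); }
static int step3(void) { return printf("step3\n"); }
static int step4(void) { return 4; }
static int step5(void) { return 5; }

static int combined(int condition)
{
    /* one statement built from comma expressions */
    return step1(),
           step2(),
           condition ? (step3(), step4())
                     : step5();
}

int main(void)
{
    printf("%d\n", combined(1));   /* step1, step2, step3, then prints 4 */
    printf("%d\n", combined(0));   /* step1, step2, then prints 5 */
    return 0;
}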
I noticed that it is not possible to perform step-by-step debugging of such code, as the expression chain seems to be executed as a whole. Does it mean that the compiler is able to perform special optimizations which are not possible with the traditional statement approach (especially if the steps are const or inline)?
Note: I'm not talking about the coding style merit of that way of expressing sequence of expressions! Just about the possible optimisations allowed by replacing statements by expressions.
Most compilers will break your code down into "basic blocks", which are stretches of code with no jumps/branches in or out. Optimisations will be performed on a graph of these blocks: that graph captures all the control flow in the function. The basic blocks are equivalent in your two versions of the code, so I doubt that you'd get different optimisations. That the basic blocks are the same isn't entirely obvious: it relies on the fact that the control flow between the steps is the same in both cases, and so are the sequence points. The most plausible difference is that you might find in the second case there is only one block including a "return", and in the first case there are two. The blocks are still equivalent, since the optimiser can replace two blocks that "do the same thing" with one block that is jumped to from two different places. That's a very common optimisation.
It's possible, of course, that a particular compiler doesn't ignore or eliminate the differences between your two functions when optimising. But there's really no way of saying whether any differences would make the result faster or slower, without examining what that compiler is doing. In short there's no difference between the possible optimisations, but it doesn't necessarily follow that there's no difference between the actual optimisations.
The reason you can't single-step your second version of the code is just down to how the debugger works, not the compiler. Single-step usually means, "run to the next statement", so if you break your code into multiple statements, you can more easily debug each one. Otherwise, if your debugger has an assembly view, then in the second case you could switch to that and single-step the assembly, allowing you to see how it progresses. Or if any of your steps involve function calls, then you may be able to "do the hokey-cokey", by repeatedly doing "step in, step out" of the functions, and separate them that way.
Using the comma operator neither promotes nor hinders optimization in any circumstances I'm aware of, because the C++ standard guarantee is only that evaluation will be in left-to-right order, not that statement execution necessarily will be. (This is the same guarantee you get with statement line order.)
What it is likely to do, though, is turn your code into a confusing mess, since many programmers are unaware that the comma-as-operator even exists, and are apt to confuse it with commas used as parameter separators. (Want to really make your code unreadable? Call a function like my_func((++i, y), x).)
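A tiny, hedged sketch of that confusion:
#include <stdio.h>

int main(void)
{
    int i = 1, y = 10, x = 3;
    /* (++i, y) is ONE argument: the comma operator evaluates ++i, discards
       that result, and yields y; the other comma separates the arguments. */
    printf("%d %d\n", (++i, y), x);   /* prints "10 3"; i is now 2 */
    return 0;
}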
The "best" use of the comma operator I've seen is to work with multiple variables in the iteration statement of a for loop:
for (int i = 0, j = 0;
i < 10 && j < 12;
i += j, ++j) // each time through the loop we're tinkering with BOTH i and j
{
}
Very unlikely, IMHO. The thing gets compiled down to assembler/machine code, then further low-level optimizations are done, so it probably turns out to be the same thing.
OTOH, if the comma operator is overloaded, the game changes completely. But I'm sure you know that. ;)
The obligatory list:
Don't worry about rewriting almost equivalent code to gain performance
If you have a perf-problem, profile to see what the problem is
If you can't get it faster by algorithmic optimizations, look at the disassembly and check that the compiler does what you intended
If not, ask here and post source and disassembly for both versions. :)