Does double exclamation (!!) in C++ cost more CPU time? [closed]

I know it's a trick to do boolean conversion. My question is primarily about the resource cost of writing it this way. Will the compiler just ignore the "!!" and do the implicit boolean conversion directly?

If you have any doubts you can check the generated assembly, noting that at the assembly level there is no such thing as a boolean type anyway. So yes, it's probably all optimised out.
As a rule of thumb, code that mixes types, and therefore necessitates type conversions, will run slower, although that is masked by another rule of thumb: write clear code.
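To make that concrete, here is a minimal sketch (the function names are made up for the example): both functions below request the same conversion, and with optimisation enabled mainstream compilers typically emit identical machine code for them, which you can confirm by inspecting the generated assembly.

#include <cstdio>

// Two spellings of the same int-to-bool conversion; in practice an
// optimising compiler emits the same test/set sequence for both.
bool via_double_bang(int x) { return !!x; }
bool via_cast(int x)        { return static_cast<bool>(x); }

int main() {
    std::printf("%d %d\n", via_double_bang(42), via_cast(42)); // 1 1
    std::printf("%d %d\n", via_double_bang(0),  via_cast(0));  // 0 0
}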

It depends.
If you limit attention to basic types that are convertible to bool and can be an operand of the ! operator, then it depends on the compiler. Depending on the target system, the compiler may emit a sequence of instructions that gives the required effect, but not in the way you envisage. A given compiler may also treat things differently under different optimisation settings (e.g. compiling for debugging versus release).
The only way to be sure is to examine the code emitted by the compiler. In practice, it is unlikely to make much difference. As others have commented, you would be better off worrying about getting your code clear and working correctly than about the merits of premature optimisation techniques. If you have a real need (e.g. the operation is in a hotspot identified by a profiler) then you will have data to understand what the need is and to identify realistic options for doing something about it. Practically, I doubt there are many real-world cases where there would be any difference.
In C++, with user-defined types, all bets are off. There are many possibilities, such as classes that have an operator!() that returns a class type, or a class that has an operator!() but not an operator bool(). The list goes on, and there are many permutations. There are cases where the compiler would be incorrect in doing such a transformation: !!x would be expected to be equivalent to x.operator!().operator!(), but there is not actually a requirement (coding guidelines aside) for that sequence to give the same net effect as x.operator bool(). Practically, I wouldn't expect many compilers to even attempt to identify an opportunity in such cases - the analysis would be non-trivial and probably would not give many practical benefits (optimising single instructions is rarely where the gains are to be made in compiler optimisation). Again, it is better for the programmer to focus on getting code clear and correct rather than worrying about how the compiler optimises single expressions like this. For example, if calling an operator bool() is intended, then it is better to provide that operator AND write an expression that uses it (e.g. bool(x)) rather than hoping the compiler will convert a hack like !!x into a call of x.operator bool().
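To illustrate why that transformation can be incorrect, here is a contrived sketch (the Weird type is hypothetical, invented for this example): its operator!() returns a class type, so !!w is not even a bool, let alone equivalent to w.operator bool().

#include <iostream>

// Hypothetical type whose operator!() does not compose into operator bool().
struct Weird {
    int value;
    Weird operator!() const { return Weird{value + 1}; }   // returns Weird, not bool
    explicit operator bool() const { return value != 0; }
};

int main() {
    Weird w{0};
    Weird ww = !!w;                              // calls operator!() twice
    std::cout << ww.value << '\n';               // 2
    std::cout << static_cast<bool>(w) << '\n';   // 0 - clearly not the same operation
}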

Related

Will a compiler remove effectless references? [duplicate]

This question already has answers here: What exactly is the "as-if" rule? (3 answers)
The situation is that I'd like to use descriptive names for member variables so they are easily understandable in headers (for example: min_distance_to_polygon). Yet in complex algorithms I'd find it smoother to have much shorter variable names, because the context is clear (for example, min_dist in that case).
So in the definition of a method I'd just write:
int & min_dist = min_distance_to_polygon;
Does this cause an overhead after compilation and would this be acceptable coding style?
EDIT: Would this be better (as it prevents a possible copy)?
int & min_dist{min_distance_to_polygon};
Does this cause an overhead after compilation
Not with an optimizing compiler, no. This is bread-and-butter optimization for a compiler. In fact, even copying the value would likely not cause any overhead (assuming that it remains unchanged) due to the compiler's value tracking and/or how CPU registers actually work behind the scenes (see Register Renaming).
and would this be acceptable coding style?
That's opinion-based and debatable. I posit that there exists code where this is a reasonable choice, but that such code is rare. In the end it's up to you to judge whether future readers will find either version easier to read and comprehend.
Would this be better (as it prevents a possible copy)?
The two code snippets you show are exactly identical in their semantics - both are initialization. No operator= is invoked (even conceptually) in X x = y;.
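As a minimal sketch (the class and member names here are invented for the example), both spellings bind a reference to the member, nothing is copied, and an optimising compiler allocates no storage for the alias:

// Inside the member function, min_dist is just another name for the member.
struct PolygonTracker {
    int min_distance_to_polygon = 0;

    void update(int candidate) {
        // Plain initialization of a reference; no copy, no operator= involved.
        int &min_dist = min_distance_to_polygon;   // equivalently: int &min_dist{min_distance_to_polygon};
        if (candidate < min_dist)
            min_dist = candidate;                  // writes the member directly
    }
};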
Will a compiler remove effectless references?
It depends. It may, if it can. The language doesn't mandate optimisation; it is permitted.
Does this cause an overhead after compilation and would this be acceptable coding style?
Overhead would be highly unlikely in this case, assuming the compiler optimises the program.
In general, you can verify that a reference is optimised away by comparing the generated assembly with and without the reference. If the generated assembly is identical, then there is no overhead.
More generally, you can verify whether any change has significant overhead by measuring.
Would this be better
int & min_dist{min_distance_to_polygon};
It would be effectively identical.

From a performance point of view: will C++11 perform better than its predecessors? [closed]

I know that C++11 and later standards provide a lot of hands-on tools that make it easier for developers to work. However, from a performance point of view, will C++11 perform better than its predecessors?
For instance, C++11's std::thread works cross-platform, whereas before C++11 we had to write a wrapper for both Windows and Linux. In terms of performance, will std::thread perform better?
C++11's main performance benefits are support for move construction/assignment and standard specified copy-elision. The latter improves a lot of code for free, and the former still benefits existing code that is recompiled using the STL collections and other standard types (which will often support and benefit from move semantics even if the code using them doesn't explicitly opt in; the benefits are limited without the code explicitly using std::move and the like appropriately, but still there).
The vast majority of the rest of it is essentially syntactic sugar to my knowledge (std::thread is just wrapping existing threading APIs; it's largely templated/inlined, so overhead is trivial or nonexistent, but neither is it somehow gaining you a performance benefit). That said, runtime performance is often less important than developer cycles; the syntactic sugar is a huge benefit because it means it's easier to write C++11. The existence of auto alone means that templates can do much more complex things, without forcing the developer to write absurd declarations describing what the template is doing; they just use auto and let type deduction figure it out.
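As a small sketch of the opt-in point about move semantics above (the names and strings here are purely illustrative): standard containers move from rvalues automatically, while moving from a named object requires an explicit std::move.

#include <string>
#include <utility>
#include <vector>

std::vector<std::string> make_names() {
    std::vector<std::string> names;
    names.reserve(3);
    std::string s = "a fairly long string that would be costly to copy";
    names.push_back(s);             // copies: s is still needed afterwards
    names.push_back(std::move(s));  // explicit opt-in: moves, s is left valid but unspecified
    names.push_back("temporary");   // rvalue: moved automatically, no opt-in needed
    return names;                   // returned by move (or elided), not copied
}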
Several C++11 features, notably move semantics, can result in dramatic performance improvements, but mostly for code that's explicitly written to take advantage of them.
But a lot of the performance improvements will also automatically be in scope for unmodified code, as the compiler will be able to automatically deduce where move semantics, and other new language features, can be used.
But, for best results, write your code to explicitly take advantage of C++11's features.
Some performance improvements will be indirect. Other language features make it easier, and faster, to write optimal code; I specifically have constexpr in mind. This often results in better performance, too.
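A small constexpr sketch (names and numbers chosen only for illustration): the work happens entirely at compile time, so nothing is computed when the program runs.

#include <cstddef>

// C++11 constexpr: a single return statement, recursion allowed.
constexpr std::size_t factorial(std::size_t n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

static_assert(factorial(10) == 3628800, "evaluated entirely by the compiler");

int main() {
    constexpr std::size_t table[] = { factorial(1), factorial(5), factorial(10) };
    return static_cast<int>(table[0]);   // no factorial code runs at run time
}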

Is there a great deal of extra overhead when using C++ over C? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 8 years ago.
Improve this question
I have been programming microcontrollers for a bit using C, but I like the intuitiveness that C++ brings to the table with its object-oriented nature.
What are the major drawbacks of using C++ in general? Aside from class instantiation and deletion, where the associated constructors and destructors are called, is there a significant amount of overhead compared to an equivalent implementation using C?
Specifically, I am concerned about the following areas:
extra memory usage (RAM)
extra instructions required (and consequently CPU time)
extra memory required to store the C++ program (i.e. result of compilation)
Programming in C++ won't inherently give you a slower/bigger/<insert worst nightmare here> program. However, there are some reasons to prefer C to C++ for microcontrollers:
Writing a C++ compiler is much harder than writing a C compiler. Thus it can be impossible to find a C++ compiler for a small processor, but a C compiler can always be found. This may or may not bother you. Even if it doesn't bother you now, it might in the future if you want to port your code.
C++ can do things behind your back. Vectors are much easier to deal with than arrays because a lot of the work is done for you, but this means that the library is allocating memory for you, and it does so when it wants to. If memory is at a premium then you might want full control. Also, if there is a real-time element in your use case then you probably want to allocate all memory up front, so that each call is predictable (an insert into a vector might take a long time if you hit the point where it needs to grow, which can mean copying the vector to a new location on the heap; a short pre-allocation sketch follows at the end of this answer).
C++ has features that take up more memory and that are very easy to use. If you make functions virtual then the compiler may need to create a virtual function table (more memory, and a slightly slower function call). This might be what you want, but these things are easier to introduce in C++ than in C.
Overall, C++ will let you introduce code that is larger and slower than C will. But if you want those features then doing it in C is a pain (think of function pointers rather than a virtual function call ... they are effectively the same thing). And the C version will end up taking the same time and resources, so there is no saving by using C.
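Here is the pre-allocation idea from above as a minimal sketch (the container name and size are illustrative only): a single reserve() up front means later insertions never trigger a reallocation.

#include <vector>

int main() {
    std::vector<int> samples;
    samples.reserve(1024);        // one allocation, done before any timing-sensitive work
    for (int i = 0; i < 1024; ++i)
        samples.push_back(i);     // capacity is already there, so no reallocation or copying
    return static_cast<int>(samples.size() & 0xff);
}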
Dynamic dispatch (i.e. methods marked virtual) has slightly more cost (though negligibly so) than non-virtual methods (but, good news, you don't have to mark a method as virtual unless you intend to override it, and when you do use it, it will probably be faster than whatever you would have crafted by hand in C to do the same thing), and exception handling can be slow (though you don't need to use exceptions in your code). Other than that, there is no difference, except that C++ will greatly simplify the code over the equivalent C code.
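To show that a virtual call and the hand-rolled C-style function pointer mentioned in the answers above are effectively the same thing, here is a minimal sketch (all the type and function names are invented for the example); both calls at the end boil down to an indirect jump.

#include <cstdio>

// C++ version: the compiler builds the vtable and the dispatch for you.
struct Sensor {
    virtual int read() const { return 0; }
    virtual ~Sensor() = default;
};
struct TempSensor : Sensor {
    int read() const override { return 42; }
};

// Hand-rolled C-style equivalent: the "vtable" is just a function pointer in the object.
struct CSensor {
    int (*read)(const CSensor *);
};
int c_temp_read(const CSensor *) { return 42; }

int main() {
    TempSensor t;
    const Sensor &s = t;
    CSensor cs{ c_temp_read };
    std::printf("%d %d\n", s.read(), cs.read(&cs));  // both are indirect calls
}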

Is an if statement without brackets faster? [closed]

I recently did a code test for a company that is using C (and sort of C++) to write their own language. I was somewhat appalled at all the if statements in the code they sent me that had no braces. Initially I just thought they were hacks, but then I wondered whether they did it that way because it is actually (minimally) faster. Also, if anyone has seen the bit of code that caused the recent security breach in iOS, you'll note that curly braces would have thwarted the bug. Are they writing for speed as well?
This question is open to any C (syntax) type language as I imagine there could be some differences.
Braces have nothing to do with speed in a compiled language.
In cases where it is optional, it is just a style preference, albeit one with a higher potential for mistakes (e.g. Apple's faux pas).
All of these languages are compiled. The brace itself is not an instruction of any sort, it is simply a higher level syntactic element that you use to tell the compiler that a group of statements forms a coherent block of some kind. (The fact that it is a curly brace in many languages is probably more a matter of tradition than anything else.) It is similar in spirit to semicolons, parentheses, colons, etc. It is nothing more than a grammatical symbol used to help you express your program accurately to the compiler.
As far as I know there is no processor or virtual machine that has the equivalent of an fyi_curly_brace_was_here instruction.
This question is akin to asking if white-space or extra semicolons affect performance in compiled languages - these are all either optional formatting or necessary syntactic elements.
The reason we mention "compiled" languages is that certain interpreted languages, where the code is parsed as it is executed, could conceivably incur a modest speed penalty just due to parsing, but even in those types of languages, the effect would likely be completely negligible compared to whatever else the code itself is doing.
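For completeness, here is a simplified sketch in the spirit of the Apple bug mentioned above (not the actual code; the function and its arguments are invented for the example): the second goto is indented as if it were guarded, but without braces it always runs, so the remaining check is silently skipped.

#include <cstdio>

static int verify(int sig_ok, int hash_ok) {
    int err = 0;
    if ((err = !sig_ok) != 0)
        goto fail;
        goto fail;              // oops: indented, but not guarded - always executes
    if ((err = !hash_ok) != 0)  // never reached: the second check is skipped
        goto fail;
fail:
    return err;                 // 0 ("success") even when hash_ok is false
}

int main() {
    std::printf("%d\n", verify(1, 0));  // prints 0: bogus success
}

Braces around the first if body would have confined the stray goto, and modern compilers can flag this pattern with warnings such as GCC/Clang's -Wmisleading-indentation.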
In compiled languages like C or C++, the existence or non-existence of brackets cannot make the actual program faster.
My guess: They just hacked it in faster without them.
No. Compiled code is going to be the same.

Why do many other languages not allow implicit bool-to-int conversion? [closed]

I know that C++ does implicit conversion from the boolean type to integer types, so it is possible to write:
int a = (b > c);
But many other languages, such as Java, Ada, Haskell..., do not support this feature.
Therefore I think there must be some good reasons for forbidding it.
What are the disadvantages of bool-to-int implicit conversion?
In the history of languages, there are many examples of languages that have strong type-safety (which makes it less likely that you accidentally convert some type to a different type without actually meaning to). The drawback of such languages is that it can be pretty difficult (sometimes impossible) to directly convert one type into another.
On the other end of the scale, we'll find machine language (assembler), where there are "no" types (there are often different sized integer units, and some sizes of floating point in most architectures).
C was originally designed as a "replacement for machine language" to make it easier to "port" the Unix operating system to different machines. As such, it didn't have much in the way of type-safety.
Some languages let you freely convert from one type to another with very little effort; for example, PHP allows you to do the following:
$foo = "Hi";
$foo += 7;
echo $foo;
On encountering $foo += 7;, it will convert "Hi" to the value zero, since it is not a valid integer value, then add the number 7, so the output will be 7. This type of conversion can really lead to mysterious problems.
In the end, it's a decision by the language creator(s) whether the language should have strong, weak or intermediate type-safety.
In general, the purpose of strong type-safety is to stop the programmer from being a fool. It does not really matter much to the eventually generated code whether you have to type something extra to tell the compiler you really want to convert from one type to another, or whether you don't - the compiler will still do whatever conversion you asked for in some way in the generated machine code (in some cases that means "just move the data"; in other cases, such as converting a floating point value to an integer, it means using some sort of conversion instruction).
Is it better to do one or the other? That's clearly a matter of opinion, and I'm fairly convinced that a "middle ground" is good.
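For contrast, here is a small C++ sketch (the values are arbitrary) showing where the implicit bool-to-int conversion is commonly put to use: a comparison result can be summed directly because true converts to 1 and false to 0.

#include <cstdio>

int main() {
    const int values[] = { 3, -1, 4, -1, 5, -9 };
    int negatives = 0;
    for (int v : values)
        negatives += (v < 0);        // bool converts implicitly to 0 or 1
    std::printf("%d\n", negatives);  // 2

    bool flag = true;
    int widened = flag + 1;          // also legal: true becomes 1, so widened == 2
    std::printf("%d\n", widened);
}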