Shall I optimize or let the compiler do that? - c++

What is the preferred way of writing loops with respect to efficiency:
Way a)
/* here I'm hoping that the compiler will optimize this
code and won't be calling size() every time it iterates through this loop */
for (unsigned i = firstString.size(); i < anotherString.size(); ++i)
{
//do something
}
or maybe should I do it this way:
Way b)
unsigned first = firstString.size();
unsigned second = anotherString.size();
and now I can write:
for (unsigned i = first; i < second; ++i)
{
//do something
}
The second way seems to me like the worse option, for two reasons: scope pollution and verbosity. But it has the advantage of guaranteeing that size() is invoked only once for each object.
Looking forward to your answers.

I usually write this code as:
/* i and size are local to the loop */
for (size_t i = firstString.size(), size = anotherString.size(); i < size; ++i) {
// do something
}
This way I do not pollute the parent scope and avoid calling anotherString.size() for each loop iteration.
It is especially useful with iterators:
for (typename some_generic_type<T>::forward_iterator it = container.begin(), end = container.end();
     it != end; ++it) {
// do something with *it
}
Since C++11, the code can be shortened even more by writing a range-based for loop:
for(const auto& item : container) {
// do something with item
}
or
for(auto item : container) {
// do something with item
}

In general, let the compiler do it. Focus on the algorithmic complexity of what you're doing rather than micro-optimizations.
However, note that your two examples are not semantically identical - if the body of the loop changes the size of the second string, the two loops will not iterate the same amount of times. For that reason, the compiler might not be able to perform the specific optimization you're talking about.
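To make the difference concrete, here is a minimal sketch (the strings and the growth rule are made up for illustration) in which the loop body grows the second string, so the two versions iterate a different number of times:
#include <iostream>
#include <string>

int main() {
    std::string first = "ab";
    std::string second = "abcd";

    // Version (a): size() is re-evaluated every iteration, so the bound
    // moves as the body grows the string.
    unsigned iterationsA = 0;
    for (unsigned i = first.size(); i < second.size(); ++i) {
        if (second.size() < 8)
            second += '!';
        ++iterationsA;
    }

    second = "abcd"; // reset for a fair comparison

    // Version (b): the bound is frozen before the loop starts, so the
    // body's changes to the string are ignored by the loop condition.
    unsigned iterationsB = 0;
    unsigned bound = second.size();
    for (unsigned i = first.size(); i < bound; ++i) {
        if (second.size() < 8)
            second += '!';
        ++iterationsB;
    }

    std::cout << iterationsA << " vs " << iterationsB << '\n'; // prints "6 vs 2"
}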

I would use the first version to begin with, simply because it looks cleaner and is easier to type. Then you can profile it to see if anything needs to be optimized further.
But I highly doubt that the first version will cause a noticeable performance drop. If the container implements size() like this:
inline size_t size() const
{
return _internal_data_member_representing_size;
}
then the compiler should be able to inline the function, eliding the function call. My compiler's implementation of the standard containers all do this.

How will a good compiler optimize your code? Not at all, as it can't be sure size() has no side effects. If size() had any side effects your code relied on, they'd be gone after such a compiler optimization.
This kind of optimization really isn't safe from a compiler's perspective; you need to do it on your own. Doing it on your own doesn't mean you need to introduce two additional local variables. Depending on the implementation, size() might be an O(1) operation. If size() is also declared inline, you'll also spare the function call, making a call to size() as good as a local member access.

Don't pre-optimize your code. If you have a performance problem, use a profiler to find it, otherwise you are wasting development time. Just write the simplest / cleanest code that you can.

This is one of those things that you should test yourself. Run the loops 10,000 or even 100,000 iterations and see what difference, if any, exists.
That should tell you everything you want to know.

My recommendation is to let inconsequential optimizations creep into your style. What I mean by this is that if you learn a more optimal way of doing something, and you can't see any disadvantages to it (as far as maintainability, readability, etc.), then you might as well adopt it.
But don't become obsessed. Optimizations that sacrifice maintainability should be saved for very small sections of code that you have measured and KNOW will have a major impact on your application. When you do decide to optimize, remember that picking the right algorithm for the job is often far more important than tight code.

I'm hoping that compiler will optimize this...
You shouldn't. Anything involving
A call to an unknown function or
A call to a method that might be overridden
is hard for a C++ compiler to optimize. You might get lucky, but you can't count on it.
Nevertheless, because you find the first version simpler and easier to read and understand, you should write the code exactly the way it is shown in your simple example, with the calls to size() in the loop. You should consider the second version, where you have extra variables that pull the common call out of the loop, only if your application is too slow and if you have measurements showing that this loop is a bottleneck.

Here's how I look at it. Performance and style are both important, and you have to choose between the two.
You can try it out and see if there is a performance hit. If there is an unacceptable performance hit, then choose the second option, otherwise feel free to choose style.

You shouldn't optimize your code, unless you have a proof (obtained via profiler) that this part of code is bottleneck. Needless code optimization will only waste your time, it won't improve anything.
You can waste hours trying to improve one loop, only to get 0.001% performance increase.
If you're worried about performance - use profilers.

There's nothing really wrong with way (b) if you just want to write something that will probably be no worse than way (a), and possibly faster. It also makes it clearer that you know that the string's size will remain constant.
The compiler may or may not spot that size will remain constant; just in case, you might as well perform this optimization yourself. I'd certainly do this if I was suspicious that the code I was writing was going to be run a lot, even if I wasn't sure that it would be a big deal. It's very straightforward to do, it takes no more than 10 extra seconds thinking about it, it's very unlikely to slow things down, and, if nothing else, will almost certainly make the unoptimized build run a bit more quickly.
(Also the first variable in style (b) is unnecessary; the code for the init expression is run only once.)

What percentage of time is spent in the for machinery, as opposed to // do something? (Don't guess - sample it.) If it is < 10%, you probably have bigger issues elsewhere.
Everybody says "Compilers are so smart these days."
Well they're no smarter than the poor coders who write them.
You need to be smart too. Maybe the compiler can optimize it but why tempt it not to?

For the "std::size_t size()const" member function which not only is O(1) but is also declared "const" and so can be automatically pulled out of the loop by the compiler, it probably doesn't matter. That said, I wouldn't count on the compiler to remove it from the loop, and I think it is a good habit to get into to factor out the calls within the loop for cases where the function isn't constant or O(1). In addition, I think assigning the values to a variable leads to the code being more readable. I would not suggest, though, that you make any premature optimizations if it will result in the code being harder to read. Again, though, I think the following code is more readable, since there is less to read within the loop:
std::size_t firststrlen = firststr.size();
std::size_t secondstrlen = secondstr.size();
for ( std::size_t i = firststrlen; i < secondstrlen; i++ ){
// ...
}
Also, I should point out that you should use std::size_t instead of unsigned, as the type of std::size_t can vary from one platform to another, and using unsigned can lead to truncations and errors on platforms where std::size_t is unsigned long instead of unsigned int.
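To illustrate (hypothetical numbers, assuming a typical LP64 platform where std::size_t is 64 bits and unsigned int is 32):
#include <cstddef>
#include <iostream>

int main() {
    std::size_t realSize = 5000000000ULL; // > UINT_MAX; e.g. a huge file or array
    unsigned truncated = static_cast<unsigned>(realSize);

    std::cout << realSize << '\n';   // 5000000000
    std::cout << truncated << '\n';  // 705032704 -- silently truncated
}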

Related

Is it expensive to compute vector size in for loops, each iteration?

Does the C++ compiler take care of cases like the following, where buildings is a vector:
for (int i = 0; i < buildings.size(); i++) {}
that is, does it notice whether buildings is modified in the loop or not, and
based on that avoid evaluating size() each iteration? Or should I do this myself,
which is not as pretty:
int n = buildings.size();
for (int i = 0; i < n; i++) {}
buildings.size() will likely be inlined by the compiler to directly access the private size field on the vector<T> class. So you shouldn't separate the call to size. This kind of micro-optimization is something you don't want to worry about anyway (unless you're in some really tight loop identified as a bottleneck by profiling).
Don't decide whether to go for one or the other by thinking in terms of performance; your compiler may or may not inline the call - and std::vector::size() has constant complexity, too.
What you should really consider is correctness, because the two versions will behave very differently if you add or remove elements while iterating.
If you don't modify the vector in any way in the loop, stick with the former version to avoid a little bit of state (the n variable).
If the compiler can determine that buildings isn't mutated within the loop (for example if it's a simple loop with no function calls that could have side effects) it will probably optimize the computation away. But computing the size of a vector is a single subtraction anyway, which should be pretty cheap as well.
Write the code in the obvious way (size inside the loop) and only if profiling shows you that it's too slow should you consider an alternative mechanism.
I write loops like this:
for (int i = 0, maxI = buildings.size(); i < maxI; ++i)
This takes care of many issues at once: it signals that the maximum is fixed up front, there is no more worrying about lost performance, and the types are consolidated. If the evaluation stays in the middle expression, it suggests that the loop changes the collection's size.
Too bad the language does not allow a sensible use of const here, or it would be const maxI.
On the other hand, for more and more cases I'd rather use an algorithm; lambdas even allow it to look almost like traditional code.
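For example, a sketch of what that looks like (assuming C++11 and a hypothetical vector of ints):
#include <algorithm>
#include <vector>

void raiseAll(std::vector<int>& buildings) {
    // The algorithm owns the iteration: no size() call to hoist,
    // no index variable to get wrong.
    std::for_each(buildings.begin(), buildings.end(),
                  [](int& height) { height += 1; });
}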
Assuming the size() function is inline for the base template, one can also assume that it has very little overhead. It is far different from, say, strlen() in C, which can have major overhead.
It is possible that it's still faster to use int n = buildings.size(); - because the compiler can see that n is not changing inside the loop, so load it into a register and not indirectly fetch the vector size. But it's very marginal, and only really tight, highly optimized loops would need this treatment (and only after analyzing and finding that it's a benefit), since it's not ALWAYS that things work as well as you expect in that sort of regard.
Only start to manually optimize stuff like that if it's really a performance problem. Then measure the difference. Otherwise you'll end up with lots of unmaintainable, ugly code that's harder to debug and less productive to work with. Most leading compilers will probably optimize it away if the size doesn't change within the loop.
But even if it's not optimized away, then it will probably be inlined (since templates are inlined by default) and cost almost nothing.

performance of std::vector c++ size() inside loop in member function

Similar question, but less specific:
Performance issue for vector::size() in a loop
Suppose we're in a member function like:
void Object::DoStuff() {
for( int k = 0; k < (int)this->m_Array.size(); k++ )
{
this->SomeNotConstFunction();
this->ConstFunction();
double x = SomeExternalFunction(k);
}
}
1) I'm willing to believe that if only the "SomeExternalFunction" is called that the compiler will optimize and not redundantly call size() on m_Array ... is this the case?
2) Wouldn't you almost certainly get a boost in speed from doing
int N = m_Array.size();
for( int k = 0; k < N; k++ ) { ... }
if you're calling some member function that is not const ?
Edit Not sure where these down-votes and snide comments about micro-optimization are coming from, perhaps I can clarify:
Firstly, it's not to optimize per-se but just understand what the compiler will and will not fix. Usually I use the size() function but I ask now because here the array might have millions of data points.
Secondly, the situation is that "SomeNotConstFunction" might have a very rare chance of changing the size of the array, or its ability to do so might depend on some other variable being toggled. So, I'm asking at what point will the compiler fail, and what exactly is the time cost incurred by size() when the array really might change, despite human-known reasons that it won't?
Third, the operations in the loop are pretty trivial; there are just millions of them, but they are embarrassingly parallel. I would hope that externally placing the value would let the compiler vectorize some of the work.
Do not get into the habit of doing things like that.
The cases where the optimization you make in (2) is:
safe to do
has a noticeable difference
something your compiler cannot figure out on its own
are few and far between.
If it were just the latter two points, I would just advise that you're worrying about something unimportant. However, that first point is the real killer: you do not want to get in the habit of giving yourself extra chances to make mistakes. It's far, far easier to accelerate slow, correct code than it is to debug fast, buggy code.
Now, that said, I'll try answering your question. The definitions of the functions SomeNotConstFunction and ConstFunction are (presumably) in the same translation unit. So if these functions really do not modify the vector, the compiler can figure that out, and it will only "call" size once.
However, the compiler does not have access to the definition of SomeExternalFunction, and so must assume that every call to that function has the potential to modify your vector. The presence of that function in your loop guarantees that size is "called" every time.
I put "called" in quotes, however, because it is such a trivial function that it almost certainly gets inlined. Also, the function is ridiculously cheap -- two memory lookups (both nearly guaranteed to be cache hits), and either a subtraction and a right shift, or maybe even a specialized single instruction that does both.
Even if SomeExternalFunction does absolutely nothing, it's quite possible that "calling" size every time would still only be a small-to-negligible fraction of the running time of your loop.
Edit: In response to the edit....
what exactly is the time cost incurred by size() when the array really might change
The difference in the times you see when you time the two different versions of code. If you're doing very low level optimizations like that, you can't get answers through "pure reason" -- you must empirically test the results.
And if you really are doing such low level optimizations (and you can guarantee that the vector won't resize), you should probably be more worried about the fact the compiler doesn't know the base pointer of the array is constant, rather than it not knowing the size is constant.
If SomeExternalFunction really is external to the compilation unit, then you have pretty much no chance of the compiler vectorizing the loop, no matter what you do. (I suppose it might be possible at link time, though....) And it's also unlikely to be "trivial" because it requires function call overhead -- at least if "trivial" means the same thing to you as to me. (again, I don't know how good link time optimizations are....)
If you really can guarantee that some operations will not resize the vector, you might consider refining your class's API (or at least its protected or private parts) to include functions that self-evidently won't resize the vector.
The size method will typically be inlined by the compiler, so there will be a minimal performance hit, though there will usually be some.
On the other hand, this is typically only true for vectors. If you are using a std::list, for instance, the size method can be quite expensive.
If you are concerned with performance, you should get in the habit of using iterators and/or algorithms like std::for_each, rather than a size-based for loop.
The micro optimization remarks are probably because the two most common implementations of vector::size() are
return _Size;
and
return _End - _Begin;
Hoisting them out of the loop will probably not noticeably improve the performance.
And if it is obvious to everyone that it can be done, the compiler is also likely to notice. With modern compilers, and if SomeExternalFunction is statically linked, the compiler is usually able to see if the call might affect the vector's size.
Trust your compiler!
In MSVC 2015, it does a return (this->_Mylast() - this->_Myfirst()). I can't tell you offhand just how the optimizer might deal with this; but unless your array is const, the optimizer must allow for the possibility that you may modify its number of elements, making it hard to optimize out. In Qt, it equates to an inline function that does a return d->size; for a QVector, that is.
I've taken to doing it in one particular project I'm working on, but it is for performance-oriented code. Unless you are interested in deeply optimizing something, I wouldn't bother. It probably is pretty fast any of these ways. In Qt, it is at most one pointer dereferencing, and is more typing. It looks like it could make a difference in MSVC.
I think nobody has offered a definitive answer so far; but if you really want to test it, have the compiler emit assembly source code, and inspect it both ways. I wouldn't be surprised to find that there's no difference when highly optimized. Let's not forget, though, that unoptimized performance during debug is also a factor that might be taken into consideration, when a lot of e.g. number crunching is involved.
I think the OP's original question could really use a mention of how the array is declared.

what is the best way to print all the elements of a vector in C++?

What is the most efficient way to code "print all the elements of a vector to standard out" in C++,
for(std::vector<int>::iterator it = intVect.begin(); it != intVect.end(); ++it)
std::cout << *it;
or
std::copy(intVect.begin(), intVect.end(), std::ostream_iterator<int>(std::cout));
and why?
You can use
http://louisdx.github.com/cxx-prettyprint/
and rely on the work of other people that made sure it will be most optimal.
If you are asking which of the methods you've posted will be faster, the only valid answer can be:
There is no way to know for sure because they are equivalent. You must profile them both and see
for yourself.
This is because the two methods are effectively the same. They do the same thing, but they use different mechanisms to do it. By the time your compiler's optimizer has finished with the code, it may have found different opportunities to increase execution speed, or it may have found opportunities in each that result in identical machine code being executed.
For example, consider:
for(std::vector<int>::iterator it = intVect.begin(); it != intVect.end(); ++it)
At first blush, it might seem like this could have a built-in inefficiency by the fact that intVect.end() is evaluated at each loop. This would make this method slower than,
std::copy(intVect.begin(), intVect.end(), std::ostream_iterator<int>(std::cout));
...where it is only evaluated once.
However, depending on the surrounding code and your compiler's settings, it might be rewritten so that it is only evaluated once, at the beginning of the for. (Credit: @SteveJessop) Or it might even be that it isn't hoisted, but evaluating it is no different from examining a pre-computed value. It's possible that either way, the emitted code must load a pointer value from (stack pointer) + (small offset known at compile time). The only way to know for sure is to compile them both and examine the resulting assembly code.
Beyond all of this however is a more fundamental issue. You are asking which method of doing something is faster, when the core thing you're trying to do is potentially very slow to begin with, relative to the means by which you do it. If you are writing to stdout using streams, it is going to have negligible effect on the overall execution time whether you use a for loop or std::copy even if one is marginally faster than the other. If your concern is overall execution time, you're possibly barking up the wrong tree.
These two lines will end up doing essentially the same thing (almost definitely) once the compiler gets through with them. Either way you will end up with the same code looping through using iterators in range of {begin, end-1} using the same streams.
This is a micro-optimization that will not help you significantly, though I'm sure you can compile it with a big data set and see for yourself easily on your platform.

Coding Practices which enable the compiler/optimizer to make a faster program

Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that maybe it would be a good idea to keep this variable in an internal register. They also made the ternary operator to help generate better code.
As time passed, the compilers matured. They became very smart in that their flow analysis allowed them to make better decisions about what values to hold in registers than you could possibly make. The register keyword became unimportant.
FORTRAN can be faster than C for some sorts of operations, due to alias issues. In theory with careful coding, one can get around this restriction to enable the optimizer to generate faster code.
What coding practices are available that may enable the compiler/optimizer to generate faster code?
Identifying the platform and compiler you use, would be appreciated.
Why does the technique seem to work?
Sample code is encouraged.
Here is a related question
[Edit] This question is not about the overall process to profile, and optimize. Assume that the program has been written correctly, compiled with full optimization, tested and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job that it can. What can you do to refactor that will remove these prohibitions, and allow the optimizer to generate even faster code?
[Edit] Offset related link
Here's a coding practice to help the compiler create fast code—any language, any platform, any compiler, any problem:
Do not use any clever tricks which force, or even encourage, the compiler to lay variables out in memory (including cache and registers) as you think best. First write a program which is correct and maintainable.
Next, profile your code.
Then, and only then, you might want to start investigating the effects of telling the compiler how to use memory. Make 1 change at a time and measure its impact.
Expect to be disappointed and to have to work very hard indeed for small performance improvements. Modern compilers for mature languages such as Fortran and C are very, very good. If you read an account of a 'trick' to get better performance out of code, bear in mind that the compiler writers have also read about it and, if it is worth doing, probably implemented it. They probably wrote what you read in the first place.
Write to local variables and not output arguments! This can be a huge help for getting around aliasing slowdowns. For example, if your code looks like
void DoSomething(const Foo& foo1, const Foo* foo2, int numFoo, Foo& barOut)
{
for (int i = 0; i < numFoo; i++)
{
barOut.munge(foo1, foo2[i]);
}
}
the compiler doesn't know that foo1 != barOut, and thus has to reload foo1 each time through the loop. It also can't read foo2[i] until the write to barOut is finished. You could start messing around with restricted pointers, but it's just as effective (and much clearer) to do this:
void DoSomethingFaster(const Foo& foo1, const Foo* foo2, int numFoo, Foo& barOut)
{
Foo barTemp = barOut;
for (int i = 0; i < numFoo; i++)
{
barTemp.munge(foo1, foo2[i]);
}
barOut = barTemp;
}
It sounds silly, but the compiler can be much smarter dealing with the local variable, since it can't possibly overlap in memory with any of the arguments. This can help you avoid the dreaded load-hit-store (mentioned by Francis Boivin in this thread).
The order you traverse memory can have profound impacts on performance and compilers aren't really good at figuring that out and fixing it. You have to be conscientious of cache locality concerns when you write code if you care about performance. For example two-dimensional arrays in C are allocated in row-major format. Traversing arrays in column major format will tend to make you have more cache misses and make your program more memory bound than processor bound:
#define N 1000
int matrix[N][N] = { ... };
//awesomely fast
long sum = 0;
for(int i = 0; i < N; i++){
for(int j = 0; j < N; j++){
sum += matrix[i][j];
}
}
//painfully slow
long sum = 0;
for(int i = 0; i < N; i++){
for(int j = 0; j < N; j++){
sum += matrix[j][i];
}
}
Generic Optimizations
Here are some of my favorite optimizations. I have actually reduced execution times and program sizes by using them.
Declare small functions as inline or macros
Each call to a function (or method) incurs overhead, such as pushing variables onto the stack. Some functions may incur an overhead on return as well. A function or method whose body contains fewer statements than the combined call overhead is a good candidate for inlining, whether as a #define macro or an inline function. (Yes, I know inline is only a suggestion, but in this case I consider it a reminder to the compiler.)
Remove dead and redundant code
If the code isn't used or does not contribute to the program's result, get rid of it.
Simplify design of algorithms
I once removed a lot of assembly code and execution time from a program by writing down the algebraic equation it was calculating and then simplified the algebraic expression. The implementation of the simplified algebraic expression took up less room and time than the original function.
Loop Unrolling
Each loop has an overhead of incrementing and termination checking. To get an estimate of the performance factor, count the number of instructions in the overhead (minimum 3: increment, check, goto start of loop) and divide by the number of statements inside the loop. The lower the number the better.
Edit: provide an example of loop unrolling
Before:
unsigned int sum = 0;
for (size_t i = 0; i < BYTES_TO_CHECKSUM; ++i)
{
sum += *buffer++;
}
After unrolling:
unsigned int sum = 0;
const size_t STATEMENTS_PER_LOOP = 8;
size_t i = 0;
for (; i + STATEMENTS_PER_LOOP <= BYTES_TO_CHECKSUM; i += STATEMENTS_PER_LOOP)
{
    sum += *buffer++; // 1
    sum += *buffer++; // 2
    sum += *buffer++; // 3
    sum += *buffer++; // 4
    sum += *buffer++; // 5
    sum += *buffer++; // 6
    sum += *buffer++; // 7
    sum += *buffer++; // 8
}
// Handle the remainder:
for (; i < BYTES_TO_CHECKSUM; ++i)
{
    sum += *buffer++;
}
A secondary benefit is gained as well: more statements are executed before the processor has to reload the instruction cache.
I've had amazing results when I unrolled a loop to 32 statements. This was one of the bottlenecks since the program had to calculate a checksum on a 2GB file. This optimization combined with block reading improved performance from 1 hour to 5 minutes. Loop unrolling provided excellent performance in assembly language too, my memcpy was a lot faster than the compiler's memcpy. -- T.M.
Reduction of if statements
Processors hate branches, or jumps, since it forces the processor to reload its queue of instructions.
Boolean Arithmetic
Convert if statements into boolean assignments. Some processors can conditionally execute instructions without branching:
bool status = true;
status = status && /* first test */;
status = status && /* second test */;
The short circuiting of the Logical AND operator (&&) prevents execution of the tests if the status is false.
Example:
struct Writer_Interface
{
    virtual bool write(unsigned int value) = 0;
};
struct Rectangle
{
    unsigned int origin_x;
    unsigned int origin_y;
    unsigned int height;
    unsigned int width;
    bool write(Writer_Interface * p_writer)
    {
        bool status = false;
        if (p_writer)
        {
            status = p_writer->write(origin_x);
            status = status && p_writer->write(origin_y);
            status = status && p_writer->write(height);
            status = status && p_writer->write(width);
        }
        return status;
    }
};
Factor Variable Allocation outside of loops
If a variable is created on the fly inside a loop, move the creation / allocation to before the loop. In most instances, the variable doesn't need to be allocated during each iteration.
Factor constant expressions outside of loops
If a calculation or variable value does not depend on the loop index, move it outside (before) the loop.
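A minimal sketch of this tip (the function and its names are hypothetical): the pow() call does not depend on the loop index, so it is computed once before the loop instead of on every iteration.
#include <cmath>
#include <cstddef>
#include <vector>

void scaleAll(std::vector<double>& data, double base, double exponent) {
    const double factor = std::pow(base, exponent); // loop-invariant, hoisted
    for (std::size_t i = 0; i < data.size(); ++i)
        data[i] *= factor;
}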
I/O in blocks
Read and write data in large chunks (blocks). The bigger the better. For example, reading one octet at a time is less efficient than reading 1024 octets with one read.
Example:
static const char Menu_Text[] = "\n"
"1) Print\n"
"2) Insert new customer\n"
"3) Destroy\n"
"4) Launch Nasal Demons\n"
"Enter selection: ";
static const size_t Menu_Text_Length = sizeof(Menu_Text) - sizeof('\0');
//...
std::cout.write(Menu_Text, Menu_Text_Length);
The efficiency of this technique can be visually demonstrated. :-)
Don't use printf family for constant data
Constant data can be output using a block write. Formatted write will waste time scanning the text for formatting characters or processing formatting commands. See above code example.
Format to memory, then write
Format to a char array using multiple sprintf, then use fwrite. This also allows the data layout to be broken up into "constant sections" and variable sections. Think of mail-merge.
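A sketch of the pattern (hypothetical record fields; snprintf is used instead of plain sprintf, to bound the buffer):
#include <cstdio>

void writeRecord(std::FILE* out, const char* name, int id, double score) {
    char line[256];
    // Format everything into memory first...
    int len = std::snprintf(line, sizeof line,
                            "name=%s id=%d score=%.2f\n", name, id, score);
    // ...then emit it with a single block write.
    if (len > 0)
        std::fwrite(line, 1, static_cast<std::size_t>(len), out);
}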
Declare constant text (string literals) as static const
When variables are declared without the static, some compilers may allocate space on the stack and copy the data from ROM. These are two unnecessary operations. This can be fixed by using the static prefix.
Lastly, Code like the compiler would
Sometimes, the compiler can optimize several small statements better than one complicated version. Also, writing code to help the compiler optimize helps too. If I want the compiler to use special block transfer instructions, I will write code that looks like it should use the special instructions.
The optimizer isn't really in control of the performance of your program, you are. Use appropriate algorithms and structures and profile, profile, profile.
That said, you shouldn't inner-loop on a small function from one file in another file, as that stops it from being inlined.
Avoid taking the address of a variable if possible. Asking for a pointer isn't "free" as it means the variable needs to be kept in memory. Even an array can be kept in registers if you avoid pointers — this is essential for vectorizing.
Which leads to the next point, read the ^#$# manual! GCC can vectorize plain C code if you sprinkle a __restrict__ here and an __attribute__( __aligned__ ) there. If you want something very specific from the optimizer, you might have to be specific.
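For example, here is the sort of loop GCC can vectorize once the qualifiers tell it the arrays never overlap (the function is made up; compile with something like g++ -O3):
// __restrict__ promises the compiler that a, b and c do not alias,
// so it is free to load and add several floats per instruction.
void addArrays(float* __restrict__ c,
               const float* __restrict__ a,
               const float* __restrict__ b,
               int n) {
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}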
On most modern processors, the biggest bottleneck is memory.
Aliasing: Load-Hit-Store can be devastating in a tight loop. If you're reading one memory location and writing to another and know that they are disjoint, carefully putting an alias keyword (such as restrict) on the function parameters can really help the compiler generate faster code. However, if the memory regions do overlap and you used 'alias', you're in for a good debugging session of undefined behavior!
Cache-miss: Not really sure how you can help the compiler since it's mostly algorithmic, but there are intrinsics to prefetch memory.
Also don't try to convert floating point values to int and vice versa too much since they use different registers and converting from one type to another means calling the actual conversion instruction, writing the value to memory and reading it back in the proper register set.
The vast majority of code that people write will be I/O bound (I believe all the code I have written for money in the last 30 years has been so bound), so the activities of the optimiser for most folks will be academic.
However, I would remind people that for the code to be optimised you have to tell the compiler to optimise it - lots of people (including me when I forget) post C++ benchmarks here that are meaningless without the optimiser being enabled.
Use const correctness as much as possible in your code. It allows the compiler to optimize much better.
In this document are loads of other optimization tips: CPP optimizations (a bit old document though)
highlights:
use constructor initialization lists
use prefix operators
use explicit constructors
inline functions
avoid temporary objects
be aware of the cost of virtual functions
return objects via reference parameters
consider per class allocation
consider stl container allocators
the 'empty member' optimization
etc
Attempt to program using static single assignment as much as possible. SSA is exactly the same as what you end up with in most functional programming languages, and that's what most compilers convert your code to in order to do their optimizations, because it's easier to work with. By doing this, places where the compiler might get confused are brought to light. It also makes all but the worst register allocators work as well as the best register allocators, and it allows you to debug more easily because you almost never have to wonder where a variable got its value from, as there was only one place it was assigned.
Avoid global variables.
When working with data by reference or pointer pull that into local variables, do your work, and then copy it back. (unless you have a good reason not to)
Make use of the almost free comparison against 0 that most processors give you when doing math or logic operations. You almost always get a flag for ==0 and <0, from which you can easily get 3 conditions:
x= f();
if(!x){
a();
} else if (x<0){
b();
} else {
c();
}
is almost always cheaper than testing for other constants.
Another trick is to use subtraction to eliminate one compare in range testing.
#define FOO_MIN 8
#define FOO_MAX 199
int good_foo(int foo) {
unsigned int bar = foo-FOO_MIN;
int rc = ((FOO_MAX-FOO_MIN) < bar) ? 1 : 0;
return rc;
}
This can very often avoid a jump in languages that do short circuiting on boolean expressions and avoids the compiler having to try to figure out how to handle keeping
up with the result of the first comparison while doing the second and then combining them.
This may look like it has the potential to use up an extra register, but it almost never does. Often you don't need foo anymore anyway, and if you do rc isn't used yet so it can go there.
When using the string functions in c (strcpy, memcpy, ...) remember what they return -- the destination! You can often get better code by 'forgetting' your copy of the pointer to destination and just grab it back from the return of these functions.
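A sketch of what that looks like in practice (hypothetical buffer; the caller must make it large enough):
#include <cstdio>
#include <cstring>

void greet(char* buf, const char* name) {
    // strcpy and strcat both return their destination argument, which is
    // already sitting in the return register, so it can be reused directly
    // instead of reloading the saved pointer.
    std::puts(std::strcat(std::strcpy(buf, "hello, "), name));
}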
Never overlook the opportunity to return exactly the same thing the last function you called returned. Compilers are not so great at picking up that:
foo_t * make_foo(int a, int b, int c) {
foo_t * x = malloc(sizeof(foo_t));
if (!x) {
// return NULL;
return x; // x is NULL, already in the register used for returns, so duh
}
x->a= a;
x->b = b;
x->c = c;
return x;
}
Of course, you could reverse the logic on that if and only have one return point.
(tricks I recalled later)
Declaring functions as static when you can is always a good idea. If the compiler can prove to itself that it has accounted for every caller of a particular function, then it can break the calling conventions for that function in the name of optimization. Compilers can often avoid moving parameters into the registers or stack positions that called functions usually expect their parameters to be in (it has to deviate in both the called function and the location of all callers to do this). The compiler can also often take advantage of knowing what memory and registers the called function will need, and avoid generating code to preserve variable values that are in registers or memory locations that the called function doesn't disturb. This works particularly well when there are few calls to a function. This gets much of the benefit of inlining code, but without actually inlining.
I wrote an optimizing C compiler and here are some very useful things to consider:
Make most functions static. This allows interprocedural constant propagation and alias analysis to do their job; otherwise the compiler needs to presume that the function can be called from outside the translation unit with completely unknown values for the parameters. If you look at the well-known open-source libraries, they all mark functions static except the ones that really need to be extern.
If global variables are used, mark them static and constant if possible. If they are initialized once (read-only), it's better to use an initializer list like static const int VAL[] = {1,2,3,4}, otherwise the compiler might not discover that the variables are actually initialized constants and will fail to replace loads from the variable with the constants.
NEVER use a goto to the inside of a loop, the loop will not be recognized anymore by most compilers and none of the most important optimizations will be applied.
Use pointer parameters only if necessary, and mark them restrict if possible. This helps alias analysis a lot because the programmer guarantees there is no alias (the interprocedural alias analysis is usually very primitive). Very small struct objects should be passed by value, not by reference.
Use arrays instead of pointers whenever possible, especially inside loops (a[i]). An array usually offers more information for alias analysis and after some optimizations the same code will be generated anyway (search for loop strength reduction if curious). This also increases the chance for loop-invariant code motion to be applied.
Try to hoist outside the loop calls to large functions or external functions that don't have side-effects (don't depend on the current loop iteration). Small functions are in many cases inlined or converted to intrinsics that are easy to hoist, but large functions might seem for the compiler to have side-effects when they actually don't. Side-effects for external functions are completely unknown, with the exception of some functions from the standard library which are sometimes modeled by some compilers, making loop-invariant code motion possible.
When writing tests with multiple conditions place the most likely one first. if(a || b || c) should be if(b || a || c) if b is more likely to be true than the others. Compilers usually don't know anything about the possible values of the conditions and which branches are taken more (they could be known by using profile information, but few programmers use it).
Using a switch is faster than doing a test like if(a || b || ... || z). Check first if your compiler does this automatically, some do and it's more readable to have the if though.
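As a sketch of that last tip (a made-up example), here is the same membership test written both ways:
// The chained-if form: the compiler emits a sequence of compares.
bool isVowelIf(char c) {
    return c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u';
}

// The switch form: the compiler is free to build a jump table or a
// bit test instead of comparing one value at a time.
bool isVowelSwitch(char c) {
    switch (c) {
        case 'a': case 'e': case 'i': case 'o': case 'u':
            return true;
        default:
            return false;
    }
}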
In the case of embedded systems and code written in C/C++, I try and avoid dynamic memory allocation as much as possible. The main reason I do this is not necessarily performance but this rule of thumb does have performance implications.
Algorithms used to manage the heap are notoriously slow on some platforms (e.g., VxWorks). Even worse, the time it takes to return from a call to malloc is highly dependent on the current state of the heap. Therefore, any function that calls malloc is going to take a performance hit that cannot be easily accounted for. That performance hit may be minimal if the heap is still clean, but after the device runs for a while the heap can become fragmented. The calls are going to take longer, and you cannot easily calculate how performance will degrade over time. You cannot really produce a worst-case estimate. The optimizer cannot provide you with any help in this case either. To make matters even worse, if the heap becomes too heavily fragmented, the calls will start failing altogether. The solution is to use memory pools (e.g., glib slices) instead of the heap. The allocation calls are going to be much faster and deterministic if you do it right.
A dumb little tip, but one that will save you some microscopic amounts of speed and code.
Always pass function arguments in the same order.
If you have f_1(x, y, z) which calls f_2, declare f_2 as f_2(x, y, z). Do not declare it as f_2(x, z, y).
The reason for this is that C/C++ platform ABI (AKA calling convention) promises to pass arguments in particular registers and stack locations. When the arguments are already in the correct registers then it does not have to move them around.
While reading disassembled code I've seen some ridiculous register shuffling because people didn't follow this rule.
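A sketch of the rule (hypothetical functions; the register names assume the x86-64 System V calling convention, where the first three integer arguments arrive in rdi, rsi and rdx):
int f_2(int x, int y, int z);

int f_1(int x, int y, int z) {
    // x, y and z are already in the argument registers, so this call
    // needs no register shuffling at all.
    return f_2(x, y, z);
    // Had f_2 been declared as f_2(x, z, y), the compiler would have to
    // swap two registers before every call.
}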
Two coding techniques I didn't see in the list above:
Bypass the linker by writing code as a single source
While separate compilation is really nice for compile time, it is very bad when you speak of optimization. Basically, the compiler can't optimize beyond a compilation unit; that is the linker's reserved domain.
But if you design your program well, you can also compile it through a single common source. That is, instead of compiling unit1.c and unit2.c and then linking both objects, compile all.c, which merely #includes unit1.c and unit2.c. Thus you will benefit from all the compiler optimizations.
It's very much like writing header-only programs in C++ (and even easier to do in C).
This technique is easy enough if you write your program to enable it from the beginning, but you must also be aware that it changes part of the C semantics, and you can meet some problems like static variables or macro collisions. For most programs it's easy enough to overcome the small problems that occur. Also be aware that compiling as a single source is way slower and may take a huge amount of memory (usually not a problem with modern systems).
Using this simple technique I happened to make some programs I wrote ten times faster!
Like the register keyword, this trick could also become obsolete soon. Optimizing through the linker is beginning to be supported by compilers (see gcc's link-time optimization).
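A sketch of what the single compiled source looks like (file names taken from the description above):
// all.c - the only file handed to the compiler. Instead of compiling
// unit1.c and unit2.c separately and linking the two objects, we include
// the sources themselves, so the compiler sees the whole program at once
// and can inline and propagate constants across what used to be
// compilation-unit boundaries.
#include "unit1.c"
#include "unit2.c"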
Separate atomic tasks in loops
This one is more tricky. It's about the interaction between algorithm design and the way the optimizer manages the cache and register allocation. Quite often programs have to loop over some data structure and perform some action for each item. Quite often the actions performed can be split into two logically independent tasks. If that is the case, you can write exactly the same program with two loops over the same boundary, each performing exactly one task. In some cases writing it this way can be faster than the single loop (the details are more complex, but an explanation can be that in the simple-task case all variables can be kept in processor registers, while in the more complex one that's not possible and some registers must be written to memory and read back later, at a cost higher than the additional flow control).
Be careful with this one (profile performance with and without this trick), as like using register it may as well reduce performance rather than improve it.
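A sketch of the two shapes being compared (the struct and the sums are made up; only profiling can say which wins on a given machine):
#include <cstddef>
#include <vector>

struct Item { double a, b; };

// One loop, two independent tasks: more values are live at once, which
// can force some of them out of registers and into memory.
void fused(const std::vector<Item>& items, double& sumA, double& sumB) {
    for (std::size_t i = 0; i < items.size(); ++i) {
        sumA += items[i].a;
        sumB += items[i].b;
    }
}

// Two loops over the same boundary, one task each: each loop is simple
// enough that its variables can stay in registers.
void split(const std::vector<Item>& items, double& sumA, double& sumB) {
    for (std::size_t i = 0; i < items.size(); ++i)
        sumA += items[i].a;
    for (std::size_t i = 0; i < items.size(); ++i)
        sumB += items[i].b;
}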
I've actually seen this done in SQLite and they claim it results in performance boosts ~5%: Put all your code in one file or use the preprocessor to do the equivalent to this. This way the optimizer will have access to the entire program and can do more interprocedural optimizations.
Most modern compilers should do a good job speeding up tail recursion, because the function calls can be optimized out.
Example:
int fac2(int x, int cur) {
if (x == 1) return cur;
return fac2(x - 1, cur * x);
}
int fac(int x) {
return fac2(x, 1);
}
Of course this example doesn't have any bounds checking.
Late Edit
While I have no direct knowledge of the code; it seems clear that the requirements of using CTEs on SQL Server were specifically designed so that it can optimize via tail-end recursion.
Don't do the same work over and over again!
A common antipattern that I see goes along these lines:
void Function()
{
MySingleton::GetInstance()->GetAggregatedObject()->DoSomething();
MySingleton::GetInstance()->GetAggregatedObject()->DoSomethingElse();
MySingleton::GetInstance()->GetAggregatedObject()->DoSomethingCool();
MySingleton::GetInstance()->GetAggregatedObject()->DoSomethingReallyNeat();
MySingleton::GetInstance()->GetAggregatedObject()->DoSomethingYetAgain();
}
The compiler actually has to call all of those functions every time. Assuming you, the programmer, know that the aggregated object isn't changing over the course of these calls, then for the love of all that is holy...
void Function()
{
MySingleton* s = MySingleton::GetInstance();
AggregatedObject* ao = s->GetAggregatedObject();
ao->DoSomething();
ao->DoSomethingElse();
ao->DoSomethingCool();
ao->DoSomethingReallyNeat();
ao->DoSomethingYetAgain();
}
In the case of the singleton getter the calls may not be too costly, but there is certainly a cost (typically, "check to see if the object has been created; if it hasn't, create it, then return it"). The more complicated this chain of getters becomes, the more time we'll waste.
Use the most local scope possible for all variable declarations.
Use const whenever possible
Don't use register unless you plan to profile both with and without it
The first two of these, especially #1, help the optimizer analyze the code. They will especially help it to make good choices about what variables to keep in registers.
Blindly using the register keyword is as likely to hurt as to help your optimization. It's just too hard to know what will matter until you look at the assembly output or profile.
There are other things that matter to getting good performance out of code; designing your data structures to maximize cache coherency for instance. But the question was about the optimizer.
Align your data to native/natural boundaries.
I was reminded of something that I encountered once, where the symptom was simply that we were running out of memory, but the result was substantially increased performance (as well as huge reductions in memory footprint).
The problem in this case was that the software we were using made tons of little allocations. Like, allocating four bytes here, six bytes there, etc. A lot of little objects, too, running in the 8-12 byte range. The problem wasn't so much that the program needed lots of little things, it's that it allocated lots of little things individually, which bloated each allocation out to (on this particular platform) 32 bytes.
Part of the solution was to put together an Alexandrescu-style small object pool, but extend it so I could allocate arrays of small objects as well as individual items. This helped immensely in performance as well since more items fit in the cache at any one time.
The other part of the solution was to replace the rampant use of manually-managed char* members with an SSO (small-string optimization) string. The minimum allocation being 32 bytes, I built a string class that had an embedded 28-character buffer behind a char*, so 95% of our strings didn't need to do an additional allocation (and then I manually replaced almost every appearance of char* in this library with this new class, that was fun or not). This helped a ton with memory fragmentation as well, which then increased the locality of reference for other pointed-to objects, and similarly there were performance gains.
A neat technique I learned from @MSalters' comment on this answer allows compilers to do copy elision even when returning different objects according to some condition:
// before
BigObject a, b;
if(condition)
return a;
else
return b;
// after
BigObject a, b;
if(condition)
swap(a,b);
return a;
If you've got small functions you call repeatedly, I have in the past gotten large gains by putting them in headers as "static inline". Function calls on the ix86 are surprisingly expensive.
Reimplementing recursive functions in a non-recursive way using an explicit stack can also gain a lot, but then you really are in the realm of development time vs gain.
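A sketch of that transformation on a made-up binary tree type; the recursive and the explicit-stack versions compute the same sum:
#include <stack>

struct Node {
    int value;
    Node* left;
    Node* right;
};

// Recursive: pays call overhead per node and can blow the call stack
// on a deep tree.
int sumRecursive(const Node* n) {
    if (!n) return 0;
    return n->value + sumRecursive(n->left) + sumRecursive(n->right);
}

// Iterative with an explicit stack: more code, but no per-node call
// overhead, and depth is limited only by available heap memory.
int sumIterative(const Node* root) {
    int sum = 0;
    std::stack<const Node*> pending;
    if (root) pending.push(root);
    while (!pending.empty()) {
        const Node* n = pending.top();
        pending.pop();
        sum += n->value;
        if (n->left)  pending.push(n->left);
        if (n->right) pending.push(n->right);
    }
    return sum;
}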
Here's my second piece of optimisation advice. As with my first piece of advice this is general purpose, not language or processor specific.
Read the compiler manual thoroughly and understand what it is telling you. Use the compiler to its utmost.
I agree with one or two of the other respondents who have identified selecting the right algorithm as critical to squeezing performance out of a program. Beyond that the rate of return (measured in code execution improvement) on the time you invest in using the compiler is far higher than the rate of return in tweaking the code.
Yes, compiler writers are not from a race of coding giants and compilers contain mistakes and what should, according to the manual and according to compiler theory, make things faster sometimes makes things slower. That's why you have to take one step at a time and measure before- and after-tweak performance.
And yes, ultimately, you might be faced with a combinatorial explosion of compiler flags so you need to have a script or two to run make with various compiler flags, queue the jobs on the large cluster and gather the run time statistics. If it's just you and Visual Studio on a PC you will run out of interest long before you have tried enough combinations of enough compiler flags.
When I first pick up a piece of code I can usually get a factor of 1.4 -- 2.0 times more performance (ie the new version of the code runs in 1/1.4 or 1/2 of the time of the old version) within a day or two by fiddling with compiler flags. Granted, that may be a comment on the lack of compiler savvy among the scientists who originate much of the code I work on, rather than a symptom of my excellence. Having set the compiler flags to max (and it's rarely just -O3) it can take months of hard work to get another factor of 1.05 or 1.1
When DEC came out with its alpha processors, there was a recommendation to keep the number of arguments to a function under 7, as the compiler would always try to put up to 6 arguments in registers automatically.
For performance, focus first on writing maintainable code - componentized, loosely coupled, and so on - so that when you have to isolate a part, either to rewrite, optimize, or simply profile it, you can do it without much effort.
The optimizer will only help your program's performance marginally.
You're getting good answers here, but they assume your program is pretty close to optimal to begin with, and you say
Assume that the program has been
written correctly, compiled with full
optimization, tested and put into
production.
In my experience, a program may be written correctly, but that does not mean it is near optimal. It takes extra work to get to that point.
If I can give an example, this answer shows how a perfectly reasonable-looking program was made over 40 times faster by macro-optimization. Big speedups can't be done in every program as first written, but in many (except for very small programs), it can, in my experience.
After that is done, micro-optimization (of the hot-spots) can give you a good payoff.
I use the Intel compiler, on both Windows and Linux.
When more or less done, I profile the code. Then I hang on the hotspots and try to change the code to let the compiler do a better job.
If the code is computational and contains a lot of loops, the vectorization report in the Intel compiler is very helpful - look for 'vec-report' in the help.
So the main idea: polish the performance-critical code. As for the rest, priority goes to correctness and maintainability - short functions, clear code that can be understood a year later.
One optimization I have used in C++ is creating a constructor that does nothing. One must manually call init() in order to put the object into a working state.
This has benefit in the case where I need a large vector of these classes.
I call reserve() to allocate the space for the vector, but the constructor does not actually touch the page of memory the object is on. So I have spent some address space but not actually consumed much physical memory. I avoid the page faults and the associated construction costs.
As I generate objects to fill the vector, I set them using init(). This limits my total page faults and avoids the need to resize() the vector while filling it.
One thing I've done is try to keep expensive actions to places where the user might expect the program to delay a bit. Overall performance is related to responsiveness, but isn't quite the same, and for many things responsiveness is the more important part of performance.
The last time I really had to do improvements in overall performance, I kept an eye out for suboptimal algorithms, and looked for places that were likely to have cache problems. I profiled and measured performance first, and again after each change. Then the company collapsed, but it was interesting and instructive work anyway.
I have long suspected, but never proved, that declaring arrays so that they hold a power-of-2 number of elements enables the optimizer to do a strength reduction, replacing a multiply by a shift by a number of bits when looking up individual elements.
Put small and/or frequently called functions at the top of the source file. That makes it easier for the compiler to find opportunities for inlining.

Is using size() for the 2nd expression in a for construct always bad?

In the following example, should I expect that values.size() will be called every time around the loop? In that case it might make sense to introduce a temporary vectorSize variable. Or should a modern compiler be able to optimize the calls away by recognising that the vector size cannot change?
double sumVector(const std::vector<double>& values) {
    double sum = 0.0;
    for (size_t ii = 0; ii < values.size(); ++ii) {
        sum += values.at(ii);
    }
    return sum;
}
Note that I don't care if there are more efficient methods to sum the contents of a vector, this question is just about the use of size() in a for construct.
Here's one way to do it that makes it explicit - size() is called only once.
for (size_t ii = 0, count = values.size(); ii < count; ++ii)
Edit: I've been asked to actually answer the question, so here's my best shot.
A compiler generally won't optimize a function call, because it doesn't know if it will get a different return value from one call to the next. It also won't optimize if there are operations inside the loop that it can't predict the side effects of. Inline functions might make a difference, but nothing is guaranteed. Local variables are easier for the compiler to optimize.
Some will call this premature optimization, and I agree that there are few cases where you will ever notice a speed difference. But if it doesn't make the code any harder to understand, why not just consider it a best practice and go with it? It certainly can't hurt.
P.S. I wrote this before I read Benoit's answer carefully, I believe we're in complete agreement.
It all depends on what the vector's size() implementation is, how aggressive the compiler is, and whether it listens to inline directives.
I would be more defensive and introduce the temporary as you don't have any guarantees about how efficient your compiler will be.
Of course, if this routine is called once or twice and the vector is small, it really doesn't matter.
If it will be called thousands of times, then I would use the temporary.
Some might call this premature optimization, but I would tend to disagree with that assessment.
While you are trying to optimize the code, you are not investing time or obfuscating the code in the name of performance.
I have a hard time considering what is a refactoring to be an optimization. But in the end, this is along the lines of "you say tomato, I say tomato"...
Start with size() in the 'for' construct until you need to optimize for speed.
If it is too slow, look for ways to make it faster, such as using a temporary variable to hold the result of size.
No matter the optimisation settings, putting the .size() call in the second expression will be at most as performant as hoisting the .size() call out before the for loop. That is:
size_t size = values.size();
for (size_t ii = 0; ii < size; ++ii) {
sum += values.at(ii);
}
will always perform at least as well as, if not better than:
for (size_t ii = 0; ii < values.size(); ++ii) {
sum += values.at(ii);
}
In practice, it probably won't matter, since hoisting the .size() call is a common compiler optimisation. However, I do find the second version easier to read.
I find this even easier, though:
double sum = std::accumulate(values.begin(), values.end(), 0.0);
Worth noting that even if you are dealing with millions of items, the overhead is going to be negligible.
In any case this should really be written using iterators, as there may be more overhead accessing a specific element.
There is really no way the compiler can assume that size() won't change - because it could.
If the order of iteration isn't important, then you could always write it as follows, which is slightly more efficient:
for (int i = v.size() - 1; i >= 0; i--)
{
...
}
This isn't part of the question, but why are you using at in your code in place of the subscript operator []?
The point of at is to ensure that no operation on an invalid index occurs. However, this will never be the case in your loop, since you know from your code what the indices are going to be (always assuming single-threadedness).
Even if your code contained a logical error causing you to access an invalid element, at in this place would be useless, because you don't expect the resulting exception and hence you don't handle it (or do you enclose all of your loops in try blocks?).
The use of at here is misleading because it tells the reader that you (as a programmer) don't know what values the index will have - which is obviously wrong.
I agree with Curro, this is a typical case for the use of iterators. Although this is more verbose (at least if you don't use constructs like Boost.Foreach), it is also much more expressive and safer.
Boost.Foreach would allow you to write the code as follows:
double sum = 0.0;
BOOST_FOREACH(double d, values)
sum += d;
This operation is safe, efficient, short and readable.
It doesn't matter at all. The performance overhead of .at() is so large (it contains a conditional throw statement) that a non-optimized version will spend most of its time there. An optimizing compiler smart enough to eliminate the conditional throw will necessarily spot that size() does not change.
I agree with Benoit. The introduction of a new variable, especially an int or even a short, will have a bigger benefit than calling size() each time.
It's one less thing to worry about if the loop ever gets large enough that it may impact performance.
If you hold the size of the vector in a temporary variable, you will be independent of the compiler.
My guess is, that most compilers will optimize the code in a way, that size() will be called only once. But using a temporary variable will give you a guarantee, size() will only be called once!
The size method from std::vector should be inlined by the compiler, meaning that every call to size() is replaced by its actual body (see the question Why should I ever use inline code for more information about inlining). Since in most implementations size() basically computes the difference between end() and begin() (which should be inlined too), you don't have to worry too much about loss of performance.
Moreover, if I remember correctly, some compilers are "smart" enough to detect the constness of an expression in the second part of the for construct, and generate code that evaluates the expression only once.
The compiler will not know if the value of .size() changes between calls, so it won't do any optimizations. I know you just asked about the use of .size(), but you should be using iterators anyway.
std::vector<double>::const_iterator iter = values.begin();
for(; iter != values.end(); ++iter)
{
// use the iterator here to access the value.
}
In this case, the call to .end() is similar to the problem you describe with .size(). If you know the loop does not perform any operation on the vector that invalidates the iterators, you can initialize an iterator to the .end() position prior to entering the loop and use that as your boundary.
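A sketch of that version, under the assumption that the body never invalidates iterators:
#include <vector>

double sumVector(const std::vector<double>& values) {
    double sum = 0.0;
    // end() hoisted out of the loop; safe only because the body never
    // inserts into or erases from the vector.
    std::vector<double>::const_iterator end = values.end();
    for (std::vector<double>::const_iterator iter = values.begin();
         iter != end; ++iter) {
        sum += *iter;
    }
    return sum;
}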
Always write code the first time exactly as you mean it. If you are iterating over the vector from zero to size(), write it like that. Do not optimise the call to size() into a temporary variable unless you have profiled the call to be a bottleneck in your program that needs optimising.
In all likelihood, a good compiler will be able to optimise away the call to size(), particularly given that the vector is declared as const.
If you were using a container where size() was O(n) (like std::list) and not O(1) (like std::vector), you would not be iterating through that container using indices. You would be using iterators instead.
Anyway, if the body of the loop is so trivial that recalculating std::vector::size() matters, then there is probably a more efficient (but possibly platform-specific) way to do the calculation, regardless of what it is. If the body of the loop is non-trivial, recalculating std::vector::size() each time is unlikely to matter.
If you are modifying the vector (adding or removing elements) in the for loop, then you should not use a temporary variable, since this could lead to bugs.
If you are not modifying the vector's size in the for loop, then I would always use a temporary variable to store the size (this makes your code independent of the implementation details of vector::size).
In such cases using iterators is cleaner - in some it's even faster. There's only one kind of call into the container: advancing the iterator and checking whether any elements are left.
Then of course the for can become a while, and no temporary variables are needed at all - you can even pass an iterator to the sumVector function instead of a const reference/value.
Most, maybe even all, standard implementations of size() will be inlined by the compiler to what would be the equivalent of a temporary or at most a pointer dereference.
However, you can never be sure. Inlining is about as hidden as these things get, and 3rd party containers may have virtual function tables - which means you may not get inlined.
However, seriously, using a temporary reduces readability slightly for almost certainly no gain. Only optimise to a temporary if profiling says it is fruitful. If you make these micro optimisations everywhere, your code could become unreadable, perhaps even for yourself.
As a slight aside, no compiler would optimise size() down to a single call assigned to a temporary. There is almost no guarantee of const in C++. The compiler cannot risk assuming size() will return the same value for the whole loop; for example, another thread could change the vector between loop iterations.