Should I avoid repeated access to the same method within a loop? - c++

I worry about the final performance of a program, but at the same time I do not want to burden the programmer with excessive or unnecessary care in the code.
In the code below, for example, the getPosition method will return the same value 10000 times:
vector<int> arr;
for (int i = 0; i < 10000; i++)
    arr.push_back(i + window.getPosition().x);
In this specific case I know that window.getPosition().x will always return the same value.
To avoid a loss of performance, should I change the code to, for example:
vector<int> arr;
int x = window.getPosition().x;
for (int i = 0; i < 10000; i++)
    arr.push_back(i + x);
But this always puts an extra burden on the programmer.
Or is there some smart caching, so that I should not worry about it?

Maybe; this will depend on how visible the code you are calling is to the compiler (LTO or not? Everything visible, or separate object files?). If the compiler can prove the value cannot change, it will cache it for you. But seemingly innocuous things can prevent the compiler from proving that it cannot change.
The overhead here may be small, and simpler code is easier to maintain. Write the simpler, clearer, more obviously correct version. Then profile your code to find the hot paths, apply this kind of optimization to those hot paths, and re-profile to see if it is worth it.
After a decade or more of doing this, you'll develop a feel for hot paths that is about 25% accurate, and you can justify preemptively doing minor optimizations before profiling with Dunning-Kruger-inflated confidence. Or, you know, profile.

That is definitely a good idea. Even if the cost of calling the method is very small, storing a single int only costs you a few bytes and will almost definitely be quicker. You could also declare it as const to make its purpose extra clear.
Also, if efficiency is your concern, you should consider reserving memory for your vector. Right now, as you're pushing back elements into arr, you're going to have to reallocate several times. Since you're running through that loop 10000 times, it can be pretty inefficient to have to copy all that information around. Since you know you need space for 10000 elements, you should call arr.reserve(10000); before the loop.
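Putting both suggestions together, a minimal sketch (window.getPosition() comes from the question's code; the rest is standard C++):
vector<int> arr;
arr.reserve(10000);                     // one allocation instead of several
const int x = window.getPosition().x;   // computed once; const documents the intent
for (int i = 0; i < 10000; ++i)
    arr.push_back(i + x);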

Related

Vector re-declaration versus insertions in looping operations - C++

I have the option either to create and destroy a vector on every call to func(), pushing elements in each iteration as shown in Example A, OR to fix the initialization and only overwrite old values in each iteration, as shown in Example B.
Example A:
void func()
{
    std::vector<double> my_vec(5, 0.0);
    for (int i = 0; i < my_vec.size(); i++) {
        my_vec.push_back(i);
        // do something
    }
}

while (condition) {
    func();
}
Example B:
void func(std::vector<double>& my_vec)
{
    for (int i = 0; i < my_vec.size(); i++) {
        my_vec[i] = i;
        // do something
    }
}

while (condition) {
    std::vector<double> my_vec(5, 0.0);
    func(my_vec);
}
Which of the two would be computationally cheaper? The size of the array won't be more than 10.
I still suspect that the question that was asked is not the question that was intended, but it occurred to me that the main point of my answer would likely not change. If the question gets updated, I can always edit this answer to match (or delete it, if it turns out to be inapplicable).
De-prioritize optimizations
There are various factors that should affect how you write your code. Among the desirable goals are space optimization, time optimization, data encapsulation, logic encapsulation, readability, robustness, and correct functionality. Ideally, all of these goals would be achievable in every piece of code, but that is not particularly realistic. Much more likely is a situation where one or more of these goals must be sacrificed in favor of the others. When that happens, optimizations should typically yield to everything else.
That is not to say that optimizations should be ignored. There are plenty of optimizations that rarely obstruct the higher-priority goals. These range from the small, such as passing by const reference instead of by value, to the large, such as choosing the logarithmic algorithm instead of the exponential one. However, the optimizations that do interfere with the other goals should be postponed until after your code is reasonably complete and functioning correctly. At that point, a profiler should be used to determine where the bottlenecks actually are. Those bottlenecks are the only places where other goals should yield to optimizations, and only if the profiler confirms that the optimizations achieved their goals.
For the question being asked, this means that the main concern should not be computational expense, but encapsulation. Why should the caller of func() need to allocate space for func() to work with? It should not, unless a profiler identified this as a performance bottleneck. And if a profiler did that, it would be much easier (and more reliable!) to ask the profiler if the change helps than to ask Stack Overflow.
Why?
I can think of two major reasons to de-prioritize optimizations. First, the "sniff test" is unreliable. While there might be a few people who can identify bottlenecks by looking at code, there are many, many more who merely think they can. Second, that's why we have optimizing compilers. It is not unheard of for someone to come up with this super-clever optimization trick only to discover that the compiler was already doing it. Keep your code clean and let the compiler handle the routine optimizations. Only step in when the task demonstrably exceeds the compiler's capabilities.
See also: premature-optimization
Choosing an optimization
OK, suppose the profiler did identify construction of this small, 10-element array as a bottleneck. The next step is to test an alternative, right? Almost. First you need an alternative, and I would consider a review of the theoretical benefits of various alternatives to be useful. Just keep in mind that this is theoretical and that the profiler gets the final say. So I'll go into the pros and cons of the alternatives from the question, as well as some other alternatives that might bear consideration. Let's start from the worst options, working our way to the better ones.
Example A
In Example A, a vector is created with 5 elements, then elements are pushed onto the vector until i meets or exceeds the vector's size. Seeing how i and the vector's size are both increased by one each iteration (and i starts smaller than the size), this loop will run until the vector grows large enough to crash the program. That means probably billions of iterations (despite the question's claim that the size will not exceed 10).
Easily the most computationally expensive option. Don't do this.
Example B
In Example B, a vector is created for each iteration of the outer while loop and is then accessed by reference from within func(). The performance cons here include passing a parameter to func() and having func() access the vector indirectly through a reference. There are no performance pros, as this does everything the baseline (see below) would do, plus some extra steps.
Even though a compiler might be able to compensate for the cons, I see no reason to try this approach.
Baseline
The baseline I'm using is a fix to Example A's infinite loop. Specifically, replace "my_vec.push_back(i);" with Example B's "my_vec[i] = i;". This simple approach is along the lines of what I would expect for the initial assessment by the profiler. If you cannot beat simple, stick with it.
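In code, the baseline is simply Example A with the push_back replaced:
void func()
{
    std::vector<double> my_vec(5, 0.0);
    for (int i = 0; i < my_vec.size(); i++) {
        my_vec[i] = i;
        // do something
    }
}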
Example B*
The text of the question presents an inaccurate assessment of Example B. Interestingly, the assessment describes an approach that has the potential to improve on the baseline. To get code that matches the textual description, move Example B's "std::vector<double> my_vec(5, 0.0);" to the line immediately before the while statement. This has the effect of constructing the vector only once, rather than constructing it with each iteration.
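In code, Example B* looks like this (func is Example B's version, which only overwrites existing elements):
std::vector<double> my_vec(5, 0.0);   // constructed once, before the loop
while (condition) {
    func(my_vec);
}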
The cons of this approach are the same as those of Example B as originally coded. However, we now pick up a gain in that the vector's constructor is called only once. If construction is more expensive than the indirection costs, the result should be a net improvement once the while loop iterates often enough. (Beware these conditions: that's a significant "if" and there is no a priori guess as to how many iterations is "enough".) It would be reasonable to try this and see what the profiler says.
Get some static
A variant on Example B* that helps preserve encapsulation is to use the baseline (the fixed Example A), but precede the declaration of the vector with the keyword static. This brings in the benefit of constructing the vector only once, but without the overhead associated with making the vector a parameter. In fact, the benefit could be greater than in Example B* since construction happens only once per program execution, rather than each time the while loop is started. The more times the while loop is started, the greater this benefit.
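In code, a sketch of the baseline plus static, exactly as described above:
void func()
{
    static std::vector<double> my_vec(5, 0.0);   // constructed once per program run
    for (int i = 0; i < my_vec.size(); i++) {
        my_vec[i] = i;
        // do something
    }
}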
The main con here is that the vector will occupy memory throughout the program's execution. Unlike Example B*, it will not release its memory when the block containing the while loop ends. Using this approach in too many places would lead to memory bloat. So while it is reasonable to profile this approach, you might want to consider other options. (Of course if the profiler calls this out as the bottleneck, dwarfing all others, the cost is small enough to pay.)
Fix the size
My personal choice for which optimization to try here would be to start from the baseline and switch the vector to std::array<double, 10>. My main motivation is that the needed size won't be more than 10. Also relevant is that the construction of a double is trivial. Construction of the array should be on par with declaring 10 variables of type double, which I would expect to be negligible. So no need for fancy optimization tricks. Just let the compiler do its thing.
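A sketch of this variant, keeping the baseline's loop (requires <array>):
#include <array>

void func()
{
    std::array<double, 10> my_arr{};   // stack storage, value-initialized to 0.0
    for (std::size_t i = 0; i < my_arr.size(); i++) {
        my_arr[i] = i;
        // do something
    }
}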
The expected possible benefit of this approach is that a vector allocates space on the heap for its storage, which has an overhead cost. The local array would not have this cost. However, this is only a possible benefit. A vector implementation might already take advantage of this performance consideration for small vectors. (Maybe it does not use the heap until the capacity needs to exceed some magic number, perhaps more than 10.) I would refer you back to earlier when I mentioned "super-clever" and "compiler was already doing it".
I'd run this through the profiler. If there's no benefit, there is likely no benefit from the other approaches. Give them a try, sure, since they're easy enough, but it would probably be a better use of your time to look at other aspects to optimize.

Is it expensive to compute vector size in for loops, each iteration?

Does the C++ compiler take care of cases like the following, where buildings is a vector:
for (int i = 0; i < buildings.size(); i++) {}
That is, does it notice whether buildings is modified in the loop, and based on that avoid evaluating size() each iteration? Or should I do this myself? Not that pretty, but:
int n = buildings.size();
for (int i = 0; i < n; i++) {}
buildings.size() will likely be inlined by the compiler to directly access the private size field on the vector<T> class. So you shouldn't separate the call to size. This kind of micro-optimization is something you don't want to worry about anyway (unless you're in some really tight loop identified as a bottleneck by profiling).
Don't decide whether to go for one or the other by thinking in terms of performance; your compiler may or may not inline the call - and std::vector::size() has constant complexity, too.
What you should really consider is correctness, because the two versions will behave very differently if you add or remove elements while iterating.
If you don't modify the vector in any way in the loop, stick with the former version to avoid a little bit of state (the n variable).
If the compiler can determine that buildings isn't mutated within the loop (for example, if it's a simple loop with no function calls that could have side effects), it will probably optimize the computation away. But computing the size of a vector is a single subtraction anyway, which should be pretty cheap as well.
Write the code in the obvious way (size inside the loop) and only if profiling shows you that it's too slow should you consider an alternative mechanism.
I write loops like this:
for (int i = 0, maxI = buildings.size(); i < maxI; ++i)
This takes care of many issues at once: it suggests the max is fixed up front, there's no more thinking about lost performance, and it consolidates the types. If the evaluation is left in the middle expression, it suggests the loop changes the collection size.
Too bad the language does not allow sensible use of const here, else it would be const maxI.
OTOH, for more and more cases I would rather use some algorithm; a lambda even allows making it look almost like traditional code.
Assuming size() is an inline function for the base template, one can also assume that it adds very little overhead. It is far different from, say, strlen() in C, which can have major overhead.
It is possible that it's still faster to use int n = buildings.size();, because the compiler can see that n does not change inside the loop, so it can load n into a register rather than fetch the vector's size indirectly. But the difference is very marginal, and only really tight, highly optimized loops would need this treatment (and only after analyzing and finding that it's a benefit), since things do not ALWAYS work as well as you expect in that regard.
Only start manually optimizing stuff like that if it's really a performance problem; then measure the difference. Otherwise you'll end up with lots of unmaintainable, ugly code that's harder to debug and less productive to work with. Most leading compilers will probably optimize it away if the size doesn't change within the loop.
But even if it's not optimized away, then it will probably be inlined (since templates are inlined by default) and cost almost nothing.

performance of std::vector c++ size() inside loop in member function

Similar question, but less specific:
Performance issue for vector::size() in a loop
Suppose we're in a member function like:
void Object::DoStuff() {
    for (int k = 0; k < (int)this->m_Array.size(); k++)
    {
        this->SomeNotConstFunction();
        this->ConstFunction();
        double x = SomeExternalFunction(k);
    }
}
1) I'm willing to believe that if only "SomeExternalFunction" is called, the compiler will optimize and not redundantly call size() on m_Array ... is this the case?
2) Wouldn't you almost certainly get a boost in speed from doing
int N = m_Array.size();
for( int k = 0; k < N; k++ ) { ... }
if you're calling some member function that is not const ?
Edit: Not sure where these down-votes and snide comments about micro-optimization are coming from; perhaps I can clarify:
Firstly, it's not about optimizing per se, but about understanding what the compiler will and will not fix. Usually I use the size() function, but I ask now because here the array might have millions of data points.
Secondly, the situation is that "SomeNotConstFunction" might have a very rare chance of changing the size of the array, or its ability to do so might depend on some other variable being toggled. So I'm asking at what point the compiler will fail, and what exactly the time cost incurred by size() is when the array really might change, despite human-known reasons that it won't.
Third, the operations in the loop are pretty trivial; there are just millions of them, but they are embarrassingly parallel. I would hope that hoisting the value out of the loop would let the compiler vectorize some of the work.
Do not get into the habit of doing things like that.
The cases where the optimization you make in (2) is:
- safe to do,
- has a noticeable difference, and
- something your compiler cannot figure out on its own
are few and far between.
If it were just the latter two points, I would just advise that you're worrying about something unimportant. However, that first point is the real killer: you do not want to get in the habit of giving yourself extra chances to make mistakes. It's far, far easier to accelerate slow, correct code than it is to debug fast, buggy code.
Now, that said, I'll try answering your question. The definitions of the functions SomeNotConstFunction and ConstFunction are (presumably) in the same translation unit. So if these functions really do not modify the vector, the compiler can figure it out, and it will only "call" size once.
However, the compiler does not have access to the definition of SomeExternalFunction, and so must assume that every call to that function has the potential of modifying your vector. The presence of that function in your loop guarantees that size is "called" every time.
I put "called" in quotes, however, because it is such a trivial function that it almost certainly gets inlined. Also, the function is ridiculously cheap -- two memory lookups (both nearly guaranteed to be cache hits), and either a subtraction and a right shift, or maybe even a specialized single instruction that does both.
Even if SomeExternalFunction does absolutely nothing, it's quite possible that "calling" size every time would still only be a small-to-negligible fraction of the running time of your loop.
Edit: In response to the edit....
what exactly is the time cost incurred by size() when the array really might change
The difference in the times you see when you time the two different versions of code. If you're doing very low level optimizations like that, you can't get answers through "pure reason" -- you must empirically test the results.
And if you really are doing such low level optimizations (and you can guarantee that the vector won't resize), you should probably be more worried about the fact the compiler doesn't know the base pointer of the array is constant, rather than it not knowing the size is constant.
If SomeExternalFunction really is external to the compilation unit, then you have pretty much no chance of the compiler vectorizing the loop, no matter what you do. (I suppose it might be possible at link time, though....) And it's also unlikely to be "trivial" because it requires function call overhead -- at least if "trivial" means the same thing to you as to me. (again, I don't know how good link time optimizations are....)
If you really can guarantee that some operations will not resize the vector, you might consider refining your class's API (or at least its protected or private parts) to include functions that self-evidently won't resize the vector.
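As an illustration, a hedged sketch of such a refinement (the names here are hypothetical, not from the question):
class Object {
public:
    void DoStuff();
private:
    // A const member function makes it self-evident that it cannot
    // resize m_Array.
    double ComputeValue(std::size_t k) const;
    std::vector<double> m_Array;
};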
The size method will typically be inlined by the compiler, so there will be a minimal performance hit, though there will usually be some.
On the other hand, this is typically only true for vectors. If you are using a std::list, for instance, the size method could be quite expensive before C++11 (which requires size() to be constant time for all standard containers).
If you are concerned with performance, you should get in the habit of using iterators and/or algorithms like std::for_each, rather than a size-based for loop.
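For example, a sketch using the question's m_Array (the loop body is a placeholder):
#include <algorithm>

std::for_each(m_Array.begin(), m_Array.end(),
              [](double& element) {
                  // do something with element; size() is never consulted
              });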
The micro optimization remarks are probably because the two most common implementations of vector::size() are
return _Size;
and
return _End - _Begin;
Hoisting them out of the loop will probably not noticeably improve the performance.
And if it is obvious to everyone that it can be done, the compiler is also likely to notice. With modern compilers, and if SomeExternalFunction is statically linked, the compiler is usually able to see if the call might affect the vector's size.
Trust your compiler!
In MSVC 2015, it does a return (this->_Mylast() - this->_Myfirst()). I can't tell you offhand just how the optimizer might deal with this; but unless your array is const, the optimizer must allow for the possibility that you may modify its number of elements, making it hard to optimize out. In Qt, it equates to an inline function that does a return d->size; (that is, for a QVector).
I've taken to doing it in one particular project I'm working on, but it is for performance-oriented code. Unless you are interested in deeply optimizing something, I wouldn't bother. It probably is pretty fast any of these ways. In Qt, it is at most one pointer dereferencing, and is more typing. It looks like it could make a difference in MSVC.
I think nobody has offered a definitive answer so far; but if you really want to test it, have the compiler emit assembly source code, and inspect it both ways. I wouldn't be surprised to find that there's no difference when highly optimized. Let's not forget, though, that unoptimized performance during debug is also a factor that might be taken into consideration, when a lot of e.g. number crunching is involved.
I think the OP's original question really should show how the array is declared.

Taking vector size() out of loop condition to optimize

fibs is a std::vector. Using g++, I was advised to take fibs.size() out of the loop, to save computing it each time (because the vector could change)
int sum = 0;
for (int i = 0; i < fibs.size(); ++i) {
    if (fibs[i] % 2 == 0) {
        sum += fibs[i];
    }
}
Surely there is some dataflow analysis in the compiler that would tell us that fibs won't change size. Is there? Or should I set some other variable to be fibs.size() and use that in the loop condition?
The compiler will likely determine that it won't change. Even if it did, size() for vectors is an O(1) operation.
Unless you know it's a problem, leave it as it is. First make it correct, then make it clear, then make it fast (if necessary).
vector::size is extremely fast anyway. It seems to me likely that the compiler will optimise this case, since it is fairly obvious that the vector is not modified and all the functions called will be inlined so the compiler can tell.
You could always look at the generated code to see if this has happened.
If you do want to change it, you need to be able to measure the time it takes before and after. That's a fair amount of work - you probably have better things to do.
size() is a constant-time operation, so there's no penalty for calling it this way. If you are concerned about performance and want a more general way to go through the collection, use iterators:
int sum = 0;
for (auto it = fibs.cbegin(); it != fibs.cend(); ++it) {
    if ((*it) % 2 == 0) {
        sum += *it;
    }
}
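Since C++11 you can also sidestep the question entirely with a range-based for loop (assuming fibs holds ints, as the original loop suggests):
int sum = 0;
for (int f : fibs) {
    if (f % 2 == 0) {
        sum += f;
    }
}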
I think you are missing another, more important point here: Is this loop causing a slow-down of your application? If you do not know for sure (i.e. if you haven't profiled), you risk focusing on the wrong parts of your application.
You already have to keep thousands of things in your head when writing programs (coding guidelines, the architecture (bigger picture) of your application, variable names, function names, class names, readability, etc.), so you can ignore the speed of the code during your initial implementation at least 95% of the time. This will allow you to focus on things which are more important and far more valuable (like correctness, readability and maintainability).
In your example the compiler can easily analyze the flow and determine that it doesn't change. In more complicated code it cannot:
for (int i = 0; i < fibs.size(); ++i) {
    complicated_function();
}
complicated_function can change fibs. And since the above code involves a function call, the compiler cannot keep fibs.size() in a register, so the memory access cannot be eliminated.
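If you know, for reasons the compiler cannot see, that complicated_function never resizes fibs, you can hoist the size yourself; a sketch:
const std::size_t n = fibs.size();   // safe only if complicated_function() never resizes fibs
for (std::size_t i = 0; i < n; ++i) {
    complicated_function();
}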

Shall I optimize or let the compiler do that?

What is the preferred method of writing loops with regard to efficiency:
Way a)
/* here I'm hoping that the compiler will optimize this
   code and won't call size() every time it iterates through this loop */
for (unsigned i = firstString.size(); i < anotherString.size(); ++i)
{
//do something
}
or maybe should I do it this way:
Way b)
unsigned first = firstString.size();
unsigned second = anotherString.size();
and now I can write:
for (unsigned i = first; i < second; ++i)
{
//do something
}
The second way seems to me like the worse option for two reasons: scope pollution and verbosity. But it has the advantage of making sure that size() is invoked only once for each object.
Looking forward to your answers.
I usually write this code as:
/* i and size are local to the loop */
for (size_t i = firstString.size(), size = anotherString.size(); i < size; ++i) {
    // do something
}
This way I do not pollute the parent scope and avoid calling anotherString.size() for each loop iteration.
It is especially useful with iterators:
for (some_generic_type<T>::forward_iterator it = container.begin(), end = container.end();
     it != end; ++it) {
    // do something with *it
}
Since C++ 11 the code can be shortened even more by writing a range-based for loop:
for (const auto& item : container) {
    // do something with item
}
or
for (auto item : container) {
    // do something with item
}
In general, let the compiler do it. Focus on the algorithmic complexity of what you're doing rather than micro-optimizations.
However, note that your two examples are not semantically identical: if the body of the loop changes the size of the second string, the two loops will not iterate the same number of times. For that reason, the compiler might not be able to perform the specific optimization you're talking about.
I would first use the first version, simply because it looks cleaner and easier to type. Then you can profile it to see if anything needs to be more optimized.
But I highly doubt that the first version will cause a noticeable performance drop. If the container implements size() like this:
inline size_t size() const
{
return _internal_data_member_representing_size;
}
then the compiler should be able to inline the function, eliding the function call. My compiler's implementation of the standard containers all do this.
How will a good compiler optimize your code? Not at all, as it can't be sure size() has no side effects. If size() had any side effects your code relied on, they'd be gone after such a compiler optimization.
This kind of optimization really isn't safe from a compiler's perspective; you need to do it on your own. Doing it on your own doesn't mean you need to introduce two additional local variables. Depending on your implementation, size() might be an O(1) operation. If size() is also declared inline, you'll also spare the function call, making the call to size() as good as a local member access.
Don't pre-optimize your code. If you have a performance problem, use a profiler to find it, otherwise you are wasting development time. Just write the simplest / cleanest code that you can.
This is one of those things that you should test yourself. Run the loops 10,000 or even 100,000 iterations and see what difference, if any, exists.
That should tell you everything you want to know.
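A rough timing sketch along those lines using <chrono> (the string sizes and the loop body are placeholders; results will vary with compiler and flags):
#include <chrono>
#include <iostream>
#include <string>

int main()
{
    std::string firstString(10, 'a'), anotherString(100000, 'b');
    volatile unsigned sink = 0;   // keeps the loop from being optimized away entirely
    auto t0 = std::chrono::steady_clock::now();
    for (unsigned i = firstString.size(); i < anotherString.size(); ++i)
        sink = sink + i;
    auto t1 = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
              << " microseconds\n";
}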
My recommendation is to let inconsequential optimizations creep into your style. What I mean by this is that if you learn a more optimal way of doing something, and you can't see any disadvantages to it (as far as maintainability, readability, etc.), then you might as well adopt it.
But don't become obsessed. Optimizations that sacrifice maintainability should be saved for very small sections of code that you have measured and KNOW will have a major impact on your application. When you do decide to optimize, remember that picking the right algorithm for the job is often far more important than tight code.
I'm hoping that compiler will optimize this...
You shouldn't. Anything involving
- a call to an unknown function, or
- a call to a method that might be overridden
is hard for a C++ compiler to optimize. You might get lucky, but you can't count on it.
Nevertheless, because you find the first version simpler and easier to read and understand, you should write the code exactly the way it is shown in your simple example, with the calls to size() in the loop. You should consider the second version, where you have extra variables that pull the common call out of the loop, only if your application is too slow and if you have measurements showing that this loop is a bottleneck.
Here's how I look at it. Performance and style are both important, and you have to choose between the two.
You can try it out and see if there is a performance hit. If there is an unacceptable performance hit, then choose the second option, otherwise feel free to choose style.
You shouldn't optimize your code, unless you have a proof (obtained via profiler) that this part of code is bottleneck. Needless code optimization will only waste your time, it won't improve anything.
You can waste hours trying to improve one loop, only to get 0.001% performance increase.
If you're worried about performance - use profilers.
There's nothing really wrong with way (b) if you just want to write something that will probably be no worse than way (a), and possibly faster. It also makes it clearer that you know that the string's size will remain constant.
The compiler may or may not spot that size will remain constant; just in case, you might as well perform this optimization yourself. I'd certainly do this if I was suspicious that the code I was writing was going to be run a lot, even if I wasn't sure that it would be a big deal. It's very straightforward to do, it takes no more than 10 extra seconds thinking about it, it's very unlikely to slow things down, and, if nothing else, will almost certainly make the unoptimized build run a bit more quickly.
(Also the first variable in style (b) is unnecessary; the code for the init expression is run only once.)
How much percent of time is spent in for as opposed to // do something? (Don't guess - sample it.) If it is < 10% you probably have bigger issues elsewhere.
Everybody says "Compilers are so smart these days."
Well they're no smarter than the poor coders who write them.
You need to be smart too. Maybe the compiler can optimize it but why tempt it not to?
For the "std::size_t size()const" member function which not only is O(1) but is also declared "const" and so can be automatically pulled out of the loop by the compiler, it probably doesn't matter. That said, I wouldn't count on the compiler to remove it from the loop, and I think it is a good habit to get into to factor out the calls within the loop for cases where the function isn't constant or O(1). In addition, I think assigning the values to a variable leads to the code being more readable. I would not suggest, though, that you make any premature optimizations if it will result in the code being harder to read. Again, though, I think the following code is more readable, since there is less to read within the loop:
std::size_t firststrlen = firststr.size();
std::size_t secondstrlen = secondstr.size();
for (std::size_t i = firststrlen; i < secondstrlen; i++) {
    // ...
}
Also, I should point out that you should use "std::size_t" instead of "unsigned", as the type of "std::size_t" can vary from one platform to another, and using "unsigned" can lead to truncations and errors on platforms where "std::size_t" is "unsigned long" instead of "unsigned int".