Is one loop better than several of them? - C++

I've been working on my implementation of BigInteger, and when I was contemplating the solution for addition, I decided to go with the cleaner one: add the corresponding digits in one function and "normalize" the carries later. Like in the following example:
999 999 + 111 111
= 10 10 10 10 10 10 (value after addition)
= 1 111 110 (value after normalization)
But since then I've been wondering how this affects the efficiency of the program. Are several loops, each doing a small piece of the work, generally going to run faster than one big loop doing everything?
For example, using
int a[7]={0,9,9,9,9,9,9};
int b[7]={0,1,1,1,1,1,1};
int c[7];
Is this,
for (int q = 0; q < 7; ++q) {
    c[q] = a[q] + b[q];
    if (c[q] > 9) {
        c[q - 1] += c[q] / 10;  // += rather than =, so an earlier carry isn't overwritten
        c[q] %= 10;
    }
}
better than this
for (int q = 0; q < 7; ++q) {
    c[q] = a[q] + b[q];
}
for (int q = 0; q < 7; ++q) {
    if (c[q] > 9) {
        c[q - 1] += c[q] / 10;  // same += fix as above
        c[q] %= 10;
    }
}
And what about bigger loops that have many more things to do on each iteration?
UPDATE: As someone suggested, I measured the running time of both examples. For two loops the average time (over 100 million elements) was ~4.85 s; for one loop it was ~3.72 s.

It is very difficult to tell which one of the two approaches will be more efficient. It probably varies among C++ compiler vendors and within a single vendor, from version to version of their compiler.
The bottom line is:
You will never know unless you benchmark.
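For what it's worth, a minimal benchmarking sketch with std::chrono might look like the following (a sketch only: n is 10 million here rather than the 100 million from the update, to keep memory modest, and both carry loops start at q = 1 so the carry never writes before c[0]):
#include <chrono>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 10000000;  // scale up to taste; 100M ints x 3 arrays is ~1.2 GB
    std::vector<int> a(n, 9), b(n, 1), c(n);

    // Variant 1: one combined loop.
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t q = 1; q < n; ++q) {
        c[q] = a[q] + b[q];
        if (c[q] > 9) { c[q - 1] += c[q] / 10; c[q] %= 10; }
    }
    auto t1 = std::chrono::steady_clock::now();

    // Variant 2: two separate loops.
    for (std::size_t q = 0; q < n; ++q)
        c[q] = a[q] + b[q];
    for (std::size_t q = 1; q < n; ++q)
        if (c[q] > 9) { c[q - 1] += c[q] / 10; c[q] %= 10; }
    auto t2 = std::chrono::steady_clock::now();

    std::cout << "check: " << c[n / 2] << '\n';  // keep the work observable
    std::chrono::duration<double> one = t1 - t0, two = t2 - t1;
    std::cout << "one loop:  " << one.count() << " s\n"
              << "two loops: " << two.count() << " s\n";
}
Run it several times with optimizations enabled (e.g. -O2) and ignore the first run while the caches warm up.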
As usual, it is almost certain that it does not matter anyway; you are most probably unduly concerned about performance, as the vast majority of programmers are, in the vast majority of cases.
At the end of the day, all that matters is what is more readable and more maintainable. Code maintainability is far more important than saving clock cycles.
If you decide to follow the wise path of "what is more readable", keep in mind that different folks find different things more readable. For example, I personally hate surprises when I am reading code, so I would be rather annoyed to read the first of your two loops, which allows decimal digits to receive erroneous values outside of the 0-9 range, only to find out later that you finally remedy that with another loop.

Related

What to consider when looking at multiple methods to achieve the same result while coding?

I am currently coding in C++, creating an all-round calculator that, when finished, will be capable of handling all major and common mathematical procedures.
The current wall I am hitting comes from the fact that I am still learning the profession we call being a programmer.
I have several ways of achieving a single result. I am curious whether I should pick the method that has a clear breakdown of how it got to that point in the code, or the method that is much shorter, while not sacrificing any of the readability.
Below I have posted snippets from my class showing what I mean.
This function uses if statements to determine whether or not a common denominator is even needed, but is several lines long.
Fraction Fraction::addFraction(Fraction &AddInput)
{
    Fraction output;
    if (m_denominator != AddInput.m_denominator)
    {
        getCommonDenominator(AddInput);
        output.setWhole(m_whole + AddInput.m_whole);
        output.setNumerator((m_numerator * firstchange) + (AddInput.m_numerator * secondchange));
        output.setDenominator(commondenominator);
    }
    else
    {
        output.setWhole(m_whole + AddInput.m_whole);
        output.setNumerator(m_numerator + AddInput.m_numerator);
        output.setDenominator(m_denominator);
    }
    output.simplify();
    return output;
}
This function below gets a common denominator, repeats the steps on the numerators, then simplifies to lowest terms.
Fraction Fraction::addFraction(Fraction &AddInput)
{
    getCommonDenominator(AddInput);
    Fraction output(m_whole + AddInput.m_whole,
                    (m_numerator * firstchange) + (AddInput.m_numerator * secondchange),
                    commondenominator);
    output.simplify();
    return output;
}
Both functions have been tested and always return the accurate result. When it comes to coding standards... do we pick longer and easy to follow, or shorter and easy to understand?
Your first priority with your code should be that it's correct.
Your second priority with code should be: "If someone who's never seen this before is going to make a tiny change, which one is he less likely to break?"
There's actually a lot that goes into this. How difficult is it to understand at a high level? How abstracted out are arcane details? Are there any surprises? What quirks do you have to know about? Are there edge cases that have to be handled?
The reasons that this second priority is important are:
it's key to preventing you from writing bugs in the first place
it's easier to find bugs later
it's easier to fix bugs later
despite whatever you think, you won't remember the details in 6 months.
Both implementations appear about equally difficult in complexity per branch, but the first one has branches, so I'd lean toward the second for understandability. Details seem abstracted out in both, and if there are surprises or quirks, I don't immediately see them (but that's sort of the point: they can be easily overlooked). I don't see any special handling for edge cases, so if edge cases exist in either, comments would be good.
Unrelated to picking, but while we're on the topic of reviewing code: it's unclear how either handles fractions that have no fractional part, but that might be part of the full class documentation, which would be fine. Both code paths take AddInput by mutable reference, which is bad, and require this to be mutable as well, which is also bad. Both have methods named get*() that appear to modify state (getCommonDenominator), which is bad. The code appears to be using variables that are external (firstchange? secondchange?), which is a major strike against preventing bugs.
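To illustrate the reference and naming points, here is one way the interface could be made const-correct (a sketch under assumptions: DenominatorChange and commonDenominatorWith are hypothetical names, not part of the original class, and denominators are assumed positive):
#include <cstdlib>   // std::abs
#include <numeric>   // std::gcd (C++17)

// Hypothetical: the scale factors are returned as a value instead of
// being stored in members like firstchange/secondchange.
struct DenominatorChange {
    int firstChange, secondChange, commonDenominator;
};

class Fraction {
public:
    Fraction(int whole, int numerator, int denominator)
        : m_whole(whole), m_numerator(numerator), m_denominator(denominator) {}

    // const on both operands: addition modifies neither fraction.
    Fraction addFraction(const Fraction& other) const {
        DenominatorChange d = commonDenominatorWith(other);  // computes, never mutates
        Fraction output(m_whole + other.m_whole,
                        m_numerator * d.firstChange + other.m_numerator * d.secondChange,
                        d.commonDenominator);
        output.simplify();
        return output;
    }

private:
    // Naive common denominator: the product of the two; simplify() reduces it.
    DenominatorChange commonDenominatorWith(const Fraction& other) const {
        return { other.m_denominator, m_denominator,
                 m_denominator * other.m_denominator };
    }

    void simplify() {
        int g = std::gcd(std::abs(m_numerator), m_denominator);
        if (g > 1) { m_numerator /= g; m_denominator /= g; }
    }

    int m_whole, m_numerator, m_denominator;
};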

Is there significant performance difference between break and return in a loop at the end of a C function?

Consider the following (Obj-)C(++) code segment as an example:
// don't blame me for the 2-space indents. It's insane to type 12 spaces.
int whatever(int *foo, int bar) {  // bar added as a parameter so the snippet compiles
  for (int k = 0; k < bar; k++) { // I know it's a boring loop
    do_something(k);
    if (that(k))
      break; // or return
    do_more(k);
  }
  return 0; // a non-void function needs a return on every path
}
A friend told me that using break is not only more logical (and using return causes trouble when someone wants to add something to the function afterwards), but also yields faster code. It's said that the processor gives better predictions in this case for jmp-like instructions than for ret.
Of course I agree with him on the first point, but if there actually is some significant difference, why doesn't the compiler optimize it?
If it's insane to type 2 spaces, use a decent text editor with auto-indent. 4 space indentation is much more readable than 2 spaces.
Readability should be a cardinal value when you write C code.
Using break or return should be chosen based on context, to make your code easier to follow and understand. If not for others, you will be doing yourself a favor when, a few years from now, you are reading your own code, hunting for a spurious bug and trying to make sense of it.
No matter which option you choose, the compiler will optimize your code its own way, and different compilers, versions, or configurations will do it differently. No noticeable difference should arise from this choice, and even in the unlikely event that one did, it would not be a lasting one.
Focus on the choice of algorithm, data structures, memory allocation strategies, possibly memory layout and cache implications... These are far more important for speed and overall efficiency than local micro-optimizations.
Any compiler is capable of optimizing jumps to jumps. In practice, though, there will probably be some cleanup to do before exiting anyway. When in doubt, profile. I don’t see how this could make any significant difference.
Stylistically, and especially in C where the compiler does not clean stuff up for me when it goes out of scope, I prefer to have a single point of return, although I don’t go so far as to goto one.
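To make the two styles concrete, here is a small sketch (the function names and the search task are invented for illustration; any real cleanup would sit between the loop and the final return):
// Early-return style: leave the function as soon as the answer is known.
int find_first_even(const int *a, int n) {
  for (int k = 0; k < n; k++) {
    if (a[k] % 2 == 0)
      return k;  // exits the function immediately
  }
  return -1;
}

// Single-exit style: break out of the loop, run any cleanup, and
// return from one place at the bottom.
int find_first_even_single_exit(const int *a, int n) {
  int result = -1;
  for (int k = 0; k < n; k++) {
    if (a[k] % 2 == 0) {
      result = k;
      break;  // jump to the common exit path
    }
  }
  // cleanup (free buffers, release locks, ...) would go here
  return result;
}
With optimizations on, mainstream compilers typically generate the same code for both.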

Is adding 1 to a number repeatedly slower than adding everything at the same time in C++? [closed]

If I have a number a, would it be slower to add 1 to it b times rather than simply adding a + b?
a += b;
or
for (int i = 0; i < b; i++) {
    a += 1;
}
I realize that the second example seems kind of silly, but I have a situation where coding would actually be easier that way, and I am wondering if that would impact performance.
EDIT: Thank you for all your answers. It looks like some posters would like to know what situation I have. I am trying to write a function to shift an input character a certain number of characters over (i.e., a cipher) if it is a letter. So, I want to say that one char += the number of shifts, but I also need to account for the jumps between the lowercase and uppercase characters in the ASCII table, and also for wrapping from z back to A. So, while it is doable in another way, I thought it would be easiest to keep adding one until I get to the end of a block of letter characters, then jump to the next one and keep going.
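For reference, the wrap-around described here can also be done in constant time with modular arithmetic; a sketch, assuming contiguous letters as in ASCII and the 52-letter cycle A..Z then a..z implied by "wrapping from z back to A" (the function name is mine):
// Shift a letter n positions through the cycle A..Z, a..z, wrapping
// from 'z' back to 'A'. Non-letters pass through unchanged.
char shift_letter(char ch, int n) {
    int idx;
    if (ch >= 'A' && ch <= 'Z')
        idx = ch - 'A';           // 0..25
    else if (ch >= 'a' && ch <= 'z')
        idx = ch - 'a' + 26;      // 26..51
    else
        return ch;                // not a letter

    idx = (idx + n % 52 + 52) % 52;  // also handles negative shifts
    return idx < 26 ? static_cast<char>('A' + idx)
                    : static_cast<char>('a' + idx - 26);
}
shift_letter('z', 1) yields 'A' and shift_letter('Z', 1) yields 'a', covering both jumps without a loop.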
If your loop is really that simple, I don't see any reason why a compiler couldn't optimize it. I have no idea whether any actually would, though. If your compiler doesn't, the single addition will be much faster than the loop.
The C++ language does not specify how long either of those operations takes. Compilers are free to turn your second statement into the first, and that is a legal way to compile it.
In practice, many compilers will treat those two snippets as the same expression, assuming everything is of type int. The second, however, would be fragile, in that seemingly innocuous changes could cause massive performance degradation: small changes in type that 'should not matter', extra statements nearby, etc.
It would be extremely rare for the first to be slower than the second, but if the type of a was such that += b was a much slower operation than calling += 1 a bunch of times, it could be. For example:
#include <vector>

struct A {
    std::vector<int> v;
    void operator+=(int x) {
        // optimize for the common case:
        if (x == 1 && v.size() == v.capacity()) v.reserve(v.size() * 2);
        // grow the buffer one element at a time:
        for (int i = 0; i < x; ++i) {
            v.reserve(v.size() + 1);  // reallocates to exactly size()+1 each time
            v.resize(v.size() + 1);
        }
    }
};
then A a; int b = 100000; a+=b; would take much longer than the loop construct.
But I had to work at it.
The overhead (CPU instructions) of incrementing a variable in a loop is likely to be insignificant compared to the total number of instructions in that loop (unless the only thing you are doing in the loop is incrementing). Loop variables are likely to remain in the low levels of the CPU cache (if not in CPU registers) and are very fast to increment, as no read from RAM via the FSB is needed. Anyway, if in doubt just make a quick profile and you'll know if it makes sense to sacrifice code readability for speed.
Yes, absolutely slower. The second example is beyond silly. I highly doubt you have a situation where it would make sense to do it that way.
Let's say b is 500,000: most computers can add that in a single operation, so why do 500,000 operations (not including the loop overhead)?
If the processor has an increment instruction, the compiler will usually translate the "add one" operation into an increment instruction.
Some processors may have an optimized increment instruction to help speed up things like loops. Other processors can combine an increment operation with a load or store instruction.
There is a possibility that a small loop containing only an increment instruction could be replaced by a multiply and add. The compiler is allowed to do so, if and only if the functionality is the same.
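To make that concrete, this is the kind of rewrite the as-if rule permits (a sketch; it assumes b is non-negative, since for negative b the loop and the single addition differ):
// With optimizations on, a compiler may emit identical code for both
// functions, because their observable behavior is the same for b >= 0.
int add_by_looping(int a, int b) {
    for (int i = 0; i < b; i++)
        a += 1;        // b separate increments...
    return a;
}

int add_directly(int a, int b) {
    return a + b;      // ...collapsed into one addition
}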
This kind of optimization generally produces negligible gains. However, for large data sets and performance-critical applications, it may be necessary, and the time gained can be significant.
Edit 1:
For adding values other than 1, the compiler would emit processor instructions to use the best addition operations available.
The add operation is optimized in hardware and is a different animal than incrementing in a loop. Arithmetic Logic Units (ALUs) have been around for a long time, and the basic addition operation is heavily optimized: a single add is a lot faster than incrementing in a loop.

Algorithm: taking out every 4th item of an array

I have two huge arrays (int source[1000], dest[1000] in the code below, but with millions of elements in reality). The source array contains a series of ints of which I want to copy 3 out of every 4.
For example, if the source array is:
int source[1000] = {1,2,3,4,5,6,7,8....};
int dest[1000];
Here is my code:
for (int count_small = 0, count_large = 0; count_large < 1000; count_small += 3, count_large += 4)
{
    dest[count_small]     = source[count_large];
    dest[count_small + 1] = source[count_large + 1];
    dest[count_small + 2] = source[count_large + 2];
}
In the end, dest console output would be:
1 2 3 5 6 7 9 10 11...
But this algorithm is so slow! Is there an algorithm or an open source function that I can use / include?
Thank you :)
Edit: The actual length of my array would be about 1 million (640*480*3).
Edit 2: Processing this for loop takes about 0.98 to 2.28 seconds, while the rest of the code takes only 0.08 to 0.14 seconds, so the device spends at least 90% of its CPU time in this loop alone.
Well, the asymptotic complexity there is as good as it's going to get. You might be able to achieve slightly better performance by loading in the values as four 4-way SIMD integers, shuffling them into three 4-way SIMD integers, and writing them back out, but even that's not likely to be hugely faster.
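In case it helps, a simplified per-register variant of that shuffle idea with SSSE3 intrinsics might look like this (a sketch; the function name is mine, and it assumes n is a multiple of 4 and that dest has at least 4 bytes of slack past its last element, because every 16-byte store writes one junk int that the next store, or the slack, absorbs):
#include <tmmintrin.h>  // SSSE3: _mm_shuffle_epi8

// Copy 3 of every 4 ints from source to dest.
void copy_3_of_4(int *dest, const int *source, int n) {
    // Keep bytes 0..11 (three ints); -1 lanes zero the last four bytes.
    const __m128i mask = _mm_setr_epi8(0, 1, 2, 3, 4, 5, 6, 7,
                                       8, 9, 10, 11, -1, -1, -1, -1);
    char *out = reinterpret_cast<char *>(dest);
    for (int i = 0; i < n; i += 4) {
        __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i *>(source + i));
        _mm_storeu_si128(reinterpret_cast<__m128i *>(out), _mm_shuffle_epi8(v, mask));
        out += 12;  // advance three ints; the next store overwrites the junk int
    }
}
Whether this beats a plain loop or memcpy depends on the target; measure before committing to it.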
With that said, though, the time to process 1000 elements (Edit: or one million elements) is going to be utterly trivial. If you think this is the bottleneck in your program, you are incorrect.
Before you do much more, try profiling your application to determine whether this is the best place to spend your time. Then, if it is a hot spot, determine how fast it is now, how fast you need it to be, and what you might realistically achieve. Then test the alternatives; the overhead of threading or OpenMP might even slow it down (especially, as you have now noted, if you are on a single-core processor, in which case it won't help at all). For single threading, I would look to memcpy as per Sean's answer.
@Sneftel has also referenced other options involving SIMD integers.
One option would be to try parallel processing the loop, and see if that helps. You could try using the OpenMP standard (see Wikipedia link here), but you would have to try it for your specific situation and see if it helped. I used this recently on an AI implementation and it helped us a lot.
#pragma omp parallel for
for (...)
{
    ... do work
}
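Applied to the loop from the question, that might look like this (a sketch; each iteration touches disjoint elements, so no synchronization is needed):
#pragma omp parallel for
for (int count_large = 0; count_large < 1000; count_large += 4)
{
    int count_small = (count_large / 4) * 3;  // derive the dest index from the loop variable
    dest[count_small]     = source[count_large];
    dest[count_small + 1] = source[count_large + 1];
    dest[count_small + 2] = source[count_large + 2];
}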
Other than that, you are limited to the compiler's own optimisations.
You could also look at the threading support added in C++11, though you might be better off using pre-implemented framework tools like parallel_for (available in the Windows Concurrency Runtime through the PPL in Visual Studio, if that's what you're using) than rolling your own.
parallel_for(0, max_iterations,
    [...](int i)
    {
        ... do stuff
    }
);
Inside the for loop, you still have other options. You could try a loop that visits every element and skips every fourth one (just skip when (i+1) % 4 == 0) instead of doing 3 copies per iteration, or do block memcpy operations for groups of 3 integers as per Sean's answer. You might get slightly different compiler optimisations for some of these, but it is unlikely (memcpy is probably as fast as you'll get).
for (int i = 0, j = 0; i < 1000; i++)
{
    if ((i + 1) % 4 != 0)
    {
        dest[j] = source[i];
        j++;
    }
}
You should then develop a test rig so you can quickly performance test and decide on the best one for you. Above all, decide how much time is worth spending on this before optimising elsewhere.
You could try memcpy instead of the individual assignments:
memcpy(&dest[count_small], &source[count_large], sizeof(int) * 3);
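Dropped into the original loop, that would look like this (same indexing as the question's code; memcpy comes from <cstring>):
#include <cstring>  // memcpy

for (int count_small = 0, count_large = 0; count_large < 1000;
     count_small += 3, count_large += 4)
{
    memcpy(&dest[count_small], &source[count_large], sizeof(int) * 3);
}
For just three ints the compiler will usually expand this memcpy inline, so there is no function-call overhead.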
Is your array size only 1000? If so, how is it slow? It should be done in no time!
As long as you are creating a new array, and for a single-threaded application, this is the only way AFAIK.
However, if the datasets are huge, you could try a multi-threaded application.
Also, you could explore using a bigger data type to hold the values, so that the array size decreases... that is, if this is viable for your real-life application.
If you have an Nvidia card you can consider using CUDA. If that's not the case, you can try other parallel programming methods/environments as well.

Performance of comparisons in C++ (foo >= 0 vs. foo != 0)

I've been working on a piece of code recently where performance is very important, and essentially I have the following situation:
int len = some_very_big_number;
int counter = some_rather_small_number;

for (int i = len; i >= 0; --i) {
    while (counter > 0 && costly other stuff here) {
        /* do stuff */
        --counter;
    }
    /* do more stuff */
}
So here I have a loop that runs very often, and for a certain number of runs the while block is executed as well, until the variable counter is reduced to zero; after that the while body is no longer entered, because its first expression is false.
The question now is whether there is a difference in performance between using
counter > 0 and counter != 0?
I suspect there would be; does anyone know specifics about this?
To measure is to know.
Do you think that this will solve your problem? :D
if(x >= 0)
00CA1011 cmp dword ptr [esp],0
00CA1015 jl main+2Ch (0CA102Ch) <----
...
if(x != 0)
00CA1026 cmp dword ptr [esp],0
00CA102A je main+3Bh (0CA103Bh) <----
In programming, the following statement is the sign designating the road to Hell:
I've been working on a piece of code recently where performance is very important
Write your code in the cleanest, most easy to understand way. Period.
Once that is done, you can measure its runtime. If it takes too long, measure the bottlenecks, and speed up the biggest ones. Keep doing that until it is fast enough.
The list of projects that failed or suffered catastrophic loss due to a misguided emphasis on blind optimization is large and tragic. Don't join them.
I think you're spending time optimizing the wrong thing. "costly other stuff here", "do stuff" and "do more stuff" are more important to look at. That is where you'll make the big performance improvements, I bet.
There will be a huge difference if the counter starts with a negative number. Otherwise, on every platform I'm familiar with, there won't be a difference.
Is there a difference between counter > 0 and counter != 0? It depends on the platform.
A very common type of CPU is the Intel family found in our PCs. Both comparisons will map to a single instruction on that CPU, and I assume they will execute at the same speed. However, to be certain, you will have to run your own benchmark.
As Jim said, when in doubt, see for yourself:
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

using namespace boost::posix_time;
using namespace std;

int main()
{
    ptime Before = microsec_clock::universal_time(); // UTC now

    // do stuff here

    ptime After = microsec_clock::universal_time();  // UTC now
    time_duration delta_t = After - Before;          // how much time has passed?
    cout << delta_t.total_seconds() << endl;         // whole seconds elapsed
    cout << delta_t.fractional_seconds() << endl;    // fractional part, in microseconds
    return 0;
}
Here's a pretty nifty way of measuring time. Hope that helps.
OK, you can measure this, sure. However, these sorts of comparisons are so fast that you are probably going to see more variation from processor swapping and scheduling than from this single line of code.
This smells of unnecessary, and premature, optimization. Write your program, optimize what you see. If you need more, profile, and then go from there.
I would add that the overall performance of this code on modern CPUs will be dominated not by the comparison instruction but by whether the comparison is well predicted, since any mispredict wastes far more cycles than any integral operation.
As such, loop unrolling will probably be the biggest winner. But measure, measure, measure.
Thinking that the type of comparison is going to make a difference, without knowing it, is the definition of guessing.
Don't guess.
In general, they should be equivalent (both are usually implemented in single-cycle instructions/micro-ops). Your compiler may do some strange special-case optimization that is difficult to reason about from the source level, which may make either one slightly faster. Also, equality testing is more energy-efficient than inequality testing (>), though the system-level effect is so small as to not merit discussion.
There may be no difference. You could try examining the assembly output for each.
That being said, the only way to tell if any difference is significant is to try it both ways and measure. I'd bet that the change makes no difference whatsoever with optimizations on.
Assuming you are developing for the x86 architecture, when you look at the assembly output it will come down to jns vs. jne. jns checks the sign flag and jne checks the zero flag. Both operations should, as far as I know, be equally costly.
Clearly the solution is to use the correct data type.
Make counter an unsigned int. Then it can't be less than zero. Your compiler will obviously know this and be forced to choose the optimal solution.
Or you could just measure it.
You could also think about how it would be implemented...(here we go on a tangent)...
less than zero: the sign bit would be set, so need to check 1 bit
equal to zero : the whole value would be zero, so need to check all the bits
Of course, computers are funny things, and it may take longer to check a single bit than the whole value (however many bytes it is on your platform).
You could just measure it...
And you could find out that one is more optimal than the other (under the conditions you measured). But your program will still run like a dog, because you spent all your time optimising the wrong part of your code.
The best solution is to use what many large software companies do: blame the hardware for not running fast enough and encourage your customer to upgrade their equipment (which is clearly inferior, since your product doesn't run fast enough).
</rant>
I stumbled across this question just now, 3 years after it was asked, so I am not sure how useful the answer will still be... Still, I am surprised not to see clearly stated that answering your question requires knowing two, and only two, things:
which processor you target
which compiler you work with
To the first point: each processor has different instructions for tests. On one given processor, two similar comparisons may turn out to take a different number of cycles. For example, you may have a 1-cycle instruction to do a gt (>), eq (==), or le (<=), but no 1-cycle instruction for other comparisons like ge (>=). Following a test, you may execute conditional instructions or, more often, as in your code example, take a jump. There again, jumps cost a variable number of cycles on most high-end processors, depending on whether the conditional jump is taken or not taken, predicted or not predicted. When you write code in assembly and your code is time critical, you can actually spend quite a bit of time figuring out how best to arrange your code to minimize the overall cycle count, and you may end up with a solution that has to be tuned based on the number of times a given comparison returns true or false.
Which leads me to the second point: compilers, like human coders, try to arrange the code to take into account the instructions available and their latencies. Their job is harder, because some facts that a human writing assembly might rely on, like "counter is small", are hard (though not impossible) for the compiler to establish. For trivial cases like a loop counter, most modern compilers can at least recognize that the counter will always be positive and that != will behave the same as >, and generate the best code accordingly. But that, as many have mentioned in the posts, you will only know if you either run measurements or inspect your assembly code and convince yourself that this is the best you could do in assembly. And when you upgrade to a new compiler, you may then get a different answer.