Using one loop vs two loops - c++

I was reading this blog post: https://developerinsider.co/why-is-one-loop-so-much-slower-than-two-loops/ and decided to check it out using C++ and Xcode. I wrote the simple program below, and when I ran it I was surprised by the result: the second function was actually slower than the first, contrary to what the article states. Can anyone help me figure out why this is the case?
#include <iostream>
#include <vector>
#include <chrono>
using namespace std::chrono;

void function1() {
    const int n = 100000;
    int a1[n], b1[n], c1[n], d1[n];
    for (int j = 0; j < n; j++) {
        a1[j] = 0;
        b1[j] = 0;
        c1[j] = 0;
        d1[j] = 0;
    }
    auto start = high_resolution_clock::now();
    for (int j = 0; j < n; j++) {
        a1[j] += b1[j];
        c1[j] += d1[j];
    }
    auto stop = high_resolution_clock::now();
    auto duration = duration_cast<microseconds>(stop - start);
    std::cout << duration.count() << " Microseconds." << std::endl;
}

void function2() {
    const int n = 100000;
    int a1[n], b1[n], c1[n], d1[n];
    for (int j = 0; j < n; j++) {
        a1[j] = 0;
        b1[j] = 0;
        c1[j] = 0;
        d1[j] = 0;
    }
    auto start = high_resolution_clock::now();
    for (int j = 0; j < n; j++) {
        a1[j] += b1[j];
    }
    for (int j = 0; j < n; j++) {
        c1[j] += d1[j];
    }
    auto stop = high_resolution_clock::now();
    auto duration = duration_cast<microseconds>(stop - start);
    std::cout << duration.count() << " Microseconds." << std::endl;
}

int main(int argc, const char * argv[]) {
    function1();
    function2();
    return 0;
}

TL;DR: The loops are basically the same, and if you are seeing differences, then your measurement is wrong. Performance measurement and, more importantly, reasoning about performance require a lot of computer knowledge, some scientific rigor, and much engineering acumen. Now for the long version...
Unfortunately, there is some very inaccurate information in the article to which you've linked, as well as in the answers and some comments here.
Let's start with the article. There won't be any disk caching that has any effect on the performance of these functions. It is true that virtual memory is paged to disk when demand on physical memory exceeds what's available, but that's not a factor you have to consider for a program that touches 1.6MB of memory (4 arrays * 4 bytes * 100K elements).
And if paging did come into play, the performance difference wouldn't exactly be subtle either. If these arrays were paged to disk and back, the performance difference would be on the order of 1000x for the fastest disks, not 10% or 100%.
Paging and page faults and their effect on performance are neither trivial nor intuitive. You need to read about them, and experiment with them seriously. What little information that article has is completely inaccurate to the point of being misleading.
The second issue is your profiling strategy and the micro-benchmark itself. Clearly, with such a simple operation on the data (an add), the bottleneck will be memory bandwidth itself (maybe instruction-retire limits or something like that, with such a simple loop). And since you only read memory linearly, and use everything you read, whether it's in 4 interleaved streams or 2 you are making use of all the bandwidth that is available.
However, if you call your function1 or function2 in a loop, you will be measuring the bandwidth of different parts of the memory hierarchy depending on N, from L1 all the way to L3 and main memory. (You should know the size of all levels of cache on your machine, and how they work.) This is obvious if you know how CPU caches work, and really mystifying otherwise. Do you want to know how fast this is when you do it the first time, when the arrays are cold, or do you want to measure the hot access?
Is your real use case copying the same mid-sized array over and over again?
If not, what is it? What are you benchmarking? Are you trying to measure something or just experimenting?
Shouldn't you be measuring the fastest run through a loop, rather than the average, since the average can be massively affected by a (basically random) context switch or an interrupt?
Have you made sure you are using the correct compiler switches? Have you looked at the generated assembly code to make sure the compiler is not adding debug checks and whatnot, and is not optimizing away things that it shouldn't? (After all, you are just executing useless loops, and an optimizing compiler wants nothing more than to avoid generating code that is not needed.)
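To make those two points concrete, here is a minimal sketch (not your code; the names g_sink and best_us are made up) of how you might repeat the measurement, keep the compiler from deleting the "useless" loops, and report the minimum instead of a single run. Real benchmark libraries such as Google Benchmark do this more carefully:
#include <chrono>
#include <iostream>
#include <vector>
using namespace std::chrono;

// Writing through a volatile forces the stores to be observable, so the compiler
// cannot delete the loops whose results are never otherwise read.
volatile int g_sink;

int main() {
    const int n = 100000;
    std::vector<int> a(n, 1), b(n, 2), c(n, 3), d(n, 4);

    long long best_us = -1;
    for (int run = 0; run < 100; ++run) {                  // repeat the whole measurement
        auto start = high_resolution_clock::now();
        for (int j = 0; j < n; ++j) {
            a[j] += b[j];
            c[j] += d[j];
        }
        auto stop = high_resolution_clock::now();
        g_sink = a[n / 2] + c[n / 2];                      // consume the results

        auto us = duration_cast<microseconds>(stop - start).count();
        if (best_us < 0 || us < best_us) best_us = us;     // keep the fastest run
    }
    std::cout << "best of 100 runs: " << best_us << " microseconds" << std::endl;
}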
Have you looked at the theoretical memory/cache bandwidth numbers for your hardware? Your specific CPU and RAM combination will have theoretical limits. And be it 5, 50, or 500 GiB/s, it will give you an upper bound on how much data you can move around and work with. The same goes for the number of execution units, the IPC of your CPU, and a few dozen other numbers that will affect the performance of this kind of micro-benchmark.
If you are reading 4 integers (4 bytes each, from a, b, c, and d) and then doing two adds and writing the two results back, and doing it 100'000 times, then you are - roughly - looking at 2.4MB of memory read and write. If you do it 10 times in 300 micro-seconds, then your program's memory (well, store buffer/L1) throughput is about 80 GB/s. Is that low? Is that high? Do you know? (You should have a rough idea.)
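For concreteness, here is the arithmetic behind that estimate written out (these are the rough numbers from the paragraph above, not measurements):
#include <iostream>

int main() {
    const double bytes_per_iteration = 4 * 4 + 2 * 4;                  // 4 int reads + 2 int writes = 24 bytes
    const double bytes_per_call      = bytes_per_iteration * 100000;   // ~2.4 MB per call
    const double total_bytes         = bytes_per_call * 10;            // 10 calls
    const double seconds             = 300e-6;                         // 300 microseconds
    std::cout << total_bytes / seconds / 1e9 << " GB/s" << std::endl;  // ~80 GB/s
}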
And let me tell you that the other two answers here at the time of this writing (namely this and this) do not make sense. I can't make heads or tails of the first one, and the second one is almost completely wrong (conditional branches in a for loop that runs 100'000 times are bad? allocating an additional iterator variable is costly? cold access to an array on the stack vs. on the heap has "serious performance implications"?).
And finally, as written, the two functions have very similar performance. It is really hard to separate the two, and unless you can measure a real difference in a real use case, I'd say write whichever one makes you happier.
If you really, really want a theoretical difference between them, I'd say the one with two separate loops is very slightly better, because it is usually not a good idea to interleave access to unrelated data.

This has nothing to do with caching or instruction efficiency. Simple iterations over long vectors are purely a matter of bandwidth. (Google: stream benchmark.) And modern CPUs have enough bandwidth to satisfy, if not all of their cores, then at least a good number of them.
So if you combine the two loops and execute them on a single core, there is probably enough bandwidth for all the loads and stores at the rate that memory can sustain. But if you use two loops, each loop leaves bandwidth unused, and the total runtime will be a little less than double.
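If you want a feel for the bandwidth numbers being talked about here, a crude sketch in the spirit of the STREAM "add" kernel looks like the following (the array size, the single un-warmed run, and the simple byte counting are all assumptions of mine; the real STREAM benchmark is more careful):
#include <chrono>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // Arrays sized well above a typical last-level cache, so this measures DRAM, not cache.
    const std::size_t n = 1 << 23;                        // 8M doubles, 64 MB per array
    std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);

    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        c[i] = a[i] + b[i];                               // STREAM-style add: 2 reads + 1 write
    auto stop = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(stop - start).count();
    double bytes   = 3.0 * n * sizeof(double);            // counting the write as one transfer
    std::cout << bytes / seconds / 1e9 << " GB/s (single run, no warm-up)" << std::endl;
    return c[n / 2] > 0 ? 0 : 1;                          // use the result so it is not optimized away
}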

The reason the second version is faster in your case (I do not think this holds on every machine) is better CPU caching, as long as your CPU has enough cache to hold the arrays, the data your OS requires, and so on. Once it doesn't, the second function will probably be much slower than the first from a performance standpoint.
I doubt the two-loop code will give better performance when enough other programs are running as well, because the second function is obviously less efficient than the first, and once enough other data is competing for the cache, the performance lead gained through caching is eliminated.

I'll just chime in here with a little something to keep in mind when looking into performance: unless you are writing embedded software for a real-time device, the performance of low-level code like this should not be a concern.
In 99.9% of all other cases, they will be fast enough.

Related

Time measurement repeatedly goes wrong in specific places

I need to write a program that measures the performance of certain data structures, but I can't get reliable results. For example, when I measured performance 8 times for the same structure size, every other result was different (for example: 15ms, 9ms, 15ms, 9ms, 15ms, ...), although the measurements weren't dependent on each other (for every measurement I generated new data). I tried to extract the problem, and here is what I have:
while (true) {
    auto start = high_resolution_clock::now();
    for (int j = 0; j < 500; j++)
        ;
    auto end = high_resolution_clock::now();
    cout << duration<double, milli>(end - start).count() << " ";
    _getch();
}
What happens when I run this code is that in the first iteration of the loop the time is significantly higher than in the next runs. It's always higher in the first run, but from time to time in other measurements as well.
Example output: 0.006842 0.002566 0.002566 0.002138 0.002993 0.002138 0.002139 ...
And that's the behaviour every time I start the program.
Here are some things I tried:
It does matter whether I compile the Release or the Debug version: measurements are still faulty, but in different places.
I turned off code optimization.
I tried using different clocks.
And here is something I think is quite important: when my Add function wasn't empty, the problem depended on the data size. For example, the program worked well for most data sizes, but for, say, an element count of 7500 the measurements were drastically different.
I just deleted part of the code after the segment I posted here, and guess what: the first measurement is no longer faulty. I have no idea what's happening here.
I would be glad if someone could explain to me what the possible cause of all this might be.
In that code, it's likely that you're just seeing the effect of the instruction cache or the micro-op cache. The first time the test is run, more instructions have to be fetched and decoded; on subsequent runs the results of that work are available in the caches. As for the alternating times you were seeing with some other code, that could be fluctuations in the branch prediction buffers, or something else entirely.
There are too many complex processes involved in execution on modern CPUs to expect a normal sequence of instructions to execute in a fixed amount of time. While it's possible to measure, or at least account for, these externalities when looking at individual instructions, for nontrivial code you basically have to accept empirical measurements, including their variance.
Depending on what kind of operating system you're on, for durations this short, the scheduler can cause huge differences. If your thread is preempted, then you have the idle duration in your time. There are also many things that happen that you don't see: caches, pages, allocation. Modern systems are complex.
You're better off making the whole benchmark bigger, doing multiple runs of each thing you're testing, using something like ministat from FreeBSD to compare the runs of the same test, and then comparing the ministat output for the different things you're comparing.
To do this effectively, your benchmark should try to use the same amount of memory as the real workload, so that your memory access is part of the benchmark.
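As a concrete (assumed) setup for that ministat workflow: have the program print one duration per line, run it once per variant redirecting the output to a file, and then compare the files with ministat. A minimal sketch, with std::accumulate standing in for whatever you are really measuring:
#include <chrono>
#include <iostream>
#include <numeric>
#include <vector>

volatile long long sink;    // keeps the measured work from being optimized away

int main() {
    std::vector<int> data(100000, 1);
    for (int run = 0; run < 50; ++run) {                  // many samples per invocation
        auto start = std::chrono::steady_clock::now();
        sink = std::accumulate(data.begin(), data.end(), 0LL);   // the code under test
        auto end = std::chrono::steady_clock::now();
        // One number per line is the format ministat expects.
        std::cout << std::chrono::duration<double, std::milli>(end - start).count() << "\n";
    }
}
Run it as ./bench_a > a.txt and ./bench_b > b.txt for the two variants, then ministat a.txt b.txt will tell you whether the difference between them is statistically meaningful.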

How to find the runtime efficiency of C++ code

I'm trying to find the efficiency of a program which I have recently posted on stackoverflow.
How to efficiently delete elements from a vector given an another vector
To compare the efficiency of my code with the other answers, I'm using a chrono clock.
Is it a correct way to check the runtime efficiency?
If not then kindly suggest a way to do it with an example.
Code on Coliru
#include <iostream>
#include <vector>
#include <algorithm>
#include <chrono>
#include <ctime>
using namespace std;

void remove_elements(vector<int>& vDestination, const vector<int>& vSource)
{
    if (!vDestination.empty() && !vSource.empty())
    {
        for (auto i : vSource) {
            vDestination.erase(std::remove(vDestination.begin(), vDestination.end(), i), vDestination.end());
        }
    }
}

int main() {
    vector<int> v1 = {1, 2, 3};
    vector<int> v2 = {4, 5, 6};
    vector<int> v3 = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
    remove_elements(v3, v1);
    remove_elements(v3, v2);
    std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
    std::cout << "Time difference = " << std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count() << std::endl;
    for (auto i : v3)
        cout << i << endl;
    return 0;
}
Output
Time difference = 1472
7
8
9
Is it a correct way to check the runtime efficiency?
Looks like not the best way to do it. I see the following flaws in your method:
Values are sorted. Branch prediction may expose ridiculous effects when testing sorted vs. unsorted values with the same algorithm. Possible fix: test on both sorted and unsorted and compare results.
Values are hard-coded. CPU cache is a tricky thing and it may introduce subtle differences between tests on hard-coded values and real-life ones. In a real world you are unlikely to perform these operations on hard-coded values, so you may either read them from a file or generate random ones.
Too few values. Your code's execution time is much smaller than the timer precision.
You run the code only once. If you fix all the other issues and run the code twice, the second run may be much faster than the first due to cache warm-up: subsequent runs tend to have fewer cache misses than the first one.
You run the code once on fixed-size data. It would be better to run otherwise correct tests at least four times to cover a Cartesian product of the following parameters:
sorted vs. unsorted data
v3 fits the CPU cache vs. v3 exceeds the CPU cache. Also consider the cases where (v1.size() + v3.size()) * sizeof(int) fits the cache or not, where (v1.size() + v2.size() + v3.size()) * sizeof(int) fits the cache or not, and so on for all combinations (a data-generation sketch for these cases follows below).
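For illustration, here is a data-generation sketch for those cases (the helper name make_input and the size sweep from the command line are my own choices, not part of the original code):
#include <algorithm>
#include <cstdlib>
#include <random>
#include <vector>

// Hypothetical helper: builds test input of a chosen size, random or sorted,
// so the same benchmark can be run across the cases listed above.
std::vector<int> make_input(std::size_t n, bool sorted, unsigned seed = 12345) {
    std::mt19937 gen(seed);
    std::uniform_int_distribution<int> dist(0, 1000000);
    std::vector<int> v(n);
    for (auto& x : v) x = dist(gen);
    if (sorted) std::sort(v.begin(), v.end());
    return v;
}

int main(int argc, char** argv) {
    // Take the size from the command line so you can sweep from "fits in L1" to "spills to DRAM".
    std::size_t n = argc > 1 ? std::strtoull(argv[1], nullptr, 10) : 100000;
    std::vector<int> unsorted = make_input(n, /*sorted=*/false);
    std::vector<int> sorted   = make_input(n, /*sorted=*/true);
    // ... feed these to remove_elements() and time it as discussed above ...
    (void)unsorted; (void)sorted;
    return 0;
}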
The biggest issues with your approach are:
1) The code you're testing is too short and predictable. You need to run it at least a few thousand times, so that the measured interval is at least a few hundred milliseconds. And you need to make the data set larger and less predictable. In general, CPU caches really make accurate measurements based on synthetic input data a PITA.
2) The compiler is free to reorder your code. In general it's quite difficult to ensure the code you're timing will execute between calls to check the time (and nothing else, for that matter). On the one hand, you could dial down optimization, but on the other you want to measure optimized code.
One solution is to turn off whole-program optimization and put timing calls in another compilation unit.
Another possible solution is to use a memory fence around your test, e.g.
std::atomic_thread_fence(std::memory_order_seq_cst);
(requires #include <atomic> and a C++11-capable compiler).
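Put together, the fence approach might look roughly like this (the fences discourage the compiler from moving the measured work across the clock reads, but they are a blunt instrument, not a complete guarantee):
#include <atomic>
#include <chrono>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(100000, 1);
    long long result = 0;

    std::atomic_thread_fence(std::memory_order_seq_cst);
    auto begin = std::chrono::steady_clock::now();
    std::atomic_thread_fence(std::memory_order_seq_cst);

    for (int x : v)
        result += x;                       // code under test

    std::atomic_thread_fence(std::memory_order_seq_cst);
    auto end = std::chrono::steady_clock::now();
    std::atomic_thread_fence(std::memory_order_seq_cst);

    std::cout << result << " computed in "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin).count()
              << " ns" << std::endl;
}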
In addition, you might want to supplement your measurements with profiler data to see how efficiently L1/2/3 caches are used, memory bottlenecks, instruction retire rate, etc. Unfortunately the best tool for Intel x86 for that is commercial (vtune), but on AMD x86 a similar tool is free (codeXL).
You might consider using a benchmarking library like Celero to do the measurements for you and deal with the tricky parts of performance measurements, while you remain focused on the code you're trying to optimize. More complex examples are available in the code I've linked in the answer to your previous question (How to efficiently delete elements from a vector given an another vector), but a simple use case would look like this:
BENCHMARK(VectorRemoval, OriginalQuestion, 100, 1000)
{
    // Requires <algorithm>, <cstdlib>, <iterator>, <random>, <vector> and C++17 for std::sample.
    std::vector<int> destination(10000);
    std::vector<int> source;
    std::generate(destination.begin(), destination.end(), std::rand);
    std::sample(destination.begin(), destination.end(), std::back_inserter(source),
                100, std::mt19937{std::random_device{}()});
    for (auto i : source)
        destination.erase(std::remove(destination.begin(), destination.end(), i),
                          destination.end());
}

Should I use loops for small iterations? [duplicate]

I've been trying to optimize some extremely performance-critical code (a quicksort that's being called millions and millions of times inside a Monte Carlo simulation) by loop unrolling. Here's the inner loop I'm trying to speed up:
// Search for elements to swap.
while(myArray[++index1] < pivot) {}
while(pivot < myArray[--index2]) {}
I tried unrolling to something like:
while(true) {
    if(!(myArray[++index1] < pivot)) break;
    if(!(myArray[++index1] < pivot)) break;
    // More unrolling
}
while(true) {
    if(!(pivot < myArray[--index2])) break;
    if(!(pivot < myArray[--index2])) break;
    // More unrolling
}
This made absolutely no difference so I changed it back to the more readable form. I've had similar experiences other times I've tried loop unrolling. Given the quality of branch predictors on modern hardware, when, if ever, is loop unrolling still a useful optimization?
Loop unrolling makes sense if you can break dependency chains. This gives an out-of-order or superscalar CPU the possibility to schedule things better and thus run faster.
A simple example:
for (int i = 0; i < n; i++)
{
    sum += data[i];
}
Here every addition feeds a single dependency chain through sum. If you get a stall because of a cache miss on the data array, the CPU cannot do anything but wait.
On the other hand this code:
for (int i = 0; i < n - 3; i += 4) // note the n-3 bound for starting i + 0..3
{
    sum1 += data[i + 0];
    sum2 += data[i + 1];
    sum3 += data[i + 2];
    sum4 += data[i + 3];
}
sum = sum1 + sum2 + sum3 + sum4;
// if n % 4 != 0, handle the final 0..3 elements with a rolled-up loop or whatever
could run faster. If you get a cache miss or other stall in one calculation, there are still three other dependency chains that don't depend on the stall. An out-of-order CPU can execute these in parallel.
(See Why does mulss take only 3 cycles on Haswell, different from Agner's instruction tables? (Unrolling FP loops with multiple accumulators) for an in-depth look at how register renaming helps CPUs find that parallelism, and at the details of an FP dot product on modern x86-64 CPUs with their throughput vs. latency characteristics for pipelined floating-point SIMD FMA ALUs. Hiding the latency of FP addition or FMA is a major benefit of multiple accumulators, since FP latencies are longer than integer ones but SIMD throughput is often similar.)
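For completeness, here is a self-contained version of the multiple-accumulator idea, with the leftover elements handled by a rolled-up tail loop (the data and its size are arbitrary placeholders):
#include <cstddef>
#include <iostream>
#include <vector>

// Sum with four independent accumulators; the tail loop picks up the last n % 4 elements.
long long sum_unrolled(const std::vector<int>& data) {
    long long sum1 = 0, sum2 = 0, sum3 = 0, sum4 = 0;
    std::size_t i = 0;
    for (; i + 3 < data.size(); i += 4) {   // four independent dependency chains
        sum1 += data[i + 0];
        sum2 += data[i + 1];
        sum3 += data[i + 2];
        sum4 += data[i + 3];
    }
    long long sum = sum1 + sum2 + sum3 + sum4;
    for (; i < data.size(); ++i)            // remainder, 0..3 elements
        sum += data[i];
    return sum;
}

int main() {
    std::vector<int> data(1003, 1);                 // size deliberately not a multiple of 4
    std::cout << sum_unrolled(data) << std::endl;   // prints 1003
}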
Those wouldn't make any difference because you're doing the same number of comparisons. Here's a better example. Instead of:
for (int i = 0; i < 200; i++) {
    doStuff();
}
write:
for (int i = 0; i < 50; i++) {
    doStuff();
    doStuff();
    doStuff();
    doStuff();
}
Even then it almost certainly won't matter but you are now doing 50 comparisons instead of 200 (imagine the comparison is more complex).
Manual loop unrolling, however, is largely an artifact of history. It's another of the growing list of things that a good compiler will do for you when it matters. For example, most people don't bother to write x <<= 1 or x += x instead of x *= 2. You just write x *= 2 and the compiler will optimize it for you to whatever is best.
Basically there's increasingly less need to second-guess your compiler.
Regardless of branch prediction on modern hardware, most compilers do loop unrolling for you anyway.
It would be worthwhile to find out how much optimization your compiler does for you.
I found Felix von Leitner's presentation very enlightening on the subject. I recommend you read it. Summary: Modern compilers are VERY clever, so hand optimizations are almost never effective.
As far as I understand it, modern compilers already unroll loops where appropriate. An example is gcc: if passed the optimisation flags, the manual says it will:
Unroll loops whose number of iterations can be determined at compile time or upon entry to the loop.
So, in practice it's likely that your compiler will do the trivial cases for you. It's therefore up to you to make sure that, for as many of your loops as possible, the compiler can easily determine how many iterations will be needed.
Loop unrolling, whether it's hand unrolling or compiler unrolling, can often be counter-productive, particularly with more recent x86 CPUs (Core 2, Core i7). Bottom line: benchmark your code with and without loop unrolling on whatever CPUs you plan to deploy this code on.
Trying without knowing is not the way to do it.
Does this sort take a high percentage of overall time?
All loop unrolling does is reduce the loop overhead of incrementing/decrementing, comparing for the stop condition, and jumping. If what you're doing in the loop takes more instruction cycles than the loop overhead itself, you're not going to see much improvement percentage-wise.
Here's an example of how to get maximum performance.
Loop unrolling can be helpful in specific cases. The only gain isn't skipping some tests!
It can, for instance, allow scalar replacement or efficient insertion of software prefetching. You would actually be surprised how useful it can be: you can easily get a 10% speedup on many loops, even with -O3, by unrolling aggressively.
As was said before, though, it depends a lot on the loop and the compiler, and experimentation is necessary. It's hard to make a rule (or the compiler's unrolling heuristic would be perfect).
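As a made-up illustration of the scalar-replacement point (the 3-point smoothing kernel and the function names are my own; whether the hand-unrolled version actually beats what your compiler does at -O3 is exactly the kind of thing you have to measure):
#include <cstddef>
#include <vector>

// Rolled version: each output loads three neighbouring elements from memory
// (unless the compiler already performs this transformation itself).
void smooth(const std::vector<float>& in, std::vector<float>& out) {
    for (std::size_t i = 1; i + 1 < in.size(); ++i)
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;
}

// Unrolled by two with scalar replacement: values loaded for one sub-iteration are
// kept in locals (i.e. registers) and reused by the next, so the loads per output
// drop from three to roughly two.
void smooth_unrolled(const std::vector<float>& in, std::vector<float>& out) {
    std::size_t i = 1;
    for (; i + 2 < in.size(); i += 2) {
        float a = in[i - 1], b = in[i], c = in[i + 1], d = in[i + 2];
        out[i]     = (a + b + c) / 3.0f;    // b and c ...
        out[i + 1] = (b + c + d) / 3.0f;    // ... reused here without reloading
    }
    for (; i + 1 < in.size(); ++i)          // remainder
        out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0f;
}

int main() {
    std::vector<float> in(16, 1.0f), out(16, 0.0f);
    smooth(in, out);
    smooth_unrolled(in, out);
    return 0;
}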
Loop unrolling depends entirely on your problem size and on your algorithm being able to break the work into smaller groups. What you did above does not look like that. I am not sure a Monte Carlo simulation can even be unrolled.
A good scenario for loop unrolling would be rotating an image, since you could process separate groups of work. To get this to work you would have to reduce the number of iterations.
Loop unrolling is still useful if there are a lot of local variables in and around the loop, since it lets you reuse those registers instead of dedicating one to the loop index.
In your example, you use a small number of local variables, so you are not overusing the registers.
The comparison (against the loop end) is also a major drawback if the comparison is heavy (i.e. not a simple test instruction), especially if it depends on an external function.
Loop unrolling can also help the CPU's branch prediction, but those branches occur anyway.

Why is the memory access so slow?

I've got a very strange performance issue related to memory access.
The code snippet is:
#include <vector>
using namespace std;
vector<int> arrx(M, -1);
vector< vector<int> > arr(N, arrx);
...
for (i = 0; i < N; i++) {
    for (j = 0; j < M; j++) {
        //>>>>>>>>>>>> Part 1 <<<<<<<<<<<<<<
        // Simple arithmetic operations
        int n1 = 1 + 2; // does not matter what (actually more complicated)
        // Integer assignment, without access to array
        int n2 = n1;
        //>>>>>>>>>>>> Part 2 <<<<<<<<<<<<<<
        // This turns out to be most expensive part
        arr[i][j] = n1;
    }
}
N and M are constants on the order of 1000 to 10000 or so.
When I compile this code (release version) it takes approximately 15 clocks to finish if Part 2 is commented out. With this part the execution time goes up to 100+ clocks, so it is almost 10 times slower. I expected the assignment operation to be much cheaper than even simple arithmetic operations. This is actually true if we do not use arrays, but with the array the assignment seems to be much more expensive.
I also tried a 1-D array instead of the 2-D one: the same result (the 2-D version is obviously slower, though).
I also used int** or int* instead of vector< vector<int> > or vector<int>; again the result is the same.
Why do I get such poor performance in array assignment and can I fix it?
Edit:
One more observation: in Part 2 of the given code, if we change the assignment from
arr[i][j] = n1; // 172 clocks
to
n1 = arr[i][j]; // 16 clocks
the speed (numbers in comments) goes up. More interestingly, if we change the line:
arr[i][j] = n1; // 172 clocks
to
arr[i][j] = arr[i][j] * arr[i][j]; // 110 clocks
the speed is also higher than for the simple assignment.
Is there any difference in reading and writing from/to memory? Why do I get such strange performance?
Thanks in advance!
Your assumptions are really wrong...
Writing to main memory is much slower than doing a simple addition.
If you don't do the write, it is likely that the loop will be optimized away entirely.
Unless your actual "part 1" is significantly more complex than your example would lead us to believe, there's no surprise here -- memory access is slow compared to basic arithmetic. Additionally, if you're compiling with optimizations, most or all of "part 1" may be getting optimized away because the results are never used.
You have discovered a sad truth about modern computers: memory access is really slow compared to arithmetic. There's nothing you can do about it. It is because electric fields in a copper wire only travel at about two-thirds of the speed of light.
Your nested arrays are going to be somewhere in the region of 4MB to 400MB of memory. It takes time to write that much memory, and no amount of clever hardware memory caching is going to help that much. Moreover, even a single memory write will take time, as it has to make its way over some copper wires on a circuit board to a lump of silicon some distance away; physics wins.
But if you want to dig in more, try the cachegrind tool (assuming it's present on your platform). Just be aware that the code you're using above doesn't really permit a lot of optimization; its data access pattern doesn't have enormous amounts of reuse potential.
Let's make a simple estimate. A typical CPU clock speed nowadays is about 1 to 2 GHz (giga = 10 to the power nine). Simplifying a (really) great deal, this means that a single processor operation takes about 1 ns (nano = 10 to the power minus nine). Simple arithmetic like an int addition takes several CPU cycles, on the order of ten.
Now, memory: a typical memory access time is about 50 ns (again, it's not necessary just now to go into the gory details, of which there are plenty).
You see that even in the very best case scenario, memory is slower than the CPU by a factor of about 5 to 10.
In fact, I'm sweeping an enormous amount of detail under the rug, but I hope the basic idea is clear. If you're interested, there are books around (keywords: cache, cache misses, data locality, etc.). This one is dated, but still very good at explaining the general concepts.

Slow writing to array in C++

I was just wondering if this is expected behavior in C++. The code below runs at around 0.001 ms:
for (int l = 0; l < 100000; l++) {
    int total = 0;
    for (int i = 0; i < num_elements; i++)
    {
        total += i;
    }
}
However if the results are written to an array, the time of execution shoots up to 15 ms:
int *values = (int*)malloc(sizeof(int) * 100000);
for (int l = 0; l < 100000; l++) {
    int total = 0;
    for (unsigned int i = 0; i < num_elements; i++)
    {
        total += i;
    }
    values[l] = total;
}
I can appreciate that writing to the array takes time but is the time proportionate?
Cheers everyone
The first example can be implemented using just CPU registers. Those can be accessed billions of times per second. The second example uses so much memory that it certainly overflows L1 and possibly L2 cache (depending on the CPU model). That will be slower. Still, 15 ms / 100,000 writes comes out to 150 ns per write, and each "write" here includes the whole inner summation loop. That's not slow.
It looks like the compiler is optimizing that loop out entirely in the first case.
The total effect of the loop is a no-op, so the compiler just removes it.
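A quick way to confirm that explanation is to make the result observable, which forces the compiler to keep the work. A sketch (num_elements is given an arbitrary value here, and a clever optimizer may still fold the arithmetic into a closed form, but it can no longer discard it as unused):
#include <cstdio>

int main() {
    const int num_elements = 10000;
    long long grand_total = 0;
    for (int l = 0; l < 100000; ++l) {
        int total = 0;
        for (int i = 0; i < num_elements; ++i)
            total += i;
        grand_total += total;               // the result is now used...
    }
    std::printf("%lld\n", grand_total);     // ...and printed, so the loops cannot be removed as dead code
    return 0;
}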
It's very simple.
In the first case you have just 3 variables, which can easily be kept in general purpose registers (GPRs). That doesn't mean they are there all the time, but they are probably in L1 cache memory, which means they can be accessed very fast.
In the second case you have more than 100k variables, and you need about 400 kB to store them. That is definitely too much for the registers and the L1 cache. In the best case it could all be in L2 cache, but probably not all of it will be. If something is not in a register, L1, or L2 (I assume your processor doesn't have L3), you need to fetch it from RAM, and that takes much more time.
I would suspect that what you are seeing is an effect of virtual memory and possibly paging. The malloc call is going to allocate a decent-sized chunk of memory that is probably represented by a number of virtual pages. Each page is linked into process memory separately.
You may also be measuring the cost of calling malloc depending on how you timed the loop. In either case, the performance is going to be very sensitive to compiler optimization options, threading options, compiler versions, runtime versions, and just about anything else. You cannot safely assume that the cost is linear with the size of the allocation. The only thing that you can do is measure it and figure out how to best optimize once it has been proven to be a problem.