Time measurement repeatedly gives wrong results in specific places - C++

I need to write a program that measures the performance of certain data structures, but I can't get reliable results. For example, when I measured performance 8 times for the same structure size, every other result was different (for example: 15ms, 9ms, 15ms, 9ms, 15ms, ...), even though the measurements weren't dependent on each other (I generated new data for every measurement). I tried to extract the problem into a minimal example, and here is what I have:
while (true) {
    auto start = high_resolution_clock::now();
    for (int j = 0; j < 500; j++)
        ;
    auto end = high_resolution_clock::now();
    cout << duration<double, milli>(end - start).count() << " ";
    _getch();
}
What happens when I run this code is this: in the first pass of the loop the time is significantly higher than in the following passes. It's always higher in the first pass, but from time to time also in other measurements.
Example output: 0.006842 0.002566 0.002566 0.002138 0.002993 0.002138 0.002139 ...
And that's the behaviour every time I start the program.
Here are some things I tried:
It matters whether I compile the Release or Debug version; the measurements are still faulty, but in different places.
I turned off code optimization.
I tried using different clocks.
And what I think is quite important: when my Add function wasn't empty, the problem depended on the data size. For example, the program worked well for most data sizes, but for an element count of, say, 7500 the measurements were drastically different.
I just deleted the part of the code after the segment I posted here, and guess what: the first measurement is no longer faulty. I have no idea what's happening here.
I would be glad if someone could explain what the possible cause of all of this might be.

In that code, it's likely that you're just seeing the effect of the instruction cache or the micro-op cache. The first time the test is run, more instructions have to be fetched and decoded; on subsequent runs the results of that work are available in the caches. As for the alternating times you were seeing with some other code, that could be fluctuations in the branch prediction buffer, or something else entirely.
There are too many complex processes involved in execution on modern CPUs to expect a normal sequence of instructions to execute in a fixed amount of time. While it's possible to measure or at least account for these externalities when looking at individual instructions, for nontrivial code you basically have to accept empirical measurements, including their variance.
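As a minimal sketch of that effect (the work() function and the iteration counts below are stand-ins of mine, not the asker's code): do one untimed warm-up pass, then time several passes, and the first-run penalty largely disappears from the numbers you keep.

#include <chrono>
#include <iostream>

// Hypothetical stand-in for the measured code; the volatile store keeps the
// loop from being optimized away.
volatile int sink = 0;

static void work() {
    for (int j = 0; j < 500; j++)
        sink = j;
}

int main() {
    using namespace std::chrono;
    work();  // warm-up pass: lets instruction/uop caches and predictors settle
    for (int run = 0; run < 8; run++) {
        auto start = high_resolution_clock::now();
        work();
        auto end = high_resolution_clock::now();
        std::cout << duration<double, std::milli>(end - start).count() << " ms\n";
    }
}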

Depending on what kind of operating system you're on, for durations this short, the scheduler can cause huge differences. If your thread is preempted, then you have the idle duration in your time. There are also many things that happen that you don't see: caches, pages, allocation. Modern systems are complex.
You're better off making the whole benchmark bigger, doing multiple runs of each thing you're testing, using something like ministat from FreeBSD to compare the runs of the same test, and then comparing the ministat output for the different things you're comparing.
To do this effectively, your benchmark should try to use the same amount of memory as the real workload, so that memory access is part of the benchmark.
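A sketch of that workflow, assuming the thing under test is wrapped in a placeholder run_benchmark() function (a name made up here): print one duration per run so the output can be fed straight to ministat or any other tool that compares samples.

#include <chrono>
#include <iostream>

// Placeholder workload; substitute the real data-structure test, sized to use
// roughly the same amount of memory as the real workload.
static void run_benchmark() {
    volatile long sum = 0;
    for (long i = 0; i < 5000000; i++)
        sum = sum + i;
}

int main() {
    using namespace std::chrono;
    for (int run = 0; run < 20; run++) {   // many runs give a sample, not a single point
        auto start = steady_clock::now();
        run_benchmark();
        auto end = steady_clock::now();
        // one duration per line; redirect each configuration to its own file
        // and compare the files with ministat
        std::cout << duration<double, std::milli>(end - start).count() << "\n";
    }
}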

Related

Using one loop vs two loops

I was reading this blog post: https://developerinsider.co/why-is-one-loop-so-much-slower-than-two-loops/. I decided to check it out using C++ and Xcode, so I wrote the simple program given below, and when I executed it I was surprised by the result: the second function was actually slower than the first, contrary to what is stated in the article. Can anyone please help me figure out why this is the case?
#include <iostream>
#include <vector>
#include <chrono>

using namespace std::chrono;

void function1() {
    const int n = 100000;
    int a1[n], b1[n], c1[n], d1[n];
    for (int j = 0; j < n; j++) {
        a1[j] = 0;
        b1[j] = 0;
        c1[j] = 0;
        d1[j] = 0;
    }
    auto start = high_resolution_clock::now();
    for (int j = 0; j < n; j++) {
        a1[j] += b1[j];
        c1[j] += d1[j];
    }
    auto stop = high_resolution_clock::now();
    auto duration = duration_cast<microseconds>(stop - start);
    std::cout << duration.count() << " Microseconds." << std::endl;
}

void function2() {
    const int n = 100000;
    int a1[n], b1[n], c1[n], d1[n];
    for (int j = 0; j < n; j++) {
        a1[j] = 0;
        b1[j] = 0;
        c1[j] = 0;
        d1[j] = 0;
    }
    auto start = high_resolution_clock::now();
    for (int j = 0; j < n; j++) {
        a1[j] += b1[j];
    }
    for (int j = 0; j < n; j++) {
        c1[j] += d1[j];
    }
    auto stop = high_resolution_clock::now();
    auto duration = duration_cast<microseconds>(stop - start);
    std::cout << duration.count() << " Microseconds." << std::endl;
}

int main(int argc, const char * argv[]) {
    function1();
    function2();
    return 0;
}
TL;DR: The loops are basically the same, and if you are seeing differences, then your measurement is wrong. Performance measurement and more importantly, reasoning about performance requires a lot of computer knowledge, some scientific rigor, and much engineering acumen. Now for the long version...
Unfortunately, there is some very inaccurate information in the article to which you've linked, as well as in the answers and some comments here.
Let's start with the article. There won't be any disk caching that has any effect on the performance of these functions. It is true that virtual memory is paged to disk, when demand on physical memory exceeds what's available, but that's not a factor that you have to consider for programs that touch 1.6MB of memory (4 * 4 * 100K).
And if paging comes into play, the performance difference won't exactly be subtle either. If these arrays were paged to disk and back, the performance difference would be on the order of 1000x for the fastest disks, not 10% or 100%.
Paging and page faults and their effect on performance are neither trivial nor intuitive. You need to read about them, and experiment with them seriously. What little information that article has is completely inaccurate, to the point of being misleading.
The second issue is your profiling strategy and the micro-benchmark itself. Clearly, with such simple operations on the data (an add), the bottleneck will be memory bandwidth itself (or maybe instruction retire limits or something like that, with such a simple loop). And since you only read memory linearly, and use everything you read, whether it's in 4 interleaved streams or 2, you are making use of all the bandwidth that is available.
However, if you call your function1 or function2 in a loop, you will be measuring the bandwidth of different parts of the memory hierarchy depending on N, from L1 all the way to L3 and main memory. (You should know the size of all levels of cache on your machine, and how they work.) This is obvious if you know how CPU caches work, and really mystifying otherwise. Do you want to know how fast this is when you do it the first time, when the arrays are cold, or do you want to measure the hot access?
Is your real use case copying the same mid-sized array over and over again?
If not, what is it? What are you benchmarking? Are you trying to measure something or just experimenting?
Shouldn't you be measuring the fastest run through a loop, rather than the average since that can be massively affected by a (basically random) context switch or an interrupt?
Have you made sure you are using the correct compiler switches? Have you looked at the generated assembly code to make sure the compiler is not adding debug checks and what not, and is not optimizing stuff away that it shouldn't (after all, you are just executing useless loops, and an optimizing compiler wants nothing more than to avoid generating code that is not needed).
Have you looked at the theoretical memory/cache bandwidth numbers for your hardware? Your specific CPU and RAM combination will have theoretical limits. And be it 5, 50, or 500 GiB/s, it will give you an upper bound on how much data you can move around and work with. The same goes for the number of execution units, the IPC of your CPU, and a few dozen other numbers that will affect the performance of this kind of micro-benchmark.
If you are reading 4 integers (4 bytes each, from a, b, c, and d) and then doing two adds and writing the two results back, and doing it 100'000 times, then you are - roughly - looking at 2.4MB of memory read and write. If you do it 10 times in 300 micro-seconds, then your program's memory (well, store buffer/L1) throughput is about 80 GB/s. Is that low? Is that high? Do you know? (You should have a rough idea.)
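For illustration only, here is one way to take the fastest of many runs of the combined loop and turn it into a rough bandwidth figure; the vector names and the run count are made up, and the 24 bytes per iteration matches the counting above.

#include <algorithm>
#include <chrono>
#include <iostream>
#include <vector>

// Sketch only: time the combined add loop many times, keep the fastest run,
// and convert it to a rough bandwidth figure.
int main() {
    using namespace std::chrono;
    const int n = 100000;
    std::vector<int> a(n, 0), b(n, 0), c(n, 0), d(n, 0);

    double best_us = 1e300;
    for (int run = 0; run < 100; run++) {
        auto start = high_resolution_clock::now();
        for (int j = 0; j < n; j++) {
            a[j] += b[j];
            c[j] += d[j];
        }
        auto stop = high_resolution_clock::now();
        best_us = std::min(best_us,
                           duration<double, std::micro>(stop - start).count());
    }
    // 4 reads + 2 writes of 4 bytes each = 24 bytes touched per iteration
    double gb_per_s = 24.0 * n / (best_us * 1e-6) / 1e9;
    std::cout << "fastest run: " << best_us << " us, roughly " << gb_per_s
              << " GB/s (checksum " << a[0] + c[n - 1] << ")\n";
}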
And let me tell you that the other two answers here at the time of this writing (namely this and this) do not make sense. I can't make heads or tails of the first one, and the second one is almost completely wrong (conditional branches in a 100'000-iteration for loop are bad? allocating an additional iterator variable is costly? cold access to an array on the stack vs. on the heap has "serious performance implications"?).
And finally, as written, the two functions have very similar performance. It is really hard to separate the two, and unless you can measure a real difference in a real use case, I'd say write whichever one makes you happier.
If you really, really want a theoretical difference between them, I'd say the one with two separate loops is very slightly better, because it is usually not a good idea to interleave access to unrelated data.
This has nothing to do with caching or instruction efficiency. Simple iterations over long vectors are purely a matter of bandwidth. (Google: STREAM benchmark.) And modern CPUs have enough bandwidth to satisfy a good share of their cores, though not all of them.
So if you combine the two loops, executing them on a single core, there is probably enough bandwidth for all the loads and stores at the rate that memory can sustain. But if you use two loops, you leave bandwidth unused, and the runtime will be a little less than double.
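To put a number on the bandwidth claim, here is a STREAM-triad-style sketch (not the official STREAM benchmark; the array size and repeat count are arbitrary choices) that estimates sustainable memory bandwidth on one core.

#include <chrono>
#include <iostream>
#include <vector>

// Minimal STREAM-triad-style sketch (a[i] = b[i] + s * c[i]) to estimate
// sustainable bandwidth.
int main() {
    using namespace std::chrono;
    const size_t n = 1 << 23;                 // 8M doubles (~64 MiB) per array, well past L3
    std::vector<double> a(n), b(n, 1.0), c(n, 2.0);
    const double s = 3.0;
    const int reps = 10;

    auto start = steady_clock::now();
    for (int r = 0; r < reps; r++)
        for (size_t i = 0; i < n; i++)
            a[i] = b[i] + s * c[i];
    auto stop = steady_clock::now();

    double secs = duration<double>(stop - start).count();
    // the triad touches 3 doubles (2 reads + 1 write) = 24 bytes per element
    double gbs = 24.0 * n * reps / secs / 1e9;
    std::cout << "triad: ~" << gbs << " GB/s (checksum " << a[0] << ")\n";
}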
The reason why the second is faster in your case (I do not think this holds on every machine) is better CPU caching: at the point at which your CPU no longer has enough cache to store the arrays, the stuff your OS requires, and so on, the second function will probably be much slower than the first from a performance standpoint.
I doubt that the two-loop code will give better performance if there are enough other programs running as well, because the second function is obviously less efficient than the first, and if there is enough other stuff cached, the performance lead gained through caching will be eliminated.
I'll just chime in here with a little something to keep in mind when looking into performance: unless you are writing embedded software for a real-time device, the performance of low-level code like this should not be a concern.
In 99.9% of all other cases, they will be fast enough.

Repeated timing of a void function in C++

I am trying to time a void function
for (size_t round = 0; round < 5; round++) {
    cpu_time_start = get_cpu_time();
    wall_time_start = get_wall_time();

    scan.assign_clusters(epsilon, mu);

    cpu_time_end = get_cpu_time();
    wall_time_end = get_wall_time();
    ...
}
The first timing yields 300 seconds, while the next four timings yield 0.000002 seconds. This indicates that the void function call to assign_clusters is optimized out. How can I force my program to execute this time-consuming function call every time, and yet still use optimization for the rest of the code?
What I usually do is to save the result of the function in question and then print it, but since this is a void function, do I have the same option?
I use the following optimization flags: -std=c++0x -march=native -O2
Which fix is appropriate depends on what is taking the time.
This could be caused by:
Loading services. Your clustering may be database-based and require the database services to start (the first time).
Disk caching. The OS will remember data it has read, and be able to provide the data as if it were in memory.
Memory caching. The CPU has different speeds of memory available to it; using the same memory twice will be faster the second time.
State caching. The data may be in a more amenable state for subsequent runs. This can be thought of as sorting an array twice. The second time it is already sorted, which can produce a speed-up.
Service starting can take a number of seconds.
Disk cache: approx. 20x speed-up.
Memory cache: approx. 6x speed-up.
State caching: can be unbounded.
I think your code needs to reset the scan object to ensure it does the work again.
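A sketch of that idea with a toy Scan class (the real assign_clusters, epsilon, and mu are unknown here, so everything below is a placeholder): rebuilding the object with fresh input each round forces the work to be redone.

#include <chrono>
#include <iostream>
#include <numeric>
#include <vector>

// Toy stand-in for the real clustering class; only the shape of the timing
// loop matters here.
struct Scan {
    std::vector<int> data;
    explicit Scan(std::vector<int> d) : data(std::move(d)) {}
    void assign_clusters(double /*epsilon*/, int /*mu*/) {
        volatile long sum = std::accumulate(data.begin(), data.end(), 0L);
        (void)sum;    // placeholder work so the call is not free
    }
};

int main() {
    using namespace std::chrono;
    for (size_t round = 0; round < 5; round++) {
        // rebuild the object with fresh input each round, so no run can
        // reuse state cached by a previous one
        Scan scan(std::vector<int>(1000000, 1));
        auto start = steady_clock::now();
        scan.assign_clusters(0.5, 3);
        auto end = steady_clock::now();
        std::cout << "round " << round << ": "
                  << duration<double>(end - start).count() << " s\n";
    }
}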

How to demonstrate the impact of instruction cache limitations

My original idea was to give an elegant code example that would demonstrate the impact of instruction cache limitations. I wrote the following piece of code, which creates a large number of identical functions using template metaprogramming.
volatile int checksum;
void (*funcs[MAX_FUNCS])(void);

template <unsigned t>
__attribute__ ((noinline)) static void work(void) { ++checksum; }

template <unsigned t>
static void create(void) { funcs[t - 1] = &work<t - 1>; create<t - 1>(); }

template <> void create<0>(void) { }

int main()
{
    create<MAX_FUNCS>();
    for (unsigned range = 1; range <= MAX_FUNCS; range *= 2)
    {
        checksum = 0;
        for (unsigned i = 0; i < WORKLOAD; ++i)
        {
            funcs[i % range]();
        }
    }
    return 0;
}
The outer loop varies the amount of different functions to be called using a jump table. For each loop pass, the time taken to invoke WORKLOAD functions is then measured. Now what are the results? The following chart shows the average run time per function call in relation to the used range. The blue line shows the data measured on a Core i7 machine. The comparative measurement, depicted by the red line, was carried out on a Pentium 4 machine. Yet when it comes to interpreting these lines, I seem to be somehow struggling...
The only jumps of the piecewise constant red curve occur exactly where the total memory consumption of all functions within the range exceeds the capacity of one cache level on the tested machine, which has no dedicated instruction cache. For very small ranges (below 4 in this case), however, run time still increases with the number of functions. This may be related to branch prediction efficiency, but since every function call reduces to an unconditional jump in this case, I'm not sure if there should be any branching penalty at all.
The blue curve behaves quite differently. Run time is constant for small ranges and increases logarithmically thereafter. Yet for larger ranges, the curve seems to be approaching a constant asymptote again. How exactly can the qualitative differences between the two curves be explained?
I am currently using GCC MinGW Win32 x86 v.4.8.1 with g++ -std=c++11 -ftemplate-depth=65536 and no compiler optimization.
Any help would be appreciated. I am also interested in any idea on how to improve the experiment itself. Thanks in advance!
First, let me say that I really like how you've approached this problem, this is a really neat solution for intentional code bloating. However, there might still be several possible issues with your test -
You also measure the warm-up time. You didn't show where you've placed your time checks, but if it's just around the internal loop, then the first time, until you reach range/2, you'd still enjoy the warm-up of the previous outer iteration. Instead, measure only warm performance: run each internal iteration several times (add another loop in the middle) and take the timestamp only after 1-2 warm-up rounds (see the sketch after this list).
You claim to have measured several cache levels, but your L1 cache is only 32k, which is where your graph ends. Even assuming this counts in terms of "range", each function is ~21 bytes (at least on my gcc 4.8.1), so you'll reach at most 256KB, which is only then scratching the size of your L2.
You didn't specify your CPU model (the i7 has had at least 4 generations on the market now: Haswell, Ivy Bridge, Sandy Bridge and Nehalem). The differences are quite large, for example an additional uop cache since Sandy Bridge, with complicated storage rules and conditions. Your baseline also complicates things: if I recall correctly, the P4 had a trace cache, which might also cause all sorts of performance impacts. You should check for an option to disable them if possible.
Don't forget the TLB: even though it probably doesn't play a role here in such tightly organized code, the number of unique 4k pages should not exceed the ITLB (128 entries), and even before that you may start having collisions if your OS did not spread the physical code pages well enough to avoid ITLB collisions.
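Here is the sketch referred to in the first point: a hedged rework of the measurement loop (MAX_FUNCS shrunk to 256 and WORKLOAD picked arbitrarily, so the numbers are illustrative) that adds a repetition loop and only keeps timings from warm rounds.

#include <algorithm>
#include <chrono>
#include <iostream>

volatile int checksum;
void (*funcs[256])(void);      // 256 functions instead of MAX_FUNCS, to keep the sketch small

template <unsigned t>
__attribute__((noinline)) static void work(void) { ++checksum; }

template <unsigned t>
static void create(void) { funcs[t - 1] = &work<t - 1>; create<t - 1>(); }
template <> void create<0>(void) {}

int main() {
    using namespace std::chrono;
    const unsigned WORKLOAD = 1u << 20;       // arbitrary for the sketch
    create<256>();
    for (unsigned range = 1; range <= 256; range *= 2) {
        double best = 1e300;
        for (int rep = 0; rep < 5; rep++) {   // extra repetition loop around the timed region
            checksum = 0;
            auto start = steady_clock::now();
            for (unsigned i = 0; i < WORKLOAD; ++i)
                funcs[i % range]();
            auto stop = steady_clock::now();
            double t = duration<double>(stop - start).count();
            if (rep >= 2)                     // discard the first rounds as warm-up
                best = std::min(best, t);
        }
        std::cout << "range " << range << ": "
                  << best / WORKLOAD * 1e9 << " ns/call\n";
    }
}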

Execution time of functions decreases at runtime. (C++) Why?

For some testing purposes I have written a piece of code for measuring execution times of several fast operations in my real-time video processing code. Things are working fine, and I am getting very realistic results, but I noticed one interesting peculiarity.
I am using the POSIX function clock_gettime with the CLOCK_MONOTONIC attribute, so I am getting timespecs with nanosecond precision (1/1000000000 sec), and it is said that getting a timespec value this way takes only a few processor ticks.
Here are the two functions that I am using for saving timespecs. I also added the definitions of the data structures that are being used:
QVector<long> timeMemory;
QVector<std::string> procMemory;
timespec moment;

void VisionTime::markBegin(const std::string& action) {
    if (measure) {
        clock_gettime(CLOCK_MONOTONIC, &moment);
        procMemory.append(action + ";b");
        timeMemory.append(moment.tv_nsec);
    }
}

void VisionTime::markEnd(const std::string& action) {
    if (measure) {
        clock_gettime(CLOCK_MONOTONIC, &moment);
        procMemory.append(action + ";e");
        timeMemory.append(moment.tv_nsec);
    }
}
I am collecting the results into a couple of QVectors that are used later.
I noticed that when these two functions are executed for the first time (right after each other, with nothing between them), the difference between the two saved timespecs is ~34000ns. The next time the difference is about 2 times smaller. And so on. If I execute them hundreds of times then the average difference is ~2000ns.
So an average recurrent execution of these functions takes about 17x less time than the first one.
As I am taking hundreds of measurements in a row, it does not really matter to me that some of the first executions last a little bit longer. But it still interests me: why is it that way?
I have various experience in Java, but I am quite new to C++. I do not know much about how things work here.
I am using the -O3 flag for the optimization level.
My QMake conf:
QMAKE_CXXFLAGS += -O3 -march=native
So, can anyone tell which part of this little code gets faster at runtime, how, and why? I suspect the appending to the QVector. Does optimization affect this somehow?
It's my first question here on Stack Overflow, hope it's not too long :) Many thanks for all your responses!
There are quite a few potential first-time costs in your measurement code; here are a couple, and how you can test for them.
Memory allocation: Those QVectors won't have any memory allocated on the heap until the first time you use them.
Also, the vector will most likely start out by allocating a small amount of memory, then allocate exponentially more as you add more data (a standard compromise for containers like this). Therefore, you will have many memory allocations towards the beginning of your runtime, then the frequency will decrease over time.
You can verify that this is happening by looking at the return value of QVector::capacity(), and tune the behavior with QVector::reserve(int) - e.g. calling timeMemory.reserve(10000); and procMemory.reserve(10000); up front reserves enough space for the first ten thousand measurements before your measurements begin.
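For illustration, here is the same pattern with std::vector, whose reserve()/capacity() behave analogously to QVector's (this is an example of mine, not the asker's code): reserving up front moves all allocation out of the measured region.

#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<long> timeMemory;
    std::vector<std::string> procMemory;

    timeMemory.reserve(10000);    // room for the first ten thousand marks
    procMemory.reserve(10000);    // no reallocation until this is exceeded

    std::cout << "capacity before appending: " << timeMemory.capacity() << "\n";
    for (int i = 0; i < 10000; i++)
        timeMemory.push_back(i);  // no allocation happens inside this loop
    std::cout << "capacity after appending:  " << timeMemory.capacity() << "\n";
}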
Lazy symbol binding: the dynamic linker by default won't resolve symbols from Qt (or other shared libraries) until they are needed. So, if these measuring functions are the first place in your code where some QVector or std::string functions are called, the dynamic linker will need to do some one-time work to resolve those functions, which takes time.
If this is indeed the case, you can disable the lazy loading by setting the environment variable LD_BIND_NOW=1 on Linux or DYLD_BIND_AT_LAUNCH=1 on Mac.
It is probably due to branch prediction. http://en.wikipedia.org/wiki/Branch_predictor

Simple operation to waste time?

I'm looking for a simple operation / routine which can "waste" time if repeated continuously.
I'm researching how gprof profiles applications, so this "time waster" needs to waste time in user space and should not require external libraries. I.e., calling sleep(20) will "waste" 20 seconds of time, but gprof will not record this time because it occurred within another library.
Any recommendations for simple tasks which can be repeated to waste time?
Another variant on Tomalak's solution is to set up an alarm, so in your busy-wait loop you don't need to keep issuing a system call; instead you just check whether the signal has been sent.
The simplest way to "waste" time without yielding CPU is a tight loop.
If you don't need to restrict the duration of your waste (say, you control it by simply terminating the process when done), then go C style*:
for (;;) {}
(Be aware, though, that the standard allows the implementation to assume that programs will eventually terminate, so technically speaking this loop, at least in C++0x, has Undefined Behaviour and could be optimised out!**)
Otherwise, you could time it manually:
time_t s = time(0);
while (time(0) - s < 20) {}
Or, instead of repeatedly issuing the time syscall (which will lead to some time spent in the kernel), if on a GNU-compatible system you could make use of signal.h "alarms" to end the loop:
alarm(20);
while (true) {}
There's even a very similar example on the documentation page for "Handler Returns".
(Of course, these approaches will all send you to 100% CPU for the intervening time and make fluffy unicorns fall out of your ears.)
* {} rather than trailing ; used deliberately, for clarity. Ultimately, there's no excuse for writing a semicolon in a context like this; it's a terrible habit to get into, and becomes a maintenance pitfall when you use it in "real" code.
** See [n3290: 1.10/2] and [n3290: 1.10/24].
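A sketch of the alarm-based variant described above, using only standard POSIX calls (the handler and flag names are invented here):

#include <csignal>
#include <iostream>
#include <unistd.h>

// Set from the SIGALRM handler; volatile sig_atomic_t is the type a signal
// handler may write to safely.
static volatile std::sig_atomic_t done = 0;

void on_alarm(int) { done = 1; }

int main() {
    std::signal(SIGALRM, on_alarm);   // install the handler
    alarm(20);                        // ask the kernel for SIGALRM in ~20 seconds
    while (!done) {}                  // busy-wait entirely in user space until the signal lands
    std::cout << "done wasting time\n";
}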
A simple loop would do.
If you're researching how gprof works, I assume you've read the paper, slowly and carefully.
I also assume you're familiar with these issues.
Here's a busy loop which runs at one cycle per iteration on modern hardware, at least as compiled by clang or gcc or probably any reasonable compiler with at least some optimization flag:
#include <cstdint>

// The store to the volatile sink each iteration keeps the compiler from
// removing the loop while still letting it compile to a tight loop.
void busy_loop(uint64_t iters) {
    volatile int sink;
    do {
        sink = 0;
    } while (--iters > 0);
    (void)sink;
}
The idea is just to store to the volatile sink every iteration. This prevents the loop from being optimized away and makes each iteration do a predictable amount of work (at least one store). Modern hardware can do one store per cycle, and the loop overhead can generally complete in parallel in that same cycle, so the loop usually achieves one cycle per iteration. So you can ballpark the wall-clock time in nanoseconds that a given number of iters will take by dividing by your CPU speed in GHz. For example, a 3 GHz CPU will take about 2 seconds (2 billion nanoseconds) to run busy_loop when iters == 6,000,000,000.
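A hedged usage example for checking the one-cycle-per-iteration claim on your own machine; the iteration count assumes roughly a 3 GHz CPU, so adjust it for yours.

#include <chrono>
#include <cstdint>
#include <iostream>

// Repeated from the answer above so this compiles standalone.
void busy_loop(uint64_t iters) {
    volatile int sink;
    do {
        sink = 0;
    } while (--iters > 0);
    (void)sink;
}

int main() {
    using namespace std::chrono;
    const uint64_t iters = 3000000000ULL;    // ~1 second at 3 GHz if one cycle/iteration holds
    auto start = steady_clock::now();
    busy_loop(iters);
    auto stop = steady_clock::now();
    double secs = duration<double>(stop - start).count();
    std::cout << secs << " s for " << iters << " iterations -> "
              << iters / secs / 1e9 << " billion iterations per second\n";
}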