How to calculate ANSI C code performance? - c++

I have written a simple program in ANSI C and now want to perform some measurements.
I have measured execution time (using the clock() function under Windows and clock_gettime() under Linux).
Now I would like to calculate how many IPS (Instructions Per Second) my CPU executes while running this code of mine. (Yes, I know MIPS is a pathetic parameter, but even so, I want to calculate it.)
It would also be nice to see how many CPI (Cycles Per Instruction) it takes to perform, e.g., the addition of 3 elements, and the other operations I perform.
Google says how to calculate the number of MIPS using a calculator, some knowledge about my CPU (its clock speed), simple math and a bunch of other parameters (like CPI), but it doesn't say HOW to obtain those.
I also haven't found any C/C++ function which would return the number of clock cycles needed to perform, e.g., an access to a local variable.
It is also hard to find a reference manual from Intel/AMD for a modern CPU with information about opcodes and the like.
I have manually counted that my ANSI C code takes 37 operations, but those are ANSI C operations, not CPU instructions.
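For reference, once you have an elapsed time, an instruction count (e.g. from a profiler or a hardware performance counter) and the nominal clock frequency, the arithmetic behind IPS and CPI is simple: IPS = instructions / seconds and CPI = (clock_hz * seconds) / instructions. A small sketch, where all three inputs are assumed to have been measured elsewhere:

#include <stdio.h>

int main(void)
{
    /* assumed inputs -- substitute your own measurements */
    double elapsed_seconds = 0.25;   /* from clock() / clock_gettime() */
    double instructions    = 5.0e8;  /* retired instructions, from a profiler or counter */
    double clock_hz        = 3.0e9;  /* nominal CPU frequency */

    double ips  = instructions / elapsed_seconds;
    double mips = ips / 1.0e6;
    /* assumes the CPU actually ran at clock_hz for the whole interval */
    double cpi  = (clock_hz * elapsed_seconds) / instructions;

    printf("IPS = %.3e, MIPS = %.1f, CPI = %.2f\n", ips, mips, cpi);
    return 0;
}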

The easiest way of getting high-accuracy timing on Windows is QueryPerformanceCounter; see How to use QueryPerformanceCounter?
Then you simply need some functions that perform the operations you are interested in timing. You have to be a little careful of caching etc., so run the calculation several times and look at the distribution of times.
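A minimal sketch of that approach (the operation being timed is just a placeholder for your own code; error checking omitted):

#include <windows.h>
#include <stdio.h>

static volatile long sink;

/* placeholder for the code you actually want to time */
static void operation_under_test(void)
{
    long s = 0;
    for (long i = 0; i < 1000000; ++i) s += i;
    sink = s;   /* keep the loop from being optimized away */
}

int main(void)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   /* ticks per second */

    /* time several runs and look at the distribution, not a single value */
    for (int run = 0; run < 10; ++run) {
        QueryPerformanceCounter(&start);
        operation_under_test();
        QueryPerformanceCounter(&end);

        double seconds = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
        printf("run %d: %.6f s\n", run, seconds);
    }
    return 0;
}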

Related

Should I make a large function atomic in order to benchmark it accurately?

I would like to know how long it takes to execute some code. The code I am executing deals with OpenCV matrices and operations. The code will be run in a ROS environment on Linux. I don't want the code to be interrupted by system functions during my benchmarking.
Looking at this post about benchmarking, the answerer said the granularity of the result is 15 ms. I would like to do much better than that, so I was considering making the function atomic (just for benchmarking purposes). I'm not sure whether it is a good idea, for a few reasons, primarily because I don't have a deep understanding of processor architecture.
void atomic_wrapper_function(const object& A, const object& B) {
    static unsigned long running_sum = 0;
    unsigned long before, after;

    before = GetTimeMs64();
    function_to_benchmark(A, B);
    after = GetTimeMs64();

    running_sum += (after - before);
}
The function I am trying to benchmark is not a short function.
Will the result be accurate? For marking the time I'm considering using this function by Andreas Bonini.
Will it do something horrible to my computer? Call me superstitious, but I think it's good to ask this question.
I'm using C++11 on Linux.
C++11 atomics are not atomic in the RTOS sense; they only provide guarantees for writing multithreaded code. Linux is not an RTOS: your code can and will always be interrupted. There are some ways to lessen the effects, though, but not without diving very deeply into Linux.
You can, for example, configure the niceness so your process gets interrupted less by other userspace programs. You can tell the kernel on which CPU core to process interrupts and then pin your program to a different core. You can increase the timer precision, and so on, but:
There are many other things that might change the runtime of your algorithm, like several layers of CPU caches, power-saving features of your CPU, etc. If you are really only interested in benchmarking the execution time of your function for non-hard-realtime problems, it is easier to just run the algorithm many, many times and get a statistical estimate for the execution time.
Call the function a billion times during the benchmark and average (see the sketch below). OR
Benchmark the function from 1 time up to a billion times. The execution-time measure you are interested in should scale linearly; then do some kind of linear regression to get an estimate of it. OR
You say that you want to know what influence the algorithm has on your total program runtime? Then use profiling tools like callgrind (which can be integrated into QtCreator).
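A minimal C++11 sketch of the repeat-and-average idea, using std::chrono::steady_clock (the function being benchmarked is only a placeholder for your own code):

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <vector>

// placeholder for the real function you want to benchmark
static void function_to_benchmark() {
    volatile double x = 0.0;
    for (int i = 0; i < 100000; ++i) x += i * 0.5;
}

int main() {
    using clock = std::chrono::steady_clock;
    const int runs = 1000;                 // increase until the numbers stabilize
    std::vector<double> samples;
    samples.reserve(runs);

    for (int i = 0; i < runs; ++i) {
        auto t0 = clock::now();
        function_to_benchmark();
        auto t1 = clock::now();
        samples.push_back(std::chrono::duration<double>(t1 - t0).count());
    }

    std::sort(samples.begin(), samples.end());
    double total = 0.0;
    for (double s : samples) total += s;

    std::printf("mean   %.9f s\n", total / runs);
    std::printf("median %.9f s\n", samples[runs / 2]);  // less sensitive to interruptions
    return 0;
}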

Speed ratio of algorithm versus precompiled reference implementation differs across computers

We have a small C++ project with the following architecture.
These two were compiled into a DLL:
An algorithm
A tester for the algorithm which checks the correctness of the result and measures the execution speed.
Another implementation of the same algorithm was then written by someone else.
The main() function does this:
Invoke the tester on both implementations of the algorithm and measure their execution speed. This is done several times, so that averages can be taken later.
Compute the speed ratio between them (measured time / measured reference time). This is referred to as the score (a rough sketch of the loop follows below).
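In outline, that measurement loop might look roughly like this (a sketch with placeholder workloads standing in for the DLL's tester calls; the real project's interface will differ):

#include <chrono>
#include <cstdio>

// placeholder stand-ins for the tester running each implementation
static volatile double sink;
static void test_reference_implementation() { double s = 0; for (long i = 0; i < 2000000; ++i) s += i; sink = s; }
static void test_other_implementation()     { double s = 0; for (long i = 0; i < 1000000; ++i) s += i; sink = s; }

int main() {
    using clock = std::chrono::steady_clock;
    const int runs = 20;
    double reference_total = 0.0, other_total = 0.0;

    for (int i = 0; i < runs; ++i) {
        auto t0 = clock::now();
        test_reference_implementation();
        auto t1 = clock::now();
        test_other_implementation();
        auto t2 = clock::now();

        reference_total += std::chrono::duration<double>(t1 - t0).count();
        other_total     += std::chrono::duration<double>(t2 - t1).count();
    }

    // score = measured time / measured reference time, averaged over all runs
    std::printf("score: %.2f\n", other_total / reference_total);
    return 0;
}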
We found that running the very same code and DLL on different computers returned quite different speed ratios. On one computer an implementation scored 6.4, and the very same implementation scored 2.8 on another machine. How could that be?
There could be tons of factors, but here are a few:
CPU cache can be a big one. Different processors have different caches (and not just in terms of raw cache size, but also caching strategies). One might be "smarter" than the other, or perhaps one just happens to work better than another in this specific situation.
CPU pipelining. Instructions these days are interleaved in the CPU, even in a single thread of execution. The way the CPU pipeline works varies from CPU to CPU, and one CPU might be able to do two particular things at once while another CPU can't. If one of the implementations exploits this, it gets a speed boost (or if they both do, then they both get closer to the same speed).
CPU instruction execution times may vary, so one CPU executing the exact same instructions as another might complete each one faster. If one computer's CPU takes longer to execute a particular instruction (and one of the implementations happens to use that instruction) while another CPU has been improved to speed that instruction up, the time discrepancy will be larger.
Branch prediction models in the CPUs might be different, and one implementation might be more or less friendly to a particular CPU's branch prediction model.
Operating systems can affect this in many ways, from memory allocation strategies (maybe one OS has a memory allocation strategy that causes a bigger discrepancy in times, while another OS has a different allocation strategy that minimizes the discrepancy), to CPU time slice management (are the algorithms multithreaded, for example?).

How to measure read/cycle or instructions/cycle?

I want to thoroughly measure and tune my C/C++ code to perform better with caches on an x86_64 system. I know how to measure time with a counter (QueryPerformanceCounter on my Windows machine), but I'm wondering how one would measure instructions per cycle or reads/writes per cycle with respect to the working set.
How should I proceed to measure these values?
Modern processors (i.e., anything not severely constrained and less than some 20 years old) are superscalar: they execute more than one instruction at a time (given correct instruction ordering). The latest x86 processors translate the CISC instructions into internal RISC instructions, reorder them and execute the result, and even have several register banks, so instructions using "the same registers" can be done in parallel. There isn't any reasonable way to define "the time the instruction execution takes" today.
Current CPUs are also much faster than memory (a few hundred instructions is the typical cost of accessing memory), so they are all heavily dependent on the cache for performance. And then you have all kinds of funny effects of cores sharing (or not sharing) parts of the cache, ...
Tuning code for maximal performance starts with the software architecture, goes on to program organization and algorithm and data structure selection (here a modicum of cache/virtual-memory awareness is useful too), then careful programming, and only as the most extreme measure to squeeze out the last 2% of performance do you reach considerations like the ones you mention (and the other favorite, "rewrite in assembly"). The ordering is that one because the first levels give more performance for the same cost. Measure before digging in; programmers are notoriously unreliable at finding bottlenecks. And consider the cost of reorganizing code for performance, both in the work itself, in convincing yourself this more complex code is correct, and in maintenance. Given the relative costs of computers and people, extreme performance tuning rarely makes any sense (perhaps for heavily travelled code paths in popular operating systems or in common code paths generated by a compiler, but almost nowhere else).
If you are really interested in where your code is hitting cache and where it is hitting memory, and the processor is less than about 10-15 years old in its design, then there are performance counters in the processor. You need driver level software to access these registers, so you probably don't want to write your own tools for this. Fortunately, you don't have to.
There are tools like VTune from Intel, CodeAnalyst from AMD and oprofile for Linux (which works with both AMD and Intel processors).
There is a whole range of different registers that count the number of instructions actually completed, the number of cycles the processor spends waiting, and so on. You can also get counts of things like "number of memory reads", "number of cache misses", "number of TLB misses" and "number of FPU instructions".
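On Linux these counters are also exposed to ordinary programs through the kernel's perf_events interface, so you can read one directly around a region of code. A minimal sketch following the pattern documented in the perf_event_open(2) man page (error handling omitted; the counted loop is just a placeholder):

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main() {
    perf_event_attr pe;
    std::memset(&pe, 0, sizeof(pe));
    pe.type = PERF_TYPE_HARDWARE;
    pe.size = sizeof(pe);
    pe.config = PERF_COUNT_HW_INSTRUCTIONS;   // or PERF_COUNT_HW_CACHE_MISSES, etc.
    pe.disabled = 1;
    pe.exclude_kernel = 1;
    pe.exclude_hv = 1;

    int fd = syscall(SYS_perf_event_open, &pe, 0, -1, -1, 0);  // this process, any CPU

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    volatile long sink = 0;                   // placeholder for the code you care about
    for (long i = 0; i < 1000000; ++i) sink += i;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    long long count = 0;
    read(fd, &count, sizeof(count));
    std::printf("instructions retired: %lld\n", count);
    close(fd);
    return 0;
}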
The next, trickier part is of course to try to fix whatever these counters reveal, and as mentioned in another answer, programmers aren't always good at tweaking these sorts of things. It's certainly time-consuming, not to mention that what works well on processor model X will not necessarily run fast on model Y. (There were some tuning tricks for the early Pentium 4 that worked VERY badly on AMD processors; if, on the other hand, you tuned that code for AMD processors of that age, you got code that also ran well on the same-generation Intel processors!)
You might be interested in the rdtsc x86 instruction, which reads a relative number of cycles.
See http://www.fftw.org/cycle.h for an implementation to read the counter in many compilers.
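For reference, a rough sketch using the __rdtsc() compiler intrinsic (available in MSVC via <intrin.h> and in GCC/Clang via <x86intrin.h>); the measured loop is only a placeholder:

#include <x86intrin.h>   // GCC/Clang; on MSVC use <intrin.h> instead
#include <cstdio>

int main() {
    unsigned long long start = __rdtsc();

    volatile long sink = 0;                  // placeholder work to measure
    for (long i = 0; i < 1000000; ++i) sink += i;

    unsigned long long end = __rdtsc();
    std::printf("elapsed: %llu TSC ticks\n", end - start);
    return 0;
}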
However, I'd suggest simply measuring with QueryPerformanceCounter. It is rare that the actual number of cycles is important; to tune code you typically only need to be able to compare relative time measurements, and rdtsc has many pitfalls (though probably none applicable to the situation you described):
On multiprocessor systems, there is not a single coherent cycle counter value.
Modern processors often adjust the frequency, changing the rate of change in time with respect to the rate of change in cycles.

benchmark a piece of code independent of CPU performance?

My objective is: I want to test the performance of a piece of code (or a function), just like I test the correctness of that function in a unit test. Let's say that the output of this benchmarking process is a "function performance index" (FPI) which is "portable".
My problem is: we usually benchmark code by using a timer to count the elapsed time during the execution of that code, and that method depends on the hardware, the OS and other things.
My question is: is there a method to get a "function performance index" that is independent of the performance of the host (CPU/OS/etc.), or, if not "independent", at least "relative" to some fixed reference, so that the "function performance index" is still valid on any platform regardless of hardware performance?
For example, the FPI value could be measured in:
the number of arithmetic instructions needed to execute a single call,
a float value relative to a benchmark function, for example function B has a rating index of 1.345 (meaning it is 1.345 times slower than the benchmark function),
or some other value.
Note that the FPI value doesn't need to be scientifically correct, exact or accurate; I just need a value that gives a rough overview of a function's performance compared to other functions tested by the same method.
I think you are in search of the impossible here, because the performance of a modern computer is a complex blend of CPU, cache, memory controller, memory, etc.
So one (hypothetical) computer system might reward the use of enormous look-up tables to simplify an algorithm, so that there were very few cpu instructions processed. Whereas another system might have memory much slower relative to the CPU core, so an algorithm which did a lot of processing but touched very little memory would be favoured.
So a single 'figure of merit' for these two algorithms could not even convey which was the better one on all systems, let alone by how much it was better.
Probably what you really need is a tcov-like tool.
man tcov says:
Each basic block of code (or each line if the -a option to tcov is specified) is prefixed with the number of times it has been executed; lines that have not been executed are prefixed with "#####". A basic block is a contiguous section of code that has no branches: each statement in a basic block is executed the same number of times.
No, there is no such thing. Different hardware performs differently. You can have two different pieces of code X and Y such that hardware A runs X faster than Y but hardware B runs Y faster than X. There is no absolute scale of performance, it depends entirely on the hardware (not to mention other things like the operating system and other environmental considerations).
It sounds like what you want is a program that calculates the Big-O Notation of a piece of code. I don't know if it's possible to do that in an automated fashion (Halting problem, etc).
Like others have mentioned, this is not a trivial task and it may be impossible to get any sort of accurate result. Consider a few methods:
Benchmark Functions -- While this seems promising, I think you'll find that it won't work well once you try to compare different types of functions. For example, if your benchmark function is 100% CPU-bound (as in some complex math computation) then it will compare/scale well with other CPU-bound functions, but fail when compared with, say, I/O- or memory-bound functions. Carefully matching a benchmark function to a small set of similar functions may work, but it is tedious/time-consuming. (A rough sketch of this approach follows the list.)
Number of Instructions -- For a very simple processor it may be possible to count the cycles of each instruction and get a reasonable value for the total number of cycles a block of code will take, but today's processors are anything but "simple". With branch prediction and parallel pipelines you can't just add up instruction cycles and expect an accurate result.
Manual Counting -- This might be your best bet, and while it is not automatic it may give better results faster than the other methods. Just look at things like the O() order of the code, how much memory the function reads/writes, how many file bytes are input/output, etc. By having a few stats like this for each function/module you should be able to get a rough comparison of their complexity.
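A minimal sketch of the "Benchmark Functions" idea, rating a function against a fixed, CPU-bound reference workload (both workloads here are placeholders for real code):

#include <chrono>
#include <cstdio>

static volatile double sink;

template <typename F>
static double seconds_for(F f) {
    using clock = std::chrono::steady_clock;
    auto t0 = clock::now();
    f();
    auto t1 = clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

// fixed, CPU-bound reference: the "benchmark function"
static void reference_workload() {
    double s = 0.0;
    for (long i = 1; i <= 20000000; ++i) s += 1.0 / i;
    sink = s;
}

// placeholder for the function you want to rate
static void function_under_test() {
    double s = 0.0;
    for (long i = 1; i <= 30000000; ++i) s += 1.0 / i;
    sink = s;
}

int main() {
    double ref  = seconds_for(reference_workload);
    double test = seconds_for(function_under_test);
    // e.g. 1.345 would mean "1.345 times slower than the benchmark function"
    std::printf("rating index: %.3f\n", test / ref);
    return 0;
}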

Measuring running time of computational geometry algorithms

I am taking a course on computational geometry in the fall, where we will be implementing some algorithms in C or C++ and benchmarking them. Most of the students generate a few datasets and measure their programs with the time command, but I would like to be a bit more thorough.
I am thinking about writing a program to automatically generate different datasets, run my program with them and use R to test hypotheses and estimate parameters.
So... How do you measure program running time more accurately?
What might be relevant to measure?
What hypotheses might be interesting to test (variance, effects caused by caching, etc.)?
Should I test my code on more than one machine? How should these machines differ?
My overall goals are to learn how these algorithms perform in practice, which implementation techniques are better and how the hardware actually performs.
Profilers are great. Valgrind is pretty popular. Also, I'd suggest trying your code out on RISC machines if you can get access to some. Their performance characteristics are different from those of CISC machines in interesting ways.
You could use the Windows API timing functions (they are not that exact), or you can use the RDTSC instruction via inline assembly, which has sub-nanosecond resolution (don't forget that the instruction and the code around it create a small overhead of some hundreds of cycles, but this is not a big issue).
In order to get better accuracy with program metrics, you will have to run your program many times, such as 100 or 1000.
For more details on metrics, search the web for metrics and profiling.
Beware that programs may differ in performance (time) measurements due to things running in the background such as virus scanners, music players, and other programs with timers in them.
You could test your program on different machines. Processor clock rates, L1 and L2 cache sizes, RAM sizes and disk speeds are all factors (as well as the number of other programs/tasks running concurrently). Floating-point performance may also be a factor.
If you want, you can challenge your compiler by printing the assembly listings for various optimization settings. See which setting produces the fewest or most efficient assembly instructions.
Since you're processing data, look at data-driven design: http://www.gamearchitect.net/Articles/DataDrivenDesign.html
You can use the Windows High Performance Counter to get nanosecond-level accuracy. Technically, AFAIK, the counter can run at any rate, but you can query its counts per second, and as far as I know most CPUs provide a very high-frequency counter.
What you should do is just get a professional profiler; that's what they're for. More realistically, however:
If you're only comparing between algorithms, then as long as your machine doesn't happen to excel in one particular area (Pentium D, SSD, that sort of thing) it shouldn't matter too much to do it on just one machine. If you want to look at cache effects, try running the algorithm right after the machine starts up (make sure you get a copy of Windows 7; it should be free for CS students), then leave the machine doing something plenty cache-heavy, like image processing, for 24 hours or so to convince the OS to cache it. Then run the algorithm again and compare.
You didn't specify your platform. If you are on a POSIX system (e.g. Linux), have a look at clock_gettime. This lets you access different kinds of clocks, e.g. wall-clock time or CPU time. You can also find out the precision of the clocks.
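A small sketch of that (CLOCK_MONOTONIC for wall-clock-style timing, CLOCK_PROCESS_CPUTIME_ID for CPU time, and clock_getres for the clock's resolution); the timed loop is only a placeholder, and older glibc versions may require linking with -lrt:

#include <time.h>
#include <cstdio>

static double to_seconds(const timespec& t) {
    return t.tv_sec + t.tv_nsec * 1e-9;
}

int main() {
    timespec res, wall0, wall1, cpu0, cpu1;

    clock_getres(CLOCK_MONOTONIC, &res);                // precision of this clock
    std::printf("CLOCK_MONOTONIC resolution: %ld ns\n", (long)res.tv_nsec);

    clock_gettime(CLOCK_MONOTONIC, &wall0);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu0);

    volatile double sink = 0.0;                         // placeholder workload
    for (long i = 1; i <= 10000000; ++i) sink += 1.0 / i;

    clock_gettime(CLOCK_MONOTONIC, &wall1);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu1);

    std::printf("wall time: %.6f s\n", to_seconds(wall1) - to_seconds(wall0));
    std::printf("CPU time : %.6f s\n", to_seconds(cpu1) - to_seconds(cpu0));
    return 0;
}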
Since you are willing to do good statistics on your numbers, you should repeat your experiments often enough that the statistical tests give you enough confidence.
If your measurements are not too fine-grained and your variance is low, 10 or so probes are often quite good. But if you go down to a small scale, a short function or so, you might need many more.
You also have to ensure reproducible experimental conditions: no other load on the machine, enough memory available, etc.