Calculating FLOPS (Floating-point Operations Per Second) - C++

How can I calculate FLOPS of my application?
If I have the total number of executed instructions, I can divide it by the execution time. But how do I count the number of executed instructions?
My question is general, and an answer for any language is appreciated, but I am looking for a solution for my application, which is developed in C/C++ and CUDA.
I do not know whether the tags are proper, please correct me if I am wrong.

What I do if the number of floating point operations is not easily modeled is to produce two executables: One that is the production version and gives me the execution time, and an instrumented one that counts all floating point operations while performing them (surely that will be slow, but that doesn't matter for our purpose). Then I can compute the FLOP/s value by dividing the number of floating point ops from the second executable by the time from the first one.
This could probably even be automated, but I haven't had a need for this so far.
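A minimal sketch of that two-build idea, assuming a conditionally compiled counting macro (the COUNT_FLOPS define, the FLOP macro and the dot-product example below are all made up for illustration; adapt them to your own code base):

#include <cstdio>

#ifdef COUNT_FLOPS
static unsigned long long g_flops = 0;   // global FLOP counter, instrumented build only
#define FLOP(n) (g_flops += (n))         // charge n floating point operations
#else
#define FLOP(n) ((void)0)                // production build: no counting, no overhead
#endif

double dot(const double* a, const double* b, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; ++i) {
        s += a[i] * b[i];
        FLOP(2);                         // one multiply + one add per iteration
    }
    return s;
}

int main()
{
    double a[1000], b[1000];
    for (int i = 0; i < 1000; ++i) { a[i] = i; b[i] = 1.0 / (i + 1); }
    std::printf("result = %f\n", dot(a, b, 1000));
#ifdef COUNT_FLOPS
    std::printf("floating point ops = %llu\n", g_flops);
#endif
    return 0;
}

Build once without and once with -DCOUNT_FLOPS, take the time from the first binary and the operation count from the second, and divide.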

You should mathematically model what's done with your data. Isolate one loop iteration, then count all the simple floating-point additions, multiplications, divisions, etc. For example,
y = x * 2 * (y + z*w) is 4 floating-point operations (three multiplications and one addition). Multiply the resulting number by the number of iterations. The result is the number of floating-point operations you're looking for.
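A small sketch of that counting rule applied to a timed loop (the array sizes and fill values are arbitrary; only the 4-FLOPs-per-iteration bookkeeping matters):

#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    const int N = 1 << 20;
    std::vector<double> x(N, 1.5), y(N, 2.0), z(N, 0.5), w(N, 3.0);

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        y[i] = x[i] * 2.0 * (y[i] + z[i] * w[i]);   // 3 multiplications + 1 addition = 4 FLOPs
    auto t1 = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    double flops   = 4.0 * N;                        // FLOPs per iteration times iteration count
    std::printf("y[0] = %f, %.2f MFLOP/s\n", y[0], flops / seconds / 1e6);
    return 0;
}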

Related

How is FLOPS/IOPS calculated and what is its use?

I have been following some tutorials for OpenCL, and a lot of the time people speak in terms of FLOPS. Wikipedia explains the formula but does not say what it actually means. For example, 1 light year = 9.4605284 × 10^15 meters, but what that means is the distance travelled by light in a year. Similarly, what does a FLOP mean?
An answer to a similar question gives 100 IOPS for the code
for(int i = 0; i < 100; ++i)
Ignoring the initialisation, I see 100 increment operations, so that's 100 IOPS. But I also see 100 comparison operations, so why isn't it 200 IOPS? Which types of operations are included in FLOPS/IOPS calculations?
Secondly, what would you do with the FLOPS value of your algorithm once you have calculated it?
I am asking because the value is specific to a CPU's clock speed and number of cores.
Any guidance in this area would be very helpful.
"FLOPS" stands for "Floating Point Operations Per Second" and it's exactly that. It's used as a measure of the computing speed of large, number based (usually scientific) operations. Measuring it is a matter of knowing two things:
1.) The precise execution time of your algorithm
2.) The precise number of floating point operations involved in your algorithm
You can get pretty good approximations of the first one from profiling tools, and the second one from...well you might be on your own there. You can look through the source for floating point operations like "1.0 + 2.0" or look at the generated assembly code, but those can both be misleading. There's probably a debugger out there that will give you FLOPS directly.
It's important to understand that there is a theoretical maximum FLOPS value for the system you're running on, and then there is the actual achieved FLOPS of your algorithm. The ratio of these two can give you a sense of the efficiency of your algorithm. Hope this helps.
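A worked example with made-up numbers: if your algorithm performs 2 × 10^9 floating point operations and the profiler reports 0.5 s of execution time, you achieved 4 GFLOPS; on a machine with a theoretical peak of 100 GFLOPS, that is 4% efficiency.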

How to measure FLOPS

How do I measure FLOPS or IOPS? If I measure the time for an ordinary floating point addition/multiplication, is that equivalent to FLOPS?
FLOPS is floating point operations per second. To measure FLOPS you first need code that performs such operations. If you have such code, what you can measure is its execution time. You also need to sum up or estimate (not measure!) all floating point operations and divide that by the measured wall time. You should count all ordinary operations: additions, subtractions, multiplications, divisions (yes, even though they are slower and better avoided, they are still FLOPs). Be careful how you count! What you see in your source code is most likely not what the compiler produces after all the optimisations. To be sure, you will likely have to look at the assembly.
FLOPS is not the same as instructions per second. So even though some architectures have a single MAD (multiply-and-add) instruction, it still counts as two FLOPs. The same goes for SSE instructions: you count each as one instruction, though it performs more than one FLOP.
FLOPS are not entirely meaningless, but you need to be careful when comparing your FLOPS to somebody else's FLOPS, especially the hardware vendors'. E.g. NVIDIA gives the peak FLOPS performance for their cards assuming MAD operations. So unless your code consists of those, you will never get this performance. Either rethink the algorithm, or modify the peak hardware FLOPS by a correction factor, which you need to figure out for your own algorithm! E.g., if your code only performs multiplications, you would divide the peak by 2. Counting right might take your code from looking suboptimal to looking quite efficient without changing a single line of code.
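A hypothetical worked example of that correction: if the quoted peak is 1000 GFLOPS and assumes every instruction is a MAD (2 FLOPs per instruction), but your kernel only performs plain multiplications (1 FLOP per instruction), the peak you should compare against is 1000 / 2 = 500 GFLOPS, so a measured 400 GFLOPS is 80% of the realistic peak rather than 40% of the quoted one.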
You can use the CPU performance counters to get the CPU itself to count the number of floating point operations it uses for your particular program. Then it is a simple matter of dividing this by the run time. On Linux the perf tools allow this to be done very easily; I have a write-up on the details of this on my blog here:
http://www.bnikolic.co.uk/blog/hpc-howto-measure-flops.html
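As a rough sketch (the exact counter names are CPU- and kernel-version specific, so check the output of perf list on your own machine first; the fp_arith_inst_retired events below exist on recent Intel CPUs but may be absent or named differently elsewhere, and ./myapp is just a placeholder for your binary):

perf stat -e fp_arith_inst_retired.scalar_single,fp_arith_inst_retired.scalar_double ./myapp

Dividing the reported counts by the elapsed time that perf also prints gives the achieved FLOP/s.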
FLOPs are not well defined: a multiply FLOP is different from an add FLOP. You have to either come up with your own definition or take the definition from a well-known benchmark.
Usually you use some well-known benchmark. Things like MIPS and megaFLOPS don't mean much to start with, and if you don't restrict them to specific benchmarks, even that tiny bit of meaning is lost.
Typically, for example, integer speed will be quoted in "Dhrystone MIPS" and floating point in "Linpack megaFLOPS". Here, "Dhrystone" and "Linpack" are the names of the benchmarks used to do the measurements.
IOPS are I/O operations. They're much the same, though in this case, there's not quite as much agreement about which benchmark(s) to use (though SPC-1 seems fairly popular).
This is a highly architecture-specific question. For a naive/basic start, I would recommend finding out how many operations one multiplication takes on your specific hardware, then doing a large matrix multiplication and seeing how long it takes. From that you can easily estimate the FLOPS of your particular hardware.
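A sketch of that matrix-multiplication approach (the naive triple loop below performs about 2*N^3 floating point operations - one multiply and one add per inner iteration - so GFLOP/s is roughly 2*N^3 divided by the measured time; the matrix size and fill values are arbitrary):

#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    const int N = 512;
    std::vector<float> A(N * N, 1.0f), B(N * N, 2.0f), C(N * N, 0.0f);

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            float sum = 0.0f;
            for (int k = 0; k < N; ++k)
                sum += A[i * N + k] * B[k * N + j];   // 1 multiply + 1 add per iteration
            C[i * N + j] = sum;
        }
    auto t1 = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    double flops   = 2.0 * N * N * N;                 // model FLOP count for the whole product
    std::printf("C[0] = %f, %.2f GFLOP/s (naive, single-threaded)\n",
                C[0], flops / seconds / 1e9);
    return 0;
}

Bear in mind that a naive loop like this lands nowhere near the hardware peak; it only illustrates the count-and-divide bookkeeping.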
The industry standard for measuring FLOPS is the well-known Linpack benchmark, or HPL (High-Performance Linpack); try looking at the source or running those yourself.
I would also refer to this answer as an excellent reference.

How to calculate Gflops of a kernel

I want a measure of how much of the peak performance my kernel achieves.
Say I have an NVIDIA Tesla C1060, which has a peak of 622.08 GFLOPS (~= 240 cores * 1300 MHz * 2).
Now in my kernel I counted 16000 FLOPs for each thread (4000 x (2 subtractions, 1 multiplication and 1 sqrt)). So with 1,000,000 threads I would come up with 16 GFLOP, and as the kernel takes 0.1 seconds I would achieve 160 GFLOPS, which would be a quarter of the peak performance. Now my questions:
Is this approach correct?
What about comparisons (if(a>b) then....)? Do I have to consider them as well?
Can I use the CUDA profiler for easier and more accurate results? I tried the instruction counter, but I could not figure out what the figure means.
sister question: How to calculate the achieved bandwidth of a CUDA kernel
First some general remarks:
In general, what you are doing is mostly an exercise in futility and is the reverse of how most people would probably go about performance analysis.
The first point to make is that the peak value you are quoting is strictly for floating point multiply-add instructions (FMAD), which count as two FLOPs, and can be retired at a maximum rate of one per cycle. Other floating point operations which retire at a maximum rate of one per cycle would formally only be classified as a single FLOP, while others might require many cycles to be retired. So if you decided to quote kernel performance against that peak, you are really comparing your code's performance against a stream of pure FMAD instructions, and nothing more than that.
The second point is that when researchers quote FLOP/s values for a piece of code, they are usually using a model FLOP count for the operation, not trying to count instructions. Matrix multiplication and the Linpack LU factorization benchmarks are classic examples of this approach to performance benchmarking. The lower bound of the operation count of those calculations is exactly known, so the calculated throughput is simply that lower bound divided by the time. The actual instruction count is irrelevant. Programmers often use all sorts of techniques, including redundant calculations, speculative or predictive calculations, and a host of other ideas to make code run faster. The actual FLOP count of such code is irrelevant; the reference is always the model FLOP count.
Finally, when looking at quantifying performance, there are usually only two points of comparison of any real interest:
Does version A of the code run faster than version B on the same hardware?
Does hardware A perform better than hardware B doing the task of interest?
In the first case you really only need to measure execution time. In the second, a suitable measure usually isn't FLOP/s, it is useful operations per unit time (records per second in sorting, cells per second in a fluid mechanical simulation, etc). Sometimes, as mentioned above, the useful operations can be the model FLOP count of an operation of known theoretical complexity. But the actual floating point instruction count rarely, if ever, enters into the analysis.
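As a sketch of the model-FLOP-count-divided-by-measured-time approach described above, applied to a CUDA kernel timed with CUDA events (the SAXPY kernel below is a stand-in that does 2 FLOPs per element; substitute your own kernel and your own model count, and note that SAXPY is memory bound, so the resulting figure will be far from peak):

#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];     // 1 multiply + 1 add = 2 FLOPs per element
}

int main()
{
    const int n = 1 << 24;
    float *x, *y;
    cudaMalloc((void**)&x, n * sizeof(float));
    cudaMalloc((void**)&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);          // elapsed kernel time in milliseconds

    double modelFlops = 2.0 * n;                     // model FLOP count for SAXPY
    std::printf("%.2f GFLOP/s (model count / measured time)\n",
                modelFlops / (ms * 1e-3) / 1e9);

    cudaFree(x);
    cudaFree(y);
    return 0;
}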
If your interest is really about optimization and understanding the performance of your code, then maybe this presentation by Paulius Micikevicius from NVIDIA might be of interest.
Addressing the bullet point questions:
Is this approach correct?
Strictly speaking, no. If you are counting floating point operations, you would need to know the exact FLOP count from the code the GPU is running. The sqrt operation can consume a lot more than a single FLOP, depending on its implementation and the characteristics of the number it is operating on, for example. The compiler can also perform a lot of optimizations which might change the actual operation/instruction count. The only way to get a truly accurate count would be to disassemble the compiled code and count the individual floating point operations, perhaps even requiring assumptions about the characteristics of the values the code will compute.
What about comparisons (if(a>b) then....)? Do I have to consider them as well?
They are not floating point multiply-add operations, so no.
Can I use the CUDA profiler for easier and more accurate results? I tried the instruction counter, but I could not figure out what the figure means.
Not really. The profiler can't differentiate between a floating point instruction and any other type of instruction, so (as of 2011) getting a FLOP count for a piece of code via the profiler is not possible. [EDIT: see Greg's excellent answer below for a discussion of the FLOP counting facilities available in versions of the profiling tools released since this answer was written]
Nsight VSE (>3.2) and the Visual Profiler (>=5.5) support Achieved FLOPs calculation. In order to collect the metric, the profilers run the kernel twice (using kernel replay). In the first replay the number of floating point instructions executed is collected (with an understanding of predication and the active mask). In the second replay the duration is collected.
nvprof and the Visual Profiler have a hardcoded definition: FMA counts as 2 operations, and all other operations count as 1. The flops_sp_* counters are thread instruction execution counts, whereas flops_sp is the weighted sum, so some weighting can be applied using the individual metrics. However, flops_sp_special covers a number of different instructions.
The Nsight VSE experiment configuration allows the user to define the operations per instruction type.
Nsight Visual Studio Edition
Configuring to collect Achieved FLOPS
Execute the menu command Nsight > Start Performance Analysis... to open the Activity Editor
Set Activity Type to Profile CUDA Application
In Experiment Settings set Experiments to Run to Custom
In the Experiment List add Achieved FLOPS
In the middle pane select Achieved FLOPS
In the right pane you can customize the FLOPs per instruction executed. The default weighting is for FMA and RSQ to count as 2. In some cases I have seen RSQ as high as 5.
Run the Analysis Session.
Viewing Achieved FLOPS
In the nvreport open the CUDA Launches report page.
In the CUDA Launches page select a kernel.
In the report correlation pane (bottom left) select Achieved FLOPS
nvprof
Metrics Available (on a K20)
nvprof --query-metrics | grep flop
flops_sp: Number of single-precision floating-point operations executed by non-predicated threads (add, multiply, multiply-accumulate and special)
flops_sp_add: Number of single-precision floating-point add operations executed by non-predicated threads
flops_sp_mul: Number of single-precision floating-point multiply operations executed by non-predicated threads
flops_sp_fma: Number of single-precision floating-point multiply-accumulate operations executed by non-predicated threads
flops_dp: Number of double-precision floating-point operations executed by non-predicated threads (add, multiply, multiply-accumulate and special)
flops_dp_add: Number of double-precision floating-point add operations executed by non-predicated threads
flops_dp_mul: Number of double-precision floating-point multiply operations executed by non-predicated threads
flops_dp_fma: Number of double-precision floating-point multiply-accumulate operations executed by non-predicated threads
flops_sp_special: Number of single-precision floating-point special operations executed by non-predicated threads
flop_sp_efficiency: Ratio of achieved to peak single-precision floating-point operations
flop_dp_efficiency: Ratio of achieved to peak double-precision floating-point operations
Collection and Results
nvprof --devices 0 --metrics flops_sp --metrics flops_sp_add --metrics flops_sp_mul --metrics flops_sp_fma matrixMul.exe
[Matrix Multiply Using CUDA] - Starting...
==2452== NVPROF is profiling process 2452, command: matrixMul.exe
GPU Device 0: "Tesla K20c" with compute capability 3.5
MatrixA(320,320), MatrixB(640,320)
Computing result using CUDA Kernel...
done
Performance= 6.18 GFlop/s, Time= 21.196 msec, Size= 131072000 Ops, WorkgroupSize= 1024 threads/block
Checking computed result for correctness: OK
Note: For peak performance, please refer to the matrixMulCUBLAS example.
==2452== Profiling application: matrixMul.exe
==2452== Profiling result:
==2452== Metric result:
Invocations Metric Name Metric Description Min Max Avg
Device "Tesla K20c (0)"
Kernel: void matrixMulCUDA<int=32>(float*, float*, float*, int, int)
301 flops_sp FLOPS(Single) 131072000 131072000 131072000
301 flops_sp_add FLOPS(Single Add) 0 0 0
301 flops_sp_mul FLOPS(Single Mul) 0 0 0
301 flops_sp_fma FLOPS(Single FMA) 65536000 65536000 65536000
NOTE: flops_sp = flops_sp_add + flops_sp_mul + flops_sp_special + (2 * flops_sp_fma) (approximately)
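As a sanity check on the run above: 0 + 0 + 2 * 65,536,000 = 131,072,000, which matches the reported flops_sp and implies flops_sp_special was zero for this kernel.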
Visual Profiler
The Visual Profiler supports the metrics shown in the nvprof section above.

Typical time of execution for elementary functions

It is well known that the processor instruction for multiplication takes several times longer than addition, and division is even worse (UPD: this is not true any more, see below). What about more complex operations like the exponential? How expensive are they?
Motivation. I am interested because it would help in algorithm design to estimate the performance-critical parts of algorithms at an early stage. Suppose I want to apply a set of filters to an image. One of them operates on the 3×3 neighborhood of each pixel, sums the values and takes atan. Another one sums more neighbouring pixels, but does not use complicated functions. Which one would take longer to execute?
So, ideally I want approximate relative execution times of elementary operations, like "multiplication typically takes 5 times longer than addition" or "the exponential is about 100 multiplications". Of course, this is a matter of orders of magnitude, not exact values. I understand that it depends on the hardware and on the arguments, so let's say we measure the average time (in some sense) for floating-point operations on modern x86/x64. For operations that are not implemented in hardware, I am interested in the typical running time for the C++ standard library.
Have you seen any sources where such a thing was analyzed? Does this question make sense at all? Or can no rules of thumb like this be applied in practice?
First off, let's be clear. This:
It is well-known that processor instruction for multiplication takes
several times more time than addition
is no longer true in general. It hasn't been true for many, many years, and needs to stop being repeated. On most common architectures, integer multiplies are a couple cycles and integer adds are single-cycle; floating-point adds and multiplies tend to have nearly equal timing characteristics (typically around 4-6 cycles latency, with single-cycle throughput).
Now, to your actual question: it varies with both the architecture and the implementation. On a recent architecture, with a well written math library, simple elementary functions like exp and log usually require a few tens of cycles (20-50 cycles is a reasonable back-of-the-envelope figure). With a lower-quality library, you will sometimes see these operations require a few hundred cycles.
For more complicated functions, like pow, typical timings range from high tens into the hundreds of cycles.
You shouldn't be concerned about this. If I tell you that typical C library implementations of transcendental functions take around 10 times as long as a single floating point addition/multiplication (or 50 floating point additions/multiplications), and around 5 times as long as a floating point division, this wouldn't be useful to you.
Indeed, the way your processor schedules memory accesses will interfere badly with any premature optimization you'd do.
If after profiling you find that a particular implementation using transcendental functions is too slow, you can contemplate setting up a polynomial interpolation scheme. This will include a table and therefore will incur extra cache issues, so make sure to measure and not guess.
This will likely involve Chebyshev approximation. Read up on it; it is a particularly useful technique in this kind of domain.
I have been told that compilers are quite bad at optimizing floating point code. You may want to write custom assembly code.
Also, the Intel Integrated Performance Primitives library (if you are on an Intel CPU) is a good thing to have if you are ready to trade off some accuracy for speed.
You could always start a second thread and time the operations. Most elementary operations don't differ that much in execution time. The big difference is how many times they are executed. The big-O complexity is generally what you should be thinking about.
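If you do want a rough, machine-specific number, a simple timing sketch like the one below compares an elementary function against a plain multiply (caveats apply: micro-benchmarks are easily distorted by the compiler, so treat the ratio as an order-of-magnitude hint only):

#include <chrono>
#include <cmath>
#include <cstdio>

template <class F>
double time_loop(F f, int n)
{
    volatile double sink = 0.0;                 // keeps the compiler from removing the calls
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i)
        sink = f(0.5 + i * 1e-9);
    auto t1 = std::chrono::steady_clock::now();
    (void)sink;
    return std::chrono::duration<double>(t1 - t0).count();
}

int main()
{
    const int n = 10000000;
    double t_mul = time_loop([](double v) { return v * 1.000000001; }, n);
    double t_exp = time_loop([](double v) { return std::exp(v); }, n);
    std::printf("exp / multiply time ratio: %.1f\n", t_exp / t_mul);
    return 0;
}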

benchmarking trig lookup tables performance gains vs cpp implementation

We are developing a real-time system that will be performing sin/cos calculations during a time-critical period of operation. We're considering using a lookup table to help with performance, and I'm trying to benchmark the benefit/cost of implementing a table. Unfortunately we don't yet know what degree of accuracy we will need, but probably around 5-6 decimal places.
I figure that a thorough comparison of C++ trig functions to lookup approaches has already been done previously. I was hoping that someone could provide me with a link to a site documenting any such benchmarking. If such results don't exist, I would appreciate any suggestions for how I can determine how much memory is required for a lookup table assuming a given minimum accuracy, and how I can determine the potential speed benefits.
Thanks!
I can't answer all your questions, but instead of trying to determine theoretical speed benefits you would almost certainly be better off profiling it in your actual application. Then you get an accurate picture of what sort of improvement you stand to gain in your specific problem domain, which is the most useful information for your needs.
What accuracy is your degree input (let's use degrees rather than radians to keep the discussion "simpler")? Tenths of a degree? Hundredths of a degree? If your angle precision is not great, then your trig result cannot be any better.
I've seen this implemented as an array indexed by hundredths of a degree (keeping the angle as an integer with two implied decimal places also helps with the calculation - no need to use high precision float/double radian angles).
Storing SIN values from 0.00 to 90.00 degrees would take 9001 32-bit float values:
SIN[0] = 0.0
...
SIN[4500] = 0.7071068
...
SIN[9000] = 1.0
If you have SIN, the trig property of COS(a) = SIN(90-a)
just means you do
SIN[9000-a]
to get COS(a)
If you need more precision but don't have the memory for more table space, you could do linear interpolation between the two entries in the array, e.g. SIN of 45.00123 would be
SIN[4500] + 0.123 * (SIN[4501] - SIN[4500])
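A sketch of that table in C++ (only the 0.00-90.00 degree range is handled here, matching the 9001-entry layout above; extending to the full circle via the symmetry identities is left out):

#include <cmath>
#include <cstdio>

static float SIN_TABLE[9001];                       // index = degrees * 100

void init_table()
{
    const double DEG2RAD = 3.14159265358979323846 / 180.0;
    for (int i = 0; i <= 9000; ++i)
        SIN_TABLE[i] = (float)std::sin((i / 100.0) * DEG2RAD);
}

float table_sin(double degrees)                     // degrees must be in [0, 90]
{
    double idx = degrees * 100.0;
    int    lo  = (int)idx;                          // lower table entry
    double f   = idx - lo;                          // fractional part for interpolation
    if (lo >= 9000) return SIN_TABLE[9000];
    return SIN_TABLE[lo] + (float)f * (SIN_TABLE[lo + 1] - SIN_TABLE[lo]);
}

int main()
{
    init_table();
    std::printf("table_sin(45.00123) = %.7f\n", table_sin(45.00123));
    std::printf("std::sin (45.00123) = %.7f\n",
                std::sin(45.00123 * 3.14159265358979323846 / 180.0));
    return 0;
}

Whether this actually beats the library sin on your hardware is exactly what the profiling suggested in the other answers will tell you.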
The only way to know the performance characteristics of the two approaches is to try them.
Yes, there are probably benchmarks of this made by others, but they didn't run in the context of your code, and they weren't running on your hardware, so they're not very applicable to your situation.
One thing you can do, however, is to look up the instruction latencies in the manuals for your CPU. (Intel and AMD have this information available in PDF form on their websites, and most other CPU manufacturers have similar documents)
Then you can at least find out how fast the actual trig instructions are, giving you a baseline that the lookup table will have to beat to be worthwhile.
But that only gives you a rough estimate of one side of the equation. You might be able to make a similar rough estimate of the cost of a lookup table as well, if you know the latencies of the CPU's caches, and you have a rough idea of the latency of memory accesses.
But the only way to get accurate information is to try it. Implement both, and see what happens in your application. Only then will you know which is better in your case.