Performance difference between float and double in x86 and x86_64 - c++

A while ago I heard that some compilers use SSE2 extensions for floating-point operations on the x86_64 architecture, so I used this simple code to determine the performance difference between them.
I disabled Intel SpeedStep technology via the BIOS, and system load was approximately equal across my tests. I am using GCC 4.8 on 64-bit openSUSE.
I am writing a program with a lot of FPU operations and I would like to know whether this test is valid.
Any information about the performance difference between float and double on each architecture is appreciated.
Code:
#include <iostream>
#include <sys/time.h>
#include <vector>
#include <cstdlib>
using namespace std;
int main()
{
    timeval t1, t2;
    double elapsedTime;
    double TotalTime = 0;
    for (int j = 0; j < 100; j++)
    {
        // start timer
        gettimeofday(&t1, NULL);
        vector<float> RealVec;
        float temp;
        for (int i = 0; i < 1000000; i++)
        {
            temp = static_cast<float>(rand()) / static_cast<float>(RAND_MAX);
            RealVec.push_back(temp);
        }
        for (int i = 0; i < 1000000; i++)
        {
            RealVec[i] = (RealVec[i]*2 - 435.345345) / 15.75;
        }
        // stop timer
        gettimeofday(&t2, NULL);
        elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0;    // sec to ms
        elapsedTime += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms
        TotalTime = TotalTime + elapsedTime;
    }
    cout << TotalTime/100 << " ms.\n";
    return 0;
}
And the results:
32 Bit Double
157.781 ms.
151.994 ms.
152.244 ms.
32 Bit Float
149.896 ms.
148.489 ms.
161.086 ms.
64 Bit Double
110.125 ms.
111.612 ms.
113.818 ms.
64 Bit Float
110.393 ms.
106.778 ms.
107.833 ms.

You're really not measuring much; perhaps just the degree of compiler optimization. For the measurements to be valid, you really have to do something with the results, or the compiler can optimize out all, or the major part, of your tests. What I would do is:
1) initialize the vector,
2) get the start time (probably using clock, since that only takes CPU time into account),
3) execute the second loop 100 times (or more... enough to last at least a couple of seconds),
4) get the end time, and finally
5) output the sum of the elements in the vector.
As for the differences you may find: independently of the floating-point units, the 64-bit machine has more general-purpose registers for the compiler to play with. This could have an enormous impact. Unless you look at the generated assembler, you just can't know. A minimal sketch of this structure follows.
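A minimal sketch of that structure, assuming clock() and an accumulated sum are acceptable for your purposes (the Real alias and the constants are mine, not from the original code):
// Sketch of the benchmark structure described above (not the original code).
// "Real" is a stand-in alias: switch it between float and double to compare.
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>
typedef float Real;
int main()
{
    // 1) initialize the vector outside the timed region
    std::vector<Real> v(1000000);
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] = static_cast<Real>(std::rand()) / static_cast<Real>(RAND_MAX);
    // 2) start the timer (clock() counts CPU time only)
    std::clock_t start = std::clock();
    // 3) run the arithmetic loop many times so the test lasts a few seconds
    for (int pass = 0; pass < 1000; ++pass)
        for (std::size_t i = 0; i < v.size(); ++i)
            v[i] = (v[i] * 2 - Real(435.345345)) / Real(15.75);
    // 4) stop the timer
    std::clock_t stop = std::clock();
    // 5) use the results so the compiler cannot discard the work
    Real sum = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        sum += v[i];
    std::cout << "sum = " << sum << ", time = "
              << double(stop - start) / CLOCKS_PER_SEC << " s\n";
    return 0;
}
Compile with optimizations enabled (e.g. g++ -O2) and rebuild with Real switched between float and double to compare.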

Not really valid. You're basically testing the performance of the random number generator.
Also, you're not doing anything to enforce SSE2 SIMD operation (on 32-bit x86, for instance, GCC only uses SSE for scalar floating point with flags such as -msse2 -mfpmath=sse, while on x86_64 it is the default), so you can't really claim this compares anything SSE-related.

Valid in what sense?
Measure actual usage, with your actual code.
Some artificial test suite probably won't help you assess the performance characteristics.
You can use a typedef, then change the actual underlying type with a flick of a switch; a minimal sketch follows.
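For instance, a sketch of that typedef approach (the alias name real_t is mine, not from the answer):
// Pick the floating-point type in one place and flip it to compare.
typedef float real_t;      // change to double and rebuild to compare
real_t scale(real_t x)
{
    return (x * 2 - real_t(435.345345)) / real_t(15.75);
}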

Related

Code performance strict measurement

I'm creating a performance framework tool for measuring individual message processing time in CentOS 7. I reserved one CPU for this task with the isolcpus kernel option, and I run the tool using taskset.
OK, now the problem. I'm trying to measure the maximum processing time among several messages. The processing time is <= 1000 ns, but when I run many iterations I get very high results (> 10000 ns).
Here I created some simple code which does nothing interesting but shows the problem. Depending on the number of iterations I can get results like:
max: 84 min: 23 -> for 1000 iterations
max: 68540 min: 11 -> for 100000000 iterations
I'm trying to understand where this difference comes from. I tried running this with real-time scheduling at the highest priority. Is there some way to prevent that?
#include <algorithm>   // std::max, std::min
#include <cstdint>     // int64_t
#include <iostream>
#include <limits>
#include <time.h>

const unsigned long long SEC = 1000L*1000L*1000L;

inline int64_t time_difference( const timespec &start,
                                const timespec &stop ) {
    return ( (stop.tv_sec * SEC - start.tv_sec * SEC) +
             (stop.tv_nsec - start.tv_nsec));
}

int main()
{
    timespec start, stop;
    int64_t max = 0, min = std::numeric_limits<int64_t>::max();
    for(int i = 0; i < 100000000; ++i){
        clock_gettime(CLOCK_REALTIME, &start);
        clock_gettime(CLOCK_REALTIME, &stop);
        int64_t time = time_difference(start, stop);
        max = std::max(max, time);
        min = std::min(min, time);
    }
    std::cout << "max: " << max << " min: " << min << std::endl;
}
You can't really reduce jitter to zero even with isolcpus, since you still have at least the following:
1) Interrupts delivered to your CPU (you may be able to reduce these by messing with IRQ affinity - but probably not to zero).
2) Clock timer interrupts are still scheduled for your process and may do a variable amount of work on the kernel side.
3) The CPU itself may pause briefly for P-state or C-state transitions, or other reasons (e.g., to let voltage levels settle after turning on AVX circuitry, etc).
Let us check the documentation...
Isolation will be effected for userspace processes - kernel threads may still get scheduled on the isolcpus isolated CPUs.
So it seems that there is no guarantee of perfect isolation, at least not from the kernel.
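Since a handful of such events can dominate the max over 10^8 iterations, it can help to record a coarse latency histogram rather than only the min and max, to see how rare the outliers actually are. A rough sketch (my own code, reusing clock_gettime(CLOCK_REALTIME, ...) as in the question):
// Rough sketch (not from the original post): bucket per-iteration latencies
// so rare outliers show up as a tail instead of a single max value.
#include <cstddef>
#include <iostream>
#include <time.h>
#include <vector>
int main()
{
    const long long NSEC_PER_SEC = 1000000000LL;
    std::vector<long long> buckets(64, 0);  // bucket b counts samples in [b*100ns, (b+1)*100ns)
    for (int i = 0; i < 10000000; ++i) {
        timespec start, stop;
        clock_gettime(CLOCK_REALTIME, &start);
        clock_gettime(CLOCK_REALTIME, &stop);
        long long ns = (stop.tv_sec - start.tv_sec) * NSEC_PER_SEC
                     + (stop.tv_nsec - start.tv_nsec);
        if (ns < 0) ns = 0;                              // guard against clock steps
        std::size_t b = static_cast<std::size_t>(ns / 100);
        if (b >= buckets.size()) b = buckets.size() - 1; // clamp outliers into last bucket
        ++buckets[b];
    }
    for (std::size_t b = 0; b < buckets.size(); ++b)
        if (buckets[b] != 0)
            std::cout << b * 100 << "-" << (b + 1) * 100 << " ns: "
                      << buckets[b] << "\n";
    return 0;
}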

Testing parallel_for_ performance in OpenCV

I tested parallel_for_ in OpenCV by comparing it with the normal operation for just simple array summation and multiplication.
I have an array of 100 integers, which I split into chunks of 10 and run using parallel_for_.
Then I also have the normal 0-to-99 loop for summation and multiplication.
Then I measured the elapsed time, and the normal operation is faster than the parallel_for_ operation.
My CPU is an Intel(R) Core(TM) i7-2600 quad-core CPU.
The parallel_for_ operation took 0.002 sec (2 clock ticks) for summation and 0.003 sec (3 clock ticks) for multiplication.
But the normal operation took 0.0000 sec (less than one clock tick) for both summation and multiplication. What am I missing? My code is as follows.
TEST Class
#include <opencv2\core\internal.hpp>
#include <opencv2\core\core.hpp>
#include <tbb\tbb.h>

using namespace tbb;
using namespace cv;

template <class type>
class Parallel_clipBufferValues : public cv::ParallelLoopBody
{
private:
    type *buffertoClip;
    type maxSegment;
    char typeOperation; // m = mul, s = summation
    static double total;
public:
    Parallel_clipBufferValues() { ParallelLoopBody::ParallelLoopBody(); };
    Parallel_clipBufferValues(type *buffertoprocess, const type max, const char op)
        : buffertoClip(buffertoprocess), maxSegment(max), typeOperation(op)
    {
        if(typeOperation == 's')
            total = 0;
        else if(typeOperation == 'm')
            total = 1;
    }
    ~Parallel_clipBufferValues() { ParallelLoopBody::~ParallelLoopBody(); };
    virtual void operator()(const cv::Range &r) const
    {
        double tot = 0;
        type *inputOutputBufferPTR = buffertoClip + (r.start * maxSegment);
        for(int i = 0; i < 10; ++i)
        {
            if(typeOperation == 's')
                total += *(inputOutputBufferPTR + i);
            else if(typeOperation == 'm')
                total *= *(inputOutputBufferPTR + i);
        }
    }
    static double getTotal() { return total; }
    void normalOperation()
    {
        //int iteration = sizeof(buffertoClip)/sizeof(type);
        if(typeOperation == 'm')
        {
            for(int i = 0; i < 100; ++i)
            {
                total *= buffertoClip[i];
            }
        }
        else if(typeOperation == 's')
        {
            for(int i = 0; i < 100; ++i)
            {
                total += buffertoClip[i];
            }
        }
    }
};
MAIN
#include "stdafx.h"
#include "TestClass.h"
#include <ctime>
double Parallel_clipBufferValues<int>::total;
int _tmain(int argc, _TCHAR* argv[])
{
const int SIZE=100;
int myTab[SIZE];
double totalSum_by_parallel;
double totalSun_by_normaloperation;
double elapsed_secs_parallel;
double elapsed_secs_normal;
for(int i = 1; i <= SIZE; i++)
{
myTab[i-1] = i;
}
int maxSeg =10;
clock_t begin_parallel = clock();
cv::parallel_for_(cv::Range(0,maxSeg), Parallel_clipBufferValues<int>(myTab, maxSeg, 'm'));
totalSum_by_parallel = Parallel_clipBufferValues<int>::getTotal();
clock_t end_parallel = clock();
elapsed_secs_parallel = double(end_parallel - begin_parallel) / CLOCKS_PER_SEC;
clock_t begin_normal = clock();
Parallel_clipBufferValues<int> norm_op(myTab, maxSeg, 'm');
norm_op.normalOperation();
totalSun_by_normaloperation = norm_op.getTotal();
clock_t end_normal = clock();
elapsed_secs_normal = double(end_normal - begin_normal) / CLOCKS_PER_SEC;
return 0;
}
Let me offer a few considerations:
Accuracy
The clock() function is not accurate at all. Its tick is roughly 1 / CLOCKS_PER_SEC, but how often it is updated, and whether the updates are uniform, is system and implementation dependent. See this post for more details about that.
Better alternatives to measure time:
This post for Windows.
This article for *nix.
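As one concrete, portable C++11 option (my addition, not from the linked posts), std::chrono::steady_clock is monotonic and usually high resolution. A minimal sketch:
// Minimal sketch: time a loop with std::chrono::steady_clock.
#include <chrono>
#include <iostream>
int main()
{
    auto t0 = std::chrono::steady_clock::now();
    volatile double sink = 0;                 // volatile keeps the work observable
    for (int i = 0; i < 1000000; ++i)
        sink = sink + i * 0.5;
    auto t1 = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
              << " us\n";
    return 0;
}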
Trials and Test Environment
Measurements are always affected by errors. The performance measurement of your code is affected (this is a short list; there is much more than that) by other programs, the cache, operating system jobs, scheduling and user activity. To get a better measurement you have to repeat it many times (say 1000 or more) and then calculate the average. Moreover, you should prepare your test environment to be as clean as possible.
More details about tests on these posts:
How do I write a correct micro-benchmark in Java?
NAS Parallel Benchmarks
Visual C++ 11 Beta Benchmark of Parallel Loops (for code examples)
Great articles from Eric Lippert about benchmarking (they are about C#, but most of the advice applies directly to any benchmark): C# Performance Benchmark Mistakes (part II).
Overhead and Scalability
In your case the overhead of parallel execution (and of your test code structure) is much higher than the loop body itself. In such a case it's not productive to make the algorithm parallel. Parallel execution must always be evaluated in a specific scenario, measured and compared. It's not some kind of magic medicine to speed up everything. Take a look at this article about How to Quantify Scalability.
Just as an example: if you have to sum/multiply 100 numbers, it's better to use SIMD instructions (even better within an unrolled loop), as in the sketch below.
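For illustration only, a sketch of that idea using SSE2 intrinsics to sum 100 integers four lanes at a time (my own code, not from the original answer):
// Sum 100 ints with SSE2, four lanes per iteration, instead of threading such tiny work.
#include <emmintrin.h>   // SSE2 intrinsics
#include <iostream>
int main()
{
    int data[100];
    for (int i = 0; i < 100; ++i) data[i] = i + 1;

    __m128i acc = _mm_setzero_si128();
    int i = 0;
    for (; i + 4 <= 100; i += 4)
        acc = _mm_add_epi32(acc, _mm_loadu_si128(reinterpret_cast<const __m128i*>(data + i)));

    int lanes[4];
    _mm_storeu_si128(reinterpret_cast<__m128i*>(lanes), acc);
    int sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < 100; ++i) sum += data[i];   // scalar tail (empty here, since 100 is a multiple of 4)

    std::cout << sum << "\n";   // prints 5050
    return 0;
}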
Measure It!
Try making your loop body empty (or executing a single NOP operation or a volatile write, so it won't be optimized away). You'll roughly measure the overhead. Now compare it with your results; a rough sketch of such a measurement follows.
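A rough sketch of that overhead measurement (my own code; it assumes the same cv::parallel_for_ / cv::ParallelLoopBody API and OpenCV headers used in the question):
// Time cv::parallel_for_ with a body that only performs a volatile write,
// to estimate the dispatch overhead per call.
#include <opencv2/core/core.hpp>
#include <ctime>
#include <iostream>
struct EmptyBody : public cv::ParallelLoopBody
{
    virtual void operator()(const cv::Range &r) const
    {
        volatile int sink = 0;          // volatile so the loop isn't removed
        for (int i = r.start; i < r.end; ++i)
            sink = i;
        (void)sink;
    }
};
int main()
{
    const int runs = 1000;
    std::clock_t begin = std::clock();
    for (int k = 0; k < runs; ++k)
        cv::parallel_for_(cv::Range(0, 10), EmptyBody());
    std::clock_t end = std::clock();
    std::cout << "overhead per parallel_for_ call: "
              << double(end - begin) / CLOCKS_PER_SEC / runs << " s\n";
    return 0;
}
On a 100-element workload this dispatch overhead will typically dwarf the loop body, which is the point above.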
Notes About This Test
IMO this kind of test is pretty useless. You can't compare serial and parallel execution in a generic way. It's something you should always check against a specific situation (in the real world many things come into play, synchronization for example).
Imagine: you make your loop body really "heavy" and you see a big speed-up with parallel execution. Now you make your real program parallel and you see that performance is worse. Why? Because parallel execution is slowed down by locks, by cache problems or by serial access to a shared resource.
The test itself is meaningless unless you're testing your specific code in your specific situation (because too many factors come into play and you just can't ignore them). What does that mean? That you can compare only what you tested... if your program performs total *= buffertoClip[i]; then your results are reliable. If your real program does something else, then you have to repeat the tests with that something else.

How to handle caching while timing an operating in C++ in linux

I have to time the clock_gettime() function for estimating and profiling other operations, and it's for homework so I can't use a profiler and have to write my own code.
The way I'm doing it is shown below:
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &begin);
for(int i = 0; i <= n; i++)
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
cout << (end.tv_nsec - begin.tv_nsec)/n; // time per clock_gettime()
The problem is that for n=100 the output is 370.63 ns, for n=100000 it is 330 ns, for n=1000000 it is 260 ns, for n=10000000 it is 55 ns... and it keeps decreasing.
I understand that this is happening because of instruction caching, but I don't know how to handle it while profiling. For example, when I estimate the time for a function call using gettime, how would I know how much time gettime itself used?
Would taking a weighted mean of all these values be a good idea? (I can run the operation I want the same number of times, take the weighted mean of that, subtract the weighted mean of gettime, and get a good estimate of the operation irrespective of caching?)
Any suggestions are welcome.
Thank you in advance.
When you compute the time difference: (end.tv_nsec-begin.tv_nsec)/n
You are only taking into account the nanoseconds part of the elapsed time. You must also take the seconds into account since the tv_nsec field only reflects the fractional part of a second:
int64_t end_ns = ((int64_t)end.tv_sec * 1000000000) + end.tv_nsec;
int64_t begin_ns = ((int64_t)begin.tv_sec * 1000000000) + begin.tv_nsec;
int64_t elapsed_ns = end_ns - begin_ns;
Actually, with your current code you should sometimes get negative results when the nanoseconds part of end has wrapped around and is less than begin's nanoseconds part.
Fix that, and you'll be able to observe much more consistent results.
Edit: for the sake of completeness, here's the code I used for my tests, which gets me very consistent results (between 280 and 300ns per call, whatever number of iterations I use):
#include <cstdint>   // int64_t
#include <iostream>
#include <time.h>    // clock_gettime

int main() {
    const int loops = 100000000;
    struct timespec begin;
    struct timespec end;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &begin);
    for(int i = 0; i < loops; i++)
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

    int64_t end_ns = ((int64_t)end.tv_sec * 1000000000) + end.tv_nsec;
    int64_t begin_ns = ((int64_t)begin.tv_sec * 1000000000) + begin.tv_nsec;
    int64_t elapsed_ns = end_ns - begin_ns;
    int64_t ns_per_call = elapsed_ns / loops;
    std::cout << ns_per_call << std::endl;
}

Interesting processing time results

I've made a small application that averages the numbers between 1 and 1000000. It's not hard to see (using a very basic algebraic formula) that the average is 500000.5, but this was more of a project in learning C++ than anything else.
Anyway, I made clock variables that were designed to find the number of clock ticks required for the application to run. When I first ran it, it said that it took 3770000 clock ticks, but every time I've run it since then, it's taken "0.0" seconds...
I've attached my code at the bottom.
Either a) it saved the variables from the first time I ran it and is just jumping straight to the answer,
or b) something is wrong with how I'm declaring the time variables.
Regardless, it doesn't make sense.
Any help would be appreciated.
FYI, I'm running this on a Linux computer; not sure if that matters.
#include <cstdio>   // printf
#include <ctime>    // clock, difftime

double avg (int arr[], int beg, int end)
{
    int nums = end - beg + 1;
    double sum = 0.0;
    for(int i = beg; i <= end; i++)
    {
        sum += arr[i];
    }
    //for(int p = 0; p < nums*10000; p ++){}
    return sum/nums;
}

int main (int argc, char *argv[])
{
    int nums = 1000000; //atoi(argv[0]);
    int myarray[nums];
    double timediff;
    //printf("Arg is: %d\n",argv[0]);
    printf("Nums is: %d\n",nums);

    clock_t begin_time = clock();
    for(int i = 0; i < nums; i++)
    {
        myarray[i] = i+1;
    }
    double average = avg(myarray, 0, nums - 1);
    printf("%f\n",average);
    clock_t end_time = clock();

    timediff = (double) difftime(end_time, begin_time);
    printf("Time to Average: %f\n", timediff);
    return 0;
}
You are also measuring the I/O operation (printf), which depends on external factors and might be affecting the run time. Also, clock() might not be precise enough to measure such a small task; look into higher-resolution functions such as clock_gettime(). Even then, other processes might affect the run time by generating page-fault interrupts, occupying the memory bus, and so on. So this kind of fluctuation is not abnormal at all.
On the machine I tested, Linux's clock call was only accurate to 1/100th of a second. If your code runs in less than 0.01 seconds, it will usually say zero seconds have passed. Also, I ran your program a total of 50 times in 0.13 seconds, so I find it suspicious that you claim it takes 2 seconds to run it once on your computer.
Your code uses difftime incorrectly (it expects time_t values, not clock_t), which may produce incorrect output even when clock does report that time passed.
I'd guess that the first timing you got was with different code than that posted in this question, because I can't think of any way the code in this question could produce a time of 3770000.
Finally, benchmarking is hard, and your code has several benchmarking mistakes:
You're timing how long it takes to (1) fill an array, (2) calculate an average, (3) format the result string, and (4) make an OS call (slow) that prints said string in the right language/font/color/etc., which is especially slow.
You're attempting to time a task which takes less than a hundredth of a second, which is WAY too small for any accurate measurement.
Here is my take on your code, measuring that the average takes ~0.001968 seconds on this machine.
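That rewritten code isn't reproduced here. As a rough sketch of the idea (my own code, not the answerer's), one can time only the averaging step with a higher-resolution clock and repeat it enough times to get a measurable interval:
// Time only the averaging, repeated 100 times, using clock_gettime.
#include <cstddef>
#include <cstdio>
#include <time.h>
#include <vector>
static double avg(const std::vector<double> &v)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < v.size(); ++i)
        sum += v[i];
    return sum / v.size();
}
int main()
{
    std::vector<double> data(1000000);
    for (std::size_t i = 0; i < data.size(); ++i)
        data[i] = static_cast<double>(i + 1);

    timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    double result = 0.0;
    for (int pass = 0; pass < 100; ++pass)   // repeat so the interval is measurable
        result = avg(data);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    std::printf("average = %f, time per pass = %f s\n", result, seconds / 100);
    return 0;
}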

Benchmarking math.h square root and Quake square root

Okay, so I was bored and wondered how fast the math.h square root was in comparison to the one with the magic number in it (made famous by Quake but made by SGI).
But this has ended up in a world of hurt for me.
I first tried this on the Mac, where math.h would win hands down every time, then on Windows, where the magic number always won, but I think this is all down to my own noobness.
Compiling on the Mac with "g++ -o sq_root sq_root_test.cpp", the program takes about 15 seconds to complete. But compiling in VS2005 on Release it takes a split second. (In fact I had to compile in Debug just to get it to show some numbers.)
My poor man's benchmarking? Is this really stupid? Because I get 0.01 for math.h and 0 for the magic number. (It can't be that fast, can it?)
I don't know if this matters, but the Mac is Intel and the PC is AMD. Is the Mac using hardware for the math.h square root?
I got the fast square root algorithm from http://en.wikipedia.org/wiki/Fast_inverse_square_root
//sq_root_test.cpp
#include <iostream>
#include <math.h>
#include <ctime>
float invSqrt(float x)
{
    union {
        float f;
        int i;
    } tmp;
    tmp.f = x;
    tmp.i = 0x5f3759df - (tmp.i >> 1);
    float y = tmp.f;
    return y * (1.5f - 0.5f * x * y * y);
}

int main() {
    std::clock_t start; // = std::clock();
    std::clock_t end;
    float rootMe;
    int iterations = 999999999;

    // ---
    rootMe = 2.0f;
    start = std::clock();
    std::cout << "Math.h SqRoot: ";
    for (int m = 0; m < iterations; m++) {
        (float)(1.0/sqrt(rootMe));
        rootMe++;
    }
    end = std::clock();
    std::cout << (difftime(end, start)) << std::endl;

    // ---
    std::cout << "Quake SqRoot: ";
    rootMe = 2.0f;
    start = std::clock();
    for (int q = 0; q < iterations; q++) {
        invSqrt(rootMe);
        rootMe++;
    }
    end = std::clock();
    std::cout << (difftime(end, start)) << std::endl;
}
There are several problems with your benchmark. First, it includes a potentially expensive cast from int to float. If you want to know what a square root costs, you should benchmark square roots, not datatype conversions.
Second, your entire benchmark can be (and is) optimized out by the compiler because it has no observable side effects. You don't use the returned value (or store it in a volatile memory location), so the compiler sees that it can skip the whole thing.
A clue here is that you had to disable optimizations. That means your benchmarking code is broken. Never, ever disable optimizations when benchmarking. You want to know which version runs fastest, so you should test it under the conditions it would actually be used under. If you were to use square roots in performance-sensitive code, you'd enable optimizations, so how the code behaves without optimizations is completely irrelevant.
Also, you're not benchmarking the cost of computing a square root, but that of the inverse square root.
If you want to know which way of computing the square root is fastest, you have to move the 1.0/... division down to the Quake version. (And since division is a pretty expensive operation, this might make a big difference in your results.)
Finally, it might be worth pointing out that Carmack's little trick was designed to be fast on 12-year-old computers. Once you fix your benchmark, you'll probably find that it's no longer an optimization, because today's CPUs are much faster at computing "real" square roots. A sketch of a corrected benchmark follows.
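A sketch of a benchmark fixed along those lines (my own code, not the answerer's): optimizations stay enabled, every result feeds a running sum so the loops can't be eliminated, and the 1.0f/ division moves to the Quake version so both sides compute a plain square root.
// Corrected benchmark sketch: keep results observable, compare plain sqrt on both sides.
#include <cmath>
#include <ctime>
#include <iostream>
float invSqrt(float x)
{
    union { float f; int i; } tmp;
    tmp.f = x;
    tmp.i = 0x5f3759df - (tmp.i >> 1);
    float y = tmp.f;
    return y * (1.5f - 0.5f * x * y * y);
}
int main()
{
    const int iterations = 100000000;
    float rootMe;
    double sum;
    std::clock_t start, end;

    // cmath square root
    sum = 0.0;
    rootMe = 2.0f;
    start = std::clock();
    for (int m = 0; m < iterations; m++) {
        sum += std::sqrt(rootMe);          // accumulated so the loop can't be removed
        rootMe++;
    }
    end = std::clock();
    std::cout << "cmath sqrt: " << double(end - start) / CLOCKS_PER_SEC
              << " s (sum " << sum << ")\n";

    // Quake trick: the 1.0f/ division moves here so both sides compute a plain square root
    sum = 0.0;
    rootMe = 2.0f;
    start = std::clock();
    for (int q = 0; q < iterations; q++) {
        sum += 1.0f / invSqrt(rootMe);
        rootMe++;
    }
    end = std::clock();
    std::cout << "Quake sqrt: " << double(end - start) / CLOCKS_PER_SEC
              << " s (sum " << sum << ")\n";
    return 0;
}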