In C# I use the Stopwatch class, and I can get ticks and milliseconds with no problem.
Now that I am timing code while learning C++, I can't find the equivalent of the C# Stopwatch solution. I tried searching, but the information is too broad and I couldn't find a definitive answer.
double PCFreq = 0.0;
__int64 CounterStart = 0;

void StartCounter()
{
    LARGE_INTEGER li;
    if(!QueryPerformanceFrequency(&li))
        std::cout << "QueryPerformanceFrequency failed!\n";
    PCFreq = double(li.QuadPart)/1000.0;
    QueryPerformanceCounter(&li);
    CounterStart = li.QuadPart;
}

double GetCounter()
{
    LARGE_INTEGER li;
    QueryPerformanceCounter(&li);
    return double(li.QuadPart-CounterStart)/PCFreq;
}
Comparing that against clock() gives me two different results, and I tend to believe clock(). :)
start = StartCounter()
//some function or for loop
end = GetCounter()
marginPc = end - start;
start = clock();
// ...same
end= clock();
marginClck = end - start;
std::cout<< "Res Pc: " << marginPc << "\r\nRes Clck: " marginClck<< std::endl;
With the clock version I tried both unsigned int and double but the results were still different.
What is the proper method equivalent to the C# Stopwatch?
On Windows, clock() gives you the number of milliseconds since the program started. For example, the following program will print a number close to 500:
#include <windows.h>  // Sleep
#include <ctime>      // clock, CLOCKS_PER_SEC
#include <iostream>
using namespace std;

int main()
{
    Sleep(500);
    cout << clock() << endl;
    /*
    Portable (POSIX) version:
    std::cout << clock() * 1000.0 / CLOCKS_PER_SEC << std::endl;
    CLOCKS_PER_SEC is 1000 on Windows
    */
    return 0;
}
QueryPerformanceCounter is loosely similar to GetTickCount64: both are based on the time when the computer started, and when you do Stopwatch-style subtraction the results are very close, though QueryPerformanceCounter is more accurate. The chrono method from @BoPersson's link is also based on QueryPerformanceCounter.
MSDN recommends using QueryPerformanceCounter (QPC) for high-resolution time stamps:
Acquiring high-resolution time stamps
The same QPC function is used in managed code:
For managed code, the System.Diagnostics.Stopwatch class uses
QPC as its precise time basis
This function should have reasonable accuracy:
long long getmicroseconds()
{
    LARGE_INTEGER fq, t;
    QueryPerformanceFrequency(&fq);
    QueryPerformanceCounter(&t);
    // multiply before dividing to keep microsecond resolution
    return 1000000 * t.QuadPart / fq.QuadPart;
}
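For a portable comparison, std::chrono::steady_clock (which current MSVC implementations build on top of QPC) can produce the same kind of microsecond stamp; a minimal sketch, with an illustrative function name:

#include <chrono>

long long getmicroseconds_chrono()
{
    // steady_clock is monotonic; on MSVC it is implemented via QueryPerformanceCounter
    return std::chrono::duration_cast<std::chrono::microseconds>(
        std::chrono::steady_clock::now().time_since_epoch()).count();
}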
The computer clock is usually accurate to +/-1 second per day.
From the above link:

Duration        Uncertainty
1 microsecond   ± 10 picoseconds (10^-12)
1 millisecond   ± 10 nanoseconds (10^-9)
1 second        ± 10 microseconds
1 hour          ± 60 microseconds
1 day           ± 0.86 seconds
1 week          ± 6.08 seconds
To simplify your other functions, you can avoid double results: QuadPart is a long long, so use that type throughout:
long long PCFreq = 0;
long long CounterStart = 0;

void StartCounter()
{
    LARGE_INTEGER li;
    QueryPerformanceFrequency(&li);
    PCFreq = li.QuadPart;
    QueryPerformanceCounter(&li);
    CounterStart = li.QuadPart;
}

long long GetCounter()
{
    if (PCFreq < 1) return 0;
    LARGE_INTEGER li;
    QueryPerformanceCounter(&li);
    // for milliseconds: 1,000
    return 1000 * (li.QuadPart - CounterStart) / PCFreq;
    // for microseconds: 1,000,000
    //return 1000000 * (li.QuadPart - CounterStart) / PCFreq;
}
Your bug is this: your code treats StartCounter() as returning the raw counter value (CounterStart = li.QuadPart), but GetCounter() returns double(li.QuadPart - CounterStart)/PCFreq. That is, one value is divided by PCFreq and the other is not, so it is not valid to subtract one from the other.
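Either way, GetCounter() already returns the elapsed time, so the correct usage has no subtraction; a minimal sketch (reusing the question's variable name):

StartCounter();
// ...some function or for loop...
long long marginPc = GetCounter(); // elapsed milliseconds since StartCounter()
std::cout << "Res Pc: " << marginPc << std::endl;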
Related
I would like to measure wallclock time taken by my algorithm in C++. Many articles point to this code.
clock_t begin_time, end_time;
begin_time = clock();
Algorithm();
end_time = clock();
cout << ((double)(end_time - begin_time)/CLOCKS_PER_SEC) << endl;
But this measures only the CPU time taken by my algorithm.
Another article suggested this code.
double getUnixTime(void)
{
    struct timespec tv;
    if(clock_gettime(CLOCK_REALTIME, &tv) != 0) return 0;
    return (tv.tv_sec + (tv.tv_nsec / 1000000000.0));
}
double begin_time, end_time;
begin_time = getUnixTime();
Algorithm();
end_time = getUnixTime();
cout << (double) (end_time - begin_time) << endl;
I thought it would print the wallclock time taken by my algorithm, but surprisingly the time printed by this code is much lower than the CPU time printed by the previous code. So I am confused. Please provide code for printing wallclock time.
Those times are probably down in the noise. To get a reasonable time measurement, try executing your algorithm many times in a loop:
const int loops = 1000000;
double begin_time, end_time;
begin_time = getUnixTime();
for (int i = 0; i < loops; ++i)
    Algorithm();
end_time = getUnixTime();
cout << (double) (end_time - begin_time) / loops << endl;
I'm getting approximately the same times in a single-threaded program:
#include <time.h>
#include <stdio.h>

__attribute__((noinline)) void nop(void){}

void loop(unsigned long Cnt) { for(unsigned long i=0; i<Cnt; i++) nop(); }

int main()
{
    clock_t t0, t1;
    struct timespec ts0, ts1;
    t0 = clock();
    clock_gettime(CLOCK_REALTIME, &ts0);
    loop(1000000000);
    t1 = clock();
    clock_gettime(CLOCK_REALTIME, &ts1);
    printf("clock-diff: %lu\n", (unsigned long)((t1 - t0)/CLOCKS_PER_SEC));
    printf("clock_gettime-diff: %lu\n", (unsigned long)((ts1.tv_sec - ts0.tv_sec)));
}
//prints 2 and 3 or 2 and 2 on my system
But clock()'s manpage only describes it as returning an approximation; there's no guarantee that the approximation is comparable to what clock_gettime returns.
Where I get drastically different results is where I throw in multiple threads:
#include <time.h>
#include <stdio.h>
#include <pthread.h>

__attribute__((noinline)) void nop(void){}

void loop(unsigned long Cnt) {
    for(unsigned long i=0; i<Cnt; i++) nop();
}

void *busy(void *A){ (void)A; for(;;) nop(); }

int main()
{
    pthread_t ptids[4];
    for(size_t i=0; i<sizeof(ptids)/sizeof(ptids[0]); i++)
        pthread_create(&ptids[i], 0, busy, 0);
    clock_t t0, t1;
    struct timespec ts0, ts1;
    t0 = clock();
    clock_gettime(CLOCK_REALTIME, &ts0);
    loop(1000000000);
    t1 = clock();
    clock_gettime(CLOCK_REALTIME, &ts1);
    printf("clock-diff: %lu\n", (unsigned long)((t1 - t0)/CLOCKS_PER_SEC));
    printf("clock_gettime-diff: %lu\n", (unsigned long)((ts1.tv_sec - ts0.tv_sec)));
}
//prints 18 and 4 on my 4-core linux system
That's because both musl and glibc on Linux implement clock() with clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts), and the nonstandard CLOCK_PROCESS_CPUTIME_ID clock is described in the clock_gettime manpage as returning the time for all threads of the process together.
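If what you want is wall-clock elapsed time, unaffected by how many threads are running (and by adjustments to the system clock), CLOCK_MONOTONIC is the usual choice; a minimal sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts0, ts1;
    clock_gettime(CLOCK_MONOTONIC, &ts0);
    // ...work to measure...
    clock_gettime(CLOCK_MONOTONIC, &ts1);
    double secs = (ts1.tv_sec - ts0.tv_sec)
                + (ts1.tv_nsec - ts0.tv_nsec) / 1e9;
    printf("wall-clock: %f s\n", secs);
    return 0;
}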
The following binary search program returns a running time of 0 milliseconds using GetTickCount(), no matter how large the search item in the given list of values is.
Is there any other way to get the running time for comparison?
Here's the code:
#include <iostream>
#include <windows.h>
using namespace std;
int main(int argc, char **argv)
{
    long int i = 1, max = 10000000;
    long int *data = new long int[max];
    long int initial = 1;
    long int final = max, mid, loc = -5;
    for(i = 1; i<=max; i++)
    {
        data[i] = i;
    }
    int range = final - initial + 1;
    long int search_item = 8800000;
    cout<<"Search Item :- "<<search_item<<"\n";
    cout<<"-------------------Binary Search-------------------\n";
    long int start = GetTickCount();
    cout<<"Start Time : "<<start<<"\n";
    while(initial<=final)
    {
        mid=(initial+final)/2;
        if(data[mid]==search_item)
        {
            loc=mid;
            break;
        }
        if(search_item<data[mid])
            final=mid-1;
        if(search_item>data[mid])
            initial=mid+1;
    }
    long int end = GetTickCount();
    cout<<"End Time : "<<end<<"\n";
    cout << "time: " << double(end - start)<<" milliseconds \n";
    if(loc==-5)
        cout<<" Required number not found "<<endl;
    else
        cout<<" Required number is found at index "<<loc<<endl;
    return 0;
}
Your code looks like this:
int main()
{
    // Some code...
    while (some_condition)
    {
        // Some more code...
        // Print timing result
        return 0;
    }
}
That's why your code prints zero time: you only do one iteration of the loop and then you exit the program.
Try using the clock_t type from the time.h header:
clock_t START, END;
START = clock();
// ... your code goes here ...
END = clock();
float clocks = END - START;
cout << "running time: " << clocks / CLOCKS_PER_SEC << " seconds" << endl;
CLOCKS_PER_SEC is a constant defined in time.h for converting clock ticks to seconds.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms724408(v=vs.85).aspx
This article says that the result of GetTickCount will wrap to zero if your system runs continuously for 49.7 days.
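On Vista and later, GetTickCount64 avoids that wraparound, though its resolution is still typically in the 10-16 ms range, so it will not help with timing a single binary search; a minimal sketch:

#include <windows.h>
#include <iostream>

ULONGLONG start = GetTickCount64();
// ...code to measure...
ULONGLONG elapsedMs = GetTickCount64() - start; // elapsed milliseconds; the 64-bit counter does not wrap at 49.7 days
std::cout << "time: " << elapsedMs << " milliseconds\n";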
You can find out how to measure time in C++ here: Easily measure elapsed time.
You can use the time.h header and do something like this in your code:
clock_t Start, Stop;
double sec;
Start = clock();
// call your binary search function
Stop = clock();
sec = ((double) (Stop - Start)) / CLOCKS_PER_SEC;

and print sec!
I hope this helps you!
The complexity of binary search is log2(N), which is about 23 for N = 10000000.
I don't think that is enough to measure on a real-time scale, or even with clock().
In this case you can use unsigned long long __rdtsc(), which returns the number of processor ticks since the last reset. Put it before and after your binary search, and move the cout << start; until after you obtain the end time; otherwise the time of the output would be included (see the sketch below).
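A sketch of that idea (assuming MSVC, where __rdtsc is declared in <intrin.h>; tick counts depend on CPU frequency, so treat them as relative measurements only):

#include <intrin.h> // __rdtsc on MSVC

unsigned long long t0 = __rdtsc();
// ...binary search...
unsigned long long t1 = __rdtsc();
cout << "Start ticks: " << t0 << "\n"; // printed after timing, so output cost is excluded
cout << "Elapsed ticks: " << (t1 - t0) << "\n";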
There is also memory corruption around the data array: indices in C++ run from 0 to size - 1, so there is no data[max] element.
Also, call delete [] data; before returning.
I'm making a stopwatch, and I need to output the seconds out like so: "9.743 seconds".
I have the start time, the end time, and the difference measured out in clocks, and I was planning on getting the decimal by dividing the difference by 1000. However, no matter what I try, it always outputs a whole number. It's probably something small I'm overlooking, but I haven't a clue what.
Here's my code:
#include "Stopwatch.h"
#include <iostream>
#include <iomanip>
using namespace std;
Stopwatch::Stopwatch(){
    clock_t startTime = 0;
    clock_t endTime = 0;
    clock_t elapsedTime = 0;
    long miliseconds = 0;
}

void Stopwatch::Start(){
    startTime = clock();
}

void Stopwatch::Stop(){
    endTime = clock();
}

void Stopwatch::DisplayTimerInfo(){
    long formattedSeconds;
    setprecision(4);
    seconds = (endTime - startTime) / CLOCKS_PER_SEC;
    miliseconds = (endTime - startTime) / (CLOCKS_PER_SEC / 1000);
    formattedSeconds = miliseconds / 1000;
    cout << formattedSeconds << endl;
    system("pause");
}
Like I said, the output is an integer. Say it timed 5892 clocks: the output would be "5".
Division between integers is still an integer. Cast one of the operands to a real type (double or float) and assign the result to a variable that is also a real type:
double elapsedSeconds = (endTime - startTime) / (double)(CLOCKS_PER_SEC);
cout << elapsedSeconds << endl;
Declare the result as a double so the fractional part is kept:

double formattedSeconds = (double) miliseconds / 1000;

That will give you real-number output.
I have an app where I must measure the execution time of part of a C++ function and of an ASM function. The problem is that the times I get are weird: either 0 or about 15600, with 0 occurring more often. Sometimes, after executing, the times look right, with values other than 0 and ~15600. Does anybody know why this happens, and how to fix it?
A fragment of the timing code for the C++ part of my app:
auto start = chrono::system_clock::now();
for (int i = 0; i < nThreads; i++)
    xThread[i]->Start(i);
for (int i = 0; i < nThreads; i++)
    xThread[i]->Join();
auto elapsed = chrono::system_clock::now() - start;
long long milliseconds = chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
cppTimer = milliseconds;
What you're seeing there is the resolution of your timer. Apparently, chrono::system_clock ticks every 1/64th of a second, or 15,625 microseconds, on your system.
Since you're in C++/CLI and have the .Net library available, I'd switch to using the Stopwatch class. It will generally have a much higher resolution than 1/64th of a second.
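In C++/CLI that might look like the following sketch (reusing the xThread and cppTimer names from the question):

using namespace System::Diagnostics;

Stopwatch^ sw = Stopwatch::StartNew();
for (int i = 0; i < nThreads; i++)
    xThread[i]->Start(i);
for (int i = 0; i < nThreads; i++)
    xThread[i]->Join();
sw->Stop();
cppTimer = sw->ElapsedMilliseconds; // Stopwatch uses QPC internally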
Looks good to me, except for casting to std::chrono::microseconds and then naming the result milliseconds.
The snippet I have used for many months now is:
class benchmark {
private:
    typedef std::chrono::high_resolution_clock clock;
    typedef std::chrono::milliseconds milliseconds;
    clock::time_point start;
public:
    benchmark(bool startCounting = true) {
        if(startCounting)
            start = clock::now();
    }
    void reset() {
        start = clock::now();
    }
    // returns elapsed time in seconds
    double elapsed() {
        milliseconds ms = std::chrono::duration_cast<milliseconds>(clock::now() - start);
        double elapsed_secs = ms.count() / 1000.0;
        return elapsed_secs;
    }
};
// usage
benchmark b;
...
cout << "took " << b.elapsed() << " ms" << endl;
Why does boost::timer give me such strange results?
My working solution is a wrapper around the gettimeofday function from <sys/time.h>, but I don't understand why boost::timer is not working for me here. What am I doing wrong?
class Timer {
private:
    timeval startTime;
public:
    void start(){
        gettimeofday(&startTime, NULL);
    }
    double stop(){
        timeval endTime;
        long seconds, useconds;
        double duration;
        gettimeofday(&endTime, NULL);
        seconds = endTime.tv_sec - startTime.tv_sec;
        useconds = endTime.tv_usec - startTime.tv_usec;
        duration = seconds + useconds/1000000.0;
        return duration;
    }
    long stop_useconds(){
        timeval endTime;
        long useconds;
        gettimeofday(&endTime, NULL);
        useconds = endTime.tv_usec - startTime.tv_usec;
        return useconds;
    }
    static void printTime(double duration){
        printf("%5.6f seconds\n", duration);
    }
};
test:
//test
for (int i = 0; i < 10; i++) {
    void *vp = malloc(1024*sizeof(int));
    memset((int *)vp, 0, 1024);
    void* itab = malloc(sizeof(int)*1024*256); //1MiB table
    if (itab) {
        memset((int*)itab, 0, 1024*256*sizeof(int));
        float elapsed;
        boost::timer t;
        Timer timer = Timer();
        timer.start();
        Munge64(itab, 1024*256);
        double duration = timer.stop();
        long lt = timer.stop_useconds();
        timer.printTime(duration);
        cout << t.elapsed() << endl;
        elapsed = t.elapsed();
        cout << ios::fixed << setprecision(10) << elapsed << endl;
        cout << ios::fixed << setprecision(10) << t.elapsed() << endl;
        printf("Munge8 elapsed:%ld useconds\n", lt);
        elapsed = 0;
        free(vp);
        free(itab);
        //printf("Munge8 elapsed:%d\n", elapsed);
    }
}
results:
0.000100 seconds
0 << ??????????
40 << ????????????????
40 << ???????????????????????????????????
Munge8 elapsed:100 useconds
0.000100 seconds
0
40
40
Munge8 elapsed:100 useconds
0.000099 seconds
0
40
40
Munge8 elapsed:99 useconds
You should not use boost::timer - http://www.boost.org/doc/libs/1_54_0/libs/timer/doc/original_timer.html#Class timer
On POSIX-like systems it measures CPU time, not wall-clock time.
Consider using boost::chrono or std::chrono instead; you would want to look at steady_clock rather than the other clocks when implementing a timer, if you want to isolate yourself from drift or shifts in the system wall clock. I expect that on POSIX this will use clock_gettime with CLOCK_MONOTONIC.
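For reference, a minimal std::chrono::steady_clock replacement for the Timer class above (a sketch with the same start/stop interface):

#include <chrono>

class SteadyTimer {
    std::chrono::steady_clock::time_point startTime;
public:
    void start() { startTime = std::chrono::steady_clock::now(); }
    // returns seconds elapsed since start(), immune to system clock adjustments
    double stop() {
        std::chrono::duration<double> d = std::chrono::steady_clock::now() - startTime;
        return d.count();
    }
};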