I'm sorting an array with quicksort. I use the &lt;chrono&gt; library to measure the time, but sometimes the result is 0.
#include &lt;chrono&gt;
#include &lt;cstdlib&gt;
#include &lt;iostream&gt;
using namespace std;

int main() {
srand(50000);
int A[N];
for (int i = 0; i < N; i++) {
A[i] = rand();
}
cout << "Unsorted array\n";
Stampa(A, N);
auto start = std::chrono::system_clock::now();
QuickSort(A, 0, N - 1);
auto end = std::chrono::system_clock::now();
std::chrono::duration<double> elapsed = end - start;
cout << "\nSorted array\n";
Stampa(A, N);
cout << "Elapsed time in nanoseconds : "
<< chrono::duration_cast<chrono::nanoseconds>(end - start).count()
<< " ns" << endl;
cout << "Elapsed time in milliseconds : "
<< chrono::duration_cast<chrono::milliseconds>(end - start).count()
<< " ms" << endl;
cout << "Elapsed time: " << elapsed.count() << "s";
}
I defined N with #define N 300.
If N is 300, the elapsed time (in nanoseconds, milliseconds, or seconds) is 0. If I increase N, I get an elapsed time greater than zero. I need the time for small arrays too. How can I fix it?
If you want more precision for smaller values of N, I recommend using the high-resolution clock instead of the system clock.
When measuring intervals like in the question, you should use std::chrono::steady_clock or any other clock whose is_steady property is true. std::chrono::system_clock follows the wall clock, which may be adjusted at any time; its purpose is to represent the current time of day for the system, not to measure intervals.
You probably also want to use a higher number of elements to see any meaningful differences in time.
Although std::chrono::high_resolution_clock is tempting going by its name alone, it is worth noting that, technically, there is no separate high-resolution clock in &lt;chrono&gt;. Rather, it is an alias for the clock with the highest resolution available. Quoting from the standard:
Objects of class high_resolution_clock represent clocks with the
shortest tick period. high_resolution_clock may be a synonym for
system_clock or steady_clock.
When we check the implementations, we see that MSVC has the following:
using high_resolution_clock = steady_clock;
while libstdc++-v3 has:
/**
* @brief Highest-resolution clock
*
* This is the clock "with the shortest tick period." Alias to
* std::system_clock until higher-than-nanosecond definitions
* become feasible.
*/
using high_resolution_clock = system_clock;
Related
My question is about how the elapsed time differs depending on where it is measured.
To find the largest portion of the total elapsed time when executing my code, I used the clock functions.
source : calculating time elapsed in C++
First, I put the timing calls at the start and end of the main function.
(Actually, there are some variable declarations there too, but I deleted them for readability.) I thought this would let me measure the total elapsed time.
int main(){
using clock = std::chrono::system_clock;
using sec = std::chrono::duration<double>;
const auto before = clock::now();
...
std::cin >> a >> b;
lgstCommSubStr findingLCSS(a,b,numberofHT,cardi,SubsA);
const sec duration = clock::now() - before;
std::cout << "It took " << duration.count() << "s in main function" << std::endl;
return 0;
}
Second, I put the timing calls in the class lgstCommSubStr. This class finds the longest common substring of two strings; it is the class that actually runs my algorithm, and I wrote the algorithm in its constructor. Therefore, constructing the class produces the longest-common-substring information. I expected this elapsed time to be the actual algorithm running time.
public:
lgstCommSubStr(string a, string b, int numHT, int m, vector <int> ** SA):
strA(a), strB(b), hashTsize(numHT), SubstringsA(SA),
primeNs(numHT), xs(numHT),
A_hashValues(numHT), B_hashValues(numHT),
av(numHT), bv(numHT), cardi(m)
{
using clock = std::chrono::system_clock;
using sec = std::chrono::duration<double>;
const auto before = clock::now();
...
answer ans=binarySearch(a,b, numHT);
std::cout << ans.i << " " << ans.j << " " << ans.length << "\n";
const sec duration = clock::now() - before;
std::cout << "It took " << duration.count() << "s in the class" << std::endl;
}
The output is as below.
tool coolbox
1 1 3
It took 0.002992s in the class
It took 4.13945s in main function
It means 'tool' and 'coolbox' have a substring 'ool'
But I am confused that there is such a big difference between the two times.
Since the main-function time is the total time and the class time is the algorithm's running time, the difference should be the time spent declaring variables.
But that looks weird, because declaring variables should take very little time.
Is there a mistake in how I measure the elapsed time?
Please give me a hint for troubleshooting. Thank you for reading!
Taking a snapshot of the time before std::cin >> a >> b; leads to an inaccurate measurement as you're likely starting the clock before you type in the values for a and b. Generally you want to put your timing as close as possible to the thing you're actually measuring.
I'm trying to figure out how to time the execution of part of my program, but when I use the following code, all I ever get back is 0. I know that can't be right. The code I'm timing recursively implements mergesort of a large array of ints. How do I get the time it takes to execute the program in milliseconds?
//opening input file and storing contents into array
index = inputFileFunction(inputArray);
clock_t time = clock();//start the clock
//this is what needs to be timed
newRecursive.mergeSort(inputArray, 0, index - 1);
//getting the difference
time = clock() - time;
double ms = double(time) / CLOCKS_PER_SEC * 1000;
std::cout << "\nTime took to execute: " << std::setprecision(9) << ms << std::endl;
You can use the chrono library in C++11. Here's how you can modify your code:
#include <chrono>
//...
auto start = std::chrono::steady_clock::now();
// do whatever you're timing
auto end = std::chrono::steady_clock::now();
auto durationMs = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
std::cout << "\n Time took " << durationMs.count() << " ms" << std::endl;
If you're developing on OSX, this blog post from Apple may be useful. It contains code snippets that should give you the timing resolution you need.
I want to be able to measure time elapsed (for frame time) with my Clock class. (Problem described below the code.)
Clock.h
typedef std::chrono::high_resolution_clock::time_point timePt;
class Clock
{
timePt currentTime;
timePt lastTime;
public:
Clock();
void update();
uint64_t deltaTime();
};
Clock.cpp
#include "Clock.h"
using namespace std::chrono;
Clock::Clock()
{
currentTime = high_resolution_clock::now();
lastTime = currentTime;
}
void Clock::update()
{
lastTime = currentTime;
currentTime = high_resolution_clock::now();
}
uint64_t Clock::deltaTime()
{
microseconds delta = duration_cast<microseconds>(currentTime - lastTime);
return delta.count();
}
When I try to use Clock like so
Clock clock;
while(1) {
clock.update();
uint64_t dt = clock.deltaTime();
for (int i=0; i < 10000; i++)
{
//do something to waste time between updates
int k = i*dt;
}
cout << dt << endl; //time elapsed since last update in microseconds
}
For me it prints "0" about 30 times until it finally prints a number, which is always very close to something like "15625" microseconds (15.625 milliseconds).
My question is, why isn't there anything between? I'm wondering whether my implementation is wrong or the precision on high_resolution_clock is acting strange. Any ideas?
EDIT: I am using Codeblocks with mingw32 compiler on a windows 8 computer.
EDIT2:
I tried running the following code that should display high_resolution_clock precision:
template <class Clock>
void display_precision()
{
typedef std::chrono::duration<double, std::nano> NS;
NS ns = typename Clock::duration(1);
std::cout << ns.count() << " ns\n";
}
int main()
{
display_precision<std::chrono::high_resolution_clock>();
}
For me it prints: "1000 ns". So I guess high_resolution_clock has a precision of 1 microsecond right? Yet in my tests it seems to have a precision of 16 milliseconds?
What system are you using? (I guess it's Windows? Visual Studio is known to have had this problem, now fixed in VS 2015; see the bug report.) On some systems high_resolution_clock is defined as just an alias for system_clock, which can have really low resolution, like the 16 ms you are seeing.
See for example this question.
I have the same problem with msys2 on Windows 10: the delta returned is 0 for most of the subfunctions I tested, and then suddenly 15xxx or 24xxx microseconds. I thought there was a problem in my code, as none of the tutorials mention any problem.
The same thing happens with difftime(finish, start) from time.h, which often returns 0.
I finally changed all my high_resolution_clock uses to steady_clock, and now I can get the proper times:
auto t_start = std::chrono::steady_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = " << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::steady_clock::now() - t_start).count() << " microseconds" << std::endl;
// returns the proper value (or at least a plausible value)
whereas this returns mostly 0:
auto t_start = std::chrono::high_resolution_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = " << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - t_start).count() << " microseconds" << std::endl;
// returns 0 most of the time
difftime does not seem to work either:
time_t start, finish;
time(&start);
_cvTracker->track(image);
time(&finish);
std::cout << "Time taken= " << difftime(finish, start) << std::endl;
// returns 0 most of the time
Referring to Obtaining Time in milliseconds
Why does below code produce zero as output?
int main()
{
steady_clock::time_point t1 = steady_clock::now();
//std::this_thread::sleep_for(std::chrono::milliseconds(1500));
steady_clock::time_point t2 = steady_clock::now();
auto timeC = t1.time_since_epoch().count();
auto timeD = t2.time_since_epoch().count();
auto timeA = duration_cast<std::chrono::nanoseconds > ( t1.time_since_epoch()).count();
auto timeB = duration_cast<std::chrono::nanoseconds > ( t2.time_since_epoch()).count();
std::cout << timeC << std::endl;
std::cout << timeB << std::endl;
std::cout << timeD << std::endl;
std::cout << timeA << std::endl;
std::cout << timeB - timeA << std::endl;
system("Pause");
return 0;
}
The output:
14374083030139686
1437408303013968600
14374083030139686
1437408303013968600
0
Press any key to continue . . .
I suppose there should be a difference of a few nanoseconds, because of instruction execution time.
Under VS2012, steady_clock (and high_resolution_clock) uses GetSystemTimeAsFileTime, which has a very low resolution (and is non-steady to boot). This is acknowledged as a bug by Microsoft.
Your workarounds are to upgrade to VS2015, use Boost.Chrono, or implement your own clock using QueryPerformanceCounter (see: https://stackoverflow.com/a/16299576/567292).
Just because you ask it to represent the value in nanoseconds doesn't mean that the precision of the measurement is in nanoseconds.
When you look at your output, you can see that the counts are nanoseconds / 100, which means the count represents time in units of 100 nanoseconds.
But even that does not tell you the period of the underlying counter on which steady_clock is built; all you know is that it can't be better than 100 nanoseconds.
You can tell the actual period used for the counter by using the period member of steady_clock:
double periodInSeconds = double(steady_clock::period::num)
/ double(steady_clock::period::den);
Back to your question: "Why does below code produce zero as output?"
Since you haven't done any significant work between the two calls to now(), it is highly unlikely that you have used up 100 nanoseconds, so the two counts are the same; hence the zero.
I am trying to use chrono::steady_clock to measure fractional seconds elapsed between a block of code in my program. I have this block of code working in LiveWorkSpace (http://liveworkspace.org/code/YT1I$9):
#include <chrono>
#include <iostream>
#include <vector>
int main()
{
auto start = std::chrono::steady_clock::now();
for (unsigned long long int i = 0; i < 10000; ++i) {
std::vector<int> v(i, 1);
}
auto end = std::chrono::steady_clock::now();
auto difference = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
std::cout << "seconds since start: " << ((double)difference / 1000000);
}
When I implement the same idea into my program like so:
auto start = std::chrono::steady_clock::now();
// block of code to time
auto end = std::chrono::steady_clock::now();
auto difference = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
std::cout << "seconds since start: " << ((double) difference / 1000000);
The program will only print out values of 0 and 0.001. I highly doubt that the execution time for my block of code always equals 0 or 1000 microseconds, so what is accounting for this rounding and how might I eliminate it so that I can get the proper fractional values?
This is a Windows program.
This question already has a good answer. But I'd like to add another suggestion:
Work within the <chrono> framework. Build your own clock. Build your own time_point. Build your own duration. The <chrono> framework is very customizable. By working within that system, you will not only learn std::chrono, but when your vendor starts shipping clocks you're happy with, it will be trivial to transition your code from your hand-rolled chrono::clock to std::high_resolution_clock (or whatever).
First though, a minor criticism about your original code:
std::cout << "seconds since start: " << ((double) difference / 1000000);
Whenever you see yourself introducing conversion constants (like 1000000) to get what you want, you're not using chrono correctly. Your code isn't incorrect, just fragile. Are you sure you got the right number of zeros in that constant?!
Even in this simple example you should say to yourself:
I want to see output in terms of seconds represented by a double.
And then you should use chrono do that for you. It is very easy once you learn how:
typedef std::chrono::duration<double> sec;
sec difference = end - start;
std::cout << "seconds since start: " << difference.count() << '\n';
The first line creates a type with a period of 1 second, represented by a double.
The second line simply subtracts your time_points and assigns it to your custom duration type. The conversion from the units of steady_clock::time_point to your custom duration (a double second) are done by the chrono library automatically. This is much simpler than:
auto difference = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
And then finally you just print out your result with the .count() member function. This is again much simpler than:
std::cout << "seconds since start: " << ((double) difference / 1000000);
But since you're not happy with the precision of std::chrono::steady_clock, and you have access to QueryPerformanceCounter, you can do better. You can build your own clock on top of QueryPerformanceCounter.
<disclaimer>
I don't have a Windows system to test the following code on.
</disclaimer>
struct my_clock
{
typedef double rep;
typedef std::ratio<1> period;
typedef std::chrono::duration<rep, period> duration;
typedef std::chrono::time_point<my_clock> time_point;
static const bool is_steady = false;
static time_point now()
{
static const long long frequency = init_frequency();
LARGE_INTEGER t;  // QueryPerformanceCounter takes a LARGE_INTEGER*
QueryPerformanceCounter(&t);
return time_point(duration(static_cast<rep>(t.QuadPart)/frequency));
}
private:
static long long init_frequency()
{
LARGE_INTEGER f;
QueryPerformanceFrequency(&f);
return f.QuadPart;
}
};
Since you wanted your output in terms of a double second, I've made the rep of this clock a double and the period 1 second. You could just as easily make the rep integral and the period some other unit such as microseconds or nanoseconds. You just adjust the typedefs and the conversion from QueryPerformanceCounter to your duration in now().
And now your code can look much like your original code:
int main()
{
auto start = my_clock::now();
for (unsigned long long int i = 0; i < 10000; ++i) {
std::vector<int> v(i, 1);
}
auto end = my_clock::now();
auto difference = end - start;
std::cout << "seconds since start: " << difference.count() << '\n';
}
But without the hand-coded conversion constants, and with (what I'm hoping is) sufficient precision for your needs. And with a much easier porting path to a future std::chrono::steady_clock implementation.
<chrono> was designed to be an extensible library. Please extend it. :-)
After running some tests on MSVC2012, I could confirm that the C++11 clocks in Microsoft's implementation do not have a high enough resolution. See C++ header's high_resolution_clock does not have high resolution for a bug report concerning this issue.
So, unfortunately for a higher resolution timer, you will need to use boost::chrono or QueryPerformanceCounter directly like so until they fix the bug:
#include <iostream>
#include <Windows.h>
int main()
{
LARGE_INTEGER frequency;
QueryPerformanceFrequency(&frequency);
LARGE_INTEGER start;
QueryPerformanceCounter(&start);
// Put code here to time
LARGE_INTEGER end;
QueryPerformanceCounter(&end);
// multiply by 1000000.0 to get microseconds instead of seconds
double interval = static_cast<double>(end.QuadPart - start.QuadPart) /
frequency.QuadPart; // in seconds
std::cout << interval;
}