I want to be able to measure time elapsed (for frame time) with my Clock class. (Problem described below the code.)
Clock.h
#pragma once
#include <chrono>
#include <cstdint>

typedef std::chrono::high_resolution_clock::time_point timePt;

class Clock
{
    timePt currentTime;
    timePt lastTime;
public:
    Clock();
    void update();
    uint64_t deltaTime();
};
Clock.cpp
#include "Clock.h"

using namespace std::chrono;

Clock::Clock()
{
    currentTime = high_resolution_clock::now();
    lastTime = currentTime;
}

void Clock::update()
{
    lastTime = currentTime;
    currentTime = high_resolution_clock::now();
}

uint64_t Clock::deltaTime()
{
    microseconds delta = duration_cast<microseconds>(currentTime - lastTime);
    return delta.count();
}
When I try to use Clock like so
Clock clock;
while (1) {
    clock.update();
    uint64_t dt = clock.deltaTime();
    for (int i = 0; i < 10000; i++)
    {
        // do something to waste time between updates
        int k = i * dt;
    }
    cout << dt << endl; // time elapsed since last update in microseconds
}
For me it prints "0" about 30 times until it finally prints a number, which is always very close to something like "15625" microseconds (15.625 milliseconds).
My question is: why isn't there anything in between? I'm wondering whether my implementation is wrong or whether the precision of high_resolution_clock is acting strange. Any ideas?
EDIT: I am using Code::Blocks with the mingw32 compiler on a Windows 8 computer.
EDIT2:
I tried running the following code, which should display the high_resolution_clock precision:
#include <chrono>
#include <iostream>

template <class Clock>
void display_precision()
{
    typedef std::chrono::duration<double, std::nano> NS;
    NS ns = typename Clock::duration(1);
    std::cout << ns.count() << " ns\n";
}

int main()
{
    display_precision<std::chrono::high_resolution_clock>();
}
For me it prints "1000 ns", so I guess high_resolution_clock has a precision of 1 microsecond, right? Yet in my tests it seems to have a precision of 16 milliseconds.
What system are you using? (I guess it's Windows? Visual Studio was known to have this problem, now fixed in VS 2015; see the bug report.) On some systems high_resolution_clock is defined as just an alias of system_clock, which can have a really low resolution, like the 16 ms you are seeing.
See for example this question.
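If you want to check what your own toolchain does, a small diagnostic like this can help (an illustrative sketch; observed_tick is a made-up helper, not a standard facility). It shows whether high_resolution_clock is an alias of another clock, and what granularity it can actually deliver:
#include <chrono>
#include <iostream>
#include <type_traits>

// Spin until now() changes; the difference is the smallest step this
// clock can actually report on this machine.
template <class Clock>
typename Clock::duration observed_tick()
{
    auto t0 = Clock::now();
    auto t1 = Clock::now();
    while (t1 == t0)
        t1 = Clock::now();
    return t1 - t0;
}

int main()
{
    using namespace std::chrono;
    std::cout << std::boolalpha
              << "alias of system_clock: "
              << std::is_same<high_resolution_clock, system_clock>::value << '\n'
              << "alias of steady_clock: "
              << std::is_same<high_resolution_clock, steady_clock>::value << '\n'
              << "observed tick: "
              << duration_cast<nanoseconds>(observed_tick<high_resolution_clock>()).count()
              << " ns\n";
}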
I had the same problem with msys2 on Windows 10: the delta returned was 0 for most of the subfunctions I tested, then suddenly 15xxx or 24xxx microseconds. I thought there was a problem in my code, since none of the tutorials mention any problem.
The same thing happens with difftime(finish, start) from time.h, which often returns 0.
I finally changed all my high_resolution_clock uses to steady_clock, and now I get proper times:
auto t_start = std::chrono::steady_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = "
          << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::steady_clock::now() - t_start).count()
          << " microseconds" << std::endl;
// returns the proper value (or at least a plausible value)
whereas this returns mostly 0:
auto t_start = std::chrono::high_resolution_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = "
          << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - t_start).count()
          << " microseconds" << std::endl;
// returns 0 most of the time
difftime does not seem to work either:
time_t start, finish;
time(&start);
_cvTracker->track(image);
time(&finish);
std::cout << "Time taken= " << difftime(finish, start) << std::endl;
// returns 0 most of the time
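Applied to the Clock class from the question at the top, the fix is just a different clock. Here is a sketch of the same interface rebuilt on steady_clock (untested on the asker's exact setup, but it follows the substitution described above):
#include <chrono>
#include <cstdint>

typedef std::chrono::steady_clock::time_point timePt;

class Clock
{
    timePt currentTime;
    timePt lastTime;
public:
    Clock() : currentTime(std::chrono::steady_clock::now()), lastTime(currentTime) {}

    // Advance the frame: the previous "current" time becomes "last".
    void update()
    {
        lastTime = currentTime;
        currentTime = std::chrono::steady_clock::now();
    }

    // Microseconds between the last two update() calls.
    uint64_t deltaTime() const
    {
        using namespace std::chrono;
        return duration_cast<microseconds>(currentTime - lastTime).count();
    }
};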
Related
While I realize this is probably one of many similar questions, I can't seem to figure out how to use std::chrono properly. This is the solution I cobbled together.
#include <stdlib.h>
#include <iostream>
#include <chrono>
typedef std::chrono::high_resolution_clock Time;
typedef std::chrono::milliseconds ms;
float startTime;
float getCurrentTime();
int main () {
    startTime = getCurrentTime();
    std::cout << "Start Time: " << startTime << "\n";
    while(true) {
        std::cout << getCurrentTime() - startTime << "\n";
    }
    return EXIT_SUCCESS;
}

float getCurrentTime() {
    auto now = Time::now();
    return std::chrono::duration_cast<ms>(now.time_since_epoch()).count() / 1000;
}
For some reason, this only ever returns integer values as the difference, which increments upwards at a rate of 1 per second, but starting from an arbitrary, often negative, value.
What am I doing wrong? Is there a better way of doing this?
Don't escape the chrono type system until you absolutely have to. That means: don't use .count() except for I/O or for interacting with a legacy API.
This translates to: Don't use float as time_point.
Don't bother with high_resolution_clock. It is always a typedef for either system_clock or steady_clock; for more portable code, use one of those two directly.
#include <iostream>
#include <chrono>
using Time = std::chrono::steady_clock;
using ms = std::chrono::milliseconds;
To start, you're going to need a duration with a representation of float and the units of seconds. This is how you do that:
using float_sec = std::chrono::duration<float>;
Next you need a time_point which uses Time as the clock, and float_sec as its duration:
using float_time_point = std::chrono::time_point<Time, float_sec>;
Now your getCurrentTime() can just return Time::now(). No fuss, no muss:
float_time_point
getCurrentTime() {
    return Time::now();
}
Your main, because it has to do the I/O, is responsible for unpacking the chrono types into scalars so that it can print them:
int main () {
    auto startTime = getCurrentTime();
    std::cout << "Start Time: " << startTime.time_since_epoch().count() << "\n";
    while(true) {
        std::cout << (getCurrentTime() - startTime).count() << "\n";
    }
}
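Assembled into a single translation unit, the snippets above form this complete program (nothing new here, just the pieces joined so it compiles as-is):
#include <iostream>
#include <chrono>

using Time = std::chrono::steady_clock;
using ms = std::chrono::milliseconds;
using float_sec = std::chrono::duration<float>;
using float_time_point = std::chrono::time_point<Time, float_sec>;

float_time_point
getCurrentTime() {
    return Time::now();
}

int main () {
    auto startTime = getCurrentTime();
    std::cout << "Start Time: " << startTime.time_since_epoch().count() << "\n";
    while(true) {
        std::cout << (getCurrentTime() - startTime).count() << "\n";
    }
}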
This program does a similar thing. Hopefully it shows some of the capabilities (and methodology) of std::chrono:
#include <iostream>
#include <chrono>
#include <thread>

int main()
{
    using namespace std::literals;
    namespace chrono = std::chrono;
    using clock_type = chrono::high_resolution_clock;

    auto start = clock_type::now();

    for(;;) {
        auto first = clock_type::now();
        // note use of a duration literal - this is C++14
        std::this_thread::sleep_for(500ms);
        // C++11 would be this:
        // std::this_thread::sleep_for(chrono::milliseconds(500));
        auto last = clock_type::now();
        auto interval = last - first;
        auto total = last - start;

        // integer cast
        std::cout << "we just slept for " << chrono::duration_cast<chrono::milliseconds>(interval).count() << "ms\n";
        // another integer cast
        std::cout << "also known as " << chrono::duration_cast<chrono::nanoseconds>(interval).count() << "ns\n";

        // floating point cast
        using seconds_fp = chrono::duration<double, chrono::seconds::period>;
        std::cout << "which is " << chrono::duration_cast<seconds_fp>(interval).count() << " seconds\n";

        std::cout << " total time wasted: " << chrono::duration_cast<chrono::milliseconds>(total).count() << "ms\n";
        std::cout << " in seconds: " << chrono::duration_cast<seconds_fp>(total).count() << "s\n";
        std::cout << std::endl;
    }
    return 0;
}
example output:
we just slept for 503ms
also known as 503144616ns
which is 0.503145 seconds
total time wasted: 503ms
in seconds: 0.503145s
we just slept for 500ms
also known as 500799185ns
which is 0.500799 seconds
total time wasted: 1004ms
in seconds: 1.00405s
we just slept for 505ms
also known as 505114589ns
which is 0.505115 seconds
total time wasted: 1509ms
in seconds: 1.50923s
we just slept for 502ms
also known as 502478275ns
which is 0.502478 seconds
total time wasted: 2011ms
in seconds: 2.01183s
I have the following C code:
uint64_t combine(uint32_t const sec, uint32_t const usec){
    return (uint64_t) sec << 32 | usec;
}

uint64_t now3(){
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return combine((uint32_t) tv.tv_sec, (uint32_t) tv.tv_usec);
}
What this does is combine a 32-bit timestamp and a 32-bit "something", probably micro/nanoseconds, into a single 64-bit integer.
I'm having a really hard time rewriting it with C++11 chrono.
This is what I did so far, but I think it's the wrong way to do it:
auto tse = std::chrono::system_clock::now().time_since_epoch();
auto dur = std::chrono::duration_cast<std::chrono::nanoseconds>( tse ).count();
uint64_t time = static_cast<uint64_t>( dur );
Important note: I only care about the first 32 bits being a "valid" timestamp.
The second 32-bit "part" can be anything - nano- or microseconds - anything is fine, as long as two sequential calls of this function give me a different second "part".
I want seconds in one int, milliseconds in another.
Here is code to do that:
#include <chrono>
#include <iostream>
int
main()
{
    auto now = std::chrono::system_clock::now().time_since_epoch();
    std::cout << now.count() << '\n';
    auto s = std::chrono::duration_cast<std::chrono::seconds>(now);
    now -= s;
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(now);
    int si = s.count();
    int msi = ms.count();
    std::cout << si << '\n';
    std::cout << msi << '\n';
}
For me this just outputs:
1447109182307707
1447109182
307
The C++11 chrono types use only one number to represent a time since a given epoch, unlike the timeval (or timespec) structure, which uses two numbers to represent a time precisely. So with C++11 chrono you don't need the combine() method.
The content of the timestamp returned by now() depends on the clock you use; there are three clocks, described at http://en.cppreference.com/w/cpp/chrono :
system_clock: wall clock time from the system-wide realtime clock
steady_clock: monotonic clock that will never be adjusted
high_resolution_clock: the clock with the shortest tick period available
If you want successive timestamps that never go backwards, use the steady clock (note that steady_clock guarantees monotonicity, so two calls made very close together may still return equal values):
auto t1 = std::chrono::steady_clock::now();
...
auto t2 = std::chrono::steady_clock::now();
assert (t2 >= t1);
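If some legacy API still requires the packed 64-bit layout from the question, it can be reproduced from a single chrono timestamp. A sketch (now_packed is an illustrative name; the 32-bit seconds field overflows in 2106, just like the original):
#include <chrono>
#include <cstdint>

// Equivalent of the C combine()/now3() pair, built on system_clock.
uint64_t now_packed()
{
    using namespace std::chrono;
    const auto since_epoch = system_clock::now().time_since_epoch();
    const auto sec  = duration_cast<seconds>(since_epoch);
    const auto usec = duration_cast<microseconds>(since_epoch - sec);
    return static_cast<uint64_t>(sec.count()) << 32
         | static_cast<uint32_t>(usec.count());
}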
Edit: answer to comment
#include <iostream>
#include <chrono>
#include <cstdint>

int main()
{
    typedef std::chrono::duration< uint32_t, std::ratio<1> > s32_t;
    typedef std::chrono::duration< uint32_t, std::milli > ms32_t;

    s32_t first_part;
    ms32_t second_part;

    auto t1 = std::chrono::nanoseconds( 2500000000 ); // 2.5 secs
    first_part = std::chrono::duration_cast<s32_t>(t1);
    second_part = std::chrono::duration_cast<ms32_t>(t1 - first_part);
    std::cout << "first part = " << first_part.count() << " s\n"
              << "second part = " << second_part.count() << " ms" << std::endl;

    auto t2 = std::chrono::nanoseconds( 2800000000 ); // 2.8 secs
    first_part = std::chrono::duration_cast<s32_t>(t2);
    second_part = std::chrono::duration_cast<ms32_t>(t2 - first_part);
    std::cout << "first part = " << first_part.count() << " s\n"
              << "second part = " << second_part.count() << " ms" << std::endl;
}
Output:
first part = 2 s
second part = 500 ms
first part = 2 s
second part = 800 ms
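The same two typedefs also work on a live timestamp rather than a fixed test value; for example (a sketch reusing s32_t and ms32_t from above):
auto now = std::chrono::system_clock::now().time_since_epoch();
auto s   = std::chrono::duration_cast<s32_t>(now);      // whole seconds, 32-bit
auto ms  = std::chrono::duration_cast<ms32_t>(now - s); // leftover milliseconds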
I wanted to ask how I can measure time in units like picoseconds or femtoseconds, with more precision. I am measuring running times for functions, and the running time comes out as 0 when I use milliseconds or even nanoseconds. I think the chrono library only goes down to nanoseconds; it was the most precise unit that appeared when I pressed Ctrl+Space after typing chrono::.
int main()
{
    auto t1 = std::chrono::high_resolution_clock::now();
    f();
    auto t2 = std::chrono::high_resolution_clock::now();
    std::cout << "f() took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count()
              << " milliseconds\n";
}
code source: http://en.cppreference.com/w/cpp/chrono/duration/duration_cast
Thanks.
You can calculate time in more precise durations (picoseconds...)
www.stroustrup.com/C++11FAQ.html
See the following definition:
typedef ratio<1, 1000000000000> pico;
then use:
duration<double, pico> d{1.23}; //...1.23 picoseconds
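As a minimal, self-contained illustration of that typedef (the values are arbitrary):
#include <chrono>
#include <iostream>
#include <ratio>

int main()
{
    typedef std::ratio<1, 1000000000000> pico;
    std::chrono::duration<double, pico> d{1.23}; // 1.23 picoseconds

    // chrono does the unit conversion; no hand-written constants needed.
    std::chrono::duration<double, std::nano> ns = d;
    std::cout << d.count() << " ps = " << ns.count() << " ns\n"; // prints "1.23 ps = 0.00123 ns"
}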
UPDATE
Your question has three parts:
How to use std::chrono and do calculations with std::chrono::duration
How to get higher-precision timestamps
How to do performance measurements of your code
Above, I partially answered only the first question (how to define a "picoseconds" duration). Consider the following code as an example:
#include <chrono>
#include <iostream>
#include <typeinfo>

using namespace std;
using namespace std::chrono;

void f()
{
    enum { max_count = 10000 };
    cout << '|';
    for(volatile size_t count = 0; count != max_count; ++count)
    {
        if(!(count % (max_count / 10)))
            cout << '.';
    }
    cout << "|\n";
}

int main()
{
    typedef std::ratio<1LL, 1000000000000LL> pico;
    typedef duration<long long, pico> picoseconds;
    typedef std::ratio<1LL, 1000000LL> micro;
    typedef duration<long long, micro> microseconds;

    const auto t1 = high_resolution_clock::now();
    enum { number_of_test_cycles = 10 };
    for(size_t count = 0; count != number_of_test_cycles; ++count)
    {
        f();
    }
    const auto t2 = high_resolution_clock::now();

    cout << number_of_test_cycles << " times f() took:\n"
         << duration_cast<milliseconds>(t2 - t1).count() << " milliseconds\n"
         << duration_cast<microseconds>(t2 - t1).count() << " microseconds\n"
         << duration_cast<picoseconds>(t2 - t1).count() << " picoseconds\n";
}
It produces this output:
$ ./test
|..........|
|..........|
|..........|
|..........|
|..........|
|..........|
|..........|
|..........|
|..........|
|..........|
10 times f() took:
1 milliseconds
1084 microseconds
1084000000 picoseconds
As you can see, in order to get a 1 millisecond result I had to repeat f() 10 times. Repeating your test is the general approach when your timer doesn't have enough precision. There is one problem associated with repetition: it is not necessarily true that repeating your test N times takes a proportional period of time. You need to verify that first.
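One way to package that repetition (an illustrative helper; the name and interface are made up, not part of any library):
#include <chrono>
#include <cstddef>

// Time n calls of f and return the average cost of one call in
// microseconds. Assumes f's runtime dominates the loop overhead.
template <class F>
double average_call_us(F f, std::size_t n)
{
    using namespace std::chrono;
    const auto t1 = high_resolution_clock::now();
    for (std::size_t i = 0; i != n; ++i)
        f();
    const auto t2 = high_resolution_clock::now();
    return duration_cast<duration<double, std::micro>>(t2 - t1).count() / n;
}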
Another thing: although I can do calculations using picosecond durations, my high_resolution_clock can't give me higher precision than microseconds.
To get higher precision you can use the time stamp counter, see wiki/Time_Stamp_Counter - but this is tricky and platform-specific.
A "standard" PC has a timer resolution of around 100 nanoseconds, so trying to measure time at finer resolutions than that is not really possible unless you have custom hardware of some kind. See "How precise is the internal clock of a modern PC?" for a related question, and check out the second answer: https://stackoverflow.com/a/2615977/1169863.
I am trying to use chrono::steady_clock to measure the fractional seconds elapsed in a block of code in my program. I have this block of code working on LiveWorkSpace (http://liveworkspace.org/code/YT1I$9):
#include <chrono>
#include <iostream>
#include <vector>
int main()
{
    auto start = std::chrono::steady_clock::now();
    for (unsigned long long int i = 0; i < 10000; ++i) {
        std::vector<int> v(i, 1);
    }
    auto end = std::chrono::steady_clock::now();
    auto difference = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
    std::cout << "seconds since start: " << ((double)difference / 1000000);
}
When I implement the same idea in my program, like so:
auto start = std::chrono::steady_clock::now();
// block of code to time
auto end = std::chrono::steady_clock::now();
auto difference = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
std::cout << "seconds since start: " << ((double) difference / 1000000);
the program only ever prints out values of 0 and 0.001. I highly doubt that the execution time for my block of code always equals 0 or 1000 microseconds, so what is accounting for this rounding, and how might I eliminate it so that I get proper fractional values?
This is a Windows program.
This question already has a good answer. But I'd like to add another suggestion:
Work within the <chrono> framework. Build your own clock. Build your own time_point. Build your own duration. The <chrono> framework is very customizable. By working within that system, you will not only learn std::chrono, but when your vendor starts shipping clocks you're happy with, it will be trivial to transition your code from your hand-rolled chrono::clock to std::high_resolution_clock (or whatever).
First though, a minor criticism about your original code:
std::cout << "seconds since start: " << ((double) difference / 1000000);
Whenever you see yourself introducing conversion constants (like 1000000) to get what you want, you're not using chrono correctly. Your code isn't incorrect, just fragile. Are you sure you got the right number of zeros in that constant?!
Even in this simple example you should say to yourself:
I want to see output in terms of seconds represented by a double.
And then you should use chrono do that for you. It is very easy once you learn how:
typedef std::chrono::duration<double> sec;
sec difference = end - start;
std::cout << "seconds since start: " << difference.count() << '\n';
The first line creates a type with a period of 1 second, represented by a double.
The second line simply subtracts your time_points and assigns the result to your custom duration type. The conversion from the units of steady_clock::time_point to your custom duration (a double-based second) is done by the chrono library automatically. This is much simpler than:
auto difference = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count()
And then finally you just print out your result with the .count() member function. This is again much simpler than:
std::cout << "seconds since start: " << ((double) difference / 1000000);
But since you're not happy with the precision of std::chrono::steady_clock, and you have access to QueryPerformanceCounter, you can do better. You can build your own clock on top of QueryPerformanceCounter.
<disclaimer>
I don't have a Windows system to test the following code on.
</disclaimer>
struct my_clock
{
    typedef double rep;
    typedef std::ratio<1> period;
    typedef std::chrono::duration<rep, period> duration;
    typedef std::chrono::time_point<my_clock> time_point;
    static const bool is_steady = false;

    static time_point now()
    {
        static const long long frequency = init_frequency();
        LARGE_INTEGER t; // QueryPerformanceCounter takes a LARGE_INTEGER*
        QueryPerformanceCounter(&t);
        return time_point(duration(static_cast<rep>(t.QuadPart) / frequency));
    }

private:
    static long long init_frequency()
    {
        LARGE_INTEGER f;
        QueryPerformanceFrequency(&f);
        return f.QuadPart;
    }
};
Since you wanted your output in terms of a double second, I've made the rep of this clock a double and the period 1 second. You could just as easily make the rep integral and the period some other unit such as microseconds or nanoseconds. You just adjust the typedefs and the conversion from QueryPerformanceCounter to your duration in now().
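For example, an integral nanosecond variant might look like this (same disclaimer as above - untested; the seconds/remainder split is there so the multiplication by 10^9 cannot overflow 64 bits after long uptimes):
struct my_ns_clock
{
    typedef long long rep;
    typedef std::nano period;
    typedef std::chrono::duration<rep, period> duration;
    typedef std::chrono::time_point<my_ns_clock> time_point;
    static const bool is_steady = false;

    static time_point now()
    {
        static const long long frequency = init_frequency();
        LARGE_INTEGER t;
        QueryPerformanceCounter(&t);
        // Convert ticks to nanoseconds in two parts to avoid overflow.
        const long long sec = t.QuadPart / frequency;
        const long long rem = t.QuadPart % frequency;
        return time_point(duration(sec * 1000000000LL + rem * 1000000000LL / frequency));
    }

private:
    static long long init_frequency()
    {
        LARGE_INTEGER f;
        QueryPerformanceFrequency(&f);
        return f.QuadPart;
    }
};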
And now, using the double-second my_clock above, your code can look much like your original code:
int main()
{
    auto start = my_clock::now();
    for (unsigned long long int i = 0; i < 10000; ++i) {
        std::vector<int> v(i, 1);
    }
    auto end = my_clock::now();
    auto difference = end - start;
    std::cout << "seconds since start: " << difference.count() << '\n';
}
But without the hand-coded conversion constants, and with (what I'm hoping is) sufficient precision for your needs. And with a much easier porting path to a future std::chrono::steady_clock implementation.
<chrono> was designed to be an extensible library. Please extend it. :-)
After running some tests on MSVC2012, I could confirm that the C++11 clocks in Microsoft's implementation do not have a high enough resolution. See C++ header's high_resolution_clock does not have high resolution for a bug report concerning this issue.
So, unfortunately, for a higher-resolution timer you will need to use boost::chrono or QueryPerformanceCounter directly, like so, until they fix the bug:
#include <iostream>
#include <Windows.h>

int main()
{
    LARGE_INTEGER frequency;
    QueryPerformanceFrequency(&frequency);

    LARGE_INTEGER start;
    QueryPerformanceCounter(&start);

    // Put code here to time

    LARGE_INTEGER end;
    QueryPerformanceCounter(&end);

    // for microseconds use 1000000.0
    double interval = static_cast<double>(end.QuadPart - start.QuadPart) /
                      frequency.QuadPart; // in seconds

    std::cout << interval;
}
How do I call clock() in C++?
For example, I want to test how much time a linear search takes to find a given element in an array.
#include <iostream>
#include <cstdio>
#include <ctime>
int main() {
    std::clock_t start;
    double duration;

    start = std::clock();
    /* Your algorithm here */
    duration = ( std::clock() - start ) / (double) CLOCKS_PER_SEC;

    std::cout << "printf: " << duration << '\n';
}
An alternative solution, which is portable and with higher precision, available since C++11, is to use std::chrono.
Here is an example:
#include <iostream>
#include <chrono>
typedef std::chrono::high_resolution_clock Clock;
int main()
{
    auto t1 = Clock::now();
    auto t2 = Clock::now();
    std::cout << "Delta t2-t1: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count()
              << " nanoseconds" << std::endl;
}
Running this on ideone.com gave me:
Delta t2-t1: 282 nanoseconds
clock() returns the number of clock ticks since your program started. There is a related constant, CLOCKS_PER_SEC, which tells you how many clock ticks occur in one second. Thus, you can test any operation like this:
clock_t startTime = clock();
doSomeOperation();
clock_t endTime = clock();
clock_t clockTicksTaken = endTime - startTime;
double timeInSeconds = clockTicksTaken / (double) CLOCKS_PER_SEC;
On Windows at least, the only practically accurate measurement mechanism is QueryPerformanceCounter (QPC). std::chrono is implemented using it (since VS2015, if you use that), but it is not accurate to the same degree as using QueryPerformanceCounter directly. In particular, its claim to report at 1-nanosecond granularity is absolutely not correct. So, if you're measuring something that takes a very short amount of time (and your case might just be such a case), then you should use QPC, or the equivalent for your OS. I came up against this when measuring cache latencies, and I jotted down some notes that you might find useful, here:
https://github.com/jarlostensen/notesandcomments/blob/master/stdchronovsqcp.md
#include <iostream>
#include <ctime>
#include <cstdlib> // _sleep() --- just a nonstandard function that waits a given number of milliseconds

using namespace std;

int main()
{
    clock_t cl;        // initializing a clock type
    cl = clock();      // starting time of clock
    _sleep(5167);      // insert code here
    cl = clock() - cl; // end point of clock
    _sleep(1000);      // testing to see if it actually stops at the end point
    cout << cl / (double)CLOCKS_PER_SEC << endl; // prints the elapsed time in seconds
    return 0;
}
// outputs "5.17"
You can measure how long your program takes to run. The following functions help measure the CPU time since the start of the program:
C++: (double)clock() / CLOCKS_PER_SEC, with ctime included.
Python: time.clock() returns a floating-point value in seconds.
Java: System.nanoTime() returns a long value in nanoseconds.
My reference: the algorithms toolbox course (week 1), part of the Data Structures and Algorithms specialization by the University of California San Diego & National Research University Higher School of Economics.
So you can add this line of code after your algorithm:
cout << (double)clock() / CLOCKS_PER_SEC;
Expected output: the CPU time consumed so far, in seconds (the division by CLOCKS_PER_SEC converts clock ticks to seconds).
You might be interested in a timer like this:
H : M : S . Msec
The code, for Linux:
#include <iostream>
#include <unistd.h>

using namespace std;

void newline();

int main() {
    int msec = 0;
    int sec = 0;
    int min = 0;
    int hr = 0;

    //cout << "Press any key to start:";
    //char start = _gtech();

    for (;;)
    {
        newline();
        if (msec == 1000)
        {
            ++sec;
            msec = 0;
        }
        if (sec == 60)
        {
            ++min;
            sec = 0;
        }
        if (min == 60)
        {
            ++hr;
            min = 0;
        }
        cout << hr << " : " << min << " : " << sec << " . " << msec << endl;
        ++msec;
        usleep(1000); // ~1 millisecond per tick (ignores loop overhead, so it drifts)
    }
    return 0;
}

// Push the previous display off the screen.
void newline()
{
    cout << "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n";
}