While I realize this is probably one of many identical questions, I can't seem to figure out how to properly use std::chrono. This is the solution I cobbled together.
#include <stdlib.h>
#include <iostream>
#include <chrono>
typedef std::chrono::high_resolution_clock Time;
typedef std::chrono::milliseconds ms;
float startTime;
float getCurrentTime();
int main () {
    startTime = getCurrentTime();
    std::cout << "Start Time: " << startTime << "\n";
    while (true) {
        std::cout << getCurrentTime() - startTime << "\n";
    }
    return EXIT_SUCCESS;
}
float getCurrentTime() {
    auto now = Time::now();
    return std::chrono::duration_cast<ms>(now.time_since_epoch()).count() / 1000;
}
For some reason, this only ever returns integer values as the difference, which increments upwards at a rate of 1 per second, but starting from an arbitrary, often negative, value.
What am I doing wrong? Is there a better way of doing this?
Don't escape the chrono type system until you absolutely have to. That means: don't use .count() except for I/O or when interacting with a legacy API.
This translates to: don't model your time_point with a float.
Don't bother with high_resolution_clock. It is always a typedef for either system_clock or steady_clock; for more portable code, pick one of those two directly.
#include <iostream>
#include <chrono>
using Time = std::chrono::steady_clock;
using ms = std::chrono::milliseconds;
To start, you're going to need a duration with a representation of float and the units of seconds. This is how you do that:
using float_sec = std::chrono::duration<float>;
Next you need a time_point which uses Time as the clock, and float_sec as its duration:
using float_time_point = std::chrono::time_point<Time, float_sec>;
Now your getCurrentTime() can just return Time::now(). No fuss, no muss:
float_time_point
getCurrentTime() {
    return Time::now();
}
Your main, because it has to do the I/O, is responsible for unpacking the chrono types into scalars so that it can print them:
int main () {
    auto startTime = getCurrentTime();
    std::cout << "Start Time: " << startTime.time_since_epoch().count() << "\n";
    while (true) {
        std::cout << (getCurrentTime() - startTime).count() << "\n";
    }
}
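For readability you may want to slow the loop down. Here's a minimal sketch (my addition, reusing the aliases and getCurrentTime() from above) that naps between prints:
#include <thread>

int main () {
    auto startTime = getCurrentTime();
    while (true) {
        std::this_thread::sleep_for(std::chrono::milliseconds(250));
        // float_sec::count() yields fractional seconds, e.g. roughly 0.25, 0.5, ...
        std::cout << (getCurrentTime() - startTime).count() << "\n";
    }
}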
This program does a similar thing. Hopefully it shows some of the capabilities (and methodology) of std::chrono:
#include <iostream>
#include <chrono>
#include <thread>
int main()
{
    using namespace std::literals;
    namespace chrono = std::chrono;
    using clock_type = chrono::high_resolution_clock;

    auto start = clock_type::now();

    for (;;) {
        auto first = clock_type::now();
        // note use of literal - this is C++14
        std::this_thread::sleep_for(500ms);
        // C++11 would be this:
        // std::this_thread::sleep_for(chrono::milliseconds(500));
        auto last = clock_type::now();
        auto interval = last - first;
        auto total = last - start;

        // integer cast
        std::cout << "we just slept for " << chrono::duration_cast<chrono::milliseconds>(interval).count() << "ms\n";
        // another integer cast
        std::cout << "also known as " << chrono::duration_cast<chrono::nanoseconds>(interval).count() << "ns\n";

        // floating point cast
        using seconds_fp = chrono::duration<double, chrono::seconds::period>;
        std::cout << "which is " << chrono::duration_cast<seconds_fp>(interval).count() << " seconds\n";

        std::cout << " total time wasted: " << chrono::duration_cast<chrono::milliseconds>(total).count() << "ms\n";
        std::cout << " in seconds: " << chrono::duration_cast<seconds_fp>(total).count() << "s\n";
        std::cout << std::endl;
    }
    return 0;
}
example output:
we just slept for 503ms
also known as 503144616ns
which is 0.503145 seconds
total time wasted: 503ms
in seconds: 0.503145s
we just slept for 500ms
also known as 500799185ns
which is 0.500799 seconds
total time wasted: 1004ms
in seconds: 1.00405s
we just slept for 505ms
also known as 505114589ns
which is 0.505115 seconds
total time wasted: 1509ms
in seconds: 1.50923s
we just slept for 502ms
also known as 502478275ns
which is 0.502478 seconds
total time wasted: 2011ms
in seconds: 2.01183s
I'm trying to use a time_point to effectively represent forever by setting it to seconds::max which, I believe, should represent that much time since epoch. When doing this, though, I get -1 as the time since epoch in the resulting time_point. What am I not understanding?
#include <iostream>
#include <chrono>
using namespace std;
using namespace std::chrono;
int main() {
    auto tp1 = system_clock::time_point( seconds::zero() );
    auto tp2 = system_clock::time_point( seconds::max() );
    cout << "tp1: " << duration_cast<seconds>(tp1.time_since_epoch()).count() << endl;
    cout << "tp2: " << duration_cast<seconds>(tp2.time_since_epoch()).count() << endl;
    return 0;
}
The output running that is:
tp1: 0
tp2: -1
Here's a little quick&dirty program to explore the limits of system_clock time_points at different precisions:
#include <chrono>
#include <iostream>

using days = std::chrono::duration
    <int, std::ratio_multiply<std::ratio<24>, std::chrono::hours::period>>;

using years = std::chrono::duration
    <double, std::ratio_multiply<std::ratio<146097, 400>, days::period>>;

template <class Rep, class Period>
void
max_limit(std::chrono::duration<Rep, Period> d)
{
    std::cout << "[" << Period::num << '/' << Period::den << "] ";
    std::cout << years{d.max()}.count() + 1970 << '\n';
}

int
main()
{
    using namespace std;
    using namespace std::chrono;
    max_limit(nanoseconds{});
    max_limit(microseconds{});
    max_limit(milliseconds{});
    max_limit(seconds{});
}
This will output the year (in floating point) that time_point<system_clock, D> will max out at for any duration D. This program outputs:
[1/1000000000] 2262.28
[1/1000000] 294247
[1/1000] 2.92279e+08
[1/1] 2.92277e+11
Meaning system_clock based on nanoseconds overflows in the year 2262. If you coarsen that to microseconds, you overflow in the year 294,247. And so on.
Once you coarsen to seconds, the max goes out to a ridiculous range. But when you convert that back to system_clock::time_point, which is at least as fine as microseconds, and perhaps as fine as nanoseconds (depending on your platform), you just blow it out of the water.
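You can watch that happen in isolation. A small sketch (signed overflow is formally undefined behavior, so the wrapped values noted in the comments are just what common two's-complement implementations produce):
#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;
    // seconds::max() * 1,000,000 does not fit in microseconds' representation
    auto m = duration_cast<microseconds>(seconds::max());
    std::cout << m.count() << '\n';                         // wraps to -1000000 here
    std::cout << duration_cast<seconds>(m).count() << '\n'; // hence the -1 seen above
}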
To solve your problem I recommend:
auto M = system_clock::time_point::max();
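That is the largest time_point the clock can actually represent, so it survives the round trips. A sketch of using it as a "forever" sentinel (the names are mine):
#include <chrono>

int main() {
    using namespace std::chrono;
    auto forever = system_clock::time_point::max(); // no finer-precision overflow possible
    auto deadline = forever;                        // i.e. "no deadline set"
    if (system_clock::now() < deadline) {
        // always true until a real deadline is assigned
    }
}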
Adding a few more diagnostics shows the issue (on my system):
#include <iostream>
#include <chrono>
using namespace std;
using namespace std::chrono;
int main() {
    auto tp1 = system_clock::time_point( seconds::zero() );
    auto tp2 = system_clock::time_point( seconds::max() );

    using type = decltype(system_clock::time_point(seconds::zero()));
    cout << type::duration::max().count() << endl;
    cout << type::duration::period::den << endl;
    cout << type::duration::period::num << endl;
    cout << seconds::max().count() << endl;
    cout << milliseconds::max().count() << endl;

    cout << "tp1: " << duration_cast<seconds>(tp1.time_since_epoch()).count() << endl;
    cout << "tp2: " << duration_cast<seconds>(tp2.time_since_epoch()).count() << endl;
    return 0;
}
For me, the denominator value is 1,000,000 for the system_clock's time_point. Thus seconds::max() overflows it when converted up to that finer precision.
I have a starting timepoint in milliseconds like so:
using namespace std::chrono;
typedef time_point<system_clock, milliseconds> MyTimePoint;
MyTimePoint startTimePoint = time_point_cast<MyTimePoint::duration>(system_clock::time_point(steady_clock::now()));
Now I will have a certain number of hours that I want to add to or subtract from the startTimePoint.
int numHours = -5; // or 5, etc. (can be positive or negative)
How can I add this amount of time to the original startTimePoint?
If you want to add five hours to startTimePoint, it's boringly simple:
startTimePoint += hours(5); // from the alias std::chrono::hours
By the way, you're trying to convert a steady_clock::now() into a system_clock::time_point, which shouldn't even compile. Change the steady_clock::now() to system_clock::now() and you should be good to go.
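Putting both pieces together, a sketch of the corrected code (numHours as in your snippet):
using namespace std::chrono;
typedef time_point<system_clock, milliseconds> MyTimePoint;

MyTimePoint startTimePoint =
    time_point_cast<MyTimePoint::duration>(system_clock::now()); // system_clock, not steady_clock

int numHours = -5; // or 5, etc.
startTimePoint += hours(numHours); // hours convert to milliseconds exactly, sign included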
Here I have used time in minutes; you can take whatever unit you want from the user.
Below is a simple program using chrono:
#include <iostream>
#include <chrono>
#include <ctime>

using namespace std;

int main() {
    using clock = std::chrono::system_clock;

    clock::time_point nowp = clock::now();

    cout << "Enter the time that you want to add in minutes" << endl;
    int time_min;
    cin >> time_min;
    cin.ignore();

    clock::time_point end = nowp + std::chrono::minutes(time_min);

    time_t nowt = clock::to_time_t(nowp);
    time_t endt = clock::to_time_t(end);

    std::cout << " " << ctime(&nowt) << "\n";
    std::cout << ctime(&endt) << std::endl;
    return 0;
}
Convert time_point to duration or duration to time_point without intermediate.
It is not possible to assign a time_point directly to a duration, or vice versa.
Many examples use time_t as an intermediate, which is a fine method.
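For reference, a minimal sketch of that time_t route (on the common platforms where time_t counts seconds since the epoch; it is limited to seconds precision):
#include <chrono>
#include <ctime>

int main() {
    using namespace std::chrono;
    // time_point -> time_t (a scalar count of seconds) -> duration
    std::time_t t = system_clock::to_time_t(system_clock::now());
    seconds dur_now{t};
    // duration -> time_point again
    system_clock::time_point tp = system_clock::from_time_t(dur_now.count());
    (void)tp; // silence unused-variable warnings in this sketch
}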
I use the method that uses the time_point 'zero' as a helper.
#include <iostream>
#include <chrono>
#include <thread>
using namespace std;
int main(int argc, char *argv[])
{
    using namespace std::chrono;

    system_clock::time_point zero;      // initialised to zero in constructor
    system_clock::time_point tp_now;    // now as time_point
    duration<int, ratio<1>> dur_now;    // now as duration
    system_clock::time_point tp_future; // calculated future as time_point

    // The objective is to sleep_until the system time is at the next 5-minute
    // boundary (e.g. time is 09:35)
    tp_now = system_clock::now(); // What time is it now?
    cout << "tp_now = " << tp_now.time_since_epoch().count() << endl;

    // It is not possible to assign a time_point directly to a duration,
    // but the difference between two time_points can be cast to a duration
    dur_now = duration_cast<seconds>(tp_now - zero); // subtract nothing from the time_point
    cout << "dur_now = " << dur_now.count() << endl;

    // Instead of using seconds granularity, I want to use 5 minutes,
    // so I define a suitable type: 5 minutes in seconds
    typedef duration<int, ratio<5*60>> dur5min;

    // When assigning, the time_point (ok: duration) is truncated to the nearest 5 min
    dur5min min5 = duration_cast<dur5min>(tp_now - zero); // (Yes, I do it from the time_point again)
    cout << "min5 ('now' in 5min units) = " << min5.count() << endl;

    // The next 5-min time point is
    min5 += dur5min{1};
    cout << "min5 += dur5min{1} = " << min5.count() << endl;

    // It is not possible to assign a duration directly to a time_point,
    // but I can add a duration to a time_point directly
    tp_future = zero + min5;
    cout << "tp_future = " << tp_future.time_since_epoch().count() << endl;

    // to be used in e.g. sleep_until
    // std::this_thread::sleep_until(tp_future);

    return 0;
}
Thanks to Carsten's solution I managed to create a function:
#include <chrono>

auto getTimeDurationMovedWith(std::chrono::hours hours2move)
{
    using namespace std::chrono;
    auto current_time = system_clock::now();
    decltype(current_time) zeroTime; // default-constructed time_point, i.e. the epoch
    // note: plain duration_cast -- the using-directive makes std::chrono's members
    // visible, but not the namespace name "chrono" itself
    return duration_cast<microseconds>(current_time - zeroTime + hours2move);
}
And it can be used like this:
auto tmp = getTimeDurationMovedWith(std::chrono::hours(-10));
std::cout << tmp.count() << std::endl;
I have the following, which stops execution of the program after a certain time.
#include <iostream>
#include <ctime>

using namespace std;

int main()
{
    time_t timer1;
    time(&timer1);

    time_t timer2;
    double second;

    while (1)
    {
        time(&timer2);
        second = difftime(timer2, timer1);
        // check if the time difference has crossed 3 seconds
        if (second > 3)
        {
            return 0;
        }
    }
    return 0;
}
Would the above program work if the time goes from 23:59 to 00:01?
Is there a better way to do this?
Provided you have C++11, you can have a look at this example:
#include <thread>
#include <chrono>
int main() {
    std::this_thread::sleep_for(std::chrono::seconds(3));
    return 0;
}
Alternatively, I'd go with a threading library of your choice and use its thread-sleep function. In most cases it is better to send your thread to sleep than to busy-wait.
time() returns the time since the Epoch (00:00:00 UTC, January 1, 1970), measured in seconds. Thus, the time of day does not matter.
You can use std::chrono::steady_clock in C++11. See the example given for its now static method:
#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;

    steady_clock::time_point clock_begin = steady_clock::now();

    std::cout << "printing out 1000 stars...\n";
    for (int i = 0; i < 1000; ++i) std::cout << "*";
    std::cout << std::endl;

    steady_clock::time_point clock_end = steady_clock::now();
    steady_clock::duration time_span = clock_end - clock_begin;

    double nseconds = double(time_span.count()) * steady_clock::period::num / steady_clock::period::den;
    std::cout << "It took me " << nseconds << " seconds.";
    std::cout << std::endl;
}
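If you prefer to let chrono do that last arithmetic for you, a duration with a double representation converts implicitly, with no cast and no manual period math (same variables as above):
std::chrono::duration<double> seconds_span = clock_end - clock_begin; // double seconds
std::cout << "It took me " << seconds_span.count() << " seconds." << std::endl;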
I mean: how can I measure the time my CPU spends executing a function, and the wall-clock time it takes to run it? (I'm interested in Linux/Windows, and both x86 and x86_64.) See what I want to do (I'm using C++ here, but I would prefer a C solution):
int startcputime, endcputime, wcts, wcte;
startcputime = cputime();
function(args);
endcputime = cputime();
std::cout << "it took " << endcputime - startcputime << " s of CPU to execute this\n";
wcts = wallclocktime();
function(args);
wcte = wallclocktime();
std::cout << "it took " << wcte - wcts << " s of real time to execute this\n";
Another important question: is this type of time measuring architecture independent or not?
Here's a copy-paste solution that works on both Windows and Linux as well as C and C++.
As mentioned in the comments, there's a boost library that does this. But if you can't use boost, this should work:
// Windows
#ifdef _WIN32
#include <Windows.h>

double get_wall_time(){
    LARGE_INTEGER time, freq;
    if (!QueryPerformanceFrequency(&freq)){
        // Handle error
        return 0;
    }
    if (!QueryPerformanceCounter(&time)){
        // Handle error
        return 0;
    }
    return (double)time.QuadPart / freq.QuadPart;
}

double get_cpu_time(){
    FILETIME a, b, c, d;
    if (GetProcessTimes(GetCurrentProcess(), &a, &b, &c, &d) != 0){
        // Returns total user time.
        // Can be tweaked to include kernel times as well.
        return
            (double)(d.dwLowDateTime |
            ((unsigned long long)d.dwHighDateTime << 32)) * 0.0000001;
    }else{
        // Handle error
        return 0;
    }
}

// Posix/Linux
#else
#include <time.h>
#include <sys/time.h>

double get_wall_time(){
    struct timeval time;
    if (gettimeofday(&time, NULL)){
        // Handle error
        return 0;
    }
    return (double)time.tv_sec + (double)time.tv_usec * .000001;
}

double get_cpu_time(){
    return (double)clock() / CLOCKS_PER_SEC;
}

#endif
There's a bunch of ways to implement these clocks. But here's what the above snippet uses:
For Windows:
Wall Time: Performance Counters
CPU Time: GetProcessTimes()
For Linux:
Wall Time: gettimeofday()
CPU Time: clock()
And here's a small demonstration:
#include <math.h>
#include <iostream>
using namespace std;
int main(){
    // Start Timers
    double wall0 = get_wall_time();
    double cpu0 = get_cpu_time();

    // Perform some computation.
    double sum = 0;
    #pragma omp parallel for reduction(+ : sum)
    for (long long i = 1; i < 10000000000; i++){
        sum += log((double)i);
    }

    // Stop timers
    double wall1 = get_wall_time();
    double cpu1 = get_cpu_time();

    cout << "Wall Time = " << wall1 - wall0 << endl;
    cout << "CPU Time = " << cpu1 - cpu0 << endl;

    // Prevent Code Elimination
    cout << endl;
    cout << "Sum = " << sum << endl;
}
Output (12 threads):
Wall Time = 15.7586
CPU Time = 178.719
Sum = 2.20259e+011
C++11. Much easier to write!
Use std::chrono::system_clock for wall clock and std::clock for cpu clock
http://en.cppreference.com/w/cpp/chrono/system_clock
#include <cstdio>
#include <ctime>
#include <chrono>
....
std::clock_t startcputime = std::clock();
do_some_fancy_stuff();
double cpu_duration = (std::clock() - startcputime) / (double)CLOCKS_PER_SEC;
std::cout << "Finished in " << cpu_duration << " seconds [CPU Clock] " << std::endl;
auto wcts = std::chrono::system_clock::now();
do_some_fancy_stuff();
std::chrono::duration<double> wctduration = (std::chrono::system_clock::now() - wcts);
std::cout << "Finished in " << wctduration.count() << " seconds [Wall Clock]" << std::endl;
Et voilà, easy and portable! No need for #ifdef _WIN32 or LINUX!
You could even use std::chrono::high_resolution_clock if you need more precision:
http://en.cppreference.com/w/cpp/chrono/high_resolution_clock
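The pattern is the same; here's a sketch with high_resolution_clock (hrcts is my naming, do_some_fancy_stuff() as above):
auto hrcts = std::chrono::high_resolution_clock::now();
do_some_fancy_stuff();
std::chrono::duration<double> hrctduration = std::chrono::high_resolution_clock::now() - hrcts;
std::cout << "Finished in " << hrctduration.count() << " seconds [High Resolution Clock]" << std::endl;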
To give a concrete example of #lip's suggestion to use boost::timer if you can (tested with Boost 1.51):
#include <boost/timer/timer.hpp>

// this is wallclock AND cpu time
boost::timer::cpu_timer timer;

... run some computation ...

boost::timer::cpu_times elapsed = timer.elapsed();
std::cout << " CPU TIME: " << (elapsed.user + elapsed.system) / 1e9 << " seconds"
          << " WALLCLOCK TIME: " << elapsed.wall / 1e9 << " seconds"
          << std::endl;
Use the clock() function from time.h:
clock_t start = clock();
/* Do stuffs */
clock_t end = clock();
float seconds = (float)(end - start) / CLOCKS_PER_SEC;
Unfortunately, this method returns CPU time on Linux, but returns wall-clock time on Windows (thanks to commenters for this information).
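If you need elapsed real time with the same meaning on both platforms, a minimal sketch using std::chrono::steady_clock (C++11) instead:
#include <chrono>

auto start = std::chrono::steady_clock::now();
/* Do stuffs */
std::chrono::duration<float> elapsed = std::chrono::steady_clock::now() - start;
float seconds = elapsed.count(); // elapsed real time, on Windows and Linux alike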
How do I call clock() in C++?
For example, I want to test how much time a linear search takes to find a given element in an array.
#include <iostream>
#include <cstdio>
#include <ctime>
int main() {
    std::clock_t start;
    double duration;

    start = std::clock();

    /* Your algorithm here */

    duration = ( std::clock() - start ) / (double) CLOCKS_PER_SEC;
    std::cout << "printf: " << duration << '\n';
}
An alternative solution, which is portable and with higher precision, available since C++11, is to use std::chrono.
Here is an example:
#include <iostream>
#include <chrono>
typedef std::chrono::high_resolution_clock Clock;
int main()
{
    auto t1 = Clock::now();
    auto t2 = Clock::now();
    std::cout << "Delta t2-t1: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count()
              << " nanoseconds" << std::endl;
}
Running this on ideone.com gave me:
Delta t2-t1: 282 nanoseconds
clock() returns the number of clock ticks since your program started. There is a related constant, CLOCKS_PER_SEC, which tells you how many clock ticks occur in one second. Thus, you can test any operation like this:
clock_t startTime = clock();
doSomeOperation();
clock_t endTime = clock();
clock_t clockTicksTaken = endTime - startTime;
double timeInSeconds = clockTicksTaken / (double) CLOCKS_PER_SEC;
On Windows at least, the only practically accurate measurement mechanism is QueryPerformanceCounter (QPC). std::chrono is implemented using it (since VS2015, if you use that), but it is not accurate to the same degree as using QueryPerformanceCounter directly. In particular, its claim to report at 1-nanosecond granularity is absolutely not correct. So, if you're measuring something that takes a very short amount of time (and your case might just be such a case), then you should use QPC, or the equivalent for your OS. I came up against this when measuring cache latencies, and I jotted down some notes that you might find useful, here:
https://github.com/jarlostensen/notesandcomments/blob/master/stdchronovsqcp.md
#include <iostream>
#include <ctime>
#include <cstdlib> // _sleep() --- just a function that waits a certain amount of milliseconds

using namespace std;

int main()
{
    clock_t cl;        // initializing a clock type
    cl = clock();      // starting time of clock
    _sleep(5167);      // insert code here
    cl = clock() - cl; // end point of clock
    _sleep(1000);      // testing to see if it actually stops at the end point
    cout << cl / (double)CLOCKS_PER_SEC << endl; // prints the elapsed time in seconds (ticks divided by CLOCKS_PER_SEC)
    return 0;
}
// outputs "5.17"
You can measure how long your program works. The following functions help measure the CPU time since the start of the program:
C++: (double)clock() / CLOCKS_PER_SEC with ctime included.
Python: time.clock() returns a floating-point value in seconds.
Java: System.nanoTime() returns a long value in nanoseconds.
My reference: the Algorithms Toolbox week-1 course, part of the Data Structures and Algorithms specialization by the University of California San Diego & the National Research University Higher School of Economics.
So you can add this line of code after your algorithm:
cout << (double)clock() / CLOCKS_PER_SEC;
Expected output: the CPU time your program has consumed so far, in seconds.
You might be interested in a timer like this:
H : M : S . Msec.
Here is the code, for Linux:
#include <iostream>
#include <unistd.h>

using namespace std;

void newline();

int main() {
    int msec = 0;
    int sec = 0;
    int min = 0;
    int hr = 0;

    //cout << "Press any key to start:";
    //char start = _getch();

    for (;;)
    {
        newline();
        if (msec == 1000)
        {
            ++sec;
            msec = 0;
        }
        if (sec == 60)
        {
            ++min;
            sec = 0;
        }
        if (min == 60)
        {
            ++hr;
            min = 0;
        }
        cout << hr << " : " << min << " : " << sec << " . " << msec << endl;
        msec += 100;    // usleep below waits 100 ms, so advance the counter by 100 ms
        usleep(100000); // 100,000 microseconds = 100 ms
    }
    return 0;
}

void newline()
{
    cout << "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n";
}
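Note that usleep is not exact and the error accumulates, so a timer like this drifts. A sketch of a drift-free variant (C++11) that sleeps until absolute deadlines computed from std::chrono::steady_clock:
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono;
    auto start = steady_clock::now();
    for (long long tick = 1; ; ++tick) {
        // sleep to an absolute deadline so per-iteration error cannot accumulate
        std::this_thread::sleep_until(start + tick * milliseconds(100));
        long long msec = duration_cast<milliseconds>(steady_clock::now() - start).count();
        std::cout << msec / 3600000 << " : "      // hours
                  << (msec / 60000) % 60 << " : " // minutes
                  << (msec / 1000) % 60 << " . "  // seconds
                  << msec % 1000 << std::endl;    // milliseconds
    }
}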