I was playing around with std::chrono.
While doing some testing I wondered whether I can get the ratio that was used to construct a std::chrono::duration, because I want to print it.
Here is some code to show what exactly I want to do:
You should be able to compile this on Windows and Linux (g++) by adding the -std=c++11 flag.
This small sample program measures the time your machine needs to count to the maximum int value.
main.cpp
#include<iostream>
#include "stopchrono.hpp"
#include<chrono>
#include<limits>
int main (){
stopchrono<> main_timer(true);
stopchrono<unsigned long long int,std::ratio<1,1000000000>,std::chrono::high_resolution_clock> m_timer(true);//<use unsigned long long int to store ticks, (1/1000000000) second per tick, obtain time_points from std::chrono::high_resolution_clock>
stopchrono<unsigned long long int,std::ratio<1,1000000000>> mtimer(true);
std::cout<<"count to max of int ..."<<std::endl;
for(int i=0;i<std::numeric_limits<int>::max();i++){}
std::cout<<"finished."<<std::endl;
main_timer.stop();
m_timer.stop();
mtimer.stop();
std::cout<<std::endl<<"It took me "<<(main_timer.elapsed()).count()<<" Seconds."<<std::endl;
std::cout<<" "<<(m_timer.elapsed()).count()<<std::endl;//print amount of elapsed ticks by std::chrono::duration::count()
std::cout<<" "<<(mtimer.elapsed()).count()<<std::endl;
std::cin.ignore();
return 0;
}
stopchrono.hpp
#ifndef STOPCHRONO_DEFINED
#define STOPCHRONO_DEFINED
#include<chrono>
template<class rep=double,class period=std::ratio<1>,class clock=std::chrono::steady_clock> // the first two template parameters determine the duration type that will be returned, the third defines from which clock the time_points are obtained
class stopchrono { // class for measuring how long parts of a program run
typename clock::time_point start_point;
std::chrono::duration<rep,period> elapsed_time;
bool running;
public:
stopchrono():
start_point(clock::now()),
elapsed_time(elapsed_time.zero()),
running(false)
{}
stopchrono(bool runnit)://construct an already started object (initializers listed in declaration order)
start_point(clock::now()),
elapsed_time(elapsed_time.zero()),
running(runnit)
{}
void start(){//set start_point to current clock::now() if not running
if(!running){
start_point=clock::now();
running=true;
}
}
void stop(){// add current duration to elapsed_time
if(running){
elapsed_time+=std::chrono::duration_cast<std::chrono::duration<rep,period>>(clock::now()-start_point);
running=false;
}
}
void reset(){// set elapsed_time to 0 and running to false
elapsed_time=elapsed_time.zero();
running=false;
}
std::chrono::duration<rep,period> elapsed(){//return elapsed_time
if(running){
return (std::chrono::duration_cast<std::chrono::duration<rep,period>>(elapsed_time+(clock::now()-start_point)));
}else{
return (elapsed_time);
}
}
bool is_running()const{// determine if the timer is running
return running;
}
};
#endif
actual sample output
count to max of int ...
finished.
It took me 81.6503 Seconds.
81650329344
81650331344
target sample output
count to max of int ...
finished.
It took me 81.6503 Seconds.
81650329344 (1/1000000000)seconds
81650331344
How can I obtain the period (std::ratio<1,1000000000>) from the returned duration, even if I don't know which one was used to create the stopchrono object?
Is that even possible?
The std::chrono::duration class has a member typedef period, which is what you are looking for. You can access it via decltype(your_variable)::period. Something like the following should do:
auto elapsed = main_timer.elapsed();
cout << elapsed.count() << " " << decltype(elapsed)::period::num << "/"
<< decltype(elapsed)::period::den << endl;
See also this working example which prints the elapsed time and the ratio of seconds.
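For instance, applied to the question's timers, a minimal sketch (using a hard-coded tick count in place of a real measurement) could look like this:
#include <iostream>
#include <chrono>
#include <ratio>

int main(){
    // hypothetical value standing in for m_timer.elapsed() from the question
    std::chrono::duration<unsigned long long int,std::ratio<1,1000000000>> d(81650329344ULL);

    // period is the std::ratio the duration was instantiated with;
    // num and den are its compile-time numerator and denominator
    std::cout << d.count() << " (" << decltype(d)::period::num << "/"
              << decltype(d)::period::den << ")seconds" << std::endl;
    return 0;
}
Inside stopchrono itself you could also expose period directly as a member typedef, since the class already knows it as a template parameter.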
Related
What's the best way to calculate a time difference in C++? I'm timing the execution speed of a program, so I'm interested in milliseconds. Better yet, seconds.milliseconds..
The accepted answer works, but needs to include ctime or time.h as noted in the comments.
See std::clock() function.
const clock_t begin_time = clock();
// do something
std::cout << float( clock () - begin_time ) / CLOCKS_PER_SEC;
If you want to calculate the execution time for yourself (not for the user), it is better to do this in clock ticks (not seconds).
EDIT:
required header files: <ctime> or <time.h>
I added this answer to clarify that the accepted answer shows CPU time, which may not be the time you want. According to the reference, there is CPU time and there is wall clock time. Wall clock time is the actual elapsed time, regardless of other conditions such as the CPU being shared by other processes. For example, when I used multiple processors to do a certain task, the CPU time was 18 s, while in actual wall clock time it took 2 s.
To get the actual (wall clock) time, you can do:
#include <chrono>
auto t_start = std::chrono::high_resolution_clock::now();
// the work...
auto t_end = std::chrono::high_resolution_clock::now();
double elapsed_time_ms = std::chrono::duration<double, std::milli>(t_end-t_start).count();
If you are using C++11, here is a simple wrapper (see this gist):
#include <iostream>
#include <chrono>
class Timer
{
public:
Timer() : beg_(clock_::now()) {}
void reset() { beg_ = clock_::now(); }
double elapsed() const {
return std::chrono::duration_cast<second_>
(clock_::now() - beg_).count(); }
private:
typedef std::chrono::high_resolution_clock clock_;
typedef std::chrono::duration<double, std::ratio<1> > second_;
std::chrono::time_point<clock_> beg_;
};
Or for c++03 on *nix:
#include <iostream>
#include <ctime>
class Timer
{
public:
Timer() { clock_gettime(CLOCK_REALTIME, &beg_); }
double elapsed() {
clock_gettime(CLOCK_REALTIME, &end_);
return end_.tv_sec - beg_.tv_sec +
(end_.tv_nsec - beg_.tv_nsec) / 1000000000.;
}
void reset() { clock_gettime(CLOCK_REALTIME, &beg_); }
private:
timespec beg_, end_;
};
Example of usage:
int main()
{
Timer tmr;
double t = tmr.elapsed();
std::cout << t << std::endl;
tmr.reset();
t = tmr.elapsed();
std::cout << t << std::endl;
return 0;
}
I would seriously consider the use of Boost, particularly boost::posix_time::ptime and boost::posix_time::time_duration (at http://www.boost.org/doc/libs/1_38_0/doc/html/date_time/posix_time.html).
It's cross-platform, easy to use, and in my experience provides the highest level of time resolution an operating system provides. Possibly also very important: it provides some very nice IO operators.
To use it to calculate the difference in program execution (to microseconds; probably overkill), it would look something like this [browser written, not tested]:
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>
using namespace boost::posix_time;

ptime time_start(microsec_clock::local_time());
//... execution goes here ...
ptime time_end(microsec_clock::local_time());
time_duration duration(time_end - time_start);
std::cout << duration << '\n';
boost 1.46.0 and up includes the Chrono library:
thread_clock class provides access to the real thread wall-clock, i.e.
the real CPU-time clock of the calling thread. The thread relative
current time can be obtained by calling thread_clock::now()
#include <boost/chrono/thread_clock.hpp>
{
...
using namespace boost::chrono;
thread_clock::time_point start = thread_clock::now();
...
thread_clock::time_point stop = thread_clock::now();
std::cout << "duration: " << duration_cast<milliseconds>(stop - start).count() << " ms\n";
In Windows: use GetTickCount
#include <windows.h> // GetTickCount definition
#include <iostream>
int main()
{
    DWORD dw1 = GetTickCount();
    //Do something
    DWORD dw2 = GetTickCount();
    std::cout << "Time difference is " << (dw2 - dw1) << " milliseconds" << std::endl;
}
You can also use clock_gettime. This function can be used to measure:
System-wide real-time clock
System-wide monotonic clock
Per-process CPU time
Per-thread CPU time
Code is as follows:
#include <time.h>
#include <iostream>
int main(){
    timespec ts_beg, ts_end;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_beg);
    // ... the work to be measured goes here ...
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_end);
    std::cout << (ts_end.tv_sec - ts_beg.tv_sec) + (ts_end.tv_nsec - ts_beg.tv_nsec) / 1e9 << " sec";
}
just in case you are on Unix, you can use time to get the execution time:
$ g++ myprog.cpp -o myprog
$ time ./myprog
For me, the easiest way is:
#include <boost/timer.hpp>
boost::timer t;
double duration;
t.restart();
/* DO SOMETHING HERE... */
duration = t.elapsed();
t.restart();
/* DO OTHER STUFF HERE... */
duration = t.elapsed();
Using this piece of code you don't have to do the classic end - start.
Enjoy your favorite approach.
Just a side note: if you're running on Windows, and you really really need precision, you can use QueryPerformanceCounter. It gives you time in (potentially) nanoseconds.
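A bare-bones sketch of that (error handling omitted; QueryPerformanceFrequency supplies the counts-per-second needed to convert the raw counter values):
#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counts per second
    QueryPerformanceCounter(&start);

    Sleep(1000);                        // the work being timed

    QueryPerformanceCounter(&end);
    double seconds = double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
    std::cout << seconds << " s\n";
    return 0;
}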
Get the system time in milliseconds at the beginning, and again at the end, and subtract.
To get the number of milliseconds since 1970 in POSIX you would write:
struct timeval tv;
gettimeofday(&tv, NULL);
return ((((unsigned long long)tv.tv_sec) * 1000) +
(((unsigned long long)tv.tv_usec) / 1000));
To get the number of milliseconds since 1601 on Windows you would write:
SYSTEMTIME systime;
FILETIME filetime;
GetSystemTime(&systime);
if (!SystemTimeToFileTime(&systime, &filetime))
return 0;
unsigned long long ns_since_1601;
ULARGE_INTEGER* ptr = (ULARGE_INTEGER*)&ns_since_1601;
// copy the result into the ULARGE_INTEGER; this is actually
// copying the result into the ns_since_1601 unsigned long long.
ptr->u.LowPart = filetime.dwLowDateTime;
ptr->u.HighPart = filetime.dwHighDateTime;
// Compute the number of milliseconds since 1601; we have to
// divide by 10,000, since the current value is the number of 100ns
// intervals since 1601, not ms.
return (ns_since_1601 / 10000);
If you cared to normalize the Windows answer so that it also returned the number of milliseconds since 1970, then you would have to adjust your answer by 11644473600000 milliseconds. But that isn't necessary if all you care about is the elapsed time.
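In code, that normalization is a single subtraction (the helper name below is just for illustration):
// convert milliseconds since 1601 (Windows epoch) to milliseconds since 1970 (Unix epoch)
unsigned long long windows_ms_to_unix_ms(unsigned long long ms_since_1601)
{
    return ms_since_1601 - 11644473600000ULL; // offset between the two epochs, in ms
}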
If you are using:
tstart = clock();
// ...do something...
tend = clock();
Then you will need the following to get time in seconds:
time = (tend - tstart) / (double) CLOCKS_PER_SEC;
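A complete, compilable version of that pattern might look like this (the busy loop is just a stand-in for the work you want to time):
#include <ctime>
#include <iostream>

int main()
{
    clock_t tstart = clock();
    for (volatile long i = 0; i < 100000000; ++i) {} // ...do something...
    clock_t tend = clock();

    double time = (tend - tstart) / (double) CLOCKS_PER_SEC; // CPU seconds
    std::cout << time << " s\n";
    return 0;
}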
This seems to work fine on an Intel Mac running OS X 10.7:
#include <time.h>
#include <iostream>
time_t start = time(NULL);
//Do your work
time_t end = time(NULL);
std::cout<<"Execution Time: "<< (double)(end-start)<<" Seconds"<<std::endl;
I have to write a stopwatch class in C++. The way I am trying to do this is by defining a variable to save laps (named 'time') and a bool that I use to see whether the watch is started or stopped. When a char is entered the timer should start and set time1. When another char is entered the bool switches to false, time2 is set, and time2-time1 is printed. This should be repeatable until 'n' is entered.
I am also not quite sure I understand what unit of time time_t is in. In my code I get a return value of ±40 units every time I try to measure the interval of a lap, which I am guessing is the runtime of the program and not actually the time of the interval.
#ifndef stoppuhr_hpp
#define stoppuhr_hpp
#include <iostream>
#include <time.h>
class Stoppuhr{
private:
bool running;
clock_t time;
public:
void pushButtonStartStop () {
char t=0;
running=false;
time=0;
std::cout << "to start/stop watch please press a key, to end
clock type 'n' " << std::endl;
clock_t time1=0;
clock_t time2=0;
std::cout << time;
while (t!='n') {
std::cin >> t;
running= !running;
if (running) {
time1=clock();
}
else {
time2=clock();
time+=time2-time1;
std::cout << time << std::endl;
}
}
}
};
#endif /* stoppuhr_hpp */
I also am not quite sure I understand what unit of time time_t is in.
The unit of time represented by time_t is implementation-defined. Usually it represents seconds, as specified by POSIX.
However, you don't use time_t anywhere in your program.
I am guessing is the runtime of the program
I recommend not guessing, but reading the documentation instead. clock() returns the processor time used by the program since some point in time, so subtracting two timepoints returned by clock() gives you the processor time used between them. The unit of clock_t is 1 / CLOCKS_PER_SEC seconds.
i get a return value of ±40 units every time
The granularity of clock is implementation-defined. It might be 40 units on your system. The program consumes hardly any processor time while it waits for input.
I have to write a stopwatch class
Stopwatches typically measure real world time i.e. wall clock time. Measuring processor time would be futile for this task.
I recommend using std::chrono::steady_clock::now instead.
If you insist on using time.h, then you can use time(nullptr) to get the wall clock time but I don't recommend it.
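For illustration, a minimal sketch of such a stopwatch measurement with std::chrono::steady_clock (prompt text and variable names are just placeholders):
#include <chrono>
#include <iostream>

int main()
{
    std::cout << "press Enter to start, press Enter again to stop" << std::endl;
    std::cin.get();
    auto start = std::chrono::steady_clock::now();     // wall clock style, monotonic

    std::cin.get();
    auto stop = std::chrono::steady_clock::now();

    std::chrono::duration<double> lap = stop - start;  // seconds as a double
    std::cout << "lap took " << lap.count() << " s" << std::endl;
    return 0;
}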
I am using QueryPerformanceCounter to measure the time of some functions/operations. It used to give me correct numbers, for instance, I could test Sleep(1000) and it would return a time very close to 1 second. Now, it returns a time very different. I am not sure what the issue is as the code hasn't changed at all. Here is the code:
Expected Output:
duration : 1.000 seconds
Actual Output:
duration : 187.988 seconds
Code:
#include <windows.h>
#include <iostream>
#pragma comment (lib, "winmm.lib")
struct Clock{
Clock(){}
virtual ~Clock(){}
void start();
void stop();
long double duration();
LARGE_INTEGER _start, _end, _freq;
};
void Clock::start(){
_start.QuadPart = 0;
QueryPerformanceCounter(&_freq);
QueryPerformanceCounter(&_start);
}
void Clock::stop(){
QueryPerformanceCounter(&_end);
}
long double Clock::duration(){
//microseconds
LARGE_INTEGER delta;
delta.QuadPart = (_end.QuadPart - _start.QuadPart) * 1000000;
long double the_duration = ((long double)delta.QuadPart) / _freq.QuadPart;
std::cout << "duration : " << the_duration << " seconds" << std::endl;
return the_duration;
}
void main(){
Clock clock;
clock.start();
Sleep(1000);
clock.stop();
clock.duration();
std::cin.get();
}
You are assuming that the frequency from the performance counter is exactly 1000000 Hz.
You need to call QueryPerformanceFrequency instead, as the frequency can vary (some kernels use the motherboard's 1.024 MHz timer, others use the CPUs time-stamp-counter, which runs at approximately the CPU's clock frequency).
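For reference, a corrected sketch of the question's Clock (the virtual destructor and the winmm pragma are dropped for brevity, and seconds are computed directly instead of going through microseconds):
#include <windows.h>
#include <iostream>

struct Clock{
    void start();
    void stop();
    long double duration();
    LARGE_INTEGER _start, _end, _freq;
};

void Clock::start(){
    QueryPerformanceFrequency(&_freq);  // counts per second, queried from the OS
    QueryPerformanceCounter(&_start);
}

void Clock::stop(){
    QueryPerformanceCounter(&_end);
}

long double Clock::duration(){
    // seconds = elapsed counts / counts per second
    long double the_duration =
        (long double)(_end.QuadPart - _start.QuadPart) / (long double)_freq.QuadPart;
    std::cout << "duration : " << the_duration << " seconds" << std::endl;
    return the_duration;
}

int main(){
    Clock clock;
    clock.start();
    Sleep(1000);
    clock.stop();
    clock.duration();  // prints something very close to 1.000 seconds
    std::cin.get();
    return 0;
}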
I use it in VBA. Sometimes I get 8976543566 ms instead of 40
I deal with it by calling it twice at the beginning of the procedure:
If gbolShowRunTime = True Then
QueryPerformanceCounter curFrequencyStartLoop
End If
If gbolShowBFTime = True Then
QueryPerformanceCounter curFrequencyEndLoop 'Get the end time
QueryPerformanceCounter curFrequencyStartLoop ' reseed it
End If
Thereafter it works fine.
I am still new to C++. Is the clock function absolute (meaning it counts how long you sleep for), or does it only count the time the application actually executes for?
I want a reliable way to produce exact intervals of 1 second. I am saving files, so I need to account for that. I was returning the runtime for that in milliseconds, and then sleeping for the remainder.
Is there a more accurate or simpler way to do this?
EDIT:
The main problem I am having is that I am getting a negative number:
double FCamera::getRuntime(clock_t* end, clock_t* start)
{
return((double(end - start)/CLOCKS_PER_SEC)*1000);
}
clock_t start = clock();
doWork();
clock_t end = clock();
double runtimeInMilliseconds = getRuntime(&end, &start);
It's giving me a negative number, what's up with that?
clock() returns the number of clock ticks elapsed since the program was launched. If you want to convert the value returned by clock into seconds divide by CLOCKS_PER_SEC (and multiply for the other way around).
There is just one pitfall: the initial moment of reference used by clock as the beginning of the program execution may vary between platforms. To calculate the actual processing times of a program, the value returned by clock should be compared to a value returned by an initial call to clock.
EDIT
larsman has been so kind as to point out other pitfalls in the comments. I have included them here for future reference.
On several other implementations, the value returned by clock() also includes the times of any children whose status has been collected via wait(2) (or another wait-type call). Linux does not include the times of waited-for children in the value returned by clock().
Note that the time can wrap around. On a 32-bit system where CLOCKS_PER_SEC equals 1000000 [as mandated by POSIX] this function will return the same value approximately every 72 minutes.
EDIT2
After messing around for a while, here is my portable (Linux/Windows) msleep. Be wary though, I'm not experienced with C/C++ and it will most likely contain the stupidest error ever.
#ifdef _WIN32
#include <windows.h>
#define msleep(ms) Sleep((DWORD) ms)
#else
#include <unistd.h>
inline void msleep(unsigned long ms) {
while (ms--) usleep(1000);
}
#endif
You missed the * (dereference).
Your arguments are pointers (addresses of clock_t variables),
so your code must be modified to:
return((double(*end - *start)/CLOCKS_PER_SEC)*1000);
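Putting the fix into a small, self-contained example (the loop is just a stand-in for doWork()):
#include <ctime>
#include <iostream>

// dereference the pointers before subtracting, then convert clock ticks to milliseconds
double getRuntime(clock_t* end, clock_t* start)
{
    return (double(*end - *start) / CLOCKS_PER_SEC) * 1000;
}

int main()
{
    clock_t start = clock();
    for (volatile long i = 0; i < 100000000; ++i) {} // stand-in for doWork()
    clock_t end = clock();
    std::cout << getRuntime(&end, &start) << " ms\n"; // now non-negative
    return 0;
}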
Under windows, you can use:
VOID WINAPI Sleep(
__in DWORD dwMilliseconds
);
In linux, you will want to use:
#include <unistd.h>
unsigned int sleep(unsigned int seconds);
Notice the parameter difference: milliseconds under Windows and seconds under Linux.
My approach relies on:
int gettimeofday(struct timeval *tv, struct timezone *tz);
which gives the number of seconds and microseconds since the Epoch. According to the man pages:
The tv argument is a struct timeval (as specified in <sys/time.h>):
struct timeval {
time_t tv_sec; /* seconds */
suseconds_t tv_usec; /* microseconds */
};
So here we go:
#include <sys/time.h>
#include <unistd.h> // for sleep()
#include <iostream>
#include <iomanip>
static long myclock()
{
struct timeval tv;
gettimeofday(&tv, NULL);
return (tv.tv_sec * 1000000) + tv.tv_usec;
}
double getRuntime(long* end, long* start)
{
return (*end - *start);
}
void doWork()
{
sleep(3);
}
int main(void)
{
long start = myclock();
doWork();
long end = myclock();
std::cout << "Time elapsed: " << std::setprecision(6) << getRuntime(&end, &start)/1000.0 << " miliseconds" << std::endl;
std::cout << "Time elapsed: " << std::setprecision(3) << getRuntime(&end, &start)/1000000.0 << " seconds" << std::endl;
return 0;
}
Outputs:
Time elapsed: 3000.08 milliseconds
Time elapsed: 3 seconds
I am writing a program that will be used on a Solaris machine. I need a way of keeping track of how many seconds has passed since the start of the program. I'm talking very simple here. For example I would have an int seconds = 0; but how would I go about updating the seconds variable as each second passes?
It seems that some of the various time functions that I've looked at only work on Windows machines, so I'm just not sure.
Any suggestions would be appreciated.
Thanks for your time.
A very simple method:
#include <time.h>
time_t start = time(0);
double seconds_since_start = difftime( time(0), start);
The main drawback to this is that you have to poll for the updates. You'll need platform support or some other lib/framework to do this on an event basis.
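A rough sketch of that polling approach (a plain POSIX sleep between checks, so it should also build on Solaris):
#include <ctime>
#include <cstdio>
#include <unistd.h> // sleep()

int main()
{
    time_t start = time(0);
    for (int i = 0; i < 5; ++i) {   // poll a few times, roughly once per second
        sleep(1);
        printf("running for about %.0f seconds\n", difftime(time(0), start));
    }
    return 0;
}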
Use std::chrono.
#include <chrono>
#include <iostream>
int main(int argc, char *argv[])
{
auto start_time = std::chrono::high_resolution_clock::now();
// ... the rest of the program runs here ...
auto current_time = std::chrono::high_resolution_clock::now();
std::cout << "Program has been running for " << std::chrono::duration_cast<std::chrono::seconds>(current_time - start_time).count() << " seconds" << std::endl;
return 0;
}
If you only need a resolution of seconds, then std::chrono::steady_clock should be sufficient.
You are approaching it backwards. Instead of having a variable you have to worry about updating every second, just initialize a variable on program start with the current time, and then whenever you need to know how many seconds have elapsed, subtract the current time from that initial time. Much less overhead that way, and no need to babysit some timing-related variable update.
#include <stdio.h>
#include <time.h>
#include <windows.h>
using namespace std;
void wait ( int seconds );
int main ()
{
time_t start, end;
double diff;
time (&start); //useful call
for (int i=0;i<10;i++) //this loop is useless, just to pass some time.
{
printf ("%s\n", ctime(&start));
wait(1);
}
time (&end);//useful call
diff = difftime(end,start);//this will give you time spent between those two calls.
printf("difference in seconds=%f",diff); //convert secs as u like
system("pause");
return 0;
}
void wait ( int seconds )
{
clock_t endwait;
endwait = clock () + seconds * CLOCKS_PER_SEC ;
while (clock() < endwait) {}
}
This should work fine on Solaris/Unix as well; just remove the Windows-specific parts (the <windows.h> include and system("pause")).
You just need to store the date/time when the application started. Whenever you need to display how long your program has been running, get the current date/time and subtract the stored start time.
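In code, that can be as simple as the following sketch (using std::chrono here; the time()/difftime() approach shown above works just as well):
#include <chrono>
#include <iostream>

int main()
{
    auto app_start = std::chrono::steady_clock::now();  // stored once at startup

    // ... later, whenever the elapsed time is needed:
    auto now = std::chrono::steady_clock::now();
    auto uptime = std::chrono::duration_cast<std::chrono::seconds>(now - app_start);
    std::cout << "running for " << uptime.count() << " seconds" << std::endl;
    return 0;
}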