I have tried clock_gettime(CLOCK_REALTIME) and gettimeofday() without luck, and even the most basic call, clock(), which returns 0 for me(?).
But none of them count the time spent sleeping. I don't need a high-resolution timer, but I need something that returns the elapsed time in ms.
EDIT: Final program:
#include <iostream>
#include <string>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h> // for sleep()
using namespace std;

// Non-system sleep (busy-waits, wasting CPU)
void wait(int seconds)
{
    clock_t endwait = clock() + seconds * CLOCKS_PER_SEC;
    while (clock() < endwait) {}
}

void show_time()
{
    timeval tv;
    gettimeofday(&tv, 0);
    time_t t = tv.tv_sec;
    long sub_sec = tv.tv_usec;
    cout << "t value: " << t << endl;
    cout << "sub_sec value: " << sub_sec << endl;
}

int main()
{
    show_time();
    sleep(2); // system sleep
    show_time();
    wait(2);  // busy-wait
    show_time();
}
You need to try gettimeofday() again; it certainly counts wall-clock time, so it keeps counting while the process sleeps as well.
#include <sys/time.h>

long long getmsofday()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}
...
long long start = getmsofday();
do_something();
long long end = getmsofday();
printf("do_something took %lld ms\n",end - start);
Your problem probably relates to integer division. You need to cast one of the division operands to float/double to avoid truncating fractions of a second.
clock_t start = clock();
// do stuff
// Can cast either operand for the division result to a double.
// I chose the right-hand operand, CLOCKS_PER_SEC.
double time_passed = clock() / static_cast<double>(CLOCKS_PER_SEC);
[Edit] As pointed out, clock() measures CPU time (clock ticks) and is not well-suited for wall-clock timing. If you want a portable solution for that, see Boost.Timer as a possible option.
You actually want clock_gettime(CLOCK_MONOTONIC, ...).
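A minimal sketch of what that might look like as an elapsed-ms helper on a POSIX system (the name monotonic_ms is mine):
#include <time.h>

// Hypothetical helper: milliseconds on a clock that is immune to
// wall-clock adjustments (NTP steps, manual time changes).
long long monotonic_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}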
Related
What's the best way to calculate a time difference in C++? I'm timing the execution speed of a program, so I'm interested in milliseconds. Better yet, seconds.milliseconds.
The accepted answer works, but needs to include ctime or time.h as noted in the comments.
See std::clock() function.
const clock_t begin_time = clock();
// do something
std::cout << float(clock() - begin_time) / CLOCKS_PER_SEC;
If you want to calculate the execution time for yourself (not for the user), it is better to do this in clock ticks (not seconds).
EDIT:
The relevant header file is <ctime> or <time.h>.
I added this answer to clarify that the accepted answer shows CPU time, which may not be the time you want. According to the reference, there are CPU time and wall-clock time. Wall-clock time shows the actual elapsed time regardless of other conditions, such as the CPU being shared by other processes. For example, I used multiple processors to do a certain task, and the CPU time was as high as 18 s where it actually took only 2 s of wall-clock time.
To get the actual time you do,
#include <chrono>
auto t_start = std::chrono::high_resolution_clock::now();
// the work...
auto t_end = std::chrono::high_resolution_clock::now();
double elapsed_time_ms = std::chrono::duration<double, std::milli>(t_end-t_start).count();
If you are using C++11, here is a simple wrapper (see this gist):
#include <iostream>
#include <chrono>
class Timer
{
public:
Timer() : beg_(clock_::now()) {}
void reset() { beg_ = clock_::now(); }
double elapsed() const {
return std::chrono::duration_cast<second_>
(clock_::now() - beg_).count(); }
private:
typedef std::chrono::high_resolution_clock clock_;
typedef std::chrono::duration<double, std::ratio<1> > second_;
std::chrono::time_point<clock_> beg_;
};
Or for C++03 on *nix:
#include <iostream>
#include <ctime>
class Timer
{
public:
Timer() { clock_gettime(CLOCK_REALTIME, &beg_); }
double elapsed() {
clock_gettime(CLOCK_REALTIME, &end_);
return end_.tv_sec - beg_.tv_sec +
(end_.tv_nsec - beg_.tv_nsec) / 1000000000.;
}
void reset() { clock_gettime(CLOCK_REALTIME, &beg_); }
private:
timespec beg_, end_;
};
Example of usage:
int main()
{
Timer tmr;
double t = tmr.elapsed();
std::cout << t << std::endl;
tmr.reset();
t = tmr.elapsed();
std::cout << t << std::endl;
return 0;
}
I would seriously consider the use of Boost, particularly boost::posix_time::ptime and boost::posix_time::time_duration (at http://www.boost.org/doc/libs/1_38_0/doc/html/date_time/posix_time.html).
It's cross-platform, easy to use, and in my experience provides the highest level of time resolution an operating system provides. Possibly also very important: it provides some very nice IO operators.
To use it to calculate the difference in program execution (to microseconds; probably overkill), it would look something like this [browser written, not tested]:
#include <boost/date_time/posix_time/posix_time.hpp>
using namespace boost::posix_time;

ptime time_start(microsec_clock::local_time());
//... execution goes here ...
ptime time_end(microsec_clock::local_time());
time_duration duration(time_end - time_start);
cout << duration << '\n';
Boost 1.46.0 and up includes the Chrono library:
thread_clock class provides access to the real thread wall-clock, i.e.
the real CPU-time clock of the calling thread. The thread relative
current time can be obtained by calling thread_clock::now()
#include <boost/chrono/thread_clock.hpp>
{
    ...
    using namespace boost::chrono;
    thread_clock::time_point start = thread_clock::now();
    ...
    thread_clock::time_point stop = thread_clock::now();
    std::cout << "duration: " << duration_cast<milliseconds>(stop - start).count() << " ms\n";
}
In Windows: use GetTickCount
//GetTickCount definition
#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    DWORD dw1 = GetTickCount();
    //Do something
    DWORD dw2 = GetTickCount();
    cout << "Time difference is " << (dw2 - dw1) << " milliseconds" << endl;
}
You can also use clock_gettime. This method can be used to measure:
System-wide real-time clock
System-wide monotonic clock
Per-process CPU time
Per-thread CPU time
Code is as follows:
#include <time.h>
#include <iostream>

int main(){
    timespec ts_beg, ts_end;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_beg);
    // do the work to be measured here
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_end);
    std::cout << (ts_end.tv_sec - ts_beg.tv_sec) + (ts_end.tv_nsec - ts_beg.tv_nsec) / 1e9 << " sec";
}
Just in case you are on Unix, you can use time to get the execution time:
$ g++ myprog.cpp -o myprog
$ time ./myprog
For me, the easiest way is:
#include <boost/timer.hpp>
boost::timer t;
double duration;
t.restart();
/* DO SOMETHING HERE... */
duration = t.elapsed();
t.restart();
/* DO OTHER STUFF HERE... */
duration = t.elapsed();
Using this piece of code you don't have to do the classic end - start.
Enjoy your favorite approach.
Just a side note: if you're running on Windows, and you really really need precision, you can use QueryPerformanceCounter. It gives you time in (potentially) nanoseconds.
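A rough sketch of how it is typically used, dividing the tick delta by the hardware-dependent counter frequency:
#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // ticks per second
    QueryPerformanceCounter(&t0);
    // ... the work ...
    QueryPerformanceCounter(&t1);
    double ms = (t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart;
    std::cout << ms << " ms\n";
}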
Get the system time in milliseconds at the beginning, and again at the end, and subtract.
To get the number of milliseconds since 1970 in POSIX you would write something like this small helper:
#include <sys/time.h>

unsigned long long ms_since_1970()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return ((((unsigned long long)tv.tv_sec) * 1000) +
            (((unsigned long long)tv.tv_usec) / 1000));
}
To get the number of milliseconds since 1601 on Windows you would write a similar helper:
#include <windows.h>

unsigned long long ms_since_1601()
{
    SYSTEMTIME systime;
    FILETIME filetime;
    GetSystemTime(&systime);
    if (!SystemTimeToFileTime(&systime, &filetime))
        return 0;
    unsigned long long ns_since_1601;
    ULARGE_INTEGER* ptr = (ULARGE_INTEGER*)&ns_since_1601;
    // copy the result into the ULARGE_INTEGER; this is actually
    // copying the result into the ns_since_1601 unsigned long long.
    ptr->u.LowPart = filetime.dwLowDateTime;
    ptr->u.HighPart = filetime.dwHighDateTime;
    // Compute the number of milliseconds since 1601; we have to
    // divide by 10,000, since the current value is the number of 100ns
    // intervals since 1601, not ms.
    return (ns_since_1601 / 10000);
}
If you cared to normalize the Windows answer so that it also returned the number of milliseconds since 1970, then you would have to adjust your answer by 11644473600000 milliseconds. But that isn't necessary if all you care about is the elapsed time.
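If you did want that normalization, it is a single subtraction; the constant is the span between the two epochs (11644473600 seconds) in milliseconds:
// Offset between the 1601 and 1970 epochs, in milliseconds.
const unsigned long long EPOCH_DIFF_MS = 11644473600000ULL;
unsigned long long normalized = ms_since_1601() - EPOCH_DIFF_MS;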
If you are using:
tstart = clock();
// ...do something...
tend = clock();
Then you will need the following to get time in seconds:
time = (tend - tstart) / (double) CLOCKS_PER_SEC;
This seems to work fine for an Intel Mac on 10.7:
#include <time.h>
#include <iostream>
time_t start = time(NULL);
//Do your work
time_t end = time(NULL);
std::cout<<"Execution Time: "<< (double)(end-start)<<" Seconds"<<std::endl;
I am trying to get the running time of the insertion sort algorithm. MSDN says that CTime can be used to get the elapsed time, but I tried many times and always got zero. I don't think it's possible that running this algorithm takes zero time; there must be some error or something else. Could anybody help me? I posted my code below:
#include <cstdlib>
#include <iostream>
#include <atltime.h>
using namespace std;
// function prototypes (definitions omitted)
void insertion_sort(int arr[], int length);
int *create_array(int arrSize);

int main() {
    // Create random array
    int arraySize = 100;
    int *randomArray = new int[arraySize];
    int s;
    for (s = 0; s < arraySize; s++) {
        randomArray[s] = (rand() % 99) + 1;
    }
    CTime startTime = CTime::GetCurrentTime();
    int iter;
    for (iter = 0; iter < 1000; iter++) {
        insertion_sort(randomArray, arraySize);
    }
    CTime endTime = CTime::GetCurrentTime();
    CTimeSpan elapsedTime = endTime - startTime;
    double nTMSeconds = elapsedTime.GetTotalSeconds() * 1000;
    cout << nTMSeconds;
    return 0;
} // end of main
CTime isn't meant to time things to a resolution of less than one second. I think what you are really after is something like GetTickCount or GetTickCount64. See this MSDN link.
GetTickCount function
Retrieves the number of milliseconds that have elapsed since the system was started, up to 49.7 days.
If using GetTickCount64 you could declare startTime and endTime this way:
uint64_t endTime, startTime, diffTime;
Then use GetTickCount64 to retrieve the time in milliseconds with something like
startTime = GetTickCount64();
... do stuff ...
endTime = GetTickCount64();
diffTime = endTime - startTime;
And of course diffTime can be used however you want.
If you don't need to time things for more than a month, you can simply use GetTickCount, and the type returned will be a uint32_t instead of uint64_t.
If you need resolution beyond 1 millisecond for timing and your computer supports a high resolution timer then this code may work:
LARGE_INTEGER freq;
double time_sec = 0.0;
if (QueryPerformanceFrequency(&freq))
{
LARGE_INTEGER start;
LARGE_INTEGER stop;
QueryPerformanceCounter(&start);
// Do Stuff to time Here
QueryPerformanceCounter(&stop);
time_sec = (uint64_t)(stop.QuadPart - start.QuadPart) / (double)freq.QuadPart;
}
else {
cout << "Your computer doesn't have a high resolution timer to use";
}
Information on the high performance timer can be found in this MSDN entry
For timing an algorithm (approximately in ms), which of these two approaches is better:
clock_t start = clock();
algorithm();
clock_t end = clock();
double time = (double) (end-start) / CLOCKS_PER_SEC * 1000.0;
Or,
time_t start = time(0);
algorithm();
time_t end = time(0);
double time = difftime(end, start) * 1000.0;
Also, from some discussion in the C++ channel on Freenode, I know clock has a very bad resolution, so the timing will be zero for a (relatively) fast algorithm. But which has better resolution, time() or clock()? Or is it the same?
<chrono> would be a better library if you're using C++11.
#include <iostream>
#include <chrono>
#include <thread>
void f()
{
std::this_thread::sleep_for(std::chrono::seconds(1));
}
int main()
{
auto t1 = std::chrono::high_resolution_clock::now();
f();
auto t2 = std::chrono::high_resolution_clock::now();
std::cout << "f() took "
<< std::chrono::duration_cast<std::chrono::milliseconds>(t2-t1).count()
<< " milliseconds\n";
}
Example taken from here.
It depends what you want: time measures the real time while clock measures the processing time taken by the current process. If your process sleeps for any appreciable amount of time, or the system is busy with other processes, the two will be very different.
http://en.cppreference.com/w/cpp/chrono/c/clock
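A quick sketch that makes the difference visible (sleep for two seconds and read both clocks):
#include <time.h>
#include <unistd.h> // sleep()
#include <iostream>

int main()
{
    clock_t c0 = clock();
    time_t  t0 = time(0);
    sleep(2);
    // clock() barely advances across the sleep; time() advances ~2 s.
    std::cout << "clock(): " << double(clock() - c0) / CLOCKS_PER_SEC << " s CPU\n";
    std::cout << "time():  " << difftime(time(0), t0) << " s wall\n";
}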
The time_t type is probably going to be an integer, which means it has a resolution of one second.
The first piece of code: it will only count the time the CPU was doing something, so when you sleep(), it will not count anything. This can be worked around by adding in the time you sleep(), but the result will probably start to drift after a while.
The second piece: only a resolution of seconds, which is not so great if you need sub-second time readings.
For time readings with the best resolution you can get, you should do something like this:
#include <time.h>

double getUnixTime(void)
{
    struct timespec tv;
    if(clock_gettime(CLOCK_REALTIME, &tv) != 0) return 0;
    return (tv.tv_sec + (tv.tv_nsec / 1000000000.0));
}
double start_time = getUnixTime();
double stop_time, difference;
doYourStuff();
stop_time = getUnixTime();
difference = stop_time - start_time;
On most systems its resolution will be down to a few microseconds, but it can vary between CPUs, and probably even between major kernel versions.
<chrono> is the best; Visual Studio 2013 provides it. Personally, I have tried all the methods mentioned above, and I strongly recommend the <chrono> library. It can track wall time and at the same time has good resolution (much less than a second).
How about gettimeofday()? When it is called, it updates two structs (timeval and timezone) with timing information. Usually, passing a timeval struct is enough and the timezone struct can be set to NULL. The updated timeval struct will have two members, tv_sec and tv_usec. tv_sec is the number of seconds since 00:00:00, January 1, 1970 (the Unix epoch) and tv_usec is the additional number of microseconds on top of tv_sec. Thus, one can get time expressed at very good resolution.
It can be used as follows:
#include <sys/time.h>

struct timeval start_time;
gettimeofday(&start_time, NULL); // passing a timeval is usually enough

long seconds  = start_time.tv_sec;  // time in seconds
long useconds = start_time.tv_usec; // additional time in microseconds
long long desired_time = seconds * 1000000LL + useconds; // time in microseconds
I am still new to C++. Is the clock function absolute (meaning it counts how long you sleep for), or does it only count the time the application actually executes?
I want a reliable way to produce exact intervals of 1 second. I am saving files, so I need to account for that: I was measuring the runtime of that in milliseconds, and then sleeping for the remainder.
Is there a more accurate or simpler way to do this?
EDIT:
The main problem I am having is that I am getting a negative number:
double FCamera::getRuntime(clock_t* end, clock_t* start)
{
return((double(end - start)/CLOCKS_PER_SEC)*1000);
}
clock_t start = clock();
doWork();
clock_t end = clock();
double runtimeInMilliseconds = getRuntime(&end, &start);
It's giving me a negative number, what's up with that?
Walter
clock() returns the number of clock ticks elapsed since the program was launched. If you want to convert the value returned by clock into seconds divide by CLOCKS_PER_SEC (and multiply for the other way around).
There is just one pitfall, the initial moment of reference used by clock as the beginning of the program execution may vary between platforms. To calculate the actual processing times of a program, the value returned by clock should be compared to a value returned by an initial call to clock.
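A minimal sketch of that compare-two-calls pattern:
#include <time.h>
#include <stdio.h>

int main(void)
{
    clock_t begin = clock(); // initial reference call
    /* ... work to be timed ... */
    clock_t end = clock();
    printf("%f seconds of CPU time\n", (double)(end - begin) / CLOCKS_PER_SEC);
    return 0;
}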
EDIT
larsman has been so kind to post other pitfalls in the comments. I have included them here for future reference.
On several other implementations, the value returned by clock() also includes the times of any children whose status has been collected via wait(2) (or another wait-type call). Linux does not include the times of waited-for children in the value returned by clock().
Note that the time can wrap around. On a 32-bit system where CLOCKS_PER_SEC equals 1000000 [as mandated by POSIX] this function will return the same value approximately every 72 minutes.
EDIT2
After messing around a while, here is my portable (Linux/Windows) msleep. Be wary though: I'm not experienced with C/C++ and it will most likely contain the stupidest error ever.
#ifdef _WIN32
#include <windows.h>
#define msleep(ms) Sleep((DWORD)(ms))
#else
#include <unistd.h>
inline void msleep(unsigned long ms) {
while (ms--) usleep(1000);
}
#endif
You missed the * (dereference).
Your arguments are pointers (addresses of clock_t variables),
so your code must be modified to:
return((double(*end - *start)/CLOCKS_PER_SEC)*1000);
Under Windows, you can use:
VOID WINAPI Sleep(
__in DWORD dwMilliseconds
);
On Linux, you will want to use:
#include <unistd.h>
unsigned int sleep(unsigned int seconds);
Notice the parameter difference: milliseconds under Windows and seconds under Linux. A millisecond-granularity sleep for Linux is sketched below.
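One option is a small wrapper over POSIX nanosleep() (a sketch; the name msleep_posix is mine):
#include <time.h>

void msleep_posix(unsigned long ms)
{
    struct timespec req;
    req.tv_sec  = ms / 1000;              // whole seconds
    req.tv_nsec = (ms % 1000) * 1000000L; // remainder in nanoseconds
    nanosleep(&req, NULL);
}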
My approach relies on:
int gettimeofday(struct timeval *tv, struct timezone *tz);
which gives the number of seconds and microseconds since the Epoch. According to the man pages:
The tv argument is a struct timeval (as specified in <sys/time.h>):
struct timeval {
time_t tv_sec; /* seconds */
suseconds_t tv_usec; /* microseconds */
};
So here we go:
#include <sys/time.h>
#include <unistd.h> // for sleep()
#include <iostream>
#include <iomanip>
static long myclock()
{
struct timeval tv;
gettimeofday(&tv, NULL);
return (tv.tv_sec * 1000000) + tv.tv_usec;
}
double getRuntime(long* end, long* start)
{
return (*end - *start);
}
void doWork()
{
sleep(3);
}
int main(void)
{
long start = myclock();
doWork();
long end = myclock();
std::cout << "Time elapsed: " << std::setprecision(6) << getRuntime(&end, &start)/1000.0 << " miliseconds" << std::endl;
std::cout << "Time elapsed: " << std::setprecision(3) << getRuntime(&end, &start)/1000000.0 << " seconds" << std::endl;
return 0;
}
Outputs:
Time elapsed: 3000.08 milliseconds
Time elapsed: 3 seconds
Does anyone know how to calculate time difference in C++ in milliseconds?
I used difftime but it doesn't have enough precision for what I'm trying to measure.
I know this is an old question, but there's an updated answer for C++0x. There is a new header called <chrono> which contains modern time utilities. Example use:
#include <iostream>
#include <thread>
#include <chrono>
int main()
{
typedef std::chrono::high_resolution_clock Clock;
typedef std::chrono::milliseconds milliseconds;
Clock::time_point t0 = Clock::now();
std::this_thread::sleep_for(milliseconds(50));
Clock::time_point t1 = Clock::now();
milliseconds ms = std::chrono::duration_cast<milliseconds>(t1 - t0);
std::cout << ms.count() << "ms\n";
}
50ms
More information can be found here:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2661.htm
There is also now a boost implementation of <chrono>.
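A sketch of what that looks like with Boost.Chrono, which mirrors the std::chrono API for pre-C++11 compilers:
#include <iostream>
#include <boost/chrono.hpp>

int main()
{
    boost::chrono::steady_clock::time_point t0 = boost::chrono::steady_clock::now();
    // ... the work ...
    boost::chrono::steady_clock::time_point t1 = boost::chrono::steady_clock::now();
    std::cout << boost::chrono::duration_cast<boost::chrono::milliseconds>(t1 - t0).count()
              << " ms\n";
}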
You have to use one of the more specific time structures, either timeval (microsecond resolution) or timespec (nanosecond resolution), but you can do it manually fairly easily:
#include <sys/time.h> // defines struct timeval and gettimeofday()
int diff_ms(timeval t1, timeval t2)
{
return (((t1.tv_sec - t2.tv_sec) * 1000000) +
(t1.tv_usec - t2.tv_usec))/1000;
}
This obviously has some problems with integer overflow if the difference in times is really large (or if you have 16-bit ints), but that's probably not a common case.
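If that edge case matters, a 64-bit intermediate avoids it (a sketch; the name diff_ms64 is mine):
// Same computation, but with a 64-bit intermediate so large
// second differences don't overflow.
long long diff_ms64(timeval t1, timeval t2)
{
    return ((long long)(t1.tv_sec - t2.tv_sec) * 1000000LL +
            (t1.tv_usec - t2.tv_usec)) / 1000;
}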
If you are using Win32, FILETIME is the most accurate type that you can get:
Contains a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC).
So if you want to calculate the difference between two times in milliseconds you do the following:
#include <windows.h>
#include <tchar.h>

UINT64 getTime()
{
SYSTEMTIME st;
GetSystemTime(&st);
FILETIME ft;
SystemTimeToFileTime(&st, &ft); // converts to file time format
ULARGE_INTEGER ui;
ui.LowPart=ft.dwLowDateTime;
ui.HighPart=ft.dwHighDateTime;
return ui.QuadPart;
}
int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
{
//! Start counting time
UINT64 start, finish;
start=getTime();
//do something...
//! Stop counting elapsed time
finish = getTime();
//now you can calculate the difference any way that you want
//in seconds:
_tprintf(_T("Time elapsed executing this code: %.03f seconds."), (((float)(finish-start))/((float)10000))/1000 );
//or in milliseconds
_tprintf(_T("Time elapsed executing this code: %I64d milliseconds."), (finish-start)/10000 );
}
The clock function gives you a millisecond timer, but it's not the greatest. Its real resolution is going to depend on your system. You can try
#include <time.h>
#include <iostream>

clock_t clo = clock();
//do stuff
std::cout << (clock() - clo) << std::endl;
and see how your results are.
You can use gettimeofday to get the number of microseconds since epoch. The seconds segment of the value returned by gettimeofday() is the same as that returned by time() and can be cast to a time_t and used in difftime. A millisecond is 1000 microseconds.
After you use difftime, calculate the difference in the microseconds field yourself.
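A minimal sketch of that calculation (the helper name elapsed_ms is mine):
#include <sys/time.h> // struct timeval, gettimeofday()
#include <time.h>     // difftime()

long elapsed_ms(struct timeval start, struct timeval end)
{
    long sec_ms  = (long)difftime(end.tv_sec, start.tv_sec) * 1000;
    long usec_ms = (end.tv_usec - start.tv_usec) / 1000; // may be negative; that's fine
    return sec_ms + usec_ms;
}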
You can get micro and nanosecond precision out of Boost.Date_Time.
If you're looking to do benchmarking, you might want to see some of the other threads here on SO which discuss the topic.
Also, be sure you understand the difference between accuracy and precision.
I think you will have to use something platform-specific. Hopefully that won't matter? E.g., on Windows, look at QueryPerformanceCounter(), which will give you something much better than milliseconds.