What's the best way to calculate a time difference in C++? I'm timing the execution speed of a program, so I'm interested in milliseconds. Better yet, seconds.milliseconds.
The accepted answer works, but you need to include <ctime> or <time.h>, as noted in the comments.
See the std::clock() function:

#include <ctime>
#include <iostream>

const clock_t begin_time = clock();
// do something
std::cout << float(clock() - begin_time) / CLOCKS_PER_SEC;
If you want to calculate execution time for yourself (not for the user), it is better to do this in clock ticks (not seconds).
EDIT:
The relevant header file is <ctime> or <time.h>.
I added this answer to clarify that the accepted answer shows CPU time, which may not be the time you want. According to the reference, there is CPU time and there is wall-clock time. Wall-clock time is the actual elapsed time, regardless of other conditions such as the CPU being shared by other processes. For example, when I used multiple processors to do a certain task, the CPU time was high (18 s) where it actually took only 2 s of wall-clock time.
To get the actual (wall-clock) time, you do:
#include <chrono>
auto t_start = std::chrono::high_resolution_clock::now();
// the work...
auto t_end = std::chrono::high_resolution_clock::now();
double elapsed_time_ms = std::chrono::duration<double, std::milli>(t_end - t_start).count();
If you are using C++11, here is a simple wrapper (see this gist):
#include <iostream>
#include <chrono>

class Timer
{
public:
    Timer() : beg_(clock_::now()) {}
    void reset() { beg_ = clock_::now(); }
    double elapsed() const {
        return std::chrono::duration_cast<second_>
            (clock_::now() - beg_).count();
    }

private:
    typedef std::chrono::high_resolution_clock clock_;
    typedef std::chrono::duration<double, std::ratio<1> > second_;
    std::chrono::time_point<clock_> beg_;
};
Or, for C++03 on *nix:
#include <iostream>
#include <ctime>

class Timer
{
public:
    Timer() { clock_gettime(CLOCK_REALTIME, &beg_); }

    double elapsed() {
        clock_gettime(CLOCK_REALTIME, &end_);
        return end_.tv_sec - beg_.tv_sec +
            (end_.tv_nsec - beg_.tv_nsec) / 1000000000.;
    }

    void reset() { clock_gettime(CLOCK_REALTIME, &beg_); }

private:
    timespec beg_, end_;
};
Example of usage:
int main()
{
    Timer tmr;
    double t = tmr.elapsed();
    std::cout << t << std::endl;

    tmr.reset();
    t = tmr.elapsed();
    std::cout << t << std::endl;

    return 0;
}
I would seriously consider the use of Boost, particularly boost::posix_time::ptime and boost::posix_time::time_duration (at http://www.boost.org/doc/libs/1_38_0/doc/html/date_time/posix_time.html).
It's cross-platform, easy to use, and in my experience provides the highest level of time resolution an operating system provides. Possibly also very important: it provides some very nice IO operators.
To use it to calculate the difference in program execution (to microseconds; probably overkill), it would look something like this [browser written, not tested]:
#include <iostream>
#include <boost/date_time/posix_time/posix_time.hpp>
using namespace boost::posix_time;
using std::cout;

ptime time_start(microsec_clock::local_time());
//... execution goes here ...
ptime time_end(microsec_clock::local_time());
time_duration duration(time_end - time_start);
cout << duration << '\n';
Boost 1.46.0 and up includes the Chrono library:
thread_clock class provides access to the real thread wall-clock, i.e.
the real CPU-time clock of the calling thread. The thread relative
current time can be obtained by calling thread_clock::now()
#include <boost/chrono/thread_clock.hpp>

{
    // ...
    using namespace boost::chrono;
    thread_clock::time_point start = thread_clock::now();
    // ...
    thread_clock::time_point stop = thread_clock::now();
    std::cout << "duration: "
              << duration_cast<milliseconds>(stop - start).count()
              << " ms\n";
}
In Windows: use GetTickCount
// GetTickCount definition
#include <windows.h>
#include <iostream>

int main()
{
    // GetTickCount returns milliseconds since system start; the 32-bit
    // counter wraps after ~49.7 days (GetTickCount64 avoids this).
    DWORD dw1 = GetTickCount();
    // Do something
    DWORD dw2 = GetTickCount();
    std::cout << "Time difference is " << (dw2 - dw1) << " milliseconds" << std::endl;
}
You can also use clock_gettime. This method can be used to measure:
System-wide real-time clock
System-wide monotonic clock
Per-process CPU time
Per-thread CPU time
Code is as follows:
#include <time.h>
#include <iostream>

int main() {
    timespec ts_beg, ts_end;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_beg);
    // do the work to be measured here
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_end);
    std::cout << (ts_end.tv_sec - ts_beg.tv_sec) + (ts_end.tv_nsec - ts_beg.tv_nsec) / 1e9 << " sec";
}
Just in case you are on Unix, you can use time to get the execution time:
$ g++ myprog.cpp -o myprog
$ time ./myprog
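Typical output looks something like this (this is bash's built-in time; the exact format varies by shell and by /usr/bin/time):

real    0m1.230s
user    0m1.112s
sys     0m0.104s

Here real is the wall-clock time, while user and sys are CPU time spent in user space and in the kernel, respectively.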
For me, the easiest way is:
#include <boost/timer.hpp>
boost::timer t;
double duration;
t.restart();
/* DO SOMETHING HERE... */
duration = t.elapsed();
t.restart();
/* DO OTHER STUFF HERE... */
duration = t.elapsed();
Using this piece of code, you don't have to do the classic end - start.
Enjoy your favorite approach.
Just a side note: if you're running on Windows, and you really, really need precision, you can use QueryPerformanceCounter. It gives you a tick count which, combined with QueryPerformanceFrequency, can resolve to (potentially) nanoseconds.
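A minimal sketch of how the two calls fit together (assumes a Windows build environment; the "do something" part is a placeholder for the code under test):

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);  // ticks per second
    QueryPerformanceCounter(&start);
    // do something
    QueryPerformanceCounter(&end);
    double elapsed_ms = (end.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
    std::cout << elapsed_ms << " ms\n";
}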
Get the system time in milliseconds at the beginning, and again at the end, and subtract.
To get the number of milliseconds since 1970 in POSIX you would write:
#include <sys/time.h>

// Wrapper function added so the snippet compiles as shown.
unsigned long long ms_since_1970()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return ((((unsigned long long)tv.tv_sec) * 1000) +
            (((unsigned long long)tv.tv_usec) / 1000));
}
To get the number of milliseconds since 1601 on Windows you would write:
#include <windows.h>

// Wrapper function added so the snippet compiles as shown.
unsigned long long ms_since_1601()
{
    SYSTEMTIME systime;
    FILETIME filetime;
    GetSystemTime(&systime);
    if (!SystemTimeToFileTime(&systime, &filetime))
        return 0;
    unsigned long long ns_since_1601;
    ULARGE_INTEGER* ptr = (ULARGE_INTEGER*)&ns_since_1601;
    // Copy the result into the ULARGE_INTEGER; this is actually
    // copying the result into the ns_since_1601 unsigned long long.
    ptr->u.LowPart = filetime.dwLowDateTime;
    ptr->u.HighPart = filetime.dwHighDateTime;
    // Compute the number of milliseconds since 1601; we have to
    // divide by 10,000, since the current value is the number of 100 ns
    // intervals since 1601, not ms.
    return ns_since_1601 / 10000;
}
If you cared to normalize the Windows answer so that it also returned the number of milliseconds since 1970, then you would have to adjust your answer by 11644473600000 milliseconds. But that isn't necessary if all you care about is the elapsed time.
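For example, using the wrapper names from the sketches above:

unsigned long long ms_since_1970_on_windows = ms_since_1601() - 11644473600000ULL;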
If you are using:
clock_t tstart, tend;
tstart = clock();
// ...do something...
tend = clock();
Then you will need the following to get time in seconds:
time = (tend - tstart) / (double) CLOCKS_PER_SEC;
This seems to work fine on an Intel Mac running 10.7 (note that time() only has one-second resolution):

#include <time.h>
#include <iostream>

time_t start = time(NULL);
// Do your work
time_t end = time(NULL);
std::cout << "Execution Time: " << (double)(end - start) << " Seconds" << std::endl;
I've got a timer issue in GLUT.
glutGet(GLUT_ELAPSED_TIME) only returns the time with one-second accuracy (1000, 2000, 3000, ...),
and glutTimerFunc(...) works only when the millis parameter is set greater than 1000.
I don't know exactly how GLUT measures time,
but I think there's something wrong with my system time settings.
How can I get the time with millisecond accuracy in OpenGL?
As already mentioned in the comments above, you could use more reliable C++ date and time utilities like the std::chrono library. Here is a simple example:
#include <iostream>
#include <chrono>

int main()
{
    const auto start = std::chrono::high_resolution_clock::now();
    // do something...
    const auto end = std::chrono::high_resolution_clock::now();
    std::cout << "Took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
              << " ms\n";
    return 0;
}
I want to measure the time of a child process:

#include <time.h>

int main() {
    // ...
    time_t begin, end, diff;
    // ...
    // fork etc. in here
    time(&begin);
    // ...
    // some things
    // ...
    time(&end);
    return 0;
}
I now have two timestamps; is there a way to format the run time of the child process as hours:minutes:seconds?
I have tried
diff = end - begin;
but I get a huge number then.
(Sorry for showing only part of the code, but it's on another PC.)
You can compute the difference with difftime:
double diff_in_seconds = difftime(end, begin);
or, for better precision, use one of the C++11 chrono monotonic clocks, such as std::steady_clock:

#include <chrono>

auto start = std::chrono::steady_clock::now();
// some things
auto end = std::chrono::steady_clock::now();
double time_in_seconds = std::chrono::duration<double>(end - start).count();
See also this answer for details why you should use a monotonic clock.
You should probably compute the difference using difftime instead of subtraction, in case your system uses some other format for time_t besides "integer number of seconds".
difftime returns the number of seconds between the two times, as a double. It's then a simple matter of arithmetic to convert to hours, minutes and seconds.
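A minimal sketch of that arithmetic (print_hms is a hypothetical helper name; begin and end are the time_t values from the question):

#include <stdio.h>
#include <time.h>

// Hypothetical helper: format the elapsed time as hours:minutes:seconds.
void print_hms(time_t begin, time_t end)
{
    double total = difftime(end, begin);   // elapsed whole seconds, as a double
    int hours   = (int)total / 3600;
    int minutes = ((int)total % 3600) / 60;
    int seconds = (int)total % 60;
    printf("%02d:%02d:%02d\n", hours, minutes, seconds);
}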
The attempt in the question is the C way, not C++. In C++11 (assuming you have it), you can get two time points and then cast the difference between them to the units you need, as in the example here: http://en.cppreference.com/w/cpp/chrono/duration/duration_cast
Nearly copying the code:
#include <chrono>
#include <iostream>

auto t1 = std::chrono::high_resolution_clock::now();
// Call your child process here
auto t2 = std::chrono::high_resolution_clock::now();
std::cout << "Child process took "
          << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count()
          << " milliseconds\n";
If I had the following code:

clock_t t;
t = clock();
// algorithm
t = clock() - t;

t would equal the number of ticks it took to run the program. Is this the same as CPU time? Are there any other ways to measure CPU time in C++?

OS: Debian GNU/Linux

I am open to anything that will work. I want to compare the CPU time of two algorithms.
clock() is specified to measure CPU time; however, not all implementations do this. In particular, Microsoft's implementation in VS does not count additional time when multiple threads are running, nor does it count less time when the program's threads are sleeping/waiting.
Also note that clock() should measure the CPU time used by the entire program, so while CPU time used by multiple threads in //algorithm will be measured, other threads that are not part of //algorithm also get counted.
clock() is the only method specified in the standard to measure CPU time, however there are certainly other, platform specific, methods for measuring CPU time.
std::chrono does not include any clock for measuring CPU time. It only has a clock synchronized to the system time, a clock that advances at a steady rate with respect to real time, and a clock that is 'high resolution' but which does not necessarily measure CPU time.
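A minimal sketch illustrating the difference (sleeping consumes wall-clock time but almost no CPU time):

#include <chrono>
#include <cstdio>
#include <ctime>
#include <thread>

int main()
{
    std::clock_t c0 = std::clock();
    auto w0 = std::chrono::steady_clock::now();

    // Sleeping burns wall-clock time but (almost) no CPU time.
    std::this_thread::sleep_for(std::chrono::milliseconds(500));

    std::clock_t c1 = std::clock();
    auto w1 = std::chrono::steady_clock::now();

    std::printf("CPU time:  %.3f s\n", double(c1 - c0) / CLOCKS_PER_SEC);
    std::printf("wall time: %.3f s\n", std::chrono::duration<double>(w1 - w0).count());
}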
How to measure the CPU time used:
#include <ctime>
#include <iostream>

std::clock_t c_start = std::clock();
// your_algorithm
std::clock_t c_end = std::clock();
long double time_elapsed_ms = 1000.0 * (c_end - c_start) / CLOCKS_PER_SEC;
std::cout << "CPU time used: " << time_elapsed_ms << " ms\n";
Of course, if you want to display the time in seconds:
std::cout << "CPU time used: " << time_elapsed_ms / 1000.0 << " s\n";
Source: http://en.cppreference.com/w/cpp/chrono/c/clock
A non-standard way to do this using pthreads is:
#include <pthread.h>
#include <time.h>

timespec cpu_time()
{
    thread_local bool initialized(false);
    thread_local clockid_t clock_id;
    if (!initialized)
    {
        pthread_getcpuclockid(pthread_self(), &clock_id);
        initialized = true;
    }
    timespec result;
    clock_gettime(clock_id, &result);
    return result;
}
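A minimal usage sketch of the cpu_time() function above (timespec_diff_seconds is a hypothetical helper, added here for illustration):

#include <cstdio>

// Hypothetical helper: difference between two timespecs, in seconds.
double timespec_diff_seconds(const timespec& beg, const timespec& end)
{
    return (end.tv_sec - beg.tv_sec) + (end.tv_nsec - beg.tv_nsec) / 1e9;
}

int main()
{
    timespec beg = cpu_time();
    // ... run the algorithm under test ...
    timespec end = cpu_time();
    std::printf("thread CPU time: %f s\n", timespec_diff_seconds(beg, end));
}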
Addition: A more standard way is:
#include <sys/resource.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    struct rusage r;
    getrusage(RUSAGE_SELF, &r);
    // ru_utime is user CPU time; add r.ru_stime for system CPU time.
    printf("CPU usage: %ld.%06ld\n", (long)r.ru_utime.tv_sec, (long)r.ru_utime.tv_usec);
}
C++ has a chrono library. See http://en.cppreference.com/w/cpp/chrono. There are also often platform dependent ways to get at high resolution timers (obviously varies by platform).
The Boost chrono library v1.51 on my MacBook Pro returns negative times when I subtract endTime - startTime. If you print the time points, you see that the end time is earlier than the start time. How can this happen?
typedef boost::chrono::steady_clock clock_t;
clock_t clock;

// Start time measurement
boost::chrono::time_point<clock_t> startTime = clock.now();

short test_times = 7;
// Spend some time...
for (int i = 0; i < test_times; ++i)
{
    xnodeptr spResultDoc = parser.parse(inputSrc);
    xstring sXmlResult = spResultDoc->str();
    const char16_t* szDbg = sXmlResult.c_str();
    BOOST_CHECK(spResultDoc->getNodeType() == xnode::DOCUMENT_NODE && sXmlResult == sXml);
}

// Stop time measurement
boost::chrono::time_point<clock_t> endTime = clock.now();
clock_t::duration elapsed(endTime - startTime);

std::cout << std::endl;
std::cout << "Now time: " << clock.now() << std::endl;
std::cout << "Start time: " << startTime << std::endl;
std::cout << "End time: " << endTime << std::endl;
std::cout << std::endl << "Total Parse time: " << elapsed << std::endl;
std::cout << "Average Parse time per iteration: " << (boost::chrono::duration_cast<boost::chrono::milliseconds>(elapsed) / test_times) << std::endl;
I tried different clocks, but it made no difference.
Any help would be appreciated!
EDIT: Forgot to add the output:
Now time: 1 nanosecond since boot
Start time: 140734799802912 nanoseconds since boot
End time: 140734799802480 nanoseconds since boot
Total Parse time: -432 nanoseconds
Average Parse time per iteration: 0 milliseconds
Hyperthreading or just scheduling interference; the Boost implementation punts monotonic support to the OS:
POSIX: clock_gettime(CLOCK_MONOTONIC), although it still may fail due to kernel errors handling hyper-threading when calibrating the system.
WIN32: QueryPerformanceCounter(), which on anything but Nehalem architecture or newer is not going to be monotonic across cores and threads.
OSX: mach_absolute_time(), i.e. the steady and high-resolution clocks are the same. The source code shows that it uses RDTSC, thus a strict dependency on hardware stability: i.e. no guarantees.
Disabling hyperthreading is a recommended way to go, but, say on Windows, you are really limited. Aside from dropping timer resolution, the only available method is direct access to the underlying hardware timers while ensuring thread affinity.
It looks like a good time to submit a bug to Boost. I would recommend:
Win32: use GetTickCount64(), as discussed here.
OSX: use clock_get_time(SYSTEM_CLOCK), according to this question.