Is there any way in C++ to calculate how long it takes to run a given program or routine, measured in CPU time?
I work with Visual Studio 2008 running on Windows 7.
If you want to know the total amount of CPU time used by a process, neither clock nor rdtsc (either directly or via a compiler intrinsic) is really the best choice, at least IMO. If you need the code to be portable, about the best you can do is use clock, test with the system as quiescent as possible, and hope for the best (but if you do, be aware that the tick rate of clock is CLOCKS_PER_SEC, which may or may not be 1000, and even if it is, your actual timing resolution often won't be that good -- it may give you times in milliseconds, but it will normally advance tens of milliseconds at a time).
Since, however, you don't seem to mind the code being specific to Windows, you can do quite a bit better. At least if my understanding of what you're looking for is correct, what you probably want is GetProcessTimes, which will (separately) tell you both the kernel-mode and user-mode CPU usage of the process (as well as the start time and exit time, from which you can compute wall time used, if you care). There's also QueryProcessCycleTime, which will tell you the total number of CPU clock cycles used by the process (the total of both user and kernel mode across all threads). Personally, I have a hard time imagining much use for the latter though -- counting individual clock cycles can be useful for small sections of code subject to intensive optimization, but I'm less certain about how you'd apply it to a complete process. GetProcessTimes uses FILETIME structures, which support a resolution of 100 nanoseconds, but in reality most times you'll see will be multiples of the scheduler's time slice (which varies with the version of Windows, but is on the order of milliseconds to tens of milliseconds).
In any case, if you truly want time from beginning to end, GetProcessTimes will let you do that -- if you spawn the program (e.g., with CreateProcess), you'll get a handle to the process which will be signaled when the child process exits. You can then call GetProcessTimes on that handle and retrieve the times even though the child has already exited -- the handle remains valid as long as at least one handle to the process remains open.
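A minimal sketch of that approach, assuming a hypothetical child program named child.exe (the name, the busy-wait-free wait, and the mostly omitted error handling are placeholders, not part of the answer above):
#include <windows.h>
#include <iostream>

int main() {
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    char cmd[] = "child.exe";   // hypothetical program to measure

    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return 1;

    // Wait for the child to exit; the process handle stays valid afterwards.
    WaitForSingleObject(pi.hProcess, INFINITE);

    FILETIME creationTime, exitTime, kernelTime, userTime;
    if (GetProcessTimes(pi.hProcess, &creationTime, &exitTime, &kernelTime, &userTime)) {
        auto toSeconds = [](const FILETIME& ft) {
            ULARGE_INTEGER u;
            u.LowPart = ft.dwLowDateTime;
            u.HighPart = ft.dwHighDateTime;
            return u.QuadPart / 1e7;    // FILETIME counts 100-ns intervals
        };
        std::cout << "kernel CPU: " << toSeconds(kernelTime) << " s\n"
                  << "user CPU:   " << toSeconds(userTime)   << " s\n";
    }
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
}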
Here's one way. It measures routine execution time in milliseconds.
clock_t begin=clock(); is taken just before the routine is executed, and clock_t end=clock(); right after the routine returns.
The two timestamps are then subtracted and the difference is converted to a millisecond value.
#include <stdio.h>
#include <iostream>
#include <time.h>

using namespace std;

// Convert the difference between two clock() readings to milliseconds.
double get_CPU_time_usage(clock_t clock1, clock_t clock2)
{
    double diffticks = clock1 - clock2;
    double diffms = (diffticks * 1000) / CLOCKS_PER_SEC;
    return diffms;
}

void test_CPU_usage()
{
    cout << "Standby.. measuring execution time: ";
    for (int i = 0; i < 10000; i++)
    {
        cout << "\b\\" << std::flush;
        cout << "\b|" << std::flush;
        cout << "\b/" << std::flush;
        cout << "\b-" << std::flush;
    }
    cout << " \n\n";
}

int main(void)
{
    clock_t begin = clock();
    test_CPU_usage();
    clock_t end = clock();
    cout << "Time elapsed: " << get_CPU_time_usage(end, begin) << " ms ("
         << get_CPU_time_usage(end, begin) / 1000 << " sec) \n\n";
    return 0;
}
The __rdtscp intrinsic will give you a timestamp in CPU cycles, with some caveats.
Here's the MSDN article
It really depends on what you want to measure. For better results, take the average over a few million (if not billion) iterations.
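For instance, a minimal sketch with the MSVC intrinsic (the iteration count and the trivial someRoutine() stand-in are assumptions, not from the question):
#include <intrin.h>   // __rdtscp
#include <iostream>

volatile long long sink = 0;
void someRoutine() { ++sink; }   // trivial stand-in for the code being measured

int main() {
    unsigned int aux;            // receives the processor ID; usually ignored
    unsigned __int64 start = __rdtscp(&aux);
    for (int i = 0; i < 1000000; ++i)
        someRoutine();
    unsigned __int64 end = __rdtscp(&aux);
    std::cout << "Average cycles per call: " << double(end - start) / 1000000 << "\n";
}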
The clock() function [as provided by Visual C++ 2008] doesn't return the processor time used by the program, even though it should (according to the C standard and/or the C++ standard). That said, to measure CPU time on Windows, I have this helper class (which is inevitably non-portable):
#include <windows.h>   // GetProcessTimes, FILETIME, ULARGE_INTEGER
#include <iostream>
#include <tuple>

class ProcessorTimer
{
public:
    ProcessorTimer() { start(); }
    void start() { ::GetProcessTimes(::GetCurrentProcess(), &ft_[3], &ft_[2], &ft_[1], &ft_[0]); }
    std::tuple<double, double> stop()
    {
        ::GetProcessTimes(::GetCurrentProcess(), &ft_[5], &ft_[4], &ft_[3], &ft_[2]);
        ULARGE_INTEGER u[4];
        for (size_t i = 0; i < 4; ++i)
        {
            u[i].LowPart = ft_[i].dwLowDateTime;
            u[i].HighPart = ft_[i].dwHighDateTime;
        }
        // FILETIME counts 100-nanosecond intervals, hence the division by 10^7.
        double user = (u[2].QuadPart - u[0].QuadPart) / 10000000.0;
        double kernel = (u[3].QuadPart - u[1].QuadPart) / 10000000.0;
        return std::make_tuple(user, kernel);
    }
private:
    FILETIME ft_[6];
};

class ScopedProcessorTimer
{
public:
    ScopedProcessorTimer(std::ostream& os = std::cerr) : timer_(ProcessorTimer()), os_(os) { }
    ~ScopedProcessorTimer()
    {
        std::tuple<double, double> t = timer_.stop();
        os_ << "user " << std::get<0>(t) << "\n";
        os_ << "kernel " << std::get<1>(t) << "\n";
    }
private:
    ProcessorTimer timer_;
    std::ostream& os_;
};
For example, one can measure how long it takes a block to execute by defining a ScopedProcessorTimer at the beginning of that {} block.
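A quick usage sketch, assuming the two classes above are visible in the translation unit (the heavyComputation() stand-in and the choice of std::cout are placeholders):
// Stand-in for the real work being measured.
volatile unsigned long long acc = 0;
void heavyComputation() { for (int i = 0; i < 50000000; ++i) acc += i; }

void runMeasured()
{
    ScopedProcessorTimer timer(std::cout);   // report to stdout instead of the default std::cerr
    heavyComputation();
}   // user/kernel CPU seconds are printed here, when the timer is destroyed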
This code measures process CPU usage.
// Assumes <windows.h> is included; Process::pid and nResult come from the surrounding code.
ULONGLONG LastCycleTime = 0;
LARGE_INTEGER LastPCounter;
LastPCounter.QuadPart = 0; // LARGE_INTEGER init

// Get the number of CPU cores.
SYSTEM_INFO sysInfo;
GetSystemInfo(&sysInfo);
int numProcessors = sysInfo.dwNumberOfProcessors;

HANDLE hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, Process::pid);
if (hProcess == NULL)
    nResult = 0;

int count = 0;
while (true)
{
    ULONG64 CycleTime;
    LARGE_INTEGER qpcLastInt;
    if (!QueryProcessCycleTime(hProcess, &CycleTime))
        nResult = 0;
    ULONG64 cycle = CycleTime - LastCycleTime;
    if (!QueryPerformanceCounter(&qpcLastInt))
        nResult = 0;

    // Ratio of CPU cycles consumed by the process to elapsed QPC ticks.
    double Usage = cycle / ((double)(qpcLastInt.QuadPart - LastPCounter.QuadPart));

    // Scaling (ad hoc factors from the original snippet).
    Usage *= 1.0 / numProcessors;
    Usage *= 0.1;

    LastPCounter = qpcLastInt;
    LastCycleTime = CycleTime;

    if (count > 3)
    {
        printf("%.1f", Usage);
        break;
    }
    Sleep(1); // brief pause between samples
    count++;
}
CloseHandle(hProcess);
Related question:
#include <iostream>
#include <chrono>
using namespace std;
class MyTimer {
private:
std::chrono::time_point<std::chrono::steady_clock> starter;
std::chrono::time_point<std::chrono::steady_clock> ender;
public:
void startCounter() {
starter = std::chrono::steady_clock::now();
}
double getCounter() {
ender = std::chrono::steady_clock::now();
return double(std::chrono::duration_cast<std::chrono::nanoseconds>(ender - starter).count()) /
1000000; // millisecond output
}
// the timer needs to have nanosecond precision
int64_t getCounterNs() {
return std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::steady_clock::now() - starter).count();
}
};
MyTimer timer1, timer2, timerMain;
volatile int64_t dummy = 0, res1 = 0, res2 = 0;
// baseline: run without any time measurement
void func0() {
dummy++;
}
// we're trying to measure the cost of startCounter() and getCounterNs(), not "dummy++"
void func1() {
timer1.startCounter();
dummy++;
res1 += timer1.getCounterNs();
}
void func2() {
// start your counter here
dummy++;
// res2 += end your counter here
}
int main()
{
int i, ntest = 1000 * 1000 * 100;
int64_t runtime0, runtime1, runtime2;
timerMain.startCounter();
for (i=1; i<=ntest; i++) func0();
runtime0 = timerMain.getCounter();
cout << "Time0 = " << runtime0 << "ms\n";
timerMain.startCounter();
for (i=1; i<=ntest; i++) func1();
runtime1 = timerMain.getCounter();
cout << "Time1 = " << runtime1 << "ms\n";
timerMain.startCounter();
for (i=1; i<=ntest; i++) func2();
runtime2 = timerMain.getCounter();
cout << "Time2 = " << runtime2 << "ms\n";
return 0;
}
I'm trying to profile a program where certain critical parts have execution time measured in < 50 nanoseconds. I found that my timer class using std::chrono is too expensive (code with timing takes 40% more time than code without). How can I make a faster timer class?
I think some OS-specific system calls would be the fastest solution. The platform is Linux Ubuntu.
Edit: all code is compiled with -O3. It's ensured that each timer is only initialized once, so the measured cost is due to the startMeasure/stopMeasure functions only. I'm not doing any text printing.
Edit 2: the accepted answer doesn't include the method to actually convert number-of-cycles to nanoseconds. If someone can do that, it'd be very helpful.
What you want is called "micro-benchmarking". It can get very complex. I assume you are using Ubuntu Linux on x86_64; this is not valid for ARM, ARM64 or any other platform.
std::chrono is implemented in libstdc++ (gcc) and libc++ (clang) on Linux as simply a thin wrapper around glibc, the C library, which does all the heavy lifting. If you look at std::chrono::steady_clock::now() you will see calls to clock_gettime().
clock_gettime() is a vDSO call, i.e. kernel code that runs in userspace. It should be very fast, but from time to time it may have to do some housekeeping and take much longer on every n-th call, so I would not recommend it for micro-benchmarking.
Almost every platform has a cycle counter and x86 has the assembly instruction rdtsc. This instruction can be inserted in your code by crafting asm calls or by using the compiler-specific builtins __builtin_ia32_rdtsc() or __rdtsc().
These calls return a 64-bit integer representing the number of clock ticks since the machine powered up. rdtsc is not immediate, but it is fast; it takes roughly 15-40 cycles to complete.
It is not guaranteed on all platforms that this counter is synchronized across cores, so beware when the process gets moved from core to core. On modern systems this should not be a problem, though.
Another problem with rdtsc is that compilers will often reorder instructions if they decide there are no side effects, and unfortunately rdtsc is one of them. So you may have to place compiler barriers around these counter reads if you see that the compiler is playing tricks on you - look at the generated assembly.
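A sketch of such a barrier using GCC/Clang inline assembly (an empty asm statement with a "memory" clobber; whether you actually need it, and where, depends on the code the compiler generates):
#include <cstdint>
#include <iostream>
#include <x86intrin.h>   // __rdtsc

volatile int work = 0;

int main() {
    __asm__ __volatile__("" ::: "memory");   // compiler barrier: don't move code across this point
    uint64_t t0 = __rdtsc();
    __asm__ __volatile__("" ::: "memory");

    for (int i = 0; i < 1000; ++i) ++work;   // region being measured

    __asm__ __volatile__("" ::: "memory");
    uint64_t t1 = __rdtsc();
    __asm__ __volatile__("" ::: "memory");

    std::cout << "Cycles: " << (t1 - t0) << "\n";
}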
Another big problem is CPU out-of-order execution itself. Not only can the compiler change the order of execution, the CPU can as well. Since the 486, Intel CPUs have been pipelined, so several instructions can be in flight at the same time - roughly speaking. So you might end up measuring spurious execution.
I recommend getting familiar with these quantum-like problems of micro-benchmarking. It is not straightforward.
Notice that rdtsc() returns a number of cycles. You have to convert that to nanoseconds using the timestamp counter frequency (a conversion sketch follows the example below).
Here is one example:
#include <iostream>
#include <cstdint>   // uint32_t / uint64_t
void dosomething() {
// yada yada
}
int main() {
double sum = 0;
const uint32_t numloops = 100000000;
for ( uint32_t j=0; j<numloops; ++j ) {
uint64_t t0 = __builtin_ia32_rdtsc();
dosomething();
uint64_t t1 = __builtin_ia32_rdtsc();
uint64_t elapsed = t1-t0;
sum += elapsed;
}
std::cout << "Average:" << sum/numloops << std::endl;
}
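The example above reports raw cycles. To address the question's second edit (converting cycles to nanoseconds), one approach is to estimate the TSC frequency by calibrating against std::chrono; the 100 ms calibration window and the calibrateTscHz() helper below are assumptions, not part of the answer above:
#include <chrono>
#include <cstdint>
#include <iostream>
#include <x86intrin.h>   // __rdtsc on gcc/clang

// Estimate TSC ticks per second by spinning for ~100 ms against steady_clock.
double calibrateTscHz() {
    using namespace std::chrono;
    auto t0 = steady_clock::now();
    uint64_t c0 = __rdtsc();
    while (steady_clock::now() - t0 < milliseconds(100)) { /* spin */ }
    uint64_t c1 = __rdtsc();
    auto ns = duration_cast<nanoseconds>(steady_clock::now() - t0).count();
    return double(c1 - c0) * 1e9 / double(ns);   // cycles per second
}

int main() {
    double tscHz = calibrateTscHz();
    uint64_t cycles = 1000;                       // example cycle count to convert
    double elapsedNs = double(cycles) * 1e9 / tscHz;
    std::cout << "TSC ~ " << tscHz / 1e9 << " GHz, "
              << cycles << " cycles ~ " << elapsedNs << " ns\n";
}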
This paper is a bit outdated (2010) but it is sufficiently up to date to give you a good introduction to micro-benchmarking:
How to Benchmark Code Execution Times on Intel® IA-32 and IA-64 Instruction Set Architectures
I am trying to use a while loop to create a timer that consistently measures out 3000μs (3 ms) and while it works most of the time, other times the timer can be late by as much as 500μs. Why does this happen and is there a more precise way to make a timer like this?
int getTime() {
chrono::microseconds μs = chrono::duration_cast< chrono::microseconds >(
chrono::system_clock::now().time_since_epoch() //Get time since last epoch in μs
);
return μs.count(); //Return as integer
}
int main()
{
int target = 3000, difference = 0;
while (true) {
int start = getTime(), time = start;
while ((time-start) < target) {
time = getTime();
}
difference = time - start;
if (difference - target > 1) { //If the timer wasn't accurate to within 1μs
cout << "Timer missed the mark by " + to_string(difference - target) + " microseconds" << endl; //Log the delay
}
}
return 0;
}
I would expect this code to log delays that are consistently within 5 or so μs, but the console output looks like this.
Edit to clarify: I'm running on Windows 10 Enterprise Build 16299, but the behavior persists on a Debian virtual machine.
You need to also take into account other running processes. The operating system is likely preempting your process to give CPU time to those other processes/threads, and will non-deterministically return control to your process/thread running this timer.
Granted, this is not 100% true when we consider real-time operating systems or flat-schedulers. But this is likely the case in your code if you're running on a general purpose machine.
Since you are running on Windows, the operating system is responsible for timekeeping (synchronized via NTP), as C++ has no built-in functions for it. Check out the Windows SetTimer() function: http://msdn.microsoft.com/en-us/library/ms644906(v=vs.85).aspx.
If you want the best and most high-resolution clock through C++, check out the chrono library:
#include <iostream>
#include <chrono>
#include "chrono_io"
int main()
{
typedef std::chrono::high_resolution_clock Clock;
auto t1 = Clock::now();
auto t2 = Clock::now();
std::cout << t2-t1 << '\n';
}
I have been looking at the performance of our C++ server application running on embedded Linux (ARM). The pseudo code for the main processing loop of the server is this -
for i = 1 to 1000
Process item i
Sleep for 20 ms
The processing for one item takes about 2ms. The "Sleep" here is really a call to the Poco library to do a "tryWait" on an event. If the event is fired (which it never is in my tests) or the time expires, it returns. I don't know what system call this equates to. Although we ask for a 2ms block, it turns out to be roughly 20ms. I can live with that - that's not the problem. The sleep is just an artificial delay so that other threads in the process are not starved.
The loop takes about 24 seconds to go through 1000 items.
The problem is, we changed the way the sleep is used so that we had a bit more control. I mean - 20ms delay for 2ms processing doesn't allow us to do much processing. With this new parameter set to a certain value it does something like this -
For i = 1 to 1000
Process item i
if i % 50 == 0 then sleep for 1000ms
That's the rough code; in reality the number of sleeps is slightly different, but it happens to work out to a 24s cycle to get through all the items - just as before.
So we are doing exactly the same amount of processing in the same amount of time.
Problem 1 - the CPU usage for the original code is reported at around 1% (it varies a little but that's about average) and the CPU usage reported for the new code is about 5%. I think they should be the same.
Well, perhaps this CPU reporting isn't accurate, so I thought I'd sort a large text file at the same time and see how much it's slowed down by our server. This is a CPU-bound process (98% CPU usage according to top). The results are very odd. With the old code, the time taken to sort the file goes up by 21% when our server is running.
Problem 2 - If the server is only using 1% of the CPU then wouldn't the time taken to do the sort be pretty much the same?
Also, the time taken to go through all the items doesn't change - it's still 24 seconds with or without the sort running.
Then I tried the new code, it only slows the sort down by about 12% but it now takes about 40% longer to get through all the items it has to process.
Problem 3 - Why do the two ways of introducing an artificial delay cause such different results. It seems that the server which sleeps more frequently but for a minimum time is getting more priority.
I have a half baked theory on the last one - whatever the system call that is used to do the "sleep" is switching back to the server process when the time is elapsed. This gives the process another bite at the time slice on a regular basis.
Any help appreciated. I suspect I'm just not understanding it correctly and that things are more complicated than I thought. I can provide more details if required.
Thanks.
Update: replaced tryWait(2) with usleep(2000) - no change. In fact, sched_yield() does the same.
Well I can at least answer problem 1 and problem 2 (as they are the same issue).
After trying out various options in the actual server code, we came to the conclusion that the CPU reporting from the OS is incorrect. It's quite a result, so to make sure, I wrote a stand-alone program that doesn't use Poco or any of our code - just plain Linux system calls and standard C++ features. It implements the pseudo code above. The processing is replaced with a tight loop that just checks the elapsed time to see if 2ms is up. The sleeps are proper sleeps.
The small test program shows exactly the same problem. i.e. doing the same amount of processing but splitting up the way the sleep function is called, produces very different results for CPU usage. In the case of the test program, the reported CPU usage was 0.0078 seconds using 1000 20ms sleeps but 1.96875 when a less frequent 1000ms sleep was used. The amount of processing done is the same.
Running the test on a Linux PC did not show the problem. Both ways of sleeping produced exactly the same CPU usage.
So it's clearly a problem with our embedded system and the way it measures CPU time when a process is yielding so often (you get the same problem with sched_yield instead of a sleep).
Update: Here's the code. RunLoop is where the main bit is done -
#include <iostream>
#include <time.h>       // clock_gettime, CLOCK_PROCESS_CPUTIME_ID
#include <sys/time.h>   // gettimeofday, timeval
#include <unistd.h>     // usleep

int sleepCount;
double getCPUTime( )
{
clockid_t id = CLOCK_PROCESS_CPUTIME_ID;
struct timespec ts;
if ( id != (clockid_t)-1 && clock_gettime( id, &ts ) != -1 )
return (double)ts.tv_sec +
(double)ts.tv_nsec / 1000000000.0;
return -1;
}
double GetElapsedMilliseconds(const timeval& startTime)
{
timeval endTime;
gettimeofday(&endTime, NULL);
double elapsedTime = (endTime.tv_sec - startTime.tv_sec) * 1000.0; // sec to ms
elapsedTime += (endTime.tv_usec - startTime.tv_usec) / 1000.0; // us to ms
return elapsedTime;
}
void SleepMilliseconds(int milliseconds)
{
timeval startTime;
gettimeofday(&startTime, NULL);
usleep(milliseconds * 1000);
double elapsedMilliseconds = GetElapsedMilliseconds(startTime);
if (elapsedMilliseconds > milliseconds + 0.3)
std::cout << "Sleep took longer than it should " << elapsedMilliseconds;
sleepCount++;
}
void DoSomeProcessingForAnItem()
{
timeval startTime;
gettimeofday(&startTime, NULL);
double processingTimeMilliseconds = 2.0;
double elapsedMilliseconds;
do
{
elapsedMilliseconds = GetElapsedMilliseconds(startTime);
} while (elapsedMilliseconds <= processingTimeMilliseconds);
if (elapsedMilliseconds > processingTimeMilliseconds + 0.1)
std::cout << "Processing took longer than it should " << elapsedMilliseconds;
}
void RunLoop(bool longSleep)
{
int numberOfItems = 1000;
timeval startTime;
gettimeofday(&startTime, NULL);
timeval startMainLoopTime;
gettimeofday(&startMainLoopTime, NULL);
for (int i = 0; i < numberOfItems; i++)
{
DoSomeProcessingForAnItem();
double elapsedMilliseconds = GetElapsedMilliseconds(startTime);
if (elapsedMilliseconds > 100)
{
std::cout << "Item count = " << i << "\n";
if (longSleep)
{
SleepMilliseconds(1000);
}
gettimeofday(&startTime, NULL);
}
if (longSleep == false)
{
// Does 1000 * 20 ms sleeps.
SleepMilliseconds(20);
}
}
double elapsedMilliseconds = GetElapsedMilliseconds(startMainLoopTime);
std::cout << "Main loop took " << elapsedMilliseconds / 1000 <<" seconds\n";
}
void DoTest(bool longSleep)
{
timeval startTime;
gettimeofday(&startTime, NULL);
double startCPUtime = getCPUTime();
sleepCount = 0;
int runLoopCount = 1;
for (int i = 0; i < runLoopCount; i++)
{
RunLoop(longSleep);
std::cout << "**** Done one loop of processing ****\n";
}
double endCPUtime = getCPUTime();
std::cout << "Elapsed time is " <<GetElapsedMilliseconds(startTime) / 1000 << " seconds\n";
std::cout << "CPU time used is " << endCPUtime - startCPUtime << " seconds\n";
std::cout << "Sleep count " << sleepCount << "\n";
}
void testLong()
{
std::cout << "Running testLong\n";
DoTest(true);
}
void testShort()
{
std::cout << "Running testShort\n";
DoTest(false);
}
I'm having trouble getting anything useful from the clock() function in the ctime library in particular situations on my Mac. Specifically, when I run code built with VS2010 on Windows 7, under either VMware Fusion or Boot Camp, it always seems to return the same value. Some test code to test the issue:
#include <time.h>
#include "iostream"
using namespace std;
// Calculate the factorial of n recursively.
unsigned long long recursiveFactorial(int n) {
// Define the base case.
if (n == 1) {
return n;
}
// To handle other cases, call self recursively.
else {
return (n * recursiveFactorial(n - 1));
}
}
int main() {
int n = 60;
unsigned long long result;
clock_t start, stop;
// Mark the start time.
start = clock();
// Calculate the factorial of n;
result = recursiveFactorial(n);
// Mark the end time.
stop = clock();
// Output the result of the factorial and the elapsed time.
cout << "The factorial of " << n << " is " << result << endl;
cout << "The calculation took " << ((double) (stop - start) / CLOCKS_PER_SEC) << " seconds." << endl;
return 0;
}
Under Xcode 4.3.3, the function executes in about 2 μs.
Under Visual Studio 2010 in a Windows 7 virtual machine (under VMWare Fusion 4.1.3), the same code gives an execution time of 0; this machine is given 2 of the Mac’s 4 cores and 2GB RAM.
Under Boot Camp running Windows 7, again I get an execution time of 0.
Is this a question of being "too far from the metal"?
It could be that the resolution of the timer is not as high under the virtual machine. The compiler can easily convert the tail recursion into a loop, and 60 multiplications don't take terribly long. Try computing something significantly more costly, like Fibonacci numbers (recursively, of course), and you should see the timer register a nonzero time.
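A quick sketch of that suggestion (fib(40) and the deliberately naive recursion are arbitrary choices, just to burn enough CPU for clock() to register something):
#include <ctime>
#include <iostream>

// Deliberately inefficient recursive Fibonacci.
unsigned long long fib(int n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

int main() {
    std::clock_t start = std::clock();
    unsigned long long result = fib(40);   // takes a noticeable fraction of a second
    std::clock_t stop = std::clock();
    std::cout << "fib(40) = " << result << ", took "
              << double(stop - start) / CLOCKS_PER_SEC << " s\n";
}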
From time.h included with MSVC,
#define CLOCKS_PER_SEC 1000
which means clock() only has a resolution of 1 millisecond when using the Visual C++ runtime libraries, so any set of operations that takes less than that will almost always be measured as having zero time elapsed.
For higher-resolution timing on Windows, check out QueryPerformanceCounter and this sample code.
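A minimal QueryPerformanceCounter sketch (the busy loop is just a stand-in for the factorial call from the question):
#include <windows.h>
#include <iostream>

int main() {
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);    // ticks per second
    QueryPerformanceCounter(&start);

    volatile unsigned long long r = 0;
    for (int i = 0; i < 100000; ++i)
        r += i;                          // stand-in for the work being timed

    QueryPerformanceCounter(&stop);
    double seconds = double(stop.QuadPart - start.QuadPart) / double(freq.QuadPart);
    std::cout << "Elapsed: " << seconds * 1e6 << " microseconds\n";
}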
I need some way in C++ to keep track of the number of milliseconds since the program started executing, and I need the precision to be in milliseconds. (In my googling, I've found lots of folks who said to include time.h and then multiply the output of time() by 1000 ... this won't work.)
clock has been suggested a number of times. This has two problems. First of all, it often doesn't have a resolution even close to a millisecond (10-20 ms is probably more common). Second, some implementations of it (e.g., Unix and similar) return CPU time, while others (E.g., Windows) return wall time.
You haven't really said whether you want wall time or CPU time, which makes it hard to give a really good answer. On Windows, you could use GetProcessTimes. That will give you the kernel and user CPU times directly. It will also tell you when the process was created, so if you want milliseconds of wall time since process creation, you can subtract the process creation time from the current time (GetSystemTime). QueryPerformanceCounter has also been mentioned. This has a few oddities of its own -- for example, in some implementations it retrieves time from the CPU's cycle counter, so its frequency varies when/if the CPU speed changes. Other implementations read from the motherboard's 1.024 MHz timer, which does not vary with the CPU speed (and the conditions under which each is used aren't entirely obvious).
On Unix, you can use gettimeofday to just get the wall time with (at least the possibility of) relatively high precision. If you want time for a process, you can use times or getrusage (the latter is newer and gives more complete information that may also be more precise; a sketch follows below).
Bottom line: as I said in my comment, there's no way to get what you want portably. Since you haven't said whether you want CPU time or wall time, even for a specific system, there's not one right answer. The one you've "accepted" (clock()) has the virtue of being available on essentially any system, but what it returns also varies just about the most widely.
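As a sketch of the getrusage option mentioned for Unix (POSIX-only; this reports per-process CPU time, not wall time):
#include <sys/resource.h>   // getrusage, RUSAGE_SELF
#include <iostream>

int main() {
    struct rusage usage;
    if (getrusage(RUSAGE_SELF, &usage) == 0) {
        // ru_utime / ru_stime are timevals holding user and kernel CPU time.
        double userMs   = usage.ru_utime.tv_sec * 1000.0 + usage.ru_utime.tv_usec / 1000.0;
        double kernelMs = usage.ru_stime.tv_sec * 1000.0 + usage.ru_stime.tv_usec / 1000.0;
        std::cout << "user CPU: " << userMs << " ms, kernel CPU: " << kernelMs << " ms\n";
    }
}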
See std::clock()
Include time.h, and then use the clock() function. It returns the number of clock ticks elapsed since the program was launched. Just divide it by CLOCKS_PER_SEC to obtain the number of seconds, then multiply by 1000 to obtain the number of milliseconds.
Here is a cross-platform solution. This code was used for some kind of benchmarking:
#ifdef WIN32
#include <windows.h>     // QueryPerformanceCounter / QueryPerformanceFrequency
LARGE_INTEGER g_llFrequency = {0};
BOOL g_bQueryResult = QueryPerformanceFrequency(&g_llFrequency);
#else
#include <sys/time.h>    // gettimeofday
#endif
//...
// Both branches return a timestamp in microseconds.
long long osQueryPerfomance()
{
#ifdef WIN32
    LARGE_INTEGER llPerf = {0};
    QueryPerformanceCounter(&llPerf);
    return llPerf.QuadPart * 1000ll / ( g_llFrequency.QuadPart / 1000ll);
#else
    struct timeval stTimeVal;
    gettimeofday(&stTimeVal, NULL);
    return stTimeVal.tv_sec * 1000000ll + stTimeVal.tv_usec;
#endif
}
The most portable way is using the clock function. It usually reports the time that your program has been using the processor, or an approximation thereof. Note however the following:
The resolution is not very good on GNU systems. That's really a pity.
Take care to cast everything to double before doing divisions and assignments.
The counter is held as a 32-bit number on 32-bit GNU systems, which can be pretty annoying for long-running programs.
There are alternatives using "wall time" which give better resolution, both in Windows and Linux. But as the libc manual states: If you're trying to optimize your program or measure its efficiency, it's very useful to know how much processor time it uses. For that, calendar time and elapsed times are useless because a process may spend time waiting for I/O or for other processes to use the CPU.
Here is a C++0x solution and an example why clock() might not do what you think it does.
#include <chrono>
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <unistd.h>   // sleep()
int main()
{
auto start1 = std::chrono::monotonic_clock::now(); // monotonic_clock was the C++0x draft name; C++11 renamed it std::chrono::steady_clock
auto start2 = std::clock();
sleep(1);
for( int i=0; i<100000000; ++i);
auto end1 = std::chrono::monotonic_clock::now();
auto end2 = std::clock();
auto delta1 = end1-start1;
auto delta2 = end2-start2;
std::cout << "chrono: " << std::chrono::duration_cast<std::chrono::duration<float>>(delta1).count() << std::endl;
std::cout << "clock: " << static_cast<float>(delta2)/CLOCKS_PER_SEC << std::endl;
}
On my system this outputs:
chrono: 1.36839
clock: 0.36
You'll notice the clock() method is missing a second. An astute observer might also notice that clock() looks to have less resolution. On my system it's ticking by in 12 millisecond increments, terrible resolution.
If you are unable or unwilling to use C++0x, take a look at Boost.DateTime's ptime microsec_clock::universal_time().
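A small sketch of that Boost.Date_Time alternative (requires Boost; the busy loop is just filler work):
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main() {
    using namespace boost::posix_time;
    ptime start = microsec_clock::universal_time();

    volatile long long sink = 0;
    for (int i = 0; i < 10000000; ++i) ++sink;    // work being measured

    ptime stop = microsec_clock::universal_time();
    time_duration elapsed = stop - start;
    std::cout << elapsed.total_milliseconds() << " ms\n";
}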
This isn't C++ specific (nor portable), but you can do:
SYSTEMTIME systemDT;
In Windows.
From there, you can access each member of the systemDT struct.
You can record the time when the program started and compare the current time to the recorded time (systemDT versus systemDTtemp, for instance).
To refresh, you can call GetLocalTime(&systemDT);
To access each member, you would do systemDT.wHour, systemDT.wMinute, systemDT.wMilliseconds.
See the SYSTEMTIME documentation for more information.
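A minimal sketch of the approach described above (Sleep(250) stands in for the work being measured; note that the naive member-wise difference only works within the same minute):
#include <windows.h>
#include <iostream>

int main() {
    SYSTEMTIME systemDT, systemDTtemp;
    GetLocalTime(&systemDT);        // recorded at "program start"

    Sleep(250);                     // stand-in for the work being measured

    GetLocalTime(&systemDTtemp);    // refresh
    int elapsedMs = (systemDTtemp.wSecond - systemDT.wSecond) * 1000
                  + (systemDTtemp.wMilliseconds - systemDT.wMilliseconds);
    std::cout << "Elapsed: " << elapsedMs << " ms\n";
}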
Do you want wall clock time, CPU time, or some other measurement? Also, what platform is this? There is no universally portable way to get more precision than time() and clock() give you, but...
on most Unix systems, you can use gettimeofday() and/or clock_gettime(), which give at least microsecond precision and access to a variety of timers (see the sketch below);
I'm not nearly as familiar with Windows, but one of these functions probably does what you want.
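As a sketch of the clock_gettime option on Unix (CLOCK_MONOTONIC; the monotonicMs() helper name is an assumption, and older glibc may need -lrt):
#include <time.h>       // clock_gettime, CLOCK_MONOTONIC
#include <iostream>

// Milliseconds from a monotonic clock, unaffected by wall-clock adjustments.
long long monotonicMs() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}

int main() {
    long long start = monotonicMs();
    volatile long long sink = 0;
    for (int i = 0; i < 10000000; ++i) ++sink;    // work being measured
    std::cout << "Elapsed: " << monotonicMs() - start << " ms\n";
}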
You can try this code (taken from the Stockfish chess engine source code (GPL)):
#include <iostream>
#include <cstdint>   // uint64_t
#if !defined(_WIN32) && !defined(_WIN64) // Linux - Unix
# include <sys/time.h>
typedef timeval sys_time_t;
inline void system_time(sys_time_t* t) {
gettimeofday(t, NULL);
}
inline long long time_to_msec(const sys_time_t& t) {
return t.tv_sec * 1000LL + t.tv_usec / 1000;
}
#else // Windows and MinGW
# include <sys/timeb.h>
typedef _timeb sys_time_t;
inline void system_time(sys_time_t* t) { _ftime(t); }
inline long long time_to_msec(const sys_time_t& t) {
return t.time * 1000LL + t.millitm;
}
#endif
struct Time {
void restart() { system_time(&t); }
uint64_t msec() const { return time_to_msec(t); }
long long elapsed() const {
    return static_cast<long long>(current_time().msec() - time_to_msec(t));
}
static Time current_time() { Time t; t.restart(); return t; }
private:
sys_time_t t;
};
int main() {
sys_time_t t;
system_time(&t);
long long currentTimeMs = time_to_msec(t);
std::cout << "currentTimeMs:" << currentTimeMs << std::endl;
Time time = Time::current_time();
for (int i = 0; i < 1000000; i++) {
//Do something
}
long long e = time.elapsed();
std::cout << "time elapsed:" << e << std::endl;
getchar(); // wait for keyboard input
}