I want to time the real-time performance of some C++ functions I have written. How do I get timings at millisecond scale?
I know how to get the time in seconds via:
clock_t start = clock();
// ... run the code to be timed ...
double diff = (clock() - start) / (double) CLOCKS_PER_SEC;
cout << diff;
I am using Ubuntu Linux and the g++ compiler.
In Linux, take a look at clock_gettime(). It can essentially give you the time elapsed since an arbitrary point, in nanoseconds (which should be good enough for you).
Note that it is specified by the POSIX standard, so you should be fine using it on Unix-derived systems.
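For example, a minimal sketch (error handling omitted; CLOCK_MONOTONIC is usually the right choice for measuring durations, CLOCK_REALTIME for wall-clock time):
#include <time.h>
#include <stdio.h>

int main(void)
{
    struct timespec start, stop;
    clock_gettime(CLOCK_MONOTONIC, &start);

    // ... code to be timed ...

    clock_gettime(CLOCK_MONOTONIC, &stop);
    double ms = (stop.tv_sec - start.tv_sec) * 1000.0
              + (stop.tv_nsec - start.tv_nsec) / 1000000.0;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}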
Try diff = (clock() - start) * 1000.0 / CLOCKS_PER_SEC;
The idea is that you multiply the number of clocks by 1000, so that whereas before you might get 2 (seconds), you now get 2000 (milliseconds).
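For example, a minimal sketch putting that together (the timed code is a placeholder):
#include <ctime>
#include <iostream>

int main()
{
    std::clock_t start = std::clock();

    // my_function();  // placeholder for the code being timed

    // clock() counts in units of 1/CLOCKS_PER_SEC;
    // multiplying by 1000.0 converts the result to milliseconds
    double diff_ms = (std::clock() - start) * 1000.0 / CLOCKS_PER_SEC;
    std::cout << diff_ms << " ms\n";
    return 0;
}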
Notes:
On my Dell desktop, which is reasonably quick ...
Ubuntu bogomips peak at 5210.
time(0) takes about 80 nanoseconds (30 million calls in 2.4 seconds).
time(0) allows me to measure clock_gettime(), which takes about 1.3 microseconds per call (2.2 million calls in 3 seconds).
(I don't remember how many nanoseconds per time step.)
So typically, I use the following, with about 3 seconds of invocations.
// ////////////////////////////////////////////////////////////////////////////
void measuring_something_duration()
...
uint64_t start_us = dtb::get_system_microsecond();
do_something_for_about_3_seconds();
uint64_t test_duration_us = dtb::get_system_microsecond() - start_us;
uint64_t test_duration_ms = test_duration_us / 1000;
...
which uses these functions:
// /////////////////////////////////////////////////////////////////////////////
uint64_t dtb::get_system_microsecond(void)
{
uint64_t total_ns = dtb::get_system_nanosecond(); // see below
uint64_t ret_val = total_ns / NSPUS; // NanoSecondsPerMicroSeconds
return(ret_val);
}
// /////////////////////////////////////////////////////////////////////////////
uint64_t dtb::get_system_nanosecond(void)
{
//struct timespec { __time_t tv_sec; long int tv_nsec; }; -- total 8 bytes
struct timespec ts;
// CLOCK_REALTIME - system wide real time clock
int status = clock_gettime(CLOCK_REALTIME, &ts);
dtb_assert(0 == status);
// to 8 byte from 4 byte
uint64_t uli_nsec = ts.tv_nsec;
uint64_t uli_sec = ts.tv_sec;
uint64_t total_ns = uli_nsec + (uli_sec * NSPS); // nano-seconds-per-second
return(total_ns);
}
Remember to link with -lrt.
Related
I am trying to calculate the number of ticks a function uses to run and to do so using the clock() function like so:
unsigned long time = clock();
myfunction();
unsigned long time2 = clock() - time;
printf("time elapsed : %lu",time2);
But the problem is that the value it returns is always a multiple of 10000, which I think is related to CLOCKS_PER_SEC. Is there a way, or an equivalent function, to get a more precise value?
I am using 64-bit Ubuntu, but would prefer a solution that also works on other systems such as Windows and Mac OS.
There are a number of more accurate timers in POSIX.
gettimeofday() - officially obsolescent, but very widely available; microsecond resolution.
clock_gettime() - the replacement for gettimeofday() (but not necessarily so widely available; on an old version of Solaris, requires -lposix4 to link), with nanosecond resolution.
There are other sub-second timers of greater or lesser antiquity, portability, and resolution, including:
ftime() - millisecond resolution (marked 'legacy' in POSIX 2004; not in POSIX 2008).
clock() - which you already know about. Note that it measures CPU time, not elapsed (wall clock) time.
times() - clock-tick (CLK_TCK or HZ) resolution. Note that this measures CPU time for the parent and child processes.
Do not use ftime() or times() unless there is nothing better. The ultimate fallback, but not meeting your immediate requirements, is
time() - one second resolution.
The clock() function reports in units of CLOCKS_PER_SEC, which is required to be 1,000,000 by POSIX, but the increment may happen less frequently (100 times per second was one common frequency). The return value must be divided by CLOCKS_PER_SEC to get time in seconds.
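For illustration, a small sketch (assuming a POSIX system; the measured block is a placeholder) that reports both the CPU time from clock() and the wall-clock time from clock_gettime():
#include <time.h>
#include <stdio.h>

int main(void)
{
    struct timespec w0, w1;
    clock_t c0 = clock();
    clock_gettime(CLOCK_MONOTONIC, &w0);

    /* ... block of code to measure ... */

    clock_gettime(CLOCK_MONOTONIC, &w1);
    clock_t c1 = clock();

    double cpu_ms  = (c1 - c0) * 1000.0 / CLOCKS_PER_SEC;
    double wall_ms = (w1.tv_sec - w0.tv_sec) * 1000.0
                   + (w1.tv_nsec - w0.tv_nsec) / 1e6;
    printf("CPU: %.3f ms, wall: %.3f ms\n", cpu_ms, wall_ms);
    return 0;
}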
The most precise (but highly non-portable) way to measure time is to count CPU ticks.
For instance, on x86:
unsigned long long int asmx86Time ()
{
unsigned long long int realTimeClock = 0;
asm volatile ( "rdtsc\n\t"
"salq $32, %%rdx\n\t"
"orq %%rdx, %%rax\n\t"
"movq %%rax, %0"
: "=r" ( realTimeClock )
: /* no inputs */
: "%rax", "%rdx" );
return realTimeClock;
}
double cpuFreq ()
{
ifstream file ( "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq" );
string sFreq; if ( file ) file >> sFreq;
stringstream ssFreq ( sFreq ); double freq = 0.;
if ( ssFreq ) { ssFreq >> freq; freq *= 1000; } // kHz to Hz
return freq;
}
// Timing
unsigned long long int asmStart = asmx86Time ();
doStuff ();
unsigned long long int asmStop = asmx86Time ();
float asmDuration = ( asmStop - asmStart ) / cpuFreq ();
If you are not on x86, you will have to rewrite the assembler code for your CPU. If you need maximum precision, that is unfortunately the only way to go... otherwise use clock_gettime().
Per the clock() manpage, on POSIX platforms the value of the CLOCKS_PER_SEC macro must be 1000000. As you say that the return value you're getting from clock() is a multiple of 10000, that would imply that the resolution is 10 ms.
Also note that clock() on Linux returns an approximation of the processor time used by the program. On Linux, again, scheduler statistics are updated when the scheduler runs, at CONFIG_HZ frequency. So if the periodic timer tick is 100 Hz, you get process CPU time consumption statistics with 10 ms resolution.
Walltime measurements are not bound by this, and can be much more accurate. clock_gettime(CLOCK_MONOTONIC, ...) on a modern Linux system provides nanosecond resolution.
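If you want to check the resolution your system actually reports for a given clock, you can query clock_getres(); a minimal sketch:
#include <time.h>
#include <stdio.h>

int main(void)
{
    struct timespec res;
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        printf("CLOCK_MONOTONIC resolution: %ld s %ld ns\n",
               (long) res.tv_sec, res.tv_nsec);
    return 0;
}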
I agree with Jonathan's solution. Here is an implementation using clock_gettime() with nanosecond precision.
#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include <sys/time.h>

int main(int argc, char *argv[])
{
    struct timespec ts;
    int ret;
    while (1)
    {
        ret = clock_gettime(CLOCK_MONOTONIC, &ts);
        if (ret)
        {
            perror("clock_gettime");
            return 1;
        }
        // wake up 20000 ns (20 us) from now
        ts.tv_nsec += 20000;
        if (ts.tv_nsec >= 1000000000L)  // keep tv_nsec in range
        {
            ts.tv_sec++;
            ts.tv_nsec -= 1000000000L;
        }
        printf("Before sleep: %ld s %ld ns\n", (long) ts.tv_sec, ts.tv_nsec);
        ret = clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &ts, NULL);
    }
}
Although it is difficult to achieve nanosecond precision, this can be used to get precision below a microsecond (700-900 ns). The printf above is just there to print something each iteration (it will easily take 2-3 microseconds just to print a statement).
I'd like to time how long a function takes in C++ in milliseconds.
Here's what I have:
#include <iostream>
#include <chrono>
using namespace std;
using timepoint = std::chrono::system_clock::time_point;
float elapsed_time[100];
// Run function and count time
for(int k=0;k<100;k++) {
// Start timer
const timepoint clock_start = chrono::system_clock::now();
// Run Function
Recursive_Foo();
// Stop timer
const timepoint clock_stop = chrono::system_clock::now();
// Calculate time in milliseconds
chrono::duration<double,std::milli> timetaken = clock_stop - clock_start;
elapsed_time[k] = timetaken.count();
}
for(int l=0;l<100;l++) {
cout<<"Array: "<<l<<" Time: "<<elapsed_time[l]<<" ms"<<endl;
}
This compiles, but I think multithreading is preventing it from working properly. The output shows times at irregular intervals, e.g.:
Array: 0 Time: 0 ms
Array: 1 Time: 0 ms
Array: 2 Time: 15.6 ms
Array: 3 Time: 0 ms
Array: 4 Time: 0 ms
Array: 5 Time: 0 ms
Array: 6 Time: 15.6 ms
Array: 7 Time: 0 ms
Array: 8 Time: 0 ms
Do I need to use some kind of mutex lock? Or is there an easier way to time how many milliseconds a function took to execute?
EDIT
People may suggest using high_resolution_clock or steady_clock instead, but all three clocks produce the same irregular results.
This solution seems to produce real results: How to use QueryPerformanceCounter? but it's not clear to me why. Also, https://gamedev.stackexchange.com/questions/26759/best-way-to-get-elapsed-time-in-miliseconds-in-windows works well. Seems to be a Windows implementation issue.
Microsoft has a nice, clean solution for timing in microseconds, via MSDN:
#include <windows.h>
LONGLONG measure_activity_high_resolution_timing()
{
LARGE_INTEGER StartingTime, EndingTime, ElapsedMicroseconds;
LARGE_INTEGER Frequency;
QueryPerformanceFrequency(&Frequency);
QueryPerformanceCounter(&StartingTime);
// Activity to be timed
QueryPerformanceCounter(&EndingTime);
ElapsedMicroseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
//
// We now have the elapsed number of ticks, along with the
// number of ticks-per-second. We use these values
// to convert to the number of elapsed microseconds.
// To guard against loss-of-precision, we convert
// to microseconds *before* dividing by ticks-per-second.
//
ElapsedMicroseconds.QuadPart *= 1000000;
ElapsedMicroseconds.QuadPart /= Frequency.QuadPart;
return ElapsedMicroseconds.QuadPart;
}
Profile code using a high-resolution timer, not the system clock, which, as you're seeing, has very limited granularity.
http://www.cplusplus.com/reference/chrono/high_resolution_clock/
typedef high_resolution_clock::time_point tp;
const tp start = high_resolution_clock::now();
// do stuff
const tp end = high_resolution_clock::now();
If you suspect that some other process or thread in your app is taking too much CPU time then use:
GetThreadTimes under Windows
or
clock_gettime() with CLOCK_THREAD_CPUTIME_ID under Linux
to measure the CPU time of the thread on which your function was executing. This excludes from your measurements the time during which other threads/processes ran while you were profiling.
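For instance, on Linux a minimal sketch of per-thread CPU timing might look like this (the measured function is a placeholder):
#include <time.h>
#include <stdio.h>

int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &t0);

    // my_function();  // placeholder: work done on this thread

    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("thread CPU time: %.3f ms\n", ms);
    return 0;
}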
The thing is, I have to somehow get the current time of day, including milliseconds, in a convenient format.
Example of desired output:
21 h 04 min 12 s 512 ms
I know how to get this format with second precision, but I have no idea how to get my hands on the milliseconds.
Using the portable std::chrono
auto now = std::chrono::system_clock::now();
auto time = std::chrono::system_clock::to_time_t(now);
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(now.time_since_epoch()) -
std::chrono::duration_cast<std::chrono::seconds>(now.time_since_epoch());
std::cout << std::put_time(std::localtime(&time), "%H h %M m %S s ");
std::cout << ms.count() << " ms" << std::endl;
Output:
21 h 24 m 22 s 428 ms
Live example
Note for systems with clocks that don't support millisecond resolution
As pointed out by @user4581301, on some systems std::chrono::system_clock might not have enough resolution to accurately represent the current time in milliseconds. If that is the case, try using std::chrono::high_resolution_clock for calculating the number of milliseconds since the last second. This will give you the highest resolution available in your implementation.
Taking the time from two clocks will inevitably give you two separate points in time (however small the difference is). So keep in mind that using a separate clock for calculating the milliseconds will not yield perfect synchronization between the second and millisecond periods.
// Use system clock for time.
auto now = std::chrono::system_clock::now();
/* A small amount of time passes between storing the time points. */
// Use separate high resolution clock for calculating milliseconds.
auto hnow = std::chrono::high_resolution_clock::now();
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(hnow.time_since_epoch()) -
std::chrono::duration_cast<std::chrono::seconds>(hnow.time_since_epoch());
Also, there seems to be no guarantee that the tick events of std::high_resolution_clock and std::system_clock are synchronized, and because of this the millisecond period might not be in sync with the periodic update of the current second given by the system clock.
Because of these reasons, a separate high-resolution clock should not be used for the millisecond part when sub-second precision is critical.
With the exception of using boost::chrono, I am not aware of any system-independent method. I have implemented the following for Windows and POSIX:
LgrDate LgrDate::gmt()
{
LgrDate rtn;
#ifdef _WIN32
SYSTEMTIME sys;
GetSystemTime(&sys);
rtn.setDate(
sys.wYear,
sys.wMonth,
sys.wDay);
rtn.setTime(
sys.wHour,
sys.wMinute,
sys.wSecond,
sys.wMilliseconds*uint4(nsecPerMSec));
#else
struct timeval time_of_day;
struct tm broken_down;
gettimeofday(&time_of_day,0);
gmtime_r(
&time_of_day.tv_sec,
&broken_down);
rtn.setDate(
broken_down.tm_year + 1900,
broken_down.tm_mon + 1,
broken_down.tm_mday);
rtn.setTime(
broken_down.tm_hour,
broken_down.tm_min,
broken_down.tm_sec,
time_of_day.tv_usec * nsecPerUSec);
#endif
return rtn;
} // gmt
On a POSIX system I would do
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>

struct timespec tspec;
clock_gettime(CLOCK_REALTIME, &tspec);
int sec  = (int) tspec.tv_sec;
int msec = (int) (tspec.tv_nsec / 1000000);
Note: CLOCK_REALTIME is used to get the wall clock, which is adjusted by NTP. Then use whatever you have for the h:m:s part.
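For example, a minimal sketch that formats the requested layout, using localtime_r for the h:m:s part:
#include <time.h>
#include <stdio.h>

int main(void)
{
    struct timespec tspec;
    clock_gettime(CLOCK_REALTIME, &tspec);

    struct tm local;
    localtime_r(&tspec.tv_sec, &local);

    long msec = tspec.tv_nsec / 1000000L;
    printf("%02d h %02d min %02d s %03ld ms\n",
           local.tm_hour, local.tm_min, local.tm_sec, msec);
    return 0;
}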
How can I get the Windows system time with millisecond resolution?
If the above is not possible, then how can I get the operating system start time? I would like to use this value together with timeGetTime() in order to compute a system time with millisecond resolution.
Try this article from MSDN Magazine. It's actually quite complicated.
Implement a Continuously Updating, High-Resolution Time Provider for Windows
(archive link)
This is an elaboration of the above comments to explain some of the whys.
First, the GetSystemTime* calls are the only Win32 APIs providing the system's time. This time has a fairly coarse granularity, as most applications do not need the overhead required to maintain a higher resolution. Time is (likely) stored internally as a 64-bit count of milliseconds. Calling timeGetTime gets the low-order 32 bits. Calling GetSystemTime, etc., requests Windows to return this millisecond time after converting it into days, etc., and including the system start time.
There are two time sources in a machine: the CPU's clock and an on-board clock (e.g., real-time clock (RTC), Programmable Interval Timers (PIT), and High Precision Event Timer (HPET)). The first has a resolution of around ~0.5ns (2GHz) and the second is generally programmable down to a period of 1ms (though newer chips (HPET) have higher resolution). Windows uses these periodic ticks to perform certain operations, including updating the system time.
Applications can change this period via timeBeginPeriod; however, this affects the entire system. The OS will check / update regular events at the requested frequency. Under low CPU loads / frequencies, there are idle periods for power savings. At high frequencies, there isn't time to put the processor into low power states. See Timer Resolution for further details. Finally, each tick has some overhead and increasing the frequency consumes more CPU cycles.
As for higher-resolution time: the system time is not maintained to any finer accuracy, any more than Big Ben has a second hand. Using QueryPerformanceCounter (QPC) or the CPU's ticks (rdtsc) can provide the resolution between the system time ticks. Such an approach was used in the MSDN magazine article Kevin cited. These approaches may drift (e.g., due to frequency scaling) and therefore need to be periodically synced to the system time.
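As a rough illustration of that idea (a sketch only, not the article's implementation; the function name is made up), you can record a system-time/QPC baseline once and extrapolate from it:
#include <windows.h>
#include <stdint.h>

// Sketch: approximate the current system time in 100ns FILETIME units by
// anchoring a QPC reading to a system-time baseline (not thread-safe, and it
// will drift over time, so a real implementation re-syncs periodically).
uint64_t ApproxPreciseSystemTime100ns()
{
    static LARGE_INTEGER freq, baseCounter;
    static uint64_t baseTime100ns;
    static bool initialized = false;

    if (!initialized)
    {
        FILETIME ft;
        QueryPerformanceFrequency(&freq);
        GetSystemTimeAsFileTime(&ft);
        QueryPerformanceCounter(&baseCounter);
        baseTime100ns = ((uint64_t) ft.dwHighDateTime << 32) | ft.dwLowDateTime;
        initialized = true;
    }

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    uint64_t delta = (uint64_t) (now.QuadPart - baseCounter.QuadPart);

    // Convert counter ticks to 100ns units without overflowing
    uint64_t elapsed100ns = (delta / freq.QuadPart) * 10000000ULL
                          + (delta % freq.QuadPart) * 10000000ULL / freq.QuadPart;
    return baseTime100ns + elapsed100ns;
}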
In Windows, the base of all time is a function called GetSystemTimeAsFileTime.
It returns a structure that is capable of holding a time with 100ns resolution.
It is kept in UTC.
The FILETIME structure records the number of 100ns intervals since January 1, 1600; meaning its resolution is limited to 100ns.
This forms our first function:
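A sketch of what that first helper might look like (just a thin wrapper around the API; the wrapper name is illustrative):
FILETIME GetSystemTimeUtcAsFileTime()
{
    //Current system time, in UTC, as a 100ns-resolution FILETIME
    FILETIME ftNow;
    GetSystemTimeAsFileTime(&ftNow);
    return ftNow;
}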
A 64-bit number of 100ns ticks since January 1, 1600 is somewhat unwieldy. Windows provides a handy helper function, FileTimeToSystemTime that can decode this 64-bit integer into useful parts:
typedef struct _SYSTEMTIME {
    WORD wYear;
    WORD wMonth;
    WORD wDayOfWeek;
    WORD wDay;
    WORD wHour;
    WORD wMinute;
    WORD wSecond;
    WORD wMilliseconds;
} SYSTEMTIME;
Notice that SYSTEMTIME has a built-in resolution limitation of 1ms
Now we have a way to go from FILETIME to SYSTEMTIME:
We could write a function to get the current system time as a SYSTEMTIME structure:
SYSTEMTIME GetSystemTime()
{
    //Get the current system time (UTC) in its native 100ns FILETIME format
    FILETIME ftNow;
    GetSystemTimeAsFileTime(&ftNow);

    //Decode the 100ns intervals into a 1ms-resolution SYSTEMTIME for us
    SYSTEMTIME stNow;
    FileTimeToSystemTime(&ftNow, &stNow);

    return stNow;
}
Except Windows already wrote such a function for you: GetSystemTime
Local, rather than UTC
Now, what if you don't want the current time in UTC, but in your local time? Windows provides a function to convert a UTC FILETIME into your local time: FileTimeToLocalFileTime.
You could write a function that returns you a FILETIME in local time already:
FILETIME GetLocalTimeAsFileTime()
{
    FILETIME ftNow;
    GetSystemTimeAsFileTime(&ftNow);

    //convert UTC to local
    FILETIME ftNowLocal;
    FileTimeToLocalFileTime(&ftNow, &ftNowLocal);

    return ftNowLocal;
}
And let's say you want to decode the local FILETIME into a SYSTEMTIME. That's no problem; you can use FileTimeToSystemTime again:
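A sketch of that combination (the wrapper name is illustrative; GetLocalTimeAsFileTime is the helper defined above):
SYSTEMTIME GetLocalTimeAsSystemTime()
{
    //Local time as a 100ns FILETIME
    FILETIME ftNowLocal = GetLocalTimeAsFileTime();

    //Decode into a 1ms-resolution SYSTEMTIME
    SYSTEMTIME stNowLocal;
    FileTimeToSystemTime(&ftNowLocal, &stNowLocal);
    return stNowLocal;
}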
Fortunately, Windows already provides a function that returns that value for you: GetLocalTime.
Precise
There is another consideration. Before Windows 8, the clock had a resolution of around 15ms. In Windows 8 they improved the clock to 100ns (matching the resolution of FILETIME).
GetSystemTimeAsFileTime (legacy, 15ms resolution)
GetSystemTimePreciseAsFileTime (Windows 8+, 100ns resolution)
This means we should always prefer the new value:
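A sketch, assuming Windows 8 or later (on older systems you would have to detect this at runtime and fall back to GetSystemTimeAsFileTime; the wrapper name is illustrative):
FILETIME GetSystemTimePreciseUtcAsFileTime()
{
    //100ns-resolution system time, in UTC
    FILETIME ftNow;
    GetSystemTimePreciseAsFileTime(&ftNow);
    return ftNow;
}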
You asked for the time
You asked for the time; but you have some choices.
The timezone:
UTC (system native)
Local timezone
The format:
FILETIME (system native, 100ns resolution)
SYSTEMTIME (decoded, 1ms resolution)
Summary
100ns resolution: FILETIME
UTC: GetSystemTimePreciseAsFileTime (or GetSystemTimeAsFileTime)
Local: (roll your own)
1ms resolution: SYSTEMTIME
UTC: GetSystemTime
Local: GetLocalTime
GetTickCount will not get it done for you.
Look into QueryPerformanceFrequency / QueryPerformanceCounter. The only gotcha here is CPU scaling though, so do your research.
Starting with Windows 8, Microsoft introduced the new API GetSystemTimePreciseAsFileTime.
Unfortunately you can't use that if you create software which must also run on older operating systems.
My current solution is as follows, but be aware: the determined time is not exact; it is only close to the real time. The result should always be smaller than or equal to the real time, with a fixed offset (unless the computer went to standby). The result has millisecond resolution. For my purposes it is exact enough.
void GetHighResolutionSystemTime(SYSTEMTIME* pst)
{
static LARGE_INTEGER uFrequency = { 0 };
static LARGE_INTEGER uInitialCount;
static LARGE_INTEGER uInitialTime;
static bool bNoHighResolution = false;
if(!bNoHighResolution && uFrequency.QuadPart == 0)
{
// Initialize performance counter to system time mapping
bNoHighResolution = !QueryPerformanceFrequency(&uFrequency);
if(!bNoHighResolution)
{
FILETIME ftOld, ftInitial;
GetSystemTimeAsFileTime(&ftOld);
do
{
GetSystemTimeAsFileTime(&ftInitial);
QueryPerformanceCounter(&uInitialCount);
} while(ftOld.dwHighDateTime == ftInitial.dwHighDateTime && ftOld.dwLowDateTime == ftInitial.dwLowDateTime);
uInitialTime.LowPart = ftInitial.dwLowDateTime;
uInitialTime.HighPart = ftInitial.dwHighDateTime;
}
}
if(bNoHighResolution)
{
GetSystemTime(pst);
}
else
{
LARGE_INTEGER uNow, uSystemTime;
{
FILETIME ftTemp;
GetSystemTimeAsFileTime(&ftTemp);
uSystemTime.LowPart = ftTemp.dwLowDateTime;
uSystemTime.HighPart = ftTemp.dwHighDateTime;
}
QueryPerformanceCounter(&uNow);
LARGE_INTEGER uCurrentTime;
uCurrentTime.QuadPart = uInitialTime.QuadPart + (uNow.QuadPart - uInitialCount.QuadPart) * 10000000 / uFrequency.QuadPart;
if(uCurrentTime.QuadPart < uSystemTime.QuadPart || uCurrentTime.QuadPart - uSystemTime.QuadPart > 1000000)
{
// The performance counter has been frozen (e. g. after standby on laptops)
// -> Use current system time and determine the high performance time the next time we need it
uFrequency.QuadPart = 0;
uCurrentTime = uSystemTime;
}
FILETIME ftCurrent;
ftCurrent.dwLowDateTime = uCurrentTime.LowPart;
ftCurrent.dwHighDateTime = uCurrentTime.HighPart;
FileTimeToSystemTime(&ftCurrent, pst);
}
}
GetSystemTimeAsFileTime gives the best precision of any Win32 function for absolute time. QPF/QPC as Joel Clark suggested will give better relative time.
Since we all come here for quick snippets instead of boring explanations, I'll write one:
FILETIME t;
GetSystemTimeAsFileTime(&t); // unusable as is
ULARGE_INTEGER i;
i.LowPart = t.dwLowDateTime;
i.HighPart = t.dwHighDateTime;
int64_t ticks_since_1601 = i.QuadPart; // now usable
int64_t us_since_1601 = (i.QuadPart * 1e-1);
int64_t ms_since_1601 = (i.QuadPart * 1e-4);
int64_t sec_since_1601 = (i.QuadPart * 1e-7);
// unix epoch
int64_t unix_us = (i.QuadPart * 1e-1) - 11644473600LL * 1000000;
int64_t unix_ms = (i.QuadPart * 1e-4) - 11644473600LL * 1000;
double unix_sec = (i.QuadPart * 1e-7) - 11644473600LL;
// i.QuadPart is # of 100ns ticks since 1601-01-01T00:00:00Z
// difference to Unix Epoch is 11644473600 seconds (attention to units!)
No idea how the drifting performance-counter-based answers got upvoted; don't introduce slippage bugs, guys.
QueryPerformanceCounter() is built for fine-grained timer resolution.
It is the highest resolution timer that the system has to offer that you can use in your application code to identify performance bottlenecks.
Here is a simple implementation for C# devs:
[DllImport("kernel32.dll")]
extern static short QueryPerformanceCounter(ref long x);
[DllImport("kernel32.dll")]
extern static short QueryPerformanceFrequency(ref long x);
private long m_endTime;
private long m_startTime;
private long m_frequency;
public Form1()
{
InitializeComponent();
}
public void Begin()
{
QueryPerformanceCounter(ref m_startTime);
}
public void End()
{
QueryPerformanceCounter(ref m_endTime);
}
private void button1_Click(object sender, EventArgs e)
{
QueryPerformanceFrequency(ref m_frequency);
Begin();
for (long i = 0; i < 1000; i++) ;
End();
MessageBox.Show((m_endTime - m_startTime).ToString());
}
If you are a C/C++ dev, then take a look here: How to use the QueryPerformanceCounter function to time code in Visual C++
Well, this one is very old, yet there is another useful function in the Windows C library: _ftime, which returns a structure with the local time as a time_t, plus milliseconds, timezone, and a daylight saving time flag.
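A minimal sketch of how that might be used (note that newer CRTs deprecate _ftime in favour of _ftime_s / _ftime64_s):
#include <sys/timeb.h>
#include <stdio.h>

int main(void)
{
    struct _timeb tb;
    _ftime(&tb);   // _ftime_s(&tb) is the checked variant on newer CRTs

    printf("seconds since epoch: %lld, milliseconds: %hu, dst flag: %d\n",
           (long long) tb.time, tb.millitm, (int) tb.dstflag);
    return 0;
}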
In C11 and above (or C++17 and above) you can use timespec_get() to get the time with higher precision, portably:
#include <stdio.h>
#include <time.h>
int main(void)
{
struct timespec ts;
timespec_get(&ts, TIME_UTC);
char buff[100];
strftime(buff, sizeof buff, "%D %T", gmtime(&ts.tv_sec));
printf("Current time: %s.%09ld UTC\n", buff, ts.tv_nsec);
}
If you're using C++, then since C++11 you can use std::chrono::high_resolution_clock, std::chrono::system_clock (wall clock), or std::chrono::steady_clock (monotonic clock) from the new <chrono> header. No need to use Windows-specific APIs anymore:
auto start1 = std::chrono::high_resolution_clock::now();
auto start2 = std::chrono::system_clock::now();
auto start3 = std::chrono::steady_clock::now();
// do some work
auto end1 = std::chrono::high_resolution_clock::now();
auto end2 = std::chrono::system_clock::now();
auto end3 = std::chrono::steady_clock::now();
std::chrono::duration<double, std::milli> diff1 = end1 - start1;
std::chrono::duration<double, std::milli> diff2 = end2 - start2;
auto diff3 = std::chrono::duration_cast<std::chrono::milliseconds>(end3 - start3);
std::cout << diff1.count() << ' ' << diff2.count() << ' ' << diff3.count() << '\n';
The program is middleware between a database and an application. For each database access I must calculate the elapsed time in milliseconds. The example below uses TDateTime from the C++ Builder library. I must, as far as possible, use only standard C++ libraries.
AnsiString TimeInMilliseconds(TDateTime t) {
Word Hour, Min, Sec, MSec;
DecodeTime(t, Hour, Min, Sec, MSec);
long ms = MSec + Sec * 1000 + Min * 1000 * 60 + Hour * 1000 * 60 * 60;
return IntToStr(ms);
}
// computing times
TDateTime SelectStart = Now();
sql_manipulation_statement();
TDateTime SelectEnd = Now();
On both Windows and POSIX-compliant systems (Linux, OS X, etc.), you can measure time in units of 1/CLOCKS_PER_SEC (clock ticks) using clock(), found in <ctime>. The return value of that call is the time elapsed since the program started running, in clock ticks; divide by CLOCKS_PER_SEC to convert to seconds. Two calls to clock() can then be subtracted from each other to calculate the running time of a given block of code.
So for example:
#include <ctime>
#include <cstdio>
clock_t time_a = clock();
//...run block of code
clock_t time_b = clock();
if (time_a == ((clock_t)-1) || time_b == ((clock_t)-1))
{
perror("Unable to calculate elapsed time");
}
else
{
unsigned int total_time_ticks = (unsigned int)(time_b - time_a);
}
Edit: You are not going to be able to directly compare the timings from a POSIX-compliant platform to a Windows platform, because on Windows clock() measures wall-clock time, whereas on a POSIX system it measures elapsed CPU time. But it is a standard C/C++ library function, and for comparing performance between different blocks of code on the same platform it should fit your needs.
On Windows you can use GetTickCount (MSDN), which gives the number of milliseconds that have elapsed since the system was started. Calling it before and after your function gives you the number of milliseconds the call took.
DWORD start = GetTickCount();
//Do your stuff
DWORD end = GetTickCount();
cout << "the call took " << (end - start) << " ms";
Edit:
As Jason mentioned, clock() would be better because it is not Windows-specific.