I have a program that reads the current time from the system clock and saves it to a text file. I previously used the GetSystemTime function, which worked, but the times weren't completely consistent, e.g. one of the times is 32567.789 and the next is 32567.780, which is backwards in time.
I am using this program to save the time up to 10 times a second. I read that the GetSystemTimeAsFileTime function is more accurate. My question is, how do I convert my current code to use the GetSystemTimeAsFileTime function? I tried to use the FileTimeToSystemTime function, but that had the same problems.
SYSTEMTIME st;
GetSystemTime(&st);
DWORD sec = (st.wHour * 3600) + (st.wMinute * 60) + st.wSecond; // seconds in a day; can exceed a 16-bit WORD (max 65535), so use DWORD
lStr.Format(_T("%d %d.%d\n"), GetFrames(), sec, st.wMilliseconds);
std::wfstream myfile;
myfile.open("time.txt", std::ios::out | std::ios::in | std::ios::app);
if (myfile.is_open())
{
    myfile.write((LPCTSTR)lStr, lStr.GetLength());
    myfile.close();
}
else
{
    lStr.Format(_T("open file failed: %d"), WSAGetLastError());
}
EDIT: To add some more info, the code captures an image from a camera 10 times every second and saves the time the image was taken into a text file. When I subtract each entry of the text file from the next, e.g. entry 2-1, 3-2, 4-3, and so on, I get this graph, where the x axis is the number of entries and the y axis is the subtracted values.
All of them should be around the 0.12 mark, which most of them are. However, you can see that a lot of them vary and some even go negative. This isn't due to the camera, because the camera has its own internal clock and that has no variations. It has something to do with capturing the system time. What I want is the most accurate method to extract the system time, with the highest resolution and as little noise as possible.
Edit 2: I have taken your suggestions on board and run the program again. This is the result:
As you can see, it is a lot better than before, but it is still not right. I find it strange that it seems to go out in regular increments. I also plotted the times themselves, and this is the result, where x is the entry and y is the time:
Does anyone have any idea what could be causing the time to go out every 30 frames or so?
First of all, you want to get the FILETIME as follows:
FILETIME fileTime;
GetSystemTimeAsFileTime(&fileTime);
// Or for higher precision, use
// GetSystemTimePreciseAsFileTime(&fileTime);
According to FILETIME's documentation,
It is not recommended that you add and subtract values from the FILETIME structure to obtain relative times. Instead, you should copy the low- and high-order parts of the file time to a ULARGE_INTEGER structure, perform 64-bit arithmetic on the QuadPart member, and copy the LowPart and HighPart members into the FILETIME structure.
So, what you should do next is:
ULARGE_INTEGER theTime;
theTime.LowPart = fileTime.dwLowDateTime;
theTime.HighPart = fileTime.dwHighDateTime;
__int64 fileTime64Bit = theTime.QuadPart;
And that's it. The fileTime64Bit variable now contains the time you're looking for.
If you want to get a SYSTEMTIME object instead, you could just do the following:
SYSTEMTIME systemTime;
FileTimeToSystemTime(&fileTime, &systemTime);
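And if you need relative times, the documentation quote above says to do the subtraction on the 64-bit value. A minimal sketch (the ToQuad helper is my own name, not a Win32 API):

ULONGLONG ToQuad(const FILETIME& ft)
{
    ULARGE_INTEGER v;
    v.LowPart = ft.dwLowDateTime;
    v.HighPart = ft.dwHighDateTime;
    return v.QuadPart;
}

FILETIME a, b;
GetSystemTimeAsFileTime(&a);
// ... some work ...
GetSystemTimeAsFileTime(&b);
ULONGLONG elapsed100ns = ToQuad(b) - ToQuad(a); // 64-bit arithmetic, as the docs advise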
Getting the system time out of Windows with decent accuracy is something that I've had fun with, too... I discovered that Javascript code running on Chrome seemed to produce more consistent timer results than I could with C++ code, so I went looking in the Chrome source. An interesting place to start is the comments at the top of time_win.cc in the Chrome source. The links given there to a Mozilla bug and a Dr. Dobb's article are also very interesting.
Based on the Mozilla and Chrome sources, and the above links, the code I generated for my own use is here. As you can see, it's a lot of code!
The basic idea is that getting the absolute current time is quite expensive. Windows does provide a high resolution timer that's cheap to access, but that only gives you a relative, not absolute time. What my code does is split the problem up into two parts:
1) Get the system time accurately. This is in CalibrateNow(). The basic technique is to call timeBeginPeriod(1) to get accurate times, then call GetSystemTimeAsFileTime() until the result changes, which means that the timeBeginPeriod() call has had an effect. This gives us an accurate system time, but is quite an expensive operation (and the timeBeginPeriod() call can affect other processes) so we don't want to do it each time we want a time. The code also calls QueryPerformanceCounter() to get the current high resolution timer value.
bool NeedCalibration = true;
LONGLONG CalibrationFreq = 0;
LONGLONG CalibrationCountBase = 0;
ULONGLONG CalibrationTimeBase = 0;

void CalibrateNow(void)
{
    // If the timer frequency is not known, try to get it
    if (CalibrationFreq == 0)
    {
        LARGE_INTEGER freq;
        if (::QueryPerformanceFrequency(&freq) == 0)
            CalibrationFreq = -1;
        else
            CalibrationFreq = freq.QuadPart;
    }
    if (CalibrationFreq > 0)
    {
        // Get the current system time, accurate to ~1ms
        FILETIME ft1, ft2;
        ::timeBeginPeriod(1);
        ::GetSystemTimeAsFileTime(&ft1);
        do
        {
            // Loop until the value changes, so that the timeBeginPeriod() call has had an effect
            ::GetSystemTimeAsFileTime(&ft2);
        }
        while (FileTimeToValue(ft1) == FileTimeToValue(ft2));
        ::timeEndPeriod(1);

        // Get the current timer value
        LARGE_INTEGER counter;
        ::QueryPerformanceCounter(&counter);

        // Save calibration values
        CalibrationCountBase = counter.QuadPart;
        CalibrationTimeBase = FileTimeToValue(ft2);
        NeedCalibration = false;
    }
}
2) When we want the current time, get the high resolution timer by calling QueryPerformanceCounter(), and use the change in that timer since the last CalibrateNow() call to work out an accurate "now". This is in Now() in my code. This also periodically calls CalibrateNow() to ensure that the system time doesn't go backwards, or drift out.
FILETIME GetNow(void)
{
    for (int i = 0; i < 4; i++)
    {
        // Calibrate if needed, and give up if this fails
        if (NeedCalibration)
            CalibrateNow();
        if (NeedCalibration)
            break;

        // Get the current timer value and use it to compute now
        FILETIME ft;
        ::GetSystemTimeAsFileTime(&ft);
        LARGE_INTEGER counter;
        ::QueryPerformanceCounter(&counter);
        LONGLONG elapsed = ((counter.QuadPart - CalibrationCountBase) * 10000000) / CalibrationFreq;
        ULONGLONG now = CalibrationTimeBase + elapsed;

        // Don't let time go back
        static ULONGLONG lastNow = 0;
        now = max(now, lastNow);
        lastNow = now;

        // Check for clock skew
        if (LONGABS(FileTimeToValue(ft) - now) > 2 * GetTimeIncrement())
        {
            NeedCalibration = true;
            lastNow = 0;
        }
        if (!NeedCalibration)
            return ValueToFileTime(now);
    }

    // Calibration has failed to stabilize, so just use the system time
    FILETIME ft;
    ::GetSystemTimeAsFileTime(&ft);
    return ft;
}
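The snippets above assume a few small helpers that aren't shown in the post. Plausible definitions might look like this (GetTimeIncrement() wrapping GetSystemTimeAdjustment() is my assumption):

// Pack/unpack a FILETIME to a 64-bit tick count (assumed helpers, not Win32 APIs)
LONGLONG FileTimeToValue(const FILETIME& ft)
{
    ULARGE_INTEGER v;
    v.LowPart = ft.dwLowDateTime;
    v.HighPart = ft.dwHighDateTime;
    return (LONGLONG)v.QuadPart;
}

FILETIME ValueToFileTime(ULONGLONG value)
{
    ULARGE_INTEGER v;
    v.QuadPart = value;
    FILETIME ft;
    ft.dwLowDateTime = v.LowPart;
    ft.dwHighDateTime = v.HighPart;
    return ft;
}

// Absolute value that works even when the subtraction was done in unsigned arithmetic
#define LONGABS(x) ((LONGLONG)(x) < 0 ? -(LONGLONG)(x) : (LONGLONG)(x))

// System clock tick increment in 100ns units
LONGLONG GetTimeIncrement()
{
    DWORD adjustment = 0, increment = 0;
    BOOL disabled = FALSE;
    ::GetSystemTimeAdjustment(&adjustment, &increment, &disabled);
    return increment;
}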
It's all a bit hairy but works better than I had hoped. This also seems to work well as far back on Windows as I have tested (which was Windows XP).
I believe you are looking for the GetSystemTimePreciseAsFileTime() function, or even QueryPerformanceCounter(); in short, for something that is guaranteed to produce monotonic values.
I need to retrieve the current time point with a precision of microseconds. The time point can be relative to any fixed date.
How can this be achieved? Due to company policy, I really shouldn't use Boost or any other external library.
I'm working on a multiplatform application. Under Linux I can use C++11's system_clock::now().time_since_epoch(), but under Windows I work with VS2010, so I have no std::chrono library.
I've seen the RtlTimeToSecondsSince1970 function, but its resolution is a second.
Timers and timing are a tricky enough subject that, in my opinion, current cross-platform implementations are not quite up to scratch. So I'd recommend a specific version for Windows with appropriate #ifdefs. See other answers if you want a cross-platform version.
If you've got to (or want to) use a Windows-specific call, then GetSystemTimeAsFileTime (or, on Windows 8, GetSystemTimePreciseAsFileTime) is the best call for getting UTC time, and QueryPerformanceCounter is good for high-resolution timestamps. GetSystemTimeAsFileTime gives back the number of 100-nanosecond intervals since January 1, 1601 UTC in a FILETIME structure.
This fine article goes into the gory details of measuring timers and timestamps in windows and is well worth a read.
EDIT: To convert a FILETIME to microseconds, you need to go via a ULARGE_INTEGER.
FILETIME ft;
GetSystemTimeAsFileTime(&ft);
ULARGE_INTEGER li;
li.LowPart = ft.dwLowDateTime;
li.HighPart = ft.dwHighDateTime;
unsigned long long valueAsHns = li.QuadPart;
unsigned long long valueAsUs = valueAsHns/10;
This code works for me in VS2010. The constructor tests whether high-precision timing is available on the processor, and currentTime() returns a time stamp in seconds. Compare time stamps to get a delta time. I use this in a game engine to get very small delta time values. Note that precision isn't limited to whole seconds despite the return value being named in seconds (it's a double).
Basically you find out how many seconds per CPU tick with QueryPerformanceFrequency, then get the time using QueryPerformanceCounter.
////////////////////////
// Grabs speed of processor
////////////////////////
Timer::Timer()
{
    __int64 _iCountsPerSec = 0;
    bool _bPerfExists = QueryPerformanceFrequency((LARGE_INTEGER*)&_iCountsPerSec) != 0;
    if (_bPerfExists)
    {
        m_dSecondsPerCount = 1.0 / static_cast<double>(_iCountsPerSec);
    }
}

////////////////////////
// Returns current real time
////////////////////////
double Timer::currentTime() const
{
    __int64 time = 0;
    QueryPerformanceCounter((LARGE_INTEGER*)&time);
    double timeInSeconds = static_cast<double>(time) * m_dSecondsPerCount;
    return timeInSeconds;
}
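As a usage sketch (hypothetical, assuming the Timer class above), delta time is just the difference between two consecutive stamps:

Timer timer;
double last = timer.currentTime();
// ... once per frame ...
double now = timer.currentTime();
double deltaSeconds = now - last; // elapsed time since the previous frame
last = now;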
The following code works in Visual Studio.

#include <time.h>

clock_t start, end;

// Returns whole seconds elapsed since initSystemClock_bl() was called
// (integer division by CLOCKS_PER_SEC truncates any fractional second)
int getTicks_u32()
{
    int cpu_time_used;
    end = clock();
    cpu_time_used = static_cast<int>(end - start) / CLOCKS_PER_SEC;
    return cpu_time_used;
}

void initSystemClock_bl(void)
{
    start = clock();
}
How can I get the Windows system time with millisecond resolution?
If the above is not possible, then how can I get the operating system start time? I would like to use this value together with timeGetTime() in order to compute a system time with millisecond resolution.
Try this article from MSDN Magazine. It's actually quite complicated.
Implement a Continuously Updating, High-Resolution Time Provider for Windows
(archive link)
This is an elaboration of the above comments to explain some of the whys.
First, the GetSystemTime* calls are the only Win32 APIs providing the system's time. This time has a fairly coarse granularity, as most applications do not need the overhead required to maintain a higher resolution. Time is (likely) stored internally as a 64-bit count of milliseconds. Calling timeGetTime gets the low-order 32 bits. Calling GetSystemTime, etc. requests Windows to return this millisecond time, after converting it into days, etc. and including the system start time.
There are two time sources in a machine: the CPU's clock and an on-board clock (e.g., real-time clock (RTC), Programmable Interval Timer (PIT), High Precision Event Timer (HPET)). The first has a resolution of ~0.5ns (at 2GHz), and the second is generally programmable down to a period of 1ms (though newer chips (HPET) have higher resolution). Windows uses these periodic ticks to perform certain operations, including updating the system time.
Applications can change this period via timeBeginPeriod; however, this affects the entire system. The OS will check/update regular events at the requested frequency. Under low CPU loads/frequencies, there are idle periods for power savings. At high frequencies, there isn't time to put the processor into low-power states. See Timer Resolution for further details. Finally, each tick has some overhead, and increasing the frequency consumes more CPU cycles.
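For instance, a minimal sketch of raising the tick rate around a timing-sensitive section (timeBeginPeriod/timeEndPeriod come from winmm):

#include <windows.h>
#pragma comment(lib, "winmm.lib")

timeBeginPeriod(1); // request 1ms timer granularity (affects the whole system)
// ... timing-sensitive work; Sleep(1) now waits ~1ms instead of ~15.6ms ...
timeEndPeriod(1);   // always pair with timeBeginPeriod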
As for higher-resolution time: the system time is simply not maintained to that accuracy, any more than Big Ben has a second hand. Using QueryPerformanceCounter (QPC) or the CPU's ticks (rdtsc) can provide the resolution between the system time ticks. Such an approach was used in the MSDN magazine article Kevin cited. Though these approaches may drift (e.g., due to frequency scaling), they therefore need to be synced to the system time.
In Windows, the base of all time is a function called GetSystemTimeAsFileTime.
It returns a structure that is capable of holding a time with 100ns resolution.
It is kept in UTC.
The FILETIME structure records the number of 100ns intervals since January 1, 1601; meaning its resolution is limited to 100ns.
This forms our first function:
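A minimal sketch of such a wrapper (the name is my own, not a Win32 API):

FILETIME GetSystemTimeAsFileTimeValue()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft); // 100ns ticks since January 1, 1601 (UTC)
    return ft;
}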
A 64-bit number of 100ns ticks since January 1, 1601 is somewhat unwieldy. Windows provides a handy helper function, FileTimeToSystemTime, that can decode this 64-bit integer into useful parts:
typedef struct _SYSTEMTIME {
    WORD wYear;
    WORD wMonth;
    WORD wDayOfWeek;
    WORD wDay;
    WORD wHour;
    WORD wMinute;
    WORD wSecond;
    WORD wMilliseconds;
} SYSTEMTIME;
Notice that SYSTEMTIME has a built-in resolution limitation of 1ms.
Now we have a way to go from FILETIME to SYSTEMTIME. We could write a function to get the current system time as a SYSTEMTIME structure:
SYSTEMTIME GetSystemTimeViaFileTime()
{
    // Get the current system time (UTC) in its native 100ns FILETIME structure
    FILETIME ftNow;
    GetSystemTimeAsFileTime(&ftNow);

    // Decode the 100ns intervals into a 1ms-resolution SYSTEMTIME
    SYSTEMTIME stNow;
    FileTimeToSystemTime(&ftNow, &stNow);
    return stNow;
}
Except Windows already wrote such a function for you: GetSystemTime
Local, rather than UTC
Now, what if you don't want the current time in UTC? What if you want it in your local time? Windows provides a function to convert a FILETIME that is in UTC into your local time: FileTimeToLocalFileTime.
You could write a function that returns a FILETIME already in local time:

FILETIME GetLocalTimeAsFileTime()
{
    FILETIME ftNow;
    GetSystemTimeAsFileTime(&ftNow);

    // Convert from UTC into the local timezone
    FILETIME ftNowLocal;
    FileTimeToLocalFileTime(&ftNow, &ftNowLocal);
    return ftNowLocal;
}
And let's say you want to decode the local FILETIME into a SYSTEMTIME. That's no problem; you can use FileTimeToSystemTime again:
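A minimal sketch, reusing the wrapper above:

FILETIME ftLocal = GetLocalTimeAsFileTime();
SYSTEMTIME stLocal;
FileTimeToSystemTime(&ftLocal, &stLocal); // works on a local-time FILETIME just as well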
Fortunately, Windows already provides you a function that returns the value decoded: GetLocalTime.
Precise
There is another consideration. Before Windows 8, the clock had a resolution of around 15ms. In Windows 8 they improved the clock to 100ns (matching the resolution of FILETIME).
GetSystemTimeAsFileTime (legacy, 15ms resolution)
GetSystemTimePreciseAsFileTime (Windows 8, 100ns resolution)
This means we should always prefer the new function:
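A minimal sketch of preferring the precise call while still running on older Windows (the runtime lookup is my approach, not from the original answer):

typedef VOID (WINAPI *PreciseFn)(LPFILETIME);

void GetBestSystemTimeAsFileTime(FILETIME* ft)
{
    static PreciseFn precise = (PreciseFn)GetProcAddress(
        GetModuleHandleW(L"kernel32.dll"), "GetSystemTimePreciseAsFileTime");
    if (precise)
        precise(ft);                 // Windows 8+: 100ns-class resolution
    else
        GetSystemTimeAsFileTime(ft); // older systems: ~15ms tick
}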
You asked for the time
You asked for the time; but you have some choices.
The timezone:
UTC (system native)
Local timezone
The format:
FILETIME (system native, 100ns resolution)
SYSTEMTIME (decoded, 1ms resolution)
Summary
100ns resolution: FILETIME
UTC: GetSystemTimePreciseAsFileTime (or GetSystemTimeAsFileTime)
Local: (roll your own)
1ms resolution: SYSTEMTIME
UTC: GetSystemTime
Local: GetLocalTime
GetTickCount will not get it done for you.
Look into QueryPerformanceFrequency / QueryPerformanceCounter. The only gotcha here is CPU scaling though, so do your research.
Starting with Windows 8, Microsoft has introduced the new API function GetSystemTimePreciseAsFileTime.
Unfortunately you can't use that if you create software which must also run on older operating systems.
My current solution is as follows, but be aware: the determined time is not exact; it is only near to the real time. The result should always be smaller than or equal to the real time, with a fixed error (unless the computer went into standby). The result has millisecond resolution. For my purpose it is exact enough.
void GetHighResolutionSystemTime(SYSTEMTIME* pst)
{
    static LARGE_INTEGER uFrequency = { 0 };
    static LARGE_INTEGER uInitialCount;
    static LARGE_INTEGER uInitialTime;
    static bool bNoHighResolution = false;

    if (!bNoHighResolution && uFrequency.QuadPart == 0)
    {
        // Initialize performance counter to system time mapping
        bNoHighResolution = !QueryPerformanceFrequency(&uFrequency);
        if (!bNoHighResolution)
        {
            FILETIME ftOld, ftInitial;
            GetSystemTimeAsFileTime(&ftOld);
            do
            {
                GetSystemTimeAsFileTime(&ftInitial);
                QueryPerformanceCounter(&uInitialCount);
            } while (ftOld.dwHighDateTime == ftInitial.dwHighDateTime && ftOld.dwLowDateTime == ftInitial.dwLowDateTime);
            uInitialTime.LowPart  = ftInitial.dwLowDateTime;
            uInitialTime.HighPart = ftInitial.dwHighDateTime;
        }
    }

    if (bNoHighResolution)
    {
        GetSystemTime(pst);
    }
    else
    {
        LARGE_INTEGER uNow, uSystemTime;
        {
            FILETIME ftTemp;
            GetSystemTimeAsFileTime(&ftTemp);
            uSystemTime.LowPart  = ftTemp.dwLowDateTime;
            uSystemTime.HighPart = ftTemp.dwHighDateTime;
        }
        QueryPerformanceCounter(&uNow);

        LARGE_INTEGER uCurrentTime;
        uCurrentTime.QuadPart = uInitialTime.QuadPart + (uNow.QuadPart - uInitialCount.QuadPart) * 10000000 / uFrequency.QuadPart;

        // Note: the first test already covers the negative case, so a plain subtraction
        // suffices here and avoids truncation from calling int abs() on a 64-bit value
        if (uCurrentTime.QuadPart < uSystemTime.QuadPart || uCurrentTime.QuadPart - uSystemTime.QuadPart > 1000000)
        {
            // The performance counter has been frozen (e.g. after standby on laptops)
            // -> Use the current system time and redetermine the mapping the next time we need it
            uFrequency.QuadPart = 0;
            uCurrentTime = uSystemTime;
        }

        FILETIME ftCurrent;
        ftCurrent.dwLowDateTime  = uCurrentTime.LowPart;
        ftCurrent.dwHighDateTime = uCurrentTime.HighPart;
        FileTimeToSystemTime(&ftCurrent, pst);
    }
}
GetSystemTimeAsFileTime gives the best precision of any Win32 function for absolute time. QPF/QPC as Joel Clark suggested will give better relative time.
Since we all come here for quick snippets instead of boring explanations, I'll write one:
FILETIME t;
GetSystemTimeAsFileTime(&t); // unusable as is

ULARGE_INTEGER i;
i.LowPart = t.dwLowDateTime;
i.HighPart = t.dwHighDateTime;
int64_t ticks_since_1601 = i.QuadPart; // now usable
// integer division avoids rounding errors: the tick count exceeds the
// 53-bit mantissa of a double, so multiplying by 1e-1 etc. would lose precision
int64_t us_since_1601  = i.QuadPart / 10;
int64_t ms_since_1601  = i.QuadPart / 10000;
int64_t sec_since_1601 = i.QuadPart / 10000000;

// unix epoch
int64_t unix_us  = i.QuadPart / 10    - 11644473600LL * 1000000;
int64_t unix_ms  = i.QuadPart / 10000 - 11644473600LL * 1000;
double  unix_sec = i.QuadPart * 1e-7  - 11644473600LL;

// i.QuadPart is # of 100ns ticks since 1601-01-01T00:00:00Z
// difference to Unix Epoch is 11644473600 seconds (attention to units!)
No idea how the drifting performance-counter-based answers got upvoted; don't write slippage bugs, guys.
QueryPerformanceCounter() is built for fine-grained timer resolution.
It is the highest resolution timer that the system has to offer that you can use in your application code to identify performance bottlenecks
Here is a simple implementation for C# devs:
[DllImport("kernel32.dll")]
extern static short QueryPerformanceCounter(ref long x);

[DllImport("kernel32.dll")]
extern static short QueryPerformanceFrequency(ref long x);

private long m_endTime;
private long m_startTime;
private long m_frequency;

public Form1()
{
    InitializeComponent();
}

public void Begin()
{
    QueryPerformanceCounter(ref m_startTime);
}

public void End()
{
    QueryPerformanceCounter(ref m_endTime);
}

private void button1_Click(object sender, EventArgs e)
{
    QueryPerformanceFrequency(ref m_frequency);
    Begin();
    for (long i = 0; i < 1000; i++) ;
    End();
    MessageBox.Show((m_endTime - m_startTime).ToString());
}
If you are a C/C++ dev, then take a look here: How to use the QueryPerformanceCounter function to time code in Visual C++
Well, this one is very old, yet there is another useful function in the Windows C library, _ftime, which returns a structure with the local time as time_t, milliseconds, timezone, and daylight saving time flag.
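A minimal sketch, assuming MSVC's <sys/timeb.h> (using the 64-bit secure variant _ftime64_s):

#include <stdio.h>
#include <sys/timeb.h>

struct __timeb64 tb;
_ftime64_s(&tb);
printf("%lld.%03u (tz offset %d min, DST %d)\n",
       (long long)tb.time, (unsigned)tb.millitm, (int)tb.timezone, (int)tb.dstflag);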
In C11 and above (or C++17 and above) you can use timespec_get() to get time with higher precision portably
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    char buff[100];
    strftime(buff, sizeof buff, "%D %T", gmtime(&ts.tv_sec));
    printf("Current time: %s.%09ld UTC\n", buff, ts.tv_nsec);
}
If you're using C++, then since C++11 you can use std::chrono::high_resolution_clock, std::chrono::system_clock (wall clock), or std::chrono::steady_clock (monotonic clock) from the new <chrono> header. There is no need to use Windows-specific APIs anymore:
#include <chrono>
#include <iostream>

auto start1 = std::chrono::high_resolution_clock::now();
auto start2 = std::chrono::system_clock::now();
auto start3 = std::chrono::steady_clock::now();
// do some work
auto end1 = std::chrono::high_resolution_clock::now();
auto end2 = std::chrono::system_clock::now();
auto end3 = std::chrono::steady_clock::now();

// integer tick counts need an explicit duration_cast;
// floating-point durations convert implicitly
auto diff1 = std::chrono::duration_cast<std::chrono::milliseconds>(end1 - start1);
std::chrono::duration<double, std::milli> diff2 = end2 - start2;
auto diff3 = std::chrono::duration_cast<std::chrono::milliseconds>(end3 - start3);
std::cout << diff1.count() << ' ' << diff2.count() << ' ' << diff3.count() << '\n';
timeGetTime seems to be quite good for querying the system time. However, its return value is 32-bit only, so it wraps around approximately every 49 days.
It's not too hard to detect the rollover in calling code, but it adds some complexity and (worse) requires keeping state.
Is there some replacement for timeGetTime that doesn't have this wrap-around problem (probably by returning a 64-bit value), with roughly the same precision and cost?
Unless you need to time an event that lasts over 49 days, you can SAFELY ignore the wrap-around. Just always subtract the previous timeGetTime() from the current timeGetTime() and you will always obtain an accurate delta, even across a wrap-around, provided that you are timing events whose total duration is under 49 days. This all works due to how unsigned modular arithmetic works inside the computer.
// this code ALWAYS works, even with wrap-around!
DWORD dwStart = timeGetTime();
// provided the event timed here has a duration of less than 49 days
DWORD dwDuration = timeGetTime()-dwStart;
TIP: look into timeBeginPeriod(1L) to increase the accuracy of timeGetTime().
BUT... if you want a 64-bit version of timeGetTime, here it is:
__int64 timeGetTime64()
{
    static __int64 time64 = 0;
    // warning: if multiple threads call this function, protect it with a critical section!
    return (time64 += (timeGetTime() - (DWORD)time64));
}
Please note that if this function is not called at least once every 49 days, it will fail to properly detect a wrap-around.
What platform?
You could use GetTickCount64() if you're running on Vista or later, or synthesise your own GetTickCount64() from GetTickCount() and a timer...
I deal with the rollover issue in GetTickCount() and synthesising a GetTickCount64() on platforms that don't support it here on my blog about testing non-trivial code: http://www.lenholgate.com/blog/2008/04/practical-testing-17---a-whole-new-approach.html
Nope, tracking roll-over requires state. It can be as simple as just incrementing your own 64-bit counter on each callback.
It is pretty unusual to want to track time periods to a resolution as low as 1 millisecond for up to 49 days. You'd have to worry whether the accuracy is still there after such a long period. The next step is to use the clock: GetTickCount(64) and GetSystemTimeAsFileTime have a resolution of 15.625 milliseconds and are kept accurate with a time server.
Have a look at GetSystemTimeAsFileTime(). It fills a FILETIME struct that contains a "64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC)"
How are you trying to use it? I frequently use the Win32 equivalent when checking for durations that I know will be under 49 days. For example, the following code will always work:

DWORD start = timeGetTime();
DoSomethingThatTakesLessThan49Days();
DWORD duration = timeGetTime() - start;

Even if timeGetTime rolled over while calling DoSomethingThatTakesLessThan49Days, duration will still be correct.
Note that the following code could fail on rollover:

DWORD start = timeGetTime();
DoSomethingThatTakesLessThan49Days();
if (start + 5000 < timeGetTime())
{
}

but it can easily be rewritten to work as follows:

DWORD start = timeGetTime();
DoSomethingThatTakesLessThan49Days();
if (timeGetTime() - start < 5000)
{
}
Assuming you can guarantee that this function will be called at least once every 49 days, something like this will work:
#include <stdint.h>

// Returns current time in milliseconds
uint64_t timeGetTime64()
{
    static uint32_t _prevVal = 0;
    static uint64_t _wrapOffset = 0;

    uint32_t newVal = (uint32_t)timeGetTime();
    if (newVal < _prevVal) _wrapOffset += (((uint64_t)1) << 32);
    _prevVal = newVal;
    return _wrapOffset + newVal;
}
Note that due to the use of static variables, this function isn't multithread-safe, so if you plan on calling it from multiple threads you should serialize it via a critical section or mutex or similar.
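If you're on C++11, a minimal thread-safe wrapper is one option (a sketch, assuming the function above):

#include <mutex>

uint64_t timeGetTime64_threadsafe()
{
    static std::mutex m;                  // serializes access to the static state
    std::lock_guard<std::mutex> lock(m);
    return timeGetTime64();
}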
I'm not sure if this fully meets your needs, but
std::chrono::system_clock
might be along the lines of what you're looking for.
http://en.cppreference.com/w/cpp/chrono/system_clock
You could use the RDTSC intrinsic. To get time in milliseconds, you can first measure a transform coefficient:
#include <windows.h> // Sleep
#include <intrin.h>  // __rdtsc

double get_rdtsc_coeff() {
    static double coeff = 0.0;
    if (coeff < 1.0) { // compute it only once
        unsigned __int64 t00 = __rdtsc();
        Sleep(1000);
        unsigned __int64 t01 = __rdtsc();
        coeff = (t01 - t00) / 1000.0; // TSC ticks per millisecond
    }
    return coeff; // transformation coefficient
}
Now you can get the count of milliseconds since the last reset:
__int64 get_ms_from_start() {
    return static_cast<__int64>(__rdtsc() / get_rdtsc_coeff());
}
If your system uses SpeedStep or similar technologies, use the QueryPerformanceCounter/QueryPerformanceFrequency functions instead. Windows guarantees that the frequency cannot change while the system is running.
I have a problem with time.
I want to get the current time in microseconds on Windows using C++.
I can't find a way.
The "canonical" answer was given by unwind:
One popular way is using the QueryPerformanceCounter() call.
There are, however, a few problems with this method:
it's intended for measurement of time intervals, not time. This means you have to write code to establish "epoch time" from which you will measure precise intervals. This is called calibration.
As you calibrate your clock, you also need to adjust it periodically so it never gets too far out of sync with your system clock (this difference is called drift).
QueryPerformanceCounter is not implemented in user space; this means a context switch is needed to call the kernel side of the implementation, and that is relatively expensive (around 0.7 microseconds). This seems to be required to support legacy hardware.
Not all is lost, though. Points 1 and 2 are something you can do with a bit of coding, and 3 can be replaced with a direct call to RDTSC (available in newer versions of Visual C++ via the __rdtsc() intrinsic), as long as you know the accurate CPU clock frequency. Although on older CPUs such a call would be susceptible to changes in the CPU's internal clock speed, on all newer Intel and AMD CPUs it is guaranteed to give fairly accurate results and won't be affected by changes in CPU clock (e.g. power-saving features).
Let's get started with 1. Here is the data structure to hold the calibration data:
struct init
{
    long long stamp; // last adjustment time
    long long epoch; // last sync time as FILETIME
    long long start; // counter ticks to match epoch
    long long freq;  // counter frequency (ticks per 10ms)

    void sync(int sleep);
};

init data_[2] = {};
const init* volatile init_ = &data_[0];
Here is the code for initial calibration; it has to be given time (in milliseconds) to wait for the clock to move; I've found that 500 milliseconds gives pretty good results (the shorter the time, the less accurate the calibration). For the purpose of calibration we are going to use QueryPerformanceCounter() etc. You only need to call it for data_[0], since data_[1] will be updated by the periodic clock adjustment (below).
void init::sync(int sleep)
{
    LARGE_INTEGER t1, t2, p1, p2, r1, r2, f;
    int cpu[4] = {};

    // prepare for rdtsc calibration - affinity and priority
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
    SetThreadAffinityMask(GetCurrentThread(), 2);
    Sleep(10);

    // frequency for time measurement during calibration
    QueryPerformanceFrequency(&f);

    // for explanation why RDTSC is safe on modern CPUs, look for "Constant TSC" and "Invariant TSC" in
    // Intel(R) 64 and IA-32 Architectures Software Developer's Manual (document 253668.pdf)
    __cpuid(cpu, 0); // flush CPU pipeline
    r1.QuadPart = __rdtsc();
    __cpuid(cpu, 0);
    QueryPerformanceCounter(&p1);

    // sleep some time, doesn't matter it's not accurate.
    Sleep(sleep);

    // wait for the system clock to move, so we have exact epoch
    GetSystemTimeAsFileTime((FILETIME*) (&t1.u));
    do
    {
        Sleep(0);
        GetSystemTimeAsFileTime((FILETIME*) (&t2.u));
        __cpuid(cpu, 0); // flush CPU pipeline
        r2.QuadPart = __rdtsc();
    } while (t2.QuadPart == t1.QuadPart);

    // measure how much time has passed exactly, using more expensive QPC
    __cpuid(cpu, 0);
    QueryPerformanceCounter(&p2);

    stamp = t2.QuadPart;
    epoch = t2.QuadPart;
    start = r2.QuadPart;

    // calculate counter ticks per 10ms
    freq = f.QuadPart * (r2.QuadPart - r1.QuadPart) / 100 / (p2.QuadPart - p1.QuadPart);

    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_NORMAL);
    SetThreadAffinityMask(GetCurrentThread(), 0xFF);
}
With good calibration data you can calculate the exact time from the cheap RDTSC (I measured the call and calculation to be ~25 nanoseconds on my machine). There are three things to note:
the return type is binary compatible with the FILETIME structure and is precise to 100ns, unlike GetSystemTimeAsFileTime (which increments in 10-30ms or so intervals, or 1 millisecond at best).
in order to avoid expensive integer-to-double-to-integer conversions, the whole calculation is performed in 64-bit integers. Even though these can hold huge numbers, there is a real risk of integer overflow, and so start must be brought forward periodically to avoid it. This is done in the clock adjustment.
we are making a copy of the calibration data, because it might have been updated during our call by the clock adjustment in another thread.
Here is the code to read current time with high precision. Return value is binary compatible with FILETIME, i.e. number of 100-nanosecond intervals since Jan 1, 1601.
long long now()
{
    // must make a copy
    const init* it = init_;
    // __cpuid(cpu, 0) - no need to flush CPU pipeline here
    const long long p = __rdtsc();
    // time passed from epoch in counter ticks
    long long d = (p - it->start);
    if (d > 0x80000000000ll)
    {
        // closing in on integer overflow, must adjust now
        adjust();
    }
    // convert 10ms periods to 100ns periods
    d *= 100000ll;
    d /= it->freq;
    // and add to epoch, so we have a proper FILETIME
    d += it->epoch;
    return d;
}
For clock adjustment, we need to capture the exact time (as provided by the system clock) and compare it against our clock; this will give us the drift value. Next we use a simple formula to calculate an "adjusted" CPU frequency, to make our clock meet the system clock at the time of the next adjustment. Thus it is important that adjustments are called at regular intervals; I've found that it works well when called at 15-minute intervals. I use CreateTimerQueueTimer, called once at program startup, to schedule the adjustment calls (not demonstrated here).
The slight problem with capturing accurate system time (for the purpose of calculating drift) is that we need to wait for the system clock to move, and that can take up to 30 milliseconds or so (it's a long time). If the adjustment is not performed, we risk integer overflow inside the function now(), not to mention uncorrected drift from the system clock. There is built-in protection against overflow in now(), but we really don't want to trigger it synchronously in a thread which happened to call now() at the wrong moment.
Here is the code for the periodic clock adjustment; the clock drift is in r->epoch - r->stamp:
void adjust()
{
    // must make a copy
    const init* it = init_;
    init* r = (init_ == &data_[0] ? &data_[1] : &data_[0]);
    LARGE_INTEGER t1, t2;

    // wait for the system clock to move, so we have exact time to compare against
    GetSystemTimeAsFileTime((FILETIME*) (&t1.u));
    long long p = 0;
    int cpu[4] = {};
    do
    {
        Sleep(0);
        GetSystemTimeAsFileTime((FILETIME*) (&t2.u));
        __cpuid(cpu, 0); // flush CPU pipeline
        p = __rdtsc();
    } while (t2.QuadPart == t1.QuadPart);

    long long d = (p - it->start);
    // convert 10ms to 100ns periods
    d *= 100000ll;
    d /= it->freq;

    r->start = p;
    r->epoch = d + it->epoch;
    r->stamp = t2.QuadPart;

    const long long dt1 = t2.QuadPart - it->epoch;
    const long long dt2 = t2.QuadPart - it->stamp;
    const double s1 = (double) d / dt1;
    const double s2 = (double) d / dt2;

    r->freq = (long long) (it->freq * (s1 + s2 - 1) + 0.5);
    InterlockedExchangePointer((volatile PVOID*) &init_, r);

    // if you have log output, here is a good point to log calibration results
}
Lastly, two utility functions. One converts a FILETIME (including output from now()) to a SYSTEMTIME while preserving microseconds in a separate int. The other returns the frequency, so your program can use __rdtsc() directly for accurate measurements of time intervals (with nanosecond precision).
void convert(SYSTEMTIME& s, int& us, long long f)
{
    LARGE_INTEGER i;
    i.QuadPart = f;
    FileTimeToSystemTime((FILETIME*) (&i.u), &s);
    s.wMilliseconds = 0;
    LARGE_INTEGER t;
    SystemTimeToFileTime(&s, (FILETIME*) (&t.u));
    us = (int) (i.QuadPart - t.QuadPart) / 10;
}

long long frequency()
{
    // must make a copy
    const init* it = init_;
    return it->freq * 100;
}
Well, of course none of the above is more accurate than your system clock, which is unlikely to be more accurate than a few hundred milliseconds. The purpose of a precise clock (as opposed to an accurate one), as implemented above, is to provide a single measure which can be used for both:
cheap and very accurate measurement of time intervals (not wall time),
a much less accurate, but monotonic and consistent with the above, measure of wall time.
I think it does this pretty well. Example uses are logs, where one can use timestamps not only to find the time of events, but also to reason about internal program timings, latency (in microseconds), etc.
I leave the plumbing (call to initial calibration, scheduling adjustment) as an exercise for gentle readers.
You can use the Boost Date_Time library.
You can use boost::posix_time::hours, boost::posix_time::minutes, boost::posix_time::seconds, boost::posix_time::millisec, and boost::posix_time::nanosec.
http://www.boost.org/doc/libs/1_39_0/doc/html/date_time.html
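A minimal sketch, assuming Boost is available:

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    // microsec_clock reads the wall clock at microsecond resolution
    boost::posix_time::ptime now = boost::posix_time::microsec_clock::universal_time();
    std::cout << boost::posix_time::to_iso_extended_string(now) << '\n';
}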
One popular way is using the QueryPerformanceCounter() call. This is useful if you need high-precision timing, such as for measuring durations that only take on the order of microseconds. I believe this is implemented using the RDTSC machine instruction.
There might be issues though, such as the counter frequency varying with power saving, and synchronization between multiple cores. See the Wikipedia article on RDTSC for details on these issues.
Take a look at the Windows APIs GetSystemTime() / GetLocalTime() or GetSystemTimeAsFileTime().
GetSystemTimeAsFileTime() expresses time in 100-nanosecond intervals, that is, 1/10 of a microsecond. All of these functions provide the current time with millisecond accuracy.
EDIT:
Keep in mind that on most Windows systems the system time is only updated about every 1 millisecond. So even representing your time in microsecond units still requires acquiring the time with that precision in the first place.
Take a look at this: http://www.decompile.com/cpp/faq/windows_timer_api.htm
Maybe this can help:
NTSTATUS WINAPI NtQuerySystemTime(__out PLARGE_INTEGER SystemTime);
SystemTime [out] - a pointer to a LARGE_INTEGER structure that receives the system time. This is a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC).
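A minimal usage sketch (NtQuerySystemTime is an ntdll export, so resolving it at runtime is the safe route; that lookup is my approach, not from the answer):

#include <windows.h>

typedef LONG (WINAPI *NtQuerySystemTimeFn)(PLARGE_INTEGER); // NTSTATUS is a LONG

LARGE_INTEGER t = { 0 };
NtQuerySystemTimeFn pNtQuerySystemTime = (NtQuerySystemTimeFn)GetProcAddress(
    GetModuleHandleW(L"ntdll.dll"), "NtQuerySystemTime");
if (pNtQuerySystemTime)
    pNtQuerySystemTime(&t); // t.QuadPart: 100ns intervals since January 1, 1601 (UTC)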