C++: Get system time to microsecond accuracy on Windows? [duplicate]

Is there a simple way I can get the system time on a Windows machine, down to microsecond accuracy?

Look at GetSystemTimeAsFileTime.
It gives you the time in 100-nanosecond (0.1 microsecond) units.
Note that its epoch is different from the POSIX epoch: FILETIME counts from January 1, 1601 rather than January 1, 1970.
So to get POSIX time in microseconds you need:
FILETIME ft;
GetSystemTimeAsFileTime(&ft);
// Combine the two 32-bit halves into a 64-bit count of
// 100-nanosecond intervals since January 1, 1601.
unsigned long long tt = ft.dwHighDateTime;
tt <<= 32;
tt |= ft.dwLowDateTime;
tt /= 10;                    // 100-ns units -> microseconds
tt -= 11644473600000000ULL;  // shift epoch from 1601 to 1970
After this conversion, tt / 1000000 equals time(0).
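Wrapped up as a reusable helper, the same conversion looks like this (a minimal sketch; the function name is mine, not a Windows API):

#include <windows.h>

// Sketch: current POSIX time in microseconds, built from the snippet above.
unsigned long long posixTimeMicros()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    unsigned long long tt = ((unsigned long long)ft.dwHighDateTime << 32)
                          | ft.dwLowDateTime;
    return tt / 10 - 11644473600000000ULL;
}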

Like this:

unsigned __int64 freq;
QueryPerformanceFrequency((LARGE_INTEGER*)&freq);

unsigned __int64 startTime;
QueryPerformanceCounter((LARGE_INTEGER*)&startTime);
// do something...
unsigned __int64 endTime;
QueryPerformanceCounter((LARGE_INTEGER*)&endTime);

// The counter difference divided by the frequency (ticks per second)
// gives seconds; multiply by 1000 for milliseconds.
double timeDifferenceInMilliseconds = (endTime - startTime) * 1000.0 / freq;
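Since the performance counter frequency is fixed at boot, it can be queried once and cached; a small helper along these lines (a sketch, the name is illustrative):

#include <windows.h>

// Sketch: elapsed milliseconds between two QueryPerformanceCounter readings.
double elapsedMilliseconds(unsigned __int64 start, unsigned __int64 end)
{
    static unsigned __int64 freq = 0;  // the frequency never changes after boot
    if (freq == 0)
        QueryPerformanceFrequency((LARGE_INTEGER*)&freq);
    return (end - start) * 1000.0 / freq;
}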

What we really need is a high-resolution GetTickCount(). As far as I know, this doesn't really exist.
If you're willing to use a hackish way to solve this (one that would probably only work on some versions of Windows, like XP), look at ReactOS. Then try this code:
long long GetTickCount64()
{
    // 0x7FFE0000 is the fixed user-mode mapping of KUSER_SHARED_DATA;
    // the DWORD at offset 0 is the low tick count, the one at offset 4
    // its multiplier.
    return (long long)
        ((((unsigned long long)*(unsigned long int*)0x7FFE0000
         * (unsigned long long)*(unsigned long int*)0x7FFE0004)
         * (unsigned long long)10000) >> 0x18);
}
Tweaking it might give you what you need in some versions of Windows.

Related

GetProcessTimes on Windows is imprecise

I'm trying to measure the CPU and wall time of my program.
The code only needs to run on Windows, so it's alright to use platform-specific functions.
For wall time I use QueryPerformanceCounter(), and it is precise.
When I use GetProcessTimes(), I only get 15.625-millisecond precision, although MSDN says the returned CPU times have a resolution of 100 nanoseconds.
Here is the code I am using:
Here is the code I am using:
void getCPUtime(unsigned long long *pUser, unsigned long long *pKernel) {
    FILETIME user, kernel, exit, start;
    ULARGE_INTEGER userCPU = {}, kernelCPU = {};  // zero-initialized in case the call fails
    if (::GetProcessTimes(::GetCurrentProcess(), &start, &exit, &kernel, &user) != 0) {
        userCPU.LowPart    = user.dwLowDateTime;
        userCPU.HighPart   = user.dwHighDateTime;
        kernelCPU.LowPart  = kernel.dwLowDateTime;
        kernelCPU.HighPart = kernel.dwHighDateTime;
    }
    *pUser   = (unsigned long long)userCPU.QuadPart;
    *pKernel = (unsigned long long)kernelCPU.QuadPart;
}
And I am calling it from:

void someFunction() {
    unsigned long long userStartCPU, userEndCPU, kernelStartCPU, kernelEndCPU;
    double userCPUTime, kernelCPUTime;
    getCPUtime(&userStartCPU, &kernelStartCPU);
    // Do stuff which takes longer than a millisecond
    getCPUtime(&userEndCPU, &kernelEndCPU);
    // FILETIME values are in 100-ns units, so divide by 10,000 for milliseconds.
    userCPUTime   = (userEndCPU - userStartCPU) / 10000.0;
    kernelCPUTime = (kernelEndCPU - kernelStartCPU) / 10000.0;
}
Does anyone know why this is happening, or know another way to precisely measure CPU time on Windows?
MSDN has a page that outlines how to acquire high-resolution time stamps.
I would also recommend looking at Google Benchmark. Looking at its Windows-specific code, you might need to use double instead of integers, as in its MakeTime function.
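For reference, the pattern from that MSDN page boils down to something like this (a sketch; the function name is mine):

#include <windows.h>

// Sketch of the MSDN high-resolution timestamp pattern: elapsed microseconds.
long long elapsedMicros(LARGE_INTEGER start, LARGE_INTEGER end)
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);
    // MSDN's guidance: convert to microseconds *before* dividing by the
    // frequency, so precision is not lost.
    return (end.QuadPart - start.QuadPart) * 1000000 / freq.QuadPart;
}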

How to get today's UTC Midnight time in Milliseconds in C++? [duplicate]

For Windows C++, I am trying something like this:
unsigned long long int Device::getCurrentUtcTimeinMiliSecond() {
    time_t ltime;
    time(&ltime);
    std::tm* newtime = gmtime(&ltime);
    newtime->tm_hour = 0;
    newtime->tm_min = 0;
    newtime->tm_sec = 0;
    time_t timex = mktime(newtime);  // note: mktime treats the tm as *local* time
    // Want to convert tm* to total milliseconds since midnight, January 1, 1970
    return (long long)timex * 1000;
}
Is there another way, or am I going in the right direction? If so, how do I convert the tm* to total milliseconds since midnight, January 1, 1970?
Or can someone suggest a simpler way of doing it?
You can use clock_gettime(). It is not available by default under Windows, but there is an example implementation on Stack Overflow.
Once you have clock_gettime(), you can do the following:
#define MS_PER_SEC 1000
#define NS_PER_MS  1000000

timespec ts;
clock_gettime(CLOCK_REALTIME, &ts);
unsigned long long msecs = ((unsigned long long)ts.tv_sec) * MS_PER_SEC
                         + ts.tv_nsec / NS_PER_MS;
return msecs;
You could clean that up a bit, but once you have clock_gettime() it should be fairly easy.
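If what you actually want is today's UTC midnight rather than the current time, one way (a sketch, ignoring leap seconds) is to truncate msecs to a whole number of days; UTC days carry no DST offsets, so a plain modulo works:

unsigned long long msecs_per_day = 86400000ULL;  // 24 * 60 * 60 * 1000
unsigned long long midnight_msecs = msecs - (msecs % msecs_per_day);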

Get System Time In MilliSeconds as an int/double

I am new to C++ and I just can't get this to work at all. I am trying to get the current system time in milliseconds and do something with it, but neither of my attempts works.
Qt

QDateTime qt = new QDateTime();
int x = qt.currentDateTimeUtc();
if (x % 5 == 0) {
    //something
}

C++

double sysTime = time(0);
if (sysTime % 5.00 == 0.00) {
}
I get an "invalid operands of type double to binary operator" error. I have no idea why. Can anyone point me in the right direction?
For Qt, try using the function QDateTime::toMSecsSinceEpoch():
http://doc.qt.io/qt-5/qdatetime.html#toMSecsSinceEpoch
This returns a qint64: http://doc.qt.io/qt-5/qtglobal.html#qint64-typedef
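For example (a sketch; since qint64 is an integer type, operator% works on it, unlike on double):

#include <QDateTime>

qint64 ms = QDateTime::currentDateTimeUtc().toMSecsSinceEpoch();
if (ms % 5 == 0) {
    // something
}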
If you're trying to get the Unix timestamp in milliseconds in C, you can try this code:

#include <time.h>
...
time_t seconds;
time(&seconds);
unsigned long long millis = (unsigned long long)seconds * 1000;
Though please note that while this is multiplied by 1000 and looks like milliseconds, the accuracy is still that of seconds. Judging by your x % 5 code, that might be enough if you're trying to do something every 5 seconds, so the following should suffice:
time_t seconds; time(&seconds);
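Putting it together with your modulo check (a sketch; time_t is an integer type, so % is legal here, which is exactly what the double version was missing):

#include <time.h>

time_t seconds;
time(&seconds);
if (seconds % 5 == 0) {
    /* runs when the current second is a multiple of 5 */
}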

c++ \ Convert FILETIME to seconds

How can I convert a FILETIME to seconds? I need to compare two FILETIME objects.
I found this, but it seems like it doesn't do the trick:
ULARGE_INTEGER ull;
ull.LowPart = lastWriteTimeLow1;
ull.HighPart = lastWriteTimeHigh1;
time_t lastModified = ull.QuadPart / 10000000ULL - 11644473600ULL;

ULARGE_INTEGER xxx;
xxx.LowPart = currentTimeLow1;
xxx.HighPart = currentTimeHigh1;
time_t current = xxx.QuadPart / 10000000ULL - 11644473600ULL;

unsigned long SecondsInterval = current - lastModified;
if (SecondsInterval > RequiredSecondsFromNow)
    return true;
return false;
I compared two FILETIMEs that should have differed by about 10 seconds, and it gave me ~7000.
Is this a good way to extract the number of seconds?
The code you give seems correct: it converts a FILETIME to a Unix timestamp (obviously losing precision, as FILETIME has a theoretical resolution of 100 nanoseconds). Are you sure that the FILETIMEs you compare really differ by only 10 seconds?
I actually use very similar code in some software:

double time_d()
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    __int64* val = (__int64*)&ft;
    return static_cast<double>(*val) / 10000000.0 - 11644473600.0; // epoch is Jan. 1, 1601: 134774 days before Jan. 1, 1970
}
This returns a UNIX-like timestamp (in seconds since 1970) with sub-second resolution.
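Example use (a sketch):

double t0 = time_d();
/* ... work to be timed ... */
double elapsed_seconds = time_d() - t0;  // sub-second resolution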
For the sake of comparison,

double toSeconds(const FILETIME& t)
{
    return LARGE_INTEGER{t.dwLowDateTime, (long)t.dwHighDateTime}.QuadPart * 1e-7;
}

is the simplest (note that it keeps the 1601 epoch: it only converts the 100-ns units to seconds).
You can use this macro to convert a FILETIME value (a count of 100-ns intervals since 1601) to seconds in the UNIX epoch:

#define windows_time_to_unix_epoch(x) (((x) - 116444736000000000LL) / 10000000LL)
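For instance (a sketch, reusing the ULARGE_INTEGER packing shown in the question):

FILETIME ft;
GetSystemTimeAsFileTime(&ft);
ULARGE_INTEGER ull;
ull.LowPart  = ft.dwLowDateTime;
ull.HighPart = ft.dwHighDateTime;
long long unix_seconds = windows_time_to_unix_epoch(ull.QuadPart);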

elapsed time with unsigned tickCount / Wrapping

I have my own GetTickCount() function returning an unsigned int (the count rolls over to zero after 0xFFFFFFFF).
I can measure an elapsed time with:
unsigned int elapsed;
unsigned int start = GetTickCount();
LongOperation();
unsigned int stop = GetTickCount();
if (stop >= start)
    elapsed = stop - start;
else
    elapsed = (INT_MAX - start) + stop;
Is this the same as if I cast to signed (the time span I measure is always less than what can be represented in a signed integer; I think that's about 24 days)?
int start = (int)GetTickCount();
LongOperation();
int elapsedTime = (int)GetTickCount() - start;
If I look at the .NET Environment.TickCount property:
TickCount will increment from zero to Int32.MaxValue for approximately 24.9 days, then jump to Int32.MinValue, which is a negative number, then increment back to zero during the next 24.9 days.
So when I cast my GetTickCount() function to a signed integer, should I get the behaviour of .NET (wrapping occurs at 0x7FFFFFFF -> 0x80000000)?
With this it should be possible to measure the elapsed time as follows (seen in another post):
int start = Environment.TickCount;
DoLongRunningOperation();
int elapsedTime = Environment.TickCount - start;
The prototype for GetTickCount() in C++ on Windows is:

DWORD WINAPI GetTickCount(void);

So I would code it like this (similar to the other answers):

DWORD start = GetTickCount();
dosomething();
DWORD elapsed = GetTickCount() - start;

This will measure elapsed times up to the maximum number a DWORD can represent.
As others have said, with unsigned arithmetic, you don't need to worry about the counter wrapping around - try it yourself...
Also check GetTickCount64() and QueryPerformanceCounter()/QueryPerformanceFrequency(). GetTickCount64() will allow you to measure longer intervals, but it is not supported on all versions of Windows, while QueryPerformanceCounter() allows you to measure to much higher resolution and accuracy. For example, on some Windows versions, GetTickCount() may only be accurate to about 18ms while QueryPerformanceCounter() will be better than 1us.
I'm not sure GetTickCount() is the preferred function for your problem.
Can't you just use QueryPerformanceFrequency()? There's a nice example at
http://msdn.microsoft.com/en-us/library/ms644904%28v=VS.85%29.aspx
In C++, if you stick with unsigned, the code will work:
unsigned int start = gettickcount();
DoLongRunningOperation();
unsigned int elapsedTime = static_cast<unsigned int>(gettickcount()) - start;
The reason you want to stick with unsigned is that unsigned arithmetic is defined to wrap (it is performed modulo 2^32 for a 32-bit unsigned int), which is exactly the behaviour you want in this case.
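A quick illustration of why the wrap-around takes care of itself (a sketch with made-up tick values):

// Arithmetic on unsigned int is defined modulo 2^32, so the subtraction
// yields the true elapsed ticks even when the counter wrapped in between.
unsigned int start = 0xFFFFFFF0u;    // 16 ticks before the wrap
unsigned int stop  = 0x00000010u;    // 16 ticks after the wrap
unsigned int elapsed = stop - start; // == 0x20 (32 ticks), as expected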