#include <sys/time.h>   // gettimeofday()
#include <unistd.h>     // usleep()
#include <iostream>
#include <iomanip>
using namespace std;

/*
 * Returns time in s.usec
 */
float mtime()
{
struct timeval stime;
gettimeofday(&stime,0x0);
return (float)stime.tv_sec+((float)stime.tv_usec)/1000000;
}
int main(){
while(true){
cout<<setprecision(15)<<mtime()<<endl;
// shows the same time irregularly for some reason and can mess up triggers
usleep(500000);
}
}
Why does it show the same time irregularly? (compiled on ubuntu 64bit and C++)
What other standard methods are available to generate a unix timestamp with millisecond accuracy?
A float has only about 6 to 9 significant decimal digits of precision (roughly 7 in practice).
So if the integer part is e.g. 1,391,432,494 (the UNIX time as I write this, requiring 10 digits), you're already out of digits for the fractional part. Not so good, and this is why float is failing here.
Jumping to double gives you 15-16 significant digits, so it suffices as long as you can assume that the integer part is a UNIX timestamp, i.e. seconds since 1970, since that means it's not likely to need drastically more digits any time soon.
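To make the digit shortage concrete, here is a minimal standalone sketch (illustrative values, not part of the original program) that stores the same timestamp in a double and in a float; at this magnitude consecutive representable floats are 128 seconds apart, which is exactly why repeated readings print the same value:

#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    // A present-day UNIX timestamp with a fractional part (illustrative value).
    double d = 1391432494.123456;
    float  f = (float)d;    // only ~24 bits of significand survive
    cout << setprecision(17) << "double: " << d << endl;   // prints 1391432494.123456
    cout << setprecision(17) << "float : " << f << endl;   // prints 1391432448 (nearest float; steps of 128 s)
    return 0;
}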
It seems float doesn't have enough precision; I replaced it with double and all is OK now.
/*
* Returns time in s.usec
*/
double mtime()
{
struct timeval stime;
gettimeofday(&stime,0x0);
return (double)stime.tv_sec+((double)stime.tv_usec)/1000000;
}
Still don't exactly understand the reason for the random behavior...
PS: I was capturing an mtime() value and comparing it with the current time to get a duration.
NOTE: This is likely a question about flawed math, rather than a question about the Windows system call described in the question.
We are working with the GetSystemTimeAsFileTime() Win32 call and seeing what I think are strange results, so I was looking for some clarification. From MSDN on the FILETIME structure https://msdn.microsoft.com/en-us/library/windows/desktop/ms724284%28v=vs.85%29.aspx
Contains a 64-bit value representing the number of 100-nanosecond
intervals since January 1, 1601 (UTC).
According to our reading of this description, the value returned is the number of 10e-8-second intervals. Assuming this is correct, the following function should return the system time in milliseconds.
DWORD get_milli_time() {
FILETIME f;
::GetSystemTimeAsFileTime(&f);
__int64 nano = (__int64(f.dwHighDateTime) << 32LL)
+ __int64(f.dwLowDateTime);
return DWORD(nano / 10e5);
}
A simple unittest however shows this is incorrect, the below code prints "Failed":
DWORD start = get_milli_time();
::Sleep(5000); // sleep for 5-seconds
DWORD end = get_milli_time();
// test for reasonable sleep variance (4.9 - 5.1 secs)
if ((end - start) < 4900 || (end - start) > 5100) {
printf("Failed\n");
}
According to this SO post
Getting the current time (in milliseconds) from the system clock in Windows?,
the correct results can be achieved by changing our division to:
return DWORD(nano / 10e3);
If we use this value, we get the correct result, but I can't understand why.
It seems to me that to convert from 10e-8 to 10e-3, we should divide by 10e5. This would seem to be borne out by the following calculation:
printf("%f\n", log10(10e-3 / 10e-8));
Which returns 5 (as I expected).
But somehow I'm wrong -- but I'll be darned if I can see where I've gone wrong.
Your math is indeed flawed, and so is your understanding of the "working" code.
There are 10^7 100-nanosecond intervals in a second, and 10^4 in a millisecond. In floating-point notation, that is 1.0e4; 10e3 is just a weird way of writing 1e4.
The "right" (in a sense of most efficient while remaining expressive) code would be
return DWORD(hundrednano * 1.0e-4);
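For reference, here is a hedged sketch (the renamed function and exact style are mine, not from the original posts) that does the same conversion with an integer constant, 10,000 ticks of 100 ns per millisecond, which sidesteps the 10eN confusion entirely:

#include <windows.h>

DWORD get_milli_time_fixed() {
    FILETIME f;
    ::GetSystemTimeAsFileTime(&f);
    // Combine the two 32-bit halves into a single 64-bit count of 100-ns ticks.
    ULARGE_INTEGER ticks;
    ticks.LowPart  = f.dwLowDateTime;
    ticks.HighPart = f.dwHighDateTime;
    // 10,000 ticks of 100 ns each make one millisecond.
    return DWORD(ticks.QuadPart / 10000ULL);
}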
I have found a function to get milliseconds since the Mac was started:
U32 Platform::getRealMilliseconds()
{
// Duration is a S32 value.
// if negative, it is in microseconds.
// if positive, it is in milliseconds.
Duration durTime = AbsoluteToDuration(UpTime());
U32 ret;
if( durTime < 0 )
ret = durTime / -1000;
else
ret = durTime;
return ret;
}
The problem is that after ~20 days AbsoluteToDuration returns INT_MAX all the time until the Mac is rebooted.
I have tried the method below; it worked, but it looks like gettimeofday takes more time and slows the game down a bit:
timeval tim;
gettimeofday(&tim, NULL);
U32 ret = ((tim.tv_sec) * 1000 + tim.tv_usec/1000.0) + 0.5;
Is there a better way to get number of milliseconds elapsed since some epoch (preferably since the app started)?
Thanks!
Your real problem is that you are trying to fit an uptime-in-milliseconds value into a 32-bit integer. If you do that, your value will always wrap back to zero (or saturate) in 49 days or less, no matter how you obtain the value.
One possible solution would be to track time values with a 64-bit integer instead; that way the day of reckoning gets postponed for a few hundred million years, so you don't have to worry about the problem. Here's a MacOS/X implementation of that:
uint64_t GetTimeInMillisecondsSinceBoot()
{
return UnsignedWideToUInt64(AbsoluteToNanoseconds(UpTime()))/1000000;
}
... or if you don't want to return a 64-bit time value, the next-best thing would be to record the current time-in-milliseconds value when your program starts, and then always subtract that value from the values you return. That way things won't break until your own program has been running for at least 49 days, which I suppose is unlikely for a game.
uint32_t GetTimeInMillisecondsSinceProgramStart()
{
static uint64_t _firstTimeMillis = GetTimeInMillisecondsSinceBoot();
uint64_t nowMillis = GetTimeInMillisecondsSinceBoot();
return (uint32_t) (nowMillis-_firstTimeMillis);
}
My preferred method is mach_absolute_time; see this tech note. I use the second method described there, i.e. mach_absolute_time to get time stamps and mach_timebase_info to get the constants needed to convert the difference between time stamps into an actual time value (with nanosecond resolution).
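As a rough illustration of that approach (a sketch under the assumption that the standard Mach headers are available, not the answerer's exact code):

#include <mach/mach_time.h>
#include <stdint.h>

// Milliseconds elapsed since boot, derived from mach_absolute_time().
uint64_t millisSinceBoot()
{
    static mach_timebase_info_data_t tb = { 0, 0 };
    if (tb.denom == 0)
        mach_timebase_info(&tb);                    // numer/denom convert ticks to nanoseconds
    uint64_t nanos = mach_absolute_time() * tb.numer / tb.denom;
    return nanos / 1000000;                         // nanoseconds -> milliseconds
}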
I have the following code:
typedef __int64 BIG_INT;
typedef double CUT_TYPE;
#define CUT_IT(amount, percent) (amount * percent)
int main()
{
CUT_TYPE cut_percent = 1;
BIG_INT bintOriginal = 0x1FFFFFFFFFFFFFF;
BIG_INT bintAfter = CUT_IT(bintOriginal, cut_percent);
}
bintAfter's value after the calculation is 144115188075855872 instead of 144115188075855871 (note the "2" at the end, instead of "1").
On smaller values such as 0xFFFFFFFFFFFFF I get the correct result.
How do I get it to work, on 32bit app? What do I have to take in account?
My aim is to cut a certain percentage of a very big number.
I use VC++ 2008, Vista.
double has a 52-bit mantissa (53 bits of precision with the implicit leading bit); you're losing precision when you try to load a 57-bit value into it.
Floating point calculations aren't guaranteed to be perfectly accurate, and you've defined CUT_TYPE as double.
See this answer for more info: Dealing with accuracy problems in floating-point numbers
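Here is a small standalone sketch (illustrative, not from the question) that reproduces the rounding: 0x1FFFFFFFFFFFFFF needs 57 significant bits, so the round trip through a 53-bit double lands on the neighbouring value 2^57:

#include <iostream>

int main()
{
    long long big  = 0x1FFFFFFFFFFFFFFLL;    // 2^57 - 1, needs 57 significant bits
    double    d    = (double)big;            // only 53 bits of significand survive
    long long back = (long long)d;           // rounds up to 2^57
    std::cout << "original:   " << big  << std::endl;   // 144115188075855871
    std::cout << "via double: " << back << std::endl;   // 144115188075855872
    return 0;
}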
timeGetTime seems to be quite good to query for system time. However, its return value is 32-bit only, so it wraps around every 49 days approx.
It's not too hard to detect the rollover in calling code, but it adds some complexity, and (worse) requires keeping a state.
Is there some replacement for timeGetTime that would not have this wrap-around problem (probably by returning a 64-bit value), and have roughly the same precision and cost?
Unless you need to time an event that is over 49 days, you can SAFELY ignore the wrap-around. Just always subtract the previous timeGetTime() from the current timeGetTime() and you will always obtain a delta measured time that is accurate, even across wrap-around -- provided that you are timing events whose total duration is under 49 days. This all works due to how unsigned modular math works inside the computer.
// this code ALWAYS works, even with wrap-around!
DWORD dwStart = timeGetTime();
// provided the event timed here has a duration of less than 49 days
DWORD dwDuration = timeGetTime()-dwStart;
TIP: look into timeBeginPeriod(1) to increase the accuracy of timeGetTime().
BUT... if you want a 64-bit version of timeGetTime, here it is:
__int64 timeGetTime64() {
static __int64 time64=0;
// warning: if multiple threads call this function, protect with a critical section!
return (time64 += (timeGetTime()-(DWORD)time64));
}
Please note that if this function is not called at least once every 49 days, it will fail to properly detect a wrap-around.
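If you do need to call it from several threads, a hedged variant of the same accumulator guarded by a std::mutex (my addition, assuming a C++11 compiler; not part of the original answer) could look like this:

#include <windows.h>
#include <mmsystem.h>   // timeGetTime(); link against winmm.lib
#include <mutex>

__int64 timeGetTime64_threadsafe() {
    static std::mutex m;
    static __int64 time64 = 0;
    std::lock_guard<std::mutex> lock(m);
    // Same trick as above: fold the 32-bit reading into a 64-bit accumulator.
    time64 += (timeGetTime() - (DWORD)time64);
    return time64;
}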
What platform?
You could use GetTickCount64() if you're running on Vista or later, or synthesise your own GetTickCount64() from GetTickCount() and a timer...
I deal with the rollover issue in GetTickCount() and synthesising a GetTickCount64() on platforms that don't support it here on my blog about testing non-trivial code: http://www.lenholgate.com/blog/2008/04/practical-testing-17---a-whole-new-approach.html
Nope, tracking roll-over requires state. It can be as simple as just incrementing your own 64-bit counter on each callback.
It is pretty unusual to want to track time periods to a resolution as low as 1 millisecond for up to 49 days; you'd have to worry whether the accuracy is even still there after such a long period. The next step is to use the system clock: GetTickCount(64) and GetSystemTimeAsFileTime have a resolution of 15.625 milliseconds and are kept accurate by a time server.
Have a look at GetSystemTimeAsFileTime(). It fills a FILETIME struct that contains a "64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC)"
How are you trying to use it? I frequently use the Win32 equivalent when checking for durations that I know will be under 49 days. For example the following code will always work.
DWORD start = timeGetTime();
DoSomthingThatTakesLessThen49Days();
DWORD duration = timeGetTime() - start;
Even if timeGetTime rolled over while DoSomthingThatTakesLessThen49Days was running, duration will still be correct.
Note the following code could fail on rollover.
DWORD start = timeGetTime();
DoSomthingThatTakesLessThen49Days();
if (start + 5000 < timeGetTime())
{
}
but it can easily be rewritten to work as follows:
DWORD start = timeGetTime();
DoSomthingThatTakesLessThen49Days();
if (timeGetTime() - start < 5000)
{
}
Assuming you can guarantee that this function will called at least once every 49 days, something like this will work:
// Returns current time in milliseconds
uint64_t timeGetTime64()
{
static uint32_t _prevVal = 0;
static uint64_t _wrapOffset = 0;
uint32_t newVal = (uint32_t) timeGetTime();
if (newVal < _prevVal) _wrapOffset += (((uint64_t)1)<<32);
_prevVal = newVal;
return _wrapOffset+newVal;
}
Note that due to the use of static variables, this function isn't multithread-safe, so if you plan on calling it from multiple threads you should serialize it via a critical section or mutex or similar.
I'm not sure if this fully meets your needs, but
std::chrono::system_clock
might be along the lines of what you're looking for.
http://en.cppreference.com/w/cpp/chrono/system_clock
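For example, a sketch assuming a C++11 (or later) compiler; note that system_clock's epoch is only guaranteed to be the Unix epoch from C++20 onward, though it is on common implementations:

#include <chrono>
#include <cstdint>
#include <iostream>

int main()
{
    using namespace std::chrono;
    // Milliseconds since system_clock's epoch as a 64-bit value (no 49-day wrap).
    uint64_t ms = duration_cast<milliseconds>(
                      system_clock::now().time_since_epoch()).count();
    std::cout << ms << std::endl;
    return 0;
}

For measuring durations rather than wall-clock time, std::chrono::steady_clock avoids jumps when the system clock is adjusted.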
You could use the RDTSC intrinsic. To get time in milliseconds you can first compute a conversion coefficient:
double get_rdtsc_coeff() {
static double coeff = 0.0;
if ( coeff < 1.0 ) { // count it only once
unsigned __int64 t00 = __rdtsc();
Sleep(1000);
unsigned __int64 t01 = __rdtsc();
coeff = (t01-t00)/1000.0;
}
return coeff; // transformation coefficient
}
Now you can get the number of milliseconds since the last reset:
__int64 get_ms_from_start() {
return static_cast<__int64>(__rdtsc()/get_rdtsc_coeff());
}
If your system uses SpeedStep or similar technologies, you could use the QueryPerformanceCounter/QueryPerformanceFrequency functions instead. Windows guarantees that the frequency cannot change while the system is running.
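For comparison, a minimal sketch of that QueryPerformanceCounter/QueryPerformanceFrequency route (the function name and structure are mine, offered as an illustration rather than a drop-in):

#include <windows.h>

// Milliseconds elapsed since the first call to this function.
// Note: the lazy initialization is not thread-safe; make the first call before other threads use it.
__int64 qpc_ms_since_start()
{
    static LARGE_INTEGER freq  = {};
    static LARGE_INTEGER start = {};
    if (freq.QuadPart == 0) {
        QueryPerformanceFrequency(&freq);   // counts per second, fixed while the system runs
        QueryPerformanceCounter(&start);
    }
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return (now.QuadPart - start.QuadPart) * 1000 / freq.QuadPart;
}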
I have a problem in using time.
I want to use and get microseconds on windows using C++.
I can't find the way.
The "canonical" answer was given by unwind :
One popular way is using the QueryPerformanceCounter() call.
There are, however, a few problems with this method:
1. It's intended for measurement of time intervals, not time. This means you have to write code to establish the "epoch time" from which you will measure precise intervals. This is called calibration.
2. As you calibrate your clock, you also need to periodically adjust it so it never gets too far out of sync with your system clock (this deviation is called drift).
3. QueryPerformanceCounter is not implemented in user space; this means a context switch is needed to call the kernel side of the implementation, and that is relatively expensive (around 0.7 microseconds). This seems to be required to support legacy hardware.
Not all is lost, though. Points 1 and 2 are something you can do with a bit of coding, and 3 can be replaced with a direct call to RDTSC (available in newer versions of Visual C++ via the __rdtsc() intrinsic), as long as you know the accurate CPU clock frequency. Although on older CPUs such a call would be susceptible to changes in the CPU's internal clock speed, on all newer Intel and AMD CPUs it is guaranteed to give fairly accurate results and won't be affected by changes in CPU clock (e.g. power-saving features).
Let's get started with 1. Here is the data structure to hold calibration data:
struct init
{
long long stamp; // last adjustment time
long long epoch; // last sync time as FILETIME
long long start; // counter ticks to match epoch
long long freq; // counter frequency (ticks per 10ms)
void sync(int sleep);
};
init data_[2] = {};
const init* volatile init_ = &data_[0];
Here is the code for the initial calibration; it has to be given time (in milliseconds) to wait for the clock to move; I've found that 500 milliseconds gives pretty good results (the shorter the time, the less accurate the calibration). For the purpose of calibration we are going to use QueryPerformanceCounter() etc. You only need to call it for data_[0], since data_[1] will be updated by the periodic clock adjustment (below).
void init::sync(int sleep)
{
LARGE_INTEGER t1, t2, p1, p2, r1, r2, f;
int cpu[4] = {};
// prepare for rdtsc calibration - affinity and priority
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
SetThreadAffinityMask(GetCurrentThread(), 2);
Sleep(10);
// frequency for time measurement during calibration
QueryPerformanceFrequency(&f);
// for explanation why RDTSC is safe on modern CPUs, look for "Constant TSC" and "Invariant TSC" in
// Intel(R) 64 and IA-32 Architectures Software Developer’s Manual (document 253668.pdf)
__cpuid(cpu, 0); // flush CPU pipeline
r1.QuadPart = __rdtsc();
__cpuid(cpu, 0);
QueryPerformanceCounter(&p1);
// sleep some time, doesn't matter it's not accurate.
Sleep(sleep);
// wait for the system clock to move, so we have exact epoch
GetSystemTimeAsFileTime((FILETIME*) (&t1.u));
do
{
Sleep(0);
GetSystemTimeAsFileTime((FILETIME*) (&t2.u));
__cpuid(cpu, 0); // flush CPU pipeline
r2.QuadPart = __rdtsc();
} while(t2.QuadPart == t1.QuadPart);
// measure how much time has passed exactly, using more expensive QPC
__cpuid(cpu, 0);
QueryPerformanceCounter(&p2);
stamp = t2.QuadPart;
epoch = t2.QuadPart;
start = r2.QuadPart;
// calculate counter ticks per 10ms
freq = f.QuadPart * (r2.QuadPart-r1.QuadPart) / 100 / (p2.QuadPart-p1.QuadPart);
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_NORMAL);
SetThreadAffinityMask(GetCurrentThread(), 0xFF);
}
With good calibration data you can calculate the exact time from cheap RDTSC (I measured the call and calculation to be ~25 nanoseconds on my machine). There are three things to note:
the return type is binary compatible with the FILETIME structure and is precise to 100 ns, unlike GetSystemTimeAsFileTime (which increments in intervals of 10-30 ms or so, or 1 millisecond at best).
in order to avoid expensive integer-to-double-to-integer conversions, the whole calculation is performed in 64-bit integers. Even though these can hold huge numbers, there is a real risk of integer overflow, so start must be brought forward periodically to avoid it. This is done during clock adjustment.
we are making a copy of the calibration data, because it might have been updated during our call by the clock adjustment running in another thread.
Here is the code to read current time with high precision. Return value is binary compatible with FILETIME, i.e. number of 100-nanosecond intervals since Jan 1, 1601.
long long now()
{
// must make a copy
const init* it = init_;
// __cpuid(cpu, 0) - no need to flush CPU pipeline here
const long long p = __rdtsc();
// time passed from epoch in counter ticks
long long d = (p - it->start);
if (d > 0x80000000000ll)
{
// closing to integer overflow, must adjust now
adjust();
}
// convert 10ms to 100ns periods
d *= 100000ll;
d /= it->freq;
// and add to epoch, so we have proper FILETIME
d += it->epoch;
return d;
}
For clock adjustment, we need to capture the exact time (as provided by the system clock) and compare it against our clock; this gives us the drift value. Next we use a simple formula to calculate the "adjusted" CPU frequency, to make our clock meet the system clock at the time of the next adjustment. Thus it is important that adjustments are called at regular intervals; I've found that it works well when called at 15-minute intervals. I use CreateTimerQueueTimer, called once at program startup, to schedule the adjustment calls (not demonstrated here).
The slight problem with capturing accurate system time (for the purpose of calculating drift) is that we need to wait for the system clock to move, and that can take up to 30 milliseconds or so (a long time). If the adjustment is not performed, we risk integer overflow inside now(), not to mention uncorrected drift from the system clock. There is built-in protection against overflow in now(), but we really don't want to trigger it synchronously in a thread that happened to call now() at the wrong moment.
Here is the code for periodic clock adjustment, clock drift is in r->epoch - r->stamp:
void adjust()
{
// must make a copy
const init* it = init_;
init* r = (init_ == &data_[0] ? &data_[1] : &data_[0]);
LARGE_INTEGER t1, t2;
// wait for the system clock to move, so we have exact time to compare against
GetSystemTimeAsFileTime((FILETIME*) (&t1.u));
long long p = 0;
int cpu[4] = {};
do
{
Sleep(0);
GetSystemTimeAsFileTime((FILETIME*) (&t2.u));
__cpuid(cpu, 0); // flush CPU pipeline
p = __rdtsc();
} while (t2.QuadPart == t1.QuadPart);
long long d = (p - it->start);
// convert 10ms to 100ns periods
d *= 100000ll;
d /= it->freq;
r->start = p;
r->epoch = d + it->epoch;
r->stamp = t2.QuadPart;
const long long dt1 = t2.QuadPart - it->epoch;
const long long dt2 = t2.QuadPart - it->stamp;
const double s1 = (double) d / dt1;
const double s2 = (double) d / dt2;
r->freq = (long long) (it->freq * (s1 + s2 - 1) + 0.5);
InterlockedExchangePointer((volatile PVOID*) &init_, r);
// if you have log output, here is good point to log calibration results
}
Lastly, two utility functions. One converts a FILETIME (including the output of now()) to SYSTEMTIME while preserving the microseconds in a separate int. The other returns the frequency, so your program can use __rdtsc() directly for accurate measurement of time intervals (with nanosecond precision).
void convert(SYSTEMTIME& s, int &us, long long f)
{
LARGE_INTEGER i;
i.QuadPart = f;
FileTimeToSystemTime((FILETIME*) (&i.u), &s);
s.wMilliseconds = 0;
LARGE_INTEGER t;
SystemTimeToFileTime(&s, (FILETIME*) (&t.u));
us = (int) (i.QuadPart - t.QuadPart)/10;
}
long long frequency()
{
// must make a copy
const init* it = init_;
return it->freq * 100;
}
Well, of course, none of the above is more accurate than your system clock, which is unlikely to be more accurate than a few hundred milliseconds. The purpose of a precise clock (as opposed to an accurate one), as implemented above, is to provide a single measure that can be used for both:
cheap and very accurate measurement of time intervals (not wall time),
much less accurate, but monotonic and consistent with the above, measure of wall time
I think it does this pretty well. Example uses are logs, where one can use timestamps not only to find the time of events, but also to reason about internal program timings, latency (in microseconds), etc.
I leave the plumbing (call to initial calibration, scheduling adjustment) as an exercise for gentle readers.
You can use the Boost date/time library.
You can use boost::posix_time::hours, boost::posix_time::minutes,
boost::posix_time::seconds, boost::posix_time::millisec, boost::posix_time::nanosec
http://www.boost.org/doc/libs/1_39_0/doc/html/date_time.html
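As an illustration only (assuming Boost is available; the calls below are the standard boost::posix_time facilities), getting a microsecond-resolution timestamp might look like this:

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using namespace boost::posix_time;
    // microsec_clock offers the finest resolution the platform provides.
    ptime now = microsec_clock::universal_time();
    time_duration since_epoch = now - ptime(boost::gregorian::date(1970, 1, 1));
    std::cout << since_epoch.total_microseconds() << std::endl;
    return 0;
}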
One popular way is using the QueryPerformanceCounter() call. This is useful if you need high-precision timing, such as for measuring durations that only take on the order of microseconds. I believe this is implemented using the RDTSC machine instruction.
There might be issues though, such as the counter frequency varying with power-saving, and synchronization between multiple cores. See the Wikipedia article on RDTSC for details on these issues.
Take a look at the Windows APIs GetSystemTime() / GetLocalTime() or GetSystemTimeAsFileTime().
GetSystemTimeAsFileTime() expresses time in 100-nanosecond intervals, that is, 1/10 of a microsecond. All of these functions provide the current time to within millisecond accuracy.
EDIT:
Keep in mind that on most Windows systems the system time is only updated about every millisecond (or even more coarsely). So even though the value is represented with microsecond precision, it is not actually acquired with that precision.
Take a look at this: http://www.decompile.com/cpp/faq/windows_timer_api.htm
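To see that update granularity for yourself, here is a small standalone probe (a sketch, not from the original answer) that spins until GetSystemTimeAsFileTime reports a new value and prints the size of the step:

#include <windows.h>
#include <iostream>

int main()
{
    FILETIME a, b;
    GetSystemTimeAsFileTime(&a);
    // Busy-wait until the reported system time changes.
    do {
        GetSystemTimeAsFileTime(&b);
    } while (b.dwLowDateTime == a.dwLowDateTime && b.dwHighDateTime == a.dwHighDateTime);
    ULARGE_INTEGER ua, ub;
    ua.LowPart = a.dwLowDateTime;  ua.HighPart = a.dwHighDateTime;
    ub.LowPart = b.dwLowDateTime;  ub.HighPart = b.dwHighDateTime;
    // The difference is in 100-ns ticks; divide by 10 to get microseconds.
    std::cout << "system time advanced by "
              << (ub.QuadPart - ua.QuadPart) / 10 << " microseconds" << std::endl;
    return 0;
}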
Maybe this can help:
NTSTATUS WINAPI NtQuerySystemTime(__out PLARGE_INTEGER SystemTime);
SystemTime [out] - a pointer to a LARGE_INTEGER structure that receives the system time. This is a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC).