The Qt documentation for QTime::currentTime() says:
Note that the accuracy depends on the accuracy of the underlying
operating system; not all systems provide 1-millisecond accuracy.
But is there any way to get this time with millisecond accuracy on Windows 7?
You can use the QDateTime class and convert the current time to the appropriate format:
QDateTime::currentDateTime().toString("yyyy/MM/dd hh:mm:ss,zzz")
where 'zzz' corresponds to millisecond precision.
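For instance, here is a minimal sketch (assuming a Qt console build) that prints the current local time with milliseconds using the same format string:

#include <QDateTime>
#include <QDebug>

int main()
{
    // 'zzz' yields the milliseconds portion of the current time
    qDebug() << QDateTime::currentDateTime().toString("yyyy/MM/dd hh:mm:ss,zzz");
    return 0;
}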
You can use the functionality provided by the time.h header in C/C++:
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start, end;
    double cpu_time_used;

    start = clock();
    /* Do the work. */
    end = clock();

    /* Note: clock() measures CPU time consumed, not wall-clock time. */
    cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC;
    printf("CPU time used: %f seconds\n", cpu_time_used);
    return 0;
}
Timer resolution may vary across platforms and readings may not be accurate. If you need high-resolution, accurate timestamps on Windows 7, it provides the QPC API:
https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408%28v=vs.85%29.aspx
GetSystemTimePreciseAsFileTime is claimed to provide system time with <1us resolution.
But that's only about accurate timestamps. If you need to actually do something with 1 ms latency (e.g. handle an event), you need an RTOS, not a desktop clunker.
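As an illustration, a minimal sketch of timing an interval with the QPC API (QueryPerformanceFrequency and QueryPerformanceCounter are the documented Win32 calls; the work being timed is a placeholder):

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // counts per second, fixed at boot
    QueryPerformanceCounter(&t0);
    /* do the work to be timed */
    QueryPerformanceCounter(&t1);
    double elapsed_ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
    printf("elapsed: %.3f ms\n", elapsed_ms);
    return 0;
}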
One common way is to scale up whatever you are doing and run it 10-100 times in a row; you then get a more accurate reading of how long it takes by dividing the measured time by 10-100.
Getting millisecond-precise readings of a single run is of limited use anyway, because your process doesn't get 100% of the CPU time: whenever the OS schedules another process in the middle of your work, your readings will vary by much more than 1 millisecond.
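A rough sketch of that averaging idea with std::chrono (do_work() here is just a stand-in for whatever is being measured):

#include <chrono>
#include <iostream>

// Stand-in workload; replace with the operation being measured.
static void do_work()
{
    volatile long x = 0;
    for (long i = 0; i < 100000; ++i) x += i;
}

int main()
{
    const int reps = 100;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < reps; ++i)
        do_work();
    auto total = std::chrono::steady_clock::now() - start;
    auto per_call_ns =
        std::chrono::duration_cast<std::chrono::nanoseconds>(total).count() / reps;
    std::cout << "average per call: " << per_call_ns << " ns\n";
    return 0;
}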
Related
I am trying to measure the time taken by processes in a C++ program on Linux and VxWorks. I have noticed that clock_gettime(CLOCK_REALTIME, timespec) is accurate enough (resolution about 1 ns) to do the job on many OSes. For portability I am using this function and running it on both VxWorks 6.2 and Linux 3.7.
I've tried to measure the time taken by a simple print:
#include <time.h>
#include <cstdint>
#include <iostream>

#define BILLION 1000000000L

int main(){
    struct timespec start, end;
    int64_t diff;
    for(int i=0; i<1000; i++){
        clock_gettime(CLOCK_REALTIME, &start);
        std::cout<<"Do stuff"<<std::endl;
        clock_gettime(CLOCK_REALTIME, &end);
        diff = BILLION*(end.tv_sec-start.tv_sec)+(end.tv_nsec-start.tv_nsec);
        std::cout<<diff<<std::endl;
    }
    return 0;
}
I compiled this on Linux and VxWorks. On Linux the results seemed logical (about 20 µs on average). But on VxWorks I got a lot of zeros, then 5000000 ns, then a lot of zeros again...
PS: on VxWorks I ran this app on an ARM Cortex-A8, and the results seemed random.
Has anyone seen the same bug before?
In VxWorks, the clock resolution is defined by the system scheduler frequency. By default this is typically 60 Hz, but it may differ depending on the BSP, kernel configuration, or runtime configuration.
The VxWorks kernel configuration parameters SYS_CLK_RATE_MAX and SYS_CLK_RATE_MIN define the maximum and minimum values supported, and SYS_CLK_RATE defines the default rate, applied at boot.
The actual clock rate can be modified at runtime using sysClkRateSet, either within your code, or from the shell.
You can check the current rate by using sysClkRateGet.
Given that you are seeing either 0 or 5000000 ns (which is 5 ms), I would expect that your system clock rate is ~200 Hz.
To get greater resolution, you can increase the system clock rate. However, this may have undesired side effects, as this will increase the frequency of certain system operations.
A better method of timing code may be to use sysTimestamp which is typically driven from a high frequency timer, and can be used to perform high-res timing of short-lived activities.
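As a rough sketch (for a VxWorks kernel task, assuming the usual sysLib.h declarations; whether a given rate is supported depends on your BSP):

#include <vxWorks.h>
#include <sysLib.h>
#include <stdio.h>

void showAndRaiseClockRate(void)
{
    printf("current system clock rate: %d ticks/s\n", sysClkRateGet());

    /* Request 1 ms ticks; success is BSP-dependent. */
    if (sysClkRateSet(1000) == OK)
        printf("new rate: %d ticks/s\n", sysClkRateGet());
    else
        printf("sysClkRateSet(1000) failed (rate not supported?)\n");
}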
I think that in VxWorks the default clock resolution is 16.66 ms, which you can query by calling clock_getres(). You can change the resolution by calling sysClkRateSet() (the maximum resolution supported is about 200 µs, I think, by passing 5000 as the argument). You can then calculate the difference between two timestamps using difftime().
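For reference, a small sketch of querying the advertised resolution with clock_getres(), which should reflect the system clock rate on VxWorks:

#include <time.h>
#include <stdio.h>

int main(void)
{
    struct timespec res;
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("CLOCK_REALTIME resolution: %ld.%09ld s\n",
               (long)res.tv_sec, res.tv_nsec);
    return 0;
}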
I am new to Boost and Chrono. I am writing a logger that logs the timestamps of API calls, entry and exit. I tried using boost::xtime first, but it wasn't giving the high-resolution values I needed, so I am thinking about using Chrono. I declared a boost::chrono::high_resolution_clock::time_point x; variable for the timestamp and assigned it boost::chrono::high_resolution_clock::now();. Now I need to get the nanoseconds from this variable and put them in my log file (that's the requirement). So I cast it with boost::chrono::duration_cast(x), but it just wouldn't let me do that. It needs 2 parameters apparently, and I only have one. Is there a way to get around this? Is it possible to create another time_point variable, assign zero to it, and use that? I tried assigning zero, but it's not working. Kindly help me out.
Thanks,
Sam
If this is tagged C++11, is there any reason not to use std::chrono?
// Using std::chrono
auto start = std::chrono::high_resolution_clock::now(); // start timer
/* do some work */
auto diff = std::chrono::high_resolution_clock::now() - start; // get difference
auto nsec = std::chrono::duration_cast<std::chrono::nanoseconds>(diff);
std::cout << "it took: " << nsec.count() << " nanoseconds" << std::endl;
boost::chrono::duration_cast converts a duration into the specified units, but you've given it a boost::chrono::time_point, not a duration.
There's really no such thing as "the current time in nanoseconds". To get a duration, you need to specify the time since which you want to know how many nanoseconds have elapsed (an "epoch"). Different clocks will measure their time based on different epochs.
boost::chrono::system_clock (currently) uses the Unix epoch (midnight Jan 1, 1970) as its epoch, but it's not steady and it may not have the resolution you need (it's in nanoseconds on my Ubuntu box, but in 1/10,000,000ths of a second on my Windows box).
boost::chrono::high_resolution_clock uses boot up as its epoch, is steady, and measures time in nanoseconds on both boxes I tested on.
Boost also provides other clocks like process_cpu_clock that use other epochs and count in other units.
Thus you can get nanos since Jan 1, 1970 using system_clock, but it may not actually be nanosecond-accurate, and it may go backwards if the user changes the system time or the computer syncs with network time, or you can get nanos since some other point in time using high_resolution_clock.
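For example, a minimal sketch (assuming Boost.Chrono is linked) that pulls a nanosecond count out of a time_point by casting its time_since_epoch() duration:

#include <boost/chrono.hpp>
#include <iostream>

int main()
{
    boost::chrono::high_resolution_clock::time_point x =
        boost::chrono::high_resolution_clock::now();

    // duration_cast takes a duration, so cast the time since the clock's epoch
    boost::chrono::nanoseconds ns =
        boost::chrono::duration_cast<boost::chrono::nanoseconds>(x.time_since_epoch());

    std::cout << ns.count() << " ns since the clock's epoch" << std::endl;
    return 0;
}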
I am using the QueryPerformanceCounter() function for measuring time in my application. This function is also used to supply the timeline of my application's life. Recently I noticed a drift relative to other time functions, so I wrote a small test to check whether the drift is real or not. (I use the VS2013 compiler.)
#include <Windows.h>
#include <tchar.h>
#include <chrono>
#include <thread>
#include <cstdio>
static LARGE_INTEGER s_freq;
using namespace std::chrono;
inline double now_WinPerfCounter()
{
LARGE_INTEGER tt;
if (TRUE != ::QueryPerformanceCounter(&tt))
{
printf("Error in QueryPerformanceCounter() - Err=%d\n", GetLastError());
return -1.;
}
return (double)tt.QuadPart / s_freq.QuadPart;
}
inline double now_WinTick64()
{
return (double)GetTickCount64() / 1000.;
}
inline double now_WinFileTime()
{
FILETIME ft;
::GetSystemTimeAsFileTime(&ft);
long long * pVal = reinterpret_cast<long long *>(&ft);
return (double)(*pVal) / 10000000. - 11644473600LL;
}
int _tmain(int argc, _TCHAR* argv[])
{
if (TRUE != ::QueryPerformanceFrequency(&s_freq))
{
printf("Error in QueryPerformanceFrequency() - Err=#d\n", GetLastError());
return -1;
}
// save all timetags at the beginning
double t1_0 = now_WinPerfCounter();
double t2_0 = now_WinTick64();
double t3_0 = now_WinFileTime();
steady_clock::time_point t4_0 = steady_clock::now();
for (int i = 0;; ++i) // forever
{
double t1 = now_WinPerfCounter();
double t2 = now_WinTick64();
double t3 = now_WinFileTime();
steady_clock::time_point t4 = steady_clock::now();
printf("%03d\t %.3lf %.3lf %.3lf %.3lf \n",
i,
t1 - t1_0,
t2 - t2_0,
t3 - t3_0,
duration_cast<nanoseconds>(t4 - t4_0).count() * 1.e-9
);
std::this_thread::sleep_for(std::chrono::seconds(10));
}
return 0;
}
The output was confusing:
000 0.000 0.000 0.000 0.000
...
001 10.001 10.000 10.002 10.002
...
015 150.006 150.010 150.010 150.010
...
024 240.009 240.007 240.015 240.015
...
025 250.010 250.007 250.015 250.015
...
026 260.010 260.007 260.016 260.016
...
070 700.027 700.039 700.041 700.041
Why is there a difference? It looks like a one-second duration is not the same when measured with different API functions. Also, over the course of a day the difference is not constant...
This is normal; affordable clocks never have infinite precision. GetSystemTime() and GetTickCount() are periodically re-calibrated from an Internet time server, time.windows.com by default, which uses an unaffordable atomic clock to keep time. Adjustments to catch up or slow down are gradual, in order to avoid giving software a heart attack. The time they report will be off from the Earth's rotation by at most a few seconds; the periodic re-calibration limits long-term drift error.
But QPF is not calibrated; its resolution is far too high for those kinds of adjustments not to affect the short-interval measurements it is normally used for. It is derived from a frequency source available on the chipset, picked by the system builder, and subject to typical electronic part tolerances; temperature in particular affects accuracy and causes variable drift over longer periods. QPF is therefore only suitable for measuring relatively short intervals.
Inevitably both clocks can't be the same and you will see them drift apart. You'll have to pick one or the other to be the "master" clock. If long term accuracy is important then you'll have to pick the calibrated source, if high resolution but not accuracy is important then QPF. If both are important then you need custom hardware, like a GPS clock.
Edit: The main reason for the drift you observe is an inaccuracy of the performance counter frequency. The system provides you with a constant value, returned by QueryPerformanceFrequency, that is only near the true frequency; it is accompanied by an offset and possibly some drift. Modern platforms (Windows > 7, invariant TSC) have little offset and little drift.
Example: A 5 ppm offset would cause the time to move forward by 5 us/s or 432 ms/day faster than expected if you use the performance counter frequency to scale performance counter values to times.
General:
QueryPerformanceCounter and GetSystemTimeAsFileTime use different resources (hardware). Modern platforms derive QueryPerformanceCounter from the CPU's time stamp counter (TSC), while GetSystemTimeAsFileTime uses the PIT timer, the ACPI PM timer, or HPET hardware. See the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 3B: System Programming Guide, Part 2 for details.
It is unavoidable to deal with drift when the two APIs are using different hardware. However, you may extend your test code to calibrate the drift.
Frequency-generating hardware is typically temperature sensitive, so the drift may vary depending on the load. See The Windows Timestamp Project for details.
Probably, different functions measure time in different ways.
QueryPerformanceCounter - retrieves high-resolution time stamps, intended for measuring time intervals.
GetSystemTimeAsFileTime - retrieves the current system date and time in UTC format.
So it is not quite correct to use these functions together and compare the times they return.
It's probably a bad idea to use multiple measurement methods at once; the different functions have different resolutions, so you'll effectively be seeing noise from values being rounded to their precision. And of course a small amount of time passes while setting t1..t4, as the functions themselves take time to execute, so some drift is unavoidable.
You may want to look at the GetSystemTimePreciseAsFileTime API. It is both high-precision and periodically adjusted from external time servers.
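A minimal sketch of reading it (note that, per MSDN, GetSystemTimePreciseAsFileTime requires Windows 8 / Server 2012 or later):

#include <windows.h>
#include <cstdio>

int main()
{
    FILETIME ft;
    GetSystemTimePreciseAsFileTime(&ft);

    ULARGE_INTEGER t;
    t.LowPart = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;

    // 100-nanosecond intervals since January 1, 1601 (UTC)
    printf("%llu\n", (unsigned long long)t.QuadPart);
    return 0;
}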
I am profiling CPU usage on a simple program I am writing. I have different algorithms I want to try, and I also want to know what's the impact on the total system performance.
Currently, I am using ualarm() to execute some instructions at 30 Hz; on every 15th of those interruptions (every 0.5 s) I record the CPU time with getrusage() (in microseconds), so I have an estimate of the total CPU time consumed up to that point. But to put that in context, I also need to know the total time elapsed in the system over that period, so I can compute the percentage of it used by my program.
/* Main loop: `alarm` is set to 1 by the SIGALRM handler installed for
   ualarm(); f, count, count_max and ru are declared/initialized elsewhere. */
while(1)
{
    alarm = 0;
    /* Waiting loop: spin until the next ualarm() tick. */
    for(i=0; !alarm; i++){
    }
    count++;
    /* Do my things */
    /* Check if it's time to store the CPU log: */
    if ((count % count_max) == 0)
    {
        getrusage(RUSAGE_SELF, &ru);
        store_cpulog(f,
            (int64_t) ru.ru_utime.tv_sec,
            (int64_t) ru.ru_utime.tv_usec,
            (int64_t) ru.ru_stime.tv_sec,
            (int64_t) ru.ru_stime.tv_usec);
    }
}
I have different options, but I don't know which one will provide the most accurate result:
Use ualarm for the timing. Currently it's programmed to signal every 0.5 seconds, so I can take those 0.5 seconds as the elapsed time. It seems the obvious thing to use, but is it the best option?
Use clock_gettime(CLOCK_MONOTONIC): it provides readings with nanosecond resolution (see the sketch after this list).
Use gettimeofday(): it provides readings with microsecond resolution. I've found opinions against using it.
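A rough sketch of what option 2 could look like, pairing getrusage() with CLOCK_MONOTONIC so the CPU time can be expressed as a fraction of the wall-clock interval (the busy loop below is only a stand-in workload):

#include <stdio.h>
#include <time.h>
#include <sys/resource.h>

static double wall_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

static double cpu_now(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_utime.tv_sec + ru.ru_utime.tv_usec * 1e-6
         + ru.ru_stime.tv_sec + ru.ru_stime.tv_usec * 1e-6;
}

int main(void)
{
    double w0 = wall_now(), c0 = cpu_now();
    for (volatile long i = 0; i < 100000000L; ++i) {}  /* stand-in workload */
    double cpu_pct = 100.0 * (cpu_now() - c0) / (wall_now() - w0);
    printf("CPU usage over the interval: %.1f%%\n", cpu_pct);
    return 0;
}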
Any recommendation? Thanks.
A possible solution is to use the system `time` utility and avoid the busy loop (as @Hasturkun says) in your program. Run in a console:
time /path/to/my/program
and after execution of it you get something like:
real 0m1.465s
user 0m0.000s
sys 0m1.210s
I'm not sure whether the precision is enough for you.
Callgrind is possibly the best application for profiling C/C++ code under Linux. Use it with pride :)
How can you obtain the system clock's current time of day (in milliseconds) in C++? This is a windows specific app.
The easiest (and most direct) way is to call GetSystemTimeAsFileTime(), which returns a FILETIME, a struct which stores the 64-bit number of 100-nanosecond intervals since midnight Jan 1, 1601.
At least at the time of Windows NT 3.1, 3.51, and 4.01, the GetSystemTimeAsFileTime() API was the fastest user-mode API able to retrieve the current time. It also offers the advantage (compared with GetSystemTime() -> SystemTimeToFileTime()) of being a single API call, that under normal circumstances cannot fail.
To convert a FILETIME ft_now; to a 64-bit integer named ll_now, use the following:
ll_now = (LONGLONG)ft_now.dwLowDateTime + ((LONGLONG)(ft_now.dwHighDateTime) << 32LL);
You can then divide by the number of 100-nanosecond intervals in a millisecond (10,000 of those) and you have milliseconds since the Win32 epoch.
To convert to the Unix epoch, subtract 116444736000000000LL to reach Jan 1, 1970.
You mentioned a desire to find the number of milliseconds into the current day. Because the Win32 epoch begins at a midnight, the number of milliseconds passed so far today can be calculated from the FILETIME with a modulus operation. Specifically, because there are 24 hours/day * 60 minutes/hour * 60 seconds/minute * 1000 milliseconds/second = 86,400,000 milliseconds/day, you can take the system time in milliseconds modulo 86400000LL.
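Putting the steps above together, a minimal sketch (names such as ms_since_1601 are just illustrative):

#include <windows.h>
#include <cstdio>

int main()
{
    FILETIME ft_now;
    GetSystemTimeAsFileTime(&ft_now);

    // 100-nanosecond intervals since the Win32 epoch (Jan 1, 1601 UTC)
    LONGLONG ll_now = (LONGLONG)ft_now.dwLowDateTime
                    + ((LONGLONG)ft_now.dwHighDateTime << 32);

    LONGLONG ms_since_1601 = ll_now / 10000;       // 10,000 x 100 ns = 1 ms
    LONGLONG ms_today      = ms_since_1601 % 86400000LL;

    printf("milliseconds into the current UTC day: %lld\n", ms_today);
    return 0;
}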
For a different application, one might not want to use the modulus. Especially if one is calculating elapsed times, one might run into difficulties due to wrap-around at midnight. These difficulties are solvable; the best example I am aware of is Linus Torvalds's line in the Linux kernel which handles counter wrap-around.
Keep in mind that the system time is returned as a UTC time (both in the case of GetSystemTimeAsFileTime() and simply GetSystemTime()). If you require the local time as configured by the Administrator, then you could use GetLocalTime().
To get the time expressed as UTC, use GetSystemTime in the Win32 API.
SYSTEMTIME st;
GetSystemTime(&st);
SYSTEMTIME is documented as having these relevant members:
WORD wYear;
WORD wMonth;
WORD wDayOfWeek;
WORD wDay;
WORD wHour;
WORD wMinute;
WORD wSecond;
WORD wMilliseconds;
As shf301 helpfully points out below, GetLocalTime (with the same prototype) will yield a time corrected to the user's current timezone.
You have a few good answers here, depending on what you're after. If you're looking for just the time of day, my answer is the best approach; if you need solid dates for arithmetic, consider Alex's. There are a lot of ways to skin the time cat on Windows, and some of them are more accurate than others (and nobody has mentioned QueryPerformanceCounter yet).
A cut-to-the-chase example of Jed's answer above:
const std::string currentDateTime() {
    SYSTEMTIME st;
    GetSystemTime(&st);
    char currentTime[84] = "";
    sprintf(currentTime, "%d/%d/%d %d:%d:%d %d",
            st.wDay, st.wMonth, st.wYear, st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
    return std::string(currentTime);
}
Use GetSystemTime first; then, if you need it, you can call SystemTimeToFileTime on the SYSTEMTIME structure that the former fills in for you. A FILETIME is a 64-bit count of 100-nanosecond intervals since an epoch, and so is more suitable for arithmetic; a SYSTEMTIME is a structure with all the expected fields (year, month, day, hour, etc., down to milliseconds). If you want to know "how many milliseconds have elapsed since midnight", for example, subtracting two FILETIMEs (one for the current time, one obtained by converting the same SYSTEMTIME after zeroing out the appropriate fields) and dividing by the appropriate power of ten is probably the simplest available approach.
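A rough sketch of that approach (variable names here are just illustrative):

#include <windows.h>
#include <cstdio>

int main()
{
    SYSTEMTIME st_now;
    GetSystemTime(&st_now);

    // Copy with the time-of-day fields zeroed out = today's midnight (UTC)
    SYSTEMTIME st_midnight = st_now;
    st_midnight.wHour = st_midnight.wMinute = 0;
    st_midnight.wSecond = st_midnight.wMilliseconds = 0;

    FILETIME ft_now, ft_midnight;
    SystemTimeToFileTime(&st_now, &ft_now);
    SystemTimeToFileTime(&st_midnight, &ft_midnight);

    ULARGE_INTEGER a, b;
    a.LowPart = ft_now.dwLowDateTime;       a.HighPart = ft_now.dwHighDateTime;
    b.LowPart = ft_midnight.dwLowDateTime;  b.HighPart = ft_midnight.dwHighDateTime;

    // 100-nanosecond intervals -> milliseconds
    printf("ms since midnight (UTC): %llu\n",
           (unsigned long long)((a.QuadPart - b.QuadPart) / 10000));
    return 0;
}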
Depending on the needs of your application there are six common options. This Dr. Dobb's Journal article will give you all the information (and more) you need for choosing the best one.
In your specific case, from this article:
GetSystemTime() retrieves the current system time and instantiates a SYSTEMTIME structure, which is composed of a number of separate fields including year, month, day, hours, minutes, seconds, and milliseconds.
Here is some code that works on Windows, which I've used in an Open Watcom C project. It should work in C++ too. It returns seconds (not milliseconds) using _dos_gettime or gettime.
#include <dos.h>   /* _dos_gettime (Watcom) / gettime (Borland) */

double seconds(void)
{
#ifdef __WATCOMC__
    struct dostime_t t;
    _dos_gettime(&t);
    return ((double)t.hour * 3600 + (double)t.minute * 60 + (double)t.second + (double)t.hsecond * 0.01);
#else
    struct time t;
    gettime(&t);
    return ((double)t.ti_hour * 3600 + (double)t.ti_min * 60 + (double)t.ti_sec + (double)t.ti_hund * 0.01);
#endif
}
While it's not what the question asks, it's worth considering why you want this info.
If all you want to do is keep track of how long something takes to calculate, or the time passed since the last user interaction, consider using the uptime (milliseconds since boot), which is much simpler to get: GetTickCount() or GetTickCount64(). This is all I wanted to do, but I went down the epoch rabbit hole first because that's how you do it under Unix.
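A minimal sketch of that approach (the Sleep call is just a stand-in for the work being timed):

#include <windows.h>
#include <cstdio>

int main()
{
    ULONGLONG start = GetTickCount64();   // milliseconds since boot
    Sleep(250);                           // stand-in for the work being timed
    ULONGLONG elapsed_ms = GetTickCount64() - start;
    printf("elapsed: %llu ms\n", (unsigned long long)elapsed_ms);
    return 0;
}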