Getting the current time (in milliseconds) from the system clock in Windows? - c++

How can you obtain the system clock's current time of day (in milliseconds) in C++? This is a Windows-specific app.

The easiest (and most direct) way is to call GetSystemTimeAsFileTime(), which fills in a FILETIME, a struct that stores the 64-bit number of 100-nanosecond intervals since midnight of Jan 1, 1601.
At least as of Windows NT 3.1, 3.51, and 4.0, the GetSystemTimeAsFileTime() API was the fastest user-mode API able to retrieve the current time. It also offers the advantage (compared with GetSystemTime() followed by SystemTimeToFileTime()) of being a single API call that, under normal circumstances, cannot fail.
To convert a FILETIME ft_now to a 64-bit integer named ll_now, use the following:
ll_now = (LONGLONG)ft_now.dwLowDateTime + ((LONGLONG)ft_now.dwHighDateTime << 32);
You can then divide by the number of 100-nanosecond intervals in a millisecond (10,000 of those) and you have milliseconds since the Win32 epoch.
To convert to the Unix epoch, subtract 116444736000000000LL to reach Jan 1, 1970.
You mentioned a desire to find the number of milliseconds into the current day. Because the Win32 epoch begins at midnight, the number of milliseconds elapsed so far today can be calculated from the FILETIME with a modulus operation. Specifically, because there are 24 hours/day * 60 minutes/hour * 60 seconds/minute * 1000 milliseconds/second = 86,400,000 milliseconds/day, you can take the system time in milliseconds modulo 86400000LL.
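Putting those steps together, a minimal sketch might look like this (untested; assumes <windows.h>):
FILETIME ft_now;
GetSystemTimeAsFileTime(&ft_now);
LONGLONG ll_now = (LONGLONG)ft_now.dwLowDateTime
                + ((LONGLONG)ft_now.dwHighDateTime << 32);
LONGLONG ms_since_1601 = ll_now / 10000;                          // 10,000 x 100 ns = 1 ms
LONGLONG ms_since_1970 = (ll_now - 116444736000000000LL) / 10000; // shift to the Unix epoch
LONGLONG ms_today      = ms_since_1601 % 86400000LL;              // milliseconds into the current UTC day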
For a different application, one might not want to use the modulus. Especially if one is calculating elapsed times, one might run into difficulties due to wrap-around at midnight. These difficulties are solvable; the best example I am aware of is Linus Torvalds' line in the Linux kernel that handles counter wrap-around.
Keep in mind that the system time is returned as UTC (both by GetSystemTimeAsFileTime() and by GetSystemTime()). If you require the local time as configured by the administrator, you can use GetLocalTime().

To get the time expressed as UTC, use GetSystemTime in the Win32 API.
SYSTEMTIME st;
GetSystemTime(&st);
SYSTEMTIME is documented as having these relevant members:
WORD wYear;
WORD wMonth;
WORD wDayOfWeek;
WORD wDay;
WORD wHour;
WORD wMinute;
WORD wSecond;
WORD wMilliseconds;
As shf301 helpfully points out below, GetLocalTime (with the same prototype) will yield a time corrected to the user's current timezone.
You have a few good answers here, depending on what you're after. If you're looking for just time of day, my answer is the best approach -- if you need solid dates for arithmetic, consider Alex's. There's a lot of ways to skin the time cat on Windows, and some of them are more accurate than others (and nobody has mentioned QueryPerformanceCounter yet).

A cut-to-the-chase example of Jed's answer above:
const std::string currentDateTime() {
    SYSTEMTIME st;
    GetSystemTime(&st);
    char currentTime[84] = "";
    sprintf(currentTime, "%d/%d/%d %d:%d:%d %d",
            st.wDay, st.wMonth, st.wYear, st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
    return std::string(currentTime);
}

Use GetSystemTime first; then, if you need it, call SystemTimeToFileTime on the SYSTEMTIME structure that the former fills in for you. A FILETIME is a 64-bit count of 100-nanosecond intervals since an epoch, and so is more suitable for arithmetic; a SYSTEMTIME is a structure with all the expected fields (year, month, day, hour, etc., down to milliseconds). If you want to know "how many milliseconds have elapsed since midnight", for example, subtracting two FILETIMEs (one for the current time, one obtained by converting the same SYSTEMTIME after zeroing out the appropriate fields) and dividing by the appropriate power of ten is probably the simplest available approach.
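A hedged sketch of that approach (untested; assumes <windows.h>):
SYSTEMTIME st;
GetSystemTime(&st);
FILETIME ft_now, ft_midnight;
SystemTimeToFileTime(&st, &ft_now);
st.wHour = st.wMinute = st.wSecond = st.wMilliseconds = 0;       // rewind to midnight
SystemTimeToFileTime(&st, &ft_midnight);
ULARGE_INTEGER a, b;
a.LowPart = ft_now.dwLowDateTime;      a.HighPart = ft_now.dwHighDateTime;
b.LowPart = ft_midnight.dwLowDateTime; b.HighPart = ft_midnight.dwHighDateTime;
ULONGLONG ms_since_midnight = (a.QuadPart - b.QuadPart) / 10000; // 100-ns ticks -> ms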

Depending on the needs of your application there are six common options. This Dr. Dobb's Journal article will give you all the information (and more) you need to choose the best one.
In your specific case, from this article:
GetSystemTime() retrieves the current system time and instantiates a SYSTEMTIME structure, which is composed of a number of separate fields including year, month, day, hours, minutes, seconds, and milliseconds.

Here is some code that works on Windows which I've used in an Open Watcom C project. It should work in C++ as well. It returns seconds (not milliseconds) using _dos_gettime or gettime.
double seconds(void)
{
#ifdef __WATCOMC__
    struct dostime_t t;
    _dos_gettime(&t);
    return ((double)t.hour * 3600 + (double)t.minute * 60 +
            (double)t.second + (double)t.hsecond * 0.01);
#else
    struct time t;
    gettime(&t);
    return ((double)t.ti_hour * 3600 + (double)t.ti_min * 60 +
            (double)t.ti_sec + (double)t.ti_hund * 0.01);
#endif
}

While it's not what the question asks, it's worth considering why you want this info.
If all you want to do is measure how long something takes to compute, or the time elapsed since the last user interaction, consider using the uptime (milliseconds since boot), which is much simpler to get: GetTickCount() or GetTickCount64(). This is all I wanted to do, but I went down the epoch rabbit hole first because that's how you do it under Unix.
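For instance, a minimal elapsed-time sketch (assumes <windows.h>; GetTickCount64() needs Vista or later, and do_work() is a hypothetical workload):
ULONGLONG start = GetTickCount64();
do_work();                                        // hypothetical; whatever is being timed
ULONGLONG elapsed_ms = GetTickCount64() - start;  // avoids the ~49.7-day wrap of GetTickCount()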

Related

What's the most efficient way to programmatically check if the year has changed

I am trying to capture packets from the NIC and save part of the packet payload as a string.
One part of the packet that must be stored is its log time, known as the SysLog. Each packet has a SysLog with the following format:
Nov 01 03 14:50:25 TCP...[other parts of packet Payload]
As can be seen, the packet SysLog has no year number. My program must run all year long, so I need to add the year number to the packet SysLog and convert the SysLog to epoch time. The final string that I have to store is like this:
1478175389-TCP, ….
I use the following piece of code to convert the SysLog to EpochTime:
tm tm_date = {};
std::string time = currentYear();
time += " ";
time += packet.substr(0, 18);
strptime(time.c_str(), "%Y %b %d %T", &tm_date);
EpochTime = timegm(&tm_date);
The currentYear() method:
std::string currentYear() {
    std::stringstream now;
    auto tp = std::chrono::system_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(tp.time_since_epoch());
    size_t modulo = ms.count() % 1000;
    time_t seconds = std::chrono::duration_cast<std::chrono::seconds>(ms).count();
#if HAS_STD_PUT_TIME
#else
    char buffer[25]; // holds "2013-12-01 21:31:42"
    if (strftime(buffer, 25, "%Y", localtime(&seconds))) {
        now << buffer;
    }
#endif // HAS_STD_PUT_TIME
    return now.str();
}
The above operations are what I have to do for every packet. The packet rate is 100,000-1,000,000 pps, and the above piece of code is very time-consuming, especially currentYear().
One possible optimization is to remove the currentYear() method and save the year number as a constant value. As said earlier, my program must run all year long, and as you know, 2017 is coming. We cannot change our binary at 31/12/2016 23:59:00, and we also don't want to waste time calculating the year number!
I need a more efficient way to calculate the current year number without running it for each packet.
Is it possible? What is your suggestion for me?
Once you have obtained the current date and time, it shouldn't be too difficult to calculate the epoch time for midnight of next January 1st.
After calculating the expected epoch time for when the year rolls around, all you have to do going forward is compare it to the current time when making a log entry. If it hasn't reached the precalculated Jan 1 midnight time, you know that the year hasn't rolled around yet.
So, you don't need to calculate the year for every packet at all. You just need to check the current time against the precalculated January 1st midnight time, which shouldn't change unless the politicians decide to change your timezone while all of this is running...
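A minimal sketch of that idea (assuming UTC timestamps and the POSIX gmtime_r()/timegm() the question already uses; all names are illustrative):
#include <ctime>

static time_t next_rollover = 0;  // epoch time of the next Jan 1 midnight
static int    cached_year   = 0;

int year_for(time_t now) {
    if (now >= next_rollover) {               // taken at most once per year
        struct tm utc;
        gmtime_r(&now, &utc);
        cached_year = utc.tm_year + 1900;
        utc.tm_year += 1;                     // midnight of next January 1st
        utc.tm_mon  = 0;  utc.tm_mday = 1;
        utc.tm_hour = utc.tm_min = utc.tm_sec = 0;
        next_rollover = timegm(&utc);
    }
    return cached_year;                       // the per-packet cost is one comparison
}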
The year changes for log entries beginning with Jan, and only for those log entries.
Log entries sometimes come out of order, or carry a timestamp saved during previous processing.
Attaching the year from the PC clock will give bad results, such as:
2016 Dec 31 23:59:58 normal
2016 Jan 01 00:01:01 printing time placed in packet by remote device, remote clock is running a bit fast
2017 Dec 31 23:59:59 printing timestamp saved locally two seconds before logging occurred
2017 Jan 01 00:00:03 back to normal
You can't just concatenate the year of the local clock with the month...second of the log message. You have to assign the year that avoids large clock jumps.
Since you're trying to produce Unix time (seconds since the epoch) anyway, start by turning the log message time into a Julian value (seconds since the start of the year) and test whether that Julian is less than or greater than, say, 10 million (roughly 4 months).
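A hedged sketch of that heuristic (the function and constant names are illustrative, and the 365-day year constant is an approximation):
int assignYear(long msgJulian, long locJulian, int locYear) {
    const long kFourMonths = 10000000L;  // ~10 million seconds, roughly 4 months
    const long kYear       = 31536000L;  // 365 days in seconds
    if (msgJulian < kFourMonths && locJulian > kYear - kFourMonths)
        return locYear + 1;  // message already in the new year; local clock hasn't rolled over yet
    if (msgJulian > kYear - kFourMonths && locJulian < kFourMonths)
        return locYear - 1;  // message still in the old year; local clock has already rolled over
    return locYear;
}
Here msgJulian is the seconds-into-the-year parsed from the log line, and locJulian/locYear come from the local clock.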
You can "cache" the string you generate and only change it when the year changes. It may be just a "little" improvement, though, depending on which operations take the most time.
//somewhere
static int currentYear = 0;
static std::string yearStr = "";
//in your function
auto now = std::chrono::system_clock::now();
auto tnow = std::chrono::system_clock::to_time_t(now);
auto lt = localtime(&tnow); //or gmtime, depending on your needs
if (currentYear != lt->tm_year)
{
    yearStr = std::to_string(lt->tm_year + 1900);
    currentYear = lt->tm_year;
}
return yearStr;
I am not sure whether static has any negative or positive effect on the performance of reading the string, or whether a member variable would be better here due to cache locality. You have to test this.
If you use this from multiple threads you have to use a mutex here, which will probably reduce performance (again, you have to measure this).
First, you might consider having currentYear() return an int (e.g. 2016), probably via time(2), localtime_r(3), and the tm_year field.... You'll then avoid making C++ strings.
Then, you speak of a high packet rate, so you probably have some event loop. You don't explain how it is done (hopefully you use some library à la libevent, or at least your own loop around poll(2)....), but you might compute the current year only once every tenth of a second in that event loop. Or have some other thread compute the current year once in a while (you'll probably need a mutex, or use std::atomic<int> as the type of the current year...), as sketched below.
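A minimal sketch of that background-thread variant (untested; names are illustrative):
#include <atomic>
#include <chrono>
#include <ctime>
#include <thread>

std::atomic<int> g_year{0};

void year_updater() {                         // runs on its own thread
    for (;;) {
        time_t now = time(nullptr);
        struct tm tmv;
        localtime_r(&now, &tmv);              // or gmtime_r for UTC
        g_year.store(tmv.tm_year + 1900, std::memory_order_relaxed);
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
// Packet handlers just read g_year.load(std::memory_order_relaxed) -- no locking needed.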

Strategy to reduce time of gettimeofday?

I'm writing a stats server that counts visit data for each day, so I have to clear the data in the db (memcached) every day.
Currently, I call gettimeofday to get the date and compare it with the cached date to check whether they are the same day; this check happens frequently.
Sample code as follows:
void report_visits(...) {
    std::string date = CommonUtil::GetStringDate(); // through gettimeofday
    if (date != static_cached_date_) {
        flush_db_date();
        static_cached_date_ = date;
    }
}
The problem is that I have to call gettimeofday every time a client reports visit information, and gettimeofday is time-consuming.
Any solution for this problem?
The gettimeofday system call (now obsolete in favor of clock_gettime) is among the shortest system calls to execute. The last time I measured it was on an Intel i486, where it took around 2 us. The kernel-internal version is used to timestamp network packets, to update the timestamps in filesystem inodes on read, write, and chmod system calls, and the like. If you want to measure how much time you spend in the gettimeofday system call, just make several (the more, the better) pairs of calls, one immediately after the other, recording the timestamp differences between them, and finally take the minimum of the samples as the proper value. That will be a good approximation to the ideal value.
Consider that if the kernel uses it to timestamp each read you do on a file, you can freely use it to timestamp each service request without serious penalty.
Another thing: don't use (as suggested by other responses) a routine that converts the gettimeofday result to a string, as that indeed consumes a lot more resources. You can compare timestamps directly (call them t1 and t2):
gettimeofday(&t2, NULL);
if (t2.tv_sec - t1.tv_sec > 86400) { /* 86400 is one day in seconds */
    erase_cache();
    t1 = t2;
}
or, if you want it to occur every day at the same time:
gettimeofday(&t2, NULL);
if (t2.tv_sec / 86400 > t1.tv_sec / 86400) {
    /* tv_sec / 86400 is the number of whole days since 1/1/1970, so
     * if it varies, a change of date has occurred */
    erase_cache();
}
t1 = t2; /* the assignment is made outside the if, so we track the change of date */
You can even use the time() system call for this, as it has one-second resolution (and you don't need to deal with the usecs or with the overhead of the struct timeval).
(This is an old question, but there is an important answer missing:)
You need to define the TZ environment variable and export it to your program. If it is not set, you will incur a stat(2) call on /etc/localtime... for every single call to gettimeofday(2), localtime(3), etc.
Of course these will get answered without going to disk, but the frequency of the calls and the overhead of the syscall is enough to make an appreciable difference in some situations.
Supporting documentation:
How to avoid excessive stat(/etc/localtime) calls in strftime() on linux?
https://blog.packagecloud.io/eng/2017/02/21/set-environment-variable-save-thousands-of-system-calls/
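For example, a hedged sketch of pinning TZ from inside the program (the ":/etc/localtime" value is one common choice; any explicit TZ value avoids the repeated stat(2)):
#include <cstdlib>
#include <ctime>

int main() {
    setenv("TZ", ":/etc/localtime", 1); // or e.g. "UTC"; must happen before the first time calls
    tzset();
    // ... localtime()/strftime() no longer re-stat /etc/localtime on every call
}
Equivalently, export TZ=:/etc/localtime in the shell before starting the process.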
To summarise:
The check, as you say, is done up to a few thousand times per second.
You're flushing a cache once every day.
Assuming that the exact time at which you flush is not critical and can be seconds (or even minutes perhaps) late, there is a very simple/practical solution:
void report_visits(...)
{
    static unsigned int counter;
    if ((counter++ % 1000) == 0)
    {
        std::string date = CommonUtil::GetStringDate();
        if (date != static_cached_date_)
        {
            flush_db_date();
            static_cached_date_ = date;
        }
    }
}
Just do the check once every N calls to report_visits(). In the above example N is 1000. With up to a few thousand checks per second, you'll be less than a second (or about 0.001% of a day) late.
Don't worry about counter wrap-around; it only happens once in about 20+ days (assuming a few thousand checks/s at most, with a 32-bit int), and does not hurt.

Getting milliseconds accuracy current time in Qt

Qt documentation about QTime::currentTime() says:
Note that the accuracy depends on the accuracy of the underlying operating system; not all systems provide 1-millisecond accuracy.
But is there any way to get this time with milliseconds accuracy in windows 7?
You can use the QDateTime class and convert the current time with the appropriate format:
QDateTime::currentDateTime().toString("yyyy/MM/dd hh:mm:ss,zzz")
where 'z' corresponds to millisecond accuracy.
You can use the functionality provided by the time.h header in C/C++:
#include <time.h>

clock_t start, end;
double cpu_time_used;

int main()
{
    start = clock();
    /* Do the work. */
    end = clock();
    cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC;
}
Timer resolution may vary on different platforms and readings may not be accurate. If you need high-resolution, accurate timestamps on Windows 7, it provides the QPC API:
https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408%28v=vs.85%29.aspx
GetSystemTimePreciseAsFileTime (available from Windows 8 onward) is claimed to provide system time with <1 us resolution.
But that's only about getting an accurate timestamp. If you need to actually do something with 1 ms latency (e.g. handle an event), you need an RTOS, not a desktop clunker.
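For reference, a minimal QPC sketch for interval timing (assumes <windows.h>):
LARGE_INTEGER freq, t0, t1;
QueryPerformanceFrequency(&freq);   // ticks per second, fixed at boot
QueryPerformanceCounter(&t0);
// ... the work being timed ...
QueryPerformanceCounter(&t1);
double elapsed_ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;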
One common way would be to scale up whatever you are doing and do it 10-100 times in a row; that way you can get a more accurate time reading of whatever you are doing, by dividing the result by 10-100.
But getting millisecond-precise readings of your time is pretty much useless, because you don't have 100% of the CPU time; your readings will have much greater variance than 1 millisecond whenever the OS gives another process computing time in the middle of your measurement.

Timestamp in milliseconds gives me 10 digits in C++?

I am trying to retrieve the current time in milliseconds using the Boost library. Below is the code which I am using to get the current time in milliseconds:
boost::posix_time::ptime time = boost::posix_time::microsec_clock::local_time();
boost::posix_time::time_duration duration( time.time_of_day() );
std::cout << duration.total_milliseconds() << std::endl;
uint64_t timestampInMilliseconds = duration.total_milliseconds(); // will this work or not?
std::cout << timestampInMilliseconds << std::endl;
But this prints out only about 10 digits, like 17227676. I am running my code on my Ubuntu machine, and I believe it should always be a 13-digit value, shouldn't it?
After computing the timestamp in milliseconds, I need to use the formula below on it:
int end = (timestampInMilliseconds / (60 * 60 * 1000 * 24)) % 14;
But somehow I am not sure whether the timestampInMilliseconds I am getting is right or not.
First of all, should I be using boost::posix_time or not? I am assuming there might be some better way. I am running the code on my Ubuntu machine.
Update:
This piece of bash script prints out a timestampInMilliseconds which is 13 digits:
date +%s%N | cut -b1-13
The problem here is that you use time_of_day(), which (from this reference) returns "the time offset in the day".
So from the value you provided in the question I can deduce that you ran this program at 4:47 am.
Instead you might want to use e.g. to_tm() to get a struct tm and construct your time in milliseconds from there.
Also note that the %s format to the date command (and the strftime function) is the number of seconds since the epoch, not the number of milliseconds.
If you look at the tm structure, you will see that it has the number of years (since 1900, so subtract 70 here), days into the year, and then hours, minutes and seconds into the day. All of these can be used to calculate the time in seconds easily.
And that "in seconds" is the problem here. If you look at e.g. the POSIX time function, you see that it "shall return the value of time in seconds since the Epoch".
If you want accurate millisecond resolution you simply can't use ptime (where the p stands for POSIX). You either have to use system functions that return the time at higher resolution (like gettimeofday), or you can see e.g. this old SO answer.
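For completeness, a hedged sketch that produces the 13-digit milliseconds-since-epoch value with std::chrono (C++11), without Boost:
#include <chrono>
#include <cstdint>
#include <iostream>

int main() {
    using namespace std::chrono;
    uint64_t ms = duration_cast<milliseconds>(
                      system_clock::now().time_since_epoch()).count();
    std::cout << ms << std::endl; // 13 digits for current dates, e.g. 1478175389123
}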

Time Converter - time(NULL) to WINDOWS time

Is there a tool that converts the time(NULL) value to Windows time?
time(NULL) gives the time in seconds since Jan 1, 1970. If I enter that value into this tool, it must give me the time as a date with hours, minutes and seconds.
In C++ we use the time(NULL) value a lot to send time.
See this KB from Microsoft, and chain with this function.
If by Windows time you mean the 64-bit time used in NTFS, you can use the conversion:
int64 wintime = 10000000uLL * time(NULL) + 0x19db1ded53e8000uLL;
where
int64 is the type used by your compiler for 64-bit integers.
NT time is based on an origin of 1601-01-01 00:00:00 UTC and counts ten million units per second, i.e. a timing precision of 100 ns. It assumes a simple leap year sequence and ignores the calendar complexities around 1752.
So, by multiplying the Unix time by ten million, and adding 116444736000000000 (decimal) or 0x19DB1DED53E8000, which is the difference between 1970-01-01 and 1601-01-01, one can easily convert from one to the other.
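A minimal self-contained sketch of that conversion, plus the round trip back to a human-readable date (assuming uint64_t for the tick count):
#include <cstdint>
#include <cstdio>
#include <ctime>

int main() {
    time_t unix_now = time(NULL);
    // Unix seconds -> Windows FILETIME ticks (100-ns units since 1601-01-01 UTC)
    uint64_t wintime = 10000000ULL * (uint64_t)unix_now + 116444736000000000ULL;
    // ...and back again, then format as date / hours / minutes / seconds
    time_t round_trip = (time_t)((wintime - 116444736000000000ULL) / 10000000ULL);
    char buf[32];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&round_trip));
    printf("%llu -> %s UTC\n", (unsigned long long)wintime, buf);
    return 0;
}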