I'm using the Windows::Foundation::DateTime structure at the moment, and the value it's giving me for the UTC time (its UniversalTime member) is not a UNIX timestamp, and I can't find ANY documentation on how to read it. So I did a few tests:
Let
A equal 129862800600000000 where A is UniversalTime's value at 23:00 on 11/7/2012
and let
B equal 129862476000000000 where B is UniversalTime's value at 15:00 on 11/7/2012
We can therefore assume that 8 hours of time, in whatever format UniversalTime uses, can be expressed as A-B. So we have
A-B = 3246000000 = 8 hours
(A-B)/8 = 405750000 = 1 hour
((A-B)/8)/60 = 6762500 = 1 minute
(((A-B)/8)/60)/60 = 112708.(3...) = 1 second
This turned out to be completely incorrect. If you add 405750000 to a DateTime object's UniversalTime member, for example, it most certainly does not add an hour to it. Instead, it seems to add only 40 seconds.
Basically I just need to be able to determine the number of days that have passed since the unix epoch.
In any event, if anyone has any advice or help, that would be great.
Edit:
I've also thought about the possibility that they're using a bitmask to get/set everything. But I'm not sure how to go about checking that, at the moment. (It's 4 AM, and I need to sleep. rofl)
Edit 2:
Example for what I'm currently trying to do:
if ((post_date.UniversalTime / (60 * 60 * 24)) > num_seconds_since_unix_epoch_for_current_day) {
    date_formatter = ref new DateTimeFormatter("{month.abbreviated} {day.integer(1)}, {year.full} at {hour.integer(1)}:{minute.integer(2)}:{second.integer(2)}");
} else {
    date_formatter = ref new DateTimeFormatter("Today at {hour.integer(1)}:{minute.integer(2)}");
}
date_string = date_formatter->format(post_date);
The UniversalTime field of a Windows::Foundation::DateTime is the number of 100ns units since 1/1/1601. It's exactly the same as a Windows FILETIME structure. Note that the UniversalTime is UTC, which is often different from the local time.
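For the original question (days since the Unix epoch), the conversion is just arithmetic on that tick count. A minimal sketch, with illustrative function names that are not part of the WinRT API:

#include <cstdint>

// UniversalTime holds 100-ns ticks since 1601-01-01 (UTC).
// 11644473600 is the number of seconds between 1601-01-01 and 1970-01-01.
int64_t UniversalTimeToUnixSeconds(int64_t universalTime)
{
    return universalTime / 10000000LL - 11644473600LL;
}

int64_t DaysSinceUnixEpoch(int64_t universalTime)
{
    return UniversalTimeToUnixSeconds(universalTime) / 86400LL;
}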
According to this MS tutorial, you can format the DateTime with a DateTimeFormatter.
Windows::Foundation::DateTime dt = (Windows::Foundation::DateTime) value;
Windows::Globalization::DateTimeFormatting::DateTimeFormatter^ dtf =
Windows::Globalization::DateTimeFormatting::DateTimeFormatter::LongDate::get();
dtf->Format(dt);
I'm fiddling around with time representation in C++.
I would like to have a strictly monotonic representation of time, that handles leap seconds well. The utc_clock in C++20 should be able to do that, and since my compiler doesn't support this version yet, I'm using HowardHinnant/date.
To understand the library better I have started making small test cases, but got stuck on one.
I take two dates, before and after the insertion of a leap second, and check that the duration between those two dates actually contains the extra second.
This is the test case:
TEST(DateTime, TimeLeap)
{
    using namespace std::chrono;
    using namespace date;

    // Two dates with a leap second in between
    // https://en.wikipedia.org/wiki/Leap_second
    auto t1 = clock_cast<utc_clock>(static_cast<sys_days>(2016_y/December/31));
    auto t2 = clock_cast<utc_clock>(static_cast<sys_days>(2017_y/January/1));
    EXPECT_EQ(duration_cast<seconds>(t2 - t1).count(), 24 * 3600 + 1);
}
but it fails for me:
common/tests/datetime.cpp:39: Failure
Expected: duration_cast<seconds>(t2 - t1).count()
Which is: 86400
To be equal to: 24 * 3600 + 1
Which is: 86401
It seems that the conversion between sys_clock and utc_clock doesn't add the leap second.
Suspecting that the problem is the resolution of sys_days, I've also tried doing a time_point_cast<seconds>(...) before the clock_cast<utc_clock>, but the result didn't change.
I've also tried using 2017-01-02 as the second date, in case there was an issue with distinction between 2016-12-31 23:59:60 and 2017-01-01 00:00 -- the leap second also didn't appear there.
It looks like you're using the OS supplied timezone database (USE_OS_TZDB=1), and that the leapseconds aren't being read. This can be confirmed with:
cout << get_tzdb().leap_seconds.size() << '\n';
This should output 27 (currently), but for you I imagine it is outputting 0. This means leapsecond data is missing.
With a recent (2020-09-11) commit: https://github.com/HowardHinnant/date/commit/ba99134b8a7c4a6e7d28d738a0234a85dc6bd827, the leapsecond data is read from either one of these files:
zoneinfo/leapseconds
zoneinfo/leap-seconds.list
Both of these files are IANA-supplied, but have slightly different formats. Either file will do as they have duplicate information in them. tz.cpp will search for both. If your platform doesn't ship either one of these files, you can download it from the IANA data download and copy it into place manually.
I am trying to capture packets from the NIC and save part of the packet payload as a string.
One part of the packet that must be stored is its log time, known as SysLog. Each packet has a SysLog in the following format:
Nov 01 03 14:50:25 TCP...[other parts of packet Payload]
As can be seen, the packet SysLog has no year number. My program must run all year round, so I need to add the year number to the packet SysLog and convert the SysLog to epoch time. The final string that I have to store is like this:
1478175389-TCP, ….
I use the following piece of code to convert the SysLog to EpochTime.
tm* tm_date = new tm();
std::string time = currentYear();
time += " ";
time += packet.substr(0, 18);
strptime(time.c_str(), "%Y %b %d %T", tm_date);
EpochTime = timegm(tm_date);
The currentYear() method:
std::string currentYear() {
    std::stringstream now;
    auto tp = std::chrono::system_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(tp.time_since_epoch());
    size_t modulo = ms.count() % 1000;
    time_t seconds = std::chrono::duration_cast<std::chrono::seconds>(ms).count();
#if HAS_STD_PUT_TIME
#else
    char buffer[25]; // holds "2013-12-01 21:31:42"
    if (strftime(buffer, 25, "%Y", localtime(&seconds))) {
        now << buffer;
    }
#endif // HAS_STD_PUT_TIME
    return now.str();
}
The above operations are what I have to do for every packet. The packet rate is 100,000-1,000,000 pps, and the above piece of code is very time consuming, especially currentYear().
One possible optimization is to remove the currentYear() method and save the year number as a constant value. As said earlier, my program must run all year round, and as you know 2017 is coming. We cannot change our binary at 31/12/2016 23:59:00, and we also don't want to waste time recalculating the year number!
I need a more efficient way to calculate the current year number without running it for each packet.
Is it possible? What is your suggestion for me?
Once you have obtained the current date and time, based on this it shouldn't be too difficult to calculate what the epoch time will be for midnight of next January 1st.
After calculating the expected epoch time for when the year rolls around, going forward all you have to do is compare it to the current time, when making a log entry. If it hasn't reached the precalculated Jan 1 midnight time, you know that the year hasn't rolled around yet.
So, you don't need to calculate the year for every packet at all. Just need to check the current time against the precalculated January 1st midnight time, which shouldn't change unless the politicians decide to change your timezone, while all of this is running...
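A minimal sketch of that idea, with assumed names (not the asker's code): compute the epoch time of the upcoming January 1st once, and on each packet only compare against it.

#include <ctime>
#include <string>

static time_t next_year_start = 0;   // epoch time of the next Jan 1 00:00 UTC
static std::string year_str;         // cached year string, e.g. "2016"

static void refreshYear(time_t now)
{
    tm utc{};
    gmtime_r(&now, &utc);
    year_str = std::to_string(utc.tm_year + 1900);

    tm jan1{};                        // midnight, January 1st of the following year
    jan1.tm_year = utc.tm_year + 1;   // tm_mon = 0 (January) already
    jan1.tm_mday = 1;
    next_year_start = timegm(&jan1);
}

const std::string& currentYearCached(time_t now)
{
    if (now >= next_year_start)       // true on the first call and then once per year
        refreshYear(now);
    return year_str;
}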
The year is changed for log entries beginning with Jan, and only those log entries.
Log entries sometimes come out of order, or carry a timestamp saved during previous processing.
Attaching the year from the PC clock will give bad results, such as
2016 Dec 31 23:59:58 normal
2016 Jan 01 00:01:01 printing time placed in packet by remote device, remote clock is running a bit fast
2017 Dec 31 23:59:59 printing timestamp saved locally two seconds before logging occurred
2017 Jan 01 00:00:03 back to normal
You can't just concatenate the year of the local clock with the month...second of the log message. You have to assign the year that avoids large clock jumps.
Since you're trying to produce Unix time (seconds since epoch) anyway, start by turning the log message time into Julian (seconds since start of year) and test whether the Julian is less than or greater than say 10 million (roughly 4 months).
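One way to implement that rule (a sketch with illustrative names, not the answer's code) is to try the local clock's year and the adjacent years, and keep whichever epoch time lands closest to the receiver's clock; this avoids the near-year jumps shown in the example above.

#include <ctime>

// 'log_tm' holds month..second parsed from the syslog header (tm_year unset).
time_t assignYearAndConvert(tm log_tm, time_t now)
{
    tm utc{};
    gmtime_r(&now, &utc);

    time_t best = 0;
    time_t best_diff = 0;
    for (int offset = -1; offset <= 1; ++offset) {
        tm candidate = log_tm;
        candidate.tm_year = utc.tm_year + offset;   // previous, current, next year
        time_t t = timegm(&candidate);
        time_t diff = (t > now) ? t - now : now - t;
        if (offset == -1 || diff < best_diff) {
            best = t;
            best_diff = diff;
        }
    }
    return best;                                     // candidate closest to 'now'
}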
You can "cache" the string you generate and only change it when the year changes. It may be though just a "little" improvement depending on what operations take the most time.
// somewhere
static int currentYear = 0;
static std::string yearStr = "";

// in your function
auto now = std::chrono::system_clock::now();
auto tnow = std::chrono::system_clock::to_time_t(now);
auto lt = std::localtime(&tnow); // or gmtime, depending on your needs
if (currentYear != lt->tm_year)  // localtime returns a pointer, so use ->
{
    yearStr = std::to_string(lt->tm_year + 1900);
    currentYear = lt->tm_year;
}
return yearStr;
I am not sure whether static has any negative or positive effect on the performance of reading the string; a member variable may be better here due to cache locality. You have to test this.
If you use this in multiple threads you have to use a mutex here which probably will reduce performance though (again you have to measure this).
First, you might consider currentYear() returning an int (e.g. 2016), probably with time(2), localtime_r(3), the tm_year field.... You'll then avoid making C++ strings.
Then, you speak of a high packet rate, so you probably have some event loop. You don't explain how it is done (hopefully you use some library à la libevent, or at least your own loop around poll(2)....), but you might compute the current year only once every tenth of a second in that event loop. Or have some other thread computing the current year once in a while (you'll probably need a mutex, or use std::atomic<int> as the type of the current year...)
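A minimal sketch of the second suggestion (assumed structure, not prescribed by the answer): a background thread refreshes a std::atomic<int> once per second, and the packet path only does a cheap relaxed load.

#include <atomic>
#include <chrono>
#include <ctime>
#include <thread>

std::atomic<int> g_current_year{0};

// Launched once at startup, e.g. std::thread(yearUpdater).detach();
void yearUpdater()
{
    for (;;) {
        time_t now = time(nullptr);
        tm local{};
        localtime_r(&now, &local);          // or gmtime_r, matching the log format
        g_current_year.store(local.tm_year + 1900, std::memory_order_relaxed);
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

// In the packet path:
// int year = g_current_year.load(std::memory_order_relaxed);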
I have a counter "numberOrders" and i want to reset it everyday at midnight, to know how many orders I get in one day, what I have right now is this:
val system = akka.actor.ActorSystem("system")
system.scheduler.schedule(86400000 milliseconds, 0 milliseconds){(numberOrders = 0)}
This piece of code is inside a def which is called every time I get a new order, so what it does is reset numberOrders 24 hours after the first order, or after every order; I'm not really sure whether it resets 24 hours after every new order, which is not what I want. I want to reset the variable every day at midnight. Any idea? Thanks!
To expand on pushy's answer: since you might not always be sure when the site started, and if you want to be exactly sure it runs at midnight, you can do the following.
val system = akka.actor.ActorSystem("system")
val wait = (24 hours).toMillis - (System.currentTimeMillis % (24 hours).toMillis) // millis until the next UTC midnight
system.scheduler.schedule(Duration.apply(wait, MILLISECONDS), 24 hours, orderActor, ResetCounterMessage)
Might not be the tidiest of solutions but it does the job.
As schedule supports repeated executions, you could just set the interval parameter to 24 hours, the initial delay to the amount of time between now and midnight, and initiate the code at startup. You seem to be creating a new ActorSystem every time you get an order right now; that does not seem quite right, and you would be rid of that as well.
Also I would suggest using the schedule method which sends messages to actors instead. This way the actor that processes the order could keep count, and if it receives a ResetCounter message it would simply reset the counter. You could simply write:
system.scheduler.schedule(x seconds, 24 hours, orderActor, ResetCounterMessage)
when you start up your actor system initially, and be done with it.
I am trying to retrieve the current time in milliseconds using the Boost library. Below is the code I am using to get the current time in milliseconds.
boost::posix_time::ptime time = boost::posix_time::microsec_clock::local_time();
boost::posix_time::time_duration duration( time.time_of_day() );
std::cout << duration.total_milliseconds() << std::endl;
uint64_t timestampInMilliseconds = duration.total_milliseconds(); // will this work or not?
std::cout << timestampInMilliseconds << std::endl;
But this prints out a short value like 17227676. I am running my code on my Ubuntu machine, and I believe a millisecond timestamp should always be a 13-digit value, shouldn't it?
After computing the timestamp in milliseconds, I need to apply the formula below to it:
int end = (timestampInMilliseconds / (60 * 60 * 1000 * 24)) % 14;
But somehow I am not sure whether the timestampInMilliseconds I am getting is right or not.
First of all, should I be using boost::posix_time or not? I am assuming there might be some better way. I am running the code on my Ubuntu machine.
Update:
This piece of bash script prints out a timestamp in milliseconds, which is 13 digits:
date +%s%N | cut -b1-13
The problem here is that you use time_of_day() which returns (from this reference)
Get the time offset in the day.
So from the value you provided in the question I can deduce that you ran this program at 4:47 am.
Instead, you might want to use e.g. to_tm() to get a struct tm and construct your time in milliseconds from there.
Also note that the %s format to the date command (and the strftime function) is the number of seconds since the epoch, not the number of milliseconds.
If you look at the tm structure, you will see that it has the number of years (since 1900, so subtract 70 here), days into the year, and then hours, minutes and seconds into the day. All these can be used to calculate the time in seconds easily.
And that in seconds is the problem here. If you look at e.g. the POSIX time function you see that
shall return the value of time in seconds since the Epoch
If you want an accurate millisecond resolution you simply can't use the ptime (where the p stands for POSIX). If you want millisecond resolution you either have to use e.g. system functions that returns the time in higher resolutions (like gettimeofday), or you can see e.g. this old SO answer.
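For the question's actual goal (a 13-digit millisecond timestamp), one common Boost pattern is to subtract an epoch ptime rather than using time_of_day(). A minimal sketch, not part of the original answer:

#include <boost/date_time/posix_time/posix_time.hpp>
#include <cstdint>
#include <iostream>

int main()
{
    using namespace boost::posix_time;
    using namespace boost::gregorian;

    ptime now = microsec_clock::universal_time();
    ptime epoch(date(1970, 1, 1));
    // Difference of two ptimes is a time_duration; take it in milliseconds.
    uint64_t ms_since_epoch = (now - epoch).total_milliseconds();
    std::cout << ms_since_epoch << '\n';   // a 13-digit value nowadays
}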
How can you obtain the system clock's current time of day (in milliseconds) in C++? This is a Windows-specific app.
The easiest (and most direct) way is to call GetSystemTimeAsFileTime(), which returns a FILETIME, a struct which stores the 64-bit number of 100-nanosecond intervals since midnight Jan 1, 1601.
At least at the time of Windows NT 3.1, 3.51, and 4.01, the GetSystemTimeAsFileTime() API was the fastest user-mode API able to retrieve the current time. It also offers the advantage (compared with GetSystemTime() -> SystemTimeToFileTime()) of being a single API call, that under normal circumstances cannot fail.
To convert a FILETIME ft_now; to a 64-bit integer named ll_now, use the following:
ll_now = (LONGLONG)ft_now.dwLowDateTime + ((LONGLONG)(ft_now.dwHighDateTime) << 32LL);
You can then divide by the number of 100-nanosecond intervals in a millisecond (10,000 of those) and you have milliseconds since the Win32 epoch.
To convert to the Unix epoch, subtract 116444736000000000LL (a value in the same 100-ns units, so subtract it before the division) to reach Jan 1, 1970.
You mentioned a desire to find the number of milliseconds into the current day. Because the Win32 epoch begins at a midnight, the number of milliseconds passed so far today can be calculated from the FILETIME with a modulus operation. Specifically, because there are 24 hours/day * 60 minutes/hour * 60 seconds/minute * 1000 milliseconds/second = 86,400,000 milliseconds/day, you can take the system time in milliseconds modulo 86400000LL.
For a different application, one might not want to use the modulus. Especially if one is calculating elapsed times, one might have difficulties due to wrap-around at midnight. These difficulties are solvable; the best example I am aware of is Linus Torvalds' line in the Linux kernel which handles counter wrap-around.
Keep in mind that the system time is returned as a UTC time (both in the case of GetSystemTimeAsFileTime() and simply GetSystemTime()). If you require the local time as configured by the Administrator, then you could use GetLocalTime().
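Putting those steps together, a minimal illustrative sketch (the variable names are mine, not from the answer):

#include <windows.h>
#include <cstdio>

int main()
{
    FILETIME ft_now;
    GetSystemTimeAsFileTime(&ft_now);

    // FILETIME -> 64-bit count of 100-ns ticks since 1601-01-01 (UTC)
    LONGLONG ll_now = (LONGLONG)ft_now.dwLowDateTime
                    + ((LONGLONG)ft_now.dwHighDateTime << 32);

    LONGLONG ms_since_1601 = ll_now / 10000;                     // 100-ns ticks -> ms
    LONGLONG ms_since_unix = ms_since_1601 - 11644473600000LL;   // shift to 1970-01-01
    LONGLONG ms_into_day   = ms_since_1601 % 86400000LL;         // UTC midnight rollover

    printf("%lld ms since the Unix epoch, %lld ms into today (UTC)\n",
           ms_since_unix, ms_into_day);
}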
To get the time expressed as UTC, use GetSystemTime in the Win32 API.
SYSTEMTIME st;
GetSystemTime(&st);
SYSTEMTIME is documented as having these relevant members:
WORD wYear;
WORD wMonth;
WORD wDayOfWeek;
WORD wDay;
WORD wHour;
WORD wMinute;
WORD wSecond;
WORD wMilliseconds;
As shf301 helpfully points out below, GetLocalTime (with the same prototype) will yield a time corrected to the user's current timezone.
You have a few good answers here, depending on what you're after. If you're looking for just time of day, my answer is the best approach -- if you need solid dates for arithmetic, consider Alex's. There's a lot of ways to skin the time cat on Windows, and some of them are more accurate than others (and nobody has mentioned QueryPerformanceCounter yet).
A cut-to-the-chase example of Jed's answer above:
const std::string currentDateTime() {
    SYSTEMTIME st;
    GetSystemTime(&st);
    char currentTime[84] = "";
    sprintf(currentTime, "%d/%d/%d %d:%d:%d %d", st.wDay, st.wMonth, st.wYear, st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
    return std::string(currentTime);
}
Use GetSystemTime, first; then, if you need that, you can call SystemTimeToFileTime on the SYSTEMTIME structure that the former fills for you. A FILETIME is a 64-bit count of 100-nanosecs intervals since an epoch, and so more suitable for arithmetic; a SYSTEMTIME is a structure with all the expected fields (year, month, day, hour, etc, down to milliseconds). If you want to know "how many milliseconds have elapsed since midnight", for example, subtracting two FILETIME structures (one for the current time, one obtained by converting the same SYSTEMTIME after zeroing out the appropriate fields) and dividing by the appropriate power of ten is probably the simplest available approach.
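A hedged sketch of that "milliseconds since midnight" computation (illustrative names, not the original answer's code): zero out the time-of-day fields of a SYSTEMTIME copy, convert both to FILETIME, subtract, and divide by 10,000 (100-ns ticks per millisecond).

#include <windows.h>
#include <cstdio>

ULONGLONG toTicks(const SYSTEMTIME& st)
{
    FILETIME ft;
    SystemTimeToFileTime(&st, &ft);
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;                    // 100-ns ticks since 1601-01-01
}

int main()
{
    SYSTEMTIME now, midnight;
    GetSystemTime(&now);

    midnight = now;                       // same date, time-of-day zeroed out
    midnight.wHour = midnight.wMinute = midnight.wSecond = midnight.wMilliseconds = 0;

    ULONGLONG ms_since_midnight = (toTicks(now) - toTicks(midnight)) / 10000;
    printf("%llu ms since UTC midnight\n", ms_since_midnight);
}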
Depending on the needs of your application there are six common options. This Dr Dobbs Journal article will give you all the information (and more) you need on choosing the best one.
In your specific case, from this article:
GetSystemTime() retrieves the current system time and instantiates a SYSTEMTIME structure, which is composed of a number of separate fields including year, month, day, hours, minutes, seconds, and milliseconds.
Here is some code that works on Windows, which I've used in an Open Watcom C project. It should work in C++ as well. It returns seconds (not milliseconds) using _dos_gettime or gettime.
double seconds(void)
{
#ifdef __WATCOMC__
    struct dostime_t t;
    _dos_gettime(&t);
    return ((double)t.hour * 3600 + (double)t.minute * 60 + (double)t.second + (double)t.hsecond * 0.01);
#else
    struct time t;
    gettime(&t);
    return ((double)t.ti_hour * 3600 + (double)t.ti_min * 60 + (double)t.ti_sec + (double)t.ti_hund * 0.01);
#endif
}
While it's not what the question asks, it's worth considering why you want this info.
If all you want to do is keep track of how long something takes to calculate, or the time passed since the last user interaction, consider using the uptime (milliseconds since boot), which is much simpler to get: GetTickCount() or GetTickCount64(). This is all I wanted to do, but I went down the epoch rabbit hole first because that's how you do it under Unix.
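For that use case, a tiny sketch of the uptime approach (illustrative only):

#include <windows.h>
#include <cstdio>

int main()
{
    ULONGLONG start = GetTickCount64();   // milliseconds since boot, 64-bit so no wrap
    Sleep(250);                           // stand-in for the work being timed
    ULONGLONG elapsed_ms = GetTickCount64() - start;
    printf("elapsed: %llu ms\n", elapsed_ms);
}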