Following up from an earlier question:
I am trying to see whether my data is 120 seconds old by looking at its timestamp, so I have this small piece of code in my library project, which uses std::chrono:
uint64_t now = duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
bool is_old = (120 * 1000 < (now - data_holder->getTimestamp()));
// some logging to print out above values
LOG4CXX_WARN(logger, "data logging, now: " << now << ", data holder timestamp: " << data_holder->getTimestamp() << ", is_old: " << is_old << ", difference: " << (now - data_holder->getTimestamp()));
In the above code, data_holder->getTimestamp() returns a uint64_t timestamp in milliseconds.
When I print the now variable, I see 433425679; data_holder->getTimestamp() prints as 1437943796841; and the difference between the two comes out as 18446742636199180454, as shown in the logs below:
2015-07-26 13:49:56,850 WARN 0x7fd050bc9700 simple_process - data logging, now: 433425679 , data holder timestamp: 1437943796841 , is_old: 1 , difference: 18446742636199180454
If I convert the data holder timestamp 1437943796841 with an epoch converter, I see this:
Your time zone: 7/26/2015, 1:49:56 PM
which is exactly the same as the timestamp shown in the log line (2015-07-26 13:49:56,850 WARN), so the data does not appear to be 120 seconds old. Why, then, am I seeing an is_old value of 1?
It looks like the data_holder->getTimestamp() value comes from the code below in our code base, and we then compare against it for the 120-second staleness check.
// is this the problem?
struct timeval val;
gettimeofday(&val, NULL);
uint64_t time_ms = uint64_t(val.tv_sec) * 1000 + val.tv_usec / 1000;
After carefully reading about the various clock implementations in C++, it looks like we should use the same clock on both sides of the comparison.
Is the code above that computes the data_holder->getTimestamp() value the problem? Since it does not use steady_clock, the epoch is different, and that is why I see this issue?
My question is: what code should I use to fix this? Should I use steady_clock for the data_holder->getTimestamp() code as well? If so, what is the right way?
Also, the same code works fine on an Ubuntu 12 box but not on Ubuntu 14. All libraries are statically linked. The Ubuntu 12 build is compiled on Ubuntu 12 with GCC 4.7.3, and the Ubuntu 14 build is compiled on Ubuntu 14 with GCC 4.8.2.
Use the same clock for both. If your timestamps need to maintain meaning across runs of your application, you must use system_clock, not steady_clock. If your timestamps only have meaning within a single run you can use steady_clock.
steady_clock is like a "stopwatch". You can time stuff with it, but you can't get the current time of day with it.
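For example, a minimal sketch of the "stopwatch" use (do_work is a hypothetical placeholder for whatever is being timed):
using namespace std::chrono;

auto start = steady_clock::now();
do_work();  // hypothetical workload being timed
auto elapsed = duration_cast<milliseconds>(steady_clock::now() - start);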
DataHolder::DataHolder()
    : timestamp_{system_clock::now()}  // timestamp_ is a system_clock::time_point member
{}

system_clock::time_point
DataHolder::getTimestamp()
{
    return timestamp_;
}
bool is_old = minutes{2} < system_clock::now() - data_holder->getTimestamp();
In C++14 you can shorten this to:
bool is_old = 2min < system_clock::now() - data_holder->getTimestamp();
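(A note added for completeness: the 2min literal requires the C++14 chrono literals to be in scope, e.g.:)
using namespace std::chrono_literals;  // or: using namespace std::chrono;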
Do use <chrono>.
Don't use count() or time_since_epoch() (except for debugging purposes).
Don't use conversion factors such as 1000 or 120.
Violation of the guidelines above will turn compile-time errors into run-time errors. Compile-time errors are your friend. <chrono> catches many errors at compile-time. Once you escape the type-safety of <chrono> (e.g. by using count()), you are programming in the assembly language equivalent of time-keeping. And the space/time overhead of <chrono>'s type-safety system is zero.
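For instance, here is a sketch of what the type safety buys you, assuming getTimestamp() has been changed to return a system_clock::time_point as above:
using namespace std::chrono;

auto age = system_clock::now() - data_holder->getTimestamp();
bool is_old = minutes{2} < age;  // units are converted and checked at compile time
// bool bad = 120000 < age;      // does not compile: a raw number has no units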
You should definitely use the same time function for both.
I would recommend changing either the way the getTimestamp() value is created (e.g. by using chrono::system_clock) or the way you compare the timestamp.
The clean way would be to change it like this:
struct timeval val;
gettimeofday(&val, NULL);
uint64_t now = uint64_t(val.tv_sec) * 1000 + val.tv_usec / 1000;
bool is_old = (120 * 1000 < (now - data_holder->getTimestamp()));
Or the other way around:
1. Change the way the getTimestamp() value is created:
long long time_ms = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
2. Adjust the comparison:
long long now = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
bool is_old = (120 * 1000 < (now - data_holder->getTimestamp()));
Related
How can I write a C++ function which takes a long long value representing a VMS timestamp and returns the corresponding time_t value, assuming the conversion yields a valid time_t? (I'll be parsing binary data sent over the network on a commodity CentOS server, if that makes any difference.)
I've had a look at a document titled "Why Is Wednesday November 17, 1858 The Base Time For VAX/VMS", but I don't think I can write a correct implementation without testing against actual data, which I don't have at hand right now, unfortunately.
If I'm not mistaken, it should be simple arithmetic of this form:
time_t vmsTimeToTimeT(long long v) {
    return v/10'000'000 - OFFSET;
}
Could somebody tell me what value to put into OFFSET?
Things I'm concerned about:
I don't want to be bitten by my local timezone
I don't want to be bitten by the 0.5 thing (afternoon vs midnight) in the definition of modified Julian date (though it should be helping me here; modified Julian epoch and Unix Epoch should differ by a multiple of 24 hours thanks to the definition)
I tried to compute it myself with help from Boost.DateTime, only to get a mysterious negative value...
#include <boost/date_time/gregorian/gregorian.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main() {
    boost::posix_time::ptime x(
        boost::gregorian::date(1858, boost::gregorian::Nov, 17),
        boost::posix_time::time_duration(0, 0, 0) );
    boost::posix_time::ptime y(
        boost::gregorian::date(1970, boost::gregorian::Jan, 1),
        boost::posix_time::time_duration(0, 0, 0) );
    std::cout << (y - x).total_seconds() << std::endl;
    std::cout << (y > x ? "y is after x" : "y is before x") << std::endl;
}
-788250496
y is after x
I used Boost 1.60 for it:
The current implementation supports dates in the range 1400-Jan-01 to 9999-Dec-31.
Update
Crap, sizeof(total_seconds()) was 4, despite what the documentation says.
So I got 3506716800 from
auto diff = y - x;
std::cout << diff.ticks() / diff.ticks_per_second() << std::endl;
which doesn't look too wrong but... who can assure this is really correct?
Wow, you guys make it all appear to be so difficult with libraries and all.
So you read up on 17-Nov-1858 and found out that VMS stores the time as 100 ns 'clunks' since that date. Right?
Unix times are seconds (or microseconds) since 1-Jan-1970. Right?
So all you need to do is subtract the OpenVMS time value 'offset' for 1-Jan-1970 from the reported OpenVMS times and divide by 10,000,000 (for seconds) or 10 (for microseconds).
You only need to find that value once using a trivial OpenVMS program.
Below I did not even use a dedicated program, just used the OpenVMS interactive debugger running a random executable program:
$ run tmp/debug
DBG> set rad hex
DBG> dep/date 10000 = "01-JAN-1970 00:00:00" ! Local time
DBG> examin/quad 10000
TMP\main: 007C95674C3DA5C0
DBG> examin/quad/dec 10000
TMP\main: 35067168005400000
So there is your offset, both in HEX and DECIMAL, to use as you see fit.
In the simplest form you pre-divide the incoming OpenVMS time by 10,000,000 and subtract 3506716800 (decimal) to get Epoch seconds.
Be sure to keep the math, including the subtraction, in long long ints.
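With that, the conversion function from the question could look like this (a sketch, untested against real VMS data):
time_t vmsTimeToTimeT(long long v) {
    // v counts 100 ns 'clunks' since 17-Nov-1858: divide down to seconds,
    // then shift to the Unix epoch of 1-Jan-1970
    return v / 10'000'000 - 3506716800LL;
}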
hth,
Hein.
According to this:
https://www.timeanddate.com/date/durationresult.html?d1=17&m1=11&y1=1858&d2=1&m2=jan&y2=1970
you'd want 40587 days, times 86400 seconds, makes 3506716800 as the offset in your calculation.
Using this free open-source library which extends <chrono> to calendrical computations, I can confirm your figure of the offset in seconds:
#include "chrono_io.h"
#include "date.h"
#include <iostream>
int
main()
{
    using namespace date;
    using namespace std::chrono;
    using namespace std;
    seconds offset = sys_days{jan/1/1970} - sys_days{nov/17/1858};
    cout << offset << '\n';
}
Output:
3506716800s
This is the same scenario as the question above: now comes from steady_clock (433425679), data_holder->getTimestamp() comes from gettimeofday (1437943796841), both values are uint64_t, and the logged difference is 18446742636199180454 with is_old printing as 1.
Now if I run this simple main program, is_old comes back as 0 instead of 1:
#include <iostream>

int main()
{
    bool is_old = (120 * 1000 < (433425679 - 1437943796841));
    std::cout << "is_old: " << is_old << std::endl;
}
What is going on? Why does it return 0 when run from this main method, compared to 1 when I log it from my project? Does that mean the way I run my main code differs from the way my library was built with the cmake command? I build my project executable with cmake, which generates a tar file, and then I run it after untarring. I am running on an Ubuntu 14.04 box.
Earlier I was thinking I needed to use system_clock instead of steady_clock, but after running it from the main method, it looks like something else is going on.
Unsigned integer arithmetic is defined as arithmetic modulo 2^numBits.
That means when you compute now - data_holder->getTimestamp() with your unsigned variables and getTimestamp() is greater than now, as in your example, the operation wraps: you do not get a negative value but a (usually pretty big) one of the same unsigned integer type as the inputs.
If you use literals instead, their types will be signed and thus the result will be negative, as expected.
Now whether it makes sense in the first place to subtract a timestamp from some other source from the value returned by steady_clock::now is a different question. It most likely does not. You should instead compare the current time with a creation time obtained from the same source (e.g. both from std::steady_clock).
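A minimal sketch reproducing the exact difference from the question's log:
#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t now = 433425679;      // steady_clock-based value
    std::uint64_t ts  = 1437943796841;  // gettimeofday-based value
    // wraps modulo 2^64 because both operands are unsigned:
    std::cout << now - ts << '\n';      // prints 18446742636199180454
    // with signed 64-bit values the result is negative, as expected:
    std::cout << std::int64_t(now) - std::int64_t(ts) << '\n';  // prints -1437510371162
}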
I have a 32 bit Linux system in which I have to record data that is timestamped with a UINT32 second offset from an epoch of 1901-01-01 00:00:00.
Calculating the timestamp is fine for me, as I can use the 64-bit ticks() counter and ticks_per_second() functions to generate the seconds since the epoch as follows (I only require second-level resolution):
const ptime ptime_origin(time_from_string("1901-01-01 00:00:00"));
time_duration my_utc = microsec_clock::universal_time() - ptime_origin;
boost::int64_t tick_per_sec = my_utc.ticks_per_second();
boost::int64_t tick_count = my_utc.ticks();
boost::int64_t sec_since_epoch = tick_count/tick_per_sec;
This works for me since I know that as an unsigned integer, the seconds count will not exceed the maximum UINT32 value (well not for many years anyway).
The problem I have is that my application can receive a modbus message containing a UINT32 value for which I have to set the hardware and system clock with an ioctl call using RTC_SET_TIME. This UINT32 is again the offset in seconds since my epoch 1901-01-01 00:00:00.
My problem now is that I have no way to create a ptime object from 64-bit integers: the ticks part of the time_duration objects is private, and I am restricted to using long, which on my 32-bit system is just a 4-byte signed integer, not large enough to store the seconds offset from my epoch.
I have no control over the value of the epoch and so I am really stumped as to how I can create my required boost::posix_time::ptime object from the data I have.
I can probably obtain a dirty solution by calculating hard second counts to particular time intervals and using an additional epoch to make a bridge to allow this but I was wondering if there is something in the boost code that will allow me to solve the problem entirely using the boost datetime library.
I have read all the documentation I can find but I cannot see any obvious way to do this.
EDIT: I found this related question Convert int64_t to time_duration but the accepted answer there does NOT work for my epoch
Although boost::posix_time::seconds cannot be used if the seconds represent a number greater than 32 bits (as of Oct 2014), it turns out that boost::posix_time::milliseconds can easily be used (without workarounds), as follows:
inline std::string convertMsSinceEpochToString(std::int64_t const ms)
{
    boost::posix_time::ptime time_epoch(boost::gregorian::date(1970, 1, 1));
    boost::posix_time::ptime t = time_epoch + boost::posix_time::milliseconds(ms);
    return boost::posix_time::to_simple_string(t);
}
So, just convert your 64-bit seconds to (64-bit) milliseconds, and you're good to go!
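For the question's 1901 epoch, a hedged sketch of the same trick (the helper name is illustrative, not from the original post):
#include <boost/date_time/posix_time/posix_time.hpp>
#include <cstdint>

// hypothetical helper: widen the UINT32 seconds offset to 64-bit milliseconds
// so that the milliseconds() constructor does the 64-bit arithmetic
inline boost::posix_time::ptime ptimeFrom1901Seconds(std::uint32_t sec_offset)
{
    boost::posix_time::ptime epoch1901(boost::gregorian::date(1901, 1, 1));
    return epoch1901 + boost::posix_time::milliseconds(std::int64_t(sec_offset) * 1000);
}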
Note: Be /very/ aware of compiler-dependent behaviour with the capacity of built-in integral types:
uint64_t offset = 113ul*365ul*24ul*60ul*60ul*1000ul; // 113 years give or take some leap seconds/days etc.?
would work on GCC or Clang, but it would simply overflow the calculations on MSVC 2013. You'd need to explicitly coerce the calculation to 64 bits:
uint64_t offset = uint64_t(113ul)*365*24*60*60*1000;
You could apply time_durations in the maximum allowable increments (which is std::numeric_limits<long>::max()) since the total_seconds field is limited to long (signed).
Note: I worded it as int32_t below so that it will still work correctly if compiled on a 64-bit platform.
Here's a small demonstration:
#include "boost/date_time.hpp"
#include <iostream>
using namespace boost::gregorian;
using namespace boost::posix_time;
int main()
{
uint64_t offset = 113ul*365ul*24ul*60ul*60ul; // 113 years give or take some leap seconds/days etc.?
static const ptime time_t_epoch(date(1901,1,1));
static const uint32_t max_long = std::numeric_limits<int32_t>::max();
std::cout << "epoch: " << time_t_epoch << "\n";
ptime accum = time_t_epoch;
while (offset > max_long)
{
accum += seconds(max_long);
offset -= max_long;
std::cout << "accumulating: " << accum << "\n";
}
accum += seconds(offset);
std::cout << "final: " << accum << "\n";
}
Prints:
epoch: 1901-Jan-01 00:00:00
accumulating: 1969-Jan-19 03:14:07
final: 2013-Dec-04 00:00:00
See it Live on Coliru
It's unbelievable how difficult it is to get the number of milliseconds since local midnight in C++. I'm looking for a way to do this as efficiently as possible while still maintaining millisecond precision.
The solutions I have so far either require a lot of code and function calls, making the implementation slow, or require me to change the code twice a year to account for daylight saving time.
The computer this will be running on is synced using ntp and should have direct access to the local time adjusted for DST. Can somebody with expertise on this share some solutions?
My platform is CentOS 5, g++ 4.1.2, Boost 1.45; the solution doesn't need to be portable and can be platform specific. It just needs to be quick and avoid the twice-a-year code changes.
New answer for old question.
Rationale for new answer: We have better tools now.
I'm assuming the desired result is "actual" milliseconds since the local midnight (getting the correct answer when there has been a UTC offset change since midnight).
A modern answer based on <chrono> and using this free, open-source library is very easy. This library has been ported to VS-2013, VS-2015, clang/libc++, macOS, and linux/gcc.
In order to make the code testable, I'm going to enable an API to get the time since midnight (in milliseconds) from any std::chrono::system_clock::time_point in any IANA time zone.
std::chrono::milliseconds
since_local_midnight(std::chrono::system_clock::time_point t,
const date::time_zone* zone);
And then to get the current time since midnight in the local time zone is easy to write on top of this testable primitive:
inline
std::chrono::milliseconds
since_local_midnight()
{
    return since_local_midnight(std::chrono::system_clock::now(),
                                date::current_zone());
}
Writing the meat of the matter is relatively straight-forward:
std::chrono::milliseconds
since_local_midnight(std::chrono::system_clock::time_point t,
                     const date::time_zone* zone)
{
    using namespace date;
    using namespace std::chrono;
    auto zt = make_zoned(zone, t);
    zt = floor<days>(zt.get_local_time());
    return floor<milliseconds>(t - zt.get_sys_time());
}
The first thing to do is create a zoned_time which really does nothing at all but pair zone and t. This pairing is mainly just to make the syntax nicer. It actually doesn't do any computation.
The next step is to get the local time associated with t. That is what zt.get_local_time() does. This will have whatever precision t has, unless t is coarser than seconds, in which case the local time will have a precision of seconds.
The call to floor<days> truncates the local time to a precision of days. This effectively creates a local_time equal to the local midnight. By assigning this local_time back to zt, we don't change the time zone of zt at all, but we change the local_time of zt to midnight (and thus change its sys_time as well).
We can get the corresponding sys_time out of zt with zt.get_sys_time(). This is the UTC time which corresponds to the local midnight. It is then an easy process to subtract this from the input t and truncate the results to the desired precision.
If the local midnight is non-existent, or ambiguous (there are two of them), this code will throw an exception derived from std::exception with a very informative what().
The current time since the local midnight can be printed out with simply:
std::cout << since_local_midnight().count() << "ms\n";
To ensure that our function is working, it is worthwhile to output a few example dates. This is most easily done by specifying a time zone (I'll use "America/New_York"), and some local date/times where I know the right answer. To facilitate nice syntax in the test, another since_local_midnight helps:
inline
std::chrono::milliseconds
since_local_midnight(const date::zoned_seconds& zt)
{
    return since_local_midnight(zt.get_sys_time(), zt.get_time_zone());
}
This simply extracts the system_clock::time_point and time zone from a zoned_time (with seconds precision), and forwards it on to our implementation.
auto zt = make_zoned(locate_zone("America/New_York"), local_days{jan/15/2016} + 3h);
std::cout << zt << " is "
          << since_local_midnight(zt).count() << "ms after midnight\n";
This is 3am in the middle of the Winter which outputs:
2016-01-15 03:00:00 EST is 10800000ms after midnight
and is correct (10800000ms == 3h).
I can run the test again just by assigning a new local time to zt. The following is 3am just after the "spring forward" daylight saving transition (2nd Sunday in March):
zt = local_days{sun[2]/mar/2016} + 3h;
std::cout << zt << " is "
          << since_local_midnight(zt).count() << "ms after midnight\n";
This outputs:
2016-03-13 03:00:00 EDT is 7200000ms after midnight
Because the local time from 2am to 3am was skipped, this correctly outputs 2 hours since midnight.
An example from the middle of Summer gets us back to 3 hours after midnight:
zt = local_days{jul/15/2016} + 3h;
std::cout << zt << " is "
          << since_local_midnight(zt).count() << "ms after midnight\n";
2016-07-15 03:00:00 EDT is 10800000ms after midnight
And finally an example just after the Fall transition from daylight saving back to standard gives us 4 hours:
zt = local_days{sun[1]/nov/2016} + 3h;
std::cout << zt << " is "
          << since_local_midnight(zt).count() << "ms after midnight\n";
2016-11-06 03:00:00 EST is 14400000ms after midnight
If you want, you can avoid an exception in the case that midnight is non-existent or ambiguous. You have to decide beforehand in the ambiguous case: Do you want to measure from the first midnight or the second?
Here is how you would measure from the first:
std::chrono::milliseconds
since_local_midnight(std::chrono::system_clock::time_point t,
                     const date::time_zone* zone)
{
    using namespace date;
    using namespace std::chrono;
    auto zt = make_zoned(zone, t);
    zt = make_zoned(zt.get_time_zone(), floor<days>(zt.get_local_time()),
                    choose::earliest);
    return floor<milliseconds>(t - zt.get_sys_time());
}
If you want to measure from the second midnight, use choose::latest instead. If midnight is non-existent, you can use either choose, and it will measure from the single UTC time point that borders the local time gap that midnight is in. This can all be very confusing, and that's why the default behavior is to just throw an exception with a very informative what():
zt = make_zoned(locate_zone("America/Asuncion"), local_days{sun[1]/oct/2016} + 3h);
std::cout << zt << " is "
          << since_local_midnight(zt).count() << "ms after midnight\n";
what():
2016-10-02 00:00:00.000000 is in a gap between
2016-10-02 00:00:00 PYT and
2016-10-02 01:00:00 PYST which are both equivalent to
2016-10-02 04:00:00 UTC
If you use the choose::earliest/latest formula, instead of an exception with the above what(), you get:
2016-10-02 03:00:00 PYST is 7200000ms after midnight
If you want to do something really tricky like use choose for non-existent midnights, but throw an exception for ambiguous midnights, that too is possible:
auto zt = make_zoned(zone, t);
try
{
    zt = floor<days>(zt.get_local_time());
}
catch (const date::nonexistent_local_time&)
{
    zt = make_zoned(zt.get_time_zone(), floor<days>(zt.get_local_time()),
                    choose::latest);
}
return floor<milliseconds>(t - zt.get_sys_time());
Because hitting such a condition is truly rare (exceptional), the use of try/catch is justified. However if you want to do it without throwing at all, there exists a low-level API within this library to achieve that.
Finally note that this long winded answer is really about 3 lines of code, and everything else is about testing, and taking care of rare exceptional cases.
It really depends on why you need "milliseconds since midnight" and what you plan to use it for.
Having said that, you need to take into account the fact that 3am doesn't really mean 3 hours since midnight, when DST is involved. If you really need "milliseconds since midnight" for some reason, you can get one Epoch time at midnight, another at 3am, and subtract the two.
But again, the notion of "midnight" may not be that stable in some cases; if a region's rule is to fall back from 1am to midnight when DST ends, you have two midnights within a day.
So I'm really doubtful of your dependence on "midnight". Typically, those broken-down times are for display and human understanding only, and all internal timekeeping is done with Epoch times.
If you're on Linux, gettimeofday gives the number of seconds/microseconds since the Epoch, which may help. But this really doesn't have anything to do with DST, since DST matters only with broken-down times (i.e. year, month, day, hour, minute, second).
To get the broken-down time, use gmtime or localtime with the "seconds" part of the result of gettimeofday:
struct timeval tv;
gettimeofday(&tv, 0);
struct tm *t = localtime(&tv.tv_sec); // t points to a statically allocated struct
localtime gives the broken-down time in your local timezone, but it may be susceptible to DST. gmtime gives the broken-down time in UTC, which is immune to DST.
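A minimal sketch of the "subtract two Epoch times" idea described above (an addition to this answer; it leans on mktime and does not handle the rare ambiguous-midnight cases):
#include <sys/time.h>
#include <time.h>

long ms_since_local_midnight()
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    struct tm bdt = *localtime(&tv.tv_sec);     // local broken-down time
    bdt.tm_hour = bdt.tm_min = bdt.tm_sec = 0;  // rewind to local midnight
    bdt.tm_isdst = -1;                          // let mktime determine DST
    time_t midnight = mktime(&bdt);             // Epoch time of local midnight
    return (tv.tv_sec - midnight) * 1000L + tv.tv_usec / 1000;
}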
None of the answers provided really does what I need. I've come up with something standalone that I think should work. If anybody spots any errors or can think of a faster method, please let me know. The present code takes 15 microseconds to run. I challenge SO to make something quicker (and I really hope SO succeeds =P)
#include <ctime>
#include <time.h>

inline int ms_since_midnight()
{
    // get high-precision time
    timespec now;
    clock_gettime(CLOCK_REALTIME, &now);

    // get low-precision local time
    time_t now_local = time(NULL);
    struct tm* lt = localtime(&now_local);

    // compute time shift utc->est
    int sec_local = lt->tm_hour*3600 + lt->tm_min*60 + lt->tm_sec;
    int sec_utc = static_cast<long>(now.tv_sec) % 86400;
    int diff_sec; // account for the fact utc might be 1 day ahead
    if (sec_local < sec_utc) diff_sec = sec_utc - sec_local;
    else diff_sec = sec_utc + 86400 - sec_local;
    int diff_hour = (int)((double)diff_sec/3600.0 + 0.5); // round to nearest hour

    // adjust utc to est, round ns to ms, add
    return (sec_utc - (diff_hour*3600))*1000 + (int)((static_cast<double>(now.tv_nsec)/1000000.0) + 0.5);
}
You can run localtime_r, and then mktime after adjusting the result of localtime_r, to compute the value of "midnight" relative to the Epoch.
Edit: Pass now into the routine to avoid an unnecessary call to time.
time_t global_midnight;
bool checked_2am;

void update_global_midnight (time_t now, bool dst_check) {
    struct tm tmv;
    localtime_r(&now, &tmv);
    tmv.tm_sec = tmv.tm_min = tmv.tm_hour = 0;
    global_midnight = mktime(&tmv);
    checked_2am = dst_check || (now >= (global_midnight + 2*3600));
}
Assume global_midnight is initially 0. Then you would adjust its value at 2am, and again the next day, so that it stays in sync with DST. When you call clock_gettime, you can compute the difference against global_midnight.
Edit: Since the OP wants to benchmark the routine, tweaking code for compilers that assume true to be the fast path, and round to nearest msec.
unsigned msecs_since_midnight () {
    struct timespec tsv;
    clock_gettime(CLOCK_REALTIME, &tsv);
    bool within_a_day = (tsv.tv_sec < (global_midnight + 24*3600));
    if (within_a_day)
        if (checked_2am || (tsv.tv_sec < (global_midnight + 2*3600)))
            return ((tsv.tv_sec - global_midnight)*1000
                    + (tsv.tv_nsec + 500000)/1000000);
    update_global_midnight(tsv.tv_sec, within_a_day);
    return ((tsv.tv_sec - global_midnight)*1000
            + (tsv.tv_nsec + 500000)/1000000);
}
I have referred to the post [here] and made a change so that the function below returns the milliseconds since midnight in GMT.
int GetMsSinceMidnightGmt(std::chrono::system_clock::time_point tpNow) {
    time_t tnow = std::chrono::system_clock::to_time_t(tpNow);
    tm* tmDate = std::localtime(&tnow);
    int gmtoff = tmDate->tm_gmtoff;
    std::chrono::duration<int> durTimezone(gmtoff); // 28800 for HKT

    // because mktime assumes the local timezone, shift the time to GMT first, then find midnight
    time_t tmid = std::chrono::system_clock::to_time_t(tpNow - durTimezone);
    tm* tmMid = std::localtime(&tmid);
    tmMid->tm_hour = 0;
    tmMid->tm_min = 0;
    tmMid->tm_sec = 0;

    auto tpMid = std::chrono::system_clock::from_time_t(std::mktime(tmMid));
    auto durSince = tpNow - durTimezone - tpMid;
    auto durMs = std::chrono::duration_cast<std::chrono::milliseconds>(durSince);
    return static_cast<int>(durMs.count());
}
If you want local time, it is much easier.
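A hedged sketch of that simpler local-time variant (an addition; the function name is illustrative and the same mktime caveats apply):
int GetMsSinceMidnightLocal(std::chrono::system_clock::time_point tpNow) {
    time_t tnow = std::chrono::system_clock::to_time_t(tpNow);
    tm* tmMid = std::localtime(&tnow);  // already local time: no timezone shift needed
    tmMid->tm_hour = 0;
    tmMid->tm_min = 0;
    tmMid->tm_sec = 0;
    auto tpMid = std::chrono::system_clock::from_time_t(std::mktime(tmMid));
    auto durMs = std::chrono::duration_cast<std::chrono::milliseconds>(tpNow - tpMid);
    return static_cast<int>(durMs.count());
}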
I have found a function to get milliseconds since the Mac was started:
U32 Platform::getRealMilliseconds()
{
    // Duration is an S32 value:
    // if negative, it is in microseconds;
    // if positive, it is in milliseconds.
    Duration durTime = AbsoluteToDuration(UpTime());
    U32 ret;
    if( durTime < 0 )
        ret = durTime / -1000;
    else
        ret = durTime;
    return ret;
}
The problem is that after ~20 days AbsoluteToDuration returns INT_MAX all the time until the Mac is rebooted.
I have tried the method below; it worked, but it looks like gettimeofday takes more time and slows the game down a bit:
timeval tim;
gettimeofday(&tim, NULL);
U32 ret = ((tim.tv_sec) * 1000 + tim.tv_usec/1000.0) + 0.5;
Is there a better way to get number of milliseconds elapsed since some epoch (preferably since the app started)?
Thanks!
Your real problem is that you are trying to fit an uptime-in-milliseconds value into a 32-bit integer. If you do that your value will always wrap back to zero (or saturate) in 49 days or less, no matter how you obtain the value.
One possible solution would be to track time values with a 64-bit integer instead; that way the day of reckoning gets postponed for a few hundred years and so you don't have to worry about the problem. Here's a MacOS/X implementation of that:
uint64_t GetTimeInMillisecondsSinceBoot()
{
    return UnsignedWideToUInt64(AbsoluteToNanoseconds(UpTime()))/1000000;
}
... or if you don't want to return a 64-bit time value, the next-best thing would be to record the current time-in-milliseconds value when your program starts, and then always subtract that value from the values you return. That way things won't break until your own program has been running for at least 49 days, which I suppose is unlikely for a game.
uint32_t GetTimeInMillisecondsSinceProgramStart()
{
    static uint64_t _firstTimeMillis = GetTimeInMillisecondsSinceBoot();
    uint64_t nowMillis = GetTimeInMillisecondsSinceBoot();
    return (uint32_t) (nowMillis - _firstTimeMillis);
}
My preferred method is mach_absolute_time - see this tech note - I use the second method, i.e. mach_absolute_time to get time stamps and mach_timebase_info to get the constants needed to convert the difference between time stamps into an actual time value (with nanosecond resolution).
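A minimal sketch of that approach (macOS-only; for brevity it ignores potential overflow of the numer multiplication on very long uptimes):
#include <mach/mach_time.h>
#include <stdint.h>

uint64_t GetMillisecondsSinceBoot()  // illustrative name, not from the tech note
{
    static mach_timebase_info_data_t tb = {0, 0};
    if (tb.denom == 0)
        mach_timebase_info(&tb);  // fetch the numer/denom conversion constants once
    uint64_t nanos = mach_absolute_time() * tb.numer / tb.denom;
    return nanos / 1000000;       // nanoseconds -> milliseconds
}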