Converting steady_clock::time_point to time_t - c++

I'm using the steady_clock for saving the timestamp of some messages. For debugging purposes it is useful to have the calendar time (or something similar).
For other clocks there's the static function to_time_t, but on GCC (MinGW 4.8.0) this function is not present.
Right now I print something like:
Timestamp: 26735259098242
For timestamps I need a steady_clock, so I cannot use system_clock or others.
Edit
The value printed above comes from time_since_epoch().count().

Assuming you need the steady behavior for internal calculations, and not for display, here's a function you can use to convert to time_t for display.
#include <chrono>
#include <ctime>

using std::chrono::steady_clock;
using std::chrono::system_clock;
using std::chrono::duration_cast;

time_t steady_clock_to_time_t( steady_clock::time_point t )
{
    return system_clock::to_time_t(system_clock::now()
        + duration_cast<system_clock::duration>(t - steady_clock::now()));
}
If you need steady behavior for logging, you'd want to get one (system_clock::now(), steady_clock::now()) pair at startup and use that forever after.
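A minimal sketch of that idea (the names sys_start and steady_start are illustrative, not from the original answer):

#include <chrono>
#include <ctime>

using namespace std::chrono;

// Captured once at startup; every later conversion uses this fixed pair,
// so the steady_clock -> calendar-time mapping never shifts afterwards.
static const auto sys_start    = system_clock::now();
static const auto steady_start = steady_clock::now();

time_t steady_clock_to_time_t(steady_clock::time_point t)
{
    return system_clock::to_time_t(
        sys_start + duration_cast<system_clock::duration>(t - steady_start));
}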

Related

Parsing time from C++ DLL to Matlab

I'm writing a C++ DLL which will be accessed using MATLAB's loadlibrary. I need a specific function to return the current time with millisecond precision, and to parse it correctly in MATLAB. Specifically, I'll need to extract the year, month, day, hours, minutes, seconds and milliseconds.
I currently have something like
long long time_since_epoch()
{
    return std::chrono::system_clock::now().time_since_epoch().count();
}
which MATLAB calls using t = calllib('myDLL', 'time_since_epoch');.
Then I tried parsing it using dt = datetime(t, 'convertfrom', 'epochtime');, which didn't work.
But when I compared it with the time given by posixtime(datetime), I found that I get a correct answer by using dt = datetime(t / 10000000, 'convertfrom', 'epochtime');, which is very odd.
I don't fully understand what's going on here, and I somehow lost the milliseconds in the process.
Your system_clock is not counting milliseconds; it is counting something finer than a millisecond. Exactly what isn't important, because you can specifically ask for the count in milliseconds:
long long time_since_epoch()
{
    using namespace std::chrono;
    return time_point_cast<milliseconds>(system_clock::now()).time_since_epoch().count();
}
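As a side note, you can check what your system_clock actually counts by printing its period; the division by 10,000,000 that worked in MATLAB suggests 100 ns ticks, which is what MSVC uses. A quick sketch (not part of the original answer):

#include <chrono>
#include <iostream>

int main()
{
    using period = std::chrono::system_clock::period;
    // Prints the tick length as a fraction of a second,
    // e.g. 1/10000000 (100 ns) on MSVC, 1/1000000000 on many Linux systems.
    std::cout << "system_clock tick = " << period::num << '/' << period::den << " s\n";
}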

How can I convert a UTC timestamp to local time, seconds past the hour?

I have a large data set with timestamps that are in UTC time in milliseconds. I'm synchronizing this data set with another whose timestamps are microseconds past the hour, so I need to convert the first to local time, in seconds past the hour.
Most of the similar questions I've read on this subject get UTC time from the time() function which gets the current time.
I've tried implementing the following, which was pulled from C++ Reference.
The timestamp I'm trying to convert is a double, but I'm not sure how to actually use this value.
An example ts from my data set: 1512695257869
#include <stdio.h>
#include <time.h>

#define MST (-7)   /* UTC offset of Mountain Standard Time, as in the C++ Reference example */

int main ()
{
    double my_utc_ts; //value acquired from the data set
    time_t rawtime;
    struct tm * ptm;

    time ( &rawtime ); //getting current time
    //rawtime = my_utc_ts; //this is what I tried and is wrong, and results in a nullptr

    ptm = gmtime ( &rawtime );

    puts ("Current time around the World:");
    printf ("Phoenix, AZ (U.S.) : %2d:%02d\n", (ptm->tm_hour+MST)%24, ptm->tm_min);

    return 0;
}
After I'm able to convert it to a usable gmtime object or whatever, I need to get seconds past the hour... I think I'll be able to figure this part out if I can get the UTC timestamps to successfully convert, but I haven't thought this far ahead.
Guidance would be much appreciated. Thanks in advance.
After I'm able to convert it to a usable gmtime object or whatever, I need to get seconds past the hour...
Here is how you can convert a double representing milliseconds since 1970-01-01 00:00:00 UTC to seconds past the local hour, using Howard Hinnant's free, open-source C++11/14/17 time zone library, which is based on <chrono>:
#include "date/tz.h"
#include <iostream>

int
main()
{
    using namespace std::chrono;
    using namespace date;
    double my_utc_ts = 1512695257869;
    using ms = duration<double, std::milli>;
    sys_time<milliseconds> utc_ms{round<milliseconds>(ms{my_utc_ts})};
    auto loc_s = make_zoned(current_zone(), floor<seconds>(utc_ms)).get_local_time();
    auto sec_past_hour = loc_s - floor<hours>(loc_s);
    std::cout << utc_ms << " UTC\n";
    std::cout << sec_past_hour << " past the local hour\n";
}
This outputs for me:
2017-12-08 01:07:37.869 UTC
457s past the local hour
If your local time zone is not an integral number of hours offset from UTC, the second line of output will be different for you.
Explanation of code:
We start with your input my_utc_ts.
The next line creates a custom std::chrono::duration that has double as the representation and milliseconds as the precision. This type-alias is named ms.
The next line constructs utc_ms which is a std::chrono::time_point<system_clock, milliseconds> holding 1512695257869, and represents the time point 2017-12-08 01:07:37.869 UTC. So far, no actual computation has been performed. Simply the double 1512695257869 has been cast into a type which represents an integral number of milliseconds since 1970-01-01 00:00:00 UTC.
This line starts the computation:
auto loc_s = make_zoned(current_zone(), floor<seconds>(utc_ms)).get_local_time();
This creates a {time_zone, system_time} pair capable of mapping between UTC and a local time, using time_zone as that map. It uses current_zone() to find the computer's current time zone, and truncates the time point utc_ms from a precision of milliseconds to a precision of seconds. Finally the trailing .get_local_time() extracts the local time from this mapping, with a precision of seconds, and mapped into the current time zone. That is, loc_s is a count of seconds since 1970-01-01 00:00:00 UTC, offset by your-local-time-zone's UTC offset that was in effect at 2017-12-08 01:07:37 UTC.
Now if you truncate loc_s to a precision of hours, and subtract that truncated time point from loc_s, you'll get the seconds past the local hour:
auto sec_past_hour = loc_s - floor<hours>(loc_s);
The entire computation is just the two lines of code above. The next two lines simply stream out utc_ms and sec_past_hour.
Assuming that your local time zone was offset from UTC by an integral number of hours at 2017-12-08 01:07:37 UTC, you can double-check that:
457 == 7*60 + 37
Indeed, if you can assume that your local time zone is always offset from UTC by an integral number of hours, the above program can be simplified by not mapping into local time at all:
sys_time<milliseconds> utc_ms{round<milliseconds>(ms{my_utc_ts})};
auto utc_s = floor<seconds>(utc_ms);
auto sec_past_hour = utc_s - floor<hours>(utc_s);
The results will be identical.
(Warning: Not all time zones are offset from UTC by an integral number of hours)
And if your database is known to be generated with a time zone that is not the computer's current local time zone, that can be taken into account by replacing current_zone() with the IANA time zone identifier that your database was generated with, for example:
auto loc_s = make_zoned("America/New_York", floor<seconds>(utc_ms)).get_local_time();
Update
This entire library is based on the std <chrono> library introduced with C++11. The types above, utc_ms and loc_s, are instantiations of std::chrono::time_point, and sec_past_hour has type std::chrono::seconds (which is itself an instantiation of std::chrono::duration).
durations can be converted to their "representation" type using the .count() member function. For seconds, this representation type will be a signed 64 bit integer.
For a more detailed video tutorial on <chrono>, please see this CppCon 2016 presentation. This presentation will encourage you to avoid using the .count() member function as much as humanly possible.
For example instead of converting sec_past_hour to a long so that you can compare it to other values of your dataset, convert other values of your dataset to std::chrono::durations so that you can compare them to sec_past_hour.
For example:
using namespace std::chrono;
long other_data = 123456789; // past the hour in microseconds
if (microseconds{other_data} < sec_past_hour)
    // ...
This snippet shows how <chrono> will take care of units conversions for you. This means you won't make mistakes like dividing by 1,000,000 when you should have multiplied, or spelling "million" with the wrong number of zeroes.
I'd start by converting the floating point number to a time_t. A time_t is normally a count of seconds since an epoch (most often the POSIX epoch--midnight, 1 Jan 1970), so it sounds like that's going to take little more than a bit of fairly simple math.
So let's assume for the sake of argument that your input uses a different epoch. Just for the sake of argument, let's assume it's using an epoch of midnight, 1 Jan 1900 instead (and, as noted, it's in milliseconds instead of seconds).
So, to convert that to a time_t, you'd start by dividing by 1000 to convert from milliseconds to seconds. Then you'd subtract off the number of seconds between midnight 1 Jan 1900 and midnight 1 Jan 1970. Now you have a value you can treat as a time_t that the standard library can deal with.[1]
Then use localtime to get that same time as a struct tm.
Then zero out the minutes and seconds from that tm, and use mktime to get a time_t representing that time.
Finally, use difftime to get the difference between the two.
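Putting those steps together, here's a minimal sketch, assuming the input really is milliseconds since the POSIX epoch (as the example timestamp 1512695257869 suggests), so no epoch adjustment is needed:

#include <ctime>
#include <cstdio>

int main()
{
    double my_utc_ts = 1512695257869.0;               // milliseconds since 1970-01-01 UTC
    time_t t = static_cast<time_t>(my_utc_ts / 1000); // convert to whole seconds

    std::tm local = *std::localtime(&t);              // same instant, local broken-down time
    std::tm hour_start = local;
    hour_start.tm_min = 0;                            // zero out minutes and seconds
    hour_start.tm_sec = 0;

    double sec_past_hour = std::difftime(t, std::mktime(&hour_start));
    std::printf("%.0f seconds past the local hour\n", sec_past_hour);
}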
[1] For the moment, I'm assuming your standard library is based around a standard POSIX epoch, but that's a pretty safe assumption.

How to get nanoseconds from boost::chrono::high_resolution_clock::time_point?

I am new to Boost and Chrono. I am writing a logger that logs the timestamps of API calls, entry and exit. I tried using boost::xtime first, but it wasn't giving the high-resolution values I needed, hence I was thinking about using Chrono. I declared a boost::chrono::high_resolution_clock::time_point x; variable for getting the timestamp and assigned boost::chrono::high_resolution_clock::now() to it. Now I need to get the nanoseconds from this variable and put them in my log file (that's the requirement). So I tried boost::chrono::duration_cast(x), but it just wouldn't let me do that. It needs 2 parameters apparently, and I only have one. Is there a way to get around this? Is it possible to create another time_point variable, assign zero to it, and use that? I tried assigning zero, but it's not working. Kindly help me out.
Thanks,
Sam
If the question is tagged c++11, is there any reason not to use std::chrono?
// Using std::chrono
auto start = std::chrono::high_resolution_clock::now(); // start timer
/* do some work */
auto diff = std::chrono::high_resolution_clock::now() - start; // get difference
auto nsec = std::chrono::duration_cast<std::chrono::nanoseconds>(diff);
std::cout << "it took: " << nsec.count() << " nanoseconds" << std::endl;
boost::chrono::duration_cast converts a duration into the specified units, but you've given it a boost::chrono::time_point, not a duration.
There's really no such thing as "the current time in nanoseconds". To get a duration, you need to specify the time since which you want to know how many nanoseconds have elapsed (an "epoch"). Different clocks will measure their time based on different epochs.
boost::chrono::system_clock (currently) uses the Unix epoch (midnight Jan 1, 1970) as its epoch, but it's not steady and it may not have the resolution you need (it's in nanoseconds on my Ubuntu box, but in 1/10,000,000ths of a second on my Windows box).
boost::chrono::high_resolution_clock uses boot up as its epoch, is steady, and measures time in nanoseconds on both boxes I tested on.
Boost also provides other clocks like process_cpu_clock that use other epochs and count in other units.
Thus you can get nanos since Jan 1, 1970 using system_clock, but it may not actually be nanosecond-accurate, and it may go backwards if the user changes the system time or the computer syncs with network time, or you can get nanos since some other point in time using high_resolution_clock.
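If you do need to stay with Boost rather than std::chrono, the same pattern applies; a minimal sketch (assuming Boost.Chrono is available and linked): duration_cast takes a duration, so call time_since_epoch() on the time_point first.

#include <boost/chrono.hpp>
#include <iostream>

int main()
{
    boost::chrono::high_resolution_clock::time_point x =
        boost::chrono::high_resolution_clock::now();

    // time_since_epoch() yields a duration, which duration_cast can convert.
    boost::chrono::nanoseconds ns =
        boost::chrono::duration_cast<boost::chrono::nanoseconds>(x.time_since_epoch());

    std::cout << ns.count() << " ns since the clock's epoch" << std::endl;
}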

Strategy to reduce time of gettimeofday?

I'm writing a stats server to count the visit data of each day, so I have to clear the data in the DB (memcached) every day.
Currently, I call gettimeofday to get the date and compare it with the cached date to check whether they are on the same day, and this happens frequently.
Sample code is as follows:
void report_visits(...) {
    std::string date = CommonUtil::GetStringDate(); // through gettimeofday
    if (date != static_cached_date_) {
        flush_db_date();
        static_cached_date_ = date;
    }
}
The problem is that I have to call gettimeofday every time the client reports visit information. And gettimeofday is time-consuming.
Any solution for this problem ?
The gettimeofday system call (now obsolete in favor of clock_gettime) is among the shortest system calls to execute. The last time I measured it was on an Intel i486, where it took around 2us. The kernel-internal version is used to timestamp network packets and the read, write, and chmod system calls, to update the timestamps in the filesystem inodes, and the like. If you want to measure how much time you spend in the gettimeofday system call, you just have to do several (the more, the better) pairs of calls, one immediately after the other, noting the timestamp differences between them, and finally taking the minimum of the samples as the proper value. That will be a good approximation to the ideal value.
Consider that if the kernel uses it to timestamp each read you do on a file, you can freely use it to timestamp each service request without serious penalty.
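A rough sketch of that measurement, taking many back-to-back pairs of gettimeofday() calls and keeping the smallest difference (the loop count here is arbitrary):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval a, b;
    long best_usec = 1000000;                       /* start with a large value */
    for (int i = 0; i < 100000; i++) {
        gettimeofday(&a, NULL);
        gettimeofday(&b, NULL);
        long d = (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);
        if (d < best_usec)
            best_usec = d;                          /* keep the minimum sample */
    }
    printf("approx. gettimeofday overhead: %ld us\n", best_usec);
    return 0;
}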
Another thing: don't use (as suggested by other responses) a routine to convert the gettimeofday result to a string, as this indeed consumes a lot more resources. You can compare timestamps directly (call them t1 and t2):
gettimeofday(&t2, NULL);
if (t2.tv_sec - t1.tv_sec > 86400) { /* 86400 is one day in seconds */
    erase_cache();
    t1 = t2;
}
or, if you want it to occur every day at the same time:
gettimeofday(&t2, NULL);
if (t2.tv_sec / 86400 > t1.tv_sec / 86400) {
    /* tv_sec / 86400 is the number of whole days since 1/1/1970, so
     * if it varies, a change of date has occurred */
    erase_cache();
}
t1 = t2; /* now we assign this outside the if, so we tie to the change of date */
You can even use the time() system call for this, as it has one-second resolution (and you don't need to deal with the microseconds or with the overhead of the struct timeval structure).
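For example, the same whole-day test using time() (a small sketch; erase_cache() is the same hypothetical cache-flush routine used above):

#include <time.h>

void erase_cache(void);             /* same cache-flush routine as above */

static time_t t1;                   /* last timestamp seen, seconds since the epoch */

void check_day_change(void)
{
    time_t t2 = time(NULL);
    if (t2 / 86400 > t1 / 86400)    /* whole days since 1/1/1970 changed */
        erase_cache();
    t1 = t2;
}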
(This is an old question, but there is an important answer missing:)
You need to define the TZ environment variable and export it to your program. If it is not set, you will incur a stat(2) call on /etc/localtime... for every single call to gettimeofday(2), localtime(3), etc.
Of course these will get answered without going to disk, but the frequency of the calls and the overhead of the syscall is enough to make an appreciable difference in some situations.
Supporting documentation:
How to avoid excessive stat(/etc/localtime) calls in strftime() on linux?
https://blog.packagecloud.io/eng/2017/02/21/set-environment-variable-save-thousands-of-system-calls/
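A minimal sketch of doing that from inside the program itself (assuming a POSIX system; the value ":/etc/localtime" is one common choice, or use an explicit zone name):

#include <stdlib.h>
#include <time.h>

int main(void)
{
    /* Set TZ once, before any time-conversion calls, so the C library
     * does not stat(/etc/localtime) on every localtime()/strftime(). */
    setenv("TZ", ":/etc/localtime", 1);   /* or e.g. "Europe/Madrid" */
    tzset();

    /* ... rest of the server ... */
    return 0;
}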
To summarise:
The check, as you say, is done up to a few thousand times per second.
You're flushing a cache once every day.
Assuming that the exact time at which you flush is not critical and can be seconds (or even minutes perhaps) late, there is a very simple/practical solution:
void report_visits(...)
{
    static unsigned int counter;
    if ((counter++ % 1000) == 0)
    {
        std::string date = CommonUtil::GetStringDate();
        if (date != static_cached_date_)
        {
            flush_db_date();
            static_cached_date_ = date;
        }
    }
}
Just do the check once every N times that report_visits() is called. In the above example N is 1000. With up to a few thousand checks per second, you'll be less than a second (or 0.001% of a day) late.
Don't worry about counter wrap-around, it only happens once in about 20+ days (assuming a few thousand checks/s maximum, with 32-bit int), and does not hurt.

Getting the current time with millisecond accuracy in Qt

The Qt documentation about QTime::currentTime() says:
Note that the accuracy depends on the accuracy of the underlying
operating system; not all systems provide 1-millisecond accuracy.
But is there any way to get this time with milliseconds accuracy in windows 7?
You can use the QDateTime class and convert the current time to a string with the appropriate format:
QDateTime::currentDateTime().toString("yyyy/MM/dd hh:mm:ss,zzz")
where 'zzz' corresponds to milliseconds.
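If you only need a numeric timestamp rather than a formatted string, QDateTime::currentMSecsSinceEpoch() gives milliseconds directly (a small sketch, not from the original answer):

#include <QDateTime>
#include <QDebug>

int main()
{
    // Milliseconds since 1970-01-01T00:00:00 UTC, as a 64-bit integer.
    qint64 now_ms = QDateTime::currentMSecsSinceEpoch();
    qDebug() << "epoch ms:" << now_ms;
    return 0;
}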
You can use the functionality provided by the time.h header file in C/C++.
#include <time.h>

clock_t start, end;
double cpu_time_used;

int main()
{
    start = clock();
    /* Do the work. */
    end = clock();
    cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC;
}
Timer resolution may vary on different platforms and readings may not be accurate. If you need to get high-resolution, accurate timestamps on Windows 7, it provides QPC API:
https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408%28v=vs.85%29.aspx
GetSystemTimePreciseAsFileTime is claimed to provide system time with <1us resolution.
But that's only about accurate timestamp. If you need to actually do something with 1 ms latency (ex. handle an event), you need a RTOS, not a desktop clunker.
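A minimal sketch of the QPC route mentioned above (Windows only; QueryPerformanceCounter measures elapsed time from an arbitrary origin, not calendar time):

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, t;
    QueryPerformanceFrequency(&freq);   // ticks per second
    QueryPerformanceCounter(&t);        // current tick count
    double seconds = static_cast<double>(t.QuadPart) / freq.QuadPart;
    std::cout << "QPC: " << seconds << " s since an arbitrary origin\n";
    return 0;
}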
One common way would be to scale up whatever you are doing and do it 10-100 times in a row; that way you would be able to get a more accurate time reading of whatever you are doing, by dividing the result by 10-100.
But getting millisecond-precise readings of your time is pretty much useless, because you don't have 100% of the CPU time, which means that your readings will have much greater variance than just 1 millisecond if the OS gives another process computing time while you are doing your actions.
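A sketch of that scale-up idea using QElapsedTimer (doWork() is a hypothetical placeholder for whatever you are timing):

#include <QElapsedTimer>
#include <QDebug>

// Hypothetical placeholder for the operation being measured.
void doWork()
{
    volatile double x = 0;
    for (int i = 0; i < 1000000; ++i)
        x += i;
}

int main()
{
    const int runs = 100;
    QElapsedTimer timer;
    timer.start();
    for (int i = 0; i < runs; ++i)
        doWork();
    // Dividing by the number of runs averages out the millisecond granularity.
    qDebug() << "per iteration:" << timer.elapsed() / double(runs) << "ms";
    return 0;
}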