Convert std::chrono::system_clock::time_point to struct timeval and back - C++

I'm writing C++ code that needs to call an old C library that uses timeval to represent the current time.
In the old package, we got the current date/time with:
struct timeval dateTime;
gettimeofday(&dateTime, NULL);
function(dateTime); // The function will do its task
Now I need to use C++ chrono, something like this:
system_clock::time_point now = system_clock::now();
struct timeval dateTime;
dateTime.tv_sec = ???? // Help appreciated here
dateTime.tv_usec = ???? // Help appreciated here
function(dateTime);
Later in the code I need the way back, building a time_point from the returned struct timeval:
struct timeval dateTime;
function(&dateTime);
system_clock::time_point returnedDateTime = ?? // Help appreciated
I'm using C++11.

[Edited to use timeval instead of free vars]
Assuming you trust your system_clock with millisecond accuracy, you can go like this:
struct timeval dest;
auto now = std::chrono::system_clock::now();
auto millisecs =
    std::chrono::duration_cast<std::chrono::milliseconds>(
        now.time_since_epoch()
    );
dest.tv_sec = millisecs.count() / 1000;
dest.tv_usec = (millisecs.count() % 1000) * 1000;
std::cout << "s:" << dest.tv_sec << " usec:" << dest.tv_usec << std::endl;
Use std::chrono::microseconds in the duration_cast and adjust the div/mod code accordingly for the higher precision - but be mindful of how much you trust the accuracy of the values you obtain.
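For instance, a minimal sketch of the microsecond variant (the same caveat about the clock's real resolution applies):
struct timeval dest;
auto now = std::chrono::system_clock::now();
auto usecs =
    std::chrono::duration_cast<std::chrono::microseconds>(
        now.time_since_epoch()
    );
dest.tv_sec = usecs.count() / 1000000;   // whole seconds
dest.tv_usec = usecs.count() % 1000000;  // remaining microseconds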
The conversion back is:
timeval src;
// again, trusting the value with only milliseconds accuracy
using dest_timepoint_type = std::chrono::time_point<
    std::chrono::system_clock, std::chrono::milliseconds
>;
dest_timepoint_type converted{
    std::chrono::milliseconds{
        src.tv_sec * 1000 + src.tv_usec / 1000
    }
};
// this is to make sure the converted time_point is indistinguishable from
// one issued by the system_clock
std::chrono::system_clock::time_point recovered =
    std::chrono::time_point_cast<std::chrono::system_clock::duration>(converted);

Here is how to do the conversion without using manual conversion factors and without depending upon the unspecified rounding mode of time_t:
timeval
to_timeval(std::chrono::system_clock::time_point tp)
{
    using namespace std::chrono;
    auto s = time_point_cast<seconds>(tp);
    if (s > tp)
        s = s - seconds{1};
    auto us = duration_cast<microseconds>(tp - s);
    timeval tv;
    tv.tv_sec = s.time_since_epoch().count();
    tv.tv_usec = us.count();
    return tv;
}

std::chrono::system_clock::time_point
to_time_point(timeval tv)
{
    using namespace std::chrono;
    return system_clock::time_point{seconds{tv.tv_sec} + microseconds{tv.tv_usec}};
}
to_timeval takes care to round tp down (in case it is negative). The POSIX spec is a bit vague on this, but I'm assuming that timeval represents time points prior to the epoch with negative tv_sec values and positive tv_usec values. Then it is a simple operation to find the microseconds since the last second.
If I'm incorrect about my assumption (and a more precise POSIX spec can be found), <chrono> has the power to model whatever the heck it does.
The reverse conversion, assuming the conventions above, is incredibly readable. It requires no comment.
This can all be tested like this:
#include <cassert>
#include <chrono>
#include <sys/time.h>

timeval
make_timeval(time_t s, long us)
{
    timeval tv;
    tv.tv_sec = s;
    tv.tv_usec = us;
    return tv;
}

bool
operator==(timeval x, timeval y)
{
    return x.tv_sec == y.tv_sec && x.tv_usec == y.tv_usec;
}

int
main()
{
    using namespace std::chrono;
    assert(make_timeval(0, 0) == to_timeval(system_clock::time_point{}));
    assert(make_timeval(1, 0) == to_timeval(system_clock::time_point{seconds{1}}));
    assert(make_timeval(1, 400000) == to_timeval(system_clock::time_point{seconds{1} + microseconds{400000}}));
    assert(make_timeval(-1, 400000) == to_timeval(system_clock::time_point{seconds{-1} + microseconds{400000}}));
    assert(to_time_point(make_timeval(0, 0)) == system_clock::time_point{});
    assert(to_time_point(make_timeval(1, 0)) == system_clock::time_point{seconds{1}});
    assert(to_time_point(make_timeval(1, 400000)) == system_clock::time_point{seconds{1} + microseconds{400000}});
    assert(to_time_point(make_timeval(-1, 400000)) == system_clock::time_point{seconds{-1} + microseconds{400000}});
}
This is all predicated on the assumption that the epoch for timeval and system_clock are identical. This is not specified, but is true for all existing implementations. With any luck we can standardize this existing practice in the near future.
Be aware that in POSIX timeval is used both as a time_point and a duration. So to_time_point could result in a run-time error if the timeval currently represents a time duration, and to_timeval could result in a run-time error if the client interprets the result as a time duration.
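If a given timeval really does hold a duration (a select() timeout, say, or the result of timersub), the corresponding conversion is a one-liner - a sketch, where to_duration is just an illustrative name:
std::chrono::microseconds
to_duration(timeval tv)  // hypothetical helper, not part of the answer above
{
    using namespace std::chrono;
    return seconds{tv.tv_sec} + microseconds{tv.tv_usec};
}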

See std::chrono::system_clock::to_time_t(), which converts the time_point to a time_t; that becomes your tv_sec. You don't get tv_usec, but you can set it to 0; or you can fiddle with a few other things, including duration_cast, to extract the fraction of a second from your time_point.
from_time_t() does the reverse.
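A minimal sketch of that approach (assuming, as above, that system_clock and timeval share the Unix epoch; since to_time_t may round either way, the remainder is checked and corrected):
timeval tv;
auto now = std::chrono::system_clock::now();
tv.tv_sec = std::chrono::system_clock::to_time_t(now);
auto whole = std::chrono::system_clock::from_time_t(tv.tv_sec);
tv.tv_usec = std::chrono::duration_cast<std::chrono::microseconds>(now - whole).count();
if (tv.tv_usec < 0) {  // to_time_t rounded up; step back one second
    --tv.tv_sec;
    tv.tv_usec += 1000000;
}
// and the reverse:
auto recovered = std::chrono::system_clock::from_time_t(tv.tv_sec)
               + std::chrono::microseconds(tv.tv_usec);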

Related

Convert timestamp string into local time

How do I convert a timestamp string, e.g. "1997-07-16T19:20:30.45+01:00", into UTC time? The result of the conversion should be a timespec structure as in the utimensat input arguments.
// sorry, should be get_utc_time
timespec get_local_time(const char* ts);
P.S. I need a solution using either standard Linux/C/C++ facilities (whatever that means) or the Boost C++ library.
Assumption: You want the "+01:00" to be subtracted from the "1997-07-16T19:20:30.45" to get a UTC timestamp and then convert that into a timespec.
Here is a C++20 solution that will automatically handle the centisecond precision and the [+/-]hh:mm UTC offset for you:
#include <chrono>
#include <ctime>
#include <sstream>

std::timespec
get_local_time(const char* ts)
{
    using namespace std;
    using namespace chrono;
    istringstream in{ts};
    in.exceptions(ios::failbit);
    sys_time<nanoseconds> tp;
    in >> parse("%FT%T%Ez", tp);
    auto tps = floor<seconds>(tp);
    return {.tv_sec = tps.time_since_epoch().count(),
            .tv_nsec = (tp - tps).count()};
}
When used like this:
auto r = get_local_time("1997-07-16T19:20:30.45+01:00");
std::cout << '{' << r.tv_sec << ", " << r.tv_nsec << "}\n";
The result is:
{869077230, 450000000}
std::chrono::parse will subtract the +/-hh:mm UTC offset from the parsed local value to obtain a UTC timestamp (at up to nanosecond precision).
This code handles any input precision, from whole seconds down to nanoseconds.
If the input does not conform to this syntax, an exception will be thrown. If this is not desired, remove in.exceptions(ios::failbit);, and then you must check in.fail() to see if the parse failed.
This code will also handle dates prior to the UTC epoch of 1970-01-01 by putting a negative value into .tv_sec, and a positive value ([0, 999'999'999]) into .tv_nsec. Note that handling pre-epoch dates is normally outside of the timespec specification, and so most C utilities will not handle such a timespec value.
If you cannot use C++20, or if your vendor has yet to implement this part of C++20, there exists a header-only library which implements this part of C++20 and works with C++11/14/17. I have not linked to it here as it is not in the set "standard Linux/C/C++ facilities (whatever that means) or Boost C++ library". I'm happy to add a link if requested.
For comparison, here's how you could do this in mostly-standard C. It's somewhat cumbersome, because C's date/time support is still rather fragmented, unlike the much more complete support which C++ has, as illustrated in Howard Hinnant's answer. (Also, two of the functions I'm going to use are not specified by the C Standard, although they're present on many/most systems.)
If you have the semistandard strptime function, and if you didn't care about subseconds and explicit time zones, it would be relatively straightforward. strptime is a (partial) inverse of strftime, parsing a time string under control of a format specifier, and constructing a struct tm. Then you can call mktime to turn that struct tm into a time_t. Then you can use the time_t to populate a struct timespec.
/* fragment: needs <stdio.h>, <stdlib.h>, <string.h>, <time.h>
   (on glibc, strptime also wants _XOPEN_SOURCE defined) */
char *inpstr = "1997-07-16T19:20:30.45+01:00";
struct tm tm;
memset(&tm, 0, sizeof(tm));
char *p = strptime(inpstr, "%Y-%m-%dT%H:%M:%S", &tm);
if(p == NULL) {
    printf("strptime failed\n");
    exit(1);
}
tm.tm_isdst = -1;
time_t t = mktime(&tm);
if(t == -1) {
    printf("mktime failed\n");
    exit(1);
}
struct timespec ts;
ts.tv_sec = t;
ts.tv_nsec = 0;
printf("%ld %ld\n", ts.tv_sec, ts.tv_nsec);
printf("%s", ctime(&ts.tv_sec));
printf("rest = %s\n", p);
In my time zone, currently UTC-4, this prints
869095230 0
Wed Jul 16 19:20:30 1997
rest = .45+01:00
But you did have subsecond information, and you did have an explicit time zone, and there's no built-in support for those in any of the basic C time-conversion functions, so you have to do things "by hand". Here's one way to do it. I'm going to use sscanf to separate out the year, month, day, hour, minute, second, and other components. I'm going to use those components to populate a struct tm, then use the semistandard timegm function to convert them straight to a UTC time. (That is, I temporarily assume that the HH:MM:SS part was UTC.) Then I'm going to manually correct for the time zone. Finally, I'm going to populate the tv_nsec field of the struct timespec with the subsecond information I extracted back at the beginning.
int y, m, d;
int H, M, S;
int ss;        /* subsec */
char zs;       /* zone sign */
int zh, zm;    /* zone hours, minutes */
int r = sscanf(inpstr, "%d-%d-%dT%d:%d:%d.%2d%c%d:%d",
               &y, &m, &d, &H, &M, &S, &ss, &zs, &zh, &zm);
if(r != 10 || (zs != '+' && zs != '-')) {
    printf("parse failed\n");
    exit(1);
}
struct tm tm;
memset(&tm, 0, sizeof(tm));
tm.tm_year = y - 1900;
tm.tm_mon = m - 1;
tm.tm_mday = d;
tm.tm_hour = H;
tm.tm_min = M;
tm.tm_sec = S;
time_t t = timegm(&tm);
if(t == -1) {
    printf("timegm failed\n");
    exit(1);
}
long int z = ((zh * 60L) + zm) * 60;
if(zs == '+')   /* East of Greenwich */
    t -= z;
else
    t += z;
struct timespec ts;
ts.tv_sec = t;
ts.tv_nsec = ss * (1000000000 / 100);
printf("%ld %ld\n", ts.tv_sec, ts.tv_nsec);
printf("%s", ctime(&ts.tv_sec));
printf(".%02ld\n", ts.tv_nsec / (1000000000 / 100));
For me this prints
869077230 450000000
Wed Jul 16 14:20:30 1997
.45
The time zone and subsecond information have been honored.
This code makes no special provision for dates prior to 1970. I think it will work if mktime/timegm work.
As mentioned, two of these functions — strptime and timegm — are not specified by the ANSI/ISO C Standard and are therefore not guaranteed to be available everywhere.
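If timegm turns out to be missing, it can be substituted portably. Here is a sketch (portable_timegm and days_from_civil are names made up for this example) based on the well-known days-from-civil-date algorithm:
static long long days_from_civil(int y, int m, int d)
{
    /* days since 1970-01-01 in the proleptic Gregorian calendar */
    y -= m <= 2;
    long long era = (y >= 0 ? y : y - 399) / 400;
    int yoe = (int)(y - era * 400);                           /* [0, 399] */
    int doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1; /* [0, 365] */
    long long doe = (long long)yoe * 365 + yoe / 4 - yoe / 100 + doy;
    return era * 146097 + doe - 719468;
}

time_t portable_timegm(const struct tm *tm)
{
    long long days = days_from_civil(tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday);
    return (time_t)(((days * 24 + tm->tm_hour) * 60 + tm->tm_min) * 60 + tm->tm_sec);
}
Unlike the real timegm, this sketch does not normalize out-of-range fields.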

How to get exactly the same result of timeval with C++11 chrono

In my C++ project there is a very old function that uses Linux system calls to do some calculations about time.
Here is a piece of the code:
struct timeval tv;
gettimeofday(&tv, NULL);
uint32_t seqId = (tv.tv_sec % 86400)*10000 + tv.tv_usec / 100;
The * 10000 and / 100 are not wrong; this piece of code generates a seqId.
Now, I'm trying to use std::chrono of C++11 to replace it. Here is my code:
std::chrono::high_resolution_clock::duration duration_since_midnight() {
    auto now = std::chrono::high_resolution_clock::now();
    std::time_t tnow = std::chrono::high_resolution_clock::to_time_t(now);
    tm *date = std::localtime(&tnow);
    date->tm_hour = 0;
    date->tm_min = 0;
    date->tm_sec = 0;
    auto midnight = std::chrono::system_clock::from_time_t(std::mktime(date));
    return now - midnight;
}

auto res = duration_since_midnight();
auto sec = std::chrono::duration_cast<std::chrono::seconds>(res);
auto mil = std::chrono::duration_cast<std::chrono::milliseconds>(res - sec);
std::cout << sec.count() * 10000 + mil.count() << std::endl;
However, the result is always slightly different. For example, the old version may give me 360681491 while my version gives me 360680149. As you can see, they are not exactly the same.
I don't know why. Or it's not possible to do so with std::chrono?
You do not have to stick to the duration types that the library provides. You can define your own that has the representation that you need:
using tenthmillis = duration<long, ratio<1,10000>>;
Then you can follow up with the following calculation.
auto now = high_resolution_clock::now().time_since_epoch();
auto oneDay = duration_cast<tenthmillis>(days{1});
auto sinceMidnight = duration_cast<tenthmillis>(now) % oneDay.count();
cout << sinceMidnight.count();
Note: Add namespace qualifications as required yourself.
[Update: Instead of high_resolution_clock, choose a clock that uses gettimeofday in its implementation to get results that are comparable to your gettimeofday implementation.]
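For reference, here is a self-contained C++11 sketch of the above. Note that std::chrono::days only arrives in C++20, so a days type is defined by hand here, and system_clock is used so the result is comparable to gettimeofday (the seqId formula uses tv_sec % 86400, i.e. UTC days, which the modulo reproduces):
#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;
    using tenthmillis = duration<long long, std::ratio<1, 10000>>;
    using days = duration<long long, std::ratio<86400>>;  // std::chrono::days is C++20

    auto now = system_clock::now().time_since_epoch();
    auto oneDay = duration_cast<tenthmillis>(days{1});
    auto sinceMidnight = duration_cast<tenthmillis>(now) % oneDay.count();
    std::cout << sinceMidnight.count() << '\n';  // == (tv_sec % 86400)*10000 + tv_usec/100
}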

Time offset calculation is off by one minute

I am trying to replace a number of different time classes with a single consistent API. However I have recently run into a problem whereby I cannot serialise the timezone offset correctly. Note that I am attempting to replicate an existing format that is already in wide use in the system.
The format should be YYYY-mm-DD HH:MM:SS.xxxxxxx -HHMM, where the x represents the sub-second precision and the last -HHMM is the TZ offset from UTC.
Code:
using namespace My::Time;
namespace chrn = std::chrono;
time_point now = clock::now();
time_point lclNow = getDefaultCalendarProvider()->toLocal(now);
duration diff{ lclNow - now };
std::wstring sign = diff > duration::zero() ? L" +" : L" -";
duration ms{ now.time_since_epoch().count() % duration::period::den };
int diffHrs = popDurationPart<chrn::hours>(diff).count();
int diffMins{ abs(chrn::duration_cast<chrn::minutes>(diff).count()) };
std::cout << Format{ lclNow, TimeZone::UTC, L" %Y-%m-%d %H:%M:%S." } << ms.count()
          << sign << std::setfill(L'0') << std::setw(2) << diffHrs
          << std::setfill(L'0') << std::setw(2) << diffMins << std::endl;
Problem:
Expected: <2016-05-25 09:45:18.1970000 +0100>
Actual:   <2016-05-25 09:45:18.1964787 +0059>
The expected value is what you get when I use the old class to do the same operation. The problem appears to be at the point where I attempt to get the difference between lclNow and now.
Currently I am in UTC+1 (due to DST being in effect). However, the diff value is always 35999995635. Being on Visual C++ on Windows, the tick is 100 ns, so there are 10000000 ticks per second, meaning the diff value is 3599.9995 seconds - just short of the 3600 seconds I would need to make an hour.
When I print the two time values using the same format then I can see that they are exactly one hour apart. So it appears that the time-zone translation is not the issue.
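Aside: since the raw difference is a hair under a whole hour, rounding the offset to whole minutes before splitting it avoids the 0059 artifact regardless of where the sub-second loss comes from. A sketch, assuming diff is a std::chrono-compatible duration (std::abs needs <cstdlib>):
auto roundedMins = std::chrono::duration_cast<std::chrono::minutes>(
    diff + (diff >= diff.zero() ? std::chrono::seconds(30)
                                : std::chrono::seconds(-30)));
int diffHrs  = static_cast<int>(roundedMins.count() / 60);
int diffMins = static_cast<int>(std::abs(roundedMins.count() % 60));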
The issue turned out to come from the time-zone conversions I was attempting (as SamVarshavchik pointed out). Unfortunately I am unable to use Howard Hinnant's very complete date and tz libraries, because they require a mechanism to update the IANA time-zone DB that they need in order to work, so I resorted to wrapping the native Windows calls for the time-zone conversions, namely the TzSpecificLocalTimeToSystemTime and SystemTimeToTzSpecificLocalTime functions.
However these only work with SYSTEMTIME and not time_point. This meant I took the quick and easy option of converting the time_point to a FILETIME (just modify the "epoch") and the FILETIME to a SYSTEMTIME before passing it to one of the two above functions. This resulted in truncation of the time value when it was pushed into the SYSTEMTIME struct (which only holds millisecond resolution). The outcome is that while I was accurate for dates, I was not entirely accurate when converting the date back into the original value.
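For reference, the "modify the epoch" step mentioned above looks roughly like this - a sketch, assuming system_clock counts from the Unix epoch (the constant is the number of 100 ns ticks between 1601-01-01 and 1970-01-01):
FILETIME to_filetime(const std::chrono::system_clock::time_point& tp)
{
    using namespace std::chrono;
    using filetime_ticks = duration<long long, std::ratio<1, 10000000>>;
    long long ticks = duration_cast<filetime_ticks>(tp.time_since_epoch()).count()
                    + 116444736000000000LL;  // 1601-01-01 to 1970-01-01
    FILETIME ft;
    ft.dwLowDateTime  = static_cast<DWORD>(ticks & 0xFFFFFFFF);
    ft.dwHighDateTime = static_cast<DWORD>(ticks >> 32);
    return ft;
}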
The new solution does no calendar mapping for the basic time_point to time_point translations. It uses the following code to work out the offset in std::chrono::minutes (where zoneInfo is a TIME_ZONE_INFORMATION):
time_point WindowsTzDateProvider::doToUtc(const time_point& inLocal) const {
    return inLocal + getBias(inLocal);
}

time_point WindowsTzDateProvider::doToLocal(const time_point& inUtc) const {
    return inUtc - getBias(inUtc);
}

std::chrono::minutes WindowsTzDateProvider::doGetBias(const time_point& input) const {
    bool isDst = CalendarDateProvider::isDstInEffect(input);
    minutes baseBias{ zoneInfo.Bias };
    minutes extraBias{ isDst ? zoneInfo.DaylightBias : zoneInfo.StandardBias };
    return baseBias + extraBias;
}

bool CalendarDateProvider::isDstInEffect(const time_point& t) {
    time_t epochTime = clock::to_time_t(t);
    tm out;
#ifdef WIN32
    localtime_s(&out, &epochTime);
#else
    localtime_r(&epochTime, &out);  // note the (time_t*, tm*) argument order
#endif
    return out.tm_isdst > 0;
}
Note: I'm using the non-virtual interface idiom for the classes, hence the "do..." versions of the methods.
Consider using this free, open-source time zone library, which does exactly what you want with very simple syntax and works on VS-2013 and later:
#include "tz.h"
#include <iostream>
int
main()
{
    using namespace date;
    using namespace std::chrono;
    auto t = make_zoned(current_zone(), system_clock::now());
    std::cout << format("%F %T %z", t) << '\n';
}
This should output for you:
2016-05-25 09:45:18.1970000 +0100

fully separated date with milliseconds from std::chrono::system_clock

My current pattern (for Unix) is to call gettimeofday, cast the tv_sec field to a time_t, pass that through localtime, and combine the results with tv_usec. That gives me a full date (year, month, day, hour, minute, second, microseconds).
I'm trying to update my code to C++11 for portability and general good practice. I'm able to do the following:
auto currentTime = std::chrono::system_clock::now( );
const time_t time = std::chrono::system_clock::to_time_t( currentTime );
const tm *values = localtime( &time );
// read values->tm_year, etc.
But I'm stuck on the milliseconds/nanoseconds. For one thing, to_time_t says that rounding is implementation-defined (!), so I don't know whether a final reading of 22.6 seconds should actually be 21.6; and for another, I don't know how to get the number of milliseconds since the previous second (are seconds guaranteed by the standard to be regular, i.e. could I get the total milliseconds since the epoch and just modulo it? Even if that is OK, it feels ugly).
How should I get the current date from std::chrono::system_clock with milliseconds?
I realised that I can use from_time_t to get a "rounded" value, and check which type of rounding occurred. This also doesn't rely on every second being exactly 1000 milliseconds, and works with out-of-the-box C++11:
const auto currentTime = std::chrono::system_clock::now( );
time_t time = std::chrono::system_clock::to_time_t( currentTime );
auto currentTimeRounded = std::chrono::system_clock::from_time_t( time );
if( currentTimeRounded > currentTime ) {
    --time;
    currentTimeRounded -= std::chrono::seconds( 1 );
}
const tm *values = localtime( &time );
int year = values->tm_year + 1900;
// etc.
int milliseconds = std::chrono::duration_cast<std::chrono::duration<int, std::milli>>( currentTime - currentTimeRounded ).count( );
Using this free, open-source library you can get the local time with millisecond precision like this:
#include "tz.h"
#include <iostream>
int
main()
{
    using namespace date;
    using namespace std::chrono;
    std::cout << make_zoned(current_zone(),
                            floor<milliseconds>(system_clock::now())) << '\n';
}
This just output for me:
2016-09-06 12:35:09.102 EDT
make_zoned is a factory function that creates a zoned_time<milliseconds>. The factory function deduces the desired precision for you. A zoned_time is a pairing of a time_zone and a local_time. You can get the local time out with:
local_time<milliseconds> lt = zt.get_local_time();
local_time is a chrono::time_point. You can break this down into date and time field types if you want like this:
auto zt = make_zoned(current_zone(), floor<milliseconds>(system_clock::now()));
auto lt = zt.get_local_time();
local_days ld = floor<days>(lt); // local time truncated to days
year_month_day ymd{ld}; // {year, month, day}
time_of_day<milliseconds> time{lt - ld}; // {hours, minutes, seconds, milliseconds}
// auto time = make_time(lt - ld); // another way to create time_of_day
auto y = ymd.year(); // 2016_y
auto m = ymd.month(); // sep
auto d = ymd.day(); // 6_d
auto h = time.hours(); // 12h
auto min = time.minutes(); // 35min
auto s = time.seconds(); // 9s
auto ms = time.subseconds(); // 102ms
Instead of using to_time_t, which may round off, you can do this:
auto tp = std::chrono::system_clock::now();
auto s = std::chrono::duration_cast<std::chrono::seconds>(tp.time_since_epoch());
auto t = (time_t)(s.count());
That way you get the seconds without the round-off. It is more efficient than checking the difference between to_time_t and from_time_t.
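Extending that idea, here is a minimal self-contained sketch that derives both the time_t and the millisecond remainder from a single duration_cast, so the two fields are always consistent (it assumes a post-1970 time point; pre-epoch values would need the rounding-down treatment shown earlier):
#include <chrono>
#include <ctime>
#include <iostream>

int main()
{
    using namespace std::chrono;
    auto tp = system_clock::now();
    auto ms = duration_cast<milliseconds>(tp.time_since_epoch());
    time_t t = static_cast<time_t>(ms.count() / 1000);  // whole seconds
    int milli = static_cast<int>(ms.count() % 1000);    // [0, 999] for t >= epoch
    const tm* values = std::localtime(&t);
    std::cout << values->tm_year + 1900 << ' ' << milli << '\n';
}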
I read the standard like this:
It is implementation-defined whether the value is rounded or truncated, but naturally the rounding or truncation only occurs on the most detailed part of the resulting time_t. That is: the combined information you get from time_t is never more wrong than 0.5 of its granularity.
If time_t on your system only supported seconds, you would be right that there could be 0.5 seconds of systematic uncertainty (unless you find out how things were implemented).
tv_usec is not standard C++; it is a field of POSIX's struct timeval. To conclude, you should not expect any rounding effects bigger than half of the smallest time difference your system supports, so certainly not more than 0.5 microseconds.
The most straightforward way is to use Boost's ptime. It has methods such as fractional_seconds():
http://www.boost.org/doc/libs/1_53_0/doc/html/date_time/posix_time.html#date_time.posix_time.ptime_class
For interop with std::chrono, you can convert as described here: https://stackoverflow.com/a/4918873/1149664
Or, have a look at this question: How to convert std::chrono::time_point to calendar datetime string with fractional seconds?

How to subtract two gettimeofday instances?

I want to subtract two gettimeofday instances, and present the answer in milliseconds.
The idea is:
static struct timeval tv;
gettimeofday(&tv, NULL);
static struct timeval tv2;
gettimeofday(&tv2, NULL);
static struct timeval tv3 = tv2 - tv; // pseudocode: timeval has no operator-
and then convert tv3 into millisecond resolution.
You can use the timersub() function provided by glibc, then convert the result to milliseconds (watch out for overflows when doing this, though!).
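For example (timersub is a macro from <sys/time.h> on glibc and the BSDs; it is not specified by POSIX):
struct timeval tv, tv2, tv3;
gettimeofday(&tv, NULL);
// ... do work ...
gettimeofday(&tv2, NULL);
timersub(&tv2, &tv, &tv3);  // tv3 = tv2 - tv
long long milliseconds = tv3.tv_sec * 1000LL + tv3.tv_usec / 1000;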
Here's how to do it manually (since timersub isn't a standard function offered elsewhere)
struct timeval tv;
gettimeofday(&tv, NULL);
// ...
struct timeval tv2;
gettimeofday(&tv2, NULL);
int microseconds = (tv2.tv_sec - tv.tv_sec) * 1000000 + ((int)tv2.tv_usec - (int)tv.tv_usec);
int milliseconds = microseconds/1000;
struct timeval tv3;
tv3.tv_sec = microseconds/1000000;
tv3.tv_usec = microseconds%1000000;
(and you have to watch for overflow, which makes it even worse)
The current version of C++ offers a better option though:
#include <chrono> // new time utilities
// new type alias syntax
using Clock = std::chrono::high_resolution_clock;
// the above is the same as "typedef std::chrono::high_resolution_clock Clock;"
// but easier to read and the syntax supports being templated
using Time_point = Clock::time_point;
Time_point tp = Clock::now();
// ...
Time_point tp2 = Clock::now();
using std::chrono::milliseconds;
using std::chrono::duration_cast;
std::cout << duration_cast<milliseconds>(tp2 - tp).count() << '\n';