#include <chrono>
#include <iostream>

int main()
{
    using clock = std::chrono::system_clock;
    using time_point = std::chrono::time_point<clock>;

    auto tp_now = clock::now();
    auto tp_min = time_point::min();

    bool b1 = tp_now > tp_min;
    bool b2 = (tp_now - tp_min) > std::chrono::seconds{ 0 };

    std::cout << std::boolalpha << b1 << std::endl << b2 << std::endl;
}
The expected output is:
true
true
But the actual output is:
true
false
Why does std::chrono::time_point not behave as expected?
With:
using clock = std::chrono::system_clock;
using time_point = std::chrono::time_point<clock>;
time_point is implemented as if it stores a value of type Duration indicating the time interval from the start of the Clock's epoch. (See std::chrono::time_point)
The duration member type of clock (and of time_point) is capable of representing negative durations.
Thus duration in your implementation is most likely backed by a signed integer (it could be backed by an unsigned integer, but that would require a more complicated comparison).
In that particular implementation,
time_point::min();
is equivalent to
time_point t(clock::duration::min());
which in turn is
time_point t(clock::duration(std::numeric_limits<rep>::lowest()));
Meanwhile tp_now is greater than zero, so when you subtract tp_min from it, the mathematical result exceeds std::numeric_limits<rep>::max() and the subtraction overflows. With a signed backing integer that is undefined behavior; with an unsigned backing integer the value would wrap around, and presumably the implementation's special comparison would still make the result false.
In this example, tp_min is -9223372036854775808 ticks from the epoch, which is exactly std::numeric_limits<duration::rep>::lowest().
TL;DR: It's integer overflow. Don't use
(tp1 - tp2) > std::chrono::duration<whatever_rep>::zero()
Instead, use
tp1 > tp2
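To make the failure mode concrete, here is a minimal sketch (assuming the common case of a signed 64-bit rep) showing why the direct comparison is safe where the subtraction is not:
#include <chrono>
#include <iostream>
#include <limits>

int main()
{
    using clock = std::chrono::system_clock;
    using rep = clock::duration::rep;  // typically a signed 64-bit integer

    rep now_ticks = clock::now().time_since_epoch().count();  // positive
    rep min_ticks = std::numeric_limits<rep>::lowest();

    // now_ticks - min_ticks mathematically exceeds rep's maximum, so the
    // subtraction would be signed integer overflow: undefined behavior.
    // Comparing the time_points directly never forms that difference:
    auto tp_now = clock::now();
    auto tp_min = clock::time_point::min();
    std::cout << std::boolalpha << (tp_now > tp_min) << '\n';  // true
    (void)now_ticks; (void)min_ticks;
}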
Basically I need a function that makes x decrement to 0 over a certain time period (40 seconds)
This seems pretty simple in theory, but I haven't been able to do it for a while now.
static auto decrement = [](int start_value, int end_value, int time) {
    // I need this function to decrement start_value until it reaches end_value.
    // This should happen over a set time as well, in this case 40 seconds.
};
int cool_variable = decrement(2000, 0, 40); // 40 seconds; the time should be expected in seconds
@DavidSchwartz has a great comment that should be considered a serious solution:
Why not just compute the correct value of cool_variable based on the clock whenever you need its value?
That being said, this is an answer to the actual question: How to write this function:
decrement = [](int start_value, int end_value, int time)
Where cool_variable starts with the value start_value and decrements at a steady rate until it equals end_value, such that the total time for this multi-decrement operation is time seconds.
This is a function with a time deadline. It is well-established that for problems with a deadline, one should lean towards *_until solutions as opposed to *_for solutions in handling the time aspect. This implies that instead of sleeping for some time duration between decrements, we need to sleep until it is time to decrement from some value to the next lower value.
The use of sleep_until allows a somewhat varying time for each iteration of the decrement loop, while ensuring that the total time of the full loop closely approximates the total desired time.
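As a minimal illustration of the *_until idea (a sketch, separate from the answer's final code): each wake-up is anchored to the same start point, so per-iteration jitter does not accumulate the way it would with sleep_for.
#include <chrono>
#include <thread>

int main()
{
    using namespace std::chrono;
    auto t0 = steady_clock::now();
    for (int i = 1; i <= 5; ++i)
    {
        // The deadline is computed from t0, not from "now", so errors don't compound.
        std::this_thread::sleep_until(t0 + i * milliseconds{100});
    }
}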
To achieve the use of sleep_until, we need a (presumably) linear function:
duration next_time(int value) {return a0 + a1 * value;}
where next_time(start_value) == 0s and next_time(end_value) == seconds{time}.
We have two equations, and two unknowns: a0 and a1. We can solve for the two unknowns to create our desired next_time function:
auto next_time = [&](int value)
{
    return (value - start_value) * time / (end_value - start_value);
};
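A quick sanity check of the endpoints with the question's numbers (2000 down to 0 over 40):
// next_time(start_value) should be 0, and next_time(end_value) the full time:
static_assert((2000 - 2000) * 40 / (0 - 2000) == 0,  "start maps to 0");
static_assert((0    - 2000) * 40 / (0 - 2000) == 40, "end maps to total time");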
Now for each value of cool_variable, one can sleep_until(t0 + next_time(cool_variable)) where t0 is the time where you want cool_variable == start_value (and thus want to sleep for 0 seconds).
The next most important thing (after use of sleep_until) is to use <chrono>. int time is an error-prone API that has no place in modern C++. The type of time should be a <chrono> duration such as seconds (or perhaps some other unit of time). Let's start with seconds:
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<int> cool_variable{0};

void
decrement(int start_value, int end_value, std::chrono::seconds time)
{
    using namespace std;
    using namespace std::chrono;
    auto next_time = [&](int value)
    {
        return (value - start_value) * nanoseconds{time} / (end_value - start_value);
    };
    auto t0 = steady_clock::now();
    for (cool_variable = start_value; cool_variable >= end_value; --cool_variable)
    {
        this_thread::sleep_until(t0 + next_time(cool_variable));
        cout << cool_variable << endl;
    }
}
cool_variable is stored as an atomic<int> so that it can be concurrently read by other threads to avoid undefined behavior.
The input time variable is converted to nanoseconds precision in the computation so that the argument to sleep_until can be as precise as is practical.
Note that the current time need only be computed once, prior to the decrement loop.
Just as an example, cool_variable is printed to the terminal on each iteration. This is of course not necessary, and just used for demonstration purposes.
This can now be called like so:
decrement(2000, 0, 40s);
It can also be instructive to wrap the call to decrement with timing information in order to ensure that it is behaving as intended:
auto t0 = system_clock::now();
decrement(2000, 0, 40s);
auto t1 = system_clock::now();
std::cout << (t1-t0)/1s << '\n';
This will output each value of cool_variable between 2000 and 0 (inclusive), and then say how many seconds it took to do the operation (hopefully 40 in this example).
Finally, one minor simplification can be made:
Since we desire time to be nanoseconds in the computation, it is actually simpler to accept nanoseconds in the API, relieving us of the need to convert seconds to nanoseconds internally:
void
decrement(int start_value, int end_value, std::chrono::nanoseconds time)
{
    using namespace std;
    using namespace std::chrono;
    auto next_time = [&](int value)
    {
        return (value - start_value) * time / (end_value - start_value);
    };
    auto t0 = steady_clock::now();
    for (cool_variable = start_value; cool_variable >= end_value; --cool_variable)
    {
        this_thread::sleep_until(t0 + next_time(cool_variable));
        cout << cool_variable << endl;
    }
}
The client code need not change at all:
decrement(2000, 0, 40s);
The 40s argument will implicitly convert to 40'000'000'000ns at the call site. And this is why it is so important to use <chrono> types for time. Had we not done this, this final (minor) simplification would not have been minor at all. It would have required changing client code at the call site, which in real-world applications is often impractical.
In Summary
Use sleep_until.
Use <chrono>.
I'm trying to represent NTP timestamps (including the NTP epoch) in C++ using std::chrono. I decided to use a 64-bit unsigned int (unsigned long long) for the ticks and divide it such that the lowest 28 bits represent the fraction of a second (accepting truncation of 4 bits in comparison to the original standard timestamps), the next 32 bits represent the seconds of an epoch, and the highest 4 bits represent the epoch. This means that every tick takes 1 / (2^28 - 1) seconds.
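In other words, the intended layout of a raw 64-bit timestamp is (a sketch of the description above, not code from the question):
#include <cstdint>

// era (4 bits) | seconds (32 bits) | fraction (28 bits)
constexpr unsigned fractional_bits = 28;
constexpr unsigned seconds_bits    = 32;

constexpr std::uint64_t fraction_of(std::uint64_t ts) { return ts & ((1ull << fractional_bits) - 1); }
constexpr std::uint64_t seconds_of (std::uint64_t ts) { return (ts >> fractional_bits) & ((1ull << seconds_bits) - 1); }
constexpr std::uint64_t era_of     (std::uint64_t ts) { return ts >> (fractional_bits + seconds_bits); }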
I now have the following simple implementation:
#include <chrono>

/**
 * Implements a custom C++11 clock starting at 1 Jan 1900 UTC with a tick duration of 2^(-28) seconds.
 */
class NTPClock
{
public:
    static constexpr bool is_steady = false;
    static constexpr unsigned int era_bits = 4;                  // epoch uses 4 bits
    static constexpr unsigned int fractional_bits = 32-era_bits; // fraction uses 28 bits
    static constexpr unsigned int seconds_bits = 32;             // second uses 32 bits

    using duration = std::chrono::duration<unsigned long long, std::ratio<1, (1<<fractional_bits)-1>>;
    using rep = typename duration::rep;
    using period = typename duration::period;
    using time_point = std::chrono::time_point<NTPClock>;

    /**
     * Return the current time of this clock. Note that the implementation is based on the assumption
     * that the system clock starts at 1 Jan 1970, which is not specified by C++11 but seems to be
     * the standard in most implementations.
     *
     * @return The current time as represented by an NTP timestamp
     */
    static time_point now() noexcept
    {
        return time_point
        (
            std::chrono::duration_cast<duration>(std::chrono::system_clock::now().time_since_epoch())
            + std::chrono::duration_cast<duration>(std::chrono::hours(24*25567)) // 25567 days have passed between 1 Jan 1900 and 1 Jan 1970
        );
    }
};
Unfortunately, a simple test reveals this does not work as expected:
#include <chrono>
#include <iostream>
#include <catch2/catch.hpp>
#include "NTPClock.h"

using namespace std::chrono;

TEST_CASE("NTPClock_now")
{
    auto ntp_dur = NTPClock::now().time_since_epoch();
    auto sys_dur = system_clock::now().time_since_epoch();
    std::cout << duration_cast<hours>(ntp_dur) << std::endl;
    std::cout << ntp_dur << std::endl;
    std::cout << duration_cast<hours>(sys_dur) << std::endl;
    std::cout << sys_dur << std::endl;
    REQUIRE(duration_cast<hours>(ntp_dur)-duration_cast<hours>(sys_dur) == hours(24*25567));
}
Output:
613612h
592974797620267184[1/268435455]s
457599h
16473577714886015[1/10000000]s
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PackageTest.exe is a Catch v2.11.1 host application.
Run with -? for options
-------------------------------------------------------------------------------
NTPClock_now
-------------------------------------------------------------------------------
D:\Repos\...\TestNTPClock.cpp(10)
...............................................................................
D:\Repos\...\TestNTPClock.cpp(18): FAILED:
REQUIRE( duration_cast<hours>(ntp_dur)-duration_cast<hours>(sys_dur) == hours(24*25567) )
with expansion:
156013h == 613608h
===============================================================================
test cases: 1 | 1 failed
assertions: 1 | 1 failed
I also tried removing the offset of 25567 days in NTPClock::now and asserting equality, without success. I'm not sure what is going wrong here. Can anybody help?
Your tick period, 1/268'435'455, is unfortunately both extremely fine and doesn't lend itself to much of a reduced fraction when your desired conversions are used (i.e. between system_clock::duration and NTPClock::duration). This is leading to internal overflow of your unsigned long long NTPClock::rep.
For example, on Windows the system_clock tick period is 1/10,000,000 seconds. The current value of now() is around 1.6 x 10^16. To convert this to NTPClock::duration you have to compute 1.6 x 10^16 times 53,687,091/2,000,000. The first step in that is the value times the numerator of the conversion factor, which is about 8 x 10^23, which overflows unsigned long long.
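A back-of-the-envelope check of that claim (my numbers, computed in long double precisely to avoid the overflow being demonstrated):
#include <iostream>
#include <limits>

int main()
{
    long double ticks = 1.6e16L;      // approximate system_clock::now() count on Windows
    long double num   = 53687091.0L;  // numerator of the conversion factor
    std::cout << ticks * num << '\n'; // ~8.6e23
    std::cout << static_cast<long double>(
        std::numeric_limits<unsigned long long>::max()) << '\n'; // ~1.8e19
}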
There are a couple of ways to overcome this overflow, and both involve using at least an intermediate representation with a larger range. One could use a 128-bit integral type, but I don't believe that is available on Windows, except perhaps through a 3rd-party library. long double is another option. This might look like:
static time_point now() noexcept
{
    using imd = std::chrono::duration<long double, period>;
    return time_point
    (
        std::chrono::duration_cast<duration>(imd{std::chrono::system_clock::now().time_since_epoch()
                                                 + std::chrono::hours(24*25567)})
    );
}
That is, perform the offset shift with no conversion (system_clock::duration units), then convert that to the intermediate representation imd which has a long double rep, and the same period as NTPClock. This will use long double to compute 1.6 x 1016 times 53,687,091/2,000,000. Then finally duration_cast that to NTPClock::duration. This final duration_cast will end up doing nothing but casting long double to unsigned long long as the conversion factor is simply 1/1.
Another way to accomplish the same thing is:
static time_point now() noexcept
{
    return time_point
    (
        std::chrono::duration_cast<duration>(std::chrono::system_clock::now().time_since_epoch()
                                             + std::chrono::hours(24*25567)*1.0L)
    );
}
This takes advantage of the fact that you can multiply any duration by 1, but with alternate units and the result will have a rep with the common_type of the two arguments, but otherwise have the same value. I.e. std::chrono::hours(24*25567)*1.0L is a long double-based hours. And that long double carries through the rest of the computation until the duration_cast brings it back to NTPClock::duration.
This second way is simpler to write, but code reviewers may not understand the significance of the *1.0L, at least until it becomes a more common idiom.
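The effect of the idiom can be verified at compile time; a tiny sketch:
#include <chrono>
#include <type_traits>

int main()
{
    using namespace std::chrono;
    auto h = hours{24 * 25567} * 1.0L;  // duration<long double, ratio<3600>>
    static_assert(std::is_same<decltype(h)::rep, long double>::value,
                  "multiplying by 1.0L switches the rep to long double");
    (void)h;
}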
I need to convert std::chrono::time_point to and from a long type (64-bit integer). I'm starting to work with std::chrono...
Here is my code:
int main ()
{
    std::chrono::time_point<std::chrono::system_clock> now = std::chrono::system_clock::now();
    auto epoch = now.time_since_epoch();
    auto value = std::chrono::duration_cast<std::chrono::milliseconds>(epoch);
    long duration = value.count();

    std::chrono::duration<long> dur(duration);
    std::chrono::time_point<std::chrono::system_clock> dt(dur);

    if (dt != now)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}
This code compiles, but does not show success.
Why is dt different than now at the end?
What is missing on that code?
std::chrono::time_point<std::chrono::system_clock> now = std::chrono::system_clock::now();
This is a great place for auto:
auto now = std::chrono::system_clock::now();
Since you want to traffic at millisecond precision, it would be good to go ahead and convert to it in the time_point:
auto now_ms = std::chrono::time_point_cast<std::chrono::milliseconds>(now);
now_ms is a time_point, based on system_clock, but with the precision of milliseconds instead of whatever precision your system_clock has.
auto epoch = now_ms.time_since_epoch();
epoch now has type std::chrono::milliseconds. And this next statement becomes essentially a no-op (simply makes a copy and does not make a conversion):
auto value = std::chrono::duration_cast<std::chrono::milliseconds>(epoch);
Here:
long duration = value.count();
In both your and my code, duration holds the number of milliseconds since the epoch of system_clock.
This:
std::chrono::duration<long> dur(duration);
Creates a duration represented with a long, and a precision of seconds. This effectively reinterpret_casts the milliseconds held in value to seconds. It is a logic error. The correct code would look like:
std::chrono::milliseconds dur(duration);
This line:
std::chrono::time_point<std::chrono::system_clock> dt(dur);
creates a time_point based on system_clock, capable of holding the system_clock's native precision (typically finer than milliseconds). However the run-time value will correctly reflect that an integral number of milliseconds is held (assuming my correction on the type of dur).
Even with the correction, this test will (nearly always) fail though:
if (dt != now)
Because dt holds an integral number of milliseconds, but now holds an integral number of ticks finer than a millisecond (e.g. microseconds or nanoseconds). Thus only on the rare chance that system_clock::now() returned an integral number of milliseconds would the test pass.
But you can instead:
if (dt != now_ms)
And you will now get your expected result reliably.
Putting it all together:
int main ()
{
    auto now = std::chrono::system_clock::now();
    auto now_ms = std::chrono::time_point_cast<std::chrono::milliseconds>(now);

    auto value = now_ms.time_since_epoch();
    long duration = value.count();

    std::chrono::milliseconds dur(duration);
    std::chrono::time_point<std::chrono::system_clock> dt(dur);

    if (dt != now_ms)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}
Personally I find all the std::chrono overly verbose and so I would code it as:
int main ()
{
    using namespace std::chrono;
    auto now = system_clock::now();
    auto now_ms = time_point_cast<milliseconds>(now);

    auto value = now_ms.time_since_epoch();
    long duration = value.count();

    milliseconds dur(duration);
    time_point<system_clock> dt(dur);

    if (dt != now_ms)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}
Which will reliably output:
Success.
Finally, I recommend eliminating temporaries to reduce the code converting between time_point and integral type to a minimum. These conversions are dangerous, and so the less code you write manipulating the bare integral type the better:
int main ()
{
    using namespace std::chrono;
    // Get current time with precision of milliseconds
    auto now = time_point_cast<milliseconds>(system_clock::now());
    // sys_milliseconds is type time_point<system_clock, milliseconds>
    using sys_milliseconds = decltype(now);
    // Convert time_point to signed integral type
    auto integral_duration = now.time_since_epoch().count();
    // Convert signed integral type to time_point
    sys_milliseconds dt{milliseconds{integral_duration}};
    // test
    if (dt != now)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}
The main danger above is not interpreting integral_duration as milliseconds on the way back to a time_point. One possible way to mitigate that risk is to write:
sys_milliseconds dt{sys_milliseconds::duration{integral_duration}};
This reduces risk down to just making sure you use sys_milliseconds on the way out, and in the two places on the way back in.
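One way to package that advice (helper names are mine, not from the answer) is a pair of functions so the unit is spelled exactly once:
#include <chrono>
#include <cstdint>

using sys_milliseconds =
    std::chrono::time_point<std::chrono::system_clock, std::chrono::milliseconds>;

// time_point -> integral: the unit is fixed by sys_milliseconds
std::int64_t to_integral(sys_milliseconds tp)
{
    return tp.time_since_epoch().count();
}

// integral -> time_point: the same unit is recovered from the alias
sys_milliseconds from_integral(std::int64_t i)
{
    return sys_milliseconds{sys_milliseconds::duration{i}};
}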
And one more example: Let's say you want to convert to and from an integral which represents whatever duration system_clock supports (microseconds, 10th of microseconds or nanoseconds). Then you don't have to worry about specifying milliseconds as above. The code simplifies to:
int main ()
{
    using namespace std::chrono;
    // Get current time with native precision
    auto now = system_clock::now();
    // Convert time_point to signed integral type
    auto integral_duration = now.time_since_epoch().count();
    // Convert signed integral type to time_point
    system_clock::time_point dt{system_clock::duration{integral_duration}};
    // test
    if (dt != now)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}
This works, but if you run half the conversion (out to integral) on one platform and the other half (in from integral) on another platform, you run the risk that system_clock::duration will have different precisions for the two conversions.
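One way to hedge against that risk (a sketch of my own, not from the answer) is to pin the serialized unit explicitly, so both platforms agree regardless of their native system_clock::duration:
#include <chrono>
#include <cstdint>

// Fixed wire format: microseconds in a 64-bit integer on every platform.
using wire_duration = std::chrono::duration<std::int64_t, std::micro>;

std::int64_t to_wire(std::chrono::system_clock::time_point tp)
{
    return std::chrono::duration_cast<wire_duration>(tp.time_since_epoch()).count();
}

std::chrono::system_clock::time_point from_wire(std::int64_t v)
{
    using D = std::chrono::system_clock::duration;
    return std::chrono::system_clock::time_point{
        std::chrono::duration_cast<D>(wire_duration{v})};
}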
I would also note there are two ways to get the number of ms in the time point. I'm not sure which one is better, I've benchmarked them and they both have the same performance, so I guess it's a matter of preference. Perhaps Howard could chime in:
auto now = system_clock::now();
//Cast the time point to ms, then get its duration, then get the duration's count.
auto ms = time_point_cast<milliseconds>(now).time_since_epoch().count();
//Get the time point's duration, then cast to ms, then get its count.
auto ms = duration_cast<milliseconds>(now.time_since_epoch()).count();
The first one reads more clearly in my mind going from left to right.
As a single line:
long value_ms = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::time_point_cast<std::chrono::milliseconds>(std::chrono::high_resolution_clock::now()).time_since_epoch()).count();
time_point objects only support arithmetic with other time_point or duration objects.
You'll need to convert your long to a duration of specified units, then your code should work correctly.
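For example (a sketch; the variable names are made up):
#include <chrono>

int main()
{
    using namespace std::chrono;
    long ms_since_epoch = 1000000;  // some stored value
    system_clock::time_point tp{milliseconds{ms_since_epoch}};
    tp += milliseconds{250};  // time_point + duration: fine
    tp += seconds{1};         // any duration convertible to the clock's works
}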
I have a 32-bit Linux system in which I have to record data that is timestamped with a UINT32 second offset from an epoch of 1901-01-01 00:00:00.
Calculating the timestamp is OK for me, as I can use the 64-bit ticks() counter and ticks_per_second() functions to generate the seconds since epoch as follows (I only require second-level resolution):
const ptime ptime_origin(time_from_string("1901-01-01 00:00:00"));
time_duration my_utc = microsec_clock::universal_time() - ptime_origin;
boost::int64_t tick_per_sec = my_utc.ticks_per_second();
boost::int64_t tick_count = my_utc.ticks();
boost::int64_t sec_since_epoch = tick_count/tick_per_sec;
This works for me since I know that as an unsigned integer, the seconds count will not exceed the maximum UINT32 value (well not for many years anyway).
The problem I have is that my application can receive a modbus message containing a UINT32 value for which I have to set the hardware and system clock with an ioctl call using RTC_SET_TIME. This UINT32 is again the offset in seconds since my epoch 1901-01-01 00:00:00.
My problem now is that I have no way to create a ptime object using 64-bit integers: the ticks part of the time_duration objects is private, and I am restricted to using long, which on my 32-bit system is a 4-byte signed integer, not large enough to store the seconds offset from my epoch.
I have no control over the value of the epoch and so I am really stumped as to how I can create my required boost::posix_time::ptime object from the data I have.
I can probably obtain a dirty solution by calculating hard second counts to particular time intervals and using an additional epoch to make a bridge to allow this but I was wondering if there is something in the boost code that will allow me to solve the problem entirely using the boost datetime library.
I have read all the documentation I can find but I cannot see any obvious way to do this.
EDIT: I found this related question Convert int64_t to time_duration but the accepted answer there does NOT work for my epoch
Although boost::posix_time::seconds cannot be used if the seconds represent a number greater than 32 bits (as of Oct 2014), it turns out that boost::posix_time::milliseconds can easily be used (without workarounds), as follows:
inline std::string convertMsSinceEpochToString(std::int64_t const ms)
{
    boost::posix_time::ptime time_epoch(boost::gregorian::date(1970, 1, 1));
    boost::posix_time::ptime t = time_epoch + boost::posix_time::milliseconds(ms);
    return boost::posix_time::to_simple_string(t);
}
So, just convert your 64-bit seconds to (64-bit) milliseconds, and you're good to go!
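Applied to the question's 1901 epoch, that might look like this (a sketch along the same lines):
#include <boost/date_time/posix_time/posix_time.hpp>
#include <cstdint>

// Seconds since 1901-01-01 (the question's epoch), routed through 64-bit
// milliseconds to dodge the 32-bit limit of boost::posix_time::seconds.
inline boost::posix_time::ptime fromSecondsSince1901(std::uint32_t secs)
{
    boost::posix_time::ptime epoch(boost::gregorian::date(1901, 1, 1));
    return epoch + boost::posix_time::milliseconds(
                       static_cast<std::int64_t>(secs) * 1000);
}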
Note: Be very aware of compiler-dependent behaviour regarding the capacity of built-in integral types:
uint64_t offset = 113ul*365ul*24ul*60ul*60ul*1000ul; // 113 years give or take some leap seconds/days etc.?
would work on GCC or Clang, but would silently overflow the calculation on MSVC2013. You'd need to explicitly coerce the calculation to 64 bits:
uint64_t offset = uint64_t(113ul)*365*24*60*60*1000;
You could apply time_durations in the maximum allowable increments (which is std::numeric_limits<long>::max()) since the total_seconds field is limited to long (signed).
Note: I worded it as int32_t below so that it will still work correctly if compiled on a 64-bit platform.
Here's a small demonstration:
#include "boost/date_time.hpp"
#include <iostream>
using namespace boost::gregorian;
using namespace boost::posix_time;
int main()
{
uint64_t offset = 113ul*365ul*24ul*60ul*60ul; // 113 years give or take some leap seconds/days etc.?
static const ptime time_t_epoch(date(1901,1,1));
static const uint32_t max_long = std::numeric_limits<int32_t>::max();
std::cout << "epoch: " << time_t_epoch << "\n";
ptime accum = time_t_epoch;
while (offset > max_long)
{
accum += seconds(max_long);
offset -= max_long;
std::cout << "accumulating: " << accum << "\n";
}
accum += seconds(offset);
std::cout << "final: " << accum << "\n";
}
Prints:
epoch: 1901-Jan-01 00:00:00
accumulating: 1969-Jan-19 03:14:07
final: 2013-Dec-04 00:00:00
See it Live on Coliru
My current pattern (for Unix) is to call gettimeofday, cast the tv_sec field to a time_t, pass that through localtime, and combine the results with tv_usec. That gives me a full date (year, month, day, hour, minute, second, microseconds).
I'm trying to update my code to C++11 for portability and general good practice. I'm able to do the following:
auto currentTime = std::chrono::system_clock::now( );
const time_t time = std::chrono::system_clock::to_time_t( currentTime );
const tm *values = localtime( &time );
// read values->tm_year, etc.
But I'm stuck on the milliseconds/nanoseconds. For one thing, to_time_t says the rounding is implementation defined (!), so I don't know whether a final reading of 22.6 seconds should actually be 21.6; for another, I don't know how to get the number of milliseconds since the previous second (are seconds guaranteed by the standard to be regular? Could I get the total milliseconds since the epoch and just take it modulo 1000? Even if that is OK, it feels ugly).
How should I get the current date from std::chrono::system_clock with milliseconds?
I realised that I can use from_time_t to get a "rounded" value, and check which type of rounding occurred. This also doesn't rely on every second being exactly 1000 milliseconds, and works with out-of-the-box C++11:
const auto currentTime = std::chrono::system_clock::now( );
time_t time = std::chrono::system_clock::to_time_t( currentTime );
auto currentTimeRounded = std::chrono::system_clock::from_time_t( time );
if( currentTimeRounded > currentTime ) {
    --time;
    currentTimeRounded -= std::chrono::seconds( 1 );
}
const tm *values = localtime( &time );
int year = values->tm_year + 1900;
// etc.
int milliseconds = std::chrono::duration_cast<std::chrono::duration<int,std::milli> >( currentTime - currentTimeRounded ).count( );
Using this free, open-source library you can get the local time with millisecond precision like this:
#include "tz.h"
#include <iostream>
int
main()
{
using namespace date;
using namespace std::chrono;
std::cout << make_zoned(current_zone(),
floor<milliseconds>(system_clock::now())) << '\n';
}
This just output for me:
2016-09-06 12:35:09.102 EDT
make_zoned is a factory function that creates a zoned_time<milliseconds>. The factory function deduces the desired precision for you. A zoned_time is a pairing of a time_zone and a local_time. You can get the local time out with:
local_time<milliseconds> lt = zt.get_local_time();
local_time is a chrono::time_point. You can break this down into date and time field types if you want like this:
auto zt = make_zoned(current_zone(), floor<milliseconds>(system_clock::now()));
auto lt = zt.get_local_time();
local_days ld = floor<days>(lt); // local time truncated to days
year_month_day ymd{ld}; // {year, month, day}
time_of_day<milliseconds> time{lt - ld}; // {hours, minutes, seconds, milliseconds}
// auto time = make_time(lt - ld); // another way to create time_of_day
auto y = ymd.year(); // 2016_y
auto m = ymd.month(); // sep
auto d = ymd.day(); // 6_d
auto h = time.hours(); // 12h
auto min = time.minutes(); // 35min
auto s = time.seconds(); // 9s
auto ms = time.subseconds(); // 102ms
Instead of using to_time_t, which may round, you can do this:
auto tp = std::chrono::system_clock::now();
auto s = std::chrono::duration_cast<std::chrono::seconds>(tp.time_since_epoch());
auto t = static_cast<time_t>(s.count());
That way you get the seconds without the round-off. It is more efficient than checking the difference between to_time_t and from_time_t.
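Completing that idea (a sketch): the millisecond remainder can be taken from the same time_point, so the pair (t, ms) stays self-consistent.
#include <chrono>
#include <ctime>

int main()
{
    using namespace std::chrono;
    auto tp = system_clock::now();
    auto since_epoch = tp.time_since_epoch();
    auto s  = duration_cast<seconds>(since_epoch);          // truncated seconds
    auto ms = duration_cast<milliseconds>(since_epoch - s); // 0..999 remainder
    std::time_t t = static_cast<std::time_t>(s.count());    // feed to localtime()
    (void)t; (void)ms;
}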
I read the standard like this:
It is implementation defined whether the value is rounded or truncated, but naturally the rounding or truncation only occurs on the most detailed part of the resulting time_t. That is, the combined information you get from time_t is never more wrong than 0.5 of its granularity.
If time_t on your system only supported seconds, you would be right that there could be a systematic uncertainty of 0.5 seconds (unless you find out how things were implemented).
tv_usec is not standard C++, but a field of struct timeval on POSIX. To conclude, you should not expect any rounding effects bigger than half of the smallest time value difference your system supports, so certainly not more than 0.5 microseconds.
The most straightforward way is to use Boost's ptime, which has methods such as fractional_seconds():
http://www.boost.org/doc/libs/1_53_0/doc/html/date_time/posix_time.html#date_time.posix_time.ptime_class
For interop with std::chrono, you can convert as described here: https://stackoverflow.com/a/4918873/1149664
Or, have a look at this question: How to convert std::chrono::time_point to calendar datetime string with fractional seconds?