Convert float seconds to chrono::duration - c++

What is the easiest and elegant way to convert float time in seconds to std::chrono::duration<int64_t, std::nano>?
Is it just converting seconds to nanoseconds and passing to the std::chrono::duration constructor?
I have tried this code:
constexpr auto durationToDuration(const float time_s)
{
// need to convert the input in seconds to nanoseconds that duration takes
const std::chrono::duration<int64_t, std::nano> output{static_cast<int64_t>(time_s * 1000000000.0F)};
return output;
}
But it doesn't convert correctly for many values of the input time_s.

The best way is also the easiest and safest. Safety is a key aspect of using chrono. Safety translates to: Least likely to contain programming errors.
There are two steps to this:
Convert the float to a chrono::duration that is represented by a float and has the period of seconds.
Convert the resultant duration of step 1 to nanoseconds (which is the same thing as duration<int64_t, std::nano>).
This might look like this:
constexpr
auto
durationToDuration(const float time_s)
{
using namespace std::chrono;
using fsec = duration<float>;
return round<nanoseconds>(fsec{time_s});
}
fsec is the resultant type of step 1. It does absolutely no computation, and just changes the type from float to a chrono::duration. Then the chrono engine is used to do the actual computation, changing one duration into another duration.
The round utility is used because floating point types are vulnerable to round-off error. So if a floating point value is close to an integral number of nanoseconds, but not exact, one usually desires that close value.
But std::chrono::round is really a C++17 facility. For C++14, just use one of the free, open-source versions floating around the web (http://howardhinnant.github.io/duration_io/chrono_util.html or https://github.com/HowardHinnant/date/blob/master/include/date/date.h).
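For reference, a minimal usage sketch (assuming the durationToDuration above and C++17):
#include <chrono>
#include <iostream>
int main()
{
    // 1.5 s is exactly representable as a float and converts to 1'500'000'000 ns.
    std::chrono::nanoseconds ns = durationToDuration(1.5F);
    std::cout << ns.count() << "ns\n";  // prints 1500000000ns
}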

Related

Converting from std::chrono:: to 32 bit seconds and nanoseconds?

This could be the inverse of Converting from struct timespec to std::chrono::?
I am getting my time as
const std::Chrono::CRealTimeClock::time_point RealTimeClockTime = std::Chrono::CRealTimeClock::now();
and I have to convert it to a struct timespec.
Actually, I don't, if there is an alternative; what I have to do is get the number of seconds since the epoch and the number of nanoseconds since the last second.
I chose struct timespec because
struct timespec
{
time_t tv_sec; // Seconds - >= 0
long tv_nsec; // Nanoseconds - [0, 999999999]
};
The catch is that I need to shoehorn the seconds and nanoseconds into uint32_t.
I am aware that there is a danger of loss of precision, but reckon that we don't care too much about the nanoseconds, while the year 2038 problem gives me cause for concern.
However, I have to bang out some code now and we can update it later if necessary. The code has to meet another manufacturer's specification and it is likely to take weeks or months to get this problem resolved and use uint64_t.
So, how can I, right now, obtain 32 bit values of second and nanosecond from std::Chrono::CRealTimeClock::now()?
I'm going to ignore std::Chrono::CRealTimeClock::now() and just pretend you wrote std::chrono::system_clock::now(). Hopefully that will give you the tools to deal with whatever clock you actually have.
Assume:
#include <cstdint>
struct my_timespec
{
std::uint32_t tv_sec; // Seconds - >= 0
std::uint32_t tv_nsec; // Nanoseconds - [0, 999999999]
};
Now you can write:
#include <chrono>
my_timespec
now()
{
using namespace std;
using namespace std::chrono;
auto tp = system_clock::now();
auto tp_sec = time_point_cast<seconds>(tp);
nanoseconds ns = tp - tp_sec;
return {static_cast<uint32_t>(tp_sec.time_since_epoch().count()),
static_cast<uint32_t>(ns.count())};
}
Explanation:
I've used function-local using directives to reduce code verbosity and increase readability. If you prefer you can use using declarations instead to bring individual names into scope, or you can explicitly qualify everything.
The first job is to get now() from whatever clock you're using.
Next use std::chrono::time_point_cast to truncate the precision of tp to seconds precision. One important note is that time_point_cast truncates towards zero. So this code assumes that now() is after the clock's epoch and returns a non-negative time_point. If this is not the case, then you should use C++17's floor instead. floor always truncates towards negative infinity. I chose time_point_cast over floor only because of the [c++14] tag on the question.
The expression tp - tp_sec is a std::chrono::duration representing the time duration since the last integral second. This duration is implicitly converted to have units of nanoseconds. This implicit conversion is typically fine as all implementations of system_clock::duration have units that are either nanoseconds or coarser (and thus implicitly convertible to nanoseconds). If your clock tracks units of picoseconds (for example), then you will need a duration_cast<nanoseconds>(tp - tp_sec) here to truncate picoseconds to nanoseconds precision.
Now you have the {seconds, nanoseconds} information in {tp_sec, ns}. It's just that they are still in std::chrono types and not uint32_t as desired. You can extract the internal integral values with the member functions .time_since_epoch() and .count(), and then static_cast those resultant integral types to uint32_t. The final static_casts are optional, as integral conversions can be made implicitly. However, their use is considered good style.
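For completeness, a minimal usage sketch, assuming the my_timespec and now() definitions above:
#include <iostream>
int main()
{
    my_timespec ts = now();
    // Whole seconds since the epoch, plus nanoseconds past that second.
    std::cout << ts.tv_sec << " s + " << ts.tv_nsec << " ns\n";
}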

Time-stamping using std::chrono - How to 'filter' data based on relative time?

I want to time-tag a stream of data I produce, for which I want to use std::chrono::steady_clock.
These time-stamps are stored with the data ( as array of uint64 values?), and I will later need to process these time-stamps again.
Now, I haven't been using the std::chrono library at all so far, so I do need a bit of help on the syntax and best practices with this library.
I can get & store values using:
uint64_t timestamp = std::chrono::steady_clock::now().time_since_epoch().count();
but how do I best:
On reading the data create a timepoint from the uint64 ?
Get the ticks-per-second (uint64) value for the steady_clock?
Find a "cut-off" timepoint (as uint64) that lies a certain time (in seconds) prior a given timepoint?
Code snippets for the above would be appreciated.
I want to combine the three above essentially to do the following: Having an array of (increasing) time-stamp values (as uint64), I want to truncate it such that all data 'older' than last-time-stamp minus X seconds is thrown away.
Let's have a look at the features you might use in the cppreference documentation for chrono.
First off, you need to decide which clock you want to use. There is the steady_clock which you suggested, the high_resolution_clock and the system_clock.
high_resolution_clock is implementation dependent, so let's put this away unless we really need it. The steady_clock is guaranteed to be monotonic, but there is no guarantee about the meaning of the value you are getting. It's ideal for sorting events or measuring their intervals, but you can't get a calendar date or wall-clock time out of it.
On the other hand, system_clock has a defined meaning: its epoch is the UNIX epoch. So you can get a calendar time out of it, but it is not guaranteed to be monotonic.
To get the period (duration of one tick) of a steady_clock, you can use its period member type:
auto period = std::chrono::steady_clock::period();
std::cout << "Clock period " << period.num << " / " << period.den << " seconds" << std::endl;
std::cout << "Clock period " << static_cast<double>(period.num) / period.den << " seconds" << std::endl;
Assuming you want to filter events that happened in the last few seconds using steady_clock values, you first need to compute the number of ticks in the time period you want and subtract it from now. Something along the lines of:
std::chrono::system_clock::time_point now = std::chrono::system_clock::now();
std::time_t t_c = std::chrono::system_clock::to_time_t(now - std::chrono::seconds(10));
And use t_c as cutoff point.
However, do not rely on std::chrono::steady_clock::now().time_since_epoch().count(); to get something meaningful - it's just a number. The epoch for the steady_clock is usually the boot time. If you need a calendar time, you should use system_clock (keeping in mind that it is not monotonic).
C++20 introduces some more clocks, which are convertible to calendar time.
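That said, a minimal sketch of the cut-off filtering the question asks for, assuming the timestamps were stored as steady_clock ticks via steady_clock::now().time_since_epoch().count() (the names timestamps and keep_last are placeholders):
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <vector>
// Drop every timestamp older than keep_last seconds before the newest one.
void truncate_old(std::vector<std::uint64_t>& timestamps, std::chrono::seconds keep_last)
{
    using namespace std::chrono;
    if (timestamps.empty()) return;
    // Express the window in steady_clock ticks (exact, integer arithmetic only).
    const auto window = static_cast<std::uint64_t>(duration_cast<steady_clock::duration>(keep_last).count());
    const std::uint64_t last = timestamps.back();
    const std::uint64_t cutoff = last > window ? last - window : 0;
    // Timestamps are increasing, so the first element >= cutoff starts the range to keep.
    timestamps.erase(timestamps.begin(), std::lower_bound(timestamps.begin(), timestamps.end(), cutoff));
}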
As it took me far too long to figure it out from various sources today, I'm going to post my solution here as self-answer. ( I would appreciate comments on it, in case something is not correct or could be done better.)
Getting a clock's period in seconds and ticks-per-second value
using namespace std::chrono;
auto period = system_clock::period();
double period_s = (double) period.num / period.den;
uint64 tps = period.den / period.num;
Getting a clock's timepoint (now) as uint64 value for time-stamping a data stream
using namespace std::chrono;
system_clock::time_point tp_now = system_clock::now();
uint64 nowAsTicks = tp_now.time_since_epoch().count();
Getting a clock's timepoint given a stored uint64 value
using namespace std::chrono;
uint64 givenTicks = 12345; // Whatever the value was
system_clock::time_point tp_recreated = system_clock::time_point{} + system_clock::duration(givenTicks);
uint64 recreatedTicks = tp_recreated.time_since_epoch().count();
assert( givenTicks == recreatedTicks ); // has to be true now
The last one (uint64 to timepoint) was troubling me the most. The key insights needed were:
(On Win10) The system_clock uses a time resolution of 100 nanoseconds. Therefore one cannot directly add std::chrono::nanoseconds to its native time points (std::chrono::system_clock::time_point).
However, because each tick is 100 nanoseconds, one also cannot use the next coarser duration unit (microseconds), as a tick cannot be represented as an integer number of microseconds.
One could use an explicit cast to microseconds, but that would lose the 0.1 us resolution of the tick.
The proper way is to use the system_clock's own duration and directly initialize it with the stored tick value.
In my search I found the following resources most helpful:
Howard Hinnant's lecture on YouTube - extremely helpful. I wish I had started here.
cppreference.com on time_point and duration and time_since_epoch
cplusplus.com on steady clock and time_point
A nice place to look, as usual, is the reference manual:
https://en.cppreference.com/w/cpp/chrono
In this case you are looking for:
https://en.cppreference.com/w/cpp/chrono/clock_time_conversion
since you are effectively using a clock with the epoch 1/1/1970 as its origin and milliseconds as its unit.
Then just use arithmetic on durations to do the cut-off you want:
https://en.cppreference.com/w/cpp/chrono/duration
There are code examples at bottom of each linked page.

Does Howard Hinnant's date::parse() function work with floating-point durations?

I'm trying to parse 2020-03-25T08:27:12.828Z into std::chrono::time_point<system_clock, duration<double>> using Howard Hinnant's Date library.
It is expected that the following code outputs two identical strings:
#include "date.h"
#include <chrono>
#include <string>
#include <iostream>
using namespace std;
using namespace std::chrono;
using namespace date;
int main() {
double d = 1585124832.828;
time_point<system_clock, duration<double>> t{duration<double>{d}}, t1;
string s {format("%FT%TZ", t) };
cout << s << "\n";
stringstream ss {s};
ss >> parse("%FT%TZ", t1);
cout << format("%FT%TZ", t1) << "\n";
}
But I get:
2020-03-25T08:27:12.828000Z
1970-01-01T00:00:00.000000Z
When I declare t and t1 as follows:
time_point<system_clock, milliseconds> t{duration_cast<milliseconds>(duration<double>{d})}, t1;
the code works as expected, i.e. it outputs two identical lines.
It can parse floating point, but you really don't want to. Your fix using milliseconds is the recommended way to go.
Explanation:
When you format the double-based seconds, it uses fixed formatting which defaults to 6 decimal places. This is why you see the 3 trailing zeroes after the .828.
On parse, the expected precision is driven by the precision of the input type, even if the rep is floating point. So with duration<double> it only parses the integral part of the seconds. Then it starts looking for the trailing Z and finds . instead. This causes the parse to fail. If you didn't have the Z in the parse string, it wouldn't fail, but it also wouldn't parse the fractional part of the seconds.
If you changed the time_point to be double-based microseconds, then it works again:
time_point<system_clock, duration<double, micro>> t{duration<double>{d}}, t1;
But I consider this way too cryptic and subtle, and it still has another problem you haven't hit yet: In C++17, round is supplied by the vendor as std::chrono::round, and this is used under the hood of parse. And the C++17 version of round does not permit the destination duration to have a floating point rep. So your code won't even compile in C++17 or later.
Using integral-based milliseconds avoids all of this complication with parsing floating point values. And you can still convert the result back to duration<double> if you want to.
One subtle suggestion though:
time_point<system_clock, milliseconds> t{round<milliseconds>(duration<double>{d})}, t1;
When converting from floating-point based to integral-based reps, I like to use round instead of duration_cast. This avoids off-by-one errors when the underlying double doesn't exactly represent the desired value and duration_cast truncates in the wrong direction (towards zero). round will round towards the nearest representable value.
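A small illustration of the difference (the value below is deliberately chosen just under a millisecond boundary; C++17 assumed for round):
#include <chrono>
using namespace std::chrono;
// 0.828999999 s sits just below the 829 ms boundary.
constexpr duration<double> d{0.828999999};
static_assert(duration_cast<milliseconds>(d).count() == 828);  // truncates towards zero
static_assert(round<milliseconds>(d).count() == 829);          // rounds to nearest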
The t declaration above can be further simplified with the date::sys_time templated type alias:
sys_time<milliseconds> t{round<milliseconds>(duration<double>{d})}, t1;
The above is exactly equivalent. sys_time<duration> is a type alias for time_point<system_clock, duration>. And in C++20, it simplifies even further with new CTAD rules:
sys_time t{round<milliseconds>(duration<double>{d})};
The <milliseconds> is deduced from the type of the argument (C++20). Though you then have to declare t1 separately, or give t1 an initial milliseconds value (e.g. by appending , t1{0ms}; to the declaration).
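Putting it together, a corrected version of the question's program might look like this (a sketch using the milliseconds-based time_point and round as recommended above):
#include "date.h"
#include <chrono>
#include <iostream>
#include <sstream>
#include <string>
int main() {
    using namespace std::chrono;
    using namespace date;
    double d = 1585124832.828;
    sys_time<milliseconds> t{round<milliseconds>(duration<double>{d})}, t1;
    std::string s = format("%FT%TZ", t);
    std::cout << s << '\n';
    std::istringstream ss{s};
    ss >> parse("%FT%TZ", t1);
    std::cout << format("%FT%TZ", t1) << '\n';  // both lines should read 2020-03-25T08:27:12.828Z
}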

How safe is it to assume time_t is in seconds?

I'm doing a lot of calculations with times, building time objects relative to other time objects by adding seconds. The code is supposed to run on embedded devices and servers. Most documentation says that time_t is some arithmetic type, usually storing the time since the epoch. How safe is it to assume that time_t stores a number of seconds since something? If we can assume that, then we can just use addition and subtraction rather than localtime, mktime and difftime.
So far I've solved the problem by using a constexpr bool time_tUsesSeconds, denoting whether it is safe to assume that time_t uses seconds. If it's non-portable to assume time_t is in seconds, is there a way to initialize that constant automatically?
time_t timeByAddingSeconds(time_t theTime, int timeIntervalSeconds) {
if (time_tUsesSeconds){
return theTime + timeIntervalSeconds;
} else {
tm timeComponents = *localtime(&theTime);
timeComponents.tm_sec += timeIntervalSeconds;
return mktime(&timeComponents);
}
}
The fact that it is in seconds is stated by the POSIX specification, so, if you're coding for POSIX-compliant environments, you can rely on that.
The C++ standard also states that time_t must be an arithmetic type.
Anyway, the Unix timing system (seconds since the Epoch) is going to overflow in 2038 on systems with a 32-bit time_t. So it's very likely that, before that date, C++ implementations will switch to other data types: either a 64-bit integer or a more complex datatype. Switching to a 64-bit integer would break binary compatibility with previous code (since it requires bigger variables), and everything would have to be recompiled. Using 32-bit opaque handles would not break binary compatibility: you could change the underlying library and everything would still work, but time_t would not be a time in seconds anymore; it would be an index into an array of times in seconds. For this reason, it's suggested that you use the functions you mentioned to manipulate time_t values, and do not assume anything about time_t.
If C++11 is available, you can use std::chrono::system_clock's to_time_t and from_time_t to convert to/from std::chrono::time_point, and use chrono's arithmetic operators.
If your calculations involve the Gregorian calendar, you can use the HowardHinnant/date library, or C++20's new calendar facilities in chrono (they have essentially the same API).
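For example, a minimal sketch of that chrono-based arithmetic (add_seconds is just an illustrative helper name):
#include <chrono>
#include <ctime>
// Add a number of seconds to a time_t by round-tripping through system_clock.
std::time_t add_seconds(std::time_t t, int secs)
{
    using namespace std::chrono;
    auto tp = system_clock::from_time_t(t) + seconds{secs};
    // time_point_cast is a no-op on common implementations; it just guards against
    // the addition changing the time_point's duration type.
    return system_clock::to_time_t(time_point_cast<system_clock::duration>(tp));
}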
There is no requirement in standard C or in standard C++ for the units that time_t represents. To work with seconds portably you need to use struct tm. You can convert between time_t and struct tm with mktime and localtime.
Rather than determine whether time_t is in seconds, since time_t is an arithmetic type, you can instead calculate a time_t value that represents one second, and work with that. This answer I wrote before explains the method and has some caveats; here's some example code (bad_time() is a custom exception class):
time_t get_sec_diff() {
std::tm datum_day;
datum_day.tm_sec = 0;
datum_day.tm_min = 0;
datum_day.tm_hour = 12;
datum_day.tm_mday = 2;
datum_day.tm_mon = 0;
datum_day.tm_year = 30;
datum_day.tm_isdst = -1;
const time_t datum_time = mktime(&datum_day);
if ( datum_time == -1 ) {
throw bad_time();
}
datum_day.tm_sec += 1;
const time_t next_sec_time = mktime(&datum_day);
if ( next_sec_time == -1 ) {
throw bad_time();
}
return (next_sec_time - datum_time);
}
You can call the function once and store the value in a const, and then just use it whenever you need a time_t second. I don't think it'll work in a constexpr though.
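For example (one_sec and the helper below are just illustrative names, using the get_sec_diff above):
#include <ctime>
// Compute the "one second" value once, then do plain time_t arithmetic with it.
const std::time_t one_sec = get_sec_diff();
std::time_t ten_seconds_later(std::time_t t)
{
    return t + 10 * one_sec;
}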
My two cents: on Windows it is in seconds over time, but the time it takes for one second to increment to the next is usually 18*54.925 ms and sometimes 19*54.925 ms. The reason for this is explained in this post.
(Answering own question)
One answer suggests that as long as one is using posix, time_t is in seconds and arithmetic on time_t should work.
A second answer calculates the time_t value per second, and uses that as a factor when doing arithmetic. But it still makes some assumptions about time_t.
In the end I decided portability is more important; I don't want my code to fail silently on some embedded device. So I used a third way. It involves storing an integer denoting the time since the program starts. I.e. I define
const static time_t time0 = time(nullptr);
static tm time0Components = *localtime(&time0);
All time values used throughout the program are just integers, denoting the time difference in seconds since time0. To go from time_t to this delta seconds, I use difftime. To go back to time_t, I use something like this:
time_t getTime_t(int timeDeltaSeconds) {
tm components = time0Components;
components.tm_sec += timeDeltaSeconds;
return mktime(&components);
}
This approach allows making operations like +,- cheap, but going back to time_t is expensive. Note that the time delta values are only meaningful for the current run of the program. Note also that time0Components has to be updated when there's a time zone change.
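For completeness, the helper in the other direction is cheap (a sketch, using difftime as described above; getDeltaSeconds is just an illustrative name):
// time_t -> delta seconds relative to time0
int getDeltaSeconds(time_t t) {
    return static_cast<int>(difftime(t, time0));
}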

chrono with different time periods?

Currently I am using boost::rational<std::uint64_t> to keep track of time in my application.
Basically I have a clock that runs over a very long period of time and will be ticked by different components of different time resolutions, e.g. 1/50s, 1/30s, 1001/30000s etc... I want to maintain perfect precision, i.e. no floating point. boost::rational works well for this purpose, however I think it would be better design to use std::chrono::duration for this.
My problem though is, how can I use std::chrono::duration here? Since it uses a compile time period I don't quite see how I can use it in my scenario where I need to maintain precision?
If I'm understanding your question, and if you know all of the different time resolutions at compile-time, then the following will do what you want. You can figure out the correct tick period by using common_type on all of your different time resolutions as shown below:
#include <cstdint>
#include <chrono>
struct clock
{
typedef std::uint64_t rep;
typedef std::common_type
<
std::chrono::duration<rep, std::ratio<1, 50>>,
std::chrono::duration<rep, std::ratio<1, 30>>,
std::chrono::duration<rep, std::ratio<1001, 30000>>
>::type duration;
typedef duration::period period;
typedef std::chrono::time_point<clock> time_point;
static const bool is_steady = true;
static time_point now()
{
// just as an example
using namespace std::chrono;
return time_point(duration_cast<duration>(steady_clock::now().time_since_epoch()));
}
};
This will compute at compile-time the largest tick period which will exactly represent each of your specified resolutions. For example with this clock one can exactly represent:
1/50 with 600 ticks.
1/30 with 1000 ticks.
1001/30000 with 1001 ticks.
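As a quick compile-time sanity check (assuming the clock definition above; needs <ratio> and <type_traits>):
// common_type of 1/50, 1/30 and 1001/30000 second ticks is 1/30000 second.
static_assert(std::is_same<clock::period, std::ratio<1, 30000>>::value,
              "unexpected tick period");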
The code below exercises this clock and uses the "chrono_io" facility described here to print out not only the run-time number of ticks of your clock, but also the compile-time units of your clock-tick:
#include <iostream>
#include <thread>
#include "chrono_io"
int main()
{
auto t0 = clock::now();
std::this_thread::sleep_for(std::chrono::milliseconds(20));
auto t1 = clock::now();
std::cout << (t1-t0) << '\n';
}
For me this prints out:
633 [1/30000]seconds
Meaning: There were 633 clock ticks between calls to now() and the unit of each tick is 1/30000 of a second. If you don't want to be beholden to "chrono_io" you can inspect the units of your clock with clock::period::num and clock::period::den.
If your different time resolutions are not compile-time information, then your current solution with boost::rational is probably best.
You're allowed to set the period to 1 and use a floating point type for Rep.
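For instance, if an approximation is acceptable (the values below are just the question's example resolutions):
#include <chrono>
// Period of one second (the default), floating-point representation.
using fsec = std::chrono::duration<double>;
fsec pal_frame{1.0 / 50};                 // 0.02 s
fsec ntsc_frame{1001.0 / 30000};          // ~0.0333667 s
fsec total = pal_frame + 3 * ntsc_frame;  // mixed resolutions add directly, at the cost of exactness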
I suspect that you can do the same thing with boost::rational, but you'll have to look quite closely at std::chrono, which I haven't done. Look at treat_as_floating_point and duration_values. Also try to figure out what the standard means by "An arithmetic type or a class emulating an arithmetic type".
One might reasonably argue that if boost::rational doesn't emulate an arithmetic type, then it's not doing its job. But it doesn't necessarily follow that it really does everything std::chrono::duration expects.