I have code as follows:
int main() {
    ....
    auto time = std::chrono::system_clock::now().time_since_epoch() / std::chrono::milliseconds(1);
    ....
    return 0;
}
The variable time here is reported as l by typeid(time).name(), but is it safe to assume that if I replace auto with long type, the variable will still store the correct amount of milliseconds across different machines?
I need this because I cannot use auto as the type of class members, since they aren't constexpr or static, where that might have been possible. My intent is to send the data to a browser, where I can do var d = new Date(time) and it displays the correct time. The communication part has been figured out via JSON; I'm only stuck on how to store the value correctly across different systems.
[...] is it safe to assume that if I replace auto with long type, the variable will still store the correct amount of milliseconds across different machines?
No, you need a signed integer type of at least 45 bits, which long is not guaranteed to be. You should use std::chrono::milliseconds::rep:
using namespace std::chrono;
milliseconds::rep time =
    duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
Also note that, in terms of portability, the system_clock's epoch is not guaranteed by the standard to be January 1st 1970 00:00:00 UTC (even if that is the case most of the time).
Both of the existing answers are good. But as long as you're in C++ I encourage you to make the data member of type:
std::chrono::time_point<std::chrono::system_clock, std::chrono::milliseconds>
I know this is an ugly mouthful, but it can easily be made prettier. It is also easy to use. And it will help prevent run time errors in your C++ code.
Make it prettier
I recommend this templated using:
template <class Duration>
using sys_time = std::chrono::time_point<std::chrono::system_clock, Duration>;
Now you can make your data member have type:
sys_time<std::chrono::milliseconds> time_;
This is much more readable, and it exactly preserves the semantics that you are storing a time point, not an arbitrary number, or the number of calories in grapefruit.
Type Safety
Let's say six months from now you are re-visiting this code and you write:
auto z = x.time_ + y.time_;
If you had previously decided to give time_ the type std::int64_t, or std::chrono::milliseconds::rep, then the above new line of code compiles, and it is a run time error. It makes no sense to add two points in time. Tomorrow + today is nonsensical.
However if you had previously decided to give time_ the type sys_time<milliseconds> as I suggest, the above line of code creating z does not compile. The type system has detected the logic error at compile time. Now you are forced to immediately re-visit your logic and discover why you are attempting to add two time points. Maybe it was just a typo and you meant to subtract them (which is logical, compiles, and results in a duration of type milliseconds).
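To illustrate, here is a minimal sketch (the structs X and Y and the function f are hypothetical, purely to show what does and does not compile):
#include <chrono>

template <class Duration>
using sys_time = std::chrono::time_point<std::chrono::system_clock, Duration>;

// Hypothetical wrappers, just to demonstrate the type safety.
struct X { sys_time<std::chrono::milliseconds> time_; };
struct Y { sys_time<std::chrono::milliseconds> time_; };

void f(const X& x, const Y& y)
{
    // auto z = x.time_ + y.time_;  // error: adding two time_points does not compile
    auto z = x.time_ - y.time_;     // fine: the difference is a std::chrono::milliseconds duration
    (void)z;
}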
Ease of use
You can assign now() to your time_ data member with this simple syntax:
using namespace std::chrono;
time_ = time_point_cast<milliseconds>(system_clock::now());
Now time_ is just another system_clock-based time_point but with a precision of milliseconds. For outputting to json you can get the internal signed integral value with:
json_stream << time_.time_since_epoch().count();
For parsing in from json you can:
std::int64_t temp;
json_stream >> temp;
time_ = sys_time<milliseconds>{milliseconds{temp}};
Your approach will work and is portable, but I suggest using a more straightforward approach for counting milliseconds:
std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count()
This will work because .count() returns a std::chrono::milliseconds::rep, which is "a signed integral type of at least 45 bits", so it fits in long long (though, as noted in the other answer, a plain long is not guaranteed to be wide enough).
Note: it is not guaranteed that system_clock will have millisecond resolution. But in any case you will get result in milliseconds.
Side note: it can be good to use using namespace std::chrono; because this will reduce code length significantly.
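For instance, a minimal sketch of the same expression with that using-directive in place (main and the ms variable are just for illustration):
#include <chrono>

int main()
{
    using namespace std::chrono;
    // Same expression as above, shortened by the using-directive.
    auto ms = duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
    (void)ms;  // e.g. store it in a 64-bit member or write it to JSON
}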
Related
I get a timestamp from a GPS device in a gps_data struct as a double.
I'd like to convert this GPS timestamp to UTC and TAI times, something simple as:
void handle_gps_timestamp(double timestamp)
{
    double utc = utc_from_gps(timestamp);
    double tai = tai_from_gps(timestamp);
    do_stuff(gps, utc, tai);
}
Luckily I found Howard Hinnant's date and timezone library (proposed for C++20) that seems to provide this exact functionality. Unfortunately, at least from what I can see, the date/tz/chrono library has no convenient methods that allow this simple usage.
I must first somehow "transfer" my double into a known chrono/date type. That is OK, since I understand the overall concept: a time point is defined as a duration after (or before) the epoch of a clock, and I think that is a beautiful model.
Assumption
I should be able to very easily translate that model to fit my problem, right?
In my case, I have a timestamp that is a point in time, specified as the duration since the gps epoch. Now, there should be a class type of a clock that abstracts and handles all of this for me, I'd like to think. And yes! There is a date::gps_clock and a date::gps_time, which surely should do the work.
Problem
I cannot make it work for me. I'm sure the solution is trivial.
Question
Can someone give me a helping hand, showing how I should use Howard's date library applied to my problem?
It is difficult to answer this question precisely because the input to the problem is underspecified:
I get a timestamp from a GPS device in a gps_data struct as a double ... specified as the duration since the gps epoch.
Therefore I'm going to make some assumptions. I'll state all of my assumptions, and hopefully it will be clear how to alter my answer for other guesses/facts about what that double represents.
Let's say that the double is a non-integral count of milliseconds since the gps epoch. Let's furthermore assume that I want to capture the precision of this input down to microseconds.
#include "date/tz.h"
#include <cstdint>
#include <iostream>
int
main()
{
double gps_input = 1e+12 + 1e-3;
using namespace date;
using namespace std::chrono;
using dms = duration<double, std::milli>;
gps_time<microseconds> gt{round<microseconds>(dms{gps_input})};
auto utc = clock_cast<utc_clock>(gt);
auto tai = clock_cast<tai_clock>(gt);
std::cout << gt << " GPS\n";
std::cout << utc << " UTC\n";
std::cout << tai << " TAI\n";
}
I've arbitrarily created an example input and stored it in gps_input.
Some using directives make the code a lot less verbose.
A custom chrono::duration type that exactly matches the documented specification for what the double represents makes things much simpler, and lessens the chance for errors. In this case I've made a chrono::duration that stores milliseconds in a double and named that type dms.
Now you simply convert the double to dms, and then using round, convert the dms to microseconds, and store those microseconds in a gps time point with precision microseconds or finer. One could use duration_cast in place of round, but when converting from floating point to integral, I usually prefer round, which means round-to-nearest-and-to-even-on-tie.
Now that you have a gps_time, one can use the clock_cast function to convert to other times such as utc_time and tai_time.
This program outputs:
2011-09-14 01:46:40.000001 GPS
2011-09-14 01:46:25.000001 UTC
2011-09-14 01:46:59.000001 TAI
Adjust the milliseconds and microseconds units above as needed. For example if the input represents seconds, the easiest thing to do is to default the second template argument on dms:
using dms = duration<double>;
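For instance, a minimal sketch of that seconds-based variant (the value gps_seconds is a made-up example input):
#include "date/tz.h"
#include <iostream>

int
main()
{
    using namespace date;
    using namespace std::chrono;

    // Assuming the double is now a count of seconds since the GPS epoch.
    double gps_seconds = 1e9 + 0.5;

    using dsec = duration<double>;  // double-based seconds

    gps_time<microseconds> gt{round<microseconds>(dsec{gps_seconds})};

    std::cout << gt << " GPS\n";
    std::cout << clock_cast<utc_clock>(gt) << " UTC\n";
}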
This library works with C++11/14/17. And with minor modifications it is now part of the official C++20 specification.
The other answer isn't bad, but it does require you to have C++17 and curl, run CMake, and acquire some custom libraries.
Something that is much easier to drop in as a .h and .cpp would be http://www.leapsecond.com/tools/gpsdate.c.
That doesn't handle the TAI conversion, but that may also be available on that site.
I'm quite new to C++, so there are a bunch of questions I've got, but for now this one drives me crazy:
I've got a JSON response and want to parse one value as a long (because it's a timestamp). After that I want to convert that long to a time_point object via
chrono::system_clock::from_time_t(...);
So this is what I got for now:
auto last_change_date_long = (long long)json_troubleticket["lastChangeDate"].int_value();
time_t last_change_date_raw = time_t(last_change_date_long);
auto last_change_date = chrono::system_clock::from_time_t(last_change_date_raw);
It compiles, but if I run this (while I know the value for lastChangeDate is 1480702672000) its result is
2147483647000 ...
Does anyone have a suggestion what went wrong?
This will do it:
auto i = 1480702672000;
std::chrono::system_clock::time_point tp{std::chrono::milliseconds{i}};
Note that the above is not guaranteed to work by the standard because the epoch of system_clock is unspecified. However all implementations are currently using Unix Time, and I have an informal agreement with the implementors that they will not deviate from this while I try to standardize this existing practice.
The reason you're seeing the behavior you have is that your json is counting milliseconds since 1970-01-01 00:00:00 UTC, but time_t typically counts seconds (though that is also not specified by the standard). So at the point where you create last_change_date_raw from last_change_date_long, you're implicitly converting milliseconds to seconds. This would result in a date midway through the year 48891. The implementation of from_time_t is likely freaking out about that (overflowing).
Fwiw, this particular time point represents:
2016-12-02 18:17:52.000 UTC
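Putting it together, a minimal sketch (assuming, as discussed above, that system_clock uses Unix Time) that builds the time_point from the JSON value and recovers the millisecond count:
#include <chrono>
#include <cstdint>
#include <iostream>

int main()
{
    using namespace std::chrono;

    // Value from the JSON: milliseconds since 1970-01-01 00:00:00 UTC.
    std::int64_t last_change_date = 1480702672000;

    // Build the time_point directly from a milliseconds duration.
    system_clock::time_point tp{milliseconds{last_change_date}};

    // Round-trip: recover the millisecond count, e.g. for sending it back out.
    auto ms = duration_cast<milliseconds>(tp.time_since_epoch()).count();
    std::cout << ms << '\n';  // prints 1480702672000
}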
I have been experimenting with all kind of timers on Linux and OSX, and would like to try and wrap some of them with the same interface used by std::chrono.
That's easy to do for timers that have a well-defined "period" at compile time, e.g. the POSIX clock_gettime() family, the clock_get_time() family on OSX, or gettimeofday().
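For instance, this is roughly what I do for the fixed-period case (a minimal sketch wrapping CLOCK_MONOTONIC; the names are my own):
#include <chrono>
#include <time.h>  // clock_gettime(), CLOCK_MONOTONIC (POSIX)

// A clock whose period (nanoseconds) is known at compile time.
struct monotonic_clock
{
    using rep        = long long;
    using period     = std::nano;
    using duration   = std::chrono::nanoseconds;
    using time_point = std::chrono::time_point<monotonic_clock>;
    static constexpr bool is_steady = true;

    static time_point now() noexcept
    {
        timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return time_point{duration{ts.tv_sec * 1000000000LL + ts.tv_nsec}};
    }
};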
However, there are some useful timers for which the "period" - while constant - is only known at runtime.
For example:
- POSIX states the period of clock(), CLOCKS_PER_SEC, may be a variable on non-XSI systems
- on Linux, the period of times() is given at runtime by sysconf(_SC_CLK_TCK)
- on OSX, the period of mach_absolute_time() is given at runtime by mach_timebase_info()
- on recent Intel processors, the TSC register ticks at a constant rate, but of course that rate can only be determined at runtime
To wrap these timers in the std::chrono interface, one possibility would be to use a period of std::chrono::nanoseconds, and convert the value of each timer to nanoseconds. Another approach could be to use a floating point representation. However, both approaches would introduce a (very small) overhead to the now() function, and a (probably small) loss in precision.
The solution I'm trying to pursue is to define a set of classes to represent such "run-time constant" periods, built along the same lines as the std::ratio class.
However I expect that will require rewriting all the related template classes and functions (as they assume constexpr values).
How do I wrap these kinds of timers a la std::chrono?
Or use non-constexpr values for the time period of a clock?
Does anyone have any experience with wrapping these kinds of timers a la std::chrono?
Actually I do. And on OSX, one of your platforms of interest. :-)
You mention:
on OSX, the period of mach_absolute_time() is given at runtime by mach_timebase_info()
Absolutely correct. Also on OSX, the libc++ implementation of high_resolution_clock and steady_clock is actually based on mach_absolute_time. I'm the author of this code, which is open source with a generous license (do anything you want with it as long as you retain the copyright).
Here is the source for libc++'s steady_clock::now(). It is built pretty much the way you surmised. The run time period is converted to nanoseconds prior to returning. On OS X the conversion factor is very often 1, and the code takes advantage of that fact with an optimization. However the code is general enough to handle non-1 conversion factors.
On the first call to now() there's a small cost of querying the run time conversion factor to nanoseconds. In the general case a floating point conversion factor is computed. In the common case (conversion factor == 1) the subsequent cost is calling through a function pointer. I've found that the overhead is really quite reasonable.
On OS X the conversion factor, although not determined until run time, is still a constant (i.e. does not vary as the program executes), so it only needs to be computed once.
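For illustration, here is a minimal sketch of such a clock (not the actual libc++ source, just the general idea: the run-time timebase is queried once and cached, and ticks are converted to nanoseconds):
#include <chrono>
#include <mach/mach_time.h>  // mach_absolute_time(), mach_timebase_info()

// Simplified sketch of a steady clock backed by mach_absolute_time().
struct mach_steady_clock
{
    using rep        = long long;
    using period     = std::nano;
    using duration   = std::chrono::nanoseconds;
    using time_point = std::chrono::time_point<mach_steady_clock>;
    static constexpr bool is_steady = true;

    static time_point now() noexcept
    {
        // Query the run-time conversion factor once and cache it.
        static const mach_timebase_info_data_t tb = []
        {
            mach_timebase_info_data_t info;
            mach_timebase_info(&info);
            return info;
        }();
        const auto ticks = mach_absolute_time();
        // ticks * numer / denom yields nanoseconds; the factor is very often 1/1.
        // (A production version would guard against overflow in this multiplication.)
        return time_point{duration{static_cast<rep>(ticks * tb.numer / tb.denom)}};
    }
};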
If you're in a situation where your period is actually varying dynamically, you'll need more infrastructure to handle this. Essentially you would need to integrate (calculus) the period vs time curve and then compute an average period between two points in time. That would require a constant monitoring of the period as it changes with time, and <chrono> isn't the right tool for that. Such tools are typically handled at the OS level.
[Does anyone have any experience] Or with using non-constexpr values for the time period of a clock?
After reading through the standard (20.11.5, Class template duration), "period" is expected to be "a specialization of ratio":
Remarks: If Period is not a specialization of ratio, the program is ill-formed.
and all chrono templates rely heavily on constexpr functionality.
Does anyone have any experience with wrapping these kinds of timers a la std::chrono?
I've found here a suggestion to use a duration with period = 1 and boost::rational as rep, though without any concrete examples.
I have done a similar thing for my purposes, only for Linux though. You find the code here; feel free to use the code in whatever way you want.
The challenges my implementation addresses overlap partially with the ones mentioned in your question. Specifically:
The tick factor (required to convert from clock ticks to a time unit based on seconds) is retrieved at run time, but only the first time now() is used‡. If you are concerned about the small overhead this causes, you may call the now() function once at start-up before you measure any actual intervals. The tick factor is stored in a static variable, which means there is still some overhead as – on the lowest level – each call of the now() function implies checking whether the static variable has been initialized. However, this overhead will be the same in each call of now(), so it shouldn't impact measuring time intervals.
I do not convert to nanoseconds by default, because when measuring relatively long periods of time (e.g. a few seconds) this causes overflows very quickly. This is in fact the main reason why I don't use the boost implementation. Instead of converting to nanoseconds, I implement the base unit as a template parameter (called Precision in the code). I use std::ratio from C++11 as template arguments. So I can choose, for example, a clock<micro>, which implies that calling the now() function will internally convert to microseconds rather than nanoseconds, which means I can measure periods of many seconds or minutes without overflows and still with good precision. (This is independent of the unit used to produce output. You can have a clock<micro> and display the result in seconds, etc.)
My clock type, which is called combined_clock, combines user time, system time and wall-clock time. There is a boost clock type for this, too, but it's not compatible with the ratio types and units from std, whereas mine is.
‡The tick factor is retrieved using the ::sysconf() call you suggest, and that is guaranteed to return one and the same value throughout the life time of the process.
So the way you use it is as follows:
#include "util/proctime.hpp"
#include <ratio>
#include <chrono>
#include <thread>
#include <utility>
#include <iostream>
int main()
{
using std::chrono::duration_cast;
using millisec = std::chrono::milliseconds;
using clock_type = rlxutil::combined_clock<std::micro>;
auto tp1 = clock_type::now();
/* Perform some random calculations. */
unsigned long step1 = 1;
unsigned long step2 = 1;
for (int i = 0 ; i < 50000000 ; ++i) {
unsigned long step3 = step1 + step2;
std::swap(step1,step2);
std::swap(step2,step3);
}
/* Sleep for a while (this adds to real time, but not CPU time). */
std::this_thread::sleep_for(millisec(1000));
auto tp2 = clock_type::now();
std::cout << "Elapsed time: "
<< duration_cast<millisec>(tp2 - tp1)
<< std::endl;
return 0;
}
The usage above involves a pretty-print function that generates output like this:
Elapsed time: [user 40, system 0, real 1070 millisec]
I'm creating a simple timer class which returns, e.g., the current time in millis. On Linux I'm using gettimeofday. I'm wondering what return type this function should have, i.e.
double getMillis() or uint64_t getMillis(), etc. I would say uint64_t can hold bigger values and is therefore recommended, though while googling I see lots of different implementations.
Any advice on this?
Thanks
My recommended data type to hold absolute time stamps in milliseconds is int64_t, mainly because time_t is signed.
I would go with an unsigned integer type since the number of milliseconds is a count. It makes adding and subtracting dependable without floating point as well. Most implementations I have used have unsigned integer types.
Is there any class in C++ for representing time in milliseconds?
I need to hold a time, compare values, and set the time from a device. Do I need to write my own, or is there one already? I looked at <ctime> and time_t, but it can hold only seconds.
Well, C++11's std::chrono has a concept of time duration, one of which is milliseconds.
If you're simply dealing with millisecond time durations, then an integer type will be fine; perhaps using typedef to give it a friendly name.
POSIX does this with time_t (representing seconds) and clock_t (representing microseconds); standard C also specifies these types, but doesn't specify which units they use.
If you want to mix units, then std::chrono, or boost::chrono if you can't use C++11, have some nice types such as duration that wrap up integer values and automatically change scale as appropriate; so you can write things like duration d = seconds(4) + milliseconds(123).
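For instance, a minimal sketch of that kind of usage with std::chrono (the values are made up):
#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;

    // Mixed units: the sum is expressed in the finer unit automatically.
    milliseconds d = seconds(4) + milliseconds(123);  // 4123 ms

    milliseconds device_time{1500};  // e.g. a value read from the device

    std::cout << d.count() << " ms, later than device time: "
              << std::boolalpha << (d > device_time) << '\n';
}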