Is there any class in C++ for representing time in milliseconds?
I need to hold a time value, compare values, and set the time from a device. Do I need to write my own, or does one already exist? I looked at <ctime> and time_t, but it can hold only seconds.
Well, C++11's std::chrono has a concept of time durations, one of which is milliseconds.
If you're simply dealing with millisecond time durations, then an integer type will be fine; perhaps using typedef to give it a friendly name.
POSIX does this with time_t (representing seconds) and clock_t (representing microseconds); standard C also specifies these types, but doesn't specify which units they use.
If you want to mix units, then std::chrono, or boost::chrono if you can't use C++11, has some nice types such as duration that wrap up integer values and automatically change scale as appropriate; so you can write things like auto d = seconds(4) + milliseconds(123), and d ends up measured in milliseconds.
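For example, a minimal C++11 sketch (the values are arbitrary):
#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;

    // Mixing units: the sum is carried out in the finer unit, milliseconds.
    milliseconds d = seconds(4) + milliseconds(123);   // 4123 ms

    // Comparisons also work across units.
    if (d > seconds(4))
        std::cout << d.count() << " ms\n";
}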
Since std::filesystem::file_time_type uses a trivial clock in C++17, is there a way to retrieve the actual clock type of file_time_type with C++17**?
The goal is to convert the time to a std::chrono::system_clock time point, to use it e.g. in a stream.
** In C++20, file_time_type will use std::chrono::file_clock, which has operator<< and can be cast to other clocks using std::chrono::clock_cast.
You can retrieve the clock easily enough via file_time_type::clock. But converting from one clock to another won't really be possible until the changes from C++20 are available.
The reason the clock even matters is that different clocks define a different "epoch" start date, the 0 time point for that clock. But in C++11-17, all clocks have an implementation-defined "epoch", and each clock can use a different one. The file_time_type::clock is no different. Since every implementation of every clock potentially uses a different epoch, and no clock epochs are defined relative to each other, there's no way to convert one clock's time_point into another clock's time_point. Not without implementation-specific knowledge.
C++20 changes things in exactly one way: system_clock has a well-defined epoch: it uses UNIX-time. Because there is one clock with a well-defined epoch, that clock can act as a universal intermediate clock for clock-to-clock conversions. That is, every implementation-defined clock's epoch can be converted relative to UNIX-time, and vice-versa. Hence: clock_cast.
Without that definition, and the clock-conversion machinery that comes with it, there's not much you can do... portably, at least.
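For example, a hedged C++20 sketch (the file name is made up, and library support for the C++20 <chrono> additions still varies):
#include <chrono>
#include <filesystem>
#include <iostream>

int main()
{
    namespace fs = std::filesystem;

    // file_time_type::clock is std::chrono::file_clock in C++20.
    fs::file_time_type ftime = fs::last_write_time("example.txt");

    // clock_cast converts between clocks via the well-defined common epoch.
    auto sys_tp = std::chrono::clock_cast<std::chrono::system_clock>(ftime);

    std::cout << sys_tp << '\n';   // system_clock time_points stream directly in C++20
}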
I have code as follows -
int main(){
    ....
    auto time = std::chrono::system_clock::now().time_since_epoch() / std::chrono::milliseconds(1);
    ....
    return 0;
}
Here typeid(time).name() reports l (i.e. long), but is it safe to assume that if I replace auto with the long type, the variable will still store the correct amount of milliseconds across different machines?
I need this because I cannot use auto for class members, since they aren't constexpr or static, where it might have been possible. My intent is to send the data to a browser, where I can do var d = new Date(time) and have it display the correct time. The communication part has been figured out via the JSON format; I'm only stuck on how to store the value correctly across different systems.
[...] is it safe to assume that if I replace auto with long type, the variable will still store the correct amount of milliseconds across different machines?
No, you need a signed integer type of at least 45 bits, which long is not guaranteed to be. You should use std::chrono::milliseconds::rep:
using namespace std::chrono;
milliseconds::rep time =
duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
Also note that in terms of portability, system_clock's epoch is not guaranteed by the standard to be January 1st 1970 00:00:00 UTC (even if that is the case most of the time).
Both of the existing answers are good. But as long as you're in C++ I encourage you to make the data member of type:
std::chrono::time_point<std::chrono::system_clock, std::chrono::milliseconds>
I know this is an ugly mouthful, but it can easily be made prettier. It is also easy to use. And it will help prevent run time errors in your C++ code.
Make it prettier
I recommend this templated using:
template <class Duration>
using sys_time = std::chrono::time_point<std::chrono::system_clock, Duration>;
Now you can make your data member have type:
sys_time<std::chrono::milliseconds> time_;
This is much more readable, and it exactly preserves the semantics that you are storing a time point, not an arbitrary number, or the number of calories in grapefruit.
Type Safety
Let's say six months from now you are re-visiting this code and you write:
auto z = x.time_ + y.time_;
If you had previously decided to give time_ type std::int64_t, or std::chrono::milliseconds::rep, then the above new line of code compiles, and the mistake only shows up as a run-time error. It makes no sense to add two points in time: tomorrow + today is nonsensical.
However if you had previously decided to give time_ type sys_time<milliseconds> as I suggest, the above line of code creating z does not compile. The type system has detected the logic error at compile time. Now you are forced to immediately re-visit your logic and discover why you are attempting to add two time points. Maybe it was just a typo and you meant to subtract them (which is logical, compiles, and results in a duration of type milliseconds).
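A minimal sketch of that difference, using the sys_time alias from above (Event is a made-up holder type):
#include <chrono>

template <class Duration>
using sys_time = std::chrono::time_point<std::chrono::system_clock, Duration>;

struct Event { sys_time<std::chrono::milliseconds> time_; };

int main()
{
    Event x{}, y{};
    auto d = x.time_ - y.time_;    // OK: time_point - time_point is a duration (milliseconds)
    // auto z = x.time_ + y.time_; // error: there is no operator+ for two time_points
    (void)d;
}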
Ease of use
You can assign now() to your time_ data member with this simple syntax:
using namespace std::chrono;
time_ = time_point_cast<milliseconds>(system_clock::now());
Now time_ is just another system_clock-based time_point but with a precision of milliseconds. For outputting to json you can get the internal signed integral value with:
json_stream << time_.time_since_epoch().count();
For parsing in from json you can:
std::int64_t temp;
json_stream >> temp;
time_ = sys_time<milliseconds>{milliseconds{temp}};
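Pulling the pieces together, a minimal sketch (the Event class and its member functions are illustrative, not from the original code; the json plumbing is reduced to plain streams):
#include <chrono>
#include <cstdint>
#include <iostream>

template <class Duration>
using sys_time = std::chrono::time_point<std::chrono::system_clock, Duration>;

class Event
{
    sys_time<std::chrono::milliseconds> time_;
public:
    void stamp()                                  // record "now" at millisecond precision
    {
        using namespace std::chrono;
        time_ = time_point_cast<milliseconds>(system_clock::now());
    }
    void write(std::ostream& json_stream) const   // milliseconds since the epoch
    {
        json_stream << time_.time_since_epoch().count();
    }
    void read(std::istream& json_stream)          // parse the value back
    {
        using namespace std::chrono;
        std::int64_t temp;
        json_stream >> temp;
        time_ = sys_time<milliseconds>{milliseconds{temp}};
    }
};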
Your approach will work and is portable, but I suggest using a more straightforward approach for counting milliseconds:
std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count()
This will work because .count() returns a std::chrono::milliseconds::rep, which is "a signed integral type of at least 45 bits". Note, though, that long itself is only guaranteed to be at least 32 bits, so milliseconds::rep (or std::int64_t) is a safer type to store it in than long.
Note: it is not guaranteed that system_clock will have millisecond resolution. But in any case you will get the result in milliseconds.
Side note: it can be good to utilize using namespace std::chrono;, because this will reduce code length significantly.
I have been searching for over an hour but I simply seem to not be able to find the solution!
I am looking for a function that gives me a struct similar to what GetLocalTime on Windows provides. The important thing for me is that this struct has hours, minutes, seconds and milliseconds.
localtime() does not include milliseconds and therefore I cannot use it!
I would appreciate a solution that uses the standard library or another very small library, since I am working on a Raspberry Pi and cannot use large libraries like Boost!
As mentioned above, there is no direct equivalent. If you can use C++11, the <chrono> header allows you to get the same result, but not in a single call. You can use high_resolution_clock to get the current Unix time in milliseconds, then use the localtime C function to get the time without milliseconds, and use the Unix time in milliseconds to find the millisecond count. It looks like you will have to write your own GetLocalTime implementation, but with C++11 it will not be complex.
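A hedged sketch of such a hand-rolled equivalent (the struct and function names are made up; it uses system_clock, since the value has to end up as calendar time, and the POSIX localtime_r for thread safety):
#include <chrono>
#include <ctime>

struct LocalTimeMs {            // loosely mirrors Windows' SYSTEMTIME
    int hour, minute, second, millisecond;
};

LocalTimeMs get_local_time_ms()
{
    using namespace std::chrono;

    auto now = system_clock::now();
    auto ms  = duration_cast<milliseconds>(now.time_since_epoch()) % 1000;

    std::time_t t = system_clock::to_time_t(now);
    std::tm local{};
    localtime_r(&t, &local);    // POSIX; use localtime/localtime_s elsewhere

    return { local.tm_hour, local.tm_min, local.tm_sec,
             static_cast<int>(ms.count()) };
}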
GetLocalTime is not a usual Linux function.
Read time(7); you probably want clock_gettime(2) or (as commented by Joachim Pileborg) the older gettimeofday(2).
If you need some struct giving all of hours, minutes, seconds and milliseconds, you have to code that yourself using localtime(3) and explicitly computing the millisecond part.
Something like the code below prints the time with milliseconds:
struct timespec ts = {0,0};
struct tm tm = {};
char timbuf[64];
/* current time: seconds + nanoseconds since the epoch */
if (clock_gettime(CLOCK_REALTIME, &ts))
  { perror("clock_gettime"), exit(EXIT_FAILURE); };
time_t tim = ts.tv_sec;
/* broken-down local time, without the sub-second part */
if (!localtime_r(&tim, &tm))
  { perror("localtime_r"), exit(EXIT_FAILURE); };
if (!strftime(timbuf, sizeof(timbuf), "%D %T", &tm))
  { perror("strftime"), exit(EXIT_FAILURE); };
/* append the millisecond part derived from tv_nsec */
printf("%s.%03d\n", timbuf, (int)(ts.tv_nsec/1000000));
You can use a combination of:
- clock_gettime(CLOCK_REALTIME, ...): returns the current time since the epoch (UTC), down to the millisecond and below (constrained, of course, by the actual clock resolution); it does not take local timezone information into account. Use only its sub-second part (the tv_nsec field) for the milliseconds.
- time(): returns the current time since the epoch (UTC), up to the second - no milliseconds. Its result (a time_t) is easy to convert to the final format.
- then convert the time() result using localtime_r(); this fills a structure (struct tm) very similar to Windows' SYSTEMTIME; the result is up to the second and takes local timezone information into account.
- finally, fill in the millisecond field using the clock_gettime() result.
These routines are documented, not deprecated, and portable.
You may need to call tzset() once (this sets the timezone information - a C global variable - from the operating system environment; probably a heavy operation).
I have been experimenting with all kinds of timers on Linux and OSX, and would like to try and wrap some of them with the same interface used by std::chrono.
That's easy to do for timers that have a well-defined "period" at compile time, e.g. the POSIX clock_gettime() family, the clock_get_time() family on OSX, or gettimeofday().
However, there are some useful timers for which the "period" - while constant - is only known at runtime.
For example:
- POSIX states the period of clock(), CLOCKS_PER_SEC, may be a variable on non-XSI systems
- on Linux, the period of times() is given at runtime by sysconf(_SC_CLK_TCK)
- on OSX, the period of mach_absolute_time() is given at runtime by mach_timebase_info()
- on recent Intel processors, the TSC register ticks at a constant rate, but of course that rate can only be determined at runtime
To wrap these timers in the std::chrono interface, one possibility would be to use a period of std::chrono::nanoseconds and convert the value of each timer to nanoseconds. Another approach could be to use a floating-point representation. However, both approaches would introduce a (very small) overhead to the now() function, and a (probably small) loss of precision.
The solution I'm trying to pursue is to define a set of classes to represent such "run-time constant" periods, built along the same lines as the std::ratio class.
However I expect that will require rewriting all the related template classes and functions (as they assume constexpr values).
How do I wrap these kinds of timers a la std::chrono?
Or use non-constexpr values for the time period of a clock?
Does anyone have any experience with wrapping these kinds of timers a la std::chrono?
Actually I do. And on OSX, one of your platforms of interest. :-)
You mention:
on OSX, the period of mach_absolute_time() is given at runtime by mach_timebase_info()
Absolutely correct. Also on OSX, the libc++ implementation of high_resolution_clock and steady_clock is actually based on mach_absolute_time. I'm the author of this code, which is open source with a generous license (do anything you want with it as long as you retain the copyright).
Here is the source for libc++'s steady_clock::now(). It is built pretty much the way you surmised. The run time period is converted to nanoseconds prior to returning. On OS X the conversion factor is very often 1, and the code takes advantage of that fact with an optimization. However the code is general enough to handle non-1 conversion factors.
On the first call to now() there's a small cost of querying the run time conversion factor to nanoseconds. In the general case a floating point conversion factor is computed. In the common case (conversion factor == 1) the subsequent cost is calling through a function pointer. I've found that the overhead is really quite reasonable.
On OS X the conversion factor, although not determined until run time, is still a constant (i.e. does not vary as the program executes), so it only needs to be computed once.
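For illustration only (this is not the libc++ source), a clock along those lines might look roughly like the sketch below; it queries mach_timebase_info() once, caches the factor, and skips the overflow care the real implementation takes:
#include <chrono>
#include <mach/mach_time.h>   // OS X only

struct mach_clock
{
    using rep        = long long;
    using period     = std::nano;
    using duration   = std::chrono::nanoseconds;
    using time_point = std::chrono::time_point<mach_clock, duration>;
    static constexpr bool is_steady = true;

    static time_point now() noexcept
    {
        // The conversion factor is a run-time constant: query it once, on first use.
        static const mach_timebase_info_data_t tb = [] {
            mach_timebase_info_data_t info;
            mach_timebase_info(&info);
            return info;
        }();
        // May overflow after long uptimes when numer/denom != 1; real code is more careful.
        rep ns = static_cast<rep>(mach_absolute_time()) * tb.numer / tb.denom;
        return time_point(duration(ns));
    }
};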
If you're in a situation where your period is actually varying dynamically, you'll need more infrastructure to handle this. Essentially you would need to integrate (calculus) the period vs time curve and then compute an average period between two points in time. That would require a constant monitoring of the period as it changes with time, and <chrono> isn't the right tool for that. Such tools are typically handled at the OS level.
[Does anyone have any experience] Or with using non-constexpr values for the time period of a clock ?
After reading through the standard (20.11.5, Class template duration), "period" is expected to be "a specialization of ratio":
Remarks: If Period is not a specialization of ratio, the program is ill-formed.
and all chrono templates rely heavily on constexpr functionality.
Does anyone have any experience with wrapping these kinds of timers a la std::chrono?
I've found here a suggestion to use a duration with period = 1 and boost::rational as rep, though without any concrete examples.
I have done a similar thing for my purposes, only for Linux though. You find the code here; feel free to use the code in whatever way you want.
The challenges my implementation addresses overlap partially with the ones mentioned in your question. Specifically:
The tick factor (required to convert from clock ticks to a time unit based on seconds) is retrieved at run time, but only the first time now() is used‡. If you are concerned about the small overhead this causes, you may call the now() function once at start-up before you measure any actual intervals. The tick factor is stored in a static variable, which means there is still some overhead as – on the lowest level – each call of the now() function implies checking whether the static variable has been initialized. However, this overhead will be the same in each call of now(), so it shouldn't impact measuring time intervals.
I do not convert to nanoseconds by default, because when measuring relatively long periods of time (e.g. a few seconds) this causes overflows very quickly. This is in fact the main reason why I don't use the boost implementation. Instead of converting to nanoseconds, I implement the base unit as a template parameter (called Precision in the code). I use std::ratio from C++11 as template arguments. So I can choose, for example, a clock<micro>, which implies that calling the now() function will internally convert to microseconds rather than nanoseconds, which means I can measure periods of many seconds or minutes without overflows and still with good precision. (This is independent of the unit used to produce output. You can have a clock<micro> and display the result in seconds, etc.)
My clock type, which is called combined_clock, combines user time, system time and wall-clock time. There is a boost clock type for this, too, but it's not compatible with the ratio types and units from std, whereas mine is.
‡The tick factor is retrieved using the ::sysconf() call you suggest, and that is guaranteed to return one and the same value throughout the life time of the process.
So the way you use it is as follows:
#include "util/proctime.hpp"
#include <ratio>
#include <chrono>
#include <thread>
#include <utility>
#include <iostream>
int main()
{
  using std::chrono::duration_cast;
  using millisec   = std::chrono::milliseconds;
  using clock_type = rlxutil::combined_clock<std::micro>;

  auto tp1 = clock_type::now();

  /* Perform some random calculations. */
  unsigned long step1 = 1;
  unsigned long step2 = 1;
  for (int i = 0 ; i < 50000000 ; ++i) {
    unsigned long step3 = step1 + step2;
    std::swap(step1,step2);
    std::swap(step2,step3);
  }

  /* Sleep for a while (this adds to real time, but not CPU time). */
  std::this_thread::sleep_for(millisec(1000));

  auto tp2 = clock_type::now();

  std::cout << "Elapsed time: "
            << duration_cast<millisec>(tp2 - tp1)
            << std::endl;

  return 0;
}
The usage above involves a pretty-print function that generates output like this:
Elapsed time: [user 40, system 0, real 1070 millisec]
I'm sure this question is answered elsewhere, but I cannot find it on Google or SO, so here goes.
In C/C++, I want to convert a relative time in format dd-hh:mm:ss provided by
ps -o etime
to an absolute UTC formatted date.
This doesn't seem like it should be very hard. Supposing I have already got a function to produce the relative time stored in struct tm format:
struct tm *starting_rel_time = my_reltime_converstion(...);
time_t t = time(0);
struct tm *current_abs_time = localtime(&t);
what I want is basically the opposite of difftime:
struct tm *starting_abs_time = current_abs_time - starting_rel_time;
Now, I can write my own function to do the conversion, but it's a nightmare because of all the carry operations and special conditions (leap years etc.). Surely there is a way to do this in the C/C++ libraries?
Use Boost::Date_Time libraries.
Convert the dd-hh:mm:ss to seconds with simple math; it's a relative time, so just multiply and add. Then query the current time() in seconds and, since the value is elapsed time relative to now, subtract the two. Then use gmtime to convert back to a struct tm.
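A hedged sketch of that recipe (it assumes the full dd-hh:mm:ss form; ps omits leading fields for short-lived processes, which real code would have to handle):
#include <cstdio>
#include <ctime>

std::tm start_time_utc(const char* etime)
{
    int dd = 0, hh = 0, mm = 0, ss = 0;
    std::sscanf(etime, "%d-%d:%d:%d", &dd, &hh, &mm, &ss);

    std::time_t elapsed = ((dd * 24 + hh) * 60 + mm) * 60 + ss;  // relative time in seconds
    std::time_t start   = std::time(nullptr) - elapsed;          // absolute start time

    std::tm out{};
    gmtime_r(&start, &out);      // POSIX; std::gmtime(&start) elsewhere
    return out;                  // UTC-formatted broken-down date
}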
There is no such language as C/C++.
If you're asking about C, I suggest representing dates internally with a simple numeric type, and converting to and from struct tm only when necessary. If you only need to cover a few decades, then you could use time_t and convert using the standard gmtime and mktime library functions. To cover a wider timespan, you could use a Julian day representation.
If you're asking about C++, I suggest the Boost.Date_Time library. Of course, the C library functions are still available if they meet your needs.
What you're trying to do doesn't make sense. You cannot add two dates.
(And difftime doesn't return a date, nor a time_t.)
In practice, on most if not all implementations, time_t will be an integral type holding the number of seconds since some specific "epoch". On such machines, you can add or subtract an integral number of seconds from a time_t to get a new time, at least if all of the times you're interested in are in the interval supported by time_t (roughly between 1970 and 2038 on most Unix platforms). This, along with gmtime, mktime and localtime, is probably sufficient for your needs. Note especially that mktime is required to "correct" its tm input: you can, for example, take a tm, add 5 to the tm_mday field, call mktime on it, and get the correct values for a date five days in the future; all of the necessary carry operations and special conditions are handled in mktime.
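For example, a small sketch of the mktime normalization described above:
#include <cstdio>
#include <ctime>

int main()
{
    std::time_t now = std::time(nullptr);
    std::tm tm = *std::localtime(&now);

    tm.tm_mday += 5;     // five days from now; may run past the end of the month
    std::mktime(&tm);    // mktime normalizes the out-of-range fields in place

    char buf[64];
    std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", &tm);
    std::printf("%s\n", buf);
}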
If this is not sufficient, C++11 has both a time_point and a duration class, which (from a quick glance) seem to have all of the functionality you could possibly need.