I took a look at cppreference.com (emphasis mine):
The clock std::chrono::utc_clock is a Clock that represents Coordinated Universal Time (UTC). It measures time since 00:00:00 UTC, Thursday, 1 January 1970, including leap seconds.
Comparing that to the definition of system_clock:
system_clock measures Unix Time (i.e., time since 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970, not counting leap seconds).
Is it actually possible to have both in the same system? For example, if the system clock is synchronized via NTP, then the server decides what time it is, and that could use leap seconds or not, but the C++ library implementation cannot know anything about that. Or does the standard require a database of when leap seconds were introduced?
NTP servers give you UTC (in a seconds-since-1900 format). The current time is the current time. It doesn't really matter how many leap seconds there have been in order to get there.
Where things get complicated is when a leap second is added. NTP will announce this in the moment, and various operating systems do various things internally to record that, owing to their propensity to store time as "number of seconds since an epoch". Linux and Windows don't include leap seconds in this count, because it would make their timestamp rendering more complicated (how many leap seconds have there been?) and they would rather not deal with that. Instead, they just slow or speed up the clock for a little while around the announced leap second, so rather than actually recording it they just adjust their own seconds count so that rendering it as a timestamp will appear accurate later.
(What I don't know is how an OS will redetermine its not-really-but-sort-of-seconds count from an NTP transaction without knowing how many leap seconds to deduct; edits welcomed.)
system_clock gives you this seconds count, which (on mainstream platforms) just comes straight from the OS (e.g. time()).
utc_clock gives you a similar seconds count, but one that is "real". On such mainstream platforms this will necessarily have to be the system_clock with leap seconds added after-the-fact. This historical data comes from the system too, indeed some kind of database (though the exact source is up to the implementation).
In conclusion, the data sources for the two clocks are (slightly) different, so they can certainly co-exist on the same system. But system_clock probably comes directly from your operating system, in a way that utc_clock probably doesn't.
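As a rough illustration (C++20 only, and assuming your standard library ships leap-second data with its time zone database; support is still uneven), the two clocks can be queried side by side, and get_leap_second_info reports exactly how many leap seconds separate them:

    #include <chrono>
    #include <iostream>

    int main() {
        using namespace std::chrono;

        auto sys = system_clock::now();          // Unix Time, leap seconds not counted
        auto utc = clock_cast<utc_clock>(sys);   // same instant, leap seconds counted

        // get_leap_second_info consults the implementation's leap-second database.
        auto info = get_leap_second_info(utc);

        std::cout << "system_clock: " << sys.time_since_epoch() << '\n'
                  << "utc_clock:    " << utc.time_since_epoch() << '\n'
                  << "difference:   " << info.elapsed << '\n';  // 27s since the 2016-12-31 leap second
    }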
Further reading
NTP and leap seconds: https://www.meinbergglobal.com/english/info/leap-second.htm#os
The new utc_clock feature and its friends: https://howardhinnant.github.io/date/d0355r4.html
I found this working draft about time for C++ 20, which seems to be due out in AD 2020. It has a subpage about utc_clock which includes this example:
clock_cast<utc_clock>(sys_seconds{sys_days{1970y/January/1}}).time_since_epoch() is 0s.
clock_cast<utc_clock>(sys_seconds{sys_days{2000y/January/1}}).time_since_epoch()
and states the last value "is 946'684'822s, which is 10'957 * 86'400s + 22s." Notice that 10,957 days are about 30 years, so the value of utc_clock evidently represents seconds since 1 January 1970 UTC, where each leap second increments the value of utc_clock.
Since this is expressed as a conversion, it seems reasonable to infer the conversion would call upon a table of leap seconds, and could be performed even if the operating system had no notion of the distinction between UTC and TAI, and no concept of a minute which contained 61 seconds.
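A small C++20 sketch (untested here, and dependent on the implementation shipping leap-second data) would be expected to reproduce the quoted values:

    #include <cassert>
    #include <chrono>

    int main() {
        using namespace std::chrono;

        // 1970-01-01: no leap seconds had been inserted yet, so the count is 0.
        auto t0 = clock_cast<utc_clock>(sys_seconds{sys_days{1970y/January/1}});
        assert(t0.time_since_epoch() == 0s);

        // 2000-01-01: 10'957 days of 86'400 s each, plus the 22 leap seconds
        // inserted between 1972 and the end of 1999.
        auto t1 = clock_cast<utc_clock>(sys_seconds{sys_days{2000y/January/1}});
        assert(t1.time_since_epoch() == 10'957 * 86'400s + 22s);
    }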
I must admit I am more interested in time than C++ and have not written serious C++ code in nearly 20 years.
Related
When printing my logs, I want each message to have a time stamp, measuring time since start of the program. Preferably in nanoseconds, though milliseconds are fine as well:
( 110 ns) Some log line
( 1220 ns) Another log line
( 2431 ns) Now for some computation...
(10357 ns) Error!
To my understanding, there are three different clocks in the C++ chrono library and two more C-style clocks:
std::chrono::high_resolution_clock
std::chrono::system_clock
std::chrono::steady_clock
std::time
std::clock
What are the pros and cons for each of those for the task described above?
system_clock is a clock that keeps time with UTC (excluding leap seconds). Every once in a while (maybe several times a day), it gets adjusted by small amounts, to keep it aligned with the correct time. This is often done with a network service such as NTP. These adjustments are typically on the order of microseconds, but can be either forward or backwards in time. It is actually possible (though not likely nor common) for timestamps from this clock to go backwards by tiny amounts. Unless abused by an administrator, system_clock does not jump by gross amounts, say due to daylight saving, or changing the computer's local time zone, since it always tracks UTC.
steady_clock is like a stopwatch. It has no relationship to any time standard. It just keeps ticking. It may not keep perfect time (no clock does really). But it will never be adjusted, especially not backwards. It is great for timing short bits of code. But since it never gets adjusted, it may drift over time with respect to system_clock which is adjusted to keep in sync with UTC.
This boils down to the fact that steady_clock is best for timing short durations. It also typically has nanosecond resolution, though that is not required. And system_clock is best for timing "long" times where "long" is very fuzzy. But certainly hours or days qualify as "long", and durations under a second don't. And if you need to relate a timestamp to a human readable time such as a date/time on the civil calendar, system_clock is the only choice.
high_resolution_clock is allowed to be a type alias for either steady_clock or system_clock, and in practice always is. But some platforms alias to steady_clock and some to system_clock. So imho, it is best to just directly choose steady_clock or system_clock so that you know what you're getting.
Though not specified, std::time is typically restricted to a resolution of a second. So it is completely unusable for situations that require subsecond precision. Otherwise std::time tracks UTC (excluding leap seconds), just like system_clock.
std::clock tracks processor time, as opposed to physical time. That is, when your thread is not busy doing something, and the OS has parked it, measurements of std::clock will not reflect time increasing during that down time. This can be really useful if that is what you need to measure. And it can be very surprising if you use it without realizing that processor time is what you're measuring.
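For the log-timestamp question above, a minimal sketch using steady_clock (the log helper and its format are mine, not a library facility) might look like this:

    #include <chrono>
    #include <cstdio>

    // Start of the program, recorded once.
    static const auto t0 = std::chrono::steady_clock::now();

    // Hypothetical helper: print a message prefixed with nanoseconds since t0.
    void log(const char* msg) {
        using namespace std::chrono;
        auto elapsed = duration_cast<nanoseconds>(steady_clock::now() - t0);
        std::printf("(%5lld ns) %s\n", static_cast<long long>(elapsed.count()), msg);
    }

    int main() {
        log("Some log line");
        log("Another log line");
        log("Now for some computation...");
        log("Error!");
    }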
And new for C++20
C++20 adds four more clocks to the <chrono> library:
utc_clock is just like system_clock, except that it counts leap seconds. This is mainly useful when you need to subtract two time_points across a leap second insertion point, and you absolutely need to count that inserted leap second (or a fraction thereof).
tai_clock measures seconds since 1958-01-01 00:00:00 and is offset 10s ahead of UTC at this date. It doesn't have leap seconds, but every time a leap second is inserted into UTC, the calendrical representation of TAI and UTC diverge by another second.
gps_clock models the GPS time system. It measures seconds since the first Sunday of January, 1980 00:00:00 UTC. Like TAI, every time a leap second is inserted into UTC, the calendrical representation of GPS and UTC diverge by another second. Because of the similarity in the way that GPS and TAI handle UTC leap seconds, the calendrical representation of GPS is always behind that of TAI by 19 seconds.
file_clock is the clock used by the filesystem library, and is what produces the chrono::time_point aliased by std::filesystem::file_time_type.
One can use a new named cast in C++20 called clock_cast to convert among the time_points of system_clock, utc_clock, tai_clock, gps_clock and file_clock. For example:
auto tp = clock_cast<system_clock>(last_write_time("some_path/some_file.xxx"));
The type of tp is a system_clock-based time_point with the same duration type (precision) as file_time_type.
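A hedged sketch of the relationships just described (C++20; the exact values depend on your library's leap-second data, and not every standard library implements these clocks yet):

    #include <chrono>
    #include <iostream>

    int main() {
        using namespace std::chrono;

        auto sys = system_clock::now();
        std::cout << "sys: " << sys << '\n'
                  << "utc: " << clock_cast<utc_clock>(sys) << '\n'   // sys plus leap seconds
                  << "tai: " << clock_cast<tai_clock>(sys) << '\n'   // ahead of UTC (37 s as of 2017)
                  << "gps: " << clock_cast<gps_clock>(sys) << '\n';  // always 19 s behind TAI
    }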
I'm aware of steady_clock and that it is the only clock that is specified to be monotonic. And I understand that system_clock can jump forward or backward due to daylight savings and leap years. But doesn't count() give you the number of ticks of the system clock since the Unix epoch, which is an always increasing number, regardless of how that integer number of ticks is parsed into a "calendar date + wall clock" interpretation? i.e. Even if the "calendar date + wall clock" jumps from 2am to 3am on a given day in March, hasn't the integer count of ticks only increased by one tick?
In short, doesn't it stand to reason that the value of std::chrono::system_clock::now().time_since_epoch().count() should be expected to, in the short term, increase monotonically (barring updates to the system clock, which of course are a very real event), even if the date+time it refers to jumps around?
EDIT
As pointed out by @SergeyA, if the system clock is changed, then of course the value will jump around. But I think the change in wall clock time due to daylight savings is not an NTP update event or a manual change by a user. If it helps clarify the question, I'm interested in uptimes of an hour or two, which could cross the DST boundary, as opposed to uptimes of weeks or months, in which the clock could drift.
system_clock tracks Unix Time. Unix Time has no UTC offset adjustments (daylight saving). It is just a linear count of non-leap seconds. It is possible that an implementation could jump backwards during a leap second insertion. Though in practice the leap second is "smeared" with many tiny adjustments over a span of hours.
In theory it is possible for a system_clock to be monotonic. In practice, no clock keeps perfect time, and must be adjusted, (potentially backwards) to stay in sync with Unix Time.
In C++11/14/17, the Unix Time measure is not specified for system_clock, but it is the existing practice. In C++20, it will be specified.
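To make the DST point concrete, here is a rough C++20 sketch (it assumes the time zone database is available, and uses the US 2021 spring-forward date purely as an example): the local rendering jumps from before 2am to after 3am, but the system_clock difference is exactly one hour.

    #include <chrono>
    #include <iostream>

    int main() {
        using namespace std::chrono;

        // Two instants one hour apart in Unix Time, straddling the US
        // spring-forward transition (02:00 -> 03:00 local on 2021-03-14).
        sys_seconds before = sys_days{2021y/March/14} + 6h + 30min;  // 01:30 EST
        sys_seconds after  = before + 1h;                            // 03:30 EDT

        std::cout << "local before:   " << zoned_time{"America/New_York", before} << '\n'
                  << "local after:    " << zoned_time{"America/New_York", after} << '\n'
                  << "sys difference: " << (after - before) << '\n';  // exactly 1h
    }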
Short answer - no, it does not. System clock can be (and will be in practice!) adjusted from outside, as a result of manual action or source (NTP, PTP) synchronization.
In C++ I'm writing a function that converts time(NULL), which is all the seconds since January 1, 1970 in UTC time, to local time EST in military time format (hours:minutes:seconds). I'm honestly stumped how to mathematically do this so that the program stays accurate as time moves forward.
Also I'm well aware that there is a local time function but I'd like to build this function from the ground up. Does anyone have any advice or tips?
Why would you want to do this when there are plenty of free and well-tested packages? As mentioned in the comments, getting daylight savings time correct is non-trivial. Existing packages do just that, and they do it right, based on the IANA tzinfo database.
C options:
std::localtime(). This function uses a global variable; it is not thread safe.
localtime_r(). This is a POSIX function and is not a part of the C++ library. It does not exist on Windows.
localtime_s(). This is an optional C11 function. Even if it exists on your machine, it might not be a part of <ctime>.
C++ options:
Boost Date-Time, https://github.com/boostorg/date_time .
Howard Hinnant's date-time module, https://github.com/HowardHinnant/date .
localtime() from glibc should do the job of calculating the date, provided the environment is set to the correct timezone; else use gmtime(). Building a string from the values is a separate job, see strftime() for that.
http://linux.die.net/man/3/localtime
If you want to learn about the algorithms for converting a count of days to a year/month/day triple (and back), here they are, highly optimized, explained in painstaking detail (don't read while operating heavy machinery):
http://howardhinnant.github.io/date_algorithms.html
You should also know that most (all?) implementations of time() track an approximation of UTC called Unix Time. This count treats leap seconds simply as clock corrections to an imperfect clock. That means you can ignore the effect of leap seconds when converting Unix Time seconds to days (just divide by 86400).
For converting to EST, you have some choices (in order of increasing difficulty and accuracy):
You can ignore daylight savings time and always take the offset as -5 hours (a sketch of this option follows the list below).
You can assume the current daylight savings rules, ignoring the fact that they have changed many times in the past, and will likely change again.
You can get the past and present rules from the IANA timezone database, or your OS's local equivalent.
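Here is a bare-bones sketch of option 1 only (a fixed UTC-5 offset, no DST, no leap seconds), just to show the arithmetic; the day/second splitting works because Unix Time ignores leap seconds, as noted above:

    #include <cstdio>
    #include <ctime>

    // Print t (Unix Time) in EST military format, assuming a fixed UTC-5 offset.
    // This deliberately ignores DST; see the other options listed above.
    void print_est(std::time_t t) {
        long secs_of_day = static_cast<long>((t - 5 * 3600) % 86400);
        if (secs_of_day < 0) secs_of_day += 86400;  // times before the epoch
        int h = static_cast<int>(secs_of_day / 3600);
        int m = static_cast<int>(secs_of_day % 3600 / 60);
        int s = static_cast<int>(secs_of_day % 60);
        std::printf("%02d:%02d:%02d EST\n", h, m, s);
    }

    int main() {
        print_est(std::time(nullptr));
    }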
There are two things which you will need to consider:
1. Leap years
One extra day in a year, that is possible to calculate mathematically.
2. Leap seconds
Seconds inserted or removed as needed (so a minute can have 61 or 59 seconds).
Those are irregular and you will need a lookup table for them. Otherwise your conversion routine will not be correct.
A list of them is available, for example, here: https://en.wikipedia.org/wiki/Leap_second
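As a rough sketch of what such a lookup table might look like (abbreviated to the last few entries; the dates and counts come from the list linked above, and the helper name is mine):

    #include <cstdint>

    // First Unix day (days since 1970-01-01) on which each new cumulative
    // count of inserted leap seconds takes effect. Extend from the full list.
    struct LeapEntry { std::int64_t unix_day; int cumulative; };

    constexpr LeapEntry leap_table[] = {
        {15522, 25},  // 2012-07-01
        {16617, 26},  // 2015-07-01
        {17167, 27},  // 2017-01-01
    };

    // Number of leap seconds inserted (since 1972) as of the given Unix day.
    constexpr int leap_seconds_on(std::int64_t unix_day) {
        int n = 24;   // count in effect before the first entry above (table is abbreviated)
        for (const auto& e : leap_table)
            if (unix_day >= e.unix_day) n = e.cumulative;
        return n;
    }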
According to this struct tm documentation, the value tm_sec can be 0 - 61 and the extra range is to include leap seconds in some systems. How can I detect if there is a leap second in a system generally and in my openSUSE 13.2 machine specifically?
Leap seconds are not related to a particular system. It is not even really a computing problem, but rather a physical one.
In an ideal world, where every clock was perfectly accurate, there would be no reason for leap seconds. But in our restricted world, clocks tend to deviate.
Moving the system time, especially moving it backwards, can cause bad things to happen. If a batch task stores the time of its last run to know which files have already been processed, and because of the clock moving backwards it thinks that time is now before its last run, it will process files again, which may be plain wrong.
For that reason, system developers invented leap seconds that allow a system to adjust its system clock (of course only slight deltas, but that is generally enough if you check your clock on a regular basis) without having to move it backwards.
As noted by @Supuhstar, this is just how an operating system implements leap seconds. But leap seconds have a true physical origin and were introduced to compensate for the divergence between two definitions of time, UTC and TAI, caused by physical variations in the Earth's rotation speed. I give more details on it in this other post of mine.
I'm a bit lost with time terminology.
I understand what epoch is and the difference between GMT and UTC.
However I'm a bit confused about the following notations:
calendar time
local time
wall time
How are these related to timezones and daylight savings?
P.S.
Good link (thanks to @ZincX)
Calendar time is the time that has elapsed since the epoch (t - E).
Local time is calendar time corrected for the timezone and DST.
Wall time: I assume you mean wallclock time. This is the time elapsed since a process or job has started running.
RTFM time(7), localtime(3), time(1).
Epoch: Used to refer to the beginning of something, such as the Unix Epoch, which is 00:00:00, January 1, 1970 UTC. (i.e. a time_t with the value 0 represents midnight, Jan 1, 1970 UTC)
calendar time: the time since the epoch
local time: the calendar time in the timezone you (or the computer) resides in.
wall time: time elapsed on a real clock since some arbitrary start, e.g. since you started a program (as opposed to e.g. CPU time used by a program since it started)
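A small sketch tying the three terms to actual calls (mixing the C functions mentioned in these answers with a C++ steady clock for the elapsed-time part):

    #include <chrono>
    #include <cstdio>
    #include <ctime>

    int main() {
        auto start = std::chrono::steady_clock::now();   // for wall time below

        // Calendar time: seconds elapsed since the epoch.
        std::time_t cal = std::time(nullptr);
        std::printf("calendar time: %lld s since the epoch\n",
                    static_cast<long long>(cal));

        // Local time: calendar time broken down for this machine's time zone and DST.
        char buf[64];
        std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", std::localtime(&cal));
        std::printf("local time:    %s\n", buf);

        // ... the program does its work here ...

        // Wall (clock) time: real time elapsed since some starting point.
        std::chrono::duration<double> wall = std::chrono::steady_clock::now() - start;
        std::printf("wall time:     %.6f s elapsed\n", wall.count());
    }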
Try info and then the menu entry "* Date input formats: (coreutils)Date input formats.", which starts with this wonderful text:
Our units of temporal measurement, from seconds on up to months,
are so complicated, asymmetrical and disjunctive so as to make
coherent mental reckoning in time all but impossible. Indeed, had
some tyrannical god contrived to enslave our minds to time, to
make it all but impossible for us to escape subjection to sodden
routines and unpleasant surprises, he could hardly have done
better than handing down our present system. It is like a set of
trapezoidal building blocks, with no vertical or horizontal
surfaces, like a language in which the simplest thought demands
ornate constructions, useless particles and lengthy
circumlocutions. Unlike the more successful patterns of language
and science, which enable us to face experience boldly or at least
level-headedly, our system of temporal calculation silently and
persistently encourages our terror of time.
... It is as though architects had to measure length in feet,
width in meters and height in ells; as though basic instruction
manuals demanded a knowledge of five different languages. It is
no wonder then that we often look into our own immediate past or
future, last Tuesday or a week from Sunday, with feelings of
helpless confusion. ...
-- Robert Grudin, `Time and the Art of Living'.
Calendar time is the time since the epoch. This can be represented in a few ways including:
simple calendar time: number of seconds since epoch
high-resolution calendar time: this uses a struct timeval datatype to include fractions of seconds too.
Local time: this is also a structure (the struct tm data type) which includes calendar time as well as timezone and DST information. This gives all the information to present a time string understandable in a certain locale.
Wall time: time elapsed during the running of your process.
I recommend reading the information at this link. It helped me.
Also take a look at this question for a good explanation of calendar time and local time with code examples written in C:
How do I use the C date and time functions on UNIX?
A bit nit-picky, but there is no such thing as GMT anymore. That term was deprecated almost 40 years ago.
There are many different time standards, UTC being one of them. Another is UT1, which is essentially time as measured by a sundial. Yet another is TAI, International Atomic Time (the real acronym is in French, as are most of the acronyms for the various time standards). TAI, as the name suggests, is time as measured by an atomic clock. UTC is a compromise between TAI and UT1. We want our time scale to stay more or less in sync with the sun, but we also want it to be based on the best definition at hand for a second. There is a tension between these two desires because the Earth does not rotate at a constant rate.
Over the long term, the Earth's rotation rate is slowing down because of the tides. The length of a day was considerably shorter a couple of billion years ago. It was a tiny bit shorter a couple of hundred years ago. Our 86,400 second-long day is based on the length of a solar day from a couple of hundred years ago. Today a solar day is about 86,400.002 seconds long, so we have to add leap seconds every so often to keep midnight more or less at midnight.
As far as the specific questions you asked,
It is now 4:25 PM CDT on June 8, 2011. That's calendar time. Here's a challenge: How many seconds elapsed between 12:42 AM EST on January 3, 1999 and 4:25 PM CDT on June 8, 2011? That's akin to asking someone to do arithmetic in Roman numerals. Yech.
Local time: 4:25 PM CDT and 12:42 AM EST are examples of local time. What is midnight to me is noon to someone halfway around the world.
Wall time (Better: Wall clock time): Suppose you run a program and it takes 20 minutes to run to completion. That 20 minutes is the wall clock time it took the program to run this time around. When I see some program take a lot longer than expected to run I check to see if my stupid antivirus program has gone viral. Oftentimes that is exactly what happens. After giving the antivirus program a kick in the pants, that exact same program run might take only five minutes of wall clock time. The CPU time, on the other hand, will be pretty much the same over the two runs.
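To see that distinction for yourself, a small sketch (the sleep stands in for the thread being parked by the OS):

    #include <chrono>
    #include <cstdio>
    #include <ctime>
    #include <thread>

    int main() {
        auto wall_start = std::chrono::steady_clock::now();
        std::clock_t cpu_start = std::clock();

        std::this_thread::sleep_for(std::chrono::seconds(2));  // parked: no CPU time used

        std::chrono::duration<double> wall = std::chrono::steady_clock::now() - wall_start;
        double cpu = double(std::clock() - cpu_start) / CLOCKS_PER_SEC;

        std::printf("wall time: %.3f s\n", wall.count());  // about 2 s
        std::printf("CPU time:  %.3f s\n", cpu);           // close to 0 s
    }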