'Cheapest' way of getting a timestamp in Linux (C++)

I am wondering what the cheapest way of getting a timestamp in Linux (in C++) is.
I assume it's an accuracy trade-off, so I believe there is more than one possibility.
I need milliseconds but not necessarily microseconds, so std::localtime isn't an option, and gettimeofday is probably too costly (due to its microsecond accuracy).

1: fprintf(stdout, "%u\n", (unsigned)time(NULL));

2: struct timeval tv;
   gettimeofday(&tv, NULL);
   tv.tv_sec  // seconds
   tv.tv_usec // microseconds

3: std::time_t result = std::time(nullptr);
   std::cout << std::asctime(std::localtime(&result))
             << result << " seconds since the Epoch\n";

4: using namespace std::chrono;
   milliseconds ms = duration_cast<milliseconds>(
       high_resolution_clock::now().time_since_epoch()
   );

I would suggest the ctime library and the following code:

std::time_t timestamp = std::time(nullptr);
std::cout << std::asctime(std::localtime(&timestamp))
          << timestamp << " seconds since the Epoch\n";

This stores in the "timestamp" variable the number of seconds since the Epoch (pretty straightforward). There are ways to convert this into a readable format, but that, as you said, is costly. Getting the number itself is very efficient, because it only reads a value from the system instead of converting, concatenating, and calculating a readable date.
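If milliseconds are enough and call overhead is the main concern, another option worth measuring is Linux's CLOCK_REALTIME_COARSE (available since kernel 2.6.32), which trades resolution, typically the kernel tick of 1-4 ms, for a cheaper read. A minimal sketch, assuming a Linux system:

#include <stdio.h>
#include <time.h>

int main()
{
    struct timespec ts;
    /* CLOCK_REALTIME_COARSE is Linux-specific: lower resolution than
       CLOCK_REALTIME, but usually serviced from the vDSO without a
       full system call, so it is cheap to read. */
    clock_gettime(CLOCK_REALTIME_COARSE, &ts);
    long long ms = (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    printf("%lld ms since the Epoch\n", ms);
}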

Related

Convert time_t from localtime zone to UTC

I have a time_t that represents the time in seconds since epoch. Those seconds refer to the local time.
I want to convert them to UTC.
Is there a way to do this in C++?
I'm going to show two ways of doing this:
1: Using the C API.
2: Using a modern C++11/14 library built on top of <chrono>.
For the purposes of this demo, I'm assuming that the current number of seconds in the local time zone is 1,470,003,841. My local time zone is America/New_York, and so the results I get reflect that we are currently at -0400 UTC.
First the C API:
This API is not type-safe and is very error prone. I made several mistakes just while coding up this answer, but I was able to quickly detect these mistakes because I was checking the answers against the 2nd technique.
#include <ctime>
#include <iostream>

int
main()
{
    std::time_t lt = 1470003841;
    auto local_field = *std::gmtime(&lt);
    local_field.tm_isdst = -1;
    auto utc = std::mktime(&local_field);
    std::cout << utc << '\n'; // 1470018241
    char buf[30];
    std::strftime(buf, sizeof(buf), "%F %T %Z\n", &local_field);
    std::cout << buf;
    auto utc_field = *std::gmtime(&utc);
    std::strftime(buf, sizeof(buf), "%F %T UTC\n", &utc_field);
    std::cout << buf;
}
First I initialize the time_t. Now there is no C API to go from a local time_t to a UTC time_t. However you can use gmtime to go from a UTC time_t to a UTC tm (from serial to field type, all in UTC). So the first step is to lie to gmtime, telling it you've got a UTC time_t. And then when you get the result back you just pretend you've got a local tm instead of a UTC tm. Clear so far? This is:
auto local_field = *std::gmtime(&lt);
Now, before you go on (and I personally messed this part up the first time through), you have to augment this field type to say that you don't know whether it is currently daylight saving time or not. This causes subsequent steps to figure that out for you:
local_field.tm_isdst = -1;
Next you can use mktime to convert a local tm to a UTC time_t:
auto utc = std::mktime(&local_field);
You can print that out, and for me it is:
1470018241
which is 4h greater. The rest of the function prints out these times in a human-readable format so that you can debug this stuff. For me it output:
2016-07-31 22:24:01 EDT
2016-08-01 02:24:01 UTC
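The whole round trip can be wrapped in a small helper. This is a hypothetical function of mine (the name local_to_utc is not part of any API), just condensing the steps above:

#include <ctime>

// Hypothetical helper: interpret 'lt' as local seconds since the epoch
// and return the corresponding UTC time_t, via the gmtime/mktime trick.
std::time_t local_to_utc(std::time_t lt)
{
    std::tm field = *std::gmtime(&lt); // lie: treat the UTC breakdown as local fields
    field.tm_isdst = -1;               // let mktime figure out daylight saving
    return std::mktime(&field);        // local tm -> UTC-based time_t
}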
A modern C++ API:
There are no facilities in the std::lib to do this. However, you can use this free, open-source (MIT license) library for it.
#include "date/tz.h"
#include <iostream>
int
main()
{
using namespace date;
using namespace std::chrono_literals;
auto zt = make_zoned(current_zone(), local_seconds{1470003841s});
std::cout << zt.get_sys_time().time_since_epoch() << '\n'; // 1470018241s
std::cout << zt << '\n';
std::cout << zt.get_sys_time() << " UTC\n";
}
The first step is to create the local time in terms of seconds since the epoch:
local_seconds{1470003841s}
The next thing to do is to create a zoned_time which is a pairing of this local time and the current time zone:
auto zt = make_zoned(current_zone(), local_seconds{1470003841s});
Then you can simply print out the UTC number of seconds of this pairing:
std::cout << zt.get_sys_time().time_since_epoch() << '\n';
This output for me:
1470018241s
(4h later than the input). To print out this result as I did in the C API:
std::cout << zt << '\n';
std::cout << zt.get_sys_time() << " UTC\n";
which outputs:
2016-07-31 22:24:01 EDT
2016-08-01 02:24:01 UTC
In this modern C++ approach, the local time and the UTC time are different types, making it much more likely that I catch accidental mixing of these two concepts at compile time (as opposed to creating run time errors).
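As a hedged illustration of that compile-time safety (same date library as above; the function is a hypothetical sketch of mine):

#include "date/tz.h"

void illustrate()
{
    using namespace date;
    using namespace std::chrono_literals;
    local_seconds lt{1470003841s}; // local time: its own type
    sys_seconds   st{1470003841s}; // UTC (system) time: a different type
    // st = lt; // would not compile: no implicit local -> UTC conversion;
    //          // the type system forces you to go through a time zone.
}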
Update for C++20
The second technique will be available in C++20 with the following syntax:
#include <chrono>
#include <iostream>

int
main()
{
    using namespace std::chrono;
    zoned_time zt{current_zone(), local_seconds{1470003841s}};
    std::cout << zt.get_sys_time().time_since_epoch() << '\n'; // 1470018241s
    std::cout << zt << '\n';
    std::cout << zt.get_sys_time() << " UTC\n";
}
You can use gmtime:

    Convert time_t to tm as UTC time: Uses the value pointed by timer to fill a tm structure with the values that represent the corresponding time, expressed as UTC time (i.e., the time at the GMT timezone).

(c) http://www.cplusplus.com/reference/ctime/gmtime/
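A minimal sketch of what that quote describes (a UTC time_t in, a UTC field-type tm out), using only standard <ctime> facilities:

#include <cstdio>
#include <ctime>

int main()
{
    std::time_t t = std::time(nullptr);  // seconds since the epoch (UTC-based)
    std::tm utc = *std::gmtime(&t);      // break it down into UTC fields
    char buf[32];
    std::strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S UTC", &utc);
    std::puts(buf);
}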
If you are okay with using Abseil's time library, one other way to do this is:
auto civil_second =
    absl::LocalTimeZone().At(absl::FromTimeT(<your time_t>)).cs;
time_t time_in_utc =
    absl::ToTimeT(absl::FromCivil(civil_second, absl::UTCTimeZone()));
(Maybe there is a simpler set of calls in the library to do this, but I have not explored further. :))
Normally, you would convert from time_t to struct tm, and there aren't many examples of converting from a time_t to a time_t in a different time zone (UTC in the case of the OP's question). I wrote these two functions for that exact purpose. They may be useful when you only need a time_t, but in a specific time zone.
time_t TimeAsGMT(time_t t)
{
    std::chrono::zoned_time zt{"UTC", std::chrono::system_clock::from_time_t(t)};
    return std::chrono::system_clock::to_time_t(zt.get_sys_time());
}
Or, if you want the current time as UTC in the form of a time_t:

time_t CurTimeAsGMT()
{
    std::chrono::zoned_time zt{"UTC", std::chrono::system_clock::now()}; // get the time in the UTC time zone
    return std::chrono::system_clock::to_time_t(zt.get_sys_time());     // return this time as a time_t
}
If you run both functions and compare the initial value and the result value, you will see that the difference matches the difference between your current time (at your current time zone) and UTC / GMT time zone.

How can I get current time of day in milliseconds in C++?

The thing is, I have to somehow get the current time of day in milliseconds in a convenient format.
Example of desired output:
21 h 04 min 12 s 512 ms
I know how to get this format in seconds, but I have no idea how to get my hands on the milliseconds.
Using the portable std::chrono
// Needs <chrono>, <ctime>, <iomanip> (for std::put_time), and <iostream>.
auto now = std::chrono::system_clock::now();
auto time = std::chrono::system_clock::to_time_t(now);
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(now.time_since_epoch()) -
          std::chrono::duration_cast<std::chrono::seconds>(now.time_since_epoch());
std::cout << std::put_time(std::localtime(&time), "%H h %M m %S s ");
std::cout << ms.count() << " ms" << std::endl;
Output:
21 h 24 m 22 s 428 ms
Note for systems with clocks that don't support millisecond resolution
As pointed out by @user4581301, on some systems std::chrono::system_clock might not have enough resolution to accurately represent the current time in milliseconds. If that is the case, try using std::chrono::high_resolution_clock to calculate the number of milliseconds since the last second. This will use the highest resolution your implementation provides.
Taking the time from two clocks will inevitably give you two separate points in time (however small the difference is). So keep in mind that using a separate clock for calculating the milliseconds will not yield perfect synchronization between the second and millisecond parts.
// Use the system clock for the time of day.
auto now = std::chrono::system_clock::now();
/* A small amount of time passes between storing the time points. */
// Use a separate high resolution clock for calculating the milliseconds.
auto hnow = std::chrono::high_resolution_clock::now();
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(hnow.time_since_epoch()) -
          std::chrono::duration_cast<std::chrono::seconds>(hnow.time_since_epoch());
Also, there seems to be no guarantee that the tick events of std::chrono::high_resolution_clock and std::chrono::system_clock are synchronized; because of this, the millisecond period might not be in sync with the periodic update of the current second given by the system clock.
For these reasons, a separate high-resolution clock should not be used when sub-second precision is critical.
With the exception of using boost::chrono, I am not aware of any system-independent method. I have implemented the following for Windows and POSIX:
LgrDate LgrDate::gmt()
{
    LgrDate rtn;
#ifdef _WIN32
    SYSTEMTIME sys;
    GetSystemTime(&sys);
    rtn.setDate(
        sys.wYear,
        sys.wMonth,
        sys.wDay);
    rtn.setTime(
        sys.wHour,
        sys.wMinute,
        sys.wSecond,
        sys.wMilliseconds*uint4(nsecPerMSec));
#else
    struct timeval time_of_day;
    struct tm broken_down;
    gettimeofday(&time_of_day, 0);
    gmtime_r(
        &time_of_day.tv_sec,
        &broken_down);
    rtn.setDate(
        broken_down.tm_year + 1900,
        broken_down.tm_mon + 1,
        broken_down.tm_mday);
    rtn.setTime(
        broken_down.tm_hour,
        broken_down.tm_min,
        broken_down.tm_sec,
        time_of_day.tv_usec * nsecPerUSec);
#endif
    return rtn;
} // gmt
On a POSIX system I would do:

#include <time.h>   // clock_gettime

struct timespec tspec;
clock_gettime(CLOCK_REALTIME, &tspec);
int sec  = (int) tspec.tv_sec;
int msec = (int) (tspec.tv_nsec / 1000000);

Note: CLOCK_REALTIME is used to get the wall clock, which is adjusted using NTP.
Then use whatever you have for the h:m:s part.
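For that h:m:s part, a hedged sketch combining clock_gettime with the thread-safe localtime_r:

#include <stdio.h>
#include <time.h>

int main()
{
    struct timespec tspec;
    clock_gettime(CLOCK_REALTIME, &tspec);   // wall clock, NTP-adjusted

    struct tm local;
    localtime_r(&tspec.tv_sec, &local);      // thread-safe local breakdown

    printf("%02d h %02d min %02d s %03ld ms\n",
           local.tm_hour, local.tm_min, local.tm_sec,
           tspec.tv_nsec / 1000000L);
}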

Measurement with boost::posix_time::microsec_clock has an error of more than ten microseconds?

I have the following code:
long long unsigned int GetCurrentTimestamp()
{
    LARGE_INTEGER res;
    QueryPerformanceCounter(&res);
    return res.QuadPart;
}

long long unsigned int initalizeFrequency()
{
    LARGE_INTEGER res;
    QueryPerformanceFrequency(&res);
    return res.QuadPart;
}

// start time stamp
boost::posix_time::ptime startTime = boost::posix_time::microsec_clock::local_time();
long long unsigned int start = GetCurrentTimestamp();
// ....
// execution that should be measured
// ....
long long unsigned int end = GetCurrentTimestamp();
boost::posix_time::ptime endTime = boost::posix_time::microsec_clock::local_time();

boost::posix_time::time_duration duration = endTime - startTime;
std::cout << "Duration by Boost posix: " << duration.total_microseconds() << std::endl;
std::cout << "Processing time is " << ((end - start) * 1000000 / initalizeFrequency())
          << " microsec " << std::endl;
Result of this code is
Duration by Boost posix: 0
Processing time is 24 microsec
Why is there such a big divergence? Boost is supposed to measure microseconds, yet it measures them with an error of tens of microseconds?
Posix time: microsec_clock:

    Get the UTC time using a sub second resolution clock. On Unix systems this is implemented using GetTimeOfDay. On most Win32 platforms it is implemented using ftime. Win32 systems often do not achieve microsecond resolution via this API. If higher resolution is critical to your application test your platform to see the achieved resolution.
ftime simply does not provide microsecond resolution. The argument may contain the word microsecond, but the implementation does not provide any accuracy in that range. Its granularity is in the ms regime.
You'd get something different from zero when your operation needs more time, say at least 20 ms.
Edit: Note: In the long run the microsec_clock implementation for Windows should use the GetSystemTimePreciseAsFileTime function when possible (min. req. Windows 8 desktop, Windows Server 2012 desktop) to achieve microsecond resolution.
Unfortunately, the current Boost implementation of boost::posix_time::microsec_clock doesn't use the QueryPerformanceCounter Win32 API; it uses GetSystemTimeAsFileTime instead, which in turn uses GetSystemTime. But system time resolution is milliseconds (or even worse).
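If the goal is just to time a piece of code portably (the role QueryPerformanceCounter plays above), a hedged sketch using std::chrono::steady_clock sidesteps both APIs:

#include <chrono>
#include <iostream>

int main()
{
    auto start = std::chrono::steady_clock::now();
    // ... execution that should be measured ...
    auto end = std::chrono::steady_clock::now();

    // steady_clock is monotonic; on Windows it is typically backed by
    // QueryPerformanceCounter, on Linux by CLOCK_MONOTONIC.
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
    std::cout << "Processing time is " << us.count() << " microsec\n";
}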

boost::date_time (boost-145) using a 64-bit uint with microsec calculations, without truncation

I am using date_time to abstract away platform peculiarities, and I need to produce a 64-bit, microsecond-resolution uint64_t which will be used in serialization. I do not understand what is going wrong below.
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/cstdint.hpp>
#include <iostream>

using namespace boost::posix_time;
using boost::uint64_t;

ptime UNIX_EPOCH(boost::gregorian::date(1970,1,1));

int main() {
    ptime current_time = microsec_clock::universal_time();
    std::cout << "original time: " << current_time << std::endl;

    long microsec_since_epoch = ((current_time - UNIX_EPOCH).total_microseconds());

    ptime output_ptime = UNIX_EPOCH + microseconds(microsec_since_epoch);
    std::cout << "Deserialized time : " << output_ptime << std::endl;

    std::cout << "Microsecond output: " << microsec_since_epoch << std::endl;
    std::cout << "Microsecond to second arithmetic: "
              << microsec_since_epoch/(10*10*10*10*10*10) << std::endl;
    std::cout << "Microsecond to time_duration, back to microsecond : "
              << microseconds(microsec_since_epoch).total_microseconds() << std::endl;
    return 0;
}
Here is the output I get.
original time: 2010-Dec-17 09:52:06.737123
Deserialized time : 1970-Jan-16 03:10:41.577454
Microsecond output: 1292579526737123
Microsecond to second arithmetic: 1292579526
Microsecond to time_duration, back to microsecond : 1307441577454
When I switch to using total_seconds() and + seconds(...), the problems disappear, i.e., the output changes to:
2010-Dec-15 18:26:22.606978
2010-Dec-15 18:26:22
date_time claims to use a 64-bit type internally, and 2^64 ÷ (10^6 × 3600 × 24 × 365) ~= 584942; even 2^60 ÷ (10^6 × 3600 × 24 × 365) ~= 36558.
The opening lines from Wikipedia have this to say about POSIX time:

    Unix time, or POSIX time, is a system for describing points in time, defined as the number of seconds elapsed since midnight Coordinated Universal Time (UTC) of January 1, 1970.
Why is such massive truncation going on 40 years down the line?
How do I use the full 64-bit space with microsecond resolution using boost::date_time ?
--edit1 in response to hans--
The post has been changed to reflect the integer output of the duration.total_microseconds() part. Note 1292576572566904÷(10^6×3600×24×365) ~= 40.98 years. The output from seconds has not been updated.
--edit2--
Downscaling the microseconds to seconds before the "deserialization" step also works well. This approach solved my problem; I only need the microsecond resolution at creation, and I can live without it at deserialization.
I do still want to know the what and why of the problem.
This seems to be a problem with microseconds() not being able to handle such a large microseconds input. The following snippet is a fix for this problem:

#define MICROSEC 1000000

uint64_t sec_epoch = microsec_since_epoch / MICROSEC;
uint64_t mod_micro_epoch = microsec_since_epoch % MICROSEC;

ptime new_method = UNIX_EPOCH + seconds(sec_epoch) + microseconds(mod_micro_epoch);
std::cout << "Deserialization with new method: " << new_method << std::endl;
The return type of total_microseconds() is tick_type, not long. It looks like you're compiling this with a compiler whose long type is 32 bits, which is much too small to store 40 years' worth of microseconds.
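A minimal sketch of that fix on the storing side, keeping the tick count in a guaranteed 64-bit type instead of long (variable names mirror the question's code):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/cstdint.hpp>
#include <iostream>

int main()
{
    using namespace boost::posix_time;
    ptime UNIX_EPOCH(boost::gregorian::date(1970, 1, 1));
    ptime current_time = microsec_clock::universal_time();

    // total_microseconds() returns time_duration::tick_type (64-bit);
    // assigning it to a 32-bit 'long' silently truncates.
    boost::uint64_t microsec_since_epoch =
        (current_time - UNIX_EPOCH).total_microseconds();
    std::cout << microsec_since_epoch << std::endl;
}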

Why are gettimeofday() intervals occasionally negative?

I have an experimental library whose performance I'm trying to measure. To do this, I've written the following:
struct timeval begin;
gettimeofday(&begin, NULL);
{
    // Experiment!
}
struct timeval end;
gettimeofday(&end, NULL);

// Print the time it took!
std::cout << "Time: " << 100000 * (end.tv_sec - begin.tv_sec) + (end.tv_usec - begin.tv_usec) << std::endl;
Occasionally, my results include negative timings, some of which are nonsensical. For instance:
Time: 226762
Time: 220222
Time: 210883
Time: -688976
What's going on?
You've got a typo. Corrected last line (note the number of 0s):
std::cout << "Time: " << 1000000 * (end.tv_sec - begin.tv_sec) + (end.tv_usec - begin.tv_usec) << std::endl;
BTW, timersub is a built-in macro to get the difference between two timevals.
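A hedged sketch using that macro (timersub is a BSD/glibc extension in <sys/time.h>, not strictly POSIX):

#include <stdio.h>
#include <sys/time.h>

int main()
{
    struct timeval begin, end, diff;
    gettimeofday(&begin, NULL);
    // ... experiment ...
    gettimeofday(&end, NULL);

    timersub(&end, &begin, &diff);  // diff = end - begin, with tv_usec normalized
    printf("Time: %ld.%06ld s\n", (long)diff.tv_sec, (long)diff.tv_usec);
}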
The POSIX realtime libraries are better suited for measuring high-accuracy intervals. You really don't want to know the current time; you just want to know how long it has been between two points. That is what the monotonic clock is for.
struct timespec begin;
clock_gettime(CLOCK_MONOTONIC, &begin);
{
    // Experiment!
}
struct timespec end;
clock_gettime(CLOCK_MONOTONIC, &end);

// Print the time it took!
std::cout << "Time: " << double(end.tv_sec - begin.tv_sec) + (end.tv_nsec - begin.tv_nsec)/1000000000.0 << std::endl;
When you link, you need to add -lrt.
Using the monotonic clock has several advantages. It often uses the hardware timers (Hz crystal or whatever), so it is often a faster call than gettimeofday(). Also, monotonic timers are guaranteed never to go backwards, even if ntpd or a user is goofing with the system time.
You took care of the negative value, but it still isn't correct. The difference between the microsecond fields is erroneous: say we have begin and end times of 1.100 s and 2.051 s; by the accepted answer this would be an elapsed time of 1.049 s, which is incorrect.
The code below takes care of the case where the difference is only in the microseconds, not the seconds, and the case where the microseconds value overflows.
if (end.tv_sec == begin.tv_sec)
    printf("Total Time =%ldus\n", (end.tv_usec - begin.tv_usec));
else
    printf("Total Time =%ldus\n", (end.tv_sec - begin.tv_sec - 1) * 1000000 + (1000000 - begin.tv_usec) + end.tv_usec);
std::cout << "Time: " << 100000 * (end.tv_sec - begin.tv_sec) + (end.tv_usec - begin.tv_usec) << std::endl;
As noted, there are 1000000 usec in a sec, not 100000.
More generally, you may need to be aware of the instability of timing on computers. Processes such as ntpd can change clock times, leading to incorrect delta times. You might be interested in POSIX facilities such as timer_create.
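As a rough, hypothetical sketch of timer_create (assuming Linux POSIX timers; link with -lrt on older glibc), here is a one-shot monotonic timer that delivers a signal after 100 ms:

#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void handler(int sig) { (void)sig; /* async-signal-safe work only */ }

int main()
{
    // Deliver SIGRTMIN when the timer fires.
    struct sigaction sa = {};
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGRTMIN, &sa, NULL);

    struct sigevent sev = {};
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGRTMIN;

    timer_t tid;
    timer_create(CLOCK_MONOTONIC, &sev, &tid);  // monotonic: immune to clock changes

    struct itimerspec its = {};
    its.it_value.tv_nsec = 100 * 1000000L;      // one-shot: fire after 100 ms
    timer_settime(tid, 0, &its, NULL);

    pause();  // wait for the signal
    puts("timer fired");
    timer_delete(tid);
}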
Next time, just do:

$ time ./proxy-application