Type of CLOCKS_PER_SEC - c++

What datatype is CLOCKS_PER_SEC typically represented as? long unsigned int? clock_t? Does it vary from implementation to implementation?
I ask because I use CLOCKS_PER_SEC in a return value, and I want to make sure I use the most appropriate type.

All that the C standard promises is that CLOCKS_PER_SEC is a constant expression with the type clock_t which must be an arithmetic type (could be an integer or a floating type).
(C99 7.23 Date and time <time.h>)
I think that clock_t is typically a long, but I wouldn't bet my life that I'm right.
My usually trusty Harbison & Steele (3rd ed) suggests casting clock_t to double for use in your programs so your code can work regardless of the actual clock_t type (18.1 CLOCK, CLOCK_T, CLOCKS_PER_SEC, TIMES):
Here is how the clock function can be used to time an ANSI C program:
#include <time.h>
clock_t start, finish, duration;
start = clock();
process();
finish = clock();
printf("process() took %f seconds to execute\n",
((double) (finish - start)) / CLOCKS_PER_SEC );
Note how the cast to type double allows clock_t and CLOCKS_PER_SEC to be either floating-point or integral.
You might consider whether this would work for your purposes.

CLOCKS_PER_SEC is a macro that usually expands to a literal.
The glibc manual says:
In the GNU system, clock_t is equivalent to long int and CLOCKS_PER_SEC is an integer value.
But in other systems, both clock_t and the type of the macro CLOCKS_PER_SEC can be either integer or floating-point types. Casting processor time values to double, as in the example above, makes sure that operations such as arithmetic and printing work properly and consistently no matter what the underlying representation is.

CLOCKS_PER_SEC is actually specified by POSIX as part of the time.h header.
That says it's a clock_t as described by sys/types.h.
That in turn says:
time_t and clock_t shall be integer or real-floating types.
So all you can assume in portable code is that it is some integral or floating point type. If you just need to declare a variable to store the value, declare it as "clock_t".

Related

Converting from std::chrono:: to 32 bit seconds and nanoseconds?

This could be the inverse of Converting from struct timespec to std::chrono::?
I am getting my time as
const std::Chrono::CRealTimeClock::time_point RealTimeClockTime = std::Chrono::CRealTimeClock::now();
and I have to convert it to a struct timespec.
Actually, I don't, if there is an alternative; what I have to do is get the number of seconds since the epoch and the number of nanoseconds since the last whole second.
I chose struct timespec because
struct timespec
{
    time_t tv_sec;  // Seconds - >= 0
    long   tv_nsec; // Nanoseconds - [0, 999999999]
};
The catch is that I need to shoehorn the seconds and nanoseconds into uint32_t.
I am aware that there is a danger of loss of precision, but reckon that we don't care too much about the nanoseconds, while the year 2038 problem gives me cause for concern.
However, I have to bang out some code now and we can update it later if necessary. The code has to meet another manufacturer's specification and it is likely to take weeks or months to get this problem resolved and use uint64_t.
So, how can I, right now, obtain 32 bit values of second and nanosecond from std::Chrono::CRealTimeClock::now()?
I'm going to ignore std::Chrono::CRealTimeClock::now() and just pretend you wrote std::chrono::system_clock::now(). Hopefully that will give you the tools to deal with whatever clock you actually have.
Assume:
#include <cstdint>
struct my_timespec
{
    std::uint32_t tv_sec;  // Seconds - >= 0
    std::uint32_t tv_nsec; // Nanoseconds - [0, 999999999]
};
Now you can write:
#include <chrono>
my_timespec
now()
{
    using namespace std;
    using namespace std::chrono;
    auto tp = system_clock::now();
    auto tp_sec = time_point_cast<seconds>(tp);
    nanoseconds ns = tp - tp_sec;
    return {static_cast<uint32_t>(tp_sec.time_since_epoch().count()),
            static_cast<uint32_t>(ns.count())};
}
Explanation:
I've used function-local using directives to reduce code verbosity and increase readability. If you prefer you can use using declarations instead to bring individual names into scope, or you can explicitly qualify everything.
The first job is to get now() from whatever clock you're using.
Next use std::chrono::time_point_cast to truncate the precision of tp to seconds precision. One important note is that time_point_cast truncates towards zero. So this code assumes that now() is after the clock's epoch and returns a non-negative time_point. If this is not the case, then you should use C++17's floor instead. floor always truncates towards negative infinity. I chose time_point_cast over floor only because of the [c++14] tag on the question.
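In C++17, that one line inside the function above would instead be written with floor (just showing the substitution):
auto tp_sec = floor<seconds>(tp);  // C++17: truncates towards negative infinity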
The expression tp - tp_sec is a std::chrono::duration representing the time duration since the last integral second. This duration is implicitly converted to have units of nanoseconds. This implicit conversion is typically fine, as all implementations of system_clock::duration have units that are either nanoseconds or coarser than nanoseconds (and thus implicitly convertible to nanoseconds). If your clock tracks units of picoseconds (for example), then you will need a duration_cast<nanoseconds>(tp - tp_sec) here to truncate picoseconds to nanoseconds precision.
Now you have the {seconds, nanoseconds} information in {tp_sec, ns}. It's just that they are still in std::chrono types and not uint32_t as desired. You can extract the internal integral values with the member functions .time_since_epoch() and .count(), and then static_cast those resultant integral types to uint32_t. The final static_casts are optional, as integral conversions can be made implicitly. However, their use is considered good style.
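For completeness, a small usage sketch (assuming the my_timespec struct and now() function above are in scope; the printed value obviously depends on when it runs):

#include <cinttypes>
#include <cstdio>

int main()
{
    my_timespec ts = now();
    // tv_sec holds whole seconds since the clock's epoch,
    // tv_nsec the remaining fraction in nanoseconds.
    std::printf("%" PRIu32 ".%09" PRIu32 "\n", ts.tv_sec, ts.tv_nsec);
}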

Is it guaranteed to be 2038-safe if sizeof(std::time_t) == sizeof(std::uint64_t) in C++?

Excerpted from the cppref:
Implementations in which std::time_t is a 32-bit signed integer (many historical implementations) fail in the year 2038.
However, the documentation doesn't say how to detect whether the current implementation is 2038-safe. So, my question is:
Is it guaranteed to be 2038-safe if sizeof(std::time_t) == sizeof(std::uint64_t) in C++?
Practically speaking, yes. In all modern implementations on major OSes, time_t is the number of seconds since the POSIX epoch, so if time_t is wider than int32_t then it's immune to the y2038 problem.
You can also check whether __USE_TIME_BITS64 is defined on 32-bit Linux, and whether _USE_32BIT_TIME_T is not defined on 32-bit Windows, to know whether the build is 2038-safe.
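A rough sketch of what such a build-time check might look like (the macros are glibc- and MSVC-specific, and the sizeof test is only a practical heuristic, not a guarantee):

#include <cstdint>
#include <ctime>

// Practical heuristic only: a time_t at least 64 bits wide is, on mainstream
// platforms, a seconds count that comfortably outlives 2038.
static_assert(sizeof(std::time_t) >= sizeof(std::int64_t),
              "time_t appears to be 32-bit; probably not 2038-safe");

// MSVC-specific: _USE_32BIT_TIME_T forces a 32-bit time_t on 32-bit Windows.
#if defined(_WIN32) && defined(_USE_32BIT_TIME_T)
#error "_USE_32BIT_TIME_T is defined; this build is not 2038-safe"
#endif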
However, regarding the C++ standard, things aren't as simple. time_t in C++ is defined in <ctime>, which has the same content as <time.h> in the C standard, and in C time_t isn't defined to have any particular format:
3. The types declared are size_t (described in 7.19); clock_t and time_t which are real types capable of representing times;
4. The range and precision of times representable in clock_t and time_t are implementation-defined.
http://port70.net/~nsz/c/c11/n1570.html#7.27.1p3
So it's permitted for some implementation to have, for example, double as time_t and store picoseconds from 1/1 of the year 16383 BCE, or even a 64-bit integer with only 32 value bits and 32 padding bits. That may be one of the reasons difftime() returns a double.
To check the y2038 issue portably at run time, you can use mktime:
The mktime function returns the specified calendar time encoded as a value of type time_t. If the calendar time cannot be represented, the function returns the value (time_t)(-1).
http://port70.net/~nsz/c/c11/n1570.html#7.27.2.3p3
#include <ctime>
#include <iostream>

struct tm time_str{};   // zero-initialize the members not set explicitly below
time_str.tm_year = 2039 - 1900;
time_str.tm_mon = 1 - 1;
time_str.tm_mday = 1;
time_str.tm_hour = 0;
time_str.tm_min = 0;
time_str.tm_sec = 1;
time_str.tm_isdst = -1;
if (mktime(&time_str) == (time_t)(-1))
    std::cout << "Not y2038 safe\n";

Convert float seconds to chrono::duration

What is the easiest and elegant way to convert float time in seconds to std::chrono::duration<int64_t, std::nano>?
Is it just converting seconds to nanoseconds and passing to the std::chrono::duration constructor?
I have tried this code:
constexpr auto durationToDuration(const float time_s)
{
    // need to convert the input in seconds to nanoseconds that duration takes
    const std::chrono::duration<int64_t, std::nano> output{static_cast<int64_t>(time_s * 1000000000.0F)};
    return output;
}
But it doesn't convert correctly for many values of time_s.
The best way is also the easiest and safest. Safety is a key aspect of using chrono. Safety translates to: Least likely to contain programming errors.
There's two steps for this:
Convert the float to a chrono::duration that is represented by a float and has the period of seconds.
Convert the resultant duration of step 1 to nanoseconds (which is the same thing as duration<int64_t, std::nano>).
This might look like this:
constexpr
auto
durationToDuration(const float time_s)
{
    using namespace std::chrono;
    using fsec = duration<float>;
    return round<nanoseconds>(fsec{time_s});
}
fsec is the resultant type of step 1. It does absolutely no computation, and just changes the type from float to a chrono::duration. Then the chrono engine is used to do the actual computation, changing one duration into another duration.
The round utility is used because floating point types are vulnerable to round-off error. So if a floating point value is close to an integral number of nanoseconds, but not exact, one usually desires that close value.
But std::chrono::round is really a C++17 facility. For C++14, just use one of the free, open-source versions floating around the web (http://howardhinnant.github.io/duration_io/chrono_util.html or https://github.com/HowardHinnant/date/blob/master/include/date/date.h).
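A quick usage sketch (assuming the durationToDuration above is in scope; the inputs are arbitrary, and 1.5 is exactly representable as a float while 0.1 is not, which is exactly the case round helps with):

#include <chrono>
#include <iostream>

int main()
{
    std::chrono::nanoseconds a = durationToDuration(1.5f);  // exactly 1500000000 ns
    std::chrono::nanoseconds b = durationToDuration(0.1f);  // nearest ns to the float value 0.1f
    std::cout << a.count() << " ns, " << b.count() << " ns\n";
}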

Implicit conversion double to unsigned long overflow c++

I'm testing a timer based on the ctime library using the clock() function.
Please note that the code that follows is only for test purposes.
#include <ctime>
#include <iostream>
#include <unistd.h>   // for sleep()

using namespace std;

unsigned long Elapsed(void);

clock_t start = 0;
clock_t stop = 0;

int main()
{
    start = std::clock();
    while(1)
    {
        sleep(1);
        cout << "Elapsed seconds: " << Elapsed() << endl;
    }
    return 0;
}

unsigned long Elapsed()
{
    stop = std::clock();
    clock_t ticks = stop - start;
    double seconds = (double)ticks / CLOCKS_PER_SEC; // CLOCKS_PER_SEC = 1 million here
    return seconds;
}
As you can see I'm performing an implicit conversion from double to unsigned long when Elapsed() returns the calculated value.
The unsigned long limit for a 32 bit system is 2,147,483,647 and I get overflow after Elapsed() returns 2146.
Looks like the function converts "ticks" to unsigned long, CLOCKS_PER_SEC to unsigned long, and then returns the value. When it converts the "ticks" it overflows.
I expected it, instead, to first calculate the value of "ticks"/CLOCKS_PER_SEC as a double and THEN convert it to unsigned long.
In an attempt to count more seconds I tried returning an unsigned long long instead, but the variable always overflows at the same value (2147).
Could you explain why the compiler converts to unsigned long long "a priori", and why it still overflows at the same value even with unsigned long long?
Is there any way to write the Elapsed() function in a better way to prevent the overflow from happening?
Contrary to popular belief, the behaviour on converting a floating point type such as a double to any integral type is undefined if the value cannot fit into that integral type.
So introducing a double in your function is a poor thing to do indeed.
Why not write return ticks / CLOCKS_PER_SEC; instead, if you can allow for truncation and wrap-around effects? Or if not, use an unsigned long long as the return value.
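For illustration, that suggestion might look like this (a sketch of the same Elapsed() as in the question; it is still limited by clock_t's own range and wrap-around):

unsigned long long Elapsed()
{
    stop = std::clock();
    clock_t ticks = stop - start;
    // Integer division: truncates to whole seconds, no double involved.
    return static_cast<unsigned long long>(ticks / CLOCKS_PER_SEC);
}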
If on your system, clock_t is a 32 bit type, then it's likely it'll wrap around after 2147 seconds like you're seeing. This is expected behavior (ref. clock). And no amount of casting will get around that. Your code needs to be able to deal with the wrap-around (either by ignoring it, or by explicitly accounting for it).
When it converts the "ticks" it overflows.
No, the clock itself "overflows"; the conversion has nothing to do with it. That said, the conversion to double is pointless. Your limitation is the type clock_t. See notes for example from this reference:
The value returned by clock() may wrap around on some implementations. For example, on a machine with 32-bit clock_t, it wraps after 2147 seconds or 36 minutes.
One alternative, if it's available to you, is to rely on the POSIX standard instead of the C standard library. It provides clock_gettime, which can be used to get the CPU time represented in a timespec. Not only does it not suffer from this overflow (until a much longer timespan), but it also may have higher resolution than clock. The linked reference page of clock() conveniently shows example usage of clock_gettime as well.
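A minimal sketch of that alternative (POSIX only; CLOCK_PROCESS_CPUTIME_ID is optional in POSIX but widely available, and measures per-process CPU time, the closest analogue of clock()):

#include <cstdio>
#include <ctime>   // POSIX: clock_gettime, timespec, CLOCK_PROCESS_CPUTIME_ID

int main()
{
    struct timespec ts;
    if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts) == 0)
    {
        // Seconds plus nanoseconds: no wrap-around after ~36 minutes.
        double cpu_seconds = ts.tv_sec + ts.tv_nsec / 1e9;
        std::printf("CPU time: %f s\n", cpu_seconds);
    }
    return 0;
}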

Measuring execution time - gettimeofday versus clock() versus chrono

I have a subroutine that should be executed once every millisecond. I wanted to check that that's indeed what's happening, but I get different execution times from different functions. I've been trying to understand the differences between these functions (there are several SO questions about the subject) but I cannot get my head around the results I got. Please forget the global variables etc.; this is legacy code, written in C and ported to C++, which I'm trying to improve, so it is messy.
< header stuff>
std::chrono::high_resolution_clock::time_point tchrono;
int64_t tgettime;
float tclock;

void myfunction(){
    <all kinds of calculations>
    using ms = std::chrono::duration<double, std::milli>;
    std::chrono::high_resolution_clock::time_point tmpchrono = std::chrono::high_resolution_clock::now();
    printf("chrono %f (ms): \n", std::chrono::duration_cast<ms>(tmpchrono-tchrono).count());
    tchrono = tmpchrono;

    struct timeval tv;
    gettimeofday (&tv, NULL);
    int64_t tmpgettime = (int64_t) tv.tv_sec * 1000000 + tv.tv_usec;
    printf("gettimeofday: %lld\n", tmpgettime-tgettime);
    tgettime = tmpgettime;

    float tmpclock = 1000.0f*((float)clock())/CLOCKS_PER_SEC;
    printf("clock %f (ms)\n", tmpclock-tclock);
    tclock = tmpclock;
    <more stuff>
}
and the output is:
chrono 0.998352 (ms):
gettimeofday: 999
clock 0.544922 (ms)
Why the difference? I'd expect clock to be at least as large as the others, or not?
std::chrono::high_resolution_clock::now() is not even working.
std::chrono::milliseconds represents the milliseconds as integers. When you convert to that representation, time representations of higher granularity are truncated to whole milliseconds. Then you assign it to a duration that has a double representation and seconds-ratio. Then you pass the duration object - instead of a double - to printf. All of those steps are wrong.
To get the milliseconds as a floating point, do this:
using ms = std::chrono::duration<double, std::milli>;
std::chrono::duration_cast<ms>(tmpchrono-tchrono).count();
clock() returns the processor time the process has used. That will depend on how much time the OS scheduler has given to your process. Unless the process is the only one on the system, this will be different from the passed wall clock time.
gettimeofday() returns the wall clock time.
What's the difference between using high_resolution_clock::now() and gettimeofday() ?
Both measure the wall clock time. The internal representation of both is implementation defined. The granularity of both is implementation defined as well.
gettimeofday is part of the POSIX standard and therefore available in all operating systems that comply with that standard (POSIX.1-2001). gettimeofday is not monotonic, i.e. it's affected by things like setting the time (by ntpd or by an administrator) and changes in daylight saving time.
high_resolution_clock represents the clock with the smallest tick period provided by the implementation. It may be an alias of std::chrono::system_clock or std::chrono::steady_clock, or a third, independent clock.
high_resolution_clock is part of the C++ standard library and therefore available in all compilers that comply with that standard (C++11). high_resolution_clock may or may not be monotonic; this can be tested with high_resolution_clock::is_steady.
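For example, a trivial sketch that inspects that flag:

#include <chrono>
#include <iostream>

int main()
{
    // true means the clock never jumps backwards (e.g. due to NTP or manual time changes)
    std::cout << std::boolalpha
              << std::chrono::high_resolution_clock::is_steady << '\n';
}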
The simplest way to use std::chrono to measure execution time is this:
#include <chrono>
#include <iostream>

using namespace std::chrono;

auto start = high_resolution_clock::now();
/*
 * multiple iterations of the code you want to benchmark -
 * make sure the optimizer doesn't eliminate the whole code
 */
auto end = high_resolution_clock::now();
std::cout << "Execution time (us): " << duration_cast<microseconds>(end - start).count() << std::endl;