Implicit conversion from double to unsigned long overflow in C++

I'm testing a timer based on the ctime library using the clock() function.
Please note that the code that follows is only for test purposes.
#include <ctime>
#include <iostream>
#include <unistd.h> // for sleep() (POSIX)

unsigned long Elapsed(void);

clock_t start = 0;
clock_t stop = 0;

int main()
{
    start = std::clock();
    while (1)
    {
        sleep(1);
        std::cout << "Elapsed seconds: " << Elapsed() << std::endl;
    }
    return 0;
}

unsigned long Elapsed()
{
    stop = std::clock();
    clock_t ticks = stop - start;
    double seconds = (double)ticks / CLOCKS_PER_SEC; // CLOCKS_PER_SEC = 1 million
    return seconds;
}
As you can see I'm performing an implicit conversion from double to unsigned long when Elapsed() returns the calculated value.
The unsigned long limit on a 32-bit system is 2,147,483,647, and I get overflow after Elapsed() returns 2146.
Looks like the function converts "ticks" to unsigned long, CLOCKS_PER_SEC to unsigned long, and then returns the value. When it converts the "ticks" it overflows.
I expected it, instead, to first calculate the value of "ticks"/CLOCKS_PER_SEC in double and THEN convert it to unsigned long.
In an attempt to count more seconds I tried returning an unsigned long long, but the variable always overflows at the same value (2147).
Could you explain why the compiler converts to unsigned long long "a priori", and why even with unsigned long long it overflows at the same value?
Is there any way to write the Elapsed() function better, so as to prevent the overflow from happening?

Contrary to popular belief, the behaviour on converting a floating point type such as a double to any integral type is undefined if the value cannot fit into that integral type.
So introducing a double in your function is a poor thing to do indeed.
Why not write return ticks / CLOCKS_PER_SEC; instead, if you can live with truncation and wrap-around effects? Or, if not, use an unsigned long long as the return value.
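For instance, a minimal integer-only sketch of Elapsed() might look like this (it avoids the undefined double-to-integer conversion, though it cannot fix the wrap-around of clock_t itself):

#include <ctime>

// Sketch: integer-only Elapsed(). Division truncates toward zero,
// which is fine if whole seconds are enough. Assumes ticks has not
// gone negative due to a wrapped clock_t.
unsigned long long Elapsed(clock_t start)
{
    clock_t ticks = std::clock() - start;
    return static_cast<unsigned long long>(ticks) / CLOCKS_PER_SEC;
}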

If on your system, clock_t is a 32 bit type, then it's likely it'll wrap around after 2147 seconds like you're seeing. This is expected behavior (ref. clock). And no amount of casting will get around that. Your code needs to be able to deal with the wrap-around (either by ignoring it, or by explicitly accounting for it).
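If your clock_t happens to fit in 32 bits, one way to account for a single wrap is to do the subtraction in uint32_t, where wrap-around is well defined. A minimal sketch under that assumption (it only tolerates one wrap between the two measurements):

#include <cstdint>
#include <ctime>

// Sketch: assumes clock() values fit in 32 bits. Unsigned
// subtraction is modular, so one wrap between start and stop
// still yields the correct tick difference.
unsigned long ElapsedSeconds(clock_t start)
{
    std::uint32_t s = static_cast<std::uint32_t>(start);
    std::uint32_t t = static_cast<std::uint32_t>(std::clock());
    std::uint32_t ticks = t - s; // modular: absorbs a single wrap
    return ticks / CLOCKS_PER_SEC;
}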

When it converts the "ticks" it overflows.
No, the clock itself "overflows"; the conversion has nothing to do with it. That said, the conversion to double is pointless. Your limitation is the type clock_t. See notes for example from this reference:
The value returned by clock() may wrap around on some implementations. For example, on a machine with 32-bit clock_t, it wraps after 2147 seconds or 36 minutes.
One alternative, if it's available to you, is to rely on the POSIX standard instead of the C standard library. It provides clock_gettime, which can be used to get the CPU time represented as a timespec. Not only does it not suffer from this overflow (until a much longer timespan has passed), but it may also have higher resolution than clock. The linked reference page for clock() conveniently shows example usage of clock_gettime as well.
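To illustrate, a sketch of such a timer (POSIX only; CLOCK_PROCESS_CPUTIME_ID is the POSIX clock that, like clock(), measures CPU time consumed by this process):

#include <time.h> // POSIX clock_gettime

// Sketch: CPU seconds consumed by the process, without clock()'s
// 32-bit wrap. Returns a negative value if the clock is unavailable.
double CpuSeconds(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts) != 0)
        return -1.0;
    return ts.tv_sec + ts.tv_nsec / 1e9;
}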

Related

How do I create a timer that I can query during the program and whose format is int, long or double?

I want to start a clock at the beginning of my program and use its elapsed time during the program to do some calculations, so the time should be in an int, long or double format. For example, I want to calculate a debounce time, but when I try it like this I get errors, because the chrono high resolution clock is not in an int, long or double format, and therefore I can't subtract 50ms from it (my debounceDelay) or save its value to a double (my lastDebounceTime). Originally I had a working Arduino game (Pong) with an LCD, and I want to convert it into a C++ console application.
On the Arduino there was this function millis() that gave me the runtime in ms, and that worked perfectly fine. I can't find a similar function for C++.
double lastDebounceTime = 0;
double debounceDelay = 50;

void Player1Pos() {
    if ((std::chrono::high_resolution_clock::now() - lastDebounceTime) > debounceDelay) {
        if ((GetKeyState('A') & 0x8000) && (Player1Position == 0)) {
            Player1Position = 1;
            lastDebounceTime = std::chrono::high_resolution_clock::now();
        }
        else if ((GetKeyState('A') & 0x8000) && (Player1Position == 1)) {
            Player1Position = 0;
            lastDebounceTime = std::chrono::high_resolution_clock::now();
        }
    }
}
I am very new to C++ so any help is greatly appreciated.
Thank you all!
I find the question misguided in its attempt to force the answer to use "int, long, or double". Those are not appropriate types for the task at hand. For references, see A: Cast chrono::milliseconds to uint64_t? and A: C++ chrono - get duration as float or long long. The question should have asked about obtaining the desired functionality (whatever the code block is supposed to do), rather than asking about a pre-chosen approach to the desired functionality. So that is what I will answer first.
Getting the desired result
To get the code block to compile, you just have to drop the insistence that the variables be "int, long, or double". Instead, use time-oriented types for time-oriented data. Here are the first two variables in the code block:
double lastDebounceTime = 0;
double debounceDelay = 50;
The first is supposed to represent a point in time, and the second a time duration. The C++ type for representing a point in time is std::chrono::time_point, and the C++ type for a time duration is a std::chrono::duration. Almost. Technically, these are not types, but type templates. To get actual types, some template arguments need to be supplied. Fortunately, we can get the compiler to synthesize the arguments for us.
The following code block compiles. You might note that I left out details that I consider irrelevant to the question at hand, hence that I feel should have been left out of the minimal reproducible example. Take this as an example of how to simplify code when asking questions in the future.
// Use the <chrono> library when measuring time.
#include <chrono>

// Enable use of the `ms` suffix.
using namespace std::chrono_literals;

std::chrono::high_resolution_clock::time_point lastDebounceTime;
// Alternatively, if the initialization to zero is not crucial:
// auto lastDebounceTime = std::chrono::high_resolution_clock::now();

auto debounceDelay = 50ms;

void Player1Pos() {
    if ((std::chrono::high_resolution_clock::now() - lastDebounceTime) > debounceDelay) {
        // Do stuff
        lastDebounceTime = std::chrono::high_resolution_clock::now();
    }
}
Subtracting two time_points produces a duration, which can be compared to another duration. The logic works and now is type-safe.
Getting the desired approach
OK, back to the question that was actually asked. You can convert the value returned by now() to an arithmetic type (integer or floating point) with the following code. You should have doubts about using this code after reading the comment that goes with it.
// Get the number of [some time units] since [some special time].
std::chrono::high_resolution_clock::now().time_since_epoch().count();
The time units involved are not specified by the standard, but are instead whatever std::chrono::high_resolution_clock::period corresponds to, not necessarily milliseconds. The special time is called the clock's epoch, which could be anything. Fortunately for your use case, the exact epoch does not matter – you just need it to be constant for each run of the program, which it is. However, the unknown units could be a problem, requiring more code to handle correctly.
I find the appropriate types easier to use than trying to get this conversion correct, especially since they required no change to your function.
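If you do decide you need a plain number, it is safer to pin down the units explicitly with duration_cast instead of relying on the clock's unspecified period. A minimal sketch (the function name is mine, not part of any library):

#include <chrono>

// Sketch: milliseconds since the clock's epoch as a plain integer.
// duration_cast makes the units explicit, so the result no longer
// depends on high_resolution_clock's unspecified period.
long long NowMillis()
{
    using namespace std::chrono;
    return duration_cast<milliseconds>(
               high_resolution_clock::now().time_since_epoch())
        .count();
}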

Converting from std::chrono:: to 32 bit seconds and nanoseconds?

This could be the inverse of Converting from struct timespec to std::chrono::?
I am getting my time as
const std::Chrono::CRealTimeClock::time_point RealTimeClockTime = std::Chrono::CRealTimeClock::now();
and I have to convert it to a struct timespec.
Actually, I don't, if there is an alternative; what I have to do is get the number of seconds since the epoch and the number of nanoseconds since the last whole second.
I chose struct timespec because
struct timespec
{
time_t tv_sec; // Seconds - >= 0
long tv_nsec; // Nanoseconds - [0, 999999999]
};
The catch is that I need to shoehorn the seconds and nanoseconds into uint32_t.
I am aware that there is a danger of loss of precision, but I reckon we don't care too much about the nanoseconds, while the year 2038 problem gives me cause for concern.
However, I have to bang out some code now and we can update it later if necessary. The code has to meet another manufacturer's specification and it is likely to take weeks or months to get this problem resolved and use uint64_t.
So, how can I, right now, obtain 32 bit values of second and nanosecond from std::Chrono::CRealTimeClock::now()?
I'm going to ignore std::Chrono::CRealTimeClock::now() and just pretend you wrote std::chrono::system_clock::now(). Hopefully that will give you the tools to deal with whatever clock you actually have.
Assume:
#include <cstdint>
struct my_timespec
{
std::uint32_t tv_sec; // Seconds - >= 0
std::uint32_t tv_nsec; // Nanoseconds - [0, 999999999]
};
Now you can write:
#include <chrono>

my_timespec
now()
{
    using namespace std;
    using namespace std::chrono;
    auto tp = system_clock::now();
    auto tp_sec = time_point_cast<seconds>(tp);
    nanoseconds ns = tp - tp_sec;
    return {static_cast<uint32_t>(tp_sec.time_since_epoch().count()),
            static_cast<uint32_t>(ns.count())};
}
Explanation:
I've used function-local using directives to reduce code verbosity and increase readability. If you prefer you can use using declarations instead to bring individual names into scope, or you can explicitly qualify everything.
The first job is to get now() from whatever clock you're using.
Next use std::chrono::time_point_cast to truncate the precision of tp to seconds precision. One important note is that time_point_cast truncates towards zero. So this code assumes that now() is after the clock's epoch and returns a non-negative time_point. If this is not the case, then you should use C++17's floor instead. floor always truncates towards negative infinity. I chose time_point_cast over floor only because of the [c++14] tag on the question.
The expression tp - tp_sec is a std::chrono::duration representing the time since the last integral second. This duration is implicitly converted to one with units of nanoseconds. This implicit conversion is typically fine, as all implementations of system_clock::duration have units that are either nanoseconds or coarser (and thus implicitly convertible to nanoseconds). If your clock tracks units of picoseconds (for example), then you will need a duration_cast<nanoseconds>(tp - tp_sec) here to truncate picoseconds to nanoseconds precision.
Now you have the {seconds, nanoseconds} information in {tp_sec, ns}. It's just that they are still in std::chrono types and not uint32_t as desired. You can extract the internal integral values with the member functions .time_since_epoch() and .count(), and then static_cast the resultant integral types to uint32_t. The final static_casts are optional, as integral conversions can be made implicitly. However, their use is considered good style.
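For completeness, a possible way to exercise this function (assuming the my_timespec definition above is in scope):

#include <iostream>

int main()
{
    my_timespec ts = now();
    // Whole seconds since the epoch, plus the sub-second remainder.
    std::cout << "sec=" << ts.tv_sec << " nsec=" << ts.tv_nsec << '\n';
}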

long long value in Visual Studio

We know that -2*4^31 + 1 = -9.223.372.036.854.775.807, the lowest value you can store in long long, as stated here: What range of values can integer types store in C++.
So I have this operation:
#include <iostream>
unsigned long long pow(unsigned a, unsigned b) {
unsigned long long p = 1;
for (unsigned i = 0; i < b; i++)
p *= a;
return p;
}
int main()
{
long long nr = -pow(4, 31) + 5 -pow(4,31);
std::cout << nr << std::endl;
}
Why does it show -9.223.372.036.854.775.808 instead of -9.223.372.036.854.775.803? I'm using Visual Studio 2015.
This is a really nasty little problem which has three(!) causes.
Firstly, there is the problem that floating-point arithmetic is approximate. If the compiler picks a pow function returning float or double, then 4^31 is so large that 5 is less than 1 ULP (unit of least precision), so adding it will do nothing (in other words, 4.0^31 + 5 == 4.0^31). Multiplying by -2 can be done without loss, and the result can be stored in a long long without loss, as the wrong answer: -9.223.372.036.854.775.808.
Secondly, a standard header may include other standard headers, but is not required to. Evidently, Visual Studio's version of <iostream> includes <math.h> (which declares pow in the global namespace), but Code::Blocks' version doesn't.
Thirdly, the OP's pow function is not selected because he passes the arguments 4 and 31, which are both of type int, while the declared function has arguments of type unsigned. Since C++11, there are lots of overloads (or a function template) of std::pow. These all return float or double (unless one of the arguments is of type long double, which doesn't apply here).
Thus an overload of std::pow is a better match ... with a double return value, and we get floating-point rounding.
Moral of the story: Don't write functions with the same name as standard library functions, unless you really know what you are doing!
Visual Studio has defined pow(double, int), which only requires a conversion of one argument, whereas your pow(unsigned, unsigned) requires conversion of both arguments unless you use pow(4U, 31U). Overloading resolution in C++ is based on the inputs - not the result type.
The lowest value of a type can be obtained through std::numeric_limits. For long long it is:
auto lowest_ll = std::numeric_limits<long long>::lowest();
which results in:
-9223372036854775808
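As a self-contained snippet (the headers are the only additions):

#include <iostream>
#include <limits>

int main()
{
    // Smallest representable long long value.
    std::cout << std::numeric_limits<long long>::lowest() << '\n';
}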
The pow() function that gets called is not yours hence the observed results. Change the name of the function.
The only possible explanation for the -9.223.372.036.854.775.808 result is the use of the pow function from the standard library returning a double value. In that case, the 5 will be below the precision of the double computation, the result will be exactly -2^63, and converting it to a long long gives 0x8000000000000000, or -9.223.372.036.854.775.808.
If you use your function returning an unsigned long long, you get a warning saying that you apply unary minus to an unsigned type and still get an unsigned long long. So the whole operation is executed as unsigned long long and gives, without overflow, 0x8000000000000005 as an unsigned value. When you convert that to a signed value, the result is implementation-defined, but all compilers I know simply reuse the same representation for the signed integer, which gives -9.223.372.036.854.775.803.
But it would be simple to do the computation as signed long long, without any warning, by just using:
long long nr = -1 * pow(4, 31) + 5 - pow(4, 31);
In addition, you have neither an undefined conversion nor overflow here, so the result is perfectly defined per the standard, provided unsigned long long is at least 64 bits.
Your first call to pow is using the C standard library's function, which operates on floating points. Try giving your pow function a unique name:
#include <iostream>

unsigned long long my_pow(unsigned a, unsigned b) {
    unsigned long long p = 1;
    for (unsigned i = 0; i < b; i++)
        p *= a;
    return p;
}

int main()
{
    long long nr = -my_pow(4, 31) + 5 - my_pow(4, 31);
    std::cout << nr << std::endl;
}
This code produces a warning: "unary minus operator applied to unsigned type, result still unsigned". So, essentially, your original code called a floating-point function, negated the value, and applied some integer arithmetic to it, for which it did not have enough precision to give the answer you were looking for (at 19 digits of precision!). To get the answer you're looking for, change the signature to:
long long my_pow(unsigned a, unsigned b);
This worked for me in MSVC++ 2013. As stated in other answers, you're getting the floating-point pow because your function expects unsigned, and receives signed integer constants. Adding U to your integers invokes your version of pow.

Conversion from unsigned long long to unsigned int in C++

If I write the following code in C/C++:
#include <chrono>

using namespace std::chrono;
auto time = high_resolution_clock::now();
unsigned long long foo = time.time_since_epoch().count();
unsigned bar = foo;
Which bits of foo are dropped in the conversion to unsigned int? Is there any way I can enforce only the least significant bits to be preserved?
Alternatively, is there a simple way to hash foo into an unsigned int size? (This would be preferred, but I'm doing all of this in an initializer list.)
EDIT: Just realized that preserving the least significant bits could still allow repeats once the value wraps. I guess hashing is what I'd be going for, then.
2nd EDIT: To clarify what I responded in a comment, I am seeding std::default_random_engine inside a loop and do not want an overflow to cause seed values to repeat. I am looking for a simple way to hash unsigned long long into unsigned int.
Arithmetic with unsigned integral types in C and in C++ deals with out-of-range values by modular reduction; e.g. if unsigned int is a 32-bit type, then when assigned a value out of range, it reduces the value modulo 2^32 and stores the least nonnegative representative.
In particular, that is exactly what the standard mandates when assigning from a larger to a smaller unsigned integral type.
Note also that the standard doesn't guarantee the size of unsigned int — if you need, for example, a type that is definitely exactly 32 bits wide, use uint32_t.
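A small sketch showing the mandated modular behavior:

#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t big = 0x123456789ABCDEF0u;
    // Narrowing to uint32_t keeps the value modulo 2^32,
    // i.e. exactly the low 32 bits.
    std::uint32_t low = static_cast<std::uint32_t>(big);
    std::cout << std::hex << low << '\n'; // prints 9abcdef0
}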
Good news: you're safe.
I just ran time.time_since_epoch().count() and got:
1,465,934,400
You've got a while until you see a repeated value, since numeric_limits<unsigned int>::max() is:
4,294,967,295
Copying to an integral type of a smaller size drops the excess higher-order bits.
So if you just do static_cast<unsigned int>(foo) you won't get a matching output for roughly 136 years: numeric_limits<unsigned int>::max() / 60U / 60U / 24U / 365U.
PS: You won't care if you get a repeat by then.
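A quick check of that arithmetic (a sketch; it assumes, as this answer does, that the count is in seconds):

#include <iostream>
#include <limits>

int main()
{
    // Seconds representable in an unsigned int, expressed in years.
    unsigned long long maxSec = std::numeric_limits<unsigned int>::max();
    std::cout << maxSec / (60ULL * 60 * 24 * 365) << " years\n"; // ~136
}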
Replying to the question in edit two, you can seed based on the delta between when you started and when the seed is needed. E.g.:
std::default_random_engine generator;

typedef std::chrono::high_resolution_clock myclock;
myclock::time_point beginning = myclock::now();

for (int i = 0; i < 128; i++) // or whatever type of loop you're using
{
    // obtain a seed from the timer
    myclock::duration d = myclock::now() - beginning;
    unsigned seed = d.count();
    generator.seed(seed);
    // TODO: magic
}
Okay, I've settled on using std::hash.
std::hash<long long> seeder;
auto seed = seeder(time.time_since_epoch().count());
Since I am doing this in an initializer list, this can be put together in one line and passed to std::default_random_engine (but that looks pretty ugly).
This is quite a hit to performance, but at least it reduces the chance of seeds repeating.
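For reference, the one-line form alluded to above might look like this sketch (the variable names are illustrative):

#include <chrono>
#include <functional>
#include <random>

int main()
{
    auto time = std::chrono::high_resolution_clock::now();
    // Hash the tick count and use it as the seed, all in one expression.
    std::default_random_engine engine(static_cast<unsigned>(
        std::hash<long long>{}(time.time_since_epoch().count())));
}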

Type of CLOCKS_PER_SEC

What datatype is CLOCKS_PER_SEC typically represented as? long unsigned int? clock_t? Does it vary from implementation to implementation?
I ask because I use CLOCKS_PER_SEC in a return value, and I want to make sure I use the most appropriate type.
All that the C standard promises is that CLOCKS_PER_SEC is a constant expression with the type clock_t which must be an arithmetic type (could be an integer or a floating type).
(C99 7.23 Date and time <time.h>)
I think that clock_t is typically a long, but I wouldn't bet my life that I'm right.
My usually trusty Harbison & Steele (3rd ed) suggests casting clock_t to double for use in your programs so your code can work regardless of the actual clock_t type (18.1 CLOCK, CLOCK_T, CLOCKS_PER_SEC, TIMES):
Here is how the clock function can be used to time an ANSI C program:
#include <time.h>

clock_t start, finish, duration;
start = clock();
process();
finish = clock();
printf("process() took %f seconds to execute\n",
       ((double) (finish - start)) / CLOCKS_PER_SEC);
Note how the cast to type double allows clock_t and CLOCKS_PER_SEC to be either floating-point or integral.
You might consider whether this would work for your purposes.
CLOCKS_PER_SEC is a macro that usually expands to a literal.
The glibc manual says:
In the GNU system, clock_t is equivalent to long int and CLOCKS_PER_SEC is an integer value. But in other systems, both clock_t and the type of the macro CLOCKS_PER_SEC can be either integer or floating-point types. Casting processor time values to double, as in the example above, makes sure that operations such as arithmetic and printing work properly and consistently no matter what the underlying representation is.
CLOCKS_PER_SEC is actually specified by POSIX as part of the time.h header.
That says it's a clock_t as described by sys/types.h.
That in turn says:
time_t and clock_t shall be integer or real-floating types.
So all you can assume in portable code is that it is some integral or floating point type. If you just need to declare a variable to store the value, declare it as "clock_t".