Determine if a double value is a safe integer in C++?

In JavaScript there is the Number.isSafeInteger method. How do I check the same thing in C++? The most straightforward way would be:
bool isSafeInteger(double d) noexcept {
    auto const i = static_cast<std::int64_t>(d);
    return i == d && i <= 9007199254740991 && i >= -9007199254740991;
}
But it doesn't feel right. Is there a better way to do it?

#include <cmath>    // std::isnan
#include <cstdint>  // std::int64_t
#include <limits>   // std::numeric_limits

bool isSafeInteger(double d) noexcept {
    // reject values that cannot be converted to int64_t at all (that conversion would be UB)
    if (d >= std::numeric_limits<std::int64_t>::max()) return false;
    if (d <= std::numeric_limits<std::int64_t>::min()) return false;
    if (std::isnan(d)) return false;
    auto as_int = [](double x) { return static_cast<std::int64_t>(x); };
    // round-trips exactly, and the neighbouring values d+1 and d-1 map to different integers
    return as_int(d) == d && as_int(d + 1) != as_int(d) && as_int(d - 1) != as_int(d);
}
This checks that the value round-trips through std::int64_t, that adjacent doubles don't round to the same integer, and that the double isn't a NaN (while avoiding triggering any NaN traps).
Finally, we guard against an out-of-range conversion, which is UB. We use >= and <= to stay on the safe side because of the +1/-1 used later.
This also works for float, but not for 80-bit or 128-bit floating-point types, whose safe integer range exceeds that of std::int64_t.
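For a quick, purely illustrative check of the behaviour around the boundary (with the isSafeInteger above in scope; the driver below is not part of the answer):

#include <cmath>
#include <cstdio>

// isSafeInteger() as defined above

int main() {
    std::printf("%d\n", isSafeInteger(9007199254740991.0)); // 1: 2^53 - 1, the largest safe integer
    std::printf("%d\n", isSafeInteger(9007199254740992.0)); // 0: 2^53, its neighbour rounds to the same integer
    std::printf("%d\n", isSafeInteger(3.5));                // 0: not an integer at all
    std::printf("%d\n", isSafeInteger(std::nan("")));       // 0: NaN
}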


Ternary operator slower than if else when returning bool literals?

I know there is a very similar question already:
Ternary operator ?: vs if...else
This question is more about returning bool literals from a function.
Given the following function:
bool inRange(size_t value, size_t upperBound) const
{
    return (value >= 0 && value < upperBound) ? true : false;
}
CLion advises me that this can be simplified using an "if else" statement instead.
Would this actually be faster because of return value optimization and/or the likelihood of certain if branches (or some other reason)?
Or is it just a style suggestion from CLion?
The recommendation is to change
bool inRange(size_t value, size_t upperBound) const
{
    return (value >= 0 && value < upperBound) ? true : false;
}
to
bool inRange(size_t value, size_t upperBound) const
{
    return value >= 0 && value < upperBound;
}
This has nothing to do with performance; the ternary is simply redundant. I am quite positive both versions compile to the same assembly (proof), and the IDE is just suggesting you simplify the redundant logic.

How to safely compare two unsigned integer counters?

We have two unsigned counters, and we need to compare them to check for some error conditions:
uint32_t a, b;
// a increased in some conditions
// b increased in some conditions
if (a/2 > b) {
    perror("Error happened!");
    return -1;
}
The problem is that a and b will overflow some day. If a overflows, it's still OK, but if b overflows, it produces a false alarm. How can I make this check bulletproof?
I know that making a and b uint64_t would delay this false alarm, but it still would not completely fix the issue.
===============
Let me clarify a little bit: the counters are used to track memory allocations, and this problem was found in dmalloc/chunk.c:
#if LOG_PNT_SEEN_COUNT
  /*
   * We divide by 2 here because realloc which returns the same
   * pointer will seen_c += 2. However, it will never be more than
   * twice the iteration value. We divide by two to not overflow
   * iter_c * 2.
   */
  if (slot_p->sa_seen_c / 2 > _dmalloc_iter_c) {
    dmalloc_errno = ERROR_SLOT_CORRUPT;
    return 0;
  }
#endif
I think you misinterpreted the comment in the code:
We divide by two to not overflow iter_c * 2.
No matter where the values are coming from, it is safe to write a/2, but it is not safe to write a*2. Whatever unsigned type you are using, you can always divide a number by two, while multiplying may result in overflow.
If the condition were written like this:
if (slot_p->sa_seen_c > _dmalloc_iter_c * 2) {
then roughly half of the input range would produce the wrong result. That being said, if you worry about the counters overflowing, you could wrap them in a class:
#include <algorithm>  // std::min

class check {
    unsigned a = 0;
    unsigned b = 0;
    bool odd = true;
    void normalize() {
        // keep both counters small by removing the part they have in common
        auto m = std::min(a, b);
        a -= m;
        b -= m;
    }
public:
    void incr_a() {
        if (odd) ++a;   // only count every other call, which replaces the a/2
        odd = !odd;
        normalize();
    }
    void incr_b() {
        ++b;
        normalize();
    }
    bool exceeded() const { return a > b; }  // the old a/2 > b test (renamed, since a member cannot share the class's name)
};
Note that to avoid the overflow completely you have to take additional measures, but if a and b are increased more or less the same amount this might be fine already.
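As a rough, purely illustrative check that the class behaves like the original a/2 > b comparison (assuming the class above, with the test exposed as exceeded()):

#include <cstdio>

// class check as defined above

int main() {
    check c;
    // the "seen" counter bumps twice per iteration (the realloc case from the
    // dmalloc comment), the iteration counter once: the test should stay false
    for (int i = 0; i < 1000000; ++i) {
        c.incr_a();
        c.incr_a();
        c.incr_b();
    }
    std::printf("%d\n", c.exceeded());  // 0

    // a few extra "seen" bumps without any iterations should trip it
    for (int i = 0; i < 10; ++i) c.incr_a();
    std::printf("%d\n", c.exceeded());  // 1
}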
The posted code actually doesn't seem to use counters that may wrap around.
What the comment in the code is saying is that it is safer to compare a/2 > b instead of a > 2*b, because the latter could potentially overflow while the former cannot. This is particularly true when the type of a is larger than the type of b.
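To make the difference concrete (the numbers below are just for illustration):

#include <cstdint>
#include <cstdio>

int main() {
    uint32_t a = 3000000000u;   // the "seen" counter
    uint32_t b = 2500000000u;   // the iteration counter
    // a is less than twice b, so the error condition should be false
    std::printf("a/2 > b -> %d\n", a / 2 > b);   // 0: 1500000000 > 2500000000 is false, as expected
    std::printf("a > b*2 -> %d\n", a > b * 2u);  // 1: b*2 wraps around to 705032704, a false alarm
}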
Note overflows as they occur.
uint32_t a, b;
bool aof = false;
bool bof = false;
if (condition_to_increase_a()) {
    a++;
    aof = a == 0;
}
if (condition_to_increase_b()) {
    b++;
    bof = b == 0;
}
if (!bof && a/2 + aof*0x80000000 > b) {
    perror("Error happened!");
    return -1;
}
Each of a and b independently has 2^32 + 1 different states reflecting its value and conditional increment, so somehow more than a uint32_t of information is needed for each. You could use uint64_t, variant code paths, or an auxiliary variable like the bool used here.
Normalize the values as soon as they wrap by forcing them both to wrap at the same time. Maintain the difference between the two when they wrap.
Try something like this:
uint32_t a, b;
// a increased in some conditions
// b increased in some conditions
if (a == UINT32_MAX || b == UINT32_MAX) {  // a or b is at the maximum value
    if (a > b)
    {
        a = a - b;
        b = 0;
    }
    else
    {
        b = b - a;
        a = 0;
    }
}
if (a/2 > b) {
    perror("Error happened!");
    return -1;
}
If even using 64 bits is not enough, then you need to write your own "increase" method instead of overloading the ++ operator (which may mess up your code if you are not careful).
The method would just reset the variable to 0 or to some other meaningful value.
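A minimal sketch of what such a helper might look like (the name increase and the reset value of 0 are just illustrative choices):

#include <cstdint>
#include <limits>

// increment that resets explicitly instead of silently wrapping around
inline void increase(uint64_t &counter) {
    if (counter == std::numeric_limits<uint64_t>::max())
        counter = 0;   // or whatever value keeps the a/2 > b comparison meaningful
    else
        ++counter;
}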
If your intention is to ensure that action x happens no more than twice as often as action y, I would suggest doing something like:
uint32_t x_count = 0;
uint32_t scaled_y_count = 0;

void action_x(void)
{
    if ((uint32_t)(scaled_y_count - x_count) > 0xFFFF0000u)
        fault();
    x_count++;
}

void action_y(void)
{
    if ((uint32_t)(scaled_y_count - x_count) < 0xFFFF0000u)
        scaled_y_count += 2;
}
In many cases, it may be desirable to reduce the constants in the comparison used when incrementing scaled_y_count so as to limit how many action_y operations can be "stored up". The above, however, should work precisely in cases where the operations remain anywhere close to balanced in a 2:1 ratio, even if the number of operations exceeds the range of uint32_t.
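A self-contained way to try this out (the fault() stub and the driver loop below are mine, purely for illustration; the two action functions are copied from above):

#include <cstdint>
#include <cstdio>
#include <cstdlib>

static uint32_t x_count = 0;
static uint32_t scaled_y_count = 0;

static void fault(void) {
    std::puts("fault: x has happened more than twice as often as y");
    std::exit(1);
}

void action_x(void) {
    if ((uint32_t)(scaled_y_count - x_count) > 0xFFFF0000u)
        fault();
    x_count++;
}

void action_y(void) {
    if ((uint32_t)(scaled_y_count - x_count) < 0xFFFF0000u)
        scaled_y_count += 2;
}

int main() {
    for (long i = 0; i < 1000000; ++i) {  // a balanced 2:1 pattern never faults
        action_y();
        action_x();
        action_x();
    }
    action_x();  // one extra x puts us past the 2:1 ratio...
    action_x();  // ...and the check in this call notices it and faults
    std::puts("not reached");
}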

More general test for same order of magnitude than comparing floor(log10(abs(n)))

I am implementing an optimization algorithm and have different heuristics for the cases where the known lower and upper bounds for the solution are either missing or wildly different.
To check this, my first approach would simply be:
if (abs(floor(log10(abs(LBD))) - floor(log10(abs(UBD)))) < 1)
{
    // (< 1 e.g. for 6, 13)
    // Bounds are sufficiently close for the serious stuff
}
else
{
    // We need some more black magic
}
But this requires the previous checks to be generalized to handle NAN and ±INFINITY.
Also, in the case where LBD is negative and UBD is positive, we can't assume that the above check alone assures us that they are anywhere close to being of equal order of magnitude.
Is there a dedicated approach to this or am I stuck with this hackery?
Thanks to geza I realized that the whole thing can be done without the log10:
A working solution is posted below, and an MWE including the log variant is posted on ideone.
#include <cmath>

// sign of val; the division by (val == val) turns any NaN input into NaN
template <typename T> double sgn(T val) {
    return double((T(0) < val) - (val < T(0))) / (val == val);
}

bool closeEnough(double LBD, double UBD, unsigned maxOrderDiff = 1, unsigned cutoffOrder = 1) {
    double sgn_LBD = sgn(LBD);
    double sgn_UBD = sgn(UBD);
    double cutoff  = std::pow(10, cutoffOrder);
    double maxDiff = std::pow(10, maxOrderDiff);
    if (sgn_LBD == sgn_UBD) {
        if (std::abs(LBD) < cutoff && std::abs(UBD) < cutoff) return true;
        return LBD < UBD && std::abs(UBD) < std::abs(LBD) * maxDiff;
    }
    else if (sgn_UBD > 0) {
        return -LBD < cutoff && UBD < cutoff;
    }
    // here LBD >= UBD, or at least one of the two is NaN
    return false;
}
As a bonus it can take cutoffs, so if both bounds lie within [-10^cutoffOrder,+10^cutoffOrder] they are considered to be close enough!
The pow computation might also be unnecessary, but at least in my case this check is not in a critical code section.
If it were, I suppose you could just hard-code the cutoff and maxDiff.
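For a quick sanity check (with the two functions above in scope; the expected results assume the default maxOrderDiff = 1 and cutoffOrder = 1, and the driver is only illustrative):

#include <cmath>
#include <cstdio>

// sgn() and closeEnough() as defined above

int main() {
    std::printf("%d\n", closeEnough(6.0, 13.0));    // 1: same sign, 13 < 6 * 10
    std::printf("%d\n", closeEnough(6.0, 130.0));   // 0: more than one order of magnitude apart
    std::printf("%d\n", closeEnough(-5.0, 8.0));    // 1: both inside the +/-10 cutoff
    std::printf("%d\n", closeEnough(-500.0, 8.0));  // 0: signs differ and LBD is far outside the cutoff
    std::printf("%d\n", closeEnough(NAN, 8.0));     // 0: NaN is never "close enough"
}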

Implementation of an isnan() function

I am a beginner at C++ programming and I have been given the task of implementing fixed-point math arithmetic in C++. Here I am trying to implement a function isnan() which returns true if the number is not-a-number and false otherwise.
Test file
#include "fixed_point_header.h"
int main()
{
fp::fixed_point<long long int, 63> a=fp::fixed_point<long long int, 63>::positive_infinity(); // will assign positive infinity value to a from an function from header
fp::fixed_point<long long int, 63> b=fp::fixed_point<long long int, 63>::negative_infinity(); // will assign positive infinity value to b from an function from header
float nan=fp::fixed_point<long long int, 63>::isnan(a,b);
printf( "fixed point nan value == %f\n", float (nan));
}
In the header I want to do something like the code shown below: if the positive and negative infinity values are added, the isnan function should return 1, else 0.
Header file
#include "fixed_point_header"

static fp::fixed_point<FP, I, F> isnan(fp::fixed_point<FP, I, F> x, fp::fixed_point<FP, I, F> y) {
    /* if (x + y) happens, i.e. x and y are infinities
    {
        should return 1;
    } else {
        should return 0;
    } */
}
Can anyone please tell me how to proceed with this, or how to solve this problem?
I am trying to implement a function isnan() which returns true if the number is not-a-number and false otherwise.
That's simple enough; define a reserved value to represent nan (as you have for the infinities), and compare with that:
bool isnan(fixed_point x) {
    return x == fixed_point::nan();
}
I want to do something like the code shown below: if the positive and negative infinity values are added, the isnan function should return 1, else 0
It would be the responsibility of the addition operator to check the inputs and return nan if appropriate:
fixed_point operator+(fixed_point x, fixed_point y) {
    if (x == fixed_point::nan() || y == fixed_point::nan()) {
        return fixed_point::nan();
    }
    if (x == fixed_point::positive_infinity()) {
        return y == fixed_point::negative_infinity() ? fixed_point::nan() : x;
    }
    // and so on
}
then the test in main becomes:
bool nan = fixed_point::isnan(a+b);
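Since the asker's fp::fixed_point itself isn't shown, here is a deliberately tiny, self-contained toy (the toy_fixed name and the raw values reserved for nan/infinity are made up) just to illustrate the reserved-value approach described above:

#include <cstdint>
#include <cstdio>

class toy_fixed {
    std::int64_t raw;  // the raw fixed-point bits
    explicit toy_fixed(std::int64_t r) : raw(r) {}
public:
    // reserve three raw bit patterns for the special values
    static toy_fixed nan()               { return toy_fixed(INT64_MIN); }
    static toy_fixed positive_infinity() { return toy_fixed(INT64_MAX); }
    static toy_fixed negative_infinity() { return toy_fixed(INT64_MIN + 1); }

    friend bool operator==(toy_fixed a, toy_fixed b) { return a.raw == b.raw; }

    static bool isnan(toy_fixed x) { return x == toy_fixed::nan(); }

    friend toy_fixed operator+(toy_fixed x, toy_fixed y) {
        if (toy_fixed::isnan(x) || toy_fixed::isnan(y)) return toy_fixed::nan();
        // +inf plus -inf (in either order) has no meaningful result: produce nan
        if ((x == toy_fixed::positive_infinity() && y == toy_fixed::negative_infinity()) ||
            (x == toy_fixed::negative_infinity() && y == toy_fixed::positive_infinity()))
            return toy_fixed::nan();
        if (x == toy_fixed::positive_infinity() || y == toy_fixed::positive_infinity())
            return toy_fixed::positive_infinity();
        if (x == toy_fixed::negative_infinity() || y == toy_fixed::negative_infinity())
            return toy_fixed::negative_infinity();
        return toy_fixed(x.raw + y.raw);  // ordinary case (no overflow handling here)
    }
};

int main() {
    toy_fixed a = toy_fixed::positive_infinity();
    toy_fixed b = toy_fixed::negative_infinity();
    std::printf("isnan(a + b) == %d\n", toy_fixed::isnan(a + b));  // prints 1
}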

Moving code from Visual Basic to C++ issue

I'm facing an issue translating a bunch of source code written in Visual Basic into C++. In the code there are calls to the method Sign (VB) and various conversions from float to integer... Could you confirm that the C++ code for 1, 2 and 3 below is the same as the VB one? In addition, regarding the implicit conversion, I have no idea how the conversion is performed (see 4). Any idea?
1) Method Sign (Visual Basic)
//C++
int sign(float value)
{
    if (value < 0) return -1;
    else if (value == 0) return 0;
    else return 1;
}
2) Method Int (Visual Basic)
//C++
int Int(float value)
{
    return ((value >= 0) ? value : floor(value));
}
3) Method CInt (Visual Basic)
//C++
int CInt(const float val)
{
    float x = fabs(val - (int)val);
    if (fabs(x - 0.5) < 0.0001)
        return (int)val;
    else
        return (int)(val + (val >= 0.0 ? 0.5 : -0.5));
}
4) And there is also an implicit conversion of double to int. How do I make this conversion in C++?
//Visual basic
Dim dt As Integer = -99.2
Thank you in advance,
1-
It is not the same; floating point values should not be compared directly against a constant like 0. A better version would be:
const float epsilon = 0.00001f;
if (value < -epsilon) return -1;
if (value > epsilon) return 1;
return 0;
2- It depends on what you want for, for example, -5.7. If you want -5, just cast using (int); for example, if you have a float variable named f, use (int)f. If you want -6, use this function:
int Int(float value)
{
    // std::floor rounds toward negative infinity, so Int(-5.7f) == -6 and Int(-5.0f) == -5
    return (value >= 0) ? (int)value : (int)std::floor(value);
}
3- It should work, but the last return statement could be made clearer with a standard rounding call:
return (int)std::lround(val); // std::lround (from <cmath>) rounds halfway cases away from zero, like val + (val >= 0.0 ? 0.5 : -0.5)
4- Doubles are very similar to floats in C/C++; treat them just as you would a float. Note that a plain C++ cast from double to int truncates toward zero, whereas VB rounds on the implicit conversion to Integer, so use a rounding helper such as CInt above if you need matching results.
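A small illustration of that difference (the values are arbitrary; it assumes the CInt helper from the question is in scope):

#include <cmath>
#include <cstdio>

// CInt() as defined in the question above

int main()
{
    double d1 = -99.2;
    double d2 = -99.7;
    std::printf("%d %d\n", (int)d1, CInt((float)d1)); // -99 -99  : truncation and rounding agree here
    std::printf("%d %d\n", (int)d2, CInt((float)d2)); // -99 -100 : the cast truncates, CInt rounds like VB
}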