C++: Should this be easier?

Long-time listener, first-time caller. I am relatively new to programming and was looking back at some of the code I wrote for an old lab. Is there an easier way to tell if a double is evenly divisible by an integer?
double num = /* whatever */;
int divisor = /* an integer */;
bool bananas = true;
if (floor(num) != num || static_cast<int>(num) % divisor != 0) {
    bananas = false;
}
if (bananas) {
    // do stuff
}

The question is strange, and the checks are as well. The problem is that it makes little sense to speak about divisibility of a floating point number, because floating point numbers are represented imprecisely in binary, and divisibility is about exactitude.
I encourage you to read this article, by David Goldberg: What Every Computer Scientist Should Know About Floating Point Arithmetic. It is a bit long-winded, so you may appreciate this website, instead: The Floating-Point Guide.
The truth is that floor(num) == num is a strange piece of code.
num is a double
floor(num) returns a double, close to an int
The trouble is that this does not check what you really wanted. For example, suppose (purely for the sake of argument) that 5 could not be represented exactly as a double; then, instead of storing 5, the computer would store 4.999999999999.
double num = 5; // 4.999999999999999
double floored = floor(num); // 4.0
assert(num != floored);
In general exact comparisons are meaningless for floating point numbers, because of rounding errors.
If you insist on using floor, I suggest using floor(num + 0.5), which is better, though slightly biased. A better rounding method is Banker's rounding, because it is unbiased, and the article references others if you wish. Note that Banker's rounding is the one baked into round...
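For illustration, a minimal sketch of round-half-to-even (my own sketch, not a standard library routine):
// Sketch: round-half-to-even (Banker's rounding), for illustration only.
double round_half_even(double x) {
    double floored = floor(x);
    double diff = x - floored;
    if (diff > 0.5) return floored + 1.0;
    if (diff < 0.5) return floored;
    // Exactly halfway: pick the even neighbour.
    return fmod(floored, 2.0) == 0.0 ? floored : floored + 1.0;
}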
As for your question: first you need a double-aware modulo, fmod; then you need to remember the "avoid exact comparisons" bit.
A first (naive) attempt:
// divisor is deemed non-zero
// epsilon is a constant
double mod = fmod(num, divisor); // divisor will be converted to a double
if (mod <= epsilon) { }
Unfortunately it fails one important test: the magnitude of mod depends on the magnitude of divisor, thus if divisor is smaller than epsilon to begin with, the comparison will always be true.
A second attempt:
// divisor is deemed non-zero
double const epsilon = divisor / 1000.0;
double mod = fmod(num, divisor);
if (mod <= epsilon) { }
Better, but not quite there: mod and epsilon are signed! Yes, it's a bizarre modulo: the sign of mod is the sign of num.
A third attempt:
// divisor is deemed non-zero
double const eps = fabs(divisor / 1000.0);
double mod = fabs(fmod(num, divisor));
if (mod <= eps) { }
Much better.
It should also work fairly well if divisor comes from an integer, as there won't be precision issues... or at least not too many.
EDIT: fourth attempt, by @ybungalobill
The previous attempt does not deal well with situations where num/divisor errs on the wrong side, like 1.999/1.000 --> 0.999: the remainder is nearly divisor, so we should indicate equality, yet it fails.
// divisor is deemed non-zero
mod = fabs(fmod(num/divisor, 1));
if (mod <= 0.001 || fabs(1 - mod) <= 0.001) { }
Looks like a never-ending task, eh?
There is still cause for troubles though.
double has limited precision, that is, a limited number of significant decimal digits it can represent (15 to 17 for IEEE doubles). This precision might be insufficient to represent an integer:
Integer n = 12345678901234567890;
double d = n; // 1.234567890123457 * 10^19
This truncation means it is impossible to map it back to its original value. It should not cause any issue between double and int: for example, on my platform double is 8 bytes and int is 4 bytes, so it works; but changing double to float or int to long could violate this assumption. Oh hell!
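As a quick illustration of where the round trip breaks down (2^53 is the last point at which every integer is exactly representable in a double):
#include <cstdint>
#include <iostream>

int main() {
    std::int64_t exact = 9007199254740992LL; // 2^53: survives a round trip through double
    std::int64_t lost  = 9007199254740993LL; // 2^53 + 1: nearest double is 2^53
    std::cout << (std::int64_t(double(exact)) == exact) << '\n'; // prints 1
    std::cout << (std::int64_t(double(lost))  == lost)  << '\n'; // prints 0
}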
Are you sure you really need floating point, by the way ?

Based on the above comments, I believe you can do this...
double num = /* whatever */;
int divisor = /* an integer */;
if (fmod(num, divisor) == 0) {
    // do stuff;
}

I haven't checked it but why not do this?
if (floor(num) == num && !(static_cast<int>(num) % divisor)) {
// do stuff...
}

Related

How close to division by zero can I get?

I want to avoid dividing by zero so I have an if statement:
float number;
//........
if (number > 0.000000000000001)
number = 1/number;
How small of a value can I safely use in place of 0.000000000000001?
Just use:
if(number > 0)
number = 1/number;
Note the difference between > and >=. If number > 0, then it definitely is not 0.
If number can be negative you can also use:
if(number != 0)
number = 1/number;
Note that, as others have mentioned in the comments, checking that number is not 0 will not prevent your result from being Inf or -Inf.
The number in the if condition depends on what you want to do with the result. In IEEE 754, which is used by (almost?) all C implementations, dividing by 0 is OK: you get positive or negative infinity.
If your goal is to avoid +/- Infinity, then the number in the if condition will depend upon the numerator. When the numerator is 1, you can use DBL_MIN or FLT_MIN from <float.h>.
If your goal is to avoid huge numbers after the division, you can do the division and then check if fabs(number) is bigger than certain value after the division, and then take whatever action as needed.
There is no single correct answer to your question.
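As a sketch of that last suggestion (the function name and the cutoff here are hypothetical placeholders; pick a limit that suits your application):
// Sketch: divide first, then reject results that came out too large.
// HUGE_CUTOFF is a hypothetical application-specific limit.
bool safe_reciprocal(double number, double& out) {
    const double HUGE_CUTOFF = 1e12; // placeholder value
    if (number == 0.0) return false;
    double result = 1.0 / number;
    if (fabs(result) > HUGE_CUTOFF) return false;
    out = result;
    return true;
}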
You can simply check:
if (number > 0)
I can't understand why you need the lower limit.
For numeric type T std::numeric_limits gives you anything you need. For example you could do this to make sure that anything above min_invertible has finite reciprocal:
float max_float = std::numeric_limits<float>::max();
float min_float = std::numeric_limits<float>::min(); // or denorm_min()
float min_invertible = (max_float*min_float > 1.0f )? min_float : 1.0f/max_float;
You can't decently check up front. DBL_MAX / 0.5 effectively is a division by zero; the result is the same infinity you'd get from any other division by (almost) zero.
There is a simple solution: just check the result. std::isinf(result) will tell you whether the result overflowed, and IEEE754 tells you that division cannot produce infinity in other cases. (Well, except for INF/x; that's not really producing infinity but merely preserving it.)
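A minimal sketch of that check-the-result approach (the zero fallback is an arbitrary choice, purely for illustration):
#include <cmath>

double reciprocal_or_zero(double number) {
    double result = 1.0 / number;          // may be +/-infinity for a tiny number
    return std::isinf(result) ? 0.0 : result;
}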
Your risk of producing an unhelpful result through overflow or underflow depends on both numerator and denominator.
A safety check which takes that into consideration is:
if (den == 0.0 || log2(num) - log2(den) >= log2(FLT_MAX))
    /* expect overflow */ ;
else
    return num / den;
but you might want to shave a small amount off log2(FLT_MAX) to leave wiggle-room for subsequent arithmetic and round-off.
You can do something similar with frexp, which would work for negative values as well:
int max, n, d;
frexp(FLT_MAX, &max);
frexp(num, &n);
frexp(den, &d);
if (den == 0.0 || n - d > max)
    /* might overflow */ ;
else
    return num / den;
This avoids the work of computing the logarithm, which might be more efficient if the compiler can find a suitable way of doing it, but it's not as accurate.
With IEEE 32-bit floats, the smallest possible value greater than 0 is 2^-149.
If you're using IEEE 64-bit, the smallest possible value is 2^-1074.
That said, (x > 0) is probably the better test.

Floating point equality and tolerances

Comparing two floating point numbers with something like a_float == b_float is looking for trouble, since a_float / 3.0 * 3.0 might not be equal to a_float due to round-off error.
What one normally does is something like fabs(a_float - b_float) < tol.
How does one calculate tol?
Ideally the tolerance should be just larger than the value of one or two of the least significant figures. So if single-precision floating point numbers are used, tol = 10E-6 should be about right. However this does not work well for the general case, where a_float might be very small or very large.
How does one calculate tol correctly for all general cases? I am interested in C or C++ cases specifically.
This blog post contains an example, a fairly foolproof implementation, and the detailed theory behind it:
http://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
It is also one of a series, so you can always read more.
In short: use ULPs for most numbers, use an epsilon for numbers near zero, but there are still caveats. If you want to be sure about your floating point math, I recommend reading the whole series.
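As a sketch of the ULP idea from the article (assuming IEEE 754 doubles; this is my illustration, not the article's code, and NaNs and mixed signs are simply rejected):
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <cmath>

// Reinterpret the bits so that adjacent doubles differ by exactly 1.
bool nearly_equal_ulps(double a, double b, std::int64_t max_ulps) {
    if (std::isnan(a) || std::isnan(b)) return false;
    if (std::signbit(a) != std::signbit(b)) return a == b; // only true for +0.0 vs -0.0
    std::int64_t ia, ib;
    std::memcpy(&ia, &a, sizeof a);
    std::memcpy(&ib, &b, sizeof b);
    return std::llabs(ia - ib) <= max_ulps; // distance in representable steps
}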
As far as I know, one doesn't.
There is no general "right answer", since it can depend on the application's requirement for precision.
For instance, a 2D physics simulation working in screen-pixels might decide that 1/4 of a pixel is good enough, while a 3D CAD system used to design nuclear plant internals might not.
I can't see a way to programmatically decide this from the outside.
The C header file <float.h> gives you the constants FLT_EPSILON and DBL_EPSILON, each of which is the difference between 1.0 and the smallest number larger than 1.0 that a float/double can represent. You can scale that by the size of your numbers and the rounding error you wish to tolerate:
#include <float.h>
#include <math.h> /* for frexp, ldexp */

#ifndef DBL_TRUE_MIN
/* DBL_TRUE_MIN is a common non-standard extension for the minimum denormal value.
 * DBL_MIN is the minimum non-denormal value -- use that if TRUE_MIN is not defined. */
#define DBL_TRUE_MIN DBL_MIN
#endif

/* return the difference between |x| and the next larger representable double */
double dbl_epsilon(double x) {
    int exp;
    if (frexp(x, &exp) == 0.0)
        return DBL_TRUE_MIN;
    return ldexp(DBL_EPSILON, exp - 1);
}
Welcome to the world of traps, snares and loopholes. As mentioned elsewhere, a general purpose solution for floating point equality and tolerances does not exist. Given that, there are tools and axioms that a programmer may use in select cases.
fabs(a_float - b_float) < tol has the shortcoming OP mentioned: it "does not work well for the general case where a_float might be very small or might be very large." fabs(a_float - ref_float) <= fabs(ref_float * tol) copes with the varied ranges much better.
OP's "single precision floating point number is use tol = 10E-6" is a bit worrisome for C and C++ so easily promote float arithmetic to double and then it's the "tolerance" of double, not float, that comes into play. Consider float f = 1.0; printf("%.20f\n", f/7.0); So many new programmers do not realize that the 7.0 caused a double precision calculation. Recommend using double though out your code except where large amounts of data need the float smaller size.
C99 provides nextafter() which can be useful in helping to gauge "tolerance". Using it, one can determine the next representable number. This will help with the OP "... the full number of significant digits for the storage type minus one ... to allow for roundoff error." if ((nextafter(x, -INF) <= y && (y <= nextafter(x, +INF))) ...
The kind of tol or "tolerance" used is often the crux of the matter. Most often (IMHO) a relative tolerance is important. e. g. "Are x and y within 0.0001%"? Sometimes an absolute tolerance is needed. e.g. "Are x and y within 0.0001"?
The value of the tolerance is often debatable for the best value is often situation dependent. Comparing within 0.01 may work for a financial application for Dollars but not Yen. (Hint: be sure to use a coding style that allows easy updates.)
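As a sketch of the two kinds of test (both tolerance values are situation-dependent placeholders):
#include <cmath>

bool within_relative(double x, double y, double rel_tol) { // e.g. 1e-6 for 0.0001%
    return std::fabs(x - y) <= rel_tol * std::fmax(std::fabs(x), std::fabs(y));
}
bool within_absolute(double x, double y, double abs_tol) { // e.g. 0.0001
    return std::fabs(x - y) <= abs_tol;
}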
Rounding error varies according to values used for operations.
Instead of a fixed tolerance, you can probably use a factor of epsilon like:
bool nearly_equal(double a, double b, int factor /* a factor of epsilon */)
{
    double min_a = a - (a - std::nextafter(a, std::numeric_limits<double>::lowest())) * factor;
    double max_a = a + (std::nextafter(a, std::numeric_limits<double>::max()) - a) * factor;
    return min_a <= b && max_a >= b;
}
Although the value of the tolerance depends on the situation, if you are looking for a precision comparison you could use as the tolerance the machine epsilon value, std::numeric_limits<T>::epsilon() (from the <limits> library). The function returns the difference between 1 and the smallest value greater than 1 that is representable for the data type.
http://msdn.microsoft.com/en-us/library/6x7575x3.aspx
The value of epsilon differs if you are comparing floats or doubles. For instance, in my computer, if comparing floats the value of epsilon is 1.1920929e-007 and if comparing doubles the value of epsilon is 2.2204460492503131e-016.
For a relative comparison between x and y, multiply the epsilon by the maximum absolute value of x and y.
The result above could be multiplied by the ulps (units in the last place) which allows you to play with the precision.
#include <cmath>
#include <limits>
#include <algorithm> // for std::max

template<class T> bool are_almost_equal(T x, T y, int ulp)
{
    return std::abs(x - y) <= std::numeric_limits<T>::epsilon() * std::max(std::abs(x), std::abs(y)) * ulp;
}
When I need to compare floats, I use code like this
bool same(double a, double b, double error) {
    double x;
    if (a == 0) {
        x = b;
    } else if (b == 0) {
        x = a;
    } else {
        x = (a - b) / a;
    }
    return fabs(x) < error;
}

Can I trust a real-to-int conversion of the result of ceil()?

Suppose I have some code such as:
float a, b = ...; // both positive
int s1 = ceil(sqrt(a/b));
int s2 = ceil(sqrt(a/b)) + 0.1;
Is it ever possible that s1 != s2? My concern is when a/b is a perfect square. For example, perhaps a=100.0 and b=4.0, then the output of ceil should be 5.00000 but what if instead it is 4.99999?
Similar question: is there a chance that 100.0/4.0 evaluates to say 5.00001 and then ceil will round it up to 6.00000?
I'd prefer to do this in integer math but the sqrt kinda screws that plan.
EDIT: suggestions on how to better implement this would be appreciated too! The a and b values are integer values, so actual code is more like: ceil(sqrt(float(a)/b))
EDIT: Based on levis501's answer, I think I will do this:
float a, b = ...; // both positive
int s = sqrt(a/b);
while (s*s*b < a) ++s;
Thank you all!
I don't think it's possible. Regardless of the value of sqrt(a/b), what it produces is some value N that we use as:
int s1 = ceil(N);
int s2 = ceil(N) + 0.1;
Since ceil always produces an integer value (albeit represented as a double), we will always have some value X, for which the first produces X.0 and the second X.1. Conversion to int will always truncate that .1, so both will result in X.
It might seem like there would be an exception if X was so large that X.1 overflowed the range of double. I don't see where this could be possible though. Except close to 0 (where overflow isn't a concern) the square root of a number will always be smaller than the input number. Therefore, before ceil(N)+0.1 could overflow, the a/b being used as an input in sqrt(a/b) would have to have overflowed already.
You may want to write an explicit function for your case. e.g.:
/* return the smallest positive integer whose square is at least x */
int isqrt(double x) {
    int y1 = ceil(sqrt(x));
    int y2 = y1 - 1;
    if ((y2 * y2) >= x) return y2;
    return y1;
}
This will handle the odd case where the square root of your ratio a/b is within the precision of double.
Equality of floating point numbers is indeed an issue, but IMHO not if we deal with integer numbers.
If you have the case of 100.0/4.0, it should perfectly evaluate to 25.0, as 25.0 is exactly representable as a float, as opposed to e.g. 25.1.
Yes, it's entirely possible that s1 != s2. Why is that a problem, though?
It seems natural enough that s1 != (s1 + 0.1).
BTW, if you would prefer to have 5.00001 rounded to 5.00000 instead of 6.00000, use rint instead of ceil.
And to answer the actual question (in your comment) - you can use sqrt to get a starting point and then just find the correct square using integer arithmetic.
int min_dimension_greater_than(int items, int buckets)
{
    double target = double(items) / buckets;
    int min_square = ceil(target);
    int dim = floor(sqrt(target));
    int square = dim * dim;
    while (square < min_square) {
        dim += 1;
        square = dim * dim;
    }
    return dim;
}
And yes, this can be improved a lot, it's just a quick sketch.
s1 will always equal s2.
The C and C++ standards do not say much about the accuracy of math routines. Taken literally, it is impossible for the standard to be implemented, since the C standard says sqrt(x) returns the square root of x, but the square root of two cannot be exactly represented in floating point.
Implementing routines with good performance that always return a correctly rounded result (in round-to-nearest mode, this means the result is the representable floating-point number that is nearest to the exact result, with ties resolved in favor of a low zero bit) is a difficult research problem. Good math libraries target accuracy less than 1 ULP (so one of the two nearest representable numbers is returned), perhaps something slightly more than .5 ULP. (An ULP is the Unit of Least Precision, the value of the low bit given a particular value in the exponent field.) Some math libraries may be significantly worse than this. You would have to ask your vendor or check the documentation for more information.
So sqrt may be slightly off. If the exact square root is an integer (within the range in which integers are exactly representable in floating-point) and the library guarantees errors are less than 1 ULP, then the result of sqrt must be exactly correct, because any result other than the exact result is at least 1 ULP away.
Similarly, if the library guarantees errors are less than 1 ULP, then ceil must return the exact result, again because the exact result is representable and any other result would be at least 1 ULP away. Additionally, the nature of ceil is such that I would expect any reasonable math library to always return an integer, even if the rest of the library were not high quality.
As for overflow cases, if ceil(x) were beyond the range where all integers are exactly representable, then ceil(x)+.1 is closer to ceil(x) than it is to any other representable number, so the rounded result of adding .1 to ceil(x) should be ceil(x) in any system implementing the floating-point standard (IEEE 754). That is provided you are in the default rounding mode, which is round-to-nearest. It is possible to change the rounding mode to something like round-toward-infinity, which could cause ceil(x)+.1 to be an integer higher than ceil(x).
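A small sketch of that last effect (note that in practice fesetround may additionally require #pragma STDC FENV_ACCESS ON or specific compiler flags to be honoured):
#include <cfenv>
#include <cmath>
#include <iostream>

int main() {
    double x = std::ceil(9.0e15);  // integer-valued, in the range where the spacing is 1.0
    std::fesetround(FE_UPWARD);    // round toward +infinity
    double y = x + 0.1;            // now rounds up to the next representable double
    std::fesetround(FE_TONEAREST); // restore the default mode
    std::cout << (y > x) << '\n';  // prints 1 under FE_UPWARD
}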

Can float values add to a sum of zero? [duplicate]

Possible Duplicate:
Most effective way for float and double comparison
I have two values(floats) I am attempting to add together and average. The issue I have is that occasionally these values would add up to zero, thus not requiring them to be averaged.
The situation I am in specifically contains the values "-1" and "1", yet when added together I am given the value "-1.19209e-007" which is clearly not 0. Any information on this?
I'm sorry, but this doesn't make sense to me.
Two floating point values, if they have exactly the same magnitude but opposite sign, will always produce 0 when added. This is how floating point operations work.
float a = 0.2f;
float b = -0.2f;
float f = (a + b) / 2;
printf("%f %d\n", f, f != 0); // will print out 0.000000 0
It will always be 0, even if the compiler doesn't optimize the code.
There is not any kind of rounding error to take into account if a and b have the same value but opposite sign! That is, if the sign bit of a is 0 and the sign bit of b is 1 and all the other bits are the same, the result cannot be anything other than 0.
But if a and b are slightly different, of course, the result can be non-zero.
One possible solution to avoid this can be using a tolerance...
float f = (a + b) / 2;
if (fabs(f) < 0.000001f)
    f = 0;
We are using a simple tolerance to see if our value is near to zero.
A nice example code to show this is...
int main(int argc, char** argv)
{
    for (int i = -10000000; i <= 10000000 * argc; ++i)
    {
        if (i != 0)
        {
            float a = 3.14159265f / i;
            float b = -a + (argc - 1);
            float f = (a + b) / 2;
            if (f != 0)
                printf("%f %f\n", a, f);
        }
    }
    printf("completed\n");
    return 0;
}
I'm using "argc" here as a trick to force the compiler to not optimize out our code.
At least right off, this sounds like typical floating point imprecision.
The usual way to deal with it is to round your numbers to the correct number of significant digits. In this case, your average would be -1.19209e-08 (i.e., 0.00000001192). To (say) six or seven significant digits, that is zero.
Take the sum of all your numbers and divide by your count. Round off your answer to something reasonable before you do prints, reports, comparisons, or whatever you're doing.
Again, do some searching on this, but here is the basic explanation...
The computer approximates floating point numbers in base 2 instead of base 10. This means that, for example, 0.2 (when converted to binary) is actually 0.001100110011... on forever. Since the computer cannot store these digits forever, it must approximate.
Because of these approximations, we lose "precision" in calculations; hence "single" and "double" precision floating point numbers. This is why you never test whether a float is actually 0; instead, you test whether it is below some threshold which you want to use as zero.
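You can see the approximation directly by printing extra digits:
#include <cstdio>

int main() {
    // 0.2 has no finite base-2 representation; what is stored is the
    // nearest representable double.
    printf("%.20f\n", 0.2); // prints something like 0.20000000000000001110
}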

C++: how can I test if a number is power of ten?

I want to test if a number double x is an integer power of 10. I could perhaps use cmath's log10 and then test if x == (int) x?
edit: Actually, my solution does not work, because doubles can be very big, much bigger than an int, and also very small, like fractions.
A lookup table will be by far the fastest and most precise way to do this; only about 600 powers of 10 are representable as doubles. You can use a hash table, or if the table is ordered from smallest to largest, you can rapidly search it with binary chop.
This has the advantage that you will get a "hit" if and only if your number is exactly the closest possible IEEE double to some power of 10. If this isn't what you want, you need to be more precise about exactly how you would like your solution to handle the fact that many powers of 10 can't be exactly represented as doubles.
The best way to construct the table is probably to use string -> float conversion; that way hopefully your library authors will already have solved the problem of how to do the conversion in a way that gives the most precise answer possible.
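A sketch of that table construction (the exponent bounds here are assumptions based on the approximate range of IEEE doubles, roughly 10^-323 through 10^308):
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Build every power of ten via string -> double conversion, then binary-search.
std::vector<double> make_pow10_table() {
    std::vector<double> table;
    for (int e = -323; e <= 308; ++e) {
        char buf[16];
        std::snprintf(buf, sizeof buf, "1e%d", e);
        table.push_back(std::strtod(buf, nullptr));
    }
    return table; // already sorted, smallest to largest
}

bool is_power_of_ten(double x) {
    static const std::vector<double> table = make_pow10_table();
    return std::binary_search(table.begin(), table.end(), x);
}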
Your solution sounds good but I would replace the exact comparison with a tolerance one.
double exponent = log10(value);
double rounded = floor(exponent + 0.5);
if (fabs(exponent - rounded) < some_tolerance) {
    // Power of ten
}
I am afraid you're in for a world of hurt. There is no way to cast down a very large or very small floating point number to a BigInt class because you lost precision when using the small floating point number.
For example, float only has about 6 digits of precision. So if you represent 10^9 as a float, chances are it will be converted back as 1 000 000 145 or something like that: nothing guarantees what the last digits will be; they are beyond the precision.
You can of course use a much more precise representation, like double, which has about 15 digits of precision. So normally you should be able to represent integers from 0 to 10^14 faithfully.
Finally, some platforms may have a long long type with even greater precision.
But anyway, as soon as your value exceed the number of digits available to be converted back to an integer without loss... you can't test it for being a power of ten.
If you really need this precision, my suggestion is not to use a floating point number. There are mathematical libraries available with BigInt implementations or you can roll your own (though efficiency is difficult to achieve).
bool power_of_ten(double x) {
    if (x < 1.0 || x > 10E15) {
        warning("IEEE754 doubles can only precisely represent powers "
                "of ten between 1 and 10E15, answer will be approximate.");
    }
    double exponent;
    // power of ten if log10 of absolute value has no fractional part
    return !modf(log10(fabs(x)), &exponent);
}
Depending on the platform your code needs to run on, the log might be very expensive. Since the set of numbers that are 10^n (where n is natural) is very small, it might be faster to just use a hardcoded lookup table.
(Ugly pseudo code follows:)
bool isPowerOfTen(int16 x)
{
    if (x == 10        // n=1
        || x == 100    // n=2
        || x == 1000   // n=3
        || x == 10000) // n=4
        return true;
    return false;
}
This covers the whole int16 range and if that is all you need might be a lot faster.
(Depending on the platform.)
How about a code like this:
#include <stdio.h>
#define MAX 32 /* must be large enough for the "%lf" output of the values you expect */

bool check_pow10(double num)
{
    char arr[MAX];
    sprintf(arr, "%lf", num);
    char* ptr = arr;
    bool isFirstOne = true;
    while (*ptr)
    {
        switch (*ptr++)
        {
            case '1':
                if (isFirstOne)
                    isFirstOne = false;
                else
                    return false;
                break;
            case '0':
            case '.':
                break;
            default:
                return false;
        }
    }
    return !isFirstOne; /* require exactly one '1', so 0.0 is rejected */
}
int main()
{
double number;
scanf("%lf",&number);
printf("isPower10: %s\n",check_pow10(number)?"yes":"no");
}
That would not work for negative powers of 10 though.
EDIT: works for negative powers also.
If you don't need it to be fast, use recursion. Pseudocode:
bool checkifpoweroften(double candidate)
    if candidate >= 10
        return checkifpoweroften(candidate / 10)
    else if candidate <= 0.1
        return checkifpoweroften(candidate * 10)
    else if candidate == 1
        return true
    else
        return false
You still need to choose between false positives and false negatives and add tolerances accordingly, as other answers pointed out. The tolerances should apply to all comparisons, or else, for example, 9.99999999 would fail the >= 10 comparison; a tolerant variant is sketched below.
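Here is one tolerant C++ rendering of the recursion above (my sketch, not the original pseudocode; rel_tol is an application-chosen relative tolerance, e.g. 1e-9):
#include <cmath>

bool checkifpoweroften_tol(double candidate, double rel_tol) {
    if (!std::isfinite(candidate) || candidate <= 0.0) return false;
    if (candidate >= 10.0 * (1.0 - rel_tol))
        return checkifpoweroften_tol(candidate / 10.0, rel_tol); // scale down
    if (candidate <= 0.1 * (1.0 + rel_tol))
        return checkifpoweroften_tol(candidate * 10.0, rel_tol); // scale up
    return std::fabs(candidate - 1.0) <= rel_tol; // compare against 1 with tolerance
}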
how about that:
bool isPow10(double number, double epsilon)
{
    if (number > 0)
    {
        for (int i = 1; i < 16; i++)
        {
            if ((number >= (pow((double)10, i) - epsilon)) &&
                (number <= (pow((double)10, i) + epsilon)))
            {
                return true;
            }
        }
    }
    return false;
}
I guess if performance is an issue the few values could be precomputed, with or without the epsilon according to the needs.
A variant of this one:
double log10_value = log10(value);
double integer_value;
double fractional_value = modf(log10_value, &integer_value);
return fractional_value == 0.0;
Note that the comparison to 0.0 is exact rather than within a particular epsilon since you want to ensure that log10_value is an integer.
EDIT: Since this sparked a bit of controversy due to log10 possibly being imprecise and the generic understanding that you shouldn't compare doubles without an epsilon, here's a more precise way of determining if a double is a power of 10 using only properties of powers of 10 and IEEE 754 doubles.
First, a clarification: a double can represent powers of 10 exactly up to 1E22, as 1e22 has only 52 significant bits. Luckily, 5^22 also has only 52 significant bits, so we can determine whether a double is (2*5)^n for n = [0, 22]:
bool is_pow10(double value)
{
    int exponent;
    double mantissa = frexp(value, &exponent);
    int exponent_adjustment = exponent / 10;
    int possible_10_exponent = (exponent - exponent_adjustment) / 3;
    if (possible_10_exponent >= 0 &&
        possible_10_exponent <= 22)
    {
        mantissa *= pow(2.0, exponent - possible_10_exponent);
        return mantissa == pow(5.0, possible_10_exponent);
    }
    else
    {
        return false;
    }
}
Since 2^10==1024, that adds an extra bit of significance that we have to remove from the possible power of 5.