Compute arithmetic-geometric mean without using an epsilon in C++ [closed] - c++

Is it possible to compute arithmetic-geometric mean without using an epsilon in C++?
Here is my code:
double agm(double a, double b)
{
    if(a > b)
    {
        a = a + b;
        b = a - b;
        a = a - b;
    }
    double aCurrent(a), bCurrent(b),
           aNext(a), bNext(b);
    while(aCurrent - bCurrent != 0)
    {
        aNext = sqrt(aCurrent*bCurrent);
        bNext = (aCurrent+bCurrent)*0.5;
        aCurrent = aNext;
        bCurrent = bNext;
    }
    return aCurrent;
}

double sqrt(double x)
{
    double res(x * 0.5);
    do
    {
        res = (res + x/res) * 0.5;
    } while(abs(res*res - x) > 1.0e-9);
    return res;
}
And it runs forever.
Actually, it is very clear what I was asking. It is just that you have never met the problem, and perhaps without thinking about it you are saying at once that there is nothing to talk about.
So, here is the solution I was looking for:
Instead of an eps we can just add the following condition:
if(aCurrent <= aPrev || bPrev <= bCurrent || bCurrent <= aCurrent)
If the condition is true, it means that we have computed the arithmetic-geometric mean to the highest precision possible on our machine. As you can see, there is no eps.
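For reference, here is a minimal sketch of the loop with that stopping condition (my own reconstruction, assuming std::sqrt from <cmath> rather than the hand-rolled sqrt above):

#include <cmath>
#include <utility>

double agm(double a, double b)
{
    if (a > b)
        std::swap(a, b);                    // keep a <= b
    double aPrev = a, bPrev = b;
    for (;;)
    {
        const double aNext = std::sqrt(aPrev * bPrev);  // geometric mean, increases
        const double bNext = (aPrev + bPrev) * 0.5;     // arithmetic mean, decreases
        // At machine precision the iterates stall or cross instead of
        // converging further, so stop as soon as progress ends.
        if (aNext <= aPrev || bPrev <= bNext || bNext <= aNext)
            return aNext;
        aPrev = aNext;
        bPrev = bNext;
    }
}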
Using an eps in the question and answers means that we declare two double numbers to be equal when the difference between them is less than eps.
Please reconsider reopening the question.

Of course you can. It suffices to limit the number of iterations to the maximum required for convergence in any case, which should be close to the logarithm of the number of significant bits in the floating-point representation.
The same reasoning holds for the square root. (With a good starting approximation based on the floating-point exponent, i.e. at most a factor 2 away from the exact root, 5 iterations always suffice for doubles).
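For instance, here is a sketch (my own, not from the answer) of a square root with a fixed iteration count, seeded from the binary exponent so the initial guess is within a factor of 2 of the exact root:

#include <cmath>  // frexp, ldexp

double sqrt_fixed(double x)  // assumes x > 0
{
    int e;
    std::frexp(x, &e);                    // x = m * 2^e, m in [0.5, 1)
    double res = std::ldexp(1.0, e / 2);  // guess ~ 2^(e/2), within 2x of the root
    for (int i = 0; i < 6; ++i)           // quadratic convergence: a handful of
        res = (res + x / res) * 0.5;      // Newton steps reach full precision
    return res;
}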
As a side note, avoid using absolute tolerances. Floating-point values can vary in a very wide range. They can be so large that the tolerance is 0 in comparison, or so tiny that they are below the tolerance itself. Prefer relative tolerances, with the extra difficulty that there is no relative tolerance to 0.

No, it's not possible without using an epsilon. Floating point arithmetic is an approximation of real arithmetic, and usually generates roundoff errors. As a result, it's unlikely the two calculation sequences used to compute the AGM will ever converge to exactly the same floating point numbers. So rather than test whether two floating point numbers are equal, you need to test whether they're close enough to each other to consider them effectively equal. And that's done by calculating the difference and testing whether it's really small.
You can either use a hard-coded epsilon value, or calculate it relative to the size of the numbers. The latter tends to be better, because it allows you to work with different number scales. E.g. you shouldn't use the same epsilon to try to calculate the square root of 12345 and 0.000012345; 0.01 might be adequate for the large number, but you'd need something like 0.000001 for the small number.
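For example, a relative comparison might look like this (my own sketch; the tolerance scales with the operands, though note that it never considers a non-zero value equal to 0):

#include <algorithm>
#include <cmath>

bool nearly_equal(double a, double b, double relTol = 1e-12)
{
    // The allowed difference scales with the larger magnitude.
    return std::fabs(a - b) <= relTol * std::max(std::fabs(a), std::fabs(b));
}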
See What every programmer should know about floating point

Related

How do I safely convert a double into an integer in C++? [closed]

Is there a way to convert a double into an integer without risking any undesired errors in the process? I read in Programming: Principles and Practice Using C++ (a book written by the creator of C++) that doubles cannot be turned into integers, but I've put it to the test and it converts properly about 80% of the time. What's the best way to do this with no risk at all, if it's even possible?
So for example, this converts properly.
double bruh = 10.0;
int a = bruh;
cout << a << "\n";
But this doesn't.
double bruh = 10.9;
int a = bruh;
cout << a << "\n";
In short, it doesn't round automatically so I think that's what constitutes it as "unsafe".
It is not possible to convert all doubles to integers with no risk of losing data.
First, if the double contains a fractional part (42.9), that fractional part will be lost.
Second, doubles can hold a much larger range of values than most integers, something around 1.7e308, so when you get into the larger values you simply won't be able to store them into an integer.
way to convert a double into an integer without risking any undesired errors
in short, it doesn't round automatically so I think that's what constitutes it as "unsafe"
To convert to an integer value:
x = round(x);
To convert to an integer type:
Start with a round function like long lround(double x);. It "Returns the integer value that is nearest in value to x, with halfway cases rounded away from zero."
If the round result is outside the long range, problems occur and code may want to test for that first.
// Carefully form a double that is 1 more than LONG_MAX
#define LONG_MAXP1 ((LONG_MAX/2 + 1)*2.0)
long val = 0;
if (x - LONG_MAXP1 < -0.5 && x - LONG_MIN > -0.5) {
    val = lround(x);
} else {
    Handle_error();
}
Detail: in order to test whether a double is in range to round to a long, it is important to test the endpoints carefully. The mathematically valid range is (LONG_MIN - 0.5 ... LONG_MAX + 0.5), yet those endpoints may not be exactly representable as a double. Instead, the code uses the nearby LONG_MIN and LONG_MAXP1, whose magnitudes are powers of 2 and easy to represent exactly as a double.
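Putting it together, a checked conversion might look like this (my own sketch; it reports failure to the caller instead of calling Handle_error):

#include <climits>
#include <cmath>

bool double_to_long(double x, long& out)
{
    // Exactly LONG_MAX + 1: built from a power of 2, so representable.
    const double long_maxp1 = (LONG_MAX / 2 + 1) * 2.0;
    if (x - long_maxp1 < -0.5 && x - LONG_MIN > -0.5) {
        out = std::lround(x);
        return true;
    }
    return false;  // out of range: caller handles the error
}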

How to check a number for irrationality [closed]

How can I check a number for irrationality? The number is entered as input, and only the standard library may be used.
All floating-point numbers can be represented in the form x = significand × 2^exponent, where significand and exponent are integers. All such numbers are rational by definition. That's why this problem has only an approximate solution.
One possible approach is to expand the number into a continued fraction. If you encounter a very small or zero denominator, the input number is approximately rational. Or, better: if none of the denominators are small, then the number is approximately irrational.
Rough idea:
#include <cmath>
#include <iostream>

bool is_rational(double x) {
    x = std::abs(x);
    for (int i = 0; i < 20; ++i) {
        const auto a = std::floor(x);
        if (x - a < 1e-8)
            return true;
        x = 1 / (x - a);
    }
    return false;
}

int main() {
    std::cout << std::boolalpha;
    std::cout << is_rational(2019. / 9102.) << std::endl; // Output: true
    std::cout << is_rational(std::sqrt(2)) << std::endl;  // Output: false
}
One should think about the optimal choice of magic numbers inside is_rational().
As #Jabberwocky pointed out, you won't be able to verify whether the number is truly irrational with computational means.
Looking at it like a typical student's assignment, my best bet is trying to approximate the number by division without creating an endless loop. Consider using Hurwitz's theorem or Dirichlet's approximation theorem.
Either way, you will have to set some bound for your computation (your precision): after how many digits you consider the number irrational.
Irrational numbers are... well... irrational. Meaning you can't represent them with a finite numerical value; that's why we write them with a symbol (e.g. π). The most you can do is an approximation. For example, in math.h there is an approximation of π as a long double.
If you want something more accurate, you can use a byte array to reimplement something like BigNum, but it will still be an approximation.
We enter an irrational number
If your question was about input (cin), then just grab it as a string and do an approximation.
The amount of memory your computer has is finite. The number of rational numbers is infinite. More importantly, the number of rational numbers whose decimal expansion is too long for your computer to store is infinite as well (and this is true for every computer in the world). This is because you can build a rational number as "long" as you want just by appending random digits to it.
A computer can't really work with (all) real numbers. As far as I know, the only thing computers do is work with limited accuracy (although accurate enough to be very useful). All of the numbers they work with are rational.
Given a number, the only thing you can ask your computer to do is to calculate its decimal expansion to some extent, given enough time. All numbers your computer can do this with are known as the set of computable numbers. But even this set is "small" when compared with the set of real numbers in terms of cardinality.
Therefore, a computer has no way to decide whether a number is rational or not, as the concept of number it works with is too simple for that.

What can std::numeric_limits<double>::epsilon() be used for?

#include <cmath>
#include <limits>

unsigned int updateStandardStopping(unsigned int numInliers, unsigned int totPoints, unsigned int sampleSize)
{
    double max_hypotheses_ = 85000;
    double n_inliers = 1.0;
    double n_pts = 1.0;
    double conf_threshold_ = 0.95;
    for (unsigned int i = 0; i < sampleSize; ++i)
    {
        n_inliers *= numInliers - i; // n_inliers = I(I-1)...(I-m+1)
        n_pts *= totPoints - i;      // n_pts = N(N-1)(N-2)...(N-m+1)
    }
    double prob_good_model = n_inliers/n_pts;
    if ( prob_good_model < std::numeric_limits<double>::epsilon() )
    {
        return (unsigned int)max_hypotheses_;
    }
    else if ( 1 - prob_good_model < std::numeric_limits<double>::epsilon() )
    {
        return 1;
    }
    else
    {
        double nusample_s = std::log(1 - conf_threshold_) / std::log(1 - prob_good_model);
        return (unsigned int)std::ceil(nusample_s);
    }
}
Here is a selection statement:
if ( prob_good_model < std::numeric_limits<double>::epsilon() )
{...}
To my understanding, the judgement is the same as (or an approximation to)
prob_good_model < 0
So, am I right? And where else can std::numeric_limits<double>::epsilon() be used besides that?
The point of epsilon is to make it (fairly) easy for you to figure out the smallest difference you could see between two numbers.
You don't normally use it exactly as-is though. You need to scale it based on the magnitudes of the numbers you're comparing. If you have two numbers around 1e-100, then you'd use something on the order of: std::numeric_limits<double>::epsilon() * 1.0e-100 as your standard of comparison. Likewise, if your numbers are around 1e+100, your standard would be std::numeric_limits<double>::epsilon() * 1e+100.
If you try to use it without scaling, you can get drastically incorrect (utterly meaningless) results. For example:
if (std::abs(1e-100 - 1e-200) < std::numeric_limits<double>::epsilon())
Yup, that's going to show up as "true" (i.e., saying the two are equal) even though they differ by 100 orders of magnitude. In the other direction, if the numbers are much larger than 1, comparing to an (unscaled) epsilon is equivalent to saying if (x != y)--it leaves no room for rounding errors at all.
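A quick demonstration of the difference scaling makes (my own sketch):

#include <cmath>
#include <cstdio>
#include <limits>

int main()
{
    const double a = 1e-100, b = 1e-200;
    const double eps = std::numeric_limits<double>::epsilon();
    // Unscaled: declares a and b "equal" although they differ by
    // 100 orders of magnitude.
    std::printf("%d\n", std::fabs(a - b) < eps);                 // prints 1
    // Scaled to the operands' magnitude: correctly reports them different.
    std::printf("%d\n", std::fabs(a - b) < eps * std::fabs(a));  // prints 0
}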
At least in my experience, the epsilon specified for the floating point type isn't often of a great deal of use though. With proper scaling, it tells you the smallest difference there could possibly be between two numbers of a given magnitude (for a particular floating point implementation).
In practice, however, that's of relatively little use. A more realistic number will typically be based on the precision of the inputs, and an estimate of the amount of precision you're likely to have lost due to rounding (and such).
For example, let's assume you started with values measured to a precision of 1 part per million, and you did only a few calculations, so you believe you may have lost as much as 2 digits of precision due to rounding errors. In this case, the "epsilon" you care about is roughly 1e-4, scaled to the magnitude of the numbers you're dealing with. That is to say, under those circumstances, you can expect on the order of 4 digits of precision to be meaningful, so if you see a difference in the first four digits, it probably means the values aren't equal, but if they differ only in the fifth (or later) digits, you should probably treat them as equal.
The fact that the floating point type you're using can represent (for example) 16 digits of precision doesn't mean that every measurement you use is going to be nearly that precise--in fact, it's relatively rare that anything based on physical measurements has any hope of being even close to that precise. It does, however, give you a limit on what you can hope for from a calculation--even if you start with a value that's precise to, say, 30 digits, the most you can hope for after calculation is going to be defined by std::numeric_limits<T>::epsilon.
It can be used in situations where a function is undefined, but you still need a value at that point. You lose a bit of accuracy, especially in extreme cases, but sometimes that's alright.
Say you're using 1/x somewhere, but your range of x is [0, n[. You can use 1/(x + std::numeric_limits<double>::epsilon()) instead, so that x == 0 is still defined. That being said, you have to be careful with how the value is used; it might not work for every case.
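A minimal sketch of that guard (my own):

#include <iostream>
#include <limits>

double safe_inverse(double x)
{
    // Shifting the denominator keeps the result finite at x == 0.
    return 1.0 / (x + std::numeric_limits<double>::epsilon());
}

int main()
{
    std::cout << safe_inverse(0.0) << '\n';  // huge but finite (~4.5e15)
    std::cout << safe_inverse(2.0) << '\n';  // ~0.5
}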

round to inferior odd or even integer in C [closed]

Working in C language, I would like to round a float number to its inferior odd integer and its inferior even integer.
The speed of the solution is very important (because it is computed 2M*20 times per second).
I propose this solution :
x_even = (int)floor(x_f) & ~1;
x_odd = ((int)ceil(x_f) & ~1) -1;
I presume that the weak point is the floor and ceil operations, but I'm not even sure of that.
Does anyone have a comment on this solution? I'm interested in its speed of execution, but if you have another solution to share, I'll be very happy to test it :-).
You don't explain what you mean by 'inferior', but assuming you mean 'greatest even/odd integer less than the given number', and assuming you have a 2s-complement machine, you want:
x_i = (int)floor(x_f);
x_even = x_i & ~1;
x_odd = x_i - (~x_i & 1);
If you want to avoid the implementation dependency of doing bitwise ops on possibly negative signed numbers, you could instead do it entirely in float:
x_even = 2.0 * floor(x_f * 0.5);
x_odd = x_even + 1.0 > x_f ? x_even - 1.0 : x_even + 1.0;
This also has the advantage of not overflowing for large numbers, though it does give you x_odd == x_even for large numbers (those too big for the floating point representation to represent an odd number).
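A quick correctness check of those formulas (my own sketch, including negative inputs):

#include <cmath>
#include <cstdio>

int main()
{
    const double samples[] = { 5.5, 4.5, -3.2, -4.0, 0.7 };
    for (double x_f : samples)
    {
        const int x_i = (int)std::floor(x_f);
        const int x_even = x_i & ~1;          // greatest even integer <= x_f
        const int x_odd  = x_i - (~x_i & 1);  // greatest odd integer <= x_f
        std::printf("%6.2f -> even %3d, odd %3d\n", x_f, x_even, x_odd);
    }
}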
Perhaps the ceil and floor functions won't be necessary, as casting a double to an int is equivalent to the floor function for positive values.
Try something like this for POSITIVE values:
double k = 68.8;                    // Because we need something to seed with.
int even = ((int) k & ~1);          // What you did
int test = ((int) (k+1) & ~1);      // Little trick
int odd = (test > k) ? even + 1 : even - 1;
I tested it on codepad for C++ (http://codepad.org/y3t0KgwW), and it works well; I think it will in C too. If you test this solution, I'd be glad to know how fast it is...
Notice that:
This is not a good answer, as it ignores the existence of negative numbers.
The range is limited to an int's.
I had swapped the odd and even numbers; I corrected it thanks to Chris's comment.
I'm just adding my humble contribution :)

Can float values add to a sum of zero? [duplicate]

Possible Duplicate:
Most effective way for float and double comparison
I have two values(floats) I am attempting to add together and average. The issue I have is that occasionally these values would add up to zero, thus not requiring them to be averaged.
The situation I am in specifically contains the values "-1" and "1", yet when added together I am given the value "-1.19209e-007" which is clearly not 0. Any information on this?
I'm sorry but this doesn't make sense to me.
Two floating point values that are exactly the same but with opposite sign will always produce 0 when added. This is how floating point operations work.
float a = 0.2f;
float b = -0.2f;
float f = (a + b) / 2;
printf("%f %d\n", f, f != 0); // will print out 0.000000 0
It will always be 0, even if the compiler doesn't optimize the code.
There is no rounding error to take into account if a and b have the same value but opposite sign! That is, if the sign bit of a is 0 and the sign bit of b is 1 and all other bits are the same, the result cannot be anything other than 0.
But if a and b are slightly different, of course, the result can be non-zero.
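Incidentally, the value in the question is suggestive: -1.19209e-007 is exactly FLT_EPSILON, the gap between 1 and the next float, so the operands were most likely not exactly -1 and 1 but differed by one ulp. A sketch of my own that reproduces it:

#include <cfloat>
#include <cstdio>

int main()
{
    std::printf("%g\n", FLT_EPSILON);   // 1.19209e-07
    float a = 1.0f;
    float b = -(1.0f + FLT_EPSILON);    // one ulp beyond -1
    std::printf("%g\n", a + b);         // -1.19209e-07, not 0
}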
One possible solution to avoid this is to use a tolerance...
float f = (a + b) / 2;
if (fabs(f) < 0.000001f)
    f = 0;
We are using a simple tolerance to see whether our value is near zero.
A nice example to show this is...
#include <stdio.h>

int main(int argc, char **argv)
{
    for (int i = -10000000; i <= 10000000 * argc; ++i)
    {
        if (i != 0)
        {
            float a = 3.14159265f / i;
            float b = -a + (argc - 1);
            float f = (a + b) / 2;
            if (f != 0)
                printf("%f %f\n", a, f);
        }
    }
    printf("completed\n");
    return 0;
}
I'm using "argc" here as a trick to force the compiler to not optimize out our code.
At least right off, this sounds like typical floating point imprecision.
The usual way to deal with it is to round your numbers to the correct number of significant digits. In this case, your average would be about -5.96e-008 (i.e., -0.0000000596). To (say) six or seven significant digits, that is zero.
Take the sum of all your numbers and divide by your count. Round off your answer to something reasonable before you do prints, reports, comparisons, or whatever you're doing.
Again, do some searching on this, but here is the basic explanation...
The computer approximates floating point numbers in base 2 instead of base 10. This means that, for example, 0.2 (when converted to binary) is actually 0.001100110011... repeating forever. Since the computer cannot store these digits forever, it must approximate.
Because of these approximations, we lose "precision" in calculations; hence "single" and "double" precision floating point numbers. This is why you never test whether a float is exactly 0. Instead, you test whether it is below some threshold which you want to treat as zero.
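A tiny demonstration of that threshold idea (my own sketch): accumulating 0.2f, which has no exact binary representation, drifts away from the exact result, so an exact test fails where a threshold test succeeds.

#include <cmath>
#include <cstdio>

int main()
{
    float sum = 0.0f;
    for (int i = 0; i < 10; ++i)
        sum += 0.2f;                                      // 0.2 is inexact in binary
    std::printf("%.9f\n", sum);                           // typically not exactly 2
    std::printf("%d\n", sum == 2.0f);                     // exact test: likely 0
    std::printf("%d\n", std::fabs(sum - 2.0f) < 1e-6f);   // threshold test: 1
}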