I divided 9501 by 100.0f expecting to get 95.01f, but for some reason the result was 95.01000000002f.
I am aware of rounding errors, and that dividing two larger floats can give an imprecise result, but these two numbers are relatively small; they should not give a bad answer.
I changed the floats to doubles, only to see the same result.
So my question is: why am I seeing this incorrect output?
And, ideally, is there a workaround that does not involve copying the number to a string and back?
Floating point numbers are not precise, and dealing with them has lots of idiosyncrasies.
What Every Computer Scientist Should Know About Floating-Point Arithmetic
I also enjoy Bruce Dawson's blog entries on floating point values.
Floating point numbers are numbers represented in binary with limited precision.
The error between the expected and actual result is caused by the fact that 95.01 has an infinitely repeating binary representation.
A double has only 52 explicit significand bits (53 counting the implicit leading bit), so there has to be some rounding before the number is stored in double precision. Single precision has only 23 explicit bits.
It is not possible to represent 95.01 in a finite-precision floating-point number without some error.
However, a float is only good for roughly the first 6-9 significant decimal digits (a double for 15-17), so you should format the number with a meaningful precision.
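For instance, a minimal sketch of that formatting (the exact noise digits will vary):

#include <cstdio>

int main() {
    float f = 9501 / 100.0f;
    std::printf("%.2f\n", f);   // prints 95.01: within the trusted digits
    std::printf("%.10f\n", f);  // exposes the binary rounding error
}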
Ahh good, another one of us has become a man in the church of programming :)
Floating-point values are not exact for most decimal fractions. For example, 0.1f is not stored as exactly 0.1; it is more like 0.100000001490116..., the nearest value the binary format can hold. (A value like 1.0f, on the other hand, happens to be exact, since powers of two are representable.)
Related
I'm having trouble rounding floats. I'm solving a task where the result needs to be rounded to two decimal places, but I can't do it when the third decimal digit is 5, because it is stored incorrectly.
For example: my result is 1.005, which should be rounded to 1.01. But C++ rounds it to 1.00 because the original float is stored as 1.0049999... and not 1.005.
I've already tried always adding a very small float to the result, but some other test cases are then rounded up when they should be rounded down.
I know how floating point works and that it is often not completely accurate. I'm just wondering whether anyone has found a way around this specific problem.
When you say "my result is equal to 1.005", you are assuming some count of true decimal digits: it could be 1.005 (three fractional digits), 1.0050 (four), 1.005000, and so on.
So you should first round, using an ordinary rounding, to that count of digits. It is simpler to do this with integers: for example, with 6 fractional digits, multiply by 1,000,000 and apply an ordinary round(), rint(), etc. This step gives you an exact decimal number. After that, you can make the required final rounding to what you need.
In your example, the first step rounds 1,004,999.99... to 1,005,000. Then divide by 10,000 and round again (giving 101), and divide by 100 to get 1.01.
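A minimal sketch of that two-stage rounding (the six-fractional-digit assumption here is illustrative):

#include <cmath>
#include <cstdio>

// Round to 2 decimal places via an exact intermediate decimal value.
double roundTwoPlaces(double value) {
    double exact = std::round(value * 1e6);  // 1004999.99... -> 1005000
    return std::round(exact / 1e4) / 100.0;  // 100.5 -> 101 -> 1.01
}

int main() {
    std::printf("%.2f\n", roundTwoPlaces(1.005));  // prints 1.01
}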
(Notice that there are suggestions to perform this rounding in a specific way. The General Decimal Arithmetic specification and IBM arithmetic manuals suggest that an exact fractional part of 0.5 be rounded away from zero unless the least significant result digit would become 0 or 5, in which case it is rounded toward zero. But if no such rounding is available, a general away-from-zero rounding is also suitable.)
If you are implementing arithmetic for money accounting, it is reasonable to avoid floating point entirely and use fixed-point arithmetic (emulated with integers, if needed). This is better because the rounding methods described above inevitably involve conversion to integers (and back), so it's cheaper to use such integers directly. You get exact-operation checking as well (at the cost of handling integer overflow explicitly).
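A minimal fixed-point sketch along those lines (cents in a 64-bit integer; the names are illustrative):

#include <cstdio>

int main() {
    long long priceCents = 1005;            // 10.05, stored exactly
    long long totalCents = 3 * priceCents;  // exact integer arithmetic
    std::printf("%lld.%02lld\n", totalCents / 100, totalCents % 100);  // 30.15
}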
If you can, use a library like Boost with its Multiprecision support.
Another option is to use a long double; maybe that's precise enough for you.
How does a program (MySQL is an example) store a float like 0.9 and then return it to me as 0.9? Why does it not return 0.899...?
The issue I am currently experiencing is retrieving floating point values from MySQL with C++ and then reprinting those values.
There are software libraries, like Gnu MP, that implement arbitrary-precision arithmetic and can calculate floating-point numbers to a specified precision. Using Gnu MP you can, for example, add 0.3 to 0.6 and get exactly 0.9. No more, no less.
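For instance, a sketch using GMP's rational type through its C++ interface (gmpxx.h, linked with -lgmpxx -lgmp), in which decimal fractions are held exactly:

#include <gmpxx.h>
#include <iostream>

int main() {
    mpq_class a(3, 10), b(6, 10);  // exact rationals 3/10 and 6/10
    a.canonicalize();              // GMP requires canonical form after
    b.canonicalize();              // constructing from numerator/denominator
    mpq_class sum = a + b;         // exact rational arithmetic
    std::cout << sum << '\n';      // prints 9/10 -- no more, no less
}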
Database servers do pretty much the same thing.
For normal, run-of-the-mill applications, native floating-point arithmetic is fast, and it's good enough. But database servers typically have plenty of spare CPU cycles: their limiting factor is usually not CPU but something like available I/O bandwidth, so they can afford to burn cycles on complicated arbitrary-precision arithmetic calculations.
There are a number of algorithms for rounding floating point numbers in a way that will result in the same internal representation when read back in. For an overview of the subject, with links to papers with full details of the algorithms, see
Printing Floating-Point Numbers
What's happening, in a nutshell, is that the function which converts the floating-point approximation of 0.9 to decimal text is actually coming up with a value like 0.90000....0123 or 0.89999....9573. This gets rounded to 0.90000...0. And then these trailing zeros are trimmed off so you get a tidy looking 0.9.
Although floating-point numbers are inexact, and often do not use base 10 internally, they can in fact precisely save and recover a decimal representation. For instance, an IEEE 754 64 bit representation has enough precision to preserve 15 decimal digits. This is often mapped to the C language type double, and that language has the constant DBL_DIG, which will be 15 when double is this IEEE type.
If a decimal number with 15 digits or fewer is converted to double, it can be converted back to exactly that number. The conversion routine just has to round it off at 15 digits; of course, if the conversion routine instead uses, say, 40 digits, there will be messy trailing digits representing the error between the floating-point value and the original number. The more digits you print, the more accurately that error is rendered.
There is also the opposite problem: given a floating-point object, can it be printed into decimal such that the resulting decimal can be scanned back to reproduce that object? For an IEEE 64 bit double, the number of decimal digits required for that is 17.
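A small demonstration of both directions (output shown for a typical IEEE 754 implementation):

#include <cfloat>
#include <cstdio>

int main() {
    double d = 0.9;
    std::printf("%.*g\n", DBL_DIG, d);  // 15 digits: prints 0.9
    std::printf("%.17g\n", d);          // 17 digits round-trip the bits:
                                        // 0.90000000000000002
    std::printf("%.40f\n", d);          // more digits only render the error
}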
I am just wondering whether we can state rules about the form that floating-point approximations of real numbers take.
For instance, can a floating-point number end in a pattern like 1.xxx7777777 (a long run of 7s, possibly with one different digit at the very end)?
I believe that floating-point numbers can only take these forms:
1. An exact value.
2. A value like 1.23900008721..., where 1.239 is approximated with digits that appear as "noise", but with 0s between the exact value and this noise.
3. A value like 3.2599995, where 3.26 is approximated by a run of 9s and a final digit (like 5), i.e. by a floating-point number just below the real number.
4. A value like 2.000001, where 2.0 is approximated by a floating-point number just above the real number.
You are thinking in terms of decimal numbers, that is, numbers that can be represented as n*(10^e), with e either positive or negative. These numbers occur naturally in your thought processes for historical reasons having to do with having ten fingers.
Computer numbers are represented in binary, for technical reasons that have to do with an electrical signal being either present or absent.
When you are dealing with smallish integer numbers, it does not matter much that the computer representation does not match your own, because you are thinking of an accurate approximation of the mathematical number, and so is the computer, so by transitivity, you and the computer are thinking about the same thing.
With either very large or very small numbers, you will tend to think in terms of powers of ten, and the computer will definitely think in terms of powers of two. In these cases you can observe a difference between your intuition and what the computer does, and also, your classification is nonsense. Binary floating-point numbers are neither more nor less dense near numbers that happen to have a compact representation as decimal numbers. They are simply represented in binary, n*(2^p), with p either positive or negative. Many real numbers have only an approximate representation in decimal, and many real numbers have only an approximate representation in binary. These sets of numbers are not the same: binary numbers can always be represented in decimal, though not always compactly, but some decimal numbers cannot be represented exactly in binary at all, for instance 0.1.
If you want to understand the computer's floating-point numbers, you must stop thinking in decimal. 1.23900008721.... is not special, and neither is 1.239. 3.2599995 is not special, and neither is 3.26. You think they are special because they are either exactly or close to compact decimal numbers. But that does not make any difference in binary floating-point.
Here are a few pieces of information that may amuse you, since you tagged your question C++:
If you print a double-precision number with the format %.16e, you get a decimal number that converts back to the original double, but it does not always represent the exact value of that double. To see the exact value of the double in decimal, you need more digits: if you write 0.1 in a program, the compiler interprets it as 1.000000000000000055511151231257827021181583404541015625e-01 (a number that %.54e would print in full, and which is relatively compact in binary). Your question speaks of 3.2599995 and 2.000001 as if these were floating-point numbers, but they aren't. If you write these numbers in a program, the compiler will interpret them as 3.25999950000000016103740563266910612583160400390625 and 2.00000100000000013977796697872690856456756591796875. So the pattern you are looking for is simple: the decimal representation of a floating-point number is always 17 significant digits followed by 53-17=36 "noise" digits, as you call them. The noise digits are sometimes all zeroes, and the significant digits can end in a bunch of zeroes too.
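To see this in code (the digit counts assume IEEE 754 doubles and a correctly rounding C library):

#include <cstdio>

int main() {
    double d = 0.1;
    std::printf("%.16e\n", d);  // round-trips: 1.0000000000000001e-01
    std::printf("%.54e\n", d);  // exact value of the stored double:
                                // 1.000000000000000055511151231257827021181583404541015625e-01
}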
Floating point is represented in bits. What this means is:
the first bit after the binary point is worth 0.5, or 1/2
the second bit is worth 0.25, or 1/4
etc.
This means a floating-point value is always approximately close, but not exact, unless the value is a sum of powers of 2 that fits in the available bits (see the illustration below).
Rational numbers whose denominators are powers of two can be represented precisely by the machine; other rationals and all irrational numbers will always carry an error. In that sense your question is not so much about C++ as about computer architecture.
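A quick illustration (assuming IEEE 754 doubles):

#include <cstdio>

int main() {
    std::printf("%.20f\n", 0.75);  // 1/2 + 1/4 is exact: 0.75000000000000000000
    std::printf("%.20f\n", 0.1);   // repeating in binary: 0.10000000000000000555
}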
I am trying to get the decimal part of a double, and this is my code:
double decimalvalue = 23423.1234-23423.0;
The output is: 0.12340000000040163
But after the subtraction I expect decimalvalue to be 0.1234, yet I get 0.12340000000040163. Please help me understand this behavior, and whether there is any workaround for it.
I suggest you have a look at
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Wikipedia: IEEE 754
There are a finite number of values a floating-point number can hold, but an infinite number of real values in the range it represents.
Some values therefore cannot be represented exactly in any float/double-style data type.
The typical way to handle your specific problem is to avoid a direct equality comparison and instead do an epsilon test: see whether the expected and computed values are within some small number (small compared to the values involved), called epsilon, of each other.
Indirectly related is the concept of machine epsilon, worth having a look at for a complete understanding.
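A minimal sketch of such an epsilon test (the tolerance is illustrative and should be chosen for your value range):

#include <cmath>

// Compare with a tolerance relative to the magnitudes involved,
// rather than testing for exact equality.
bool nearlyEqual(double a, double b, double relTol = 1e-9) {
    return std::fabs(a - b) <= relTol * std::fmax(std::fabs(a), std::fabs(b));
}

// e.g. nearlyEqual(23423.1234 - 23423.0, 0.1234) is true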
This is a rounding error. In base ten you cannot perfectly represent 1/3 in a given number of digits (say 15). In base 2 there are a lot more values you cannot represent, and 0.1234 happens to be one of them. The precision depends on the scale, but it's about 15 decimal digits for a double. I suggest taking a look at http://en.wikipedia.org/wiki/IEEE_floating_point for more details on floating-point numbers.
If you are building a base-10 system (a calculator for human use, for instance) and you need exact results, you should use BCD.
I mean, for example, I have the following number encoded in IEEE-754 single precision:
"0100 0001 1011 1110 1100 1100 1100 1100" (approximately 23.85 in decimal)
The binary number above is stored in literal string.
The question is, how can I convert this string into IEEE-754 double precision representation(somewhat like the following one, but the value is not the same), WITHOUT losing precision?
"0100 0000 0011 0111 1101 1001 1001 1001 1001 1001 1001 1001 1001 1001 1001 1010"
which is the same number encoded in IEEE-754 double precision.
I have tried using the following algorithm to convert the first string back to a decimal number first, but it loses precision:
value = (-1)^sign * (1 + frac * 2^(-23)) * 2^(exp - 127)
I'm using the Qt C++ framework on Windows.
EDIT: I must apologize; maybe I didn't express the question clearly.
What I mean is that I don't know the true value 23.85; I only have the first string, and I want to convert it to the double-precision representation without precision loss.
Well: keep the sign bit, rewrite the exponent (subtract the old bias, add the new bias), and pad the mantissa with zeros on the right...
(As #Mark says, you have to treat some special cases separately, namely when the biased exponent is either zero or max.)
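A sketch of that bit manipulation for the normal-number case (the special cases mentioned above, i.e. zero, subnormals, infinities and NaNs, are not handled here):

#include <cstdint>

// Widen an IEEE 754 single bit pattern to a double bit pattern:
// same sign, rebias the exponent (127 -> 1023), pad the fraction 23 -> 52 bits.
uint64_t widenBits(uint32_t f) {
    uint64_t sign = (uint64_t)(f >> 31) << 63;
    uint64_t exp  = ((uint64_t)((f >> 23) & 0xFF) - 127 + 1023) << 52;
    uint64_t frac = (uint64_t)(f & 0x7FFFFFu) << 29;
    return sign | exp | frac;
}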
IEEE-754 (and floating point in general) cannot represent numbers with periodic binary expansions at full precision, not even when they are in fact rational numbers with relatively small integer numerators and denominators. Some languages provide a rational type that can (typically the same languages that also support unbounded-precision integers).
As a consequence those two numbers you posted are NOT the same number.
They in fact are:
10111.11011001100110011000000000000000000000000000000000000000 ...
10111.11011001100110011001100110011001100110011001101000000000 ...
where ... represent an infinite sequence of 0s.
Stephen Canon, in a comment above, gives the corresponding decimal values (I did not check them, but I have no reason to doubt he got them right).
Therefore the conversion you want cannot be done: the single-precision number does not carry the information you would need (you have NO WAY to know whether the number is in fact periodic or simply looks periodic because there happens to be a repetition).
First of all, +1 for identifying the input in binary.
Second, that number does not represent 23.85, but slightly less. If you flip its last binary digit from 0 to 1, the number will still not accurately represent 23.85, but slightly more. Those differences cannot be adequately captured in a float, but they can be approximately captured in a double.
Third, what you think you are losing is called accuracy, not precision. The precision of the number always grows by conversion from single precision to double precision, while the accuracy can never improve by a conversion (your inaccurate number remains inaccurate, but the additional precision makes it more obvious).
I recommend converting to a float or rounding or adding a very small value just before displaying (or logging) the number, because visual appearance is what you really lost by increasing the precision.
Resist the temptation to round right after the cast and to use the rounded value in subsequent computation; this is especially risky in loops. While it might appear to correct the issue in the debugger, the accumulated additional inaccuracies could distort the end result even more.
It might be easiest to convert the string into an actual float, convert that to a double, and convert it back to a string.
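A sketch of that route (illustrative helper code; it goes through an actual float, and the float-to-double conversion itself is exact):

#include <bitset>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <string>

int main() {
    std::string s = "01000001101111101100110011001100";  // the single-precision bits
    uint32_t fbits = (uint32_t)std::bitset<32>(s).to_ulong();
    float f;
    std::memcpy(&f, &fbits, sizeof f);  // reinterpret the bits as a float
    double d = f;                       // widening conversion, exact
    uint64_t dbits;
    std::memcpy(&dbits, &d, sizeof d);
    std::cout << std::bitset<64>(dbits) << '\n';  // the double bit pattern
}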
Binary floating points cannot, in general, represent decimal fraction values exactly. The conversion from a decimal fractional value to a binary floating point (see "Bellerophon" in "How to Read Floating-Point Numbers Accurately" by William D. Clinger) and from a binary floating point back to a decimal value (see "Dragon4" in "How to Print Floating-Point Numbers Accurately" by Guy L. Steele Jr. and Jon L. White) yield the expected results because one converts a decimal number to the closest representable binary floating point and the other controls the error so as to know which decimal value it came from (both algorithms are improved upon and made more practical in David Gay's dtoa.c). These algorithms are the basis for restoring std::numeric_limits<T>::digits10 decimal digits (except, potentially, trailing zeros) from a floating-point value stored in type T.
Unfortunately, expanding a float to a double wreaks havoc on the value: trying to format the new number will in many cases not yield the decimal original, because the float padded with zeros is different from the closest double Bellerophon would create and, thus, from what Dragon4 expects. There are basically two approaches which work reasonably well, however:
1. As someone suggested, convert the float to a string and then convert that string into a double. This isn't particularly efficient, but it can be proven to produce the correct results (assuming a correct implementation of the not-entirely-trivial algorithms, of course).
2. Assuming your value is in a reasonable range, you can multiply it by a power of 10 such that the least significant decimal digit is non-zero, convert this number to an integer, convert that integer to a double, and finally divide the resulting double by the original power of 10 (see the sketch below). I don't have a proof that this yields the correct number, but for the range of values I'm interested in, and which I want to store accurately in a float, it works.
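A sketch of that second approach (assuming the scaled value fits in a 64-bit integer; the digit count is supplied by the caller and is an assumption about the data):

#include <cmath>
#include <cstdint>

// Widen a float to the double closest to its decimal reading, given how
// many decimal fraction digits the value is known to carry.
double widenViaDecimal(float f, int fractionDigits) {
    double scale = std::pow(10.0, fractionDigits);
    int64_t n = std::llround((double)f * scale);  // exact decimal integer
    return (double)n / scale;                     // e.g. (23.85f, 2) -> 23.85
}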
One reasonable approach to avoid this entire issue is to use decimal floating-point values, as described for C++ in the Decimal TR, in the first place. Unfortunately, these are not yet part of the standard, but I have submitted a proposal to the C++ standardization committee to get this changed.