I used to think that a float could hold at most 6 digits and a double 15, after the decimal point. But if I print the limits here:
#include <iostream>
#include <limits>

typedef std::numeric_limits<float> fl;
typedef std::numeric_limits<double> dbl;

int main()
{
    std::cout << fl::max_digits10 << std::endl;
    std::cout << dbl::max_digits10 << std::endl;
}
It prints 9 for float and 17 for double. Why?
You're confusing digits10 and max_digits10.
If digits10 is 6, then any number with six decimal digits can be converted to the floating point type, and back, and when rounded back to six decimal digits, produces the original value.
If max_digits10 is 9, then there exist at least two floating point numbers that when converted to decimal produce the same initial 8 decimal digits.
digits10 is the number you're looking for, based on your description. It's about converting from decimal to binary floating point back to decimal.
max_digits10 is a number about converting from binary floating point to decimal back to binary floating point.
From cppreference:
Unlike most mathematical operations, the conversion of a floating-point value to text and back is exact as long as at least max_digits10 were used (9 for float, 17 for double): it is guaranteed to produce the same floating-point value, even though the intermediate text representation is not exact. It may take over a hundred decimal digits to represent the precise value of a float in decimal notation.
For example (I am using http://www.exploringbinary.com/floating-point-converter/ to facilitate the conversion), with double as the precision format:
1.1e308 => 109999999999999997216016380169010472601796114571365898835589230322558260940308155816455878138416026219051443651421887588487855623732463609216261733330773329156055234383563489264255892767376061912596780024055526930962873899746391708729279405123637426157351830292874541601579169431016577315555383826285225574400
Using 16 significant digits:
1.099999999999999e308 => 109999999999999897424000903433019889783160462729437595463026208549681185812946033955861284690212736971153169019636833121365513414107701410594362313651090292197465320141992473263972245213092236035710707805906167798295036672550192042188756649080117981714588407890666666245533825643214495197630622309084729180160
Using 17 significant digits:
1.0999999999999999e308 => 109999999999999997216016380169010472601796114571365898835589230322558260940308155816455878138416026219051443651421887588487855623732463609216261733330773329156055234383563489264255892767376061912596780024055526930962873899746391708729279405123637426157351830292874541601579169431016577315555383826285225574400
which is the same as the original
More than 17 significant digits:
1.09999999999999995555e308 => 109999999999999997216016380169010472601796114571365898835589230322558260940308155816455878138416026219051443651421887588487855623732463609216261733330773329156055234383563489264255892767376061912596780024055526930962873899746391708729279405123637426157351830292874541601579169431016577315555383826285225574400
which continues to be the same as the original.
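The same round-trip guarantee can be checked directly in code. This is a minimal sketch (it assumes an IEEE 754 double and a correctly rounding standard library): write the value with max_digits10 significant digits, parse the text back, and compare.
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <limits>

int main()
{
    const double original = 1.1e308;
    char buf[64];
    // write with max_digits10 (17) significant digits
    std::snprintf(buf, sizeof buf, "%.*g",
                  std::numeric_limits<double>::max_digits10, original);
    // parse the text back
    const double roundtrip = std::strtod(buf, nullptr);
    assert(roundtrip == original);   // guaranteed with at least 17 digits
    std::printf("%s round-trips to the same double\n", buf);
}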
There isn't an exact correspondence between decimal digits and binary digits.
IEEE 754 single precision uses 23 bits plus 1 for the implicit leading 1. Double precision uses 52+1 bits.
To get the equivalent decimal precision, use
log10(2^binary_digits) = binary_digits*log10(2)
For single precision this is
24*log10(2) = 7.22
and for double precision
53*log10(2) = 15.95
See here and also the Wikipedia page which I don't find to be particularly concise.
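If you want to reproduce these figures in code, std::numeric_limits<T>::digits reports the binary significand width directly (a small sketch, nothing more):
#include <cmath>
#include <iostream>
#include <limits>

int main()
{
    // digits is the binary significand width: 24 for IEEE 754 float, 53 for double
    std::cout << std::numeric_limits<float>::digits  * std::log10(2.0) << '\n';  // ~7.22
    std::cout << std::numeric_limits<double>::digits * std::log10(2.0) << '\n';  // ~15.95
}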
I am confused about what max_digits10 represents. According to its documentation, it is 0 for all integral types. The formula for max_digits10 for floating-point types looks similar to the formula for digits10 for integers.
To put it simply:
digits10 is the number of decimal digits guaranteed to survive text → float → text round-trip.
max_digits10 is the number of decimal digits needed to guarantee correct float → text → float round-trip.
There will be exceptions to both but these values give the minimum guarantee. Read the original proposal on max_digits10 for a clear example, Prof. W. Kahan's words and further details. Most C++ implementations follow IEEE 754 for their floating-point data types. For an IEEE 754 float, digits10 is 6 and max_digits10 is 9; for a double it is 15 and 17. Note that both these numbers should not be confused with the actual decimal precision of floating-point numbers.
Example digits10
#include <cstdlib>
#include <iomanip>
#include <iostream>

int main()
{
    char const *s1 = "8.589973e9";
    char const *s2 = "0.100000001490116119384765625";
    float const f1 = std::strtof(s1, nullptr);
    float const f2 = std::strtof(s2, nullptr);
    std::cout << "'" << s1 << "'" << '\t' << std::scientific << f1 << '\n';
    std::cout << "'" << s2 << "'" << '\t' << std::fixed << std::setprecision(27) << f2 << '\n';
}
Prints
'8.589973e9' 8.589974e+009
'0.100000001490116119384765625' 0.100000001490116119384765625
All digits up to the 6th significant digit were preserved, while the 7th didn't survive for the first number. All 27 digits of the second number survived, but that is an exception: most numbers start to differ beyond the 7th digit, while all numbers stay the same within the first 6 digits.
In summary, digits10 gives the number of significant digits you can count on in a given float being the same as in the original real number, in its decimal form, from which it was created, i.e. the digits that survive the conversion into a float.
Example max_digits10
#include <cmath>
#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

void f_s_f(float &f, int p) {
    std::ostringstream oss;
    oss << std::fixed << std::setprecision(p) << f;
    f = std::strtof(oss.str().c_str(), nullptr);
}

int main()
{
    float f3 = 3.145900f;
    float f4 = std::nextafter(f3, 3.2f);
    std::cout << std::hexfloat << std::showbase << f3 << '\t' << f4 << '\n';
    f_s_f(f3, std::numeric_limits<float>::max_digits10);
    f_s_f(f4, std::numeric_limits<float>::max_digits10);
    std::cout << f3 << '\t' << f4 << '\n';
    f_s_f(f3, 6);
    f_s_f(f4, 6);
    std::cout << f3 << '\t' << f4 << '\n';
}
Prints
0x1.92acdap+1 0x1.92acdcp+1
0x1.92acdap+1 0x1.92acdcp+1
0x1.92acdap+1 0x1.92acdap+1
Here, two different floats, when printed with max_digits10 digits of precision, produce different decimal strings, and those strings, when read back, give back the original floats they came from. When printed with less precision they produce the same string due to rounding, and hence read back into the same float, even though they came from different values.
In summary, at least max_digits10 digits are required to disambiguate two floats in their decimal form, so that when converted back to binary floats we get the original bits again, and not those of the value slightly before or after it, due to rounding errors.
In my opinion, it is explained sufficiently at the linked site (and the site for digits10):
digits10 is the (maximum) number of decimal digits that can be represented by a type in any case, independent of the actual value.
Take a usual 4-byte unsigned integer as an example: as everybody should know, it has exactly 32 bits, that is, 32 digits of a binary number.
But in terms of decimal digits?
Probably 9.
Because it can store 100000000 as well as 999999999.
But if we take numbers with 10 digits: 4000000000 can be stored, but 5000000000 cannot.
So, if we need a guarantee for the minimum decimal digit capacity, it is 9.
And that is the result of digits10.
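That reasoning can be checked with a quick sketch (assuming a 32-bit unsigned int):
#include <iostream>
#include <limits>

int main()
{
    // 9: every 9-digit decimal number fits, but not every 10-digit one (5000000000 > 4294967295)
    std::cout << std::numeric_limits<unsigned int>::digits10 << '\n';   // 9
    std::cout << std::numeric_limits<unsigned int>::max() << '\n';      // 4294967295
}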
max_digits10 is only interesting for float/double... and gives the decimal digit count
which we need to output/save/process... to capture the whole precision
the floating-point type can offer.
Theoretical example: a variable with content 123.112233445566.
If you show 123.11223344 to the user, it is not as precise as it can be.
If you show 123.1122334455660000000 to the user, it makes no sense, because
you could omit the trailing zeros (your variable can't hold that much anyway).
Therefore, max_digits10 says how many digits of precision you have available in a type.
Let's build some context
After going through lots of answers and reading material, the following is the simplest, most layman-friendly answer I could arrive at.
Floating-point numbers in computers (single precision, i.e. float in C/C++ etc., or double precision, i.e. double in C/C++ etc.) have to be represented using a fixed number of bits.
float is a 32-bit IEEE 754 single precision Floating Point Number – 1
bit for the sign, 8 bits for the exponent, and 23 bits for the value.
float has 7 decimal digits of precision.
And for the double type:
The C++ double should have a floating-point precision of up to 15
digits as it contains a precision that is twice the precision of the
float data type. When you declare a variable as double, you should
initialize it with a decimal value
What does all of the above mean to me?
It's possible that the floating-point number you have cannot fit into the number of bits available for that type. For example, the float value 0.1 cannot fit into the available number of bits in a computer. You may ask why. Try converting this value to binary and you will see that the binary representation never ends, yet we only have a finite number of bits, so we have to stop at some point even though the binary conversion logic says keep going.
If the given floating-point number can be represented with the number of bits available, then we are good. If it is not possible to represent it in the available number of bits, then the bits store a value which is as close as possible to the actual value. This is also known as "rounding the float value" or "rounding error". How this value is calculated depends on the specific implementation, but it is safe to assume that, for a given implementation, the closest representable value is chosen.
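For instance, you can watch this rounding happen by printing 0.1f with far more digits than it can hold (a small sketch; the exact digits shown in the comment assume IEEE 754 single precision):
#include <iomanip>
#include <iostream>

int main()
{
    float f = 0.1f;   // 0.1 has no finite binary expansion, so the nearest float is stored
    std::cout << std::fixed << std::setprecision(30) << f << '\n';
    // prints 0.100000001490116119384765625000 on an IEEE 754 float
}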
Now let's come to std::numeric_limits<T>::digits10
The value of std::numeric_limits<T>::digits10 is the number of
base-10 digits that can be represented by the type T without change,
that is, any number with this many significant decimal digits can be
converted to a value of type T and back to decimal form, without
change due to rounding or overflow.
What std::numeric_limits<T>::digits10 is saying is that whenever you fall into a scenario where rounding must happen, you can be assured that after the given floating-point value is rounded to its closest representable value by the computer, that closest representable value's first std::numeric_limits<T>::digits10 decimal digits will be exactly the same as those of your input. For single-precision floating-point values this number is usually 6, and for double precision it is usually 15.
Now you may ask why I used the word "guaranteed". Well, I used it because it is possible that more digits survive the conversion to float, but if you ask me to guarantee how many will survive in all cases, then that number is std::numeric_limits<T>::digits10. Not convinced yet?
OK, consider the example of unsigned char, which has 8 bits of storage. When you convert a decimal value to unsigned char, how many decimal digits are guaranteed to survive? I will say 2. You might object that even 145 survives, so it should be 3, but I will say no: if you take 256, it won't survive. Of course 255 survives, but since you are asking for a guarantee, I can only guarantee that 2 digits will survive, because the answer 3 is not true once values above 255 are involved.
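The same claim, expressed as a quick sketch (this assumes an 8-bit unsigned char):
#include <iostream>
#include <limits>

int main()
{
    // 2: every 2-digit decimal value fits into 0..255, but 256 (3 digits) does not
    std::cout << std::numeric_limits<unsigned char>::digits10 << '\n';    // 2
    std::cout << int(std::numeric_limits<unsigned char>::max()) << '\n';  // 255
}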
Now use the same analogy for floating-point types when someone asks for a guarantee. That guarantee is given by std::numeric_limits<T>::digits10.
Now what the heck is std::numeric_limits<T>::max_digits10?
Here comes another level of complexity, but I will try to explain it as simply as I can.
As I mentioned previously, due to the limited number of bits available to represent a floating-point type on a computer, it is not possible to represent every value exactly. Some can be represented exactly, but not all. Now let's consider a hypothetical situation: someone asks you to write down, in decimal, all the possible float values which the computer can represent (ooohhh... I know what you are thinking). Luckily you don't have to write all of them :)
Just imagine that you started, writing each representable float as a decimal string with just enough digits to tell it apart from its neighbours. The question becomes: how many significant decimal digits do you need so that no two distinct floats end up with the same string? That count is what std::numeric_limits<T>::max_digits10 tells us: the maximum number of significant decimal digits you need to uniquely represent all possible representable values of type T.
Please note that a decimal string with that many significant digits survives the text to float to text conversion, but this is not the guaranteed number of digits from the previous section (remember the unsigned char example, where the 3 digits of 255 don't mean that all 3-digit values can be stored in an unsigned char).
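To make that concrete, here is a sketch (assuming IEEE 754 double): take two adjacent doubles and print them with digits10 and then max_digits10 significant digits.
#include <cmath>
#include <iostream>
#include <limits>

int main()
{
    const double a = 0.1;
    const double b = std::nextafter(a, 1.0);   // the very next representable double
    std::cout.precision(std::numeric_limits<double>::digits10);       // 15
    std::cout << a << ' ' << b << '\n';   // same text for both: 0.1 0.1
    std::cout.precision(std::numeric_limits<double>::max_digits10);   // 17
    std::cout << a << ' ' << b << '\n';   // 0.10000000000000001 0.10000000000000002
}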
I hope this attempt of mine gives people some understanding. I know I may have oversimplified things, but I have spent sleepless nights thinking and reading, and this is the explanation that was able to give me some peace of mind.
Cheers !!!
The IEEE 754 double precision floating point format has a binary precision of 53 bits, which translates into log10(2^53) ~ 16 significant decimal digits.
If the double precision format is used to store a floating-point number in a 64-bit word in memory, with 52 bits for the significand and 1 hidden bit, but a larger precision is used to output the number to the screen, what data is actually read from memory and written to the output?
How can it even be read when the total length of the word is 64 bits? Does the read-from-memory operation on the machine simply read more bits and interpret them as an addition to the significand of the number?
For example, take the number 0.1. It does not have an exact binary floating point representation regardless of the precision used, because it has an indefinitely repeating binary floating point pattern in the significand.
If 0.1 is stored with double precision and printed to the screen with a precision > 16, like this in C++:
#include <iostream>
#include <iomanip>
using namespace std;
int main()
{
double x = 0.1;
cout << setprecision(50) << "x= " << x << endl;
}
The output (on my machine at the point of execution), is:
x = 0.1000000000000000055511151231257827021181583404541
If the correct rounding is used with 2 guard bits and 1 sticky bit, can I trust the decimal values given by the first three non-zero digits in the error 5.551115123125783e-17?
Every binary fraction is exactly equal to some decimal fraction. If, as is usually the case, double is a binary floating point type, each double number has an exactly equal decimal representation.
For what follows, I am assuming your system uses IEEE 754 64-bit binary floating point to represent double. That is not required by the standard, but is very common. The closest number to 0.1 in that format has exact value 0.1000000000000000055511151231257827021181583404541015625
Although this number has a lot of digits, it is exactly equal to 3602879701896397/2^55. Multiplying both numerator and denominator by 5^55 converts it to a decimal fraction, while increasing the number of digits in the numerator.
One common approach, consistent with the result in the question, is to use round-to-nearest to the number of digits required by the format. That will indeed give useful information about the rounding error on conversion of a string to double.
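If you want to verify that fraction in code, here is a small sketch (it assumes IEEE 754 double; both operands are exactly representable and the divisor is a power of two, so the division below is exact):
#include <cassert>
#include <iomanip>
#include <iostream>

int main()
{
    // 3602879701896397 / 2^55: dividing by a power of two only changes the
    // exponent, so the quotient is exactly the double closest to 0.1
    const double exact = 3602879701896397.0 / 36028797018963968.0;
    assert(exact == 0.1);
    std::cout << std::setprecision(60) << exact << '\n';
}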
From what I'm learning, once I convert a floating-point value to a decimal one, the "significant digits" I need are a fixed number (17 for double, for example): 17 in total, counting digits before and after the decimal separator.
So for example this code:
#include <iostream>
#include <limits>

typedef std::numeric_limits<double> dbl;

int main()
{
    std::cout.precision(dbl::max_digits10);
    //std::cout << std::fixed;
    double value1 = 1.2345678912345678912345;
    double value2 = 123.45678912345678912345;
    double value3 = 123456789123.45678912345;
    std::cout << value1 << std::endl;
    std::cout << value2 << std::endl;
    std::cout << value3 << std::endl;
}
will correctly "show me" 17 significant digits:
1.2345678912345679
123.45678912345679
123456789123.45679
But if I increase the precision of cout (e.g. std::cout.precision(100)), I can see there are more digits beyond those 17:
1.2345678912345678934769921397673897445201873779296875
123.456789123456786683163954876363277435302734375
123456789123.456787109375
Why should I ignore them? They are stored within the variable/double as well, so they will affect the whole "math" later (division, multiplication, addition, and so on).
What does "significant digits" mean? There is other...
Can you help me to understand what “significant digits” means in floating point math?
With FP numbers, as with mathematical real numbers, the significant digits are the digits of a value starting from the first non-zero digit and running, depending on context, to 1) the decimal point, 2) the last non-zero digit, or 3) the last printed digit.
123. // 3 significant decimal digits
123.125 // 6 significant decimal digits
0.0078125 // 5 significant decimal digits
0x0.00123p45 // 3 significant hexadecimal digits
123000.0 // 3, 6, or 7 significant decimal digits depending on context
When concerned about decimal significant digits and FP types like double, the issue is often "How many decimal significant digits are needed or of concern?"
Nearly all C FP implementations use a binary encoding, such that every finite FP value is an exact sum of powers of 2. Each finite FP value is exact. The common encoding gives most double values 53 binary digits in the significand, so 53 significant binary digits. How this appears as a decimal is often the source of confusion.
// Example 0.1 is not an exact sum of powers of 2 so a nearby value is used.
double x = 0.1;
// x takes on the exact value of
// 0.1000000000000000055511151231257827021181583404541015625
// aka 0x1.999999999999ap-4
// aka base2: 0.000110011001100110011001100110011001100110011001100110011010
// The preceding and subsequent doubles
// 0.09999999999999999167332731531132594682276248931884765625
// 0.10000000000000001942890293094023945741355419158935546875
// 123456789012345678901234567890123456789012345678901234567890
Looking at the above, one could say x has over 50 decimal significant digits. Yet the value matches the intended 0.1 to 16 decimal significant digits. Or, since the preceding and subsequent possible double values differ in the 17th place, one could say x has 17 decimal significant digits.
What does "significant digits" mean?
Various meanings of significant digits exist, but for C, 2 common ones are:
The number of decimal significant digits for which any textual value converts to double as expected, for every double. This is typically 15. C specifies this as DBL_DIG, and it must be at least 10.
The number of decimal significant digits to which a double needs to be printed as text to distinguish it from another double. This is typically 17. C specifies this as DBL_DECIMAL_DIG, and it must be at least 10.
Why should I ignore them?
It depends on your coding goals. Rarely are all digits of the exact value needed. (DBL_TRUE_MIN might have 752 of them.) For most applications, DBL_DECIMAL_DIG is enough. In select apps, DBL_DIG will do. So usually, ignoring digits past 17 does not cause problems.
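For reference, both constants are available from <cfloat> (DBL_DECIMAL_DIG requires C11/C++17); a trivial sketch:
#include <cfloat>
#include <iostream>

int main()
{
    std::cout << DBL_DIG << '\n';           // typically 15
    std::cout << DBL_DECIMAL_DIG << '\n';   // typically 17
}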
Keep in mind that floating-point values are not real numbers. There are gaps between the values, and all those extra digits, while meaningful for real numbers, don’t reflect any difference in the floating-point value. When you convert a floating-point value to text, having std::numeric_limits<...>::max_digits10 digits ensures that you can convert the text string back to floating-point and get the original value. The extra digits don’t affect the result.
The extra digits that you see when you ask for more digits are the result of the conversion algorithm trying to do what you asked. The algorithm typically just keeps extracting digits until it reaches the desired precision; it could be written to start outputting zeros after it’s written max_digits10 digits, but that’s an additional complication that nobody bothers with. It wouldn’t really be helpful.
Just to add to Pete Becker's answer: I think you're confusing the problem of finding the exact decimal representation of a binary mantissa with the problem of finding some decimal representation that uniquely identifies that binary mantissa (given some fixed rounding scheme).
Now, regarding the first problem, you always need a finite number of decimal digits to exactly represent a binary mantissa (because 2 divides 10).
For example, you need 18 decimal digits to exactly represent the binary 1.00000000000000001 (that is, 1 + 2^-17), which is 1.00000762939453125 in decimal.
but you need just 17 digits to represent it uniquely as 1.0000076293945312, because no other number with exact value 1.0000076293945312xyz... (where 0 <= x < 5) exists as a double (more precisely, the next and prior exactly representable values are 1.0000076293945314720446049250313080847263336181640625 and 1.0000076293945310279553950749686919152736663818359375).
Of course, this does not mean that given some decimal number you can ignore all digits past the 17th; it just means that if you apply the same rounding scheme used to produce the decimal at the 17th position and assign it back to a double you'll get the same original double.
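Here is a sketch of that round-trip (assuming IEEE 754 double): build 1 + 2^-17, print it with 17 significant digits, and read the text back.
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <iomanip>
#include <sstream>

int main()
{
    const double d = 1.0 + std::ldexp(1.0, -17);   // exactly 1.00000762939453125
    std::ostringstream oss;
    oss << std::setprecision(17) << d;             // 17 significant digits
    const double back = std::strtod(oss.str().c_str(), nullptr);
    assert(back == d);   // the 17-digit text maps back to the very same double
}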
I'm seeing some error when simply assigning a floating-point value which contains only 4 significant figures. I wrote a short program to debug and I don't understand what the problem is. After verifying the limits of a float on my platform, it seems like there shouldn't be any error. What's causing this?
#include <stdlib.h>
#include <stdio.h>
#include <limits>
#include <iostream>
int main(){
printf("float size: %lu\n", sizeof(float));
printf("float max: %e\n", std::numeric_limits<float>::max());
printf("float significant figures: %i\n", std::numeric_limits<float>::digits10);
float a = 760.5e6;
printf("%.9f\n", a);
std::cout.precision(9);
std::cout << a << std::endl;
double b = 760.5e6;
printf("%.9f\n", b);
std::cout << b << std::endl;
return 0;
}
The output:
float size: 4
float max: 3.402823e+38
float significant figures: 6
760499968.000000000
760499968
760500000.000000000
760500000
A float has 24 bits of precision, which is roughly equivalent to 7 decimal digits. A double has 53 bits of precision, which is roughly equivalent to 16 decimal digits.
As mentioned in the comments, 760.5e6 is not exactly representable by float; however, it is exactly representable by double. This is why the printed results for double are exact, and those from float are not.
It is legal to request printing of more decimal digits than are representable by your floating point number, as you did. The results you report are not an error -- they are simply the result of the decimal printing algorithm doing the best it can.
The stored number in your float is 760499968. This is expected behavior for an IEEE 754 binary32 floating-point number, which is what float usually is.
IEEE 754 floating point numbers are stored in three parts: a sign bit, an exponent, and a mantissa. Since all these values are stored as bits the resulting number is sort of the binary equivalent of scientific notation. The mantissa bits are one less than the number of binary digits allowed as significant figures in the binary scientific notation.
Just like with decimal scientific numbers, if the exponent exceeds the significant figures, you're going to lose integer precision.
The analogy only extends so far: the mantissa is a modification of the coefficient found in the decimal scientific notation you might be familiar with, and there are certain bit patterns that have special meaning in the standard.
The ultimate result of this storage mechanism is that the integer 760500000 cannot be exactly represented by IEEE 754 binary32 with its 23-bit mantissa: the format loses integer-level precision beyond 2^(mantissa_bits + 1) = 16777216, so 16777217 is the first integer that a 23-bit-mantissa float cannot represent. The closest integers to 760500000 that can be represented by a float are 760499968 and 760500032, the former of which is chosen due to the round-ties-to-even rule, and printing the integer at a greater precision than the floating-point number can represent naturally results in apparent inaccuracies.
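The classic demonstration of that cutoff, as a sketch (assuming IEEE 754 single and double precision):
#include <iostream>

int main()
{
    float f = 16777216.0f;                   // 2^24, where float runs out of integer resolution
    std::cout << (f + 1.0f == f) << '\n';    // prints 1: 16777217 is not representable, the sum rounds back
    double d = 16777216.0;
    std::cout << (d + 1.0 == d) << '\n';     // prints 0: double still has plenty of precision here
}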
A double, which is 64 bits in your case, naturally has more precision than a float, which is 32 bits in your case. Therefore, this is an expected result.
The specification does not require that a type represent every number smaller than std::numeric_limits<T>::max() with full precision.
The number you display is off only in the 8th digit and after. That is well within the 6 digits of accuracy you are guaranteed for a float. If you only printed 6 digits, the output would get rounded and you'd see the value you expect.
printf("%0.6g\n", a);
See http://ideone.com/ZiHYuT
According to The C++ Programming Language - 4th, section 6.2.5:
There are three floating-point types: float (single-precision), double (double-precision), and long double (extended-precision)
Refer to: http://en.wikipedia.org/wiki/Single-precision_floating-point_format
The true significand includes 23 fraction bits to the right of the binary point and an implicit leading bit (to the left of the binary point) with value 1 unless the exponent is stored with all zeros. Thus only 23 fraction bits of the significand appear in the memory format but the total precision is 24 bits (equivalent to log10(2^24) ≈ 7.225 decimal digits).
→ So the maximum number of decimal digits of a floating-point number in the binary32 interchange format (a computer number format that occupies 4 bytes (32 bits) in computer memory) is 7.
When I test on different compilers (like GCC, VC compiler)
→ It always outputs 6 as the value.
Take a look into float.h of each compiler
→ I found that 6 is fixed.
Question:
Do you know why there is a difference here (between the theoretical value, 7, and the actual value, 6)?
It sounds like 7 would be more reasonable, because when I test using the code below, a 7-digit value is still preserved, while an 8-digit one is not.
Why don't the compilers check the interchange format when deciding the number of digits representable in floating point (instead of using a fixed value)?
Code:
#include <iostream>
#include <limits>
using namespace std;
int main( )
{
cout << numeric_limits<float> :: digits10 << endl;
float f = -9999999;
cout.precision ( 10 );
cout << f << endl;
}
You're not reading the documentation.
std::numeric_limits<float>::digits10 is 6:
The value of std::numeric_limits<T>::digits10 is the number of base-10 digits that can be represented by the type T without change, that is, any number with this many decimal digits can be converted to a value of type T and back to decimal form, without change due to rounding or overflow. For base-radix types, it is the value of digits (digits-1 for floating-point types) multiplied by log10(radix) and rounded down.
The standard 32-bit IEEE 754 floating-point type has a 24 bit fractional part (23 bits written, one implied), which may suggest that it can represent 7 digit decimals (24 * std::log10(2) is 7.22), but relative rounding errors are non-uniform and some floating-point values with 7 decimal digits do not survive conversion to 32-bit float and back: the smallest positive example is 8.589973e9, which becomes 8.589974e9 after the roundtrip. These rounding errors cannot exceed one bit in the representation, and digits10 is calculated as (24-1)*std::log10(2), which is 6.92. Rounding down results in the value 6.
std::numeric_limits<float>::max_digits10 is 9:
The value of std::numeric_limits<T>::max_digits10 is the number of base-10 digits that are necessary to uniquely represent all distinct values of the type T, such as necessary for serialization/deserialization to text. This constant is meaningful for all floating-point types.
Unlike most mathematical operations, the conversion of a floating-point value to text and back is exact as long as at least max_digits10 were used (9 for float, 17 for double): it is guaranteed to produce the same floating-point value, even though the intermediate text representation is not exact. It may take over a hundred decimal digits to represent the precise value of a float in decimal notation.
std::numeric_limits<float>::digits10 equates to FLT_DIG, which is defined by the C standard:
number of decimal digits, q, such that any floating-point number with q decimal digits can be rounded into a floating-point number with p radix b digits and back again without change to the q decimal digits,
p * log10(b)                if b is a power of 10
⌊(p − 1) * log10(b)⌋        otherwise
with the minimum values required by the standard being:
FLT_DIG  6
DBL_DIG  10
LDBL_DIG 10
The reason for the value 6 (and not 7) is rounding errors: not all floating-point values with 7 decimal digits can be losslessly round-tripped through a 32-bit float. Rounding errors are limited to 1 bit though, so the FLT_DIG value was calculated based on 23 bits (instead of the full 24):
23 * log10(2) = 6.92
which is rounded down to 6.
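A quick sketch that applies that formula and compares it with what numeric_limits reports (it assumes binary floating point, i.e. radix 2):
#include <cmath>
#include <iostream>
#include <limits>

int main()
{
    // floor((p - 1) * log10(2)) with p = 24 for float, 53 for double
    std::cout << std::floor((std::numeric_limits<float>::digits  - 1) * std::log10(2.0)) << ' '
              << std::numeric_limits<float>::digits10  << '\n';   // 6 6
    std::cout << std::floor((std::numeric_limits<double>::digits - 1) * std::log10(2.0)) << ' '
              << std::numeric_limits<double>::digits10 << '\n';   // 15 15
}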