static_casting from a floating point to an integer simply strips the fractional part of the number. For example, static_cast<int>(13.9999999) yields 13.
Not all integers are representable as floating point numbers. For example, internally the closest float to 13,000,000 may be 12,999,999.999999.
In this hypothetical case, I'd expect to get an unexpected result from:
const auto foo = 12'999'999.5F;
const auto bar = static_cast<long long>(ceil(foo));
My assumption is that such a breakdown does occur at some point, if not necessarily at 13,000,000. I'd just like to know the range over which I can trust static_cast<long long>(ceil(foo)).
For example, internally the closest float to 13,000,000 may be 12,999,999.999999.
That is not possible in any normal floating-point format. The floating-point representation of numbers is equivalent to M•b^e, where b is a fixed base (e.g., 2 for binary floating-point) and M and e are integers with some restrictions on their values. In order for a value like 13,000,000 - x to be represented, where x is some positive value less than 1, e must be negative (because M•b^e for a non-negative e is an integer). If so, then M•b^0 is an integer larger than M•b^e, so it is larger than 13,000,000, and so 13,000,000 can be represented as M'•b^0, where M' is a positive integer less than M and hence fits in the range of allowed values for M (in any normal floating-point format). (Perhaps some bizarre floating-point format might impose a strange range on M or e that prevents this, but no normal format does.)
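As a quick sanity check of that argument, here is a minimal sketch (assuming IEEE-754 binary32 float) confirming that 13,000,000 itself is stored exactly, so no "closest float of 12,999,999.999999" can exist:
#include <cassert>
int main()
{
    // 13,000,000 fits comfortably within a float's 24-bit significand, so it is stored exactly.
    const float f = 13'000'000.0F;
    assert(f == 13'000'000.0);
    assert(static_cast<long long>(f) == 13'000'000LL);
}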
Regarding your code:
auto test = 0LL;
const auto floater = 0.5F;
for(auto i = 0LL; i == test; i = std::ceil(i + floater)) ++test;
cout << test << endl;
When i was 8,388,608, the mathematical result of 8,388,608 + .5 is 8,388,608.5. This is not representable in the float format on your system, so it was rounded to 8,388,608. The ceil of this is 8,388,608. At this point, test was 8,388,609, so the loop stopped. So this code does not demonstrate that 8,388,608.5 is representable and 8,388,609 is not.
Behavior seems to return to normal if I do: ceil(8'388'609.5F) which will correctly return 8,388,610.
8,388,609.5 is not representable in the float format on your system, so it was rounded by the rule “round to nearest, ties to even.” The two nearest representable values are 8,388,609 and 8,388,610. Since 8,388,609.5 is equally distant from both, the tie was broken in favor of the one with the even significand, 8,388,610. That value was passed to ceil, which of course returned 8,388,610.
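A minimal sketch of both roundings, assuming IEEE-754 binary32 and the default round-to-nearest, ties-to-even mode:
#include <cassert>
#include <cmath>
int main()
{
    // Both literals fall exactly halfway between two representable floats,
    // so "ties to even" picks the neighbour with the even significand.
    const float a = 8'388'608.5F;   // halfway between 8,388,608 and 8,388,609 -> 8,388,608
    const float b = 8'388'609.5F;   // halfway between 8,388,609 and 8,388,610 -> 8,388,610
    assert(a == 8'388'608.0F);
    assert(b == 8'388'610.0F);
    assert(std::ceil(b) == 8'388'610.0F);   // ceil receives a whole number and leaves it alone
}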
On Visual Studio 2015 I got 8,388,609, which is a horrifyingly small safe range.
In the IEEE-754 basic 32-bit binary format, all integers from -16,777,216 to +16,777,216 are representable, because the format has a 24-bit significand.
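A small check of that boundary (again assuming IEEE-754 binary32):
#include <cassert>
int main()
{
    // Every integer up to 2^24 = 16,777,216 is exactly representable in a float,
    // but 16,777,217 is not, so adding 1 rounds straight back.
    const float limit = 16'777'216.0F;
    assert(limit - 1.0F == 16'777'215.0F);
    assert(limit + 1.0F == limit);
}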
Floating point numbers are represented by 3 integers forming the value c•b^q, where:
c is the mantissa (so for the number: 12,999,999.999999 c would be 12,999,999,999,999)
q is the exponent (so for the number: 12,999,999.999999 q would be -6)
b is the base (IEEE-754 requires b to be either 10 or 2; in the representation above b is 10)
From this it's easy to see that a floating point with the capability of representing 12,999,999.999999 also has the capability of representing 13,000,000.000000 using a c of 1,300,000,000,000 and a q of -5.
This example is a bit contrived in that the chosen b is 10, where in almost all implementations the chosen base is 2. But it's worth pointing out that even with a b of 2 the q functions as a shift left or right of the mantissa.
Next let's talk about a range here. Obviously a 32-bit floating point cannot represent all the integers representable by a 32-bit integer, as the floating point must also represent so many much larger or smaller numbers. Since the exponent simply shifts the mantissa, a floating point number can always exactly represent every integer that can be represented by its mantissa. Given the traditional IEEE-754 binary base floating point numbers:
A 32-bit (float) has a 24-bit mantissa so it can represent all integers in the range [-16,777,215, 16,777,215]
A 64-bit (double) has a 53-bit mantissa so it can represent all integers in the range [-9,007,199,254,740,991, 9,007,199,254,740,991]
A 128-bit (long double depending upon implementation) has a 113-bit mantissa so it can represent all integers in the range [-10,384,593,717,069,655,257,060,992,658,440,191, 10,384,593,717,069,655,257,060,992,658,440,191]
C++ provides numeric_limits<T>::digits as a way of finding this number for a given floating point type. (Though admittedly even a long long is too small to hold a 113-bit mantissa.) For example a float's maximum mantissa could be found by:
(1LL << numeric_limits<float>::digits) - 1LL
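For example, a small sketch printing those mantissa-limited values for float and double via numeric_limits (long double is skipped since, as noted, a 113-bit mantissa does not fit in a long long):
#include <iostream>
#include <limits>
int main()
{
    // Largest n such that, per the reasoning above, every integer in [-n, n]
    // fits in the type's mantissa.
    const auto floatMax = (1LL << std::numeric_limits<float>::digits) - 1LL;    // 16,777,215
    const auto doubleMax = (1LL << std::numeric_limits<double>::digits) - 1LL;  // 9,007,199,254,740,991
    std::cout << floatMax << '\n' << doubleMax << '\n';
}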
Having thoroughly explained the mantissa, let's revisit the exponent section to talk about how a floating point is actually stored. Take 13,000,000.0, which could be represented as:
c = 13, q = 6, b = 10
c = 130, q = 5, b = 10
c = 1,300, q = 4, b = 10
And so on. For the traditional binary format IEEE-754 requires:
The representation is made unique by choosing the smallest representable exponent that retains the most significant bit (MSB) within the selected word size and format. Further, the exponent is not represented directly, but a bias is added so that the smallest representable exponent is represented as 1, with 0 used for subnormal numbers
To explain this in the more familiar base-10, if our mantissa has 14 decimal digits the representation would look like this:
c = 13,000,000,000,000 so the MSB will be used in the represented number
q = 6, which is a little confusing because of the bias introduced here; logically q = -6, but the bias is set so that when q = 0 only the MSB of c is immediately to the left of the decimal point, meaning that c = 13,000,000,000,000, q = 0, b = 10 will represent 1.3
b = 10, and again the above rules are really only required for base-2, but I've shown them as they would apply to base-10 for the purpose of explanation
Translated back to base-2, this means that a value whose q is numeric_limits<T>::digits - 1 has only zeros after the decimal point. ceil only has an effect if the number has a fractional part.
A final point of explanation here is the range over which ceil will have an effect. Once the exponent of a floating point is larger than numeric_limits<T>::digits, continuing to increase it only introduces trailing zeros to the resulting number, so calling ceil on a value whose q is numeric_limits<T>::digits - 1 or larger has no effect. And since we know the MSB of c will be used in the number, this means that for ceil to have an effect c must be smaller than (1LL << numeric_limits<T>::digits - 1LL) - 1LL. Thus, for ceil to have an effect on the traditional binary IEEE-754 floating point (see also the sketch after this list):
A 32-bit (float) must be smaller than 8,388,607
A 64-bit (double) must be smaller than 4,503,599,627,370,495
A 128-bit (long double depending upon implementation) must be smaller than 5,192,296,858,534,827,628,530,496,329,220,095
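A quick check of the float threshold, assuming IEEE-754 binary32 and round-to-nearest:
#include <cassert>
#include <cmath>
int main()
{
    // Below the threshold a float can still carry a fractional part, so ceil does something:
    assert(std::ceil(8'388'606.5F) == 8'388'607.0F);
    // At 2^23 and above consecutive floats are whole numbers; the ".5" is already lost
    // when the literal is rounded, so ceil has nothing left to do:
    assert(std::ceil(8'388'608.5F) == 8'388'608.0F);
}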
From what I'm learning, once I convert a floating point value to a decimal one, the "significant digits" I need are a fixed number (17 for double, for example): 17 digits in total, counting both before and after the decimal separator.
So for example this code:
#include <iostream>
#include <limits>

typedef std::numeric_limits<double> dbl;

int main()
{
    std::cout.precision(dbl::max_digits10);
    //std::cout << std::fixed;
    double value1 = 1.2345678912345678912345;
    double value2 = 123.45678912345678912345;
    double value3 = 123456789123.45678912345;
    std::cout << value1 << std::endl;
    std::cout << value2 << std::endl;
    std::cout << value3 << std::endl;
}
will correctly "show me" 17 significant digits:
1.2345678912345679
123.45678912345679
123456789123.45679
But if I increase the precision for cout (i.e. std::cout.precision(100)), I can see there are more digits after the first 17:
1.2345678912345678934769921397673897445201873779296875
123.456789123456786683163954876363277435302734375
123456789123.456787109375
Why should I ignore them? They are stored within the variables/doubles as well, so they will affect the whole "math" later (division, multiplication, sum, and so on).
What does "significant digits" mean? There is other...
Can you help me to understand what “significant digits” means in floating point math?
With FP numbers, as with mathematical real numbers, the significant digits are the digits of a value starting from the first digit that is not 0 and running, depending on context, to 1) the decimal point, 2) the last non-zero digit, or 3) the last printed digit.
123. // 3 significant decimal digits
123.125 // 6 significant decimal digits
0.0078125 // 5 significant decimal digits
0x0.00123p45 // 3 significant hexadecimal digits
123000.0 // 3, 6, or 7 significant decimal digits depending on context
When concerned about decimal significant digits and FP types like double, the issue is often "How many decimal significant digits are needed or of concern?"
Nearly all C FP implementations use a binary encoding such that all finite FP values are exact sums of powers of 2. Each finite FP value is exact. A common encoding gives most double values 53 binary digits in the significand - so 53 significant binary digits. How this appears as a decimal is often the source of confusion.
// Example 0.1 is not an exact sum of powers of 2 so a nearby value is used.
double x = 0.1;
// x takes on the exact value of
// 0.1000000000000000055511151231257827021181583404541015625
// aka 0x1.999999999999ap-4
// aka base2: 0.000110011001100110011001100110011001100110011001100110011010
// The preceding and subsequent doubles
// 0.09999999999999999167332731531132594682276248931884765625
// 0.10000000000000001942890293094023945741355419158935546875
// 123456789012345678901234567890123456789012345678901234567890
Looking at the above, one could say x has over 50 decimal significant digits. Yet the value matches the intended 0.1 to 16 decimal significant digits. Or, since the preceding and subsequent possible double values differ in the 17th place, one could say x has 17 decimal significant digits.
What does "significant digits" mean?
Various meanings of significant digits exist, but for C, 2 common ones are:
The number of decimal significant digits for which a text-to-double-to-text round trip is unchanged, for every such textual value. This is typically 15. C specifies this as DBL_DIG; it must be at least 10.
The number of decimal significant digits to which a double must be printed so that it can be distinguished from every other double. This is typically 17. C specifies this as DBL_DECIMAL_DIG; it must be at least 10.
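Both constants can be printed directly; a minimal sketch (DBL_DECIMAL_DIG requires a C11/C++17 <cfloat>):
#include <cfloat>
#include <cstdio>
int main()
{
    // Typical IEEE-754 double: DBL_DIG == 15, DBL_DECIMAL_DIG == 17.
    std::printf("DBL_DIG         = %d\n", DBL_DIG);
    std::printf("DBL_DECIMAL_DIG = %d\n", DBL_DECIMAL_DIG);
}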
Why should I ignore them?
It depends on the coding goals. Rarely are all digits of the exact value needed. (DBL_TRUE_MIN might have 752 of them.) For most applications, DBL_DECIMAL_DIG is enough. In select apps, DBL_DIG will do. So usually, ignoring digits past 17 does not cause problems.
Keep in mind that floating-point values are not real numbers. There are gaps between the values, and all those extra digits, while meaningful for real numbers, don’t reflect any difference in the floating-point value. When you convert a floating-point value to text, having std::numeric_limits<...>::max_digits10 digits ensures that you can convert the text string back to floating-point and get the original value. The extra digits don’t affect the result.
The extra digits that you see when you ask for more digits are the result of the conversion algorithm trying to do what you asked. The algorithm typically just keeps extracting digits until it reaches the desired precision; it could be written to start outputting zeros after it has written max_digits10 digits, but that's an additional complication that nobody bothers with. It wouldn't really be helpful.
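A minimal round-trip sketch of that guarantee, using a stringstream (assuming the library's text-to-double conversion is correctly rounded, as mainstream ones are):
#include <cassert>
#include <iomanip>
#include <limits>
#include <sstream>
int main()
{
    const double original = 0.1;
    // Write the value with max_digits10 significant digits...
    std::ostringstream out;
    out << std::setprecision(std::numeric_limits<double>::max_digits10) << original;
    // ...and read it back: the round trip recovers exactly the same double.
    double restored = 0.0;
    std::istringstream in(out.str());
    in >> restored;
    assert(restored == original);
}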
Just to add to Pete Becker's answer, I think you're confusing the problem of finding the exact decimal representation of a binary mantissa with the problem of finding some decimal representation uniquely identifying that binary mantissa (given some fixed rounding scheme).
Now, regarding the first problem, you always need a finite number of decimal digits to exactly represent a binary mantissa ( because 2 divides 10 ).
For example, you need 18 decimal digits to exactly represent the binary 1.00000000000000001 (that is, 1 + 2^-17), which is 1.00000762939453125 in decimal.
But you need just 17 digits to represent it uniquely as 1.0000076293945312, because no other double can have an exact value of the form 1.0000076293945312xyz... where 0 <= x < 5 (more precisely, the next and prior exactly representable values are 1.0000076293945314720446049250313080847263336181640625 and 1.0000076293945310279553950749686919152736663818359375).
Of course, this does not mean that given some decimal number you can ignore all digits past the 17th; it just means that if you apply the same rounding scheme used to produce the decimal at the 17th position and assign it back to a double you'll get the same original double.
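For instance, a small check of that round trip, assuming the usual correctly rounded stod (which mainstream implementations provide):
#include <cassert>
#include <string>
int main()
{
    const double d = 1.00000762939453125;   // exactly 1 + 2^-17
    // 17 significant digits are enough to land back on the same double,
    // even though they are not the exact decimal expansion of d.
    assert(std::stod("1.0000076293945312") == d);
}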
I used to think that a float can have at most 6 digits and a double 15 after the decimal point. But if I print the limits here:
#include <iostream>
#include <limits>

typedef std::numeric_limits<float> fl;
typedef std::numeric_limits<double> dbl;

int main()
{
    std::cout << fl::max_digits10 << std::endl;
    std::cout << dbl::max_digits10 << std::endl;
}
It prints 9 for float and 17 for double?
You're confusing digits10 and max_digits10.
If digits10 is 6, then any number with six decimal digits can be converted to the floating point type, and back, and when rounded back to six decimal digits, produces the original value.
If max_digits10 is 9, then there exist at least two floating point numbers that when converted to decimal produce the same initial 8 decimal digits.
digits10 is the number you're looking for, based on your description. It's about converting from decimal to binary floating point back to decimal.
max_digits10 is a number about converting from binary floating point to decimal back to binary floating point.
From cppreference:
Unlike most mathematical operations, the conversion of a floating-point value to text and back is exact as long as at least max_digits10 were used (9 for float, 17 for double): it is guaranteed to produce the same floating-point value, even though the intermediate text representation is not exact. It may take over a hundred decimal digits to represent the precise value of a float in decimal notation.
For example (I am using http://www.exploringbinary.com/floating-point-converter/ to facilitate the conversion), with double as the precision format:
1.1e308 => 109999999999999997216016380169010472601796114571365898835589230322558260940308155816455878138416026219051443651421887588487855623732463609216261733330773329156055234383563489264255892767376061912596780024055526930962873899746391708729279405123637426157351830292874541601579169431016577315555383826285225574400
Using 16 significant digits:
1.099999999999999e308 => 109999999999999897424000903433019889783160462729437595463026208549681185812946033955861284690212736971153169019636833121365513414107701410594362313651090292197465320141992473263972245213092236035710707805906167798295036672550192042188756649080117981714588407890666666245533825643214495197630622309084729180160
Using 17 significant digits:
1.0999999999999999e308 => 109999999999999997216016380169010472601796114571365898835589230322558260940308155816455878138416026219051443651421887588487855623732463609216261733330773329156055234383563489264255892767376061912596780024055526930962873899746391708729279405123637426157351830292874541601579169431016577315555383826285225574400
which is the same as the original
More than 17 significant digits:
1.09999999999999995555e308 => 109999999999999997216016380169010472601796114571365898835589230322558260940308155816455878138416026219051443651421887588487855623732463609216261733330773329156055234383563489264255892767376061912596780024055526930962873899746391708729279405123637426157351830292874541601579169431016577315555383826285225574400
It continues to be the same as the original.
There isn't an exact correspondence between decimal digits and binary digits.
IEEE 754 single precision uses 23 bits plus 1 for the implicit leading 1. Double precision uses 52+1 bits.
To get the equivalent decimal precision, use
log10(2^binary_digits) = binary_digits*log10(2)
For single precision this is
24*log10(2) = 7.22
and for double precision
53*log10(2) = 15.95
See here and also the Wikipedia page which I don't find to be particularly concise.
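The same arithmetic in code, using numeric_limits so it adapts to the actual significand widths (a minimal sketch):
#include <cmath>
#include <iostream>
#include <limits>
int main()
{
    // Decimal digits equivalent to the binary significand width.
    std::cout << std::numeric_limits<float>::digits * std::log10(2.0) << '\n';   // ~7.22
    std::cout << std::numeric_limits<double>::digits * std::log10(2.0) << '\n';  // ~15.95
}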
I'm trying to write floating point vars into my ini file and I encountered a problem with format specifiers.
I have a float value, let it be 101.9716. Now I want to write it to my ini file, but the problem is I have other float values, which have less precision (such as 15.85), and those values are written to the ini file in the same loop.
So I do:
sprintf(valLineY, "%g", grade[i].yArr[j]);
All my other variables become nice strings like "20" (if it was 20.00000), "13.85" (if it was 13.850000) and so on. But 101.9716 becomes "101.972" for some reason.
Can you please tell me why this happens and how to make it "101.9716" without ruining my ideology (which is about removing trailing zeroes and unneeded precision).
Thanks for any help.
Why does this happen?
I tested:
double f = 101.9716;
printf("%f\n", f);
printf("%e\n", f);
printf("%g\n", f);
And it output:
101.971600
1.019716e+02 // Notice the exponent +02
101.972
Here's what the C standard (N1570 7.21.6.1) says about the conversion specifier g:
A double argument representing a floating-point number is converted in
style f or e (or in style F or E in the case of a G conversion specifier), depending on the value converted and the precision. Let P equal the precision if nonzero, 6 if the precision is omitted, or 1 if the precision is zero. Then, if a conversion with style E would have an exponent of X:
— if P > X ≥ −4, the conversion is with style f (or F) and precision
P − (X + 1).
— otherwise, the conversion is with style e (or E) and precision P − 1.
So given the above, P will equal 6 because the precision is not specified, and X will equal 2 because that is the exponent in style e.
The condition 6 > 2 >= -4 is thus true, so style f is selected, and the precision will then be 6 - (2 + 1) = 3.
How to fix?
Unlike f, style g will strip unnecessary zeroes, even when precision is set.
Finally, unless the # flag is used, any trailing zeros are removed from the fractional portion of the result and the decimal-point character is removed if there is no fractional portion remaining.
So set high enough precision:
printf("%.8g\n", f);
prints:
101.9716
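Putting it together with the other values from the question (the values array below is a hypothetical stand-in for grade[i].yArr[j], which I don't have):
#include <cstdio>
int main()
{
    // Hypothetical stand-ins for grade[i].yArr[j] from the question.
    const double values[] = { 101.9716, 15.85, 20.0 };
    // "%.8g" keeps up to 8 significant digits but still strips trailing zeroes,
    // so short values stay short and 101.9716 is no longer rounded to 101.972.
    for (double v : values)
        std::printf("%.8g\n", v);   // prints 101.9716, 15.85, 20
}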
According to The C++ Programming Language, 4th edition, section 6.2.5:
There are three floating-point types: float (single-precision), double (double-precision), and long double (extended-precision)
Refer to: http://en.wikipedia.org/wiki/Single-precision_floating-point_format
The true significand includes 23 fraction bits to the right of the binary point and an implicit leading bit (to the left of the binary point) with value 1 unless the exponent is stored with all zeros. Thus only 23 fraction bits of the significand appear in the memory format but the total precision is 24 bits (equivalent to log10(2^24) ≈ 7.225 decimal digits).
→ So the maximum number of decimal digits of a floating point number is 7 in the binary32 interchange format (a computer number format that occupies 4 bytes (32 bits) in computer memory).
When I test on different compilers (like GCC, VC compiler)
→ It always outputs 6 as the value.
Take a look into float.h of each compiler
→ I found that 6 is fixed.
Question:
Do you know why there is a difference here (between the theoretical value, 7, and the actual value, 6)?
It sounds like "7" is more reasonable because when I test using the code below, the value is still valid, while "8" is invalid.
Why don't the compilers check the interchange format when deciding the number of digits represented in floating point (instead of using a fixed value)?
Code:
#include <iostream>
#include <limits>
using namespace std;
int main()
{
    cout << numeric_limits<float>::digits10 << endl;
    float f = -9999999;
    cout.precision(10);
    cout << f << endl;
}
You're not reading the documentation.
std::numeric_limits<float>::digits10 is 6:
The value of std::numeric_limits<T>::digits10 is the number of base-10 digits that can be represented by the type T without change, that is, any number with this many decimal digits can be converted to a value of type T and back to decimal form, without change due to rounding or overflow. For base-radix types, it is the value of digits (digits-1 for floating-point types) multiplied by log10(radix) and rounded down.
The standard 32-bit IEEE 754 floating-point type has a 24 bit fractional part (23 bits written, one implied), which may suggest that it can represent 7 digit decimals (24 * std::log10(2) is 7.22), but relative rounding errors are non-uniform and some floating-point values with 7 decimal digits do not survive conversion to 32-bit float and back: the smallest positive example is 8.589973e9, which becomes 8.589974e9 after the roundtrip. These rounding errors cannot exceed one bit in the representation, and digits10 is calculated as (24-1)*std::log10(2), which is 6.92. Rounding down results in the value 6.
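That boundary case is easy to reproduce (a minimal sketch, assuming IEEE-754 binary32 and correctly rounded conversions):
#include <cstdio>
int main()
{
    // 7 decimal digits do not always survive a float round trip:
    const float f = 8.589973e9F;   // nearest float is 8,589,973,504
    std::printf("%.7g\n", f);      // prints 8.589974e+09, not 8.589973e+09
    // digits10 == 6 decimal digits always do:
    const float g = 8.58997e9F;
    std::printf("%.6g\n", g);      // prints 8.58997e+09
}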
std::numeric_limits<float>::max_digits10 is 9:
The value of std::numeric_limits<T>::max_digits10 is the number of base-10 digits that are necessary to uniquely represent all distinct values of the type T, such as necessary for serialization/deserialization to text. This constant is meaningful for all floating-point types.
Unlike most mathematical operations, the conversion of a floating-point value to text and back is exact as long as at least max_digits10 were used (9 for float, 17 for double): it is guaranteed to produce the same floating-point value, even though the intermediate text representation is not exact. It may take over a hundred decimal digits to represent the precise value of a float in decimal notation.
std::numeric_limits<float>::digits10 equates to FLT_DIG, which is defined by the C standard:
number of decimal digits, q, such that any floating-point number with q decimal digits can be rounded into a floating-point number with p radix b digits and back again without change to the q decimal digits,
p log10 b            if b is a power of 10
⌊(p − 1) log10 b⌋    otherwise
FLT_DIG 6
DBL_DIG 10
LDBL_DIG 10
The reason for the value 6 (and not 7) is due to rounding errors - not all floating point values with 7 decimal digits can be losslessly represented by a 32-bit float. Rounding errors are limited to 1 bit though, so the FLT_DIG value was calculated based on 23 bits (instead of the full 24):
23 * log10(2) = 6.92
which is rounded down to 6.