I'm trying to make a function that enables me to print floats.
Right now, I'm encountering two strange behaviors:
Sometimes, values like 1.3 come out as 1.2999999 instead of 1.3000000, and sometimes values like 1.234567 come out as 1.2345672 instead of 1.2345670.
Here's the source code :
int ft_putflt(float f)
{
    int ret;
    int intpart;
    int i;

    ret = 0;
    i = 0;
    intpart = (int)f;
    ft_putnbr(intpart);
    ret = ft_nbrlen(intpart) + 8;
    write(1, ".", 1);
    while (i++ < 7)
    {
        f *= 10;
        ft_putchar(48 + ((int)f % 10));
    }
    return (ret);
}
ft_putnbr is OK AFAIK.
ft_putchar is a simple call to "write(1, &c, 1)".
test values (value : output)
1.234567 : 1.2345672 (!)
1.2345670 : 1.2345672 (!)
1.0000001 : 1.0000001 OK
0.1234567 : 0.1234567 OK
0.67 : 0.6700000 OK
1.3 : 1.3000000 OK (fixed it)
1.321012 : 1.3210119 (!)
1.3210121 : 1.3210122 (!)
This all seems a bit mystifying to me... Loss of precision when casting to int, maybe?
Yes, you lose precision when messing with floats and ints.
If both floats have differing magnitude and both are using the complete precision range (of about 7 decimal digits), then yes, you will see some loss in the last places, because floats are stored in the form (sign) × (mantissa) × 2^(exponent). If two values have differing exponents and you add them, then the smaller value gets reduced to fewer digits in the mantissa (because it has to adapt to the larger exponent):
PS> [float]([float]0.0000001 + [float]1)
1
In relation to integers, a normal 32-bit integer can represent exactly values that do not fit exactly into a float. A float can still store approximately the same number, but no longer exactly. Of course, this only applies to numbers that are large enough, i.e. longer than 24 bits. Because a float has 24 bits of precision and (32-bit) integers have 32, a float will still retain the magnitude and most of the significant digits, but the last places may well differ:
PS> [float]2100000050 + [float]100
2100000100
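The same experiment in C++ (a minimal sketch; the exact results assume IEEE-754 binary32, and note that here the addition itself is also done in float):

#include <cstdio>

int main() {
    // 2100000050 needs 31 significant bits, but a float keeps only 24,
    // so the stored value is rounded to a multiple of 128 (the ulp here).
    float big = 2100000050.0f;
    std::printf("%.0f\n", big);            // 2100000000

    // A float + float sum is rounded to a 24-bit significand again:
    std::printf("%.0f\n", big + 100.0f);   // 2100000128
}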
This is inherent in the use of finite-precision numerical representation schemes. Given any number that can be represented, A, there is some number that is the smallest number greater than A that can be represented, call that B. Numbers between A and B cannot be represented exactly and must be approximated.
For example, let's consider using six decimal digits because that's an easier system to understand as a starting point. If A is .333333, then B is .333334. Numbers between A and B, such as 1/3, cannot be exactly represented. So if you take 1/3 and add it to itself twice (or multiply it by 3), you will get .999999, not 1. You should expect to see imprecision at the limits of the representation.
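You can observe the A/B gap directly in binary floating point with nextafter (a sketch; the printed values assume IEEE-754):

#include <cmath>
#include <cstdio>

int main() {
    // hi is the float closest to 1/3; lo is the representable neighbour
    // just below it. 1/3 itself lies strictly between them, so it must
    // be approximated by one of the two.
    float hi = 1.0f / 3.0f;
    float lo = std::nextafter(hi, 0.0f);
    std::printf("lo = %.10f\n", lo);   // 0.3333333135
    std::printf("hi = %.10f\n", hi);   // 0.3333333433
}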
The mantissa bits in a float variable are 23 in total, while in a double variable they are 53.
This means that the digits that can be represented precisely by a float variable are log10(2^23) = 6.92368990027 = 6 in total, and by a double variable log10(2^53) = 15.9545897702 = 15.
Let's look at this code:
float b = 1.12391;
double d = b;
cout<<setprecision(15)<<d<<endl;
It prints
1.12390995025635
however this code:
double b = 1.12391;
double d = b;
cout<<setprecision(15)<<d<<endl;
prints 1.12391
Can someone explain why I get different results? I converted a float variable of 6 digits to double; the compiler must know that these 6 digits are important. Why? Because I'm not using more digits that can't all be represented correctly in a float variable. So instead of printing the correct value it decides to print something else.
Converting from float to double preserves the value. For this reason, in the first snippet, d contains exactly the approximation to float precision of 112391/100000. The rational 112391/100000 is stored in the float format as 9428040 / 2^23. If you carry out this division, the result is exactly 1.12390995025634765625: the float approximation is not very close. cout << prints the representation to 14 digits after the decimal point. The first omitted digit is 7, so the last printed digit, 4, is rounded up to 5.
In the second snippet, d contains the approximation to double precision of the value of 112391/100000, 1.123909999999999964614971759147010743618011474609375 (in other words 5061640657197974 / 2^52). This approximation is much closer to the rational. If it was printed with 14 digits after the decimal point, the last digits would all be zeroes (after rounding, because the first omitted digit would be 9). cout << does not print trailing zeroes, so you see 1.12391 as output.
Because I'm not using more digits that can't all be represented correctly in a float variable
When you incorrectly apply log10 to 2^23 (it should be 2^24), you get the number of decimal digits that can be stored in a float. Because float's representation is not decimal, the digits after these seven or so are not zeroes in general. They are the digits that happen to be there in decimal for the closest binary representation that the compiler chose for the number you wrote.
The problem is here:
float b = 1.12391;
and here:
double b = 1.12391;
These assignments are already imprecise. Calculations or casts using them will therefore also be imprecise.
You're mistaken in assuming that the first 6 digits will be precisely the same. When we say that float is precise to within 6 (decimal) digits, we mean that the relative difference between the actual and intended value is less than 10^-6. So, 1.12390995 and 1.12391 differ by about 0.00000005. That's much better than the 10^-6 you can rely on.
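To see the exact values discussed above, you can ask for far more digits than the types are "good for" (a sketch; this assumes IEEE-754 and a correctly rounding standard library):

#include <iomanip>
#include <iostream>

int main() {
    float  f = 1.12391;   // rounds to 9428040 / 2^23
    double d = 1.12391;   // rounds to 5061640657197974 / 2^52

    // Every binary fraction has a finite decimal expansion, so with
    // enough digits requested the exact stored values appear:
    std::cout << std::setprecision(55) << f << '\n' << d << '\n';
    // 1.12390995025634765625
    // 1.123909999999999964614971759147010743618011474609375
}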
I am writing a program that prints floating point literals to be used inside another program.
How many digits do I need to print in order to preserve the precision of the original float?
Since a float has 24 * (log(2) / log(10)) = 7.2247199 decimal digits of precision, my initial thought was that printing 8 digits should be enough. But if I'm unlucky, those 0.2247199 get distributed to the left and to the right of the 7 significant digits, so I should probably print 9 decimal digits.
Is my analysis correct? Is 9 decimal digits enough for all cases? Like printf("%.9g", x);?
Is there a standard function that converts a float to a string with the minimum number of decimal digits required for that value, in the cases where 7 or 8 are enough, so I don't print unnecessary digits?
Note: I cannot use hexadecimal floating point literals, because standard C++ does not support them.
In order to guarantee that a binary->decimal->binary round trip recovers the original binary value, IEEE 754 requires the following numbers of digits when converting to decimal and back again:
5 decimal digits for binary16
9 decimal digits for binary32
17 decimal digits for binary64
36 decimal digits for binary128
For other binary formats the required number of decimal digits is
1 + ceiling(p*log10(2))
where p is the number of significant bits in the binary format, e.g. 24 bits for binary32.
In C, the functions you can use for these conversions are snprintf() and strtof/strtod/strtold().
Of course, in some cases even more digits can be useful (no, they are not always "noise"; it depends on the implementation of the decimal conversion routines such as snprintf()). Consider e.g. printing dyadic fractions.
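As a concrete sketch of that round trip (FLT_DECIMAL_DIG is 9 for binary32 and comes from <float.h>/<cfloat> in C11/C++17):

#include <cfloat>
#include <cstdio>
#include <cstdlib>

int main() {
    float original = 0.1f;
    char buf[64];

    // 9 significant digits are enough for any binary32 value to survive
    // the decimal detour bit-for-bit.
    std::snprintf(buf, sizeof buf, "%.*g", FLT_DECIMAL_DIG, original);

    float recovered = std::strtof(buf, nullptr);
    std::printf("%s -> %s\n", buf, recovered == original ? "exact" : "lost");
}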
24 * (log(2) / log(10)) = 7.2247199
That's pretty representative of the problem. It makes no sense whatsoever to express the number of significant digits with an accuracy of 0.0000001 digits. You are converting numbers to text for the benefit of a human, not a machine. A human couldn't care less, and would much prefer that you wrote
24 * (log(2) / log(10)) = 7
Trying to display 8 significant digits just generates random noise digits, and there are non-zero odds that 7 is already too many, because floating point error accumulates in calculations. Above all, print numbers using a reasonable unit of measure. People are interested in millimeters, grams, pounds, inches, etcetera. No architect will care about the size of a window expressed more accurately than 1 mm. No window manufacturing plant will promise a window sized as accurately as that.
Last but not least, you cannot ignore the accuracy of the numbers you feed into your program. Measuring the speed of an unladen European swallow down to 7 digits is not possible. It is roughly 11 meters per second, 2 digits at best. So performing calculations on that speed and printing a result that has more significant digits produces nonsensical results that promise accuracy that isn't there.
If you have a C library that conforms to C99 (and if your float types have a base that is a power of 2 :), the printf format character %a can print floating point values without loss of precision in hexadecimal form, and utilities such as scanf and strtod will be able to read them.
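For instance (a sketch; %a requires a C99-conforming library):

#include <cstdio>
#include <cstdlib>

int main() {
    double v = 0.1;
    char buf[64];

    // %a writes the exact bits in hexadecimal floating point form.
    std::snprintf(buf, sizeof buf, "%a", v);  // e.g. "0x1.999999999999ap-4"

    // strtod accepts the same notation, so nothing is lost on the way back.
    double back = std::strtod(buf, nullptr);
    std::printf("%s -> %s\n", buf, back == v ? "equal" : "different");
}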
If the program is meant to be read by a computer, I would do the simple trick of using char* aliasing.
alias float* to char*
copy into an unsigned (or whatever unsigned type is sufficiently large) via char* aliasing
print the unsigned value
Decoding is just reversing the process (and on most platforms a direct reinterpret_cast can be used).
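Here's a sketch of that idea; memcpy is the well-defined way to express the char* aliasing (this assumes a 32-bit float):

#include <cinttypes>
#include <cstdio>
#include <cstring>

int main() {
    float f = 3.14f;

    // Encode: copy the float's object representation into an unsigned
    // integer of the same size and print that.
    std::uint32_t bits;
    static_assert(sizeof bits == sizeof f, "assumes 32-bit float");
    std::memcpy(&bits, &f, sizeof bits);
    std::printf("%" PRIu32 "\n", bits);

    // Decode: reverse the copy; the float comes back bit-for-bit.
    float g;
    std::memcpy(&g, &bits, sizeof g);
    std::printf("%s\n", g == f ? "recovered exactly" : "mismatch");
}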
The floating-point-to-decimal conversion used in Java is guaranteed to produce the smallest number of decimal digits beyond the decimal point needed to distinguish the number from its neighbors (more or less).
You can copy the algorithm from here: http://www.docjar.com/html/api/sun/misc/FloatingDecimal.java.html
Pay attention to the FloatingDecimal(float) constructor and the toJavaFormatString() method.
If you read these papers (see below), you'll find that there are some algorithms that print the minimum number of decimal digits such that the number can be re-interpreted unchanged (i.e. by scanf).
Since there might be several such numbers, the algorithm also picks the decimal fraction nearest to the original binary fraction (the float value).
It's a pity that there is no such function in the C standard library.
http://www.cs.indiana.edu/~burger/FP-Printing-PLDI96.pdf
http://grouper.ieee.org/groups/754/email/pdfq3pavhBfih.pdf
You can use sprintf. I am not sure whether this answers your question exactly, but anyway, here is some sample code:
#include <stdio.h>

int main( void )
{
    float d_n = 123.45;
    char s_cp[13] = { '\0' };
    char s_cnp[4] = { '\0' };

    /*
     * with sprintf you need to make sure there's enough space
     * declared in the array
     */
    sprintf( s_cp, "%.2f", d_n );
    printf( "%s\n", s_cp );

    /*
     * snprintf allows you to control how much is written into the array.
     * it might have portability issues if you are not using C99
     */
    snprintf( s_cnp, sizeof s_cnp - 1, "%f", d_n );
    printf( "%s\n", s_cnp );

    getchar();
    return 0;
}
/* output :
 * 123.45
 * 123
 */
With something like

def f(a):
    b = 0
    while a != int(a):
        a *= 2
        b += 1
    return a, b

(which is Python) you should be able to get mantissa and exponent in a loss-free way.
In C, this would probably be:

#include <math.h>

struct float_decomp {
    float mantissa;
    int exponent;
};

struct float_decomp decomp(float x)
{
    struct float_decomp ret = { .mantissa = x, .exponent = 0 };

    while (ret.mantissa != floorf(ret.mantissa)) {
        ret.mantissa *= 2;
        ret.exponent += 1;
    }
    return ret;
}
But be aware that not all values can be represented that way; it is just a quick shot that should give the idea, but it probably needs improvement.
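For example, pasted below the definition above (3.25 = 13 / 2^2):

#include <cstdio>

int main() {
    // 3.25 doubles to 6.5, then to 13, which is integral:
    // mantissa 13, exponent 2, i.e. 3.25 == 13 / 2^2.
    struct float_decomp d = decomp(3.25f);
    std::printf("%g / 2^%d\n", d.mantissa, d.exponent);  // 13 / 2^2
}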
I would like to know how to write my own floor function to round a float down.
Is it possible to do this by setting the bits of a float that represent the numbers after the decimal point to 0?
If yes, then how can I access and modify those bits?
Thanks.
You can do bit twiddling on floating point numbers, but getting it right depends on knowing exactly what the floating point binary representation is. For most machines these days it's IEEE-754, which is reasonably straightforward. For example, IEEE-754 32-bit floats have 1 sign bit, 8 exponent bits, and 23 mantissa bits, so you can use shifts and masks to extract those fields and do things with them. So doing trunc (round to integer towards 0) is pretty easy:
#include <stdint.h>

float trunc(float x) {
    union {
        float f;
        uint32_t i;
    } val;
    val.f = x;

    int exponent = (val.i >> 23) & 0xff;        // extract the exponent field
    int fractional_bits = 127 + 23 - exponent;  // how many low bits are fraction

    if (fractional_bits > 23)                   // abs(x) < 1.0
        return 0.0;
    if (fractional_bits > 0)
        val.i &= ~((1U << fractional_bits) - 1);
    return val.f;
}
First, we extract the exponent field, and use that to calculate how many bits after the binary point are present in the number. If there are more than the size of the mantissa, then we just return 0. Otherwise, if there's at least 1, we mask off (clear) that many low bits. Pretty simple. We're ignoring denormals, NaN, and infinity here, but that works out OK, as they have exponents of all 0s or all 1s, which means we end up converting denormals to 0 (they get caught in the first if, along with small normal numbers) and leaving NaN/Inf unchanged.
To do a floor, you'd also need to look at the sign, and round negative numbers 'up' towards negative infinity.
Note that this is almost certainly slower than using dedicated floating point instructions, so this sort of thing is really only useful if you need to use floating point numbers on hardware that has no native floating point support. Or if you just want to play around and learn how these things work at a low level.
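For example, a floor along those lines could be built on the trunc() above (a sketch; my_floor is a made-up name):

// For negative non-integers, truncation went toward zero, which is the
// wrong direction for floor, so step down by one.
float my_floor(float x) {
    float t = trunc(x);
    return (x < 0.0f && t != x) ? t - 1.0f : t;
}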
Define it from scratch. And no, setting the bits of your floating point number that represent the digits after the decimal point to 0 will not work. If you look at IEEE-754, you will see that you basically have all your floating-point numbers in the form:
0.xyzxyzxyz × 2^(abc)
So to implement flooring, you can take the xyzxyzxyz and shift it left abc+1 times, then drop the rest. I suggest you read up on the binary representation of a floating point number (link above); this should shed light on the solution I suggested.
NOTE: You also need to take care of the sign bit. And the exponent of your number is biased by 127.
Here is an example, Let's say you have the number pi: 3.14..., you want to get 3.
Pi is represented in binary as
0 10000000 10010010000111111011011
This translates to
sign = 0 ; e = 1 ; s = 110010010000111111011011
The above I get directly from Wikipedia. Since e is 1, you will want to shift s left by 1 + 1 = 2 places, keeping 11 (binary) => 3.
#include <iostream>
#include <iomanip>

double round(double input, double roundto) {
    return int(input / roundto) * roundto;
}

int main() {
    double pi = 3.1415926353898;
    double almostpi = round(pi, 0.0001);
    std::cout << std::setprecision(14) << pi << '\n'
              << std::setprecision(14) << almostpi;
}
http://ideone.com/mdqFA
output:
3.1415926353898
3.1415
This will almost certainly be faster than any bit twiddling you can come up with, and it works on all computers (with floats) instead of just one type.
Casting to an unsigned integer and returning the result as a double does what you are seeking, but under the hood. This simple piece of code works for any positive number.
#include <iostream>

double floor(const double& num) {
    return (unsigned long long) num;
}
This has been tested on tio.run (Try It Online) and onlinegdb.com. The function itself doesn't require any #include files, but to print out the answers I have included stdio.h (on tio.run and onlinegdb.com, not here). Here it is:
long double myFloor(long double x)  /* Change this to your liking: long double
                                       might be float in your situation. */
{
    long double xcopy = x < 0 ? x * -1 : x;   /* work on the absolute value */
    unsigned int zeros = 0;
    long double n = 1;

    /* find the largest power of ten not greater than xcopy */
    for (n = 1; xcopy > n * 10; n *= 10, ++zeros)
        ;
    /* subtract powers of ten, digit by digit; the unsigned counter
       wrapping around past 0 is what finally terminates the loop */
    for (xcopy -= n; zeros != -1; xcopy -= n)
        if (xcopy < 0)
        {
            xcopy += n;
            n /= 10;
            --zeros;
        }
    xcopy += n;
    return x < 0 ? (xcopy == 0 ? x : x - (1 - xcopy)) : (x - xcopy);
}
This function works everywhere (pretty sure) because it just peels off the integer part digit by digit instead of trying to work with the bit-level parts of floats.
The floor of a floating point number is the largest integer less than or equal to it. Here are some examples:
floor(5.7) = 5
floor(3) = 3
floor(9.9) = 9
floor(7.0) = 7
floor(-7.9) = -8
floor(-5.0) = -5
floor(-3.3) = -3
floor(0) = 0
floor(-0.0) = -0
floor(-0) = -0
Note: this is almost an exact copy from my other answer which answered a question that was basically the same as this one.
Hey, so I'm making a function to return the number of digits in a given number data type, but I'm having some trouble with doubles.
I figure out how many digits are in it by multiplying it by 10 billion and then taking away digits one by one until the double ends up being 0. However, when putting in a double of value, say, .7904, I never exit the function, as it keeps taking away digits that never end up being 0, because the result of .7904 ends up being 7,903,999,988 and not 7,904,000,000.
How can I solve this problem?? Thanks =)! Oh, and any other feedback on my code is welcome!
Here's the code of my function:
/////////////////////// Numb_Digits() ////////////////////////////////////////

enum { DECIMALS = 10, WHOLE_NUMBS = 20, ALL = 30 };

template<typename T>
unsigned long int Numb_Digits(T numb, int scope)
{
    unsigned long int length = 0;
    switch (scope) {
    case DECIMALS:
        numb -= (int)numb;
        numb *= 10000000000;    // 10 bil (10 zeros)
        for (; numb != 0; length++)
            numb -= ((int)(numb / pow((double)10, (double)(9 - length)))) * pow((double)10, (double)(9 - length));
        break;
    case WHOLE_NUMBS:
        numb = (int)numb;
        numb *= 10000000000;
        for (; numb != 0; length++)
            numb -= ((int)(numb / pow((double)10, (double)(9 - length)))) * pow((double)10, (double)(9 - length));
        break;
    case ALL:
        numb = numb;
        numb *= 10000000000;
        for (; numb != 0; length++)
            numb -= ((int)(numb / pow((double)10, (double)(9 - length)))) * pow((double)10, (double)(9 - length));
        break;
    default:
        break;
    }
    return length;
}

int main()
{
    double test = 345.6457;
    cout << Numb_Digits(test, ALL) << endl;
    cout << Numb_Digits(test, DECIMALS) << endl;
    cout << Numb_Digits(test, WHOLE_NUMBS) << endl;
    return 0;
}
It's because of their binary representation, which is discussed in depth here:
http://en.wikipedia.org/wiki/IEEE_754-2008
Basically, when a number can't be represented as is, an approximation is used instead.
To compare floats for equality, check whether their difference is less than an arbitrarily chosen small epsilon.
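A common shape for such a check is something like this sketch (the tolerance, and whether to scale it by magnitude, are application-specific choices):

#include <cmath>

// Treat a and b as equal when their difference is small relative to
// their magnitudes (eps picked for the application at hand).
bool nearly_equal(double a, double b, double eps = 1e-9) {
    return std::fabs(a - b) <= eps * std::fmax(std::fabs(a), std::fabs(b));
}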
An easy summary of floating point arithmetic:
http://floating-point-gui.de/
Read this and you'll see the light.
If you're more on the math side, Goldberg's paper is always nice:
http://cr.yp.to/2005-590/goldberg.pdf
Long story short: real numbers are stored with finite, irregularly spaced precision, leading to non-obvious behaviors. This is unrelated to the language and is more a design choice of how to handle real numbers as a whole.
This is because C++ (like most other languages) cannot store floating point numbers with infinite precision.
Floating points are stored like this:
sign * coefficient * 10^exponent if you're using base 10.
The problem is that both the coefficient and exponent are stored as finite integers.
This is a common problem with storing floating point in computer programs; you usually get a tiny rounding error.
The most common way of dealing with this is:
Store the number as a fraction (x/y)
Use a delta that allows small deviations (if abs(x-y) < delta)
Use a third-party library such as GMP that can store numbers with arbitrary precision.
Regarding your question about counting decimals.
There is no way of dealing with this if you get a double as input. You cannot be sure that the user actually sent 1.819999999645634565360 and not 1.82.
Either you have to change your input or change the way your function works.
More info on floating point can be found here: http://en.wikipedia.org/wiki/Floating_point
This is because of the way the IEEE floating point standard is implemented; the exact error will vary depending on the operations. Floating point is an approximation with finite precision. Never use logic like if (float == float), ever!
Float numbers are represented in the form significand × base^exponent (IEEE 754). In your case, float 1.82 = 1 + 0.5 + 0.25 + 0.0625 + ...
Since only a limited number of digits can be stored, there will be a rounding error if the float number cannot be represented as a terminating expansion in the relevant base (base 2 in this case).
You should always check relative differences with floating point numbers, not absolute values.
You need to read this, too.
Computers don't store floating point numbers exactly. To accomplish what you are doing, you could store the original input as a string, and count the number of characters.
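A sketch of that string-based approach (assuming plain "123.456"-style input, with no exponent or sign handling):

#include <cstdio>
#include <cstring>

// Count the digits after the decimal point directly from the text,
// before it is ever converted to a double.
std::size_t decimals(const char *s) {
    const char *dot = std::strchr(s, '.');
    return dot ? std::strlen(dot + 1) : 0;
}

int main() {
    std::printf("%zu\n", decimals("345.6457"));  // 4
    std::printf("%zu\n", decimals(".7904"));     // 4
}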
How do you print a double to a stream so that when it is read in you don't lose precision?
I tried:
std::stringstream ss;
double v = 0.1 * 0.1;
ss << std::setprecision(std::numeric_limits<T>::digits10) << v << " ";
double u;
ss >> u;
std::cout << "precision " << ((u == v) ? "retained" : "lost") << std::endl;
This did not work as I expected.
But I can increase precision (which surprised me as I thought that digits10 was the maximum required).
ss << std::setprecision(std::numeric_limits<T>::digits10 + 2) << v << " ";
// ^^^^^^ +2
It has to do with the number of significant digits: the two leading zeroes in (0.01) don't count.
So has anybody looked at representing floating point numbers exactly?
What is the exact magical incantation on the stream I need to do?
After some experimentation:
The trouble was with my original version. There were non-significant digits in the string after the decimal point that affected the accuracy.
To compensate for this, we can use scientific notation:
ss << std::scientific
<< std::setprecision(std::numeric_limits<double>::digits10 + 1)
<< v;
This still does not explain the need for the +1 though.
Also if I print out the number with more precision I get more precision printed out!
std::cout << std::scientific << std::setprecision(std::numeric_limits<double>::digits10) << v << "\n";
std::cout << std::scientific << std::setprecision(std::numeric_limits<double>::digits10 + 1) << v << "\n";
std::cout << std::scientific << std::setprecision(std::numeric_limits<double>::digits) << v << "\n";
It results in:
1.000000000000000e-02
1.0000000000000002e-02
1.00000000000000019428902930940239457413554200000000000e-02
Based on @Stephen Canon's answer below:
We can print out exactly by using the printf() formatter, "%a" or "%A". To achieve this in C++ we need to use the fixed and scientific manipulators (see n3225: 22.4.2.2.2p5 Table 88)
std::cout.flags(std::ios_base::fixed | std::ios_base::scientific);
std::cout << v;
For now I have defined:
template<typename T>
std::ostream& precise(std::ostream& stream)
{
    stream.flags(std::ios_base::fixed | std::ios_base::scientific);
    return stream;
}

std::ostream& preciselngd(std::ostream& stream) { return precise<long double>(stream); }
std::ostream& precisedbl(std::ostream& stream)  { return precise<double>(stream); }
std::ostream& preciseflt(std::ostream& stream)  { return precise<float>(stream); }
Next: How do we handle NaN/Inf?
It's not correct to say "floating point is inaccurate", although I admit that's a useful simplification. If we used base 8 or 16 in real life, then people around here would be saying "base 10 decimal fraction packages are inaccurate, why did anyone ever cook those up?".
The problem is that integral values translate exactly from one base into another, but fractional values do not, because they represent fractions of the integral step and only a few of them are used.
Floating point arithmetic is technically perfectly accurate. Every calculation has one and only one possible result. There is a problem, and it is that most decimal fractions have base-2 representations that repeat. In fact, in the sequence 0.01, 0.02, ... 0.99, only a mere 3 values have exact binary representations. (0.25, 0.50, and 0.75.) There are 96 values that repeat and therefore are obviously not represented exactly.
Now, there are a number of ways to write and read back floating point numbers without losing a single bit. The idea is to avoid trying to express the binary number with a base 10 fraction.
Write them as binary. These days, everyone implements the IEEE-754 format so as long as you choose a byte order and write or read only that byte order, then the numbers will be portable.
Write them as 64-bit integer values. Here you can use the usual base 10. (Because you are representing the 64-bit aliased integer, not the 52-bit fraction.)
You can also just write more decimal fraction digits. Whether this is bit-for-bit accurate will depend on the quality of the conversion libraries and I'm not sure I would count on perfect accuracy (from the software) here. But any errors will be exceedingly small and your original data certainly has no information in the low bits. (None of the constants of physics and chemistry are known to 52 bits, nor has any distance on earth ever been measured to 52 bits of precision.) But for a backup or restore where bit-for-bit accuracy might be compared automatically, this obviously isn't ideal.
Don't print floating-point values in decimal if you don't want to lose precision. Even if you print enough digits to represent the number exactly, not all implementations have correctly-rounded conversions to/from decimal strings over the entire floating-point range, so you may still lose precision.
Use hexadecimal floating point instead. In C:
printf("%a\n", yourNumber);
C++0x provides the hexfloat manipulator for iostreams that does the same thing (on some platforms, using the std::hex modifier has the same result, but this is not a portable assumption).
Using hex floating point is preferred for several reasons.
First, the printed value is always exact. No rounding occurs in writing or reading a value formatted in this way. Beyond the accuracy benefits, this means that reading and writing such values can be faster with a well tuned I/O library. They also require fewer digits to represent values exactly.
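For iostreams, the C++11 spelling of the same thing would be (a small sketch):

#include <iostream>

int main() {
    double v = 0.1;
    // hexfloat prints the exact stored value, e.g. 0x1.999999999999ap-4.
    std::cout << std::hexfloat << v << '\n';
}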
I got interested in this question because I'm trying to (de)serialize my data to & from JSON.
I think I have a clearer explanation (with less hand-waving) for why 17 decimal digits are sufficient to reconstruct the original number losslessly:
Imagine 3 number lines:
1. for the original base 2 number
2. for the rounded base 10 representation
3. for the reconstructed number (same as #1 because both in base 2)
When you convert to base 10, graphically, you choose the tic on the 2nd number line closest to the tic on the 1st. Likewise when you reconstruct the original from the rounded base 10 value.
The critical observation I had was that in order to allow exact reconstruction, the base 10 step size (quantum) has to be less than the base 2 quantum. Otherwise, two adjacent base-2 tics inevitably map to the same base-10 tic, and one of them cannot be reconstructed.
Take the specific case of when the exponent is 0 for the base2 representation. Then the base2 quantum will be 2^-52 ~= 2.22 * 10^-16. The closest base 10 quantum that's less than this is 10^-16. Now that we know the required base 10 quantum, how many digits will be needed to encode all possible values? Given that we're only considering the case of exponent = 0, the dynamic range of values we need to represent is [1.0, 2.0). Therefore, 17 digits would be required (16 digits for fraction and 1 digit for integer part).
For exponents other than 0, we can use the same logic:
exponent base2 quant. base10 quant. dynamic range digits needed
---------------------------------------------------------------------
1 2^-51 10^-16 [2, 4) 17
2 2^-50 10^-16 [4, 8) 17
3 2^-49 10^-15 [8, 16) 17
...
32 2^-20 10^-7 [2^32, 2^33) 17
1022 9.98e291 1.0e291 [4.49e307,8.99e307) 17
While not exhaustive, the table shows the trend that 17 digits are sufficient.
Hope you like my explanation.
In C++20 you'll be able to use std::format to do this:
std::stringstream ss;
double v = 0.1 * 0.1;
ss << std::format("{}", v);
double u;
ss >> u;
assert(v == u);
The default floating-point format is the shortest decimal representation with a round-trip guarantee. The advantage of this method compared to using the precision of max_digits10 (not digits10 which is not suitable for round trip through decimal) from std::numeric_limits is that it doesn't print unnecessary digits.
In the meantime you can use the {fmt} library, which std::format is based on. For example (godbolt):
fmt::print("{}", 0.1 * 0.1);
Output (assuming IEEE754 double):
0.010000000000000002
{fmt} uses the Dragonbox algorithm for fast binary floating point to decimal conversion. In addition to giving the shortest representation it is 20-30x faster than common standard library implementations of printf and iostreams.
Disclaimer: I'm the author of {fmt} and C++20 std::format.
A double has the precision of 52 binary digits or 15.95 decimal digits. See http://en.wikipedia.org/wiki/IEEE_754-2008. You need at least 16 decimal digits to record the full precision of a double in all cases. [But see fourth edit, below].
By the way, this means significant digits.
Answer to OP edits:
Your floating point to decimal string runtime is outputting way more digits than are significant. A double can only hold 52 bits of significand (actually 53, if you count a "hidden" 1 that is not stored). That means the resolution is not more than 2^-53 = 1.11e-16.
For example: 1 + 2^-52 = 1.0000000000000002220446049250313 . . . .
Those decimal digits, .0000000000000002220446049250313 . . . . are the smallest binary "step" in a double when converted to decimal.
The "step" inside the double is:
.0000000000000000000000000000000000000000000000000001 in binary.
Note that the binary step is exact, while the decimal step is inexact.
Hence the decimal representation above,
1.0000000000000002220446049250313 . . .
is an inexact representation of the exact binary number:
1.0000000000000000000000000000000000000000000000000001.
Third Edit:
The next possible value for a double, which in exact binary is:
1.0000000000000000000000000000000000000000000000000010
converts inexactly in decimal to
1.0000000000000004440892098500626 . . . .
So all of those extra digits in the decimal are not really significant, they are just base conversion artifacts.
Fourth Edit:
Though a double stores at most 16 significant decimal digits, sometimes 17 decimal digits are necessary to represent the number. The reason has to do with digit slicing.
As I mentioned above, there are 52 + 1 binary digits in the double. The "+ 1" is an assumed leading 1, and is neither stored nor significant. In the case of an integer, those 52 binary digits form a number between 0 and 2^53 - 1. How many decimal digits are necessary to store such a number? Well, log_10 (2^53 - 1) is about 15.95. So at most 16 decimal digits are necessary. Let's label these d_0 to d_15.
Now consider that IEEE floating point numbers also have a binary exponent. What happens when we increment the exponent by, say, 2? We have multiplied our 52-bit number, whatever it was, by 4. Now, instead of our 52 binary digits aligning perfectly with our decimal digits d_0 to d_15, we have some significant binary digits represented in d_16. However, since we multiplied by something less than 10, we still have significant binary digits represented in d_0. So our 15.95 decimal digits now occupy d_1 to d_15, plus some upper bits of d_0 and some lower bits of d_16. This is why 17 decimal digits are sometimes needed to represent an IEEE double.
Fifth Edit
Fixed numerical errors
The easiest way (for IEEE 754 double) to guarantee a round-trip conversion is to always use 17 significant digits. But that has the disadvantage of sometimes including unnecessary noise digits (0.1 → "0.10000000000000001").
An approach that's worked for me is to sprintf the number with 15 digits of precision, then check if atof gives you back the original value. If it doesn't, try 16 digits. If that doesn't work, use 17.
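That strategy only takes a few lines; here's a sketch (shortest_round_trip is a made-up name, and IEEE-754 double is assumed):

#include <cstdio>
#include <cstdlib>

// Try 15 significant digits, then 16, then 17; keep the first string
// that parses back to exactly the original value.
void shortest_round_trip(double v, char *buf, std::size_t size) {
    for (int prec = 15; prec <= 17; ++prec) {
        std::snprintf(buf, size, "%.*g", prec, v);
        if (std::strtod(buf, nullptr) == v)
            return;
    }
}

int main() {
    char buf[32];
    shortest_round_trip(0.1, buf, sizeof buf);
    std::printf("%s\n", buf);  // "0.1" rather than "0.10000000000000001"
}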
You might want to try David Gay's algorithm (used in Python 3.1 to implement float.__repr__).
Thanks to ThomasMcLeod for pointing out the error in my table computation
To guarantee round-trip conversion using 15 or 16 or 17 digits is only possible for a comparatively few cases. The number 15.95 comes from taking 2^53 (1 implicit bit + 52 bits in the significand/"mantissa") which comes out to an integer in the range 10^15 to 10^16 (closer to 10^16).
Consider a double precision value x with an exponent of 0, i.e. it falls into the floating point range 1.0 <= x < 2.0. The implicit bit will mark the 2^0 component (part) of x. The highest explicit bit of the significand will denote the next lower exponent (from 0) <=> -1 => 2^-1, or the 0.5 component.
The next bit 0.25, the ones after 0.125, 0.0625, 0.03125, 0.015625 and so on (see table below). The value 1.5 will thus be represented by two components added together: the implicit bit denoting 1.0 and the highest explicit significand bit denoting 0.5.
This illustrates that from the implicit bit downward you have 52 additional, explicit bits to represent possible components, where the smallest is 0 (exponent) - 52 (explicit bits in significand) = -52 => 2^-52, which according to the table below is ... well, you can see for yourselves that it comes out to quite a bit more than 15.95 significant digits (37, to be exact). To put it another way: the smallest number in the 2^0 range that is != 1.0 itself is 2^0 + 2^-52, which is 1.0 plus the number next to 2^-52 (below) = (exactly) 1.0000000000000002220446049250313080847263336181640625, a value I count as being 53 significant digits long. With 17-digit formatting "precision" the number will display as 1.0000000000000002, and this depends on the library converting correctly.
So maybe "round-trip conversion in 17 digits" is not really a concept that is valid (enough).
2^ -1 = 0.5000000000000000000000000000000000000000000000000000
2^ -2 = 0.2500000000000000000000000000000000000000000000000000
2^ -3 = 0.1250000000000000000000000000000000000000000000000000
2^ -4 = 0.0625000000000000000000000000000000000000000000000000
2^ -5 = 0.0312500000000000000000000000000000000000000000000000
2^ -6 = 0.0156250000000000000000000000000000000000000000000000
2^ -7 = 0.0078125000000000000000000000000000000000000000000000
2^ -8 = 0.0039062500000000000000000000000000000000000000000000
2^ -9 = 0.0019531250000000000000000000000000000000000000000000
2^-10 = 0.0009765625000000000000000000000000000000000000000000
2^-11 = 0.0004882812500000000000000000000000000000000000000000
2^-12 = 0.0002441406250000000000000000000000000000000000000000
2^-13 = 0.0001220703125000000000000000000000000000000000000000
2^-14 = 0.0000610351562500000000000000000000000000000000000000
2^-15 = 0.0000305175781250000000000000000000000000000000000000
2^-16 = 0.0000152587890625000000000000000000000000000000000000
2^-17 = 0.0000076293945312500000000000000000000000000000000000
2^-18 = 0.0000038146972656250000000000000000000000000000000000
2^-19 = 0.0000019073486328125000000000000000000000000000000000
2^-20 = 0.0000009536743164062500000000000000000000000000000000
2^-21 = 0.0000004768371582031250000000000000000000000000000000
2^-22 = 0.0000002384185791015625000000000000000000000000000000
2^-23 = 0.0000001192092895507812500000000000000000000000000000
2^-24 = 0.0000000596046447753906250000000000000000000000000000
2^-25 = 0.0000000298023223876953125000000000000000000000000000
2^-26 = 0.0000000149011611938476562500000000000000000000000000
2^-27 = 0.0000000074505805969238281250000000000000000000000000
2^-28 = 0.0000000037252902984619140625000000000000000000000000
2^-29 = 0.0000000018626451492309570312500000000000000000000000
2^-30 = 0.0000000009313225746154785156250000000000000000000000
2^-31 = 0.0000000004656612873077392578125000000000000000000000
2^-32 = 0.0000000002328306436538696289062500000000000000000000
2^-33 = 0.0000000001164153218269348144531250000000000000000000
2^-34 = 0.0000000000582076609134674072265625000000000000000000
2^-35 = 0.0000000000291038304567337036132812500000000000000000
2^-36 = 0.0000000000145519152283668518066406250000000000000000
2^-37 = 0.0000000000072759576141834259033203125000000000000000
2^-38 = 0.0000000000036379788070917129516601562500000000000000
2^-39 = 0.0000000000018189894035458564758300781250000000000000
2^-40 = 0.0000000000009094947017729282379150390625000000000000
2^-41 = 0.0000000000004547473508864641189575195312500000000000
2^-42 = 0.0000000000002273736754432320594787597656250000000000
2^-43 = 0.0000000000001136868377216160297393798828125000000000
2^-44 = 0.0000000000000568434188608080148696899414062500000000
2^-45 = 0.0000000000000284217094304040074348449707031250000000
2^-46 = 0.0000000000000142108547152020037174224853515625000000
2^-47 = 0.0000000000000071054273576010018587112426757812500000
2^-48 = 0.0000000000000035527136788005009293556213378906250000
2^-49 = 0.0000000000000017763568394002504646778106689453125000
2^-50 = 0.0000000000000008881784197001252323389053344726562500
2^-51 = 0.0000000000000004440892098500626161694526672363281250
2^-52 = 0.0000000000000002220446049250313080847263336181640625
@ThomasMcLeod: I think the significant digit rule comes from my field, physics, and means something more subtle:
If you have a measurement that gets you the value 1.52 and you cannot read any more detail off the scale, and say you are supposed to add another number (for example of another measurement because this one's scale was too small) to it, say 2, then the result (obviously) has only 2 decimal places, i.e. 3.52.
But likewise, if you add 1.1111111111 to the value 1.52, you get the value 2.63 (and nothing more!).
The reason for the rule is to prevent you from kidding yourself into thinking you got more information out of a calculation than you put in by the measurement (which is impossible, but would seem that way by filling it with garbage, see above).
That said, this specific rule is for addition only (for addition, the error of the result is the sum of the two errors - so if you measure just one badly, tough luck, there goes your precision...).
How to get the other rules:
Let's say a is the measured number and δa the error. Let's say your original formula was:
f := m·a
Let's say you also measure m with error δm (let that be the positive side).
Then the actual limits are:
f_up = (m + δm)·(a + δa)
and
f_down = (m − δm)·(a − δa)
So,
f_up = m·a + δm·δa + (δm·a + m·δa)
f_down = m·a + δm·δa − (δm·a + m·δa)
Hence, now the significant digits are even fewer:
f_up ≈ m·a + (δm·a + m·δa)
f_down ≈ m·a − (δm·a + m·δa)
and so
δf = δm·a + m·δa
If you look at the relative error, you get:
δf/f = δm/m + δa/a
For division it is
δf/f = δm/m − δa/a
Hope that gets the gist across and hope I didn't make too many mistakes, it's late here :-)
tl;dr: Significant digits mean how many of the digits in the output actually come from the digits in your input (in the real world, not the distorted picture that floating point numbers have).
If your measurements were 1 with "no" error and 3 with "no" error and the function is supposed to be 1/3, then yes, all infinite digits are actual significant digits. Otherwise, the inverse operation would not work, so obviously they have to be.
If significant digit rule means something completely different in another field, carry on :-)