Let's say I have an input 1.251564.
How can I find how many digits are after "." to have an output as follows:
int numFloating;
// code to go here that leads to
// numFloating == 6
p.s. Sorry for not providing any code, I just have no idea how that should be implemented :(
Thanks for your answers!
Let us consider your number, 1.251564. When you store this in a double, it is stored in the binary IEEE754 format, and you may find that the number is not exactly representable. So, let us check for this number. The closest representable double is:
1.25156 39999 99999 89880 45035 73046 53152 82344 81811 52343 75
This probably comes as something of a surprise to you. There are 52 decimal digits following the decimal point.
The lesson that you need to take away from this is that if you want to ask questions about decimal representations, you need to use a decimal data type rather than double. Once you can actually represent the value exactly, then you will be able to reason about it in a manner that matches your expectations.
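You can see this for yourself by asking the stream for more digits than a double is usually shown with. A minimal sketch (the exact tail of the output depends on the quality of your library's binary-to-decimal conversion):

#include <iomanip>
#include <iostream>

int main() {
    double d = 1.251564;
    // Printing with 60 digits exposes the stored value,
    // 1.25156399999999989880...
    std::cout << std::setprecision(60) << d << '\n';
}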
The simplest way would be to store it in a string:

#include <string>

std::string str("1.1234");
size_t found = str.find('.');                  // position of the decimal point
size_t count = (found == std::string::npos)
                   ? 0                         // no fractional part at all
                   : str.length() - found - 1; // digits after the point
int finallyGotTheCount = static_cast<int>(count);
This won't always end well. The problem is that many decimal numbers cannot be represented exactly in binary (which is how your computer stores them), so rounding errors creep in.
For example, when adding 1/3 + 1/3 + 1/3 you might get 0.999999..., and the number of decimal places varies greatly.
ravi already provided a good way to calculate it, so I'll provide a different one:
double number = 0; // set this to the number you want to check
int numFloating = 0;
while ((double)(long long)number != number) { // truncation changes it => a fractional part remains
    number *= 10;
    numFloating++;
}
number is a double variable that holds the number you want to check for decimal places.
If you have a fractional number, let's say .1234:
Repeatedly multiply by 10 and throw away the integer portion of the number until you get zero. The number of steps will be the number of decimals. E.g.:
.1234 * 10 = 1.234
.234 * 10 = 2.34
.34 * 10 = 3.4
.4 * 10 = 4.0
Problems will occur, however, when you have a number that is "floating", like 1.199999999.
#include <cmath>

int numFloating = 0;
double origin = 1.251564;
double value = origin - floor(origin); // isolate the fractional part
while (value != 0)
{
    value *= 10;
    value = value - floor(value);
    numFloating++;
}
With this code the answer is sometimes wrong, e.g. a fractional part that should reach exactly zero may never do so in floating point. Obviously the output depends on how the value is really stored.
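A hedged variant of the same loop, assuming you are willing to pick a tolerance: treat anything within eps of an integer as "done", and cap the digit count so representation noise cannot run away:

#include <cmath>

// Returns the number of decimal digits, up to maxDigits; eps absorbs
// representation noise such as .9999999 fractional parts.
int countDecimals(double x, double eps = 1e-9, int maxDigits = 15) {
    int n = 0;
    double frac = x - std::floor(x);
    while (frac > eps && frac < 1.0 - eps && n < maxDigits) {
        x *= 10.0;
        frac = x - std::floor(x);
        ++n;
    }
    return n; // countDecimals(1.251564) == 6
}

The choice of eps is a trade-off: too small and stored noise counts as extra digits, too large and genuine small digits are swallowed.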
Given two doubles I need to calculate the percentage and express them to up to 2 decimal places. What is the most efficient way to do so?
For example
x = 10.2476
y = 100
I would return 10.25.
Efficient as in runtime speed. It needs to express x/y*100 to 2 decimal places.
Use an integer representation if you really need a fixed point representation, and scale your numbers by 100. So x = 10.2476 becomes xi = 1025:

double x = 10.2476;
int xi = (x + 0.005) * 100; // add half a hundredth, then truncate: 1025.26 -> 1025
In many cases, floating point representations are not needed, even when numbers smaller than 1 are used.
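For the concrete numbers in the question, a minimal sketch of the same scale-then-round idea (std::llround is just one way to round; names are illustrative):

#include <cmath>
#include <cstdio>

int main() {
    double x = 10.2476, y = 100.0;
    // Scale the percentage by 100, round to the nearest integer,
    // then divide back down: 1024.76 -> 1025 -> 10.25.
    long long scaled = std::llround(x / y * 100.0 * 100.0);
    std::printf("%.2f\n", scaled / 100.0); // prints 10.25
}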
"express them to upto 2 decimal places" means you have only 2 mantissa digits in the output.
10.2476-> 10
102.476 -> 1.0E+2, NOT 100!
0.00102476 -> 1.0E-3
So, working with mantissa, it is impossible to show the result not using floats or doubles.
In printf, the keys g and G set the number of significant digits. %2g - exactly 2 decimal places.
As for counting, the decimal division is not a problem operation (as + or -). So, simply divide them, multiply by 100, to show in percents - no problems.
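A quick sketch of that printf behavior; note these are significant digits, so %g switches to exponent form when needed:

#include <cstdio>

int main() {
    std::printf("%.2g\n", 10.2476);    // 10
    std::printf("%.2g\n", 102.476);    // 1e+02
    std::printf("%.2g\n", 0.00102476); // 0.001
}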
Someone wanting less precision would write
999 format ('The answer is x = ', F8.3)
Others wanting higher output precision may write
999 format ('The answer is x = ', F18.12)
Thus it totally depends on what the user desires. What is the format
statement that exactly matches the precision used in the calculation?
(Note, this may vary from system to system)
It is a difficult question because you request "the precision of the calculation", which depends on so many factors. For example: if I solve f(x)=0 via Newton's method to a tolerance of 1E-6, would you want a format with seven digits?
On the other hand, if you mean the "highest precision attainable by the type" (e. g., double or single precision) then you can simply find the corresponding epsilon (machine eps, or precision) and use that as the format flag. If epsilon is 1E-15, then you can use a format flag that does not have more than 16 digits.
In Fortran you can use the EPSILON(X) function to get this number (the answer will depend on the type of X), then you can take the floor of the absolute value of the logarithm (base 10) of epsilon, and make that the number of decimals in your float representation.
For example, if epsilon is 1E-12, the log is -12, the abs is 12, and the floor is 12, so you want a format like F15.12 (12 decimals + 1 point + the zero + the sign = 15 places).
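For comparison, the same epsilon-to-decimals computation sketched in C++ (a sketch only; it assumes IEEE 754 double, for which the result is 15):

#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    double eps = std::numeric_limits<double>::epsilon();        // ~2.22e-16
    int decimals = (int)std::floor(std::fabs(std::log10(eps))); // 15
    std::printf("%.*f\n", decimals, 4.0 * std::atan(1.0));      // pi to 15 decimals
}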
The problem with floating point numbers is that there is no precision as such: only significant digits.
For instance, if you are calculating longitudes in real*4, near the UK you'd be accurate to 6 decimal places, but if you were in Colorado Springs, it would only be accurate to 4 decimal places. It would not make any sense to print the number in F format; it is just rubbish after the 4th decimal place.
If you wish to print to maximum precision, print in E format. Since it is always n.nn..nEnn, you get all the significant digits.
Edit - user4050's query
Try the following example
program main
    real intpart, multiplier
    integer ii
    multiplier = 1
    do ii = 1, 6
        intpart = 9.87654321
        intpart = intpart * multiplier
        print '(F15.7, E15.7, G15.8)', intpart, intpart, intpart
        multiplier = multiplier * 10
    end do
    stop
end program
What you will get is something like
9.8765430 0.9876543E+01 9.8765430
98.7654266 0.9876543E+02 98.765427
987.6542969 0.9876543E+03 987.65430
9876.5429688 0.9876543E+04 9876.5430
98765.4296875 0.9876543E+05 98765.430
987654.3125000 0.9876543E+06 987654.31
Notice that the precision changes as the number gets bigger because a float only has 7 significant figures.
Problem description
During my fluid simulation, the physical time is marching as 0, 0.001, 0.002, ..., 4.598, 4.599, 4.6, 4.601, 4.602, .... Now I want to choose time = 0.1, 0.2, ..., 4.5, 4.6, ... from this time series and then do the further analysis. So I wrote the following code to judge if the fractpart hits zero.
But I was surprised to find that the following two division methods give two different results. What should I do?
double param, fractpart, intpart;
double org = 4.6;
double ddd = 0.1;
// This is the correct one I need. I got intpart=46 and fractpart=0
// param = org*(1/ddd);
// This is not what I want. I got intpart=45 and fractpart=1
param = org/ddd;
fractpart = modf(param, &intpart);
Info<< "\n\nfractpart\t=\t"
    << fractpart
    << "\nAnd intpart\t=\t"
    << intpart
    << endl;
Why does it happen in this way?
And if you guys tolerate me a little bit, can I shout loudly: "Could C++ committee do something about this? Because this is confusing." :)
What is the best way to get a correct remainder to avoid the cut-off error effect? Is fmod a better solution? Thanks
Response to the answer of David Schwartz:
double aTmp = 1;
double bTmp = 2;
double cTmp = 3;
double AAA = bTmp/cTmp;
double BBB = bTmp*(aTmp/cTmp);
Info<< "\n2/3\t=\t"
    << AAA
    << "\n2*(1/3)\t=\t"
    << BBB
    << endl;
And for both I got:
2/3 = 0.666667
2*(1/3) = 0.666667
Floating point values cannot exactly represent every possible number, so your numbers are being approximated. This results in different results when used in calculations.
If you need to compare floating point numbers, you should always use a small epsilon value rather than testing for equality. In your case I would round to the nearest integer (not round down), subtract that from the original value, and compare the abs() of the result against an epsilon.
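A sketch of that suggestion (the epsilon value here is illustrative; pick one suited to your step size):

#include <cmath>

// True when t is within eps of a whole number. std::round picks the
// *nearest* integer, so 45.999999999999993 still maps to 46.
bool isWhole(double t, double eps = 1e-9) {
    return std::fabs(t - std::round(t)) <= eps;
}

// e.g. isWhole(4.6 / 0.1) is true even though 4.6 / 0.1 != 46 exactly.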
If the question is, why does the sum differ, the simple answer is that they are different sums. For a longer explanation, here are the actual representations of the numbers involved:
org: 4.5999999999999996 = 0x12666666666666 * 2^-50
ddd: 0.10000000000000001 = 0x1999999999999a * 2^-56
1/ddd: 10 = 0x14000000000000 * 2^-49
org * (1/ddd): 46 = 0x17000000000000 * 2^-47
org / ddd: 45.999999999999993 = 0x16ffffffffffff * 2^-47
You will see that neither input value is exactly represented in a double, each having been rounded up or down to the nearest value. org has been rounded down, because the next bit in the sequence would be 0. ddd has been rounded up, because the next bit in that sequence would be a 1.
Because of this, when mathematical operations are performed the rounding can either cancel, or accumulate, depending on the operation and how the original numbers have been rounded.
In this case, 1/0.1 happens to round neatly back to exactly 10.
Multiplying org by 10 happens to round up.
Dividing org by ddd happens to round down (I say 'happens to', but you're dividing a rounded-down number by a rounded-up number, so it's natural that the result is less).
Different inputs will round differently.
It's only a single bit of error, which can be easily ignored with even a tiny epsilon.
If I understand your question correctly, it's this: why, with limited-precision arithmetic, is X/Y not the same as X * (1/Y)?
And the reason is simple: Consider, for example, using six digits of decimal precision. While this is not what doubles actually do, the concept is precisely the same.
With six decimal digits, 1/3 is .333333. But 2/3 is .666667. So:
2 / 3 = .666667
2 * (1/3) = 2 * .333333 = .666666
That's just the nature of fixed-precision math. If you can't tolerate this behavior, don't use limited-precision types.
Hm, I'm not really sure what you want to achieve, but if you want to get a value and then refine it in the range of 1/1000, why not use integers instead of floats/doubles?
You would have a divisor, which is 1000, and integer steps that you iterate over, dividing each by the divisor when you use it.
So you would get something like
double org = ... // comes from somewhere
int divisor = 1000;
int referenceValue = org * divisor;
for (int step = referenceValue - 10; step < referenceValue + 10; ++step) {
    // use (double)step / divisor to feed to your algorithm
}
You can't represent 4.6 precisely: http://www.binaryconvert.com/result_double.html?decimal=052046054
Use rounding before separating integer and fraction parts.
UPDATE
You may wish to use rational class from Boost library: http://www.boost.org/doc/libs/1_52_0/libs/rational/rational.html
CONCERNING YOUR TASK
To find the required double, take precision into account; for example, to find 4.6, calculate its "closeness":

double time;
...
double epsilon = 0.001;
if (std::fabs(time - 4.6) <= epsilon) { // std::fabs from <cmath>
    // found!
}
How do you print a double to a stream so that when it is read in you don't lose precision?
I tried:
std::stringstream ss;
double v = 0.1 * 0.1;
ss << std::setprecision(std::numeric_limits<double>::digits10) << v << " ";
double u;
ss >> u;
std::cout << "precision " << ((u == v) ? "retained" : "lost") << std::endl;
This did not work as I expected.
But I can increase precision (which surprised me as I thought that digits10 was the maximum required).
ss << std::setprecision(std::numeric_limits<double>::digits10 + 2) << v << " ";
// ^^^^^^ +2
It has to do with the number of significant digits; the leading zeros in 0.01 don't count.
So has anybody looked at representing floating point numbers exactly?
What is the exact magic incantation I need to apply to the stream?
After some experimentation:
The trouble was with my original version. There were non-significant digits in the string after the decimal point that affected the accuracy.
To compensate for this, we can use scientific notation:
ss << std::scientific
<< std::setprecision(std::numeric_limits<double>::digits10 + 1)
<< v;
This still does not explain the need for the +1 though.
Also if I print out the number with more precision I get more precision printed out!
std::cout << std::scientific << std::setprecision(std::numeric_limits<double>::digits10) << v << "\n";
std::cout << std::scientific << std::setprecision(std::numeric_limits<double>::digits10 + 1) << v << "\n";
std::cout << std::scientific << std::setprecision(std::numeric_limits<double>::digits) << v << "\n";
It results in:
1.000000000000000e-02
1.0000000000000002e-02
1.00000000000000019428902930940239457413554200000000000e-02
Based on Stephen Canon's answer below:
We can print out exactly by using the printf() formatter, "%a" or "%A". To achieve this in C++ we need to use the fixed and scientific manipulators (see n3225: 22.4.2.2.2p5 Table 88)
std::cout.flags(std::ios_base::fixed | std::ios_base::scientific);
std::cout << v;
For now I have defined:
template<typename T>
std::ostream& precise(std::ostream& stream)
{
    stream.flags(std::ios_base::fixed | std::ios_base::scientific); // set the flags on the stream being manipulated
    return stream;
}
std::ostream& preciselngd(std::ostream& stream) { return precise<long double>(stream); }
std::ostream& precisedbl(std::ostream& stream)  { return precise<double>(stream); }
std::ostream& preciseflt(std::ostream& stream)  { return precise<float>(stream); }
Next: How do we handle NaN/Inf?
It's not correct to say "floating point is inaccurate", although I admit that's a useful simplification. If we used base 8 or 16 in real life then people around here would be saying "base 10 decimal fraction packages are inaccurate, why did anyone ever cook those up?".
The problem is that integral values translate exactly from one base into another, but fractional values do not, because they represent fractions of the integral step and only a few of them are used.
Floating point arithmetic is technically perfectly accurate. Every calculation has one and only one possible result. There is a problem, and it is that most decimal fractions have base-2 representations that repeat. In fact, in the sequence 0.01, 0.02, ... 0.99, only a mere 3 values have exact binary representations. (0.25, 0.50, and 0.75.) There are 96 values that repeat and therefore are obviously not represented exactly.
Now, there are a number of ways to write and read back floating point numbers without losing a single bit. The idea is to avoid trying to express the binary number with a base 10 fraction.
Write them as binary. These days, everyone implements the IEEE-754 format so as long as you choose a byte order and write or read only that byte order, then the numbers will be portable.
Write them as 64-bit integer values. Here you can use the usual base 10. (Because you are representing the 64-bit aliased integer, not the 52-bit fraction.)
You can also just write more decimal fraction digits. Whether this is bit-for-bit accurate will depend on the quality of the conversion libraries and I'm not sure I would count on perfect accuracy (from the software) here. But any errors will be exceedingly small and your original data certainly has no information in the low bits. (None of the constants of physics and chemistry are known to 52 bits, nor has any distance on earth ever been measured to 52 bits of precision.) But for a backup or restore where bit-for-bit accuracy might be compared automatically, this obviously isn't ideal.
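A sketch of the second option (the 64-bit alias), using memcpy rather than a pointer cast so the aliasing is well defined:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    double v = 0.1 * 0.1;
    std::uint64_t bits;
    std::memcpy(&bits, &v, sizeof bits);    // capture the exact bit pattern
    std::printf("%llu\n", (unsigned long long)bits);

    double back;
    std::memcpy(&back, &bits, sizeof back); // restore: bit-for-bit identical
    std::printf("%d\n", back == v);         // prints 1
}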
Don't print floating-point values in decimal if you don't want to lose precision. Even if you print enough digits to represent the number exactly, not all implementations have correctly-rounded conversions to/from decimal strings over the entire floating-point range, so you may still lose precision.
Use hexadecimal floating point instead. In C:
printf("%a\n", yourNumber);
C++0x provides the hexfloat manipulator for iostreams that does the same thing (on some platforms, using the std::hex modifier has the same result, but this is not a portable assumption).
Using hex floating point is preferred for several reasons.
First, the printed value is always exact. No rounding occurs in writing or reading a value formatted in this way. Beyond the accuracy benefits, this means that reading and writing such values can be faster with a well tuned I/O library. They also require fewer digits to represent values exactly.
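A small round-trip sketch of the %a route; strtod has accepted hexadecimal floats since C99, so no decimal rounding is involved at either end:

#include <cstdio>
#include <cstdlib>

int main() {
    double v = 0.1 * 0.1;
    char buf[64];
    std::snprintf(buf, sizeof buf, "%a", v); // e.g. 0x1.47ae147ae147cp-7
    double u = std::strtod(buf, nullptr);    // parses the hex float exactly
    std::printf("%s round-trips: %d\n", buf, u == v); // prints 1
}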
I got interested in this question because I'm trying to (de)serialize my data to & from JSON.
I think I have a clearer explanation (with less hand waving) for why 17 decimal digits are sufficient to reconstruct the original number losslessly:
Imagine 3 number lines:
1. for the original base 2 number
2. for the rounded base 10 representation
3. for the reconstructed number (same as #1 because both in base 2)
When you convert to base 10, graphically, you choose the tic on the 2nd number line closest to the tic on the 1st. Likewise when you reconstruct the original from the rounded base 10 value.
The critical observation I had was that, in order to allow exact reconstruction, the base 10 step size (quantum) has to be smaller than the base 2 quantum. Otherwise two adjacent base 2 tics can map to the same base 10 tic, and you inevitably get a bad reconstruction.
Take the specific case of when the exponent is 0 for the base2 representation. Then the base2 quantum will be 2^-52 ~= 2.22 * 10^-16. The closest base 10 quantum that's less than this is 10^-16. Now that we know the required base 10 quantum, how many digits will be needed to encode all possible values? Given that we're only considering the case of exponent = 0, the dynamic range of values we need to represent is [1.0, 2.0). Therefore, 17 digits would be required (16 digits for fraction and 1 digit for integer part).
For exponents other than 0, we can use the same logic:
exponent base2 quant. base10 quant. dynamic range digits needed
---------------------------------------------------------------------
1 2^-51 10^-16 [2, 4) 17
2 2^-50 10^-16 [4, 8) 17
3 2^-49 10^-15 [8, 16) 17
...
32 2^-20 10^-7 [2^32, 2^33) 17
1022 9.98e291 1.0e291 [4.49e307,8.99e307) 17
While not exhaustive, the table shows the trend that 17 digits are sufficient.
Hope you like my explanation.
In C++20 you'll be able to use std::format to do this:
std::stringstream ss;
double v = 0.1 * 0.1;
ss << std::format("{}", v);
double u;
ss >> u;
assert(v == u);
The default floating-point format is the shortest decimal representation with a round-trip guarantee. The advantage of this method compared to using the precision of max_digits10 (not digits10, which is not suitable for round-tripping through decimal) from std::numeric_limits is that it doesn't print unnecessary digits.
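For comparison, a sketch of the pre-C++20 route the answer mentions; max_digits10 (17 for IEEE 754 double) guarantees the round trip but keeps the noise digits:

#include <cassert>
#include <iomanip>
#include <limits>
#include <sstream>

int main() {
    std::stringstream ss;
    double v = 0.1 * 0.1;
    ss << std::setprecision(std::numeric_limits<double>::max_digits10) << v;
    // ss.str() is "0.010000000000000002": correct, but with trailing noise
    double u;
    ss >> u;
    assert(u == v);
}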
In the meantime you can use the {fmt} library, which std::format is based on. For example (godbolt):
fmt::print("{}", 0.1 * 0.1);
Output (assuming IEEE754 double):
0.010000000000000002
{fmt} uses the Dragonbox algorithm for fast binary floating point to decimal conversion. In addition to giving the shortest representation it is 20-30x faster than common standard library implementations of printf and iostreams.
Disclaimer: I'm the author of {fmt} and C++20 std::format.
A double has a precision of 53 binary digits (52 explicitly stored), or about 15.95 decimal digits. See http://en.wikipedia.org/wiki/IEEE_754-2008. You need at least 16 decimal digits to record the full precision of a double in all cases. [But see the fourth edit, below.]
By the way, this means significant digits.
Answer to OP edits:
Your runtime's floating point to decimal conversion is outputting way more digits than are significant. A double can only hold 52 bits of significand (actually 53, if you count the "hidden" 1 that is not stored). That means that the resolution is not more than 2^-53 = 1.11e-16.
For example: 1 + 2 ^ -52 = 1.0000000000000002220446049250313 . . . .
Those decimal digits, .0000000000000002220446049250313 . . . . are the smallest binary "step" in a double when converted to decimal.
The "step" inside the double is:
.0000000000000000000000000000000000000000000000000001 in binary.
Note that the binary step is exact, while the decimal step is inexact.
Hence the decimal representation above,
1.0000000000000002220446049250313 . . .
is an inexact representation of the exact binary number:
1.0000000000000000000000000000000000000000000000000001.
Third Edit:
The next possible value for a double, which in exact binary is:
1.0000000000000000000000000000000000000000000000000010
converts inexactly in decimal to
1.0000000000000004440892098500626 . . . .
So all of those extra digits in the decimal are not really significant, they are just base conversion artifacts.
Fourth Edit:
Though a double stores at most 16 significant decimal digits, sometimes 17 decimal digits are necessary to represent the number. The reason has to do with digit slicing.
As I mentioned above, there are 52 + 1 binary digits in the double. The "+ 1" is an assumed leading 1 that is not stored. In the case of an integer, those 53 binary digits form a number between 0 and 2^53 - 1. How many decimal digits are necessary to store such a number? Well, log_10 (2^53 - 1) is about 15.95. So at most 16 decimal digits are necessary. Let's label these d_0 to d_15.
Now consider that IEEE floating point numbers also have a binary exponent. What happens when we increment the exponent by, say, 2? We have multiplied our 53-bit number, whatever it was, by 4. Now, instead of our 53 binary digits aligning perfectly with our decimal digits d_0 to d_15, we have some significant binary digits represented in d_16. However, since we multiplied by something less than 10, we still have significant binary digits represented in d_0. So our 15.95 decimal digits now occupy d_1 to d_15, plus some upper bits of d_0 and some lower bits of d_16. This is why 17 decimal digits are sometimes needed to represent an IEEE double.
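A small demonstration of the slicing effect (a sketch assuming IEEE 754 double and a correctly rounding strtod): the neighbour of 1.0 survives a 17-digit round trip but not a 16-digit one.

#include <cmath>
#include <cstdio>
#include <cstdlib>

int main() {
    double v = std::nextafter(1.0, 2.0); // 1 + 2^-52
    char buf[32];
    std::snprintf(buf, sizeof buf, "%.16g", v); // "1"
    std::printf("16 digits round-trips: %d\n", std::strtod(buf, nullptr) == v); // 0
    std::snprintf(buf, sizeof buf, "%.17g", v); // "1.0000000000000002"
    std::printf("17 digits round-trips: %d\n", std::strtod(buf, nullptr) == v); // 1
}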
Fifth Edit
Fixed numerical errors
The easiest way (for IEEE 754 double) to guarantee a round-trip conversion is to always use 17 significant digits. But that has the disadvantage of sometimes including unnecessary noise digits (0.1 → "0.10000000000000001").
An approach that's worked for me is to sprintf the number with 15 digits of precision, then check if atof gives you back the original value. If it doesn't, try 16 digits. If that doesn't work, use 17.
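A sketch of that trial loop (the function name is mine; a correctly rounding strtod is assumed, and NaN would need special-casing):

#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Writes the shortest 15-17 digit string that parses back to exactly v
// and returns the digit count used; 17 always suffices for IEEE 754 double.
int shortestDigits(double v, char* buf, std::size_t size) {
    for (int digits = 15; digits <= 17; ++digits) {
        std::snprintf(buf, size, "%.*g", digits, v);
        if (std::strtod(buf, nullptr) == v)
            return digits;
    }
    return 17;
}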
You might want to try David Gay's algorithm (used in Python 3.1 to implement float.__repr__).
Thanks to ThomasMcLeod for pointing out the error in my table computation
To guarantee round-trip conversion using 15 or 16 or 17 digits is only possible for comparatively few cases. The number 15.95 comes from taking log10 of 2^53 (1 implicit bit + 52 bits in the significand/"mantissa"), which comes out to an integer in the range 10^15 to 10^16 (closer to 10^16).
Consider a double precision value x with an exponent of 0, i.e. it falls into the floating point range 1.0 <= x < 2.0. The implicit bit will mark the 2^0 component (part) of x. The highest explicit bit of the significand denotes the next lower exponent, -1, i.e. the 2^-1 or 0.5 component.
The next bit denotes 0.25, the ones after that 0.125, 0.0625, 0.03125, 0.015625 and so on (see the table below). The value 1.5 will thus be represented by two components added together: the implicit bit denoting 1.0 and the highest explicit significand bit denoting 0.5.
This illustrates that from the implicit bit downward you have 52 additional, explicit bits to represent possible components, where the smallest is 2^(0 - 52) = 2^-52. According to the table below, that value has quite a bit more than 15.95 significant digits (37, to be exact). To put it another way, the smallest number in the 2^0 range that is != 1.0 itself is 2^0 + 2^-52, which is 1.0 plus the 2^-52 entry below = (exactly) 1.0000000000000002220446049250313080847263336181640625, a value which I count as being 53 significant digits long. With 17-digit formatting "precision" the number will display as 1.0000000000000002, and this depends on the library converting correctly.
So maybe "round-trip conversion in 17 digits" is not really a concept that is valid (enough).
2^ -1 = 0.5000000000000000000000000000000000000000000000000000
2^ -2 = 0.2500000000000000000000000000000000000000000000000000
2^ -3 = 0.1250000000000000000000000000000000000000000000000000
2^ -4 = 0.0625000000000000000000000000000000000000000000000000
2^ -5 = 0.0312500000000000000000000000000000000000000000000000
2^ -6 = 0.0156250000000000000000000000000000000000000000000000
2^ -7 = 0.0078125000000000000000000000000000000000000000000000
2^ -8 = 0.0039062500000000000000000000000000000000000000000000
2^ -9 = 0.0019531250000000000000000000000000000000000000000000
2^-10 = 0.0009765625000000000000000000000000000000000000000000
2^-11 = 0.0004882812500000000000000000000000000000000000000000
2^-12 = 0.0002441406250000000000000000000000000000000000000000
2^-13 = 0.0001220703125000000000000000000000000000000000000000
2^-14 = 0.0000610351562500000000000000000000000000000000000000
2^-15 = 0.0000305175781250000000000000000000000000000000000000
2^-16 = 0.0000152587890625000000000000000000000000000000000000
2^-17 = 0.0000076293945312500000000000000000000000000000000000
2^-18 = 0.0000038146972656250000000000000000000000000000000000
2^-19 = 0.0000019073486328125000000000000000000000000000000000
2^-20 = 0.0000009536743164062500000000000000000000000000000000
2^-21 = 0.0000004768371582031250000000000000000000000000000000
2^-22 = 0.0000002384185791015625000000000000000000000000000000
2^-23 = 0.0000001192092895507812500000000000000000000000000000
2^-24 = 0.0000000596046447753906250000000000000000000000000000
2^-25 = 0.0000000298023223876953125000000000000000000000000000
2^-26 = 0.0000000149011611938476562500000000000000000000000000
2^-27 = 0.0000000074505805969238281250000000000000000000000000
2^-28 = 0.0000000037252902984619140625000000000000000000000000
2^-29 = 0.0000000018626451492309570312500000000000000000000000
2^-30 = 0.0000000009313225746154785156250000000000000000000000
2^-31 = 0.0000000004656612873077392578125000000000000000000000
2^-32 = 0.0000000002328306436538696289062500000000000000000000
2^-33 = 0.0000000001164153218269348144531250000000000000000000
2^-34 = 0.0000000000582076609134674072265625000000000000000000
2^-35 = 0.0000000000291038304567337036132812500000000000000000
2^-36 = 0.0000000000145519152283668518066406250000000000000000
2^-37 = 0.0000000000072759576141834259033203125000000000000000
2^-38 = 0.0000000000036379788070917129516601562500000000000000
2^-39 = 0.0000000000018189894035458564758300781250000000000000
2^-40 = 0.0000000000009094947017729282379150390625000000000000
2^-41 = 0.0000000000004547473508864641189575195312500000000000
2^-42 = 0.0000000000002273736754432320594787597656250000000000
2^-43 = 0.0000000000001136868377216160297393798828125000000000
2^-44 = 0.0000000000000568434188608080148696899414062500000000
2^-45 = 0.0000000000000284217094304040074348449707031250000000
2^-46 = 0.0000000000000142108547152020037174224853515625000000
2^-47 = 0.0000000000000071054273576010018587112426757812500000
2^-48 = 0.0000000000000035527136788005009293556213378906250000
2^-49 = 0.0000000000000017763568394002504646778106689453125000
2^-50 = 0.0000000000000008881784197001252323389053344726562500
2^-51 = 0.0000000000000004440892098500626161694526672363281250
2^-52 = 0.0000000000000002220446049250313080847263336181640625
@ThomasMcLeod: I think the significant digit rule comes from my field, physics, and means something more subtle:
If you have a measurement that gets you the value 1.52 and you cannot read any more detail off the scale, and you are supposed to add another number to it (for example from another measurement, because this one's scale was too small), say 2, then the result (obviously) has only 2 decimal places, i.e. 3.52.
But likewise, if you add 1.1111111111 to the value 1.52, you get the value 2.63 (and nothing more!).
The reason for the rule is to prevent you from kidding yourself into thinking you got more information out of a calculation than you put in by the measurement (which is impossible, but would seem that way by filling it with garbage, see above).
That said, this specific rule is for addition only (for addition the error of the result is the sum of the two errors, so if you measure just one of them badly, tough luck, there goes your precision...).
How to get the other rules:
Let's say a is the measured number and δa the error. Let's say your original formula was:
f:=m a
Let's say you also measure m with error δm (let that be the positive side).
Then the actual limit is:
f_up=(m+δm) (a+δa)
and
f_down=(m-δm) (a-δa)
So,
f_up =m a+δm δa+(δm a+m δa)
f_down=m a+δm δa-(δm a+m δa)
Hence, dropping the tiny second-order term δm δa, the significant digits are now even fewer:
f_up ~m a+(δm a+m δa)
f_down~m a-(δm a+m δa)
and so
δf=δm a+m δa
If you look at the relative error, you get:
δf/f=δm/m+δa/a
For division it is
δf/f=δm/m-δa/a
Hope that gets the gist across and hope I didn't make too many mistakes, it's late here :-)
tl;dr: Significant digits mean how many of the digits in the output actually come from the digits in your input (in the real world, not the distorted picture that floating point numbers have).
If your measurements were 1 with "no" error and 3 with "no" error and the function is supposed to be 1/3, then yes, all infinite digits are actual significant digits. Otherwise, the inverse operation would not work, so obviously they have to be.
If significant digit rule means something completely different in another field, carry on :-)