The IEEE 754 (64-bit) floating point format is supposed to correctly represent 15 significant digits, although the internal representation carries 17 digits. Is there a way to force the 16th and 17th digits to zero?
Ref: http://msdn.microsoft.com/en-us/library/system.double(VS.80).aspx

[...] Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences: [...]
Example numbers:
d1 = 97842111437.390091
d2 = 97842111437.390076
d1 and d2 differ in the 16th and 17th significant digits, which are not supposed to be significant. I am looking for a way to force them to zero, i.e.
d1 = 97842111437.390000
d2 = 97842111437.390000
No. Counter-example: the two closest floating-point numbers to the rational number
1.11111111111118
(which has 15 decimal digits) are
1.1111111111111799942818834097124636173248291015625
1.1111111111111802163264883347437717020511627197265625
In other words, there is no floating-point number whose decimal expansion starts with 1.1111111111111800.
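To see this with the question's own numbers, a hedged illustration: printing them at full precision shows each double already carries nonzero digits past the 15th place, and rounding only for display is the clean fix (the exact digits printed may vary slightly by platform):

#include <cstdio>

int main()
{
    double d1 = 97842111437.390091;
    double d2 = 97842111437.390076;
    printf("%.17g\n", d1);  /* all 17 digits the double actually carries */
    printf("%.17g\n", d2);
    printf("%.3f\n", d1);   /* rounding for display is well-defined */
    printf("%.3f\n", d2);
    return 0;
}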
This question is a little malformed. The hardware stores the numbers
in binary, not decimal. So in the general case you can't do precise
math in base 10. Some decimal numbers (0.1 is one of them!) do not
even have a non-repeating representation in binary. If you have
precision requirements like this, where you care about the number
being of known precision to exactly 15 decimal digits, you will need
to pick another representation for your numbers.
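As a quick illustration (the exact digits printed depend on your C library, but any correctly-rounding one will show something like this):

#include <cstdio>

int main()
{
    /* 0.1 has no finite binary expansion, so the stored value is only
       the nearest representable double. */
    double x = 0.1;
    printf("%.25f\n", x);  /* e.g. 0.1000000000000000055511151 */
    return 0;
}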
No, but I wonder if this is relevant to any of your issues (GCC specific):
GCC Documentation
-ffloat-store
Do not store floating point variables in registers, and inhibit other options that might change whether a floating point value is taken from a register or memory. This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a double is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use -ffloat-store for such programs, after modifying them to store all pertinent intermediate computations into variables.
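For what it's worth, here is a hedged sketch of the kind of surprise that flag addresses; whether the two branches actually differ depends on the compiler, target, and flags (with x87 code generation the mismatch can appear; with SSE arithmetic it normally does not):

#include <cstdio>

int main()
{
    volatile double x = 1.0, y = 3.0;
    double q = x / y;   /* stored to memory: rounded to 64 bits */
    if (q == x / y)     /* the rhs may be held in an 80-bit register */
        printf("equal\n");
    else
        printf("not equal: excess precision at work\n");
    return 0;
}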
You should be able to directly modify the bits in your number by creating a union with a field for the floating point number and an integral type of the same size. Then you can access the bits you want and set them however you want. Here is an example where I whack the sign bit; you can choose any field you want, of course.
#include <stdio.h>

union double_int {
    double fp;
    unsigned long long integer;
};

int main(void)
{
    union double_int my_union;
    my_union.fp = 1325.34634;

    /* print original numbers */
    printf("Float %f\n", my_union.fp);
    printf("Integer %llx\n", my_union.integer);

    /* whack the sign bit to 1 */
    my_union.integer |= 1ULL << 63;

    /* print modified numbers */
    printf("Negative float %f\n", my_union.fp);
    printf("Negative integer %llx\n", my_union.integer);

    return 0;
}
Generally speaking, people only care about something like this ("I only want the first x digits") when displaying the number. That's relatively easy with stringstreams or sprintf.
If you're concerned about comparing numbers with ==, you really can't do that reliably with floating point numbers. Instead you want to see whether the numbers are close enough (say, within an epsilon of each other).
Playing with the bits of the number directly isn't a great idea.
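A minimal sketch of such a comparison, with an arbitrarily chosen relative tolerance rather than a universal constant:

#include <algorithm>
#include <cmath>

bool nearly_equal(double a, double b, double rel_eps = 1e-9)
{
    /* scale the tolerance to the magnitude of the operands */
    return std::fabs(a - b) <= rel_eps * std::max(std::fabs(a), std::fabs(b));
}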
In one of my applications I am trying to put a float value into a string stream like this:
stream << static_cast<float>(double_value);
Instead of getting the entire float value I get only the integer part of it. Any idea why that might happen?
You're casting to a float - in practice an IEEE 754 32-bit 'single precision' floating point type (the C++ standard doesn't strictly mandate IEEE 754, but virtually every implementation uses it).
If you look up the format of such a value, the 32 bits are split between three components:
23 bits to store the significand
8 bits to store the exponent
1 bit to store the sign.
If you have 23 bits to store the significand (plus one implicit leading bit, for 24 in total), the largest integer you can represent exactly is about 2^24, and beyond 2^23 the gap between adjacent representable values is already at least 1. As a result, single-precision floating points only have about 6-9 significant decimal digits.
If you have a floating point value that exceeds 2^23 (8388608) - any value with 8 or more digits before the decimal point certainly does - it will never have a fractional component.
To help that sink in, consider the following code:
void Test()
{
    float test = 8388608.0F;   /* 2^23 */
    while (test > 0.0F)
    {
        test -= 0.1F;
    }
}
That code never terminates. Every time we try to decrement test by 0.1, the change in magnitude is lost because we don't have the precision to store it, so the value ends up right back at 8388608.0. No progress can ever be made, so it never terminates. This is true of all limited precision floating point types, so you'd find that this same problem would happen for IEEE 754 double precision floating point types (64-bit) all the same, just at a different, larger value.
Also, if your goal is to preserve as much precision as possible, then it does not make sense to cast from double to float. double is a 64-bit floating point type; float is a 32-bit floating point type. If you used double, you might be able to avoid most of the truncation if your values are small enough.
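To make the truncation visible, a small hedged illustration (the value is arbitrary; it just needs more significand bits than a float can hold):

#include <iomanip>
#include <iostream>

int main()
{
    double value = 123456789.125;   // needs more precision than a float has
    std::cout << std::setprecision(12)
              << static_cast<float>(value) << "\n"   // 123456792: fraction lost
              << value << "\n";                      // 123456789.125
    return 0;
}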
For the following program:
#include <iostream>
#include <iomanip>
using namespace std;
int main()
{
    for (float a = 1.0; a < 10; a++)
        cout << std::setprecision(30) << 1.0/a << endl;
    return 0;
}
I receive the following output:
1
0.5
0.333333333333333314829616256247
0.25
0.200000000000000011102230246252
0.166666666666666657414808128124
0.142857142857142849212692681249
0.125
0.111111111111111104943205418749
which is definitely not right for the lower place digits, particularly with respect to 1/3, 1/5, 1/7, and 1/9. Things just start going wrong around 10^-16. I would expect to see output more resembling:
1
0.5
0.333333333333333333333333333333
0.25
0.2
0.166666666666666666666666666666
0.142857142857142857142857142857
0.125
0.111111111111111111111111111111
Is this an inherent flaw in the float class? Is there a way to overcome this and have proper division? Is there a special datatype for doing precise decimal operations? Am I just doing something stupid or wrong in my example?
There are a lot of numbers that computers cannot represent, even if you use float or double-precision float. 1/3, or .3 repeating, is one of those numbers. So it just does the best it can, which is the result you get.
See http://floating-point-gui.de/, or google float precision, there's a ton of info out there (including many SO questions) on this subject.
To answer your questions -- yes, this is an inherent limitation in both the float class and the double class. Some mathematical programs (MathCAD, probably Mathematica) can do "symbolic" math, which allows calculation of the "correct" answers. In many cases, the round-off error can be managed, even over really complex computations, such that the top 6-8 decimal places are correct. However, the opposite is true as well -- naive computations can be constructed that return wildly incorrect answers.
For small problems like division of whole numbers, you'll get a decent number of digits of accuracy (roughly 6-7 significant digits with single precision). If you use double precision floats, that goes up to about 15-16. If you need more... well, I'd start questioning why you want that many decimal places.
First of all, since your code does 1.0/a, it gives you double results (1.0 is a double literal; 1.0f is a float), as the rules of C++ (and C) always extend a smaller type to the larger one when the operands of an operation differ in size (so int + char converts the char to an int before adding the values, long + int makes the int long, etc.).
Second, floating-point values have a set number of bits for the significand. In float, that is 23 bits (+1 'hidden' bit), and in double it's 52 bits (+1). Each decimal digit takes approximately 3.3 bits (exactly log2(10)), so a 24-bit significand gives approximately 7 digits and a 53-bit one approximately 15-16 digits. The remainder is just "noise" caused by the last few bits of the number not evening out when converting to a decimal number.
To have infinite precision, we would have to either store the value as a fraction, or have an infinite number of bits. And of course, we could have some other finite precision, such as 100 bits, but I'm sure you'd complain about that too, because it would just have another 15 or so digits before it "goes wrong".
Floats only have so much precision (23 explicit significand bits, 24 counting the implicit leading bit). If you REALLY want to see "0.333333333333333333333333333333" output, you could create a custom "Fraction" class which stores the numerator and denominator separately. Then you could calculate the digit at any given point with complete accuracy.
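Here's a rough sketch of that Fraction idea, assuming C++17 for std::gcd and ignoring overflow and sign handling; digits are generated exactly by long division:

#include <iostream>
#include <numeric>   // std::gcd, C++17

struct Fraction {
    long long num, den;
    Fraction(long long n, long long d) : num(n), den(d) {
        long long g = std::gcd(n, d);   // keep the ratio in lowest terms
        num /= g;
        den /= g;
    }
};

int main()
{
    Fraction third(1, 3);
    // Emit digits of num/den by long division, exact to any length.
    std::cout << third.num / third.den << ".";
    long long rem = third.num % third.den;
    for (int i = 0; i < 30; ++i) {
        rem *= 10;
        std::cout << rem / third.den;
        rem %= third.den;
    }
    std::cout << "\n";   // prints 0.333333333333333333333333333333
    return 0;
}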
I need to represent numbers using the following structure. The purpose of this structure is to avoid losing precision.
struct PreciseNumber
{
    long significand;
    int exponent;
};
Using this structure, the actual double value can be represented as value = significand * 10^exponent.
Now I need to write a utility function which can convert a double into a PreciseNumber.
Can you please let me know how to extract the exponent and significand from the double?
The prelude is somewhat flawed.
Firstly, barring any restrictions on storage space, conversion from a double to a base 10 significand-exponent form won't alter the precision in any form. To understand that, consider the following: any binary terminating fraction (like the one that forms the mantissa on a typical IEEE-754 float) can be written as a sum of negative powers of two. Each negative power of two is a terminating fraction itself, and hence it follows that their sum must be terminating as well.
However, the converse isn't necessarily true. For instance, 0.3 base 10 is equivalent to the non-terminating 0.01 0011 0011 0011 ... in base 2. Fitting this into a fixed size mantissa would blow some precision out of it (which is why 0.3 is actually stored as something that translates back to 0.29999999999999999.)
By this, we may assume that any precision intended by storing the numbers in decimal significand-exponent form is either already lost, or simply isn't gained at all.
Of course, you might think of the apparent loss of accuracy generated by storing a decimal number as a float as loss in precision, in which case the Decimal32 and Decimal64 floating point formats may be of some interest -- check out http://en.wikipedia.org/wiki/Decimal64_floating-point_format.
This is a very difficult problem. You might want to see how much code it takes to implement a double-to-string conversion (for printf, e.g.). You might borrow the code from the GNU implementation (glibc).
You cannot convert an "imprecise" double into a "precise" decimal number, because the required "precision" simply isn't there to begin with (otherwise why would you even want to convert?).
This is what happens if you try something like it in Java:
BigDecimal x = new BigDecimal(0.1);
System.out.println(x);
The output of the program is:
0.1000000000000000055511151231257827021181583404541015625
Well, you're at less precision than a typical double. Your significand is a long, which (on platforms where long is 32 bits) gives you a range from -2 billion to +2 billion - more than 9 but fewer than 10 digits of precision.
Here's an untested starting point on what you'd want to do for some simple math on PreciseNumbers
#include <cmath>

PreciseNumber Multiply(PreciseNumber lhs, PreciseNumber rhs)
{
    PreciseNumber ret;
    ret.significand = lhs.significand * rhs.significand;
    ret.exponent = lhs.exponent + rhs.exponent;
    return ret;
}

PreciseNumber Add(PreciseNumber lhs, PreciseNumber rhs)
{
    // assumes lhs.exponent <= rhs.exponent
    PreciseNumber ret;
    ret.significand = lhs.significand
        + static_cast<long>(rhs.significand * std::pow(10, rhs.exponent - lhs.exponent));
    ret.exponent = lhs.exponent;
    return ret;
}
I didn't take care of any renormalization, but in both cases there are places where you have to worry about overflow/underflow and loss of precision. Just because you're doing it yourself rather than letting the computer take care of it in a double doesn't mean the same pitfalls aren't there. The only way to not lose precision is to keep track of all of the digits.
Here's a very rough algorithm. I'll try to fill in some details later.
Take the log10 of the number to get the exponent x. Divide the double by 10^x if x is positive, or multiply it by 10^-x if x is negative, leaving a single digit before the decimal point.
Start with a significand of zero. Repeat the following 15 times, since a double contains 15 digits of significance:
Multiply the previous significand by 10.
Take the integer portion of the double, add it to the significand, and subtract it from the double.
Subtract 1 from the exponent.
Multiply the double by 10.
When finished, take the remaining double value and use it for rounding: if it's >= 5, add one to the significand.
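Here is a hedged sketch of that algorithm; FromDouble is a name invented here, the input is assumed positive and finite, and long is assumed to be 64 bits so 15 digits fit. Note the exponent starts at x + 1 so the 15 decrements inside the loop land on the right power of 10:

#include <cmath>

struct PreciseNumber {
    long significand;   // assumed 64-bit, so 15 decimal digits fit
    int exponent;
};

PreciseNumber FromDouble(double value)   // value assumed positive and finite
{
    PreciseNumber result = {0, 0};
    int x = static_cast<int>(std::floor(std::log10(value)));
    double scaled = value / std::pow(10.0, x);   // normalize to [1, 10)
    result.exponent = x + 1;                     // compensates the 15 decrements
    for (int i = 0; i < 15; ++i) {               // 15 reliable digits in a double
        result.significand *= 10;
        double digit = std::floor(scaled);
        result.significand += static_cast<long>(digit);
        scaled = (scaled - digit) * 10.0;
        --result.exponent;
    }
    if (scaled >= 5.0)                           // leftover decides rounding
        ++result.significand;
    return result;
}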
I'm writing a set of numeric type conversion functions for a database engine, and I'm concerned about the behavior of converting large integral floating-point values to integer types with greater precision.
Take for example converting a 32-bit int to a 32-bit single-precision float. The 23-bit significand of the float yields about 7 decimal digits of precision, so converting any int with more than about 7 digits will result in a loss of precision (which is fine and expected). However, when you convert such a float back to an int, you end up with artifacts of its binary representation in the low-order digits:
#include <iostream>
#include <iomanip>
using namespace std;
int main()
{
    int a = 2147483000;
    cout << a << endl;

    float f = (float)a;
    cout << setprecision(10) << f << endl;

    int b = (int)f;
    cout << b << endl;
    return 0;
}
This prints:
2147483000
2147483008
2147483008
The trailing 008 is beyond the precision of the float, and therefore seems undesirable to retain in the int, since in a database application, users are primarily concerned with decimal representation, and trailing 0's are used to indicate insignificant digits.
So my questions are: Are there any well-known existing systems that perform decimal significant digit rounding in float -> int (or double -> long long) conversions, and are there any well-known, efficient algorithms for doing so?
(Note: I'm aware that some systems have decimal floating-point types, such as those defined by IEEE 754-2008. However, they don't have mainstream hardware support and aren't built into C/C++. I might want to support them down the road, but I still need to handle binary floats intuitively.)
std::numeric_limits<float>::digits10 says you only get 6 precise digits for float.
Pick an efficient algorithm for your language, processor, and data distribution to calculate the decimal length of an integer. Then subtract the number of digits that digits10 says are precise to get the number of digits to cull. Use that as an index to look up a power of 10 to use as a modulus. Etc.
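A rough sketch of that recipe (CullInsignificantDigits is a made-up name; it assumes a non-negative input, rounds half up at the cut, and ignores corner cases at the very top of the int range):

#include <cstdio>
#include <limits>

// Zero out the decimal digits of n beyond float's guaranteed
// precision (digits10 == 6), rounding at the cut.
int CullInsignificantDigits(int n)
{
    int digits = 1;
    for (int t = n; t >= 10; t /= 10)   // decimal length of n
        ++digits;
    int cull = digits - std::numeric_limits<float>::digits10;
    if (cull <= 0)
        return n;
    long long modulus = 1;
    while (cull-- > 0)
        modulus *= 10;
    return static_cast<int>((n + modulus / 2) / modulus * modulus);
}

int main()
{
    printf("%d\n", CullInsignificantDigits(2147483008));   // 2147480000
    return 0;
}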
One concern: Let's say you convert a float to a decimal and perform this sort of rounding or truncation. Then convert that "adjusted" decimal to a float and back to a decimal with the same rounding/truncation scheme. Do you get the same decimal value? Hopefully yes.
This isn't really what you're looking for but may be interesting reading: A Proposal to add a max significant decimal digits value to the C++ Standard Library Numeric limits
Naturally, 2147483008 has trailing zeros if you write it in binary (0b1111111111111111111110110000000) or hexadecimal (0x7FFFFD80). The most "correct" thing to do would be to track insignificant digits in one of those forms instead.
Alternatively, you could just zero all digits after the first seven significant ones in the int (ideally by rounding) after converting to it from a float, since the float contains approximately seven significant digits.
I was wondering whether it is possible to limit the number of characters we enter in a float.
I couldn't seem to find any method. I have to read in data from an external interface which sends float data of the form xx.xx. As of now I am using conversion to char and vice versa, which is a messy work-around. Can someone suggest a way to improve the solution?
If you always have/want only 2 decimal places for your numbers, and absolute size is not such a big issue, why not work internally with integers instead, treating them as "100ths of the target unit"? At the end you just convert them back to a float by dividing by 100.0, and you're back to what you want.
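A minimal sketch of that approach (the names and values are just for illustration):

#include <cstdio>

int main()
{
    // Keep xx.xx amounts as integer hundredths: 12.34 is stored as 1234.
    int price = 1234;        // represents 12.34
    int total = 3 * price;   // exact integer arithmetic: 3702 represents 37.02
    printf("%d.%02d\n", total / 100, total % 100);   // prints 37.02
    double d = total / 100.0;   // convert back only at the boundary
    printf("%.2f\n", d);
    return 0;
}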
This is a slight misunderstanding. You cannot think of a float or double as being a decimal number.
Most any attempt to use it as a fixed decimal number of precision, say, 2, will incur problems as some values will not be precisely equal to xxx.xx but only approximately so.
One solution that many apps use is to ensure that:
1) display of floating point numbers is well controlled using printf/sprintf to a certain number of significant digits,
2) one does not do exact comparison between floating point numbers; i.e., to compare two numbers a, b to the 2nd decimal place of precision, something like abs(a-b) <= epsilon should generally be used. Outright equality is dangerous: if you do arithmetic, you might end up with values such as 0.0101 and 0.0103 that are indistinguishable to the user once truncated to 2 dp, and logically equivalent to your application (which is assuming 2 dp precision), yet they compare unequal.
Lastly, I would suggest you use double instead of float. These days there is no real overhead, as we aren't doing floating point without a maths coprocessor any more! And a float has about 7 decimal digits of precision while a double has about 15, which is enough to matter in many cases.
Rounding a float (that is, a binary floating-point number) to 2 decimal digits doesn't make much sense because you won't be able to round it exactly in some cases anyway, so you'll still get a small delta which will affect subsequent calculations. If you really need it to be precisely 2 places, then you need to use decimal arithmetic; for example, using IBM's decNumber++ library, which implements the ISO C/C++ decimal TR drafts (TR 24732 / TR 24733).
You can limit the number of significant digits to output:
http://www.cplusplus.com/reference/iostream/manipulators/setprecision/
but I don't think there is a function to actually lop off a certain number of digits. You could write a function that converts with sprintf() (or a stringstream), lops off a certain number of digits, converts back with atof() (or a stringstream), and returns that.
You should check the string rather than the converted float. It will be easier to check the number of digits.
Why don't you just round the floats to the desired precision?
#include <cmath>
#include <iomanip>
#include <iostream>
using namespace std;

double round(double val, int decimalPlaces)
{
    double power_of_10 = pow(10.0, static_cast<double>(decimalPlaces));
    return floor(val * power_of_10 + 0.5) / power_of_10;
}

int main()
{
    double d;
    cin >> d;

    // round d to 3 decimal places...
    d = round(d, 3);

    // do something with d
    d *= 1.75;

    cout << fixed << setprecision(3) << d;   // now output to 3 decimal places
    return 0;
}
There exists no fixed-point decimal datatype in C, but you can mimic Pascal's decimal type with a struct of two ints.
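A rough sketch of what such a struct might look like (the field meanings here are one possible choice, not any standard layout):

#include <cstdio>

// One possible "decimal" mimic: an integer value plus a scale that
// says how many decimal digits are implied after the point.
struct Decimal {
    int value;   // digits, e.g. 1234
    int scale;   // implied decimal places, e.g. 2 -> 12.34
};

int main()
{
    Decimal d = {1234, 2};   // represents 12.34
    int divisor = 1;
    for (int i = 0; i < d.scale; ++i)
        divisor *= 10;
    printf("%d.%0*d\n", d.value / divisor, d.scale, d.value % divisor);
    return 0;
}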
If the need is to take 5 digits [ including or excluding the decimal point ], you could simply write like below.
scanf( "%5f", &a );
where a is declared as float.
For example:
If you enter 123.45, scanf will consider the first 5 characters, i.e., 4 digits and the decimal point, and will store 123.4.
If you enter 123456, the value of a will be 12345 [ ~ 12345.00 ].
With printf, we can also control how many characters are printed after the decimal point.
printf( "%5.2f \n", a );
The value 123.4 will be printed as 123.40 [ the 5 is a minimum total width, including the decimal point, and the 2 is the number of digits after it ].
But this has a limitation: the width is only a minimum, so if the value has more digits than 5, printf will display the full value.
e.g.: The value 123456.7 will be displayed as 123456.70.
This [ specifying the number of digits after the decimal, as mentioned for printf ] does not work for scanf: the C standard allows only a field width there, not a precision. Verify what your compiler actually supports.
Now, when it comes to taking data from an external interface, are you talking about serialization here, i.e., transmission of data over a network?
Then, to my knowledge your approach is fine.
We generally tend to read such data as characters, to make sure the application works for any format of data.
You can print a float with printf("%.2f", value), or something similar.