Convert float to string with precision and without tolerance - c++

How can you convert float to string with specified precision without tolerance?
For example, with precision 6 I want to get the following result:
40.432 -> 40.432000.
In a string the only value that I can get is 40.431999.

The problem is that you're using a float data type which only has a precision of at most 7.22 total digits and sometimes as little as 6 digits, and you're trying to display 8 total digits (2 before the decimal and 6 after). As noted in the comments, the closest possible binary float to 40.432 is 40.43199920654296875, the second closest would be 40.432003021240234375.
You can get more digits by converting to the larger double type. Once you've done that you can round to the nearest 6-digit number. Note that if the float was generated by a calculation, rounding may actually create a less accurate result.
If you always know your numbers will be between 10 and 100, this simple code will work. Otherwise you'll need a more complex process to determine the appropriate amount of rounding.
float f = 40.432f;                            // float literal; needs <cmath>, <iomanip>, <iostream>
double d = f;
double r = std::round(d * 10000.0) / 10000.0; // 2 digits before the decimal, 4 after
std::cout << std::fixed << std::setprecision(6) << r;
Note that the last 2 digits will always be zero because of the rounding.
See it in action: http://coliru.stacked-crooked.com/a/f085e56c03ebeb73

How can you convert float to string with specified precision
You can use a string stream:
std::ostringstream strs;                                 // needs <sstream> and <iomanip>
strs << std::fixed << std::setprecision(6) << the_value;
std::string str = strs.str();
In a string the only value that I can get is 40.431999.
Your problem may be that there exists no exact representation of 40.432 in the floating point format that your system uses. Since you can never have a floating point value that is exactly 40.432, you can never convert such a value to a string.
It just so happens that the closest representable value to 40.432 is closer to 40.431999 than it is to 40.432.
You need to:
Either accept that 40.432 ~~ 40.431999,
Or use a floating point format that is precise enough to have a representation of 40.432 that is closer to 40.432 than it is to 40.431999, and is also precise enough for all other numbers for which you have a specific expected value. IEEE 754 double precision floating point happens to have a representable value closer to 40.432 than to 40.431999 (see the sketch after this list),
Or stop using floating point. You won't have problems like this if you use fixed point or arbitrary precision data types.
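As a sketch of the second option (the variable names are just for illustration): the nearest double to 40.432 prints as 40.432000 at six digits after the decimal point, while the nearest float does not:
#include <iomanip>
#include <iostream>
#include <sstream>

int main() {
    float  f = 40.432f;  // nearest float:  40.43199920654296875
    double d = 40.432;   // nearest double: differs from 40.432 by only ~2e-15

    std::ostringstream fs, ds;
    fs << std::fixed << std::setprecision(6) << f;
    ds << std::fixed << std::setprecision(6) << d;

    std::cout << "float : " << fs.str() << '\n'   // 40.431999
              << "double: " << ds.str() << '\n';  // 40.432000
}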

Related

Is there a simple explanation to understand floating points representation and std::numeric_limits maxdigits() and digits10()? [duplicate]

How to convert string to double with specified number of precision in c++

code snippet as below:
double value;
CString thestring(_T("1234567890123.4"));
value = _tcstod(thestring,NULL);
The value comes out as: 1234567890123.3999
The expected value is: 1234567890123.4
Basically you can use strtod or std::stod for the conversion and then round to your desired precision. For the actual rounding, a web search will provide lots of code examples.
But the details are more complicated than you might think: the string is (probably) a decimal representation of the number, while the double is binary. I guess that you want to round to a specified number of decimal digits. The problem is that most decimal floating point numbers cannot be exactly represented in binary. Even a number like 0.1 is not exactly representable.
You also need to define what kind of precision you are interested in. Is it the total number of digits (relative precision) or the number of digits after the decimal point (absolute precision).
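For example, here is a minimal sketch of that approach: parse with std::stod, then round to a fixed number of digits after the decimal point (the helper name parse_and_round and the choice of absolute precision are just for illustration):
#include <cmath>
#include <iomanip>
#include <iostream>
#include <string>

// Round the parsed value to `digits` places after the decimal point.
double parse_and_round(const std::string& s, int digits) {
    double value = std::stod(s);               // text -> double
    double scale = std::pow(10.0, digits);     // e.g. 10 for one decimal place
    return std::round(value * scale) / scale;  // nearest multiple of 10^-digits
}

int main() {
    double v = parse_and_round("1234567890123.4", 1);
    // The rounded result is still stored as the nearest representable double;
    // only the output precision makes it read as 1234567890123.4 again.
    std::cout << std::fixed << std::setprecision(1) << v << '\n';
}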
The floating-point double type cannot exactly represent the value 1234567890123.4; 1234567890123.3999... is the closest it can get, and that is what the result shows. Note that floating point types (e.g. IEEE-754) cannot exactly represent all real numbers, so they use approximations in most cases.
To be more precise, in the IEEE-754 double-precision floating point format, 1234567890123.4 is represented by the hexadecimal value 4271F71FB04CB666, where in binary the sign bit is 0, the 11 exponent bits are 10000100111, and the 52 significand bits are 0001111101110001111110110000010011001011011001100110. This yields the value (-1)^sign × 2^(exponent - 1023) × 1.significand = 1 × 2^40 × 1.1228329550462148311851251492043957114219 = 1234567890123.39990234375.
Note that not even a 128-bit floating point type would store the value exactly. It would still come out as 1234567890123.39999999999999999999991529670527456996609316774993203580379486083984375. Maybe you should attempt to use some fixed-point or rational number type instead.
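If you want to see the stored value on your own machine, here is a minimal sketch (assuming an IEEE 754 double) that prints both the exact hexadecimal significand/exponent form and the exact decimal expansion:
#include <iomanip>
#include <iostream>

int main() {
    double d = 1234567890123.4;
    // hexfloat shows the stored significand and binary exponent,
    // e.g. 0x1.1f71fb04cb666p+40 on an IEEE 754 system.
    std::cout << std::hexfloat << d << '\n';
    // Eleven fixed decimals are enough to print this particular value exactly.
    std::cout << std::fixed << std::setprecision(11) << d << '\n'; // 1234567890123.39990234375
}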
std::stod is generic and doesn't offer this kind of control. Thus, you have to craft something of your own, like I did below using std::stringstream and <iomanip> facilities:
#include <iomanip>
#include <sstream>
#include <string>

double stodpre(std::string const &str, std::size_t const p) {
    std::stringstream sstrm;
    // Parse first, then let std::fixed / std::setprecision round the value
    // when it is written back into the stream as text.
    sstrm << std::fixed << std::setprecision(p) << std::stod(str);
    double d;
    sstrm >> d;
    return d;
}
You cannot control the precision with which a decimal number is stored.
Decimal numbers are stored in binary using floating point notation.
What you can do is control the precision of what is displayed when outputting the number.
For example, do this to control the precision of the output to 2 digits -
std::cout << std::fixed;
std::cout << std::setprecision(2);
std::cout << value;
You can give any number for the precision.

What is the purpose of max_digits10 and how is it different from digits10?

I am confused about what max_digits10 represents. According to its documentation, it is 0 for all integral types. The formula for floating-point types for max_digits10 looks similar to int's digits10's.
To put it simply,
digits10 is the number of decimal digits guaranteed to survive text → float → text round-trip.
max_digits10 is the number of decimal digits needed to guarantee correct float → text → float round-trip.
There will be exceptions to both but these values give the minimum guarantee. Read the original proposal on max_digits10 for a clear example, Prof. W. Kahan's words and further details. Most C++ implementations follow IEEE 754 for their floating-point data types. For an IEEE 754 float, digits10 is 6 and max_digits10 is 9; for a double it is 15 and 17. Note that both these numbers should not be confused with the actual decimal precision of floating-point numbers.
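These values come straight from std::numeric_limits; a quick sketch to print them on your own implementation:
#include <iostream>
#include <limits>

int main() {
    // On an IEEE 754 implementation this prints "6 9" and "15 17".
    std::cout << std::numeric_limits<float>::digits10 << ' '
              << std::numeric_limits<float>::max_digits10 << '\n'
              << std::numeric_limits<double>::digits10 << ' '
              << std::numeric_limits<double>::max_digits10 << '\n';
}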
Example digits10
char const *s1 = "8.589973e9";
char const *s2 = "0.100000001490116119384765625";
float const f1 = strtof(s1, nullptr);
float const f2 = strtof(s2, nullptr);
std::cout << "'" << s1 << "'" << '\t' << std::scientific << f1 << '\n';
std::cout << "'" << s2 << "'" << '\t' << std::fixed << std::setprecision(27) << f2 << '\n';
Prints
'8.589973e9' 8.589974e+009
'0.100000001490116119384765625' 0.100000001490116119384765625
All digits up to the 6th significant digit were preserved, while the 7th digit didn't survive for the first number. All 27 digits of the second survived, but this is an exception: most numbers become different beyond 7 digits, and all numbers stay the same within 6 digits.
In summary, digits10 gives the number of significant digits you can count on in a given float as being the same as in the original real number, in its decimal form, from which it was created, i.e. the digits that survive the conversion into a float.
Example max_digits10
// Needs <cmath>, <cstdlib>, <iomanip>, <iostream>, <limits> and <sstream>.
void f_s_f(float &f, int p) {
    std::ostringstream oss;
    oss << std::fixed << std::setprecision(p) << f;
    f = strtof(oss.str().c_str(), nullptr);   // float -> text -> float round-trip
}
float f3 = 3.145900f;
float f4 = std::nextafter(f3, 3.2f);
std::cout << std::hexfloat << std::showbase << f3 << '\t' << f4 << '\n';
f_s_f(f3, std::numeric_limits<float>::max_digits10);
f_s_f(f4, std::numeric_limits<float>::max_digits10);
std::cout << f3 << '\t' << f4 << '\n';
f_s_f(f3, 6);
f_s_f(f4, 6);
std::cout << f3 << '\t' << f4 << '\n';
Prints
0x1.92acdap+1 0x1.92acdcp+1
0x1.92acdap+1 0x1.92acdcp+1
0x1.92acdap+1 0x1.92acdap+1
Here, two different floats, when printed with max_digits10 digits of precision, give different strings, and these strings, when read back, give back the original floats they came from. When printed with less precision, they give the same output due to rounding, and hence, when read back, lead to the same float, even though they came from different values.
In summary, at least max_digits10 digits are required to disambiguate two floats in their decimal form, so that when converted back to a binary float we get the original bits again, and not those of the value slightly before or after it, due to rounding errors.
In my opinion, it is explained sufficiently at the linked site (and the site for digits10):
digits10 is the (maximum) number of decimal digits that numbers
can be represented with by a type in any case, independent of their actual value.
Take a usual 4-byte unsigned integer as an example: it has exactly 32 bits,
that is, 32 digits of a binary number.
But in terms of decimal numbers?
Probably 9.
Because it can store 100000000 as well as 999999999.
But if we take numbers with 10 digits: 4000000000 can be stored, but 5000000000 cannot.
So, if we need a guarantee for the minimum decimal digit capacity, it is 9.
And that is the result of digits10.
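A minimal sketch of that unsigned int example (assuming a platform where unsigned int is 32 bits):
#include <iostream>
#include <limits>

int main() {
    // With a 32-bit unsigned int this prints 9: nine decimal digits always fit,
    // ten digits only sometimes (4294967295 fits, 5000000000 does not).
    std::cout << std::numeric_limits<unsigned int>::digits10 << '\n';
    std::cout << std::numeric_limits<unsigned int>::max() << '\n';  // 4294967295
}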
max_digits10 is only interesting for float/double... and gives the decimal digit count
which we need to output/save/process... to keep the whole precision
the floating point type can offer.
Theoretical example: a variable with content 123.112233445566.
If you show 123.11223344 to the user, it is not as precise as it can be.
If you show 123.1122334455660000000 to the user, it makes no sense, because
you could omit the trailing zeros (your variable can't hold that much precision anyway).
Therefore, max_digits10 says how many digits of precision you have available in a type.
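A small sketch of that theoretical example, printing the value with exactly max_digits10 significant digits (the value is taken from the example above):
#include <iomanip>
#include <iostream>
#include <limits>

int main() {
    double d = 123.112233445566;
    // max_digits10 (17 for an IEEE 754 double) is enough to reproduce the
    // stored bits when the text is read back; more digits add no information.
    std::cout << std::setprecision(std::numeric_limits<double>::max_digits10)
              << d << '\n';
}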
Let's build some context
After going through lots of answers and reading up on this, the following is the simplest, layman's explanation I could come up with.
Floating point numbers in computers (single precision, i.e. the float type in C/C++ etc., or double precision, i.e. double in C/C++ etc.) have to be represented using a fixed number of bits.
float is a 32-bit IEEE 754 single precision floating point number – 1
bit for the sign, 8 bits for the exponent, and 23 bits for the value (plus one implicit leading bit).
float has about 7 decimal digits of precision.
And for the double type:
The C++ double should have a floating-point precision of up to 15
digits, as it contains a precision that is twice the precision of the
float data type. When you declare a variable as double, you should
initialize it with a decimal value.
What does the above mean to me?
It's possible that the floating point number you have cannot fit into the number of bits available for that type. For example, the value 0.1 cannot FIT into the available number of BITS in a computer. You may ask why. Try converting this value to binary and you will see that the binary representation never ends, and we only have a finite number of bits, so we have to stop at some point even though the binary conversion logic says keep going.
If the given floating point number can be represented in the number of bits available, then we are good. If it is not possible to represent it in the available number of bits, then the bits store a value that is as close as possible to the actual value. This is also known as "rounding the float value" or "rounding error". How this value is calculated depends on the specific implementation, but it is safe to assume that, for a given implementation, the closest representable value is chosen.
Now let's come to std::numeric_limits<T>::digits10
The value of std::numeric_limits<T>::digits10 is the number of
base-10 digits that can be represented by the type T without change,
that is, any number with this many significant decimal digits can be
converted to a value of type T and back to decimal form, without
change due to rounding or overflow.
What std::numeric_limits<T>::digits10 is saying is that whenever rounding MUST happen, you can be assured that, after your floating point value has been rounded to its closest representable value by the computer, at least the first std::numeric_limits<T>::digits10 decimal digits of that closest representable value will be exactly the same as those of your input. For single precision floating point values this number is usually 6, and for double precision values it is usually 15.
Now you may ask why I used the word "guaranteed". Well, I used it because it is possible that more digits survive the conversion to float, BUT if you ask me to guarantee how many will survive in all cases, then that number is std::numeric_limits<T>::digits10. Not convinced yet?
OK, consider the example of unsigned char, which has 8 bits of storage. When you convert a decimal value to unsigned char, what is the guarantee on how many decimal digits will survive? I will say "2". Then you will say that even 145 survives, so it should be 3. BUT I will say NO, because if you take 256, it won't survive. Of course 255 will survive, but since you are asking for a guarantee, I can only guarantee that 2 digits will survive; the answer 3 is not true once you try values higher than 255.
Now use the same analogy for floating point types when someone asks for a guarantee. That guarantee is given by std::numeric_limits<T>::digits10.
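A minimal sketch of that unsigned char analogy (assuming an 8-bit unsigned char):
#include <iostream>
#include <limits>

int main() {
    // digits10 for unsigned char is 2: only two decimal digits are guaranteed
    // to survive, even though some three-digit values (like 145) happen to fit.
    std::cout << std::numeric_limits<unsigned char>::digits10 << '\n';  // 2

    unsigned char ok  = 145;                              // fits
    unsigned char bad = static_cast<unsigned char>(256);  // wraps around to 0
    std::cout << int(ok) << ' ' << int(bad) << '\n';      // 145 0
}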
Now what the heck is std::numeric_limits<T>::max_digits10?
Here comes another level of complexity, BUT I will try to explain it as simply as I can.
As I mentioned previously, due to the limited number of bits available to represent a floating point type on a computer, it is not possible to represent every value exactly. A few can be represented exactly, BUT not all. Now let's consider a hypothetical situation: someone asks you to write down all the possible float values which the computer can represent (ooohhh... I know what you are thinking). Luckily you don't have to write them all :)
Just imagine that you started and eventually reached the last float value a computer can represent. Each of these values, written in decimal, takes a certain number of digits, and that is what std::numeric_limits<T>::max_digits10 is about: it is the maximum number of decimal digits you need to write down (and tell apart) all possible representable values. That's why I asked you to write down all the values: you would see that you need at most std::numeric_limits<T>::max_digits10 decimal digits to write every representable value of type T.
Please note that a value printed with this many digits can survive the float to text to float conversion, but this count is NOT the guaranteed number of digits for text to float to text (remember the unsigned char example: 255 having 3 digits doesn't mean all 3-digit values can be stored in an unsigned char).
I hope this attempt of mine gives people some understanding. I know I may have oversimplified things, BUT I have spent sleepless nights thinking and reading about this, and this is the explanation that gave me some peace of mind.
Cheers !!!

C++ String to float with precision

I have a file with number readings (for example 5.513208E-05 and 1.146383E-05).
I read the file and store the entries in a temporary string.
After that I convert the temporary string into a float variable (which I store in a multi-dimensional array).
I use the code below to convert.
getline(infile, temporary_string, ',');
Array[i][0] = ::atof(temporary_string.c_str());
getline(infile, temporary_string);
Array[i][1] = ::atof(temporary_string.c_str());
The problem is that when I print the floats to the screen I get
5.51321e-05 1.14638e-05 instead of
5.513208E-05 1.146383E-05.
How can I get the precise numbers stored?
You don't specify precision when you read or convert the string. Instead you set the precision when you output the value:
std::cout << std::setprecision(3) << 1.2345 << '\n';
The above will produce the following output:
1.23
See e.g. this reference.
Ensure you have double Array[][], not float. A decimal (base-10) text representation is in general only approximated by a binary (base-2) floating point number, but with luck the number produced by atof prints back as the same text when the same format is used. In general one does not do much calculation on it, and on output one uses a reduced precision with setprecision or other formatting.
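As a small sketch of that advice (the input literal is one of the values from the question): store the parsed value in a double and print it back in the same scientific format with seven significant digits:
#include <cstdlib>
#include <iomanip>
#include <iostream>

int main() {
    double v = std::atof("5.513208E-05");
    // Six digits after the point in scientific notation = seven significant
    // digits, matching the format of the input file.
    std::cout << std::scientific << std::setprecision(6) << v << '\n';  // 5.513208e-05
}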
Every floating-point representation of numbers has limited precision. In particular, float has 24 bits (1 fixed+23 variable) for its binary mantissa, thus implying a precision of roughly seven decimal digits.
If you need more precision for the stored number, you may wish to consider using double instead of float. On normal PCs, double has 53 bits (1+52) for the binary mantissa, thus allowing a 15-decimal digit precision.
But remember that there's also a problem when those numbers are output. I think the default precision for both printf() and std::ostream is only 6 digits, both for float and for double, unless you specify otherwise. There is no point, however, in demanding a higher precision during output than what the data type provides. So, even though you can say printf("%0.30g", some_float), the extra 23 digits beyond the seven actually supported by the data type might not really produce useful information.

Why comparing double and float leads to unexpected result? [duplicate]

Possible Duplicate:
strange output in comparision of float with float literal
float f = 1.1;
double d = 1.1;
if(f == d) // returns false!
Why is it so?
The important factors under consideration with float or double numbers are:
Precision & Rounding
Precision:
The precision of a floating point number is how many digits it can represent without losing any information it contains.
Consider the fraction 1/3. The decimal representation of this number is 0.33333333333333… with the 3s going out to infinity. A number of infinite length would require infinite memory to be stored with exact precision, but the float and double data types typically have only 4 or 8 bytes. Thus float and double values can only store a certain number of digits, and the rest are bound to get lost. Hence, there is no accurate way of representing float or double numbers that require more precision than the variables can hold.
Rounding:
There is a non-obvious difference between binary and decimal (base 10) numbers.
Consider the fraction 1/10. In decimal, this can be easily represented as 0.1, and 0.1 can be thought of as an easily representable number. However, in binary, 0.1 is represented by the infinite sequence: 0.00011001100110011…
An example:
#include <iomanip>
#include <iostream>

int main()
{
    using namespace std;
    cout << setprecision(17);

    double dValue = 0.1;
    cout << dValue << endl;
}
This output is:
0.10000000000000001
And not
0.1.
This is because the double had to truncate the approximation due to its limited memory, which results in a number that is not exactly 0.1. Such a scenario is called a rounding error.
Whenever you compare two close float or double numbers, such rounding errors kick in and the comparison can yield incorrect results. This is the reason you should never compare floating point or double values using ==.
The best you can do is to take their difference and check if it is less than an epsilon.
abs(x - y) < epsilon
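A minimal sketch of that epsilon check applied to the values from the question (the tolerance 1e-6 is an arbitrary choice for illustration):
#include <cmath>
#include <iostream>

bool nearly_equal(double x, double y, double epsilon = 1e-6) {
    return std::fabs(x - y) < epsilon;  // |x - y| below the chosen tolerance
}

int main() {
    float  f = 1.1f;
    double d = 1.1;
    std::cout << std::boolalpha
              << (f == d) << '\n'            // false: exact comparison fails
              << nearly_equal(f, d) << '\n'; // true: within the tolerance
}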
Try running this code, the results will make the reason obvious.
#include <iomanip>
#include <iostream>

int main()
{
    std::cout << std::setprecision(100) << (double)1.1 << std::endl;
    std::cout << std::setprecision(100) << (float)1.1 << std::endl;
    std::cout << std::setprecision(100) << (double)((float)1.1) << std::endl;
}
The output:
1.100000000000000088817841970012523233890533447265625
1.10000002384185791015625
1.10000002384185791015625
Neither float nor double can represent 1.1 accurately. When you do the comparison, the float is implicitly converted to a double. The double can represent the float's value exactly, but that value differs from the double approximation of 1.1, so the comparison yields false.
Generally you shouldn't compare floats to floats, doubles to doubles, or floats to doubles using ==.
The best practice is to subtract them, and check if the absolute value of the difference is less than a small epsilon.
// f and d are the float and double from the question; needs <cmath> and <limits>.
if (std::fabs(f - d) < std::numeric_limits<float>::epsilon())
{
    // ...
}
One reason is that floating point numbers are (more or less) binary fractions, and can only approximate many decimal numbers. Many decimal numbers must necessarily be converted to repeating binary fractions, which cannot be stored exactly in a finite number of bits. This introduces a rounding error.
From wikipedia:
For instance, 1/5 cannot be represented exactly as a floating point number using a binary base but can be represented exactly using a decimal base.
In your particular case, a float and a double will apply different amounts of rounding to the repeating fraction that must be used to represent 1.1 in binary. You will be hard pressed to get them to be "equal" after their corresponding conversions have introduced different levels of rounding error.
The epsilon comparison shown above solves this by simply checking whether the values are within a very small delta. Your comparison changes from "are these values equal?" to "are these values within a small margin of error of each other?"
Also, see this question: What is the most effective way for float and double comparison?
There are also a lot of other oddities about floating point numbers that break a simple equality comparison. Check this article for a description of some of them:
http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
The IEEE 754 32-bit float can store: 1.1000000238...
The IEEE 754 64-bit double can store: 1.1000000000000000888...
See why they're not "equal"?
In IEEE 754, fractions are stored in powers of 2:
2^(-1), 2^(-2), 2^(-3), ...
1/2, 1/4, 1/8, ...
Now we need a way to represent 0.1. This is (a simplified version of) the 32-bit IEEE 754 representation (float):
2^(-4) + 2^(-5) + 2^(-8) + 2^(-9) + 2^(-12) + 2^(-13) + ... + 2^(-24) + 2^(-25) + 2^(-27)
00011001100110011001101
1.10000002384185791015625
With 64-bit double, it's even more accurate. It doesn't stop at 2^(-25), it keeps going for about twice as much. (2^(-48) + 2^(-49) + 2^(-51), maybe?)
Resources
IEEE 754 Converter (32-bit)
Floats and doubles are stored in a binary format that can not represent every number exactly (it's impossible to represent the infinitely many possible different numbers in a finite space).
As a result they do rounding. A float has to round more than a double, because it is smaller, so 1.1 rounded to the nearest valid float is different from 1.1 rounded to the nearest valid double.
To see what numbers are valid floats and doubles, see Floating Point.