Weird bug with floats in if-statement - c++

So in my C++ code I have the following line of code for debugging purposes:
if(float1 != float2)
{
std::cout<<float1<<" "<<float2<<std::endl;
}
What's happening is that the program is entering the if-statement... but when I print out the two float values, they appear to be the same. But if they really were the same, it should bypass this if-statement completely. So I'm really confused as to why this is happening.

The floats may just have very similar values. By default, the I/O streams print floating-point values rounded to only six significant digits, so two values that differ can still print identically. You can ensure that you get the full precision by calling the precision member function of std::cout:
if(float1 != float2)
{
std::cout.precision(9);
std::cout<<float1<<" "<<float2<<std::endl;
}
Now you should see the difference. The value 9 is the number of base-10 digits needed to uniquely identify any IEEE 754 32-bit float (see Eric Postpischil's comment).
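If you would rather not hard-code the 9, the same number can be queried from the standard library; a short sketch (not part of the original answer):
#include <iostream>
#include <limits>
int main()
{
    // max_digits10 is 9 for an IEEE 754 float (17 for a double)
    std::cout.precision(std::numeric_limits<float>::max_digits10);
    std::cout << 0.1f << std::endl;   // prints 0.100000001
}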

Floating-point values are typically stored in computer memory in binary format, while the values you print through cout are rendered in decimal. The conversion from the binary floating-point representation to a decimal string can be lossy, depending on your output settings. That immediately means that what you print is not necessarily exactly what is stored in memory, which explains why the direct comparison between float1 and float2 can say they are different while the decimal printouts look identical.
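To see the effect in isolation, here is a minimal sketch (my own example, not the asker's program) where two floats differ by a single bit: they compare unequal, print identically at the default six digits, and only become distinguishable at precision 9:
#include <cmath>
#include <iostream>
int main()
{
    float float1 = 1.0f / 3.0f;
    float float2 = std::nextafter(float1, 1.0f);   // one ULP larger
    if (float1 != float2)
    {
        std::cout << float1 << " " << float2 << std::endl;   // 0.333333 0.333333
        std::cout.precision(9);
        std::cout << float1 << " " << float2 << std::endl;   // 0.333333343 0.333333373
    }
}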

Related

Is a floating-point value of 0.0 represented differently from other floating-point values?

I've been going back through my C++ book, and I came across a statement that says zero can be represented exactly as a floating-point number. I was wondering how this is possible unless the value of 0.0 is stored as a type other than a floating point value. I wrote the following code to test this:
#include <iomanip>
#include <iostream>
int main()
{
    float value1 {0.0};
    float value2 {0.1f};   // 0.1f: the double literal 0.1 would be a narrowing conversion here
    std::cout << std::setprecision(10) << std::fixed;
    std::cout << value1 << '\n'
              << value2 << std::endl;
}
Running this code gave the following output:
0.0000000000
0.1000000015
To 10 digits of precision, 0.0 is still 0, and 0.1 has some inaccuracies (which is to be expected). Is a value of 0.0 different from other floating point numbers in the way it is represented, and is this a feature of the compiler or the computer's architecture?
How can 2 be represented as an exact number? 4? 15? 0.5? The answer is just that some numbers can be represented exactly in the floating-point format (which is based on base-2/binary) and others can't.
This is no different from in decimal. You can't represent 1/3 exactly in decimal, but that doesn't mean you can't represent 0.
Zero is special only in the sense that it's more trivial to prove this property for it than for some arbitrary fractional number. But that's about it.
So:
what is it about these values (0, 1/16, 1/2048, ...) that allows them to be represented exactly?
Simple mathematics. In any given base, in the sort of representation we're talking about, some numbers can be written out with a finite number of digits after the radix point; others can't. That's it.
You can play online with H. Schmidt's IEEE-754 Floating Point Converter for different numbers to see a bunch of different representations, and what errors come about as a result of encoding into those representations. For starters, try 0.5, 0.2 and 0.1.
It was my (perhaps naive) understanding that all floating point values contained some instability.
No, absolutely not.
You want to treat every floating point value in your program as potentially having some small error on it, because you generally don't know what sequence of calculations led to it. You can't trust it, in general. I expect someone half-taught this to you in the past, and that's what led to your misunderstanding.
But, if you do know the error (or lack thereof) involved at each step in the creation of the value (e.g. "all I've done is initialised it to zero"), then that's fine! No need to worry about it then.
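A small sketch of that point (the specific values below are mine, not the answerer's): values that are exactly representable, and operations whose operands and results are all exact, compare exactly, while the classic 0.1 + 0.2 does not:
#include <iostream>
int main()
{
    float zero = 0.0f;
    std::cout << std::boolalpha;
    std::cout << (zero == 0.0f) << '\n';            // true: 0.0 is stored exactly
    std::cout << (0.5f + 0.25f == 0.75f) << '\n';   // true: operands and result are all exact
    std::cout << (0.1 + 0.2 == 0.3) << '\n';        // false: none of these decimals are exact in binary
}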
Here is one way to look at the situation: with 64 bits to store a number, there are 2^64 bit patterns. Some of these are "not-a-number" representations, but most of the 2^64 patterns represent numbers. The number that is represented is represented exactly, with no error. This might seem strange after learning about floating point math; a caveat lurks ahead.
However, as huge as 2^64 is, there are infinitely many more real numbers. When a calculation produces a non-integer result, the odds are pretty good that the answer will not be a number represented by one of the 2^64 patterns. There are exceptions. For example, 1/2 is represented by one of the patterns. If you store 0.5 in a floating point variable, it will actually store 0.5. Let's try that for other single-digit denominators. (Note: I am writing fractions for their expressive power; I do not intend integer arithmetic.)
1/1 – stored exactly
1/2 – stored exactly
1/3 – not stored exactly
1/4 – stored exactly
1/5 – not stored exactly
1/6 – not stored exactly
1/7 – not stored exactly
1/8 – stored exactly
1/9 – not stored exactly
So with these simple examples, over half are not stored exactly. When you get into more complicated calculations, any one piece of the calculation can throw you off the islands of exact representation. Do you see why the general rule of thumb is that floating point values are not exact? It is incredibly easy to fall into that realm. It is possible to avoid it, but don't count on it.
Some numbers can be represented exactly by a floating point value. Most cannot.
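The list above can be checked directly; a short sketch (my own, assuming a typical IEEE 754 double) that prints each fraction with more digits than a double can hold, so the inexact entries show a non-zero tail:
#include <iomanip>
#include <iostream>
int main()
{
    for (int n = 1; n <= 9; ++n)
    {
        // 20 significant digits is more than a double stores, so
        // 1/3, 1/5, 1/6, 1/7 and 1/9 reveal their rounded tails
        std::cout << "1/" << n << " = "
                  << std::setprecision(20) << 1.0 / n << '\n';
    }
}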

Why hexadecimal floating constants in C++17?

C++17 adds hexadecimal floating constants (floating-point literals). Why? How about a couple of examples showing the benefits?
Floating-point numbers are stored in x86/x64 processors in base 2, not base 10: https://en.wikipedia.org/wiki/Double-precision_floating-point_format . Because of that, many decimal floating-point numbers cannot be represented exactly; e.g. decimal 0.1 may effectively be stored as something like 0.1000000000000003 or 0.0999999999999997 - whatever has a base-2 representation close enough to decimal 0.1. Because of that inexactness, printing a floating-point number in decimal and then parsing it back may yield a slightly different number than the one stored in binary before printing.
For some applications such errors are unacceptable: they need to parse back exactly the same binary floating-point number as the one that existed before printing (e.g. one application exports floating-point data and another imports it). For that, one can export and import doubles in hexadecimal format. Because 16 is a power of 2, binary floating-point numbers can be represented exactly in hexadecimal.
printf and scanf have been extended with the %a format specifier, which allows printing and parsing hexadecimal floating-point numbers, though MSVC++ does not yet support %a for scanf:
The a and A specifiers (see printf Type Field Characters) are not available with scanf.
To print a double in full precision in hexadecimal format, specify 13 hexadecimal digits after the point, which corresponds to 13*4=52 mantissa bits:
double x = 0.1;
printf("%.13a", x);
See more details on hexadecimal floating point with code and examples (note that at least for MSVC++ 2013, a plain %a in printf prints 6 hexadecimal digits after the point, not 13 - this is stated at the end of the article).
Specifically for constants, as asked in the question, hexadecimal constants are convenient for testing the application on exact hard-coded floating-point inputs. E.g. your bug may be reproducible for 0.1000000000000003 but not for 0.0999999999999997, so you need a hexadecimal hard-coded value to pin down exactly which representation of decimal 0.1 you mean.
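As an illustration (this example is mine, not the answerer's): the C++17 literal below spells out the double nearest to 0.1 exactly, with no dependence on decimal-to-binary conversion:
#include <cstdio>
int main()
{
    double a = 0.1;                    // the compiler picks the nearest double to decimal 0.1
    double b = 0x1.999999999999ap-4;   // C++17 hex literal: that same double, spelled exactly
    std::printf("%d\n", a == b);       // 1
    std::printf("%.13a\n", a);         // 0x1.999999999999ap-4
}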
The two main reasons to use hex floats over decimals are accuracy and speed.
The algorithms for accurately converting between decimal constants and the underlying binary format of floating point numbers are surprisingly complicated, and even nowadays conversion errors still occasionally arise.
Converting between hexadecimal and binary is a much simpler endeavour, and guaranteed to be exact. An example use case is when it is critical that you use a specific floating point number, and not one either side (e.g. for implementations of special functions such as exp). This simplicity also makes the conversion much faster (it doesn't require any intermediate "bignum" arithmetic): in some cases I've seen 3x speed up for read/write operations for hex float vs decimals.

C++ Type of variables - value

I am a beginner, but I think there are some important things I should learn as early as possible.
So I have this code:
float fl=8.28888888888888888888883E-5;
cout<<"The value = "<<fl<<endl;
But when I run the .exe it shows:
8.2888887845911086e-005
I expected the digits beyond the precision of the type to be zero, but instead I see digits that look random. Maybe it prints digits from the memory after the variable?
Could you explain to me how this works?
I expected the digits beyond the precision of the type to be zero
Yes, this is exactly what happens, but it happens in binary. This program will show it by using the hexadecimal printing format %a:
#include <stdio.h>
int main(int c, char *v[]) {
    float fl = 8.28888888888888888888883E-5;
    printf("%a\n%a\n", 8.28888888888888888888883E-5, fl);
}
It shows:
0x1.5ba94449649e2p-14
0x1.5ba944p-14
In these results, 0x1.5ba94449649e2p-14 is the hexadecimal representation of the double closest to 8.28888888888888888888883*10^-5, and 0x1.5ba944p-14 is the representation of the conversion to float of that number. As you can see, the conversion simply truncated the last digits (in this case; the conversion is done according to the rounding mode, and when the rounding goes up instead of down, it changes one or more of the last digits).
When you look at what happens in decimal, the fact that float and double are binary floating-point formats on your computer means that the exact decimal expansion of the stored value contains extra digits beyond the ones you wrote.
I expected the digits beyond the precision of the type to be zero
That is what happens internally. Excess bits beyond what the type can store are lost.
But that's in the binary representation. When you convert it to decimal, you can get trailing non-zero digits.
Example:
0b0.00100 is 0.125 in decimal
What you're seeing is a result of the fact that most decimal fractions cannot be represented exactly as binary floating-point numbers. Because of this, a float holds the nearest value that can be stored. A float usually has 24 bits for the significand, which translates to roughly 6-7 significant decimal digits (this is implementation-defined, so you shouldn't rely on it). When you print more digits than that, you'll see that the value stored in memory is not the value you intended, and the extra digits look random.
So to recap: the problem you encountered is caused by the fact that most base-10 decimal numbers cannot be represented exactly in memory; instead, the closest representable number is stored, and that number is what gets used.
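A small sketch of this with the asker's own value (the exact digits printed depend on the implementation):
#include <iomanip>
#include <iostream>
int main()
{
    float fl = 8.28888888888888888888883E-5;
    std::cout << fl << '\n';                           // default 6 digits: 8.28889e-05
    std::cout << std::setprecision(17) << fl << '\n';  // the stored value: 8.2888887845911086e-05
}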
Each data type has a limited precision; digits beyond that precision are not meaningful (they come from the binary approximation, not from your value), so you have to know these limits and deal with them when you write code.
You can look up these limits in the documentation for the floating-point types.

Read float wrong value from txt file c++

I have a text file of values:
133.25 129.40 41.69 2.915
when I read it:
fscanf(File, "%f", &floatNumber[i]);
I get these values:
1.3325000000000000e+002, 1.2939999389648437e+002, 4.1689998626708984e+001, 2.9149999618530273e+000
The first value is okay, but why are the other three values different?
The values are the same; you need to change the format specifier in your printf.
Also, floating-point numbers have finite precision, so it is not possible to represent an arbitrary decimal value with unlimited accuracy.
This is a well-known property of the IEEE 754 formats.
They're not different. Floating-point is only accurate to a point. These are the closest representations of those values. Floating-point is a special beast.
The reason the values are different is that all numbers except the first one cannot be represented exactly as a binary float value. If you need exact representation of decimals, you need to use a non-standard library.
Although most of your inputs cannot be represented exactly in either format, you would have got a lot more matching digits using double rather than float.
I regard float as a very specialized type. If you have a very large array of low precision floating point data, and are doing only very well behaved calculations on it, you may be able to gain some performance by using float. You get twice as many floats in e.g. a cache line. For anything else, prefer double to float.
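To illustrate the float-versus-double point with one of the values from the question (a sketch of my own, not from the answer): parsing "129.40" into each type and printing it back with 17 significant digits shows how much closer the double gets:
#include <cstdio>
int main()
{
    float f;
    double d;
    std::sscanf("129.40", "%f", &f);    // nearest float is 129.399993896484375
    std::sscanf("129.40", "%lf", &d);   // nearest double matches 129.40 to ~16 digits
    std::printf("%.17g\n", f);          // prints something like 129.39999389648438
    std::printf("%.17g\n", d);          // prints something like 129.40000000000001
}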

Write a float with full precision in C++

In C++, can I write and read back a float (or double) in text format without losing precision?
Consider the following:
float f = ...;
{
    std::ofstream fout("file.txt");
    // Set some flags on fout
    fout << f;
}
float f_read;
{
    std::ifstream fin("file.txt");
    fin >> f_read;
}
if (f != f_read) {
    std::cout << "precision lost" << std::endl;
}
I understand why precision is lost sometimes. However, if I print the value with enough digits, I should be able to read back the exact same value.
Is there a given set of flags that is guaranteed to never lose precision?
Would this behaviour be portable across platforms?
If you don't need to support platforms that lack C99 support (MSVC), your best bet is actually to use the %a format-specifier with printf, which always generates an exact (hexadecimal) representation of the number while using a bounded number of digits. If you use this method, then no rounding occurs during the conversion to a string or back, so the rounding mode has no effect on the result.
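A sketch of that approach, round-tripping a float through a text file with %a (my own example, assuming a C99-conforming printf/scanf; error handling omitted):
#include <cstdio>
int main()
{
    float f = 1.0f / 3.0f;

    std::FILE* out = std::fopen("file.txt", "w");
    std::fprintf(out, "%a\n", f);        // exact hexadecimal text, e.g. 0x1.555556p-2
    std::fclose(out);

    float f_read = 0.0f;
    std::FILE* in = std::fopen("file.txt", "r");
    std::fscanf(in, "%a", &f_read);      // scanf's %a reads a float, including the hex form
    std::fclose(in);

    std::printf("%d\n", f == f_read);    // 1: no precision lost
}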
Have a look at this article: How to Print Floating-Point Numbers Accurately and also at that one: Printing Floating-Point Numbers Quickly and Accurately.
It has also been discussed on Stack Overflow, where there are pointers to an implementation.
if I print the value with enough digits, I should be able to read back the exact same value
Not if you write it in decimal - there's not an integer relationship between the number of binary digits and the number of decimal digits required to represent a number. If you print your number out in binary or hexadecimal, you'll be able to read it back without losing any precision.
In general, floating point numbers are not portable between platforms in the first place, so your text representation is not going to be able to bridge that gap. In practice, most machines use IEEE 754 floating point numbers, so it'll probably work reasonably well.
You can't necessarily print the exact value of a "power of two" float in decimal.
Think of using base three to store 1/3, now try and print 1/3 in decimal perfectly.
For solutions see: How do you print the EXACT value of a floating point number?