Printing bits as IEEE-754 float - c++

Is there some clever and reliable way to print a series of bits as an IEEE-754 value without actually using a float type?
I have found a way to print fractions, which allows me to represent the float as a fraction. However, I then came to realize that the exponent may range from -127 to 128 (after adjusting with the bias), which may result in the multiplication mantissa * 2^128. The fraction method relies on representing the numerator as an integer, and I would require a really large integer type to do this multiplication. I could use a "custom" type to represent this large value (e.g. https://gmplib.org/), but I would prefer to avoid it. If we were multiplying by 10^x, I could simply shift the decimal point and add some zeros, but sadly that's not the case here.
I have not been able to find anything that mentions any solution for this. Probably due to the fact that googling stuff like "print from
Why am I actually trying to do this?
I'm only doing this to get a better understanding of how floats (IEEE-754 in particular) work, and I find that it always helps to work through a practical example. So I thought "Hey, why not try to code it?". This has no practical application (that I know of)!

So, almost immediately after posting this, I finally succeeded in finding the resources I'd been looking for.
https://www.ryanjuckett.com/printing-floating-point-numbers/ talks about it, and references other relevant sources.
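For anyone who comes looking later: the field extraction itself - before any decimal conversion - needs no float at all; it is the exact decimal printing afterwards that requires the big-integer machinery the article describes. A minimal sketch of that first step (the constant 0x40490FDB and the hexfloat-style output are just illustrative choices):
#include <cstdint>
#include <cstdio>

int main() {
    uint32_t bits = 0x40490FDB;              // bit pattern of pi rounded to a float

    uint32_t sign     = bits >> 31;          // 1 bit
    uint32_t exponent = (bits >> 23) & 0xFF; // 8 bits, biased by 127
    uint32_t mantissa = bits & 0x7FFFFF;     // 23 fraction bits

    if (exponent == 0xFF) {
        std::printf("special value (infinity or NaN)\n");
    } else if (exponent == 0) {
        // Subnormal: no implicit leading 1, exponent fixed at -126.
        std::printf("%c0.%06Xp-126 (subnormal)\n",
                    sign ? '-' : '+', (unsigned)(mantissa << 1));
    } else {
        // Normal: value is +/- 1.mantissa * 2^(exponent - 127).
        std::printf("%c1.%06Xp%+d\n",
                    sign ? '-' : '+', (unsigned)(mantissa << 1), (int)exponent - 127);
    }
}
This prints "+1.921FB6p+1" for 0x40490FDB, i.e. 1.921FB6 (hex) times 2^1.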

Related

Why is there a loss in precision when converting char * to float using sscanf_s or atof?

I am trying to convert a char * containing just a floating point value to a type float, but both sscanf_s and atof produce the same unexpected result.
char t[] = "2.10";
float aFloat( 0.0f ), bFloat( 0.0f );
sscanf_s( t, "%f", &aFloat );
bFloat = atof( t );
Output:
aFloat: 2.09999990
bFloat: 2.09999990
When I looked at similar questions in an attempt to find the answer, I tried their solutions, to no avail.
Converting char* to float or double
The solution given here was to include 'stdlib.h', and after doing so I changed the call to atof to an explicit call 'std::atof', but still no luck.
Unfortunately, not all floating point values can be exactly represented in binary form. You will get the same result if you say
float myValue = 2.10;
I see the excellent answer in the comments is missing (or I didn't easily find it there) one other option for dealing with this.
You should have written why you need a floating point number. If you happen to be working with monetary amounts (and not too huge ones), you can create a custom parser for input values and a custom formatter for output, reading each value as a 64-bit integer (*100) and working throughout your application with 100*amount values. If you are working with really huge amounts, use a big-number library, or write your own that works on char* numbers.
It's a special case of Fixed-point arithmetic.
If you are interested in "just solving this" without coding too much, head for a big-number library anyway; even the *100 fixed-point variant is easy to get wrong if it's your first time and you don't have enough resources to do it correctly (TDD advised).
But definitely learn how numbers are stored in a computer, and why float/double can't represent all numbers. The float 2.1 is, for a computer (which uses base 2 internally), a similar case to 1/3 for a human, which can't be represented in base 10 without an infinite number of decimal places (compare also how 1.0 == 0.99999... in base 10). (thanks #tobi303)
After reading your new comment asking "Does this not have a big impact on financial applications?":
Answer: nope, zero impact; nobody sane (and professional) would build a financial application with floats or doubles.
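To make the *100 idea concrete, here is a rough sketch (parse_cents/format_cents are made-up names; sign handling for amounts between -1.00 and 0.00 and error handling are deliberately left out):
#include <cstdint>
#include <cstdio>
#include <string>

// Parse "2.10" into 210 cents so no float ever touches the amount.
int64_t parse_cents(const std::string& s) {
    size_t dot = s.find('.');
    int64_t whole = std::stoll(s.substr(0, dot));
    int64_t frac = 0;
    if (dot != std::string::npos)
        frac = std::stoll((s.substr(dot + 1) + "00").substr(0, 2)); // pad/trim to 2 digits
    return whole * 100 + (whole < 0 ? -frac : frac);
}

std::string format_cents(int64_t cents) {
    char buf[32];
    std::snprintf(buf, sizeof buf, "%lld.%02lld",
                  (long long)(cents / 100),
                  (long long)(cents < 0 ? -cents % 100 : cents % 100));
    return buf;
}

int main() {
    int64_t a = parse_cents("2.10");   // 210
    int64_t b = parse_cents("0.30");   // 30
    std::printf("%s\n", format_cents(a + b).c_str());  // prints 2.40, exactly
}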

how to take the root of a very large number?

given x=4 and y=1296;
we need to solve for z in z^x=y;
we can calculate z=6 in various ways;
Question is how do I find z if y is a very large number greater than 10^100? I obviously can't store that number as an int, so how would I go about calculating z?
C++ implementation would be nice, if not, any solution will work.
It depends on the accuracy required. Since 1e100 cannot be exactly represented by a double, you have a problem.
This works, if you are willing to accept that it does not yield an exact solution. But then, I just said that 1e100 is not represented exactly as a double anyway. Thus, in MATLAB,
exp(log(1e100)/4)
ans =
1e+25
Ok, so it looks like 1e25 is the answer, but is it really? In fact, the number we really get, in terms of a double, is: 10000000000000026675773440.
One problem is the original number was not represented exactly anyway. So 1e100, when stored in the IEEE format, is more accurately stored as something like this:
1.00000000000000001590289110975991804683608085639452813897813e100
To solve this exactly, you would best be served by a big integer form, but a big decimal form would do reasonably well too.
Thus, in MATLAB, using my big decimal (HPF) form we see that 1e100 is exactly represented in 100 digits of precision.
x = hpf('1e100',100)
x =
1.e100
And, to 100 digits of precision, the root is correct.
exp(log(x)/4)
ans =
10000000000000000000000000
Actually though, be careful, as any floating point form cannot represent real numbers exactly. To more precision, we see that the number computed was actually slightly in error:
9999999999999999999999999.9999999999999999999999999999999999999999999999999999999999999999999999999999999999800
A big integer form will yield an exact result, if one exists. Thus, using a big integer form, we see the expected result:
vpi(10)^100
ans =
10000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000
nthroot(vpi(10)^100,4)
ans =
10000000000000000000000000
The point is, to do the computation you desire, you need to use tools that can do the computation. There are many such big decimal or big integer tools to be had. For example, Java has a BigDecimal and a BigInteger form that I have used on occasion (though I've written my own tools anyway, thus in MATLAB, HPF and VPI.)
Maybe you can do something evil with logarithms
Maybe there is a library you can find that lets you deal with big integers.
You can try to use Newton's method. In this case you need to use arbitrary-precision arithmetic.
I.e. you need to write a class for arbitrary-precision numbers. It would be a composition of a mantissa, represented by an array of digits, and an exponent, represented by an integer. You should implement the basic operations on numbers similarly to the pencil-and-paper methods. Then you should implement Newton's algorithm as described in the wiki.
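As a rough illustration of that iteration, here is an integer Newton sketch for z = floor(y^(1/x)). It uses uint64_t so it stays self-contained; for y > 10^100 you would substitute your arbitrary-precision class for uint64_t, since the intermediate z^(x-1) overflows any fixed-width integer:
#include <cstdint>
#include <cstdio>

uint64_t ipow(uint64_t base, unsigned exp) {   // no overflow checks in this sketch
    uint64_t r = 1;
    while (exp--) r *= base;
    return r;
}

uint64_t nth_root(uint64_t y, unsigned x) {
    if (y < 2) return y;
    uint64_t z = y;                            // any starting guess >= the true root works
    while (true) {
        // Newton step for f(z) = z^x - y:  z_next = ((x-1)*z + y/z^(x-1)) / x
        uint64_t next = ((x - 1) * z + y / ipow(z, x - 1)) / x;
        if (next >= z) return z;               // stop once the iteration no longer decreases
        z = next;
    }
}

int main() {
    std::printf("%llu\n", (unsigned long long)nth_root(1296, 4));  // prints 6
}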

How can I get consistent program behavior when using floats?

I am writing a simulation program that proceeds in discrete steps. The simulation consists of many nodes, each of which has a floating-point value associated with it that is re-calculated on every step. The result can be positive, negative or zero.
In the case where the result is zero or less something happens. So far this seems straightforward - I can just do something like this for each node:
if (value <= 0.0f) something_happens();
A problem has arisen, however, after some recent changes I made to the program in which I re-arranged the order in which certain calculations are done. In a perfect world the values would still come out the same after this re-arrangement, but because of the imprecision of floating point representation they come out very slightly different. Since the calculations for each step depend on the results of the previous step, these slight variations in the results can accumulate into larger variations as the simulation proceeds.
Here's a simple example program that demonstrates the phenomena I'm describing:
float f1 = 0.000001f, f2 = 0.000002f;
f1 += 0.000004f; // This part happens first here
f1 += (f2 * 0.000003f);
printf("%.16f\n", f1);
f1 = 0.000001f, f2 = 0.000002f;
f1 += (f2 * 0.000003f);
f1 += 0.000004f; // This time this happens second
printf("%.16f\n", f1);
The output of this program is
0.0000050000057854
0.0000050000062402
even though real-number addition is commutative and associative, so both results should be the same. Note: I understand perfectly well why this is happening - that's not the issue. The problem is that these variations can mean that sometimes a value that used to come out negative on step N, triggering something_happens(), may now come out negative a step or two earlier or later, which can lead to very different overall simulation results because something_happens() has a large effect.
What I want to know is whether there is a good way to decide when something_happens() should be triggered that is not going to be affected by the tiny variations in calculation results that result from re-ordering operations so that the behavior of newer versions of my program will be consistent with the older versions.
The only solution I've so far been able to think of is to use some value epsilon like this:
if (value < epsilon) something_happens();
but because the tiny variations in the results accumulate over time I need to make epsilon quite large (relatively speaking) to ensure that the variations don't result in something_happens() being triggered on a different step. Is there a better way?
I've read this excellent article on floating point comparison, but I don't see how any of the comparison methods described could help me in this situation.
Note: Using integer values instead is not an option.
Edit the possibility of using doubles instead of floats has been raised. This wouldn't solve my problem since the variations would still be there, they'd just be of a smaller magnitude.
I've worked with simulation models for 2 years and the epsilon approach is the sanest way to compare your floats.
Generally, using suitable epsilon values is the way to go if you need to use floating point numbers. Here are a few things which may help:
If your values are in a known range and you don't need divisions, you may be able to scale the problem and use exact operations on integers. In general, though, these conditions don't apply.
A variation is to use rational numbers to do exact computations. This still has restrictions on the operations available and it typically has severe performance implications: you trade performance for accuracy.
The rounding mode can be changed. This can be used to compute an interval rather than an individual value (possibly with 3 values resulting from round up, round down, and round to nearest). Again, it won't work for everything, but you may get an error estimate out of this.
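(As a sketch of that rounding-mode point, using std::fesetround from <cfenv>; whether the compiler honours mode changes around optimised code can depend on flags such as -frounding-math, so treat this as illustrative only.)
#include <cfenv>
#include <cstdio>

// Run the same computation rounded down and rounded up to get an interval
// that brackets the result; volatile discourages constant folding.
static float accumulate() {
    volatile float f1 = 0.000001f, f2 = 0.000002f;
    f1 = f1 + f2 * 0.000003f;
    f1 = f1 + 0.000004f;
    return f1;
}

int main() {
    std::fesetround(FE_DOWNWARD);
    float lo = accumulate();
    std::fesetround(FE_UPWARD);
    float hi = accumulate();
    std::fesetround(FE_TONEAREST);
    std::printf("result lies in [%.10g, %.10g], width %.3g\n", lo, hi, hi - lo);
}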
Keeping track of the value and a number of operations (possibly multiple counters) may also be used to estimate the current size of the error.
To possibly experiment with different numeric representations (float, double, interval, etc.) you might want to implement your simulation as templates parameterized for the numeric type.
There are many books written on estimating and minimizing errors when using floating point arithmetic. This is the topic of numerical mathematics.
In most cases I'm aware of, people experiment briefly with some of the methods mentioned above, conclude that the model is imprecise anyway, and don't bother with the effort. Also, doing something other than using float may yield better results but is often just too slow, even using double, due to the doubled memory footprint and the reduced opportunity to use SIMD operations.
I recommend that you single-step - preferably in assembly mode - through the calculations while doing the same arithmetic on a calculator. You should be able to determine which calculation orderings yield results of lesser quality than you expect and which work. You will learn from this and probably write better-ordered calculations in the future.
In the end - given the examples of numbers you use - you will probably need to accept the fact that you won't be able to do equality comparisons.
As to the epsilon approach, you usually need one epsilon for every possible exponent. For the single-precision floating point format you would need 256 single-precision values, as the exponent is 8 bits wide. Some exponents will only show up as the result of exceptional values, but for simplicity it is better to have a 256-member vector than to do a lot of testing as well.
One way to do this could be to determine your base epsilon for the case where the exponent is 0, i.e. the value to be compared against is in the range 1.0 <= x < 2.0. Preferably the epsilon should be chosen to be base-2 adapted, i.e. a value that can be exactly represented in the single-precision floating point format - that way you know exactly what you are testing against and won't have to think about rounding problems in the epsilon as well. For exponent -1 you would use your base epsilon divided by two, for -2 divided by four, and so on. As you approach the lowest and the highest parts of the exponent range you gradually run out of precision - bit by bit - so you need to be aware that extreme values can cause the epsilon method to fail.
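A sketch of that scaled-epsilon idea (the base of 2^-18 is only a placeholder; the real value has to come from your own error analysis):
#include <cmath>
#include <cstdio>

// Pick a base epsilon for values in [1.0, 2.0) and shift it by the exponent
// of the magnitude actually being compared against.
bool nearly_zero_or_less(float value, float reference_magnitude) {
    const float base_epsilon = std::ldexp(1.0f, -18);  // exactly representable in base 2
    int exponent;
    std::frexp(reference_magnitude, &exponent);        // reference = m * 2^exponent, 0.5 <= m < 1
    float eps = std::ldexp(base_epsilon, exponent);    // epsilon scaled to that magnitude
    return value < eps;
}

int main() {
    std::printf("%d\n", nearly_zero_or_less(0.0000001f, 1.0f));      // 1: within tolerance at scale 1
    std::printf("%d\n", nearly_zero_or_less(50.0f, 100000.0f));      // 0: clearly positive at scale 100000
}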
If it absolutely has to be floats then using an epsilon value may help but may not eliminate all problems. I would recommend using doubles for the spots in the code you know for sure will have variation.
Another way is to use floats to emulate doubles; there are many techniques out there, and the most basic one is to use 2 floats and do a little bit of math to store most of the number in one float and the remainder in the other (I saw a great guide on this; if I find it I'll link it).
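The basic building block of that two-float trick is an error-free addition (Knuth's TwoSum); a minimal sketch, which relies on strict IEEE evaluation (it breaks under -ffast-math):
#include <cstdio>

struct FloatFloat { float hi, lo; };   // high part plus the rounding error of the high part

FloatFloat two_sum(float a, float b) {
    float s   = a + b;
    float bb  = s - a;                      // the part of b that made it into s
    float err = (a - (s - bb)) + (b - bb);  // what was rounded away
    return { s, err };
}

int main() {
    FloatFloat r = two_sum(1.0f, 1e-9f);    // 1e-9 is far below float precision at 1.0
    std::printf("hi = %.9g, lo = %.9g\n", r.hi, r.lo);  // hi is 1, lo recovers about 1e-9
}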
Certainly you should be using doubles instead of floats. This will probably reduce the number of flipped nodes significantly.
Generally, using an epsilon threshold is only useful when you are comparing two floating-point numbers for equality, not when you are comparing them to see which is bigger. So (for most models, at least) using epsilon won't gain you anything at all -- it will just change the set of flipped nodes, it won't make that set smaller. If your model itself is chaotic, then it's chaotic.

How to convert a double to a string without using the CRT

My question has no practical application. I'm just interested. Suppose, I have a double value and I want to obtain its string representation similarly to the printf function. How would I do that without the C runtime library? Let's suppose I'm on the x86 architecture.
Given that you state your question has no practical application, I figure you're trying to learn about floating point number representations.
Thus, if you're looking for a solution without using any library support, start with the format specification. From that you can discern the various "special" values (Infinity, NAN, etc) as well as decoding/calculating the actual numeric value. Once you have the significand and exponent, you know where to put the decimal point. You'll have to write your own itoa type routine. For radices which are a power of two, this can be as simple as a lookup table. For decimal, you'll have to do a little extra math.
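As a sketch of that first step for a double (std::printf appears here only to show the result; in a genuinely CRT-free program the output would go through your own itoa-style routine, and zero, subnormals, infinities and NaNs need the special cases mentioned above):
#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
    double d = 6.25;                                   // 6.25 = 1.5625 * 2^2

    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);               // bit-exact copy, no aliasing issues

    uint64_t sign        = bits >> 63;                 // 1 bit
    int      exponent    = (int)((bits >> 52) & 0x7FF) - 1023;          // 11 bits, bias 1023
    uint64_t significand = (bits & 0xFFFFFFFFFFFFFull) | (1ull << 52);  // implicit leading 1 (normals only)

    std::printf("sign=%llu exponent=%d significand=%llu\n",
                (unsigned long long)sign, exponent, (unsigned long long)significand);
}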
You can get the digits to the left of the decimal point by repeatedly taking fmod(value, 10) (the % operator is not defined for doubles) and then dividing by 10 each time; they come out right to left.
To get the digits to the right of the decimal point, repeatedly multiply the fractional part by 10 and take the integer part as the next digit; those come out left to right.
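A naive sketch of that digit-peeling approach, using std::fmod since % is not defined for doubles (the value 123.456 and the fixed six fractional digits are arbitrary, and the last digit is not rounded):
#include <cmath>
#include <cstdio>

int main() {
    double value = 123.456;
    char out[64];
    int pos = 0;

    // Integer part: peel digits right to left, then reverse them.
    double ipart = std::floor(value);
    char rev[32];
    int n = 0;
    do {
        rev[n++] = '0' + (int)std::fmod(ipart, 10.0);
        ipart = std::floor(ipart / 10.0);
    } while (ipart > 0.0);
    while (n > 0) out[pos++] = rev[--n];

    // Fractional part: multiply by 10 and peel digits left to right.
    out[pos++] = '.';
    double frac = value - std::floor(value);
    for (int i = 0; i < 6; ++i) {
        frac *= 10.0;
        int digit = (int)frac;
        out[pos++] = '0' + (char)digit;
        frac -= digit;
    }
    out[pos] = '\0';

    std::printf("%s\n", out);   // prints something close to 123.456000
}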
If you want to do it simply with a "close enough" result, see my article http://www.exploringbinary.com/quick-and-dirty-floating-point-to-decimal-conversion/ . It describes a simple program that uses floating-point to convert from floating-point to decimal, and explains why that approach can never be accurate for all conversions. (The program doesn't do decimal rounding like printf, but that should be easy enough to add.)

Why would I use 2's complement to compare two doubles instead of comparing their differences against an epsilon value?

Referenced here and here...Why would I use two's complement over an epsilon method? It seems like the epsilon method would be good enough for most cases.
Update: I'm purely looking for a theoretical reason why you'd use one over the other. I've always used the epsilon method.
Has anyone used the 2's complement comparison successfully? Why? Why Not?
The second link you reference mentions an article that has quite a long description of the issue:
http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
But unless you are tuning for performance, I would stick with epsilon so people can debug your code.
The bits method might be faster. I say might because on modern (multicore, highly pipelined) processors it is often impossible to guess what is really faster.
Code the simplest, most obviously correct implementation, then measure, then optimise.
In short, when comparing two floats with unknown origins, picking an epsilon that is valid is almost impossible.
For example:
What is a good epsilon when comparing distance in miles between Atlanta GA, Dallas TX and some place in Ohio?
What is a good epsilon when comparing distance in miles between my left foot, my right foot and the computer under my desk?
EDIT:
Ok, I'm getting a fair number of people not understanding why you wouldn't know what your epsilon is.
Back in the old days of lore, I wrote two programs that worked with NeverWinter Nights (a game made by BioWare). One of the programs took a binary model and converted it to ASCII. The other program took an ASCII model and compiled it into binary. One of the tests I wrote was to take all of BioWare's binary models, decompile them to ASCII and then back to binary. Then I compared my binary version with the original one from BioWare. One of the problems during the comparison was dealing with some of the slight variances in floating point values. So instead of coming up with a bunch of different EPSILONS for each type of floating point number (vertex, normal, etc), I wanted to use something such as this two's complement compare. Thus avoiding the whole multiple-EPSILON issue.
The same type of issue can apply to any type of software that processes 3rd party data and then needs to validate their results with the original. In these cases you might not even know what the floating point values represent, you just have to compare them. We ran into this issue with our industrial automation software.
EDIT:
LOL, this has been voted up and down by different people.
I'll boil the problem down to this: given two arbitrary floating point numbers, how do you decide what epsilon to use? You can't.
How can you compare 1e23 and 1.0001e23 with an epsilon and still compare 1e-23 and 5.2e-23 using the same epsilon? Sure, you can do some dynamic epsilon tricks, but that is the whole point of the integer compare (which does NOT require the integers to be exact).
The integer compare is able to compare two floats using an epsilon relative to the magnitude of the numbers.
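For reference, here is a sketch of what that integer compare looks like in practice, in the spirit of the Bruce Dawson article linked above (the maxUlps of 16 and the test values are arbitrary):
#include <cstdint>
#include <cstring>
#include <cstdio>
#include <cstdlib>

// Remap a float's bits onto an ordered integer scale so that adjacent
// representable floats differ by exactly 1.
static int32_t ordered_bits(float f) {
    int32_t i;
    std::memcpy(&i, &f, sizeof i);
    return (i < 0) ? INT32_MIN - i : i;   // negative floats sort backwards as raw bits
}

// "Almost equal" means within maxUlps representable floats of each other -
// an epsilon that is automatically relative to the magnitude of the numbers.
bool almost_equal_ulps(float a, float b, int32_t maxUlps) {
    int64_t diff = (int64_t)ordered_bits(a) - (int64_t)ordered_bits(b);
    return std::llabs(diff) <= maxUlps;
}

int main() {
    std::printf("%d\n", almost_equal_ulps(1e23f, 1.0000001e23f, 16));   // 1: a couple of ulps apart
    std::printf("%d\n", almost_equal_ulps(1e-23f, 1.02e-23f, 16));      // 0: hundreds of thousands of ulps apart
}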
EDIT
Steve, lets look at what you said in the comments:
"But you know what equality means to you... Hence, you should be able to find an appropriate epsilon".
Turn this statement around to say:
"If you know what equality means to you, then you should be able to find an appropriate epsilon."
The whole point to what I am trying to say is that there are applications where we don't know what equality means in the absolute sense, thus we have to resort to a relative compare which is what the integer version is trying to do.
When it comes to speed, follow these rules:
If you're not a very experienced developer, don't optimize.
If you are an experienced developer, don't optimize yet.
Do the easiest method.
Alex
Oskar's right. Don't screw with this unless you really, really need that performance.
And you don't. If you were in the situation that did, you wouldn't have needed to ask the question -- you'd already know. If you think you do, then you don't. Your performance problems lie elsewhere. Just use the readable version.
Using any method that compares bitwise will result in trouble when fractions are represented by approximations. All floating point numbers with fractions that are not denominated in powers of two (1/2, 1/4, 1/8, 1/65536, &c) are approximated. So, of course, are all irrational numbers.
float third = 1.0f / 3.0f;      // 1/3 cannot be represented exactly in binary
float direct = 10.0f / 3.0f;    // 10/3 computed directly, rounded once
float derived = third * 10.0f;  // 10/3 computed via the already-rounded 1/3
if (direct != derived)
    printf("Approximation!\n"); // the two routes give different bit patterns
The only time comparing bitwise would work is when you derive the floating point numbers in exactly the same way, or when they are exact representations (whole numbers, fractions that are powers of two). Even then, there can be multiple representations of some numbers, though I have never seen this in a working system.