I have a floating-point addition that is somewhat likely to go wrong because the values have different magnitudes, so quite a few significant digits are shifted out (possibly even all of them). In the scope of the entire calculation, precision is not that relevant; what matters is that the result is greater than or equal to what the result would be with arbitrary precision (I'm keeping track of the end of a range here, and extending it by at least a certain amount).
So I'd need an addition that rounds up when bringing the summands to the same exponent (i.e. if any bit shifted out of a summand was set, the addition should take place with nextafter(denormalized_summand, +infinity)).
Is there an easy way to perform this addition? (Manually denormalizing the smaller summand and using nextafter on it springs to mind, but I doubt that would be efficient.)
You can set the FPU rounding mode to "upward" and then just add normally.
This is how it's done in GNU environments:
#include <fenv.h>
fesetround(FE_UPWARD);
If you have a Microsoft compiler, the equivalent code is:
#include <float.h>
_set_controlfp(_RC_UP, _MCW_RC);
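For illustration, here is a minimal self-contained sketch of the GNU variant; the -frounding-math remark is an assumption about GCC/Clang optimization behavior, not part of the original answer. Save the current mode, round upward for the sensitive addition, then restore:
#include <cfenv>
#include <cstdio>
// Compile with -frounding-math on GCC/Clang so the optimizer does not
// constant-fold the addition across the rounding-mode change.
int main() {
    const int old_mode = std::fegetround();
    std::fesetround(FE_UPWARD);    // every inexact result now rounds toward +infinity
    double hi = 1e20 + 1.0;        // rounds up to the next double above 1e20
    std::fesetround(old_mode);     // restore the caller's rounding mode
    std::printf("%.17g\n", hi);    // strictly greater than 1e20
}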
I'm having trouble with rounding floats. I'm solving a task where you need to round your result to two decimal places. But I can't do it when the third decimal digit is 5, because the value is stored imprecisely.
For example: My result is equal to 1.005 and that should be rounded to 1.01. But C++ rounds it to 1.00 because the original float is stored as 1.0049999... and not 1.005.
I've already tried always adding a very small float to the result, but there are other test cases which are then rounded up when they should be rounded down.
I know how floating-point works and that it is often not completely accurate. I'm just wondering whether anyone has found a way around this specific problem.
When you say "my result is equal to 1.005", you are assuming some count of true decimal digits. This can be 1.005 (three digits of fractional part), 1.0050 (four digits), 1.005000, and so on.
So you should first round, using an ordinary rounding, to that count of digits. It is simpler to do this in integers: for example, with 6 fractional digits, apply an ordinary round(), rint(), etc. after multiplying by 1,000,000. With this step you get the exact decimal number. After that, you can apply the required final rounding.
In your example, this will round 1,004,999.99... to 1,005,000. Then divide by 10,000 and round again.
(Notice that there are suggestions to perform this rounding in a particular way. The General Decimal Arithmetic specification and IBM arithmetic manuals suggest rounding such that an exact fractional part of 0.5 is rounded away from zero unless the least significant result digit would become 0 or 5, in which case it is rounded toward zero. But if you have no such rounding mode available, general away-from-zero rounding is also suitable.)
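Here is a minimal sketch of that two-step rounding (the function name is mine; std::round rounds halfway cases away from zero, which matches the away-from-zero fallback mentioned above):
#include <cmath>
#include <cstdio>
// Round to 2 decimal places, assuming the input carries 6 true decimal digits.
double round_to_2_places(double x) {
    // Step 1: recover the exact 6-digit decimal value as an integer.
    double micro = std::round(x * 1e6);          // 1.0049999... -> 1005000
    // Step 2: final rounding to 2 places (half away from zero).
    double hundredths = std::round(micro / 1e4); // 100.5 -> 101
    return hundredths / 100.0;                   // -> 1.01
}
int main() {
    std::printf("%.2f\n", round_to_2_places(1.005)); // prints 1.01
}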
If you are implementing arithmetic for money accounting, it is reasonable to avoid floating point entirely and use fixed-point arithmetic (emulated with integers if needed). This is better because the rounding methods I've described inevitably involve conversion to integers (and back), so it's cheaper to work with such integers directly. You get inexact-operation checking as well (at the cost of explicit integer-overflow checks).
If you can, use a library like Boost with its Multiprecision support.
Another option would be to use a long double; maybe that's precise enough for you.
Question:
I have a large number of floating-point numbers (~10,000), each having 6 digits after the decimal point. The product of all these numbers would have about 60,000 digits, but a double holds only about 15 significant digits. The product has to be output with 6 digits of precision after the decimal point.
My approach:
I thought of multiplying these numbers by 10^6, then multiplying them together, and later dividing by 10^12.
I also thought of multiplying these numbers using arrays to store their digits and later converting them back to decimal, but this appears cumbersome and may not yield the correct result.
Is there an alternative, easier way to do this?
I thought of multiplying these numbers by 10^6, then multiplying them together, and later dividing by 10^12.
This would only achieve further loss of accuracy. In floating point, large numbers are represented approximately just like small numbers are. Making your numbers bigger only means you are doing 19,999 multiplications (and one division) instead of 9,999 multiplications; it does not magically give you more significant digits.
This manipulation would only be useful if it prevented the partial product from reaching subnormal territory (and in that case, multiplying by a power of two would be recommended, to avoid loss of accuracy in the scaling multiplications themselves). There is no indication in your question that this happens (no example data set, no code), so it is only possible to provide the generic explanation below:
Floating-point multiplication is very well behaved when it does not underflow or overflow. To first order, you can assume that relative inaccuracies add up, so that multiplying 10,000 values produces a result that is 9,999 machine epsilons away from the mathematical result in relative terms (*).
The solution to your problem as stated (no code, no data set) is to use a wider floating-point type for the intermediate multiplications. This solves both underflow and overflow, and leaves you with enough relative accuracy on the end result that, once rounded back to the original floating-point type, the product is wrong by at most one ULP.
Depending on your programming language, such a wider floating-point type may be available as long double. For 10,000 multiplications, the 80-bit "extended double" format, widely available on x86 processors, would improve things dramatically, and you would barely see any performance difference, as long as your compiler maps this 80-bit format to a floating-point type. Otherwise, you would have to use a software implementation such as MPFR's arbitrary-precision floating-point format or the double-double format.
(*) In reality, relative inaccuracies compound, so that the real bound on the relative error is more like (1 + ε)^9999 − 1, where ε is the machine epsilon. Also, in reality, relative errors often cancel each other, so that you can expect the actual relative error to grow like the square root of the theoretical maximum error.
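A minimal sketch of that suggestion, assuming the compiler maps long double to the 80-bit extended format (true of most x86 GCC/Clang toolchains; on MSVC long double is the same as double, and a software wide type would be needed instead):
#include <vector>
// Accumulate the product in the wider type; round back to double only once.
double product(const std::vector<double>& xs) {
    long double acc = 1.0L;
    for (double x : xs)
        acc *= x;                       // one rounding per step, in extended precision
    return static_cast<double>(acc);    // single final rounding back to double
}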
I am writing code to emulate floating-point addition in C++ using integer addition and shifts, for some homework. I have googled the topic, and I am able to add floating-point numbers by aligning exponents and then adding. The problem is that I could not find the appropriate algorithm for rounding the result. Right now I am using truncation, which shows errors of something like 0.000x magnitude. But when I try to use this adder for complex calculations like FFTs, it shows enormous errors.
So what I am looking for now is the exact algorithm that my machine uses for rounding floating-point results. It would be great if someone could post a link on the subject.
Thanks in advance.
Most commonly, if the bits to be rounded away represent a value less than half that of the smallest bit to be retained, they are rounded downward, the same as truncation. If they represent more than half, they are rounded upward, thus adding one in the position of the smallest retained bit. If they are exactly half, they are rounded downward if the smallest retained bit is zero and upward if the bit is one. This is called “round-to-nearest, ties to even.”
This presumes you have all the bits you are rounding away, that none have been lost yet in the course of doing arithmetic. If you cannot keep all the bits, there are techniques for keeping track of enough information about them to do the correct rounding, such as maintaining three bits called guard, round, and sticky bits.
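A minimal sketch of the ties-to-even rule, under the assumption that the full discarded value is still at hand (so the guard and round bits collapse into a single "first discarded bit" plus a sticky OR of the rest):
#include <cstdint>
// Drop the low `shift` bits of `sig` with round-to-nearest, ties-to-even.
// Assumes 0 < shift < 64; a carry out of the top bit must be handled by the
// caller's renormalization step.
uint64_t round_nearest_even(uint64_t sig, unsigned shift) {
    const uint64_t first  = (sig >> (shift - 1)) & 1;                 // first discarded bit
    const bool     sticky = (sig & ((1ULL << (shift - 1)) - 1)) != 0; // OR of the rest
    uint64_t result = sig >> shift;                                   // truncate
    // Round up when the discarded part exceeds half, or equals half and the
    // retained result is odd (ties to even).
    if (first && (sticky || (result & 1)))
        ++result;
    return result;
}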
A floating-point type represents a number by storing its significant digits and its exponent in separate bit fields, so that the whole value fits in 16, 32, 64 or 128 bits.
A fixed-point type stores a number in two parts: one representing the integer part, the other representing the fractional part as negative powers of two: 2^-1, 2^-2, 2^-3, etc.
Floats are better because they have a wider range, in an exponent sense, but not if you want to store numbers with more precision over a certain range, for example using only integers from -16 to 16, and thus spending more bits on the digits past the radix point.
In terms of performance, which one is faster, or are there cases where each is faster than the other?
In video game programming, does everybody use floating point because the FPU makes it faster, or because the performance drop is just negligible, or do they roll their own fixed-point type?
Why isn't there any fixed-point type in C/C++?
That definition covers a very limited subset of fixed point implementations.
It would be more correct to say that in fixed point only the mantissa is stored and the exponent is a constant determined a priori. There is no requirement for the binary point to fall inside the mantissa, and definitely no requirement that it fall on a word boundary. For example, all of the following are "fixed point":
64-bit mantissa, scaled by 2^-32 (this fits the definition listed in the question)
64-bit mantissa, scaled by 2^-33 (now the integer and fractional parts cannot be separated by an octet boundary)
32-bit mantissa, scaled by 2^4 (now there is no fractional part)
32-bit mantissa, scaled by 2^-40 (now there is no integer part)
GPUs tend to use fixed point with no integer part (typically a 32-bit mantissa scaled by 2^-32). Therefore APIs such as OpenGL and Direct3D often use floating-point types which are capable of holding these values. However, manipulating the integer mantissa is often more efficient, so these APIs allow specifying coordinates (in texture space, color space, etc.) this way as well.
As for your claim that C++ doesn't have a fixed point type, I disagree. All integer types in C++ are fixed point types. The exponent is often assumed to be zero, but this isn't required and I have quite a bit of fixed-point DSP code implemented in C++ this way.
At the code level, fixed-point arithmetic is simply integer arithmetic with an implied denominator.
For many simple arithmetic operations, fixed-point and integer operations are essentially the same. However, there are some operations for which intermediate values must be represented with a higher number of bits and then rounded off. For example, to multiply two 16-bit fixed-point numbers, the result must be temporarily stored in 32 bits before renormalizing (or saturating) back to 16-bit fixed point, as in the sketch below.
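For instance, a minimal sketch of that 16-bit multiply, assuming a Q8.8 layout (8 integer bits, 8 fractional bits, implied denominator 2^8); the extra bits are simply truncated here rather than rounded or saturated:
#include <cstdint>
// Multiply two Q8.8 fixed-point values using a 32-bit intermediate.
int16_t q8_8_mul(int16_t a, int16_t b) {
    int32_t wide = static_cast<int32_t>(a) * b; // full 32-bit product (Q16.16)
    return static_cast<int16_t>(wide >> 8);     // renormalize back to Q8.8
}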
When the software does not take advantage of vectorization (such as CPU SIMD or GPGPU), integer and fixed-point arithmetic are faster than floating point. When vectorization is used, the efficiency of the vectorization matters a lot more, to the point that the performance difference between fixed point and floating point is moot.
Some architectures provide hardware implementations for certain math functions, such as sin, cos, atan and sqrt, for floating-point types only; some architectures provide no hardware implementation at all. In both cases, specialized math libraries may provide those functions using only integer or fixed-point arithmetic. Often, such libraries provide multiple levels of precision, for example answers which are accurate only up to N bits of precision (less than the full precision of the representation); the limited-precision versions may be faster than the highest-precision ones.
Fixed point is widely used in DSP and embedded systems, where the target processor often has no FPU and fixed point can be implemented reasonably efficiently using the integer ALU.
In terms of performance, that is likely to vary depending on the target architecture and the application. Obviously, if there is no FPU, then fixed point will be considerably faster. When you have an FPU, it will depend on the application too. For example, performing functions such as sqrt() or log() will be much faster when they are directly supported in the instruction set rather than implemented algorithmically.
There is no built-in fixed-point type in C or C++, I imagine, because they (or at least C) were envisaged as systems-level languages and the need for fixed point is somewhat domain-specific, and also perhaps because on a general-purpose processor there is typically no direct hardware support for fixed point.
In C++, defining a fixed-point data type class with suitable operator overloads and associated math functions can easily overcome this shortcoming. However, there are good and bad solutions to this problem. A good example can be found here: http://www.drdobbs.com/cpp/207000448. The link to the code in that article is broken, but I tracked it down to ftp://66.77.27.238/sourcecode/ddj/2008/0804.zip
You need to be careful when discussing "precision" in this context.
For the same number of bits of representation, the maximum fixed-point value has more significant bits than any floating-point value (because the floating-point format has to give some bits away to the exponent), but the minimum fixed-point value has fewer than any normal floating-point value (because the fixed-point value wastes most of its mantissa on leading zeros).
Also, depending on how you divide the fixed-point number up, the floating-point value may be able to represent smaller numbers, meaning that it has a more precise representation of "tiny but non-zero".
And so on.
The difference between floating-point and integer math depends on the CPU you have in mind. On Intel chips the difference in clock ticks is not big. Integer math is still faster because there are multiple integer ALUs that can work in parallel. Compilers are also smart enough to use the special address-calculation instructions to perform an add and multiply in a single instruction. Conversion counts as an operation too, so just choose your type and stick with it.
In C++ you can build your own type for fixed-point math: just define a struct with one int and override the appropriate operators, making them do what they normally do plus a shift to put the radix point back in the right position, as in the sketch below.
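A minimal sketch of that struct-with-overloads idea, assuming a Q16.16 layout; a real implementation would add comparisons, rounding and saturation:
#include <cstdint>
// Q16.16 fixed point: the raw integer holds value * 2^16.
struct Fixed {
    int32_t raw;
    static Fixed from_double(double d) { return Fixed{static_cast<int32_t>(d * 65536.0)}; }
    double to_double() const { return raw / 65536.0; }
    Fixed operator+(Fixed o) const { return Fixed{raw + o.raw}; }
    Fixed operator-(Fixed o) const { return Fixed{raw - o.raw}; }
    Fixed operator*(Fixed o) const {   // widen, multiply, shift the point back
        return Fixed{static_cast<int32_t>((static_cast<int64_t>(raw) * o.raw) >> 16)};
    }
    Fixed operator/(Fixed o) const {   // pre-shift the dividend to keep precision
        return Fixed{static_cast<int32_t>((static_cast<int64_t>(raw) << 16) / o.raw)};
    }
};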
You don't use float in games because it is faster or slower; you use it because it is easier to implement algorithms in floating point than in fixed point. You are assuming the reason has to do with computing speed, and that is not the reason; it has to do with ease of programming.
For example, you may define the width of the screen/viewport as going from 0.0 to 1.0, the height of the screen from 0.0 to 1.0, the depth of the world from 0.0 to 1.0, and so on. Matrix math, etc., makes things really easy to implement. Do all of the math that way up to the point where you need to compute real pixels on a real screen size, say 800x400: project the ray from the eye to the point on the object in the world, compute where it pierces the screen using 0-to-1 math, then multiply x by 800 and y by 400 and place that pixel.
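A tiny sketch of that normalized-coordinates approach (the names are mine): keep everything in 0-to-1 space and convert to pixels only at the very end.
// Convert normalized [0,1] coordinates to pixel coordinates at the last step.
struct Pixel { int x, y; };
Pixel to_pixel(double nx, double ny, int width = 800, int height = 400) {
    return Pixel{ static_cast<int>(nx * width), static_cast<int>(ny * height) };
}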
Floating point does not store the exponent and mantissa in separate words, and the mantissa is an odd size: whatever is left over after the exponent and sign bits, e.g. 23 bits, not 16 or 32 or 64 bits.
Floating-point math at its core uses fixed-point logic, with extra logic and extra steps required. By definition, compared apples to apples, fixed-point math is cheaper because you don't have to manipulate the data on the way into the ALU and don't have to normalize it on the way out. Adding in IEEE and all of its baggage costs even more logic and more clock cycles (properly signed infinities, quiet and signaling NaNs, different results for the same operation if an exception handler is enabled). As someone pointed out in a comment, in a real system where you can do fixed and float in parallel, you can take advantage of some or all of the processors and recover some clocks that way. Both float and fixed clock rates can be increased by using vast quantities of chip real estate; fixed will remain cheaper, but float can approach fixed speeds using these kinds of tricks, as well as parallel operation.
One issue not covered in the answers is power consumption. Though it depends highly on the specific hardware architecture, an FPU usually consumes much more energy than the CPU's integer ALU, so if you target mobile applications where power consumption matters, it's worth considering a fixed-point implementation of the algorithm.
It depends on what you're working on. If you're using fixed point, then you lose precision: you have to select the number of places after the radix point (which may not always be good enough). With floating point you don't need to worry about this, as the precision offered is nearly always good enough for the task in hand, since it uses a standard-form (scientific-notation) representation of the number.
The pros and cons come down to speed and resources. On modern 32-bit and 64-bit platforms there is really no need to use fixed point. Most systems come with built-in FPUs that are hardwired to perform floating-point operations quickly. Furthermore, most modern CPUs provide SIMD instruction sets which help optimise vector-based methods via vectorisation and unrolling. So fixed point only comes with a downside.
On embedded systems and small microcontrollers (8-bit and 16-bit) you may have neither an FPU nor extended instruction sets, in which case you may be forced to use fixed-point methods, or limited floating-point software routines that are not very fast. In these circumstances fixed point will be the better, or even your only, choice.
(1) I have met several cases where epsilon is added to a non-negative variable to guarantee a nonzero value. So I wonder: why not add the minimum value the data type can represent instead of epsilon? What are the different problems these two can solve?
(2) I also notice that the reciprocal of the maximum value of a double-precision type is bigger than its minimum value, and the reciprocal of its minimum value is inf, way bigger than its maximum value. Is it useful to compute the reciprocals of its maximum and minimum values?
(3) For a very small positive number of double type, when computing its reciprocal, how small does it have to be before its reciprocal stops making sense? Is it better to put an upper bound on the reciprocal? How large should that bound be?
Thanks and regards
Epsilon
Epsilon is the smallest value that can be added to 1.0 to produce a result distinguishable from 1.0. As Poita_ implied, this is useful for dealing with rounding errors. The situation is pretty simple: a normal floating-point number has precision that remains fixed, regardless of the magnitude of the number. To put that slightly differently, it always computes to the same number of significant digits. For example, a typical implementation of double will have around 15 significant digits (which translates to epsilon ≈ 1e-15). If you're working with a number around 1e-200, the smallest change it can represent will be around 1e-215. If you're working with a number around 1e+200, the smallest change it can represent will be around 1e+185.
Meaningful use of epsilon normally requires scaling it to the range of the numbers you're working with, and using that to define a range you're willing to accept as probably due to rounding errors: if two numbers fall within that range, you assume they're probably really equal. For example, with an epsilon of 1e-15, you might decide to treat numbers that fall within 1e-14 of each other as equal (i.e. one significant digit has been lost to rounding).
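A minimal sketch of such a scaled comparison (the function name and the tolerance of 10 epsilons are mine):
#include <algorithm>
#include <cmath>
#include <limits>
// Treat a and b as equal when they differ by at most `ulps` epsilons,
// scaled to the magnitude of the larger operand.
bool nearly_equal(double a, double b, double ulps = 10.0) {
    const double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= ulps * std::numeric_limits<double>::epsilon() * scale;
}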
The smallest number that can be represented will normally be dramatically smaller than that. With that same typical double, it's usually going to be around 1e-308. This would be equivalent to epsilon if you were using fixed-point numbers instead of floating-point numbers. For example, at one time quite a few people used fixed point for various graphics. A typical version was a 16-bit integer broken into something like 10 bits before the radix point and six bits after it. Such a number can represent values from roughly 0 to 1024, with about two (decimal) digits after the decimal point. Alternatively, you can treat it as signed, running from roughly -512 to +512, again with around two digits after the decimal point.
In this case, the scaling factor is fixed, so the smallest difference that can be represented between two numbers is also fixed -- i.e. the difference between 1024 and the next larger number is exactly the same as the difference between 0 and the next larger number.
Reciprocals
I'm not sure exactly why you're concerned with computing reciprocals of extremely large or extremely small numbers. IEEE floating point uses denormals, which means numbers close to the limits of the range lose precision. Basically, a number is divided into an exponent and a significand. The exponent contains the magnitude of the number, and the significand contains the significant digits. Each is represented with a specified number of bits. In the usual case, numbers are normalized, which means they're vaguely similar to the scientific notation we all learned in school. In scientific notation, you always adjust the significand and exponent so there's exactly one place before the decimal point, so (for example) 140 becomes 1.4e2, 20030 becomes 2.003e4, and so on.
Think of this as the "normalized" form of a floating-point number. Assume, however, that you're limited to an exponent having 2 digits, so it can only run from -99 to +99. Also assume that you can have a maximum of 15 significant digits. Within those limitations, you could produce a number like 0.00001002e-99. This lets you represent a number smaller than 1e-99, at the expense of losing some precision: instead of 15 digits of precision, you've used 5 digits of your significand to represent magnitude, so you're left with only 10 digits that are really significant.
Except that it's in binary instead of decimal, IEEE floating point works roughly that way.
As you approach the end of the range, the numbers have less and less precision, until (at the very end of the range) you have only one bit of precision left.
If you take that number that has only one bit of precision, and take its reciprocal you get an extremely large number -- but since you only started with one bit of precision, the result can only have one bit of precision as well. Although slightly better than no result at all, it's still pretty close to meaningless. You've reached the limit of what the number of bits can represent; about the only way to cure the problem is to use more bits.
There's not really any one point at which a reciprocal (or other computation) "stops making sense". It's not really a hard line where one result makes sense, and another doesn't. Rather, it's a slope, where one result might have 15 digits of precision, another 10 and a third only 1. What "makes sense" or not is mostly how you interpret that result. To get meaningful results, you need a fair idea of how many digits in your final result are really meaningful.
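The limits discussed above are easy to observe directly (DBL_TRUE_MIN, the smallest subnormal, requires C11 or C++17):
#include <cfloat>
#include <cstdio>
int main() {
    // The reciprocal of the largest double is subnormal (smaller than DBL_MIN).
    std::printf("1/DBL_MAX      = %g (DBL_MIN = %g)\n", 1.0 / DBL_MAX, DBL_MIN);
    // The reciprocal of the smallest subnormal overflows to infinity.
    std::printf("1/DBL_TRUE_MIN = %g\n", 1.0 / DBL_TRUE_MIN);
}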
You need to understand how floating-point numbers are represented in the CPU. In the data type, 1 bit is reserved for the sign, i.e. whether it is a positive or negative number (yes, you can have positive and negative 0 in floating point). Then a number of bits is reserved for the significand (or mantissa); these are the significant digits of the floating-point number. Finally, a number of bits is reserved for the exponent. The value of the floating-point number is then:
(-1)^sign * significand * 2^exponent
This means the smallest number is a very small value, namely the smallest significand with the lowest exponent. The rounding error, however, is much larger and depends on the magnitude of the number, namely the smallest change representable at a given exponent. Epsilon is the difference between 1.0 and the next representable larger value. That's why epsilon is used in code that is robust against rounding errors, and you should really scale epsilon with the magnitude of the numbers you work with if you do it right. The smallest representable value is normally not of any significant use.
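A quick illustration of the difference between these two quantities, using the standard <cfloat> constants:
#include <cfloat>
#include <cmath>
#include <cstdio>
int main() {
    std::printf("DBL_EPSILON        = %g\n", DBL_EPSILON);                    // ~2.22e-16
    std::printf("nextafter(1,2) - 1 = %g\n", std::nextafter(1.0, 2.0) - 1.0); // equals DBL_EPSILON
    std::printf("DBL_MIN            = %g\n", DBL_MIN);                        // ~2.23e-308, far smaller
}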
You're seeing the difference between the normalized and denormalized minimum. The problem is that, due to the way the significand is used, it is possible to reach a larger negative exponent than positive one: say the bit pattern of the significand is all zeros except the last bit, which is one; then the exponent is effectively lowered by the number of bits in the significand. For the maximum you cannot do this: even if you set the significand to all ones, the effective exponent will still only be the exponent that is given. Think of the difference between 0.000001e-10 and 9.999999e+10; the first is much smaller than the second is big. The first is actually 1e-16 while the second is approximately 1e+11.
It depends on the precision of the floating-point number, of course. In the case of double precision, the difference between the maximum value and the next smaller value is already huge (on the order of 10^292), so your rounding errors will be very big. If the value is too small you will simply get inf instead, as you already saw. Really, there is no strict answer; it depends entirely on the precision of the numbers you need. Given that the rounding error is approximately epsilon*magnitude, the value 1/epsilon already has a rounding error of around 1.0; if you need numbers to be accurate to 1e-3, then even epsilon is too small a value to divide by.
See these wikipedia pages on IEEE754 and Machine epsilon for some background info.
Epsilons are added to test equality between two values that should be equal, but aren't because of rounding errors. While you could use the smallest positive value for epsilon, it wouldn't be optimal, because it's simply too small. The rounding errors caused by floating point arithmetic almost always exceed that smallest value, so a larger epsilon is needed. How large depends on your desired accuracy.
I don't understand the question. Are the reciprocals useful for what? I can't think of any reason why they would be useful.
In general, dividing by very small values is a bad idea as it will cause very large rounding errors. I'm not sure what you mean by adding an upper bound. Just avoid dividing by small values wherever possible.