28x slowdown when multiplying small floating point numbers [duplicate] - c++

So I'm trying to learn more about Denormalized numbers as defined in the IEEE 754 standard for Floating Point numbers. I've already read several articles thanks to Google search results, and I've gone through several Stack Overflow posts. However, I still have some questions unanswered.
First off, just to review my understanding of what a Denormalized float is:
Numbers which have fewer bits of precision, and are smaller (in
magnitude) than normalized numbers
Essentially, a denormalized float has the ability to represent the SMALLEST (in magnitude) number that is possible to be represented with any floating point value.
Does that sound correct? Anything more to it than that?
I've read that:
using denormalized numbers comes with a performance cost on many
platforms
Any comments on this?
I've also read in one of the articles that
one should "avoid overlap between normalized and denormalized numbers"
Any comments on this?
In some presentations of the IEEE standard, when floating point ranges are presented the denormalized values are excluded and the tables are labeled as an "effective range", almost as if the presenter is thinking "We know that denormalized numbers CAN represent the smallest possible floating point values, but because of certain disadvantages of denormalized numbers, we choose to exclude them from ranges that will better fit common use scenarios" -- As if denormalized numbers are not commonly used.
I guess I just keep getting the impression that using denormalized numbers turns out to not be a good thing in most cases?
If I had to answer that question on my own I would want to think that:
Using denormalized numbers is good because you can represent the smallest (in magnitude) numbers possible -- As long as precision is not important, and you do not mix them up with normalized numbers, AND the resulting performance of the application fits within requirements.
Using denormalized numbers is a bad thing because most applications do not require representations so small -- The precision loss is detrimental, and you can shoot yourself in the foot too easily by mixing them up with normalized numbers, AND the performance is not worth the cost in most cases.
Any comments on these two answers? What else might I be missing or not understand about denormalized numbers?

Essentially, a denormalized float has the ability to represent the
SMALLEST (in magnitude) number that is possible to be represented with
any floating point value.
That is correct.
using denormalized numbers comes with a performance cost on many platforms
The penalty is different on different processors, but it can be up to 2 orders of magnitude. The reason? The same as for this advice:
one should "avoid overlap between normalized and denormalized numbers"
Here's the key: denormals are a fixed-point "micro-format" within the IEEE-754 floating-point format. In normal numbers, the exponent indicates the position of the binary point. Denormal numbers contain the last 52 bits in the fixed-point notation with an exponent of 2^-1074 for doubles.
So, denormals are slow because they require special handling. In practice, they occur very rarely, and chip makers don't like to spend too many valuable resources on rare cases.
Mixing denormals with normals is slow because then you're mixing formats and you have the additional step of converting between the two.
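To make the fixed-point picture concrete, here is a minimal sketch of my own (assuming an IEEE-754 binary64 double and a printf that supports the C99 %a hex-float format) showing the smallest normal and denormal doubles, and that denormals are integer multiples of 2^-1074:

```cpp
#include <cstdio>
#include <limits>

int main() {
    // Smallest positive normal double: 2^-1022
    const double min_normal = std::numeric_limits<double>::min();
    // Smallest positive denormal double: 2^-1074 (only the last significand bit set)
    const double min_denorm = std::numeric_limits<double>::denorm_min();

    std::printf("smallest normal   = %a\n", min_normal);
    std::printf("smallest denormal = %a\n", min_denorm);

    // Every denormal is an integer multiple of 2^-1074, i.e. effectively fixed point.
    const double d = 3 * min_denorm;   // 3 * 2^-1074
    std::printf("3 * denorm_min    = %a\n", d);
}
```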
I guess I just keep getting the impression that using denormalized
numbers turns out to not be a good thing in most cases?
Denormals were created for one primary purpose: gradual underflow. It's a way to keep the relative difference between tiny numbers small. If you go straight from the smallest normal number to zero (abrupt underflow), the relative change is infinite. If you go to denormals on underflow, the relative change is still not fully accurate, but at least more reasonable. And that difference shows up in calculations.
To put it a different way. Floating-point numbers are not distributed uniformly. There is always the same number of representable values between successive powers of two: 2^52 (for double precision). So without denormals, you always end up with a gap between 0 and the smallest floating-point number that is 2^52 times the size of the difference between the smallest two numbers. Denormals fill this gap uniformly.
As an example about the effects of abrupt vs. gradual underflow, look at the mathematically equivalent x == y and x - y == 0. If x and y are tiny but different and you use abrupt underflow, then if their difference is less than the minimum cutoff value, their difference will be zero, and so the equivalence is violated.
With gradual underflow, the difference between two tiny but different normal numbers gets to be a denormal, which is still not zero. The equivalence is preserved.
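Here is a small sketch of that equivalence (my own example, assuming IEEE-754 doubles with gradual underflow, i.e. no flush-to-zero/denormals-are-zero mode enabled):

```cpp
#include <cmath>
#include <iostream>

int main() {
    // Two tiny but distinct normal doubles just above the denormal threshold (~2.2e-308).
    const double x = 5e-308;
    const double y = 4e-308;

    const double diff = x - y;   // ~1e-308: smaller than the smallest normal, so it becomes a denormal

    std::cout << std::boolalpha;
    std::cout << "x == y          : " << (x == y) << '\n';                                 // false
    std::cout << "x - y == 0.0    : " << (diff == 0.0) << '\n';                            // false, equivalence preserved
    std::cout << "diff is denormal: " << (std::fpclassify(diff) == FP_SUBNORMAL) << '\n';  // true with gradual underflow
}
```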
So, using denormals on purpose is not advised, because they were designed only as a backup mechanism in exceptional cases.

Related

Why aren’t posit arithmetic representations commonly used?

I recently found this library that seems to provide its own types and operations on real numbers that are 2 to 3 orders of magnitude faster than normal floating point arithmetic.
The library is based on a different representation for real numbers, one that is described as both more efficient and more mathematically accurate than floating point - posit.
If this representation is so efficient why isn’t it widely used in all sorts of applications and implemented in hardware, or maybe it is? As far as I know most typical hardware uses some kind of IEEE floating point representation for real numbers.
Is it somehow maybe only applicable to some very specific AI research, as they seem to list mostly that as an example?
If this representation is not only hundreds to thousands of times faster than floating point, but also much more deterministic and designed for use in concurrent systems, why isn’t it implemented in GPUs, which are basically massively concurrent calculators working on real numbers? Wouldn’t it bring huge advances in rendering performance and GPU computation capabilities?
Update: People behind the linked Universal library have released a paper about their design and implementation.
The most objective and convincing reason I know of is that posits were introduced less than 4 years ago. That's not enough time to make inroads in the marketplace (people need time to develop implementations), much less take it over (which, among other things, requires overcoming incompatibilities with existing software).
Whether or not the industry wants to make such a change is a separate issue that tends towards subjectivity.
The reason the IEEE standard seems to be slower is that it treats some concerns as more important. For example:
.
.
.
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) defines:
arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special "not a number" values (NaNs)
interchange formats: encodings (bit strings) that may be used to exchange floating-point data in an efficient and compact form
rounding rules: properties to be satisfied when rounding numbers during arithmetic and conversions
operations: arithmetic and other operations (such as trigonometric functions) on arithmetic formats
exception handling: indications of exceptional conditions (such as division by zero, overflow, etc.)
The above is copied from Wikipedia: https://en.wikipedia.org/wiki/IEEE_754
.
.
.
Your linked library, which implements the posit number system, advertises the following strengths.
Economical - No bit patterns are redundant. There is one representation for infinity denoted as ± inf and zero. All other bit patterns are valid distinct non-zero real numbers. ± inf serves as a replacement for NaN.
Mathematically Elegant - There is only one representation for zero, and the encoding is symmetric around 1.0. Associative and distributive laws are supported through deferred rounding via the quire, enabling reproducible linear algebra algorithms in any concurrency environment.
Tapered Accuracy - Tapered accuracy is when values with small exponent have more digits of accuracy and values with large exponents have fewer digits of accuracy. This concept was first introduced by Morris (1971) in his paper "Tapered Floating Point: A New Floating-Point Representation".
Parameterized precision and dynamic range -- posits are defined by a size, nbits, and the number of exponent bits, es. This enables system designers the freedom to pick the right precision and dynamic range required for the application. For example, for AI applications we may pick 5 or 6 bit posits without any exponent bits to improve performance. For embedded DSP applications, such as 5G base stations, we may select a 16 bit posit with 1 exponent bit to improve performance per Watt.
Simpler Circuitry - There are only two special cases, Not a Real and Zero. No denormalized numbers, overflow, or underflow.
The above is copied from GitHub: https://github.com/stillwater-sc/universal
.
.
.
So, in my opinion, the posit number system prefers performance, while the IEEE Standard for Floating-Point Arithmetic (IEEE 754) prefers technical compatibility and interchangeability.
I strongly challenge the claim of that library being faster than IEEE floating point:
Modern hardware includes circuitry specifically designed to handle IEEE floating point arithmetic. Depending on your CPU model, it can perform roughly 0.5 to 4 floating point operations per clock cycle. Yes, this circuitry does complex things, but because it's built in hardware and aggressively optimized for many years, it achieves this kind of speed.
Any software library that provides a different floating point format must perform the arithmetic in software. It cannot just say "please multiply these two numbers using double precision arithmetic" and see the result appear in the corresponding register two clock cycles later; it must contain code that takes the four different parts of the posit format, handles them separately, and fuses together a result. And that code takes time to execute. Much more time than just two clock cycles.
The "universal" library may have corner cases where its posit number format shines. But speed is not where it can hope to compete.

Is it safe to use double for scientific constants in C++?

I want to do some calculations in C++ using several scientific constants like:
effective mass of electron (m): 9.109e-31 kg
charge of electron: 1.602e-19 C
Boltzmann constant (k): 1.38e-23
Time (T): 8.92e-13
And I have calculations like sqrt((2kT)/m).
Is it safe to use double for these constants and for results?
Floating point arithmetic and accuracy is a very tricky subject. You should absolutely read the floating-point-gui.de site.
Errors of many floating point operations can accumulate to the point of giving meaningless results. Several catastrophic events (loss of life, billion-dollar crashes) have happened because of this. More will happen in the future.
There are some static source analyzers dedicated to detecting them, for example Fluctuat (by my CEA colleagues, several now at Ecole Polytechnique, Palaiseau, France) and others. But Rice's theorem applies, so the static analysis problem is undecidable in general.
(but static analysis of floating point accuracy can sometimes work in practice on small programs of a few thousand lines; it does not scale well to large programs)
There are also some programs instrumenting calculations, for example CADNA from LIP6 in Paris, France.
(but instrumentation may give a huge over-approximation of the error)
You could design your numerical algorithms to be less sensitive to floating point errors. This is very difficult (and you'll need years of work to acquire the relevant skills and expertise).
(you need numerical, mathematical, and computer science skills, at PhD level)
You could also use arbitrary-precision arithmetic, or extended precision (e.g. 128-bit quad-precision floats). This slows down the computations.
An important consideration is how much effort (time and money) you can allocate to hunt floating point errors, and how much they matter to your particular problem. But there is No Silver Bullet, and the question of floating point accuracy remains a very difficult issue (you could work your entire life on it).
PS. I am not a floating point expert. I just happen to know some.
With the particular example you gave (constants and calculations): YES
You didn't define 'safe' in your problem. I will assume that you want to keep the same number of correct significant digits.
doubles are correct to 15 significant digits
you have constants that have 4 significant digits
the operations involved are multiplication, division, and one square root
it doesn't seem that your results are going to the 'edge' cases of doubles (for very small or large exponent value, where mantissa loses precision)
In this particular case, the result will be correct to about 4 significant digits.
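For illustration, a minimal sketch of the calculation from the question in plain double (the constant values are copied from the question; the variable names and the printed precision are my own choices):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double m = 9.109e-31;  // effective mass of electron, kg
    const double k = 1.38e-23;   // Boltzmann constant
    const double T = 8.92e-13;   // the value labelled "Time" in the question

    const double result = std::sqrt((2.0 * k * T) / m);

    // double carries roughly 15-16 significant decimal digits, far more than
    // the 3-4 digits of the input constants, so the rounding of the two
    // multiplications, one division and one square root is negligible here.
    std::printf("sqrt(2kT/m) = %.6e\n", result);   // roughly 5.2e-03
}
```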
In the general case, however, you have to be careful (the answer is probably no, but it depends on your definition of 'safe', of course).
This is a large and complicated subject. In particular, your result might not be correct to the same number of significant digits if you have:
a lot more operations,
subtractions of numbers close to each other (catastrophic cancellation; see the sketch after this list),
other problematic operations
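A short sketch of the cancellation issue mentioned in the list above, with made-up example values:

```cpp
#include <cstdio>

int main() {
    // Two numbers that agree in their first 12 significant digits.
    const double a = 1.000000000001234;
    const double b = 1.000000000001200;

    // The exact decimal difference is 3.4e-14, but the leading digits cancel:
    // only the last few digits of a and b survive, and those already carry
    // rounding error from the decimal-to-binary conversion, so the computed
    // difference has only a couple of correct significant digits.
    const double diff = a - b;
    std::printf("a - b = %.17e\n", diff);
}
```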
Obligatory reading : What Every Computer Scientist Should Know About Floating-Point Arithmetic
See the good answer by Basile Starynkevitch above for other references.
Also, for complex calculations, it is relevant to have some notion of the Condition number of a problem.
If you need a yes or no answer, No.

How to force 32bits floating point calculation consistency across different platforms?

I have a simple piece of code that operates with floating points.
Few multiplications, divisions, exp(), subtraction and additions in a loop.
When I run the same piece of code on different platforms (like PC, Android phones, iPhones) I get slightly different results.
The result is pretty much equal on all the platforms but has a very small discrepancy - typically 1/1000000 of the floating point value.
I suppose the reason is that some phones don't have floating point registers and just simulate those calculations with integers, while others do have floating point registers but with different implementations.
There is evidence of that here: http://christian-seiler.de/projekte/fpmath/
Is there a way to force all the platforms to produce consistent results?
For example, a good and fast open-source library that implements floating point mechanics with integers (in software), so I can avoid hardware implementation differences.
The reason I need exact consistency is to avoid compound errors among layers of calculations.
Currently those compound errors do produce a significantly different result.
In other words, I don't care so much which platform has the more correct result, but rather want to force consistency to be able to reproduce equal behavior. For example, a bug discovered on a mobile phone is much easier to debug on a PC, but I need to reproduce this exact behavior.
One relatively widely used and high quality software FP implementation is MPFR. It is a lot slower than hardware FP, though.
Of course, this won't solve the actual problems your algorithm has with compound errors, it will just make it produce the same errors on all platforms. Probably a better approach would be to design an algorithm which isn't as sensitive to small differences in FP arithmetic, if feasible. Or if you go the MPFR route, you can use a higher precision FP type and see if that helps, no need to limit yourself to emulating the hardware single/double precision.
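For what it's worth, here is a minimal sketch of what using MPFR directly looks like (assuming MPFR and GMP are installed and you link with -lmpfr -lgmp; the 113-bit precision is an arbitrary example):

```cpp
#include <mpfr.h>

int main() {
    mpfr_t x, y, z;
    // 113 bits of significand (roughly IEEE quad precision), identical on every platform.
    mpfr_init2(x, 113);
    mpfr_init2(y, 113);
    mpfr_init2(z, 113);

    mpfr_set_d(x, 0.1, MPFR_RNDN);   // round-to-nearest, like the IEEE default
    mpfr_set_d(y, 3.0, MPFR_RNDN);
    mpfr_mul(z, x, y, MPFR_RNDN);    // z = x * y, correctly rounded to 113 bits

    mpfr_printf("0.1 * 3 = %.30Rf\n", z);

    mpfr_clears(x, y, z, (mpfr_ptr) 0);
    return 0;
}
```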
32-bit floating point math will, for a given calculation, have at best a precision of 1 in 16777216 (1 in 2^24). Functions such as exp are often implemented as a sequence of calculations, so they may have a larger error due to this. If you do several calculations in a row, the errors will add and multiply up. In general, float has about 6-7 digits of precision.
As one comment says, check that the rounding mode is the same. Most FPUs have a "round to nearest" (rtn), "round to zero" (rtz) and "round to even" (rte) mode that you can choose. The default on different platforms MAY vary.
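A small sketch of inspecting and forcing the rounding mode through the standard <cfenv> facilities (whether changing it affects all subsequent code depends on the compiler's FENV_ACCESS support):

```cpp
#include <cfenv>
#include <cstdio>

// Strictly speaking, touching the floating-point environment requires
// "#pragma STDC FENV_ACCESS ON", which not every compiler honours.

int main() {
    switch (std::fegetround()) {
        case FE_TONEAREST:  std::puts("round to nearest");  break;
        case FE_TOWARDZERO: std::puts("round toward zero"); break;
        case FE_UPWARD:     std::puts("round up");          break;
        case FE_DOWNWARD:   std::puts("round down");        break;
        default:            std::puts("unknown");           break;
    }

    // Force round-to-nearest so that all platforms at least agree on this setting.
    std::fesetround(FE_TONEAREST);
    return 0;
}
```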
If you perform additions or subtractions of fairly small numbers with fairly large numbers, you will have a greater error from these sorts of operations, since the numbers have to be normalized.
Normalized means shifted such that both numbers have the decimal place lined up - just like if you do that on paper, you have to fill in extra zeros to line up the two numbers you are adding - but of course on paper you can add 12419818.0 with 0.000000001 and end up with 12419818.000000001, because paper has as much precision as you can be bothered with. Doing this in float will simply give back the same number as before; in double the tiny part is likewise almost entirely lost.
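A quick sketch of that effect, using the same numbers as above but in float, where the tiny addend is lost completely:

```cpp
#include <cstdio>

int main() {
    const float big  = 12419818.0f;
    const float tiny = 0.000000001f;   // 1e-9

    // Near 12419818 the spacing between adjacent floats (one ulp) is 1.0,
    // so 1e-9 is far below half an ulp and vanishes in the addition.
    const float sum = big + tiny;
    std::printf("sum        = %.9g\n", sum);
    std::printf("sum == big : %d\n", sum == big);   // prints 1
}
```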
There are indeed libraries that do floating point math in software - the most popular being MPFR - but it is a "multiprecision" library and will be fairly slow, because such libraries are not really built as a "plug-in replacement for float", but as a tool for when you want to calculate pi with thousands of digits, or calculate prime numbers in ranges much larger than 64 or 128 bits, for example.
It MAY solve the problem to use such a library, but it will be slow.
A better choice would be moving from float to double, which should have a similar effect (double has 53 bits of mantissa, compared to 24 in a 32-bit float, so more than twice as many bits in the mantissa). And it should still be available as hardware instructions in any reasonably recent ARM processor, and as such relatively fast, but not as fast as float (an FPU is available from ARMv7 - which is certainly what you find in an iPhone, at least from iPhone 3, and in middle to high end Android devices - I managed to find that the Samsung Galaxy ACE has an ARM9 processor [first introduced in 1997], so it has no floating point hardware).
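A tiny sketch of what those extra mantissa bits buy: the integer 16777217 (2^24 + 1) needs 25 significand bits, so it fits in a double but not in a float:

```cpp
#include <cstdio>

int main() {
    const float  f = 16777217.0f;   // 2^24 + 1 does not fit in float's 24-bit significand
    const double d = 16777217.0;    // fits easily in double's 53-bit significand

    std::printf("float : %.1f\n", f);   // prints 16777216.0 (rounded)
    std::printf("double: %.1f\n", d);   // prints 16777217.0 (exact)
}
```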

IEEE floating points implementation, precision and accumulation of approximations [closed]

If I understand IEEE floating points correctly, they are unable to accurately represent some values. They are accurate in very limited cases and pretty much every floating point operation increases the accumulated approximations. Also, another downside - the "minimum step" grows with the exponent.
Wouldn't it be better to offer some more concrete representation?
For example, use 20 bits for the "decimal" part, but not all 2^20 values, instead only 1000000, giving a full 1/millionth smallest possible representation/resolution, and use the other 44 bits for the integer part, giving quite the range. This way "floating point" numbers can be calculated using integer arithmetic, which may even end up faster. And in the case of multiplication, addition and subtraction there is no accumulation of approximations, the only possible loss is during division.
This concept rests on the fact that 2^n values are not optimal for representing decimal numbers, e.g. 1 does not divide that well into 1024 parts, but it divides pretty well into 1000. Technically, this is omitting to make use of the full precision, but I can think of plenty of cases where LESS can be MORE.
Naturally, this approach will lose both range and precision in a way, but in all the cases where extremities are not required, such a representation sounds like a good idea.
What you describe is fixed point arithmetic. Now, it's not necessarily about better or worse; each representation has advantages and disadvantages that often make one more suitable than the other for some specific purpose. For example:
Fixed point arithmetic does not introduce rounding errors for operations like addition and subtraction, which makes it suitable for financial calculations. You certainly don't want to store money as floating point values (see the sketch after this list).
Speculation: arguably, fixed point arithmetic is simpler in terms of implementation, which probably leads to smaller, more efficient circuits.
Floating-point representation covers an extremely large range: it can be used to store really big numbers (~10^38 for a 32-bit float, ~10^308 for a 64-bit one) and really small positive ones (~10^-320) at the expense of precision, while the fixed-point representation is linearly limited by its size.
Floating-point precision is not distributed uniformly across the representable range. Instead, most of the values (in terms of the number of representable numbers) lie in the unit ball around 0. That makes it very accurate in the range we operate in most often.
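As promised in the first bullet, here is a minimal fixed-point "money" sketch (the type name and the two-decimal scale are my own illustrative choices):

```cpp
#include <cstdint>
#include <cstdio>

// Money stored as an integer number of cents: addition and subtraction are
// exact, which is precisely the fixed-point advantage mentioned above.
struct Money {
    std::int64_t cents;
};

Money operator+(Money a, Money b) { return {a.cents + b.cents}; }
Money operator-(Money a, Money b) { return {a.cents - b.cents}; }

int main() {
    const Money price   {1999};   // 19.99
    const Money discount{ 333};   //  3.33

    const Money total = price - discount;   // exactly 16.66, no rounding error
    std::printf("total = %lld.%02lld\n",
                static_cast<long long>(total.cents / 100),
                static_cast<long long>(total.cents % 100));
}
```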
You said it yourself:
Technically, this is omitting to make use of the full precision, but I
can think of plenty of cases where LESS can be MORE
Exactly, that's the whole point. Now, depending on the problem at hand, a choice must be made. There is no one-size-fits-all representation, it's always a tradeoff.

What is the binary format of a floating point number used by C++ on Intel based systems?

I am interested to learn about the binary format for a single or a double type used by C++ on Intel based systems.
I have avoided the use of floating point numbers in cases where the data needs to potentially be read or written by another system (i.e. files or networking). I do realise that I could use fixed point numbers instead, and that fixed point is more accurate, but I am interested to learn about the floating point format.
Wikipedia has a reasonable summary - see http://en.wikipedia.org/wiki/IEEE_754.
But if you want to transfer numbers between systems you should avoid doing it in binary format. Either use middleware like CORBA (only joking, folks), Tibco etc. or fall back on that old favourite, textual representation.
This should get you started: http://docs.sun.com/source/806-3568/ncg_goldberg.html. (:
Floating-point format is determined by the processor, not the language or compiler. These days almost all processors (including all Intel desktop machines) either have no floating-point unit or have one that complies with IEEE 754. You get two or three different sizes (Intel offers 32 and 64 bits via SSE, plus 80 bits via the legacy x87 unit) and each one has a sign bit, an exponent, and a significand. The number represented is usually given by this formula:
(-1)^sign * 2^(E - k) * (1 + S / 2^(k'))
where k' is the number of bits in the significand, S is the significand field interpreted as an integer, E is the stored exponent, and k is a constant around the middle of the exponent range (the bias: 1023 for doubles). There are special representations for zero (plus and minus zero) as well as infinities and other "not a number" (NaN) values.
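To see those three fields directly, here is a small sketch that pulls the bits of a double apart (assuming the usual IEEE-754 binary64 layout: 1 sign bit, 11 exponent bits, 52 fraction bits):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    const double value = -6.25;   // -1.5625 * 2^2

    // Reinterpret the 64 bits of the double without violating aliasing rules.
    std::uint64_t bits;
    std::memcpy(&bits, &value, sizeof bits);

    const std::uint64_t sign     = bits >> 63;                 // 1 bit
    const std::uint64_t exponent = (bits >> 52) & 0x7FF;       // 11 bits, biased by 1023
    const std::uint64_t fraction = bits & 0xFFFFFFFFFFFFFull;  // 52 bits

    std::printf("sign     = %llu\n", (unsigned long long)sign);
    std::printf("exponent = %llu (unbiased %lld)\n",
                (unsigned long long)exponent, (long long)exponent - 1023);
    std::printf("fraction = 0x%013llx\n", (unsigned long long)fraction);
    // Value = (-1)^sign * 2^(exponent - 1023) * (1 + fraction / 2^52)
}
```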
There are definite quirks; for example, the fraction 1/10 cannot be represented exactly as a binary IEEE standard floating-point number. For this reason the IEEE standard also provides for a decimal representation, but this is used primarily by handheld calculators and not by general-purpose computers.
Recommended reading: David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic
As other posters have noted, there is plenty of information available on the IEEE format used by every modern processor, but that is not where your problems will arise.
You can rely on any modern system using IEEE format, but you will need to watch for byte ordering. Look up "endianness" on Wikipedia (or somewhere else). Intel systems are little-endian, a lot of RISC processors are big-endian. Swapping between the two is trivial, but you need to know what type you have.
Traditionally, people use big-endian formats for transmission. Sometimes people include a header indicating the byte order they are using.
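A small sketch of such a byte swap for a double (this assumes both machines use IEEE-754 binary64 and differ only in byte order):

```cpp
#include <cstddef>
#include <cstdio>
#include <cstring>

// Reverse the byte order of a double, e.g. to convert little-endian <-> big-endian.
double byteswap_double(double in) {
    unsigned char bytes[sizeof(double)];
    std::memcpy(bytes, &in, sizeof bytes);
    for (std::size_t i = 0; i < sizeof bytes / 2; ++i) {
        const unsigned char tmp = bytes[i];
        bytes[i] = bytes[sizeof bytes - 1 - i];
        bytes[sizeof bytes - 1 - i] = tmp;
    }
    double out;
    std::memcpy(&out, bytes, sizeof out);
    return out;
}

int main() {
    const double original = 3.141592653589793;
    const double swapped  = byteswap_double(original);   // what the other-endian machine would read
    const double back     = byteswap_double(swapped);    // swapping twice restores the value
    std::printf("%.17g -> %.17g\n", original, back);
}
```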
If you want absolute portability, the simplest thing is to use a text representation. However, that can get pretty verbose for floating point numbers if you want to capture the full precision, for example 0.1234567890123456e+123.
Intel's representation is IEEE 754 compliant.
You can find the details at http://download.intel.com/technology/itj/q41999/pdf/ia64fpbf.pdf .
Note that decimal floating-point constants may convert to different floating-point binary values on different systems (even with different compilers on the same system). The difference would be slight -- maybe only as large as 2^-54 for a double -- but is a difference nonetheless.
Use hexadecimal constants if you want to guarantee the same floating-point binary value on any platform.
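A short sketch of that advice (hexadecimal floating-point literals are standard in C++17; 0x1.999999999999ap-4 is the double nearest to 0.1):

```cpp
#include <cstdio>

int main() {
    // The decimal constant must be rounded to the nearest double; in principle,
    // how that rounding is done is what could differ slightly between systems.
    const double from_decimal = 0.1;

    // A hexadecimal constant names one exact binary value, so any compiler
    // targeting IEEE-754 binary64 produces the identical bit pattern.
    const double from_hex = 0x1.999999999999ap-4;   // C++17 hexadecimal float literal

    std::printf("decimal: %a\nhex    : %a\nequal  : %d\n",
                from_decimal, from_hex, from_decimal == from_hex);
}
```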