I just wanted to know how the CPU "casts" a floating point number.
I mean, I suppose that when we use a "float" or "double" in C/C++ the compiler is using the x87 unit, or am I wrong? (I couldn't find the answer.) So, if this is the case and the floating point numbers are not emulated, how does the compiler cast them?
I mean, I suppose that when we use a "float" or "double" in C/C++ the compiler is using the x87 unit, or am I wrong?
On modern Intel processors, the compiler is likely to use the SSE/AVX registers. The FPU is often not in regular use.
I just wanted to know how the CPU "casts" a floating point number.
Converting an integer to a floating-point number is a computation that is basically (glossing over some details):
Start with the binary (for unsigned types) or two’s complement (for signed types) representation of the integer.
If the number is zero, return all bits zero.
If it is negative, remember that and negate the number to make it positive.
Locate the highest bit set in the integer.
Locate the lowest bit that will fit in the significand of the destination format. (For example, for the IEEE-754 binary32 format commonly used for float, 24 bits fit in the significand, so the 25th bit after the highest bit set does not fit.)
Round the number at that position where the significand will end.
Calculate the exponent, which is a function of where the highest bit set is. Add a “bias” used in encoding the exponent (127 for binary32, 1023 for binary64).
Assemble a sign bit, bits for the exponent, and bits for the significand (omitting the high bit, because it is always one). Return those bits.
That computation prepares the bits that represent a floating-point number. (It omits details involving special cases like NaNs, infinities, and subnormal numbers because these do not occur when converting typical integer formats to typical floating-point formats.)
That computation may be performed “in software” (that is, with general instructions for shifting bits, testing values, and so on) or “in hardware” (that is, with special instructions for doing the conversion). All desktop computers have instructions for this. Small processors for special-purpose embedded use might not have such instructions.
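For concreteness, here is a minimal sketch of those steps in C++, converting a uint32_t to the bit pattern of an IEEE-754 binary32 using only integer operations. It assumes round-to-nearest, ties-to-even; the function name and structure are illustrative only, not how any particular compiler or CPU implements the conversion.

#include <cstdint>
#include <cstring>
#include <cstdio>

// Sketch only: build the bits of an IEEE-754 binary32 from a uint32_t,
// using integer operations and round-to-nearest, ties-to-even.
uint32_t u32_to_float_bits(uint32_t n) {
    if (n == 0) return 0;                           // zero: all bits zero

    int high = 31;                                  // locate the highest bit set
    while (!(n & (1u << high))) --high;

    uint32_t exponent = (uint32_t)high + 127;       // biased exponent (bias 127 for binary32)
    uint32_t significand;

    if (high <= 23) {                               // everything fits in the 24-bit significand
        significand = n << (23 - high);
    } else {
        int shift = high - 23;                      // bits that do not fit
        uint32_t kept    = n >> shift;
        uint32_t dropped = n & ((1u << shift) - 1);
        uint32_t half    = 1u << (shift - 1);
        if (dropped > half || (dropped == half && (kept & 1)))
            ++kept;                                 // round to nearest, ties to even
        if (kept >> 24) { kept >>= 1; ++exponent; } // rounding carried out of the significand
        significand = kept;
    }
    // Assemble sign (0 here), exponent, and significand without its implicit leading 1.
    return (exponent << 23) | (significand & 0x7FFFFF);
}

int main() {
    uint32_t bits = u32_to_float_bits(123456789u);
    float f;
    std::memcpy(&f, &bits, sizeof f);
    std::printf("%.1f vs %.1f\n", f, (float)123456789u);   // both print 123456792.0
}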
It is not clear what you mean by
"Cast" a floating point number?
If the target architecture has an FPU then the compiler will issue FPU instructions to manipulate floating point variables, no mystery there...
In order to assign a float variable to an int variable, the float must be truncated or rounded (up or down). Special instructions usually exist to serve this purpose.
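As a small illustration at the C/C++ level (the compiler maps these onto the corresponding conversion instructions where available): a plain cast truncates toward zero, while std::lrint rounds in the current rounding mode.

#include <cmath>
#include <cstdio>

int main() {
    double x = 2.7;
    int  truncated = (int)x;          // the cast truncates toward zero -> 2
    long rounded   = std::lrint(x);   // rounds in the current rounding mode -> 3 (round-to-nearest by default)
    std::printf("%d %ld\n", truncated, rounded);
}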
If the target architecture is "FPU-less" then the compiler (toolchain) might provide a software implementation of floating point operations using the CPU instructions available. For example, an expression like a = x * y; will be equivalent to a = fmul(x, y); where fmul() is a compiler-provided special function (intrinsic) to do floating point operations without an FPU. Of course this is typically MUCH slower than using a hardware FPU. Floating point arithmetic is not used on such platforms if performance matters; fixed point arithmetic https://en.wikipedia.org/wiki/Fixed-point_arithmetic could be used instead.
Related
Is there any way to multiply two 32-bit floating point numbers without using a 64-bit intermediate value?
Background:
In an IEEE floating point number, 1 bit is devoted to the sign, 8 bits to the exponent, and 23 bits to the mantissa. When multiplying two numbers, the mantissas have to be multiplied separately. When doing this, you end up with a 48-bit number (since the implied most significant 1 makes each significand 24 bits). After obtaining the 48-bit number, that value should be truncated by 25 bits so that only the 23 most significant bits are retained in the result.
My question is: to do this multiplication as is, you need a 64-bit number to store the intermediate result. But I'm assuming there is a way to do this without using a 64-bit number, since 32-bit architectures didn't have the luxury of 64-bit numbers and they were still able to do 32-bit floating point multiplication. So how can you do this without using a 64-bit intermediate number?
From https://isocpp.org/wiki/faq/newbie#floating-point-arith2 :
floating point calculations and comparisons are often performed by
special hardware that often contain special registers, and those
registers often have more bits than a double.
So even on a 32-bit architecture you probably have wider-than-32-bit registers for floating point operations.
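To answer the original question more directly, the 48-bit significand product can also be formed without any 64-bit type by splitting each 24-bit significand into 16-bit halves and accumulating the partial products. The sketch below is illustrative only; rounding and exponent handling are omitted.

#include <cstdint>
#include <cstdio>

// Multiply two 24-bit significands using only 32-bit arithmetic.
// The 48-bit product is returned as two 32-bit words (hi:lo).
void mul24x24(uint32_t a, uint32_t b, uint32_t &hi, uint32_t &lo) {
    uint32_t a_lo = a & 0xFFFF, a_hi = a >> 16;    // a = a_hi * 2^16 + a_lo
    uint32_t b_lo = b & 0xFFFF, b_hi = b >> 16;

    uint32_t p0  = a_lo * b_lo;                    // 2^0 partial product
    uint32_t mid = a_lo * b_hi + a_hi * b_lo;      // 2^16 partial products (cannot overflow for 24-bit inputs)
    uint32_t p3  = a_hi * b_hi;                    // 2^32 partial product

    lo = p0 + (mid << 16);
    uint32_t carry = (lo < p0) ? 1u : 0u;          // detect wraparound of the low word
    hi = p3 + (mid >> 16) + carry;
}

int main() {
    uint32_t hi, lo;
    mul24x24(0x800000u, 0xC00000u, hi, lo);        // 1.0 * 1.5 as raw significands
    std::printf("%04X%08X\n", hi, lo);             // prints 600000000000
}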
I'm writing a binary file reader/writer and have decided that to handle the issue of endianness I will convert all data to "network" (big) endianness on writing and to host endianness on reading. I'm avoiding hton* because I don't want to link with winsock for just those functions.
My main point of confusion comes from how to handle floating point values. For all integral values I have the sized types in <cstdint> (uint32_t, etc.), but from my research no such equivalent exists for floating point types. I'd like to convert all floating point values to a 32-bit representation on writing and convert back to whatever precision is used on the host (32 bits is enough for my application). This way I will know precisely how many bytes to write and read for floating point values, as opposed to using sizeof(float), which could be different on the machine loading the file than on the machine that wrote it.
I was just made aware of the possibility of using frexp to get the mantissa and exponent in integer terms, writing those integers out (with some fixed size), then reading the integers in and reconstructing the floating point value using ldexp. This looks promising, but I am wondering if there is any generally accepted or recommended method for handling float endianness without htonf/ntohf.
I know with almost certainly any platform I'll be targeting anytime soon will have float represented with 32-bits, but I'd like to make the code I write now as compatible as I can for use in future projects.
If you want to be completely cross-platform and standards-compliant, then the frexp/ldexp solution is the best way to go. (Although you might need to consider the highly theoretical case where either the source or the target hardware uses decimal floating point.)
Suppose that one or the other machine did not have a 32-bit floating point representation. Then there is no datatype on that machine bit-compatible with a 32-bit floating point number, regardless of endianness. So there is then no standard way of converting the non-32-bit float to a transmittable 32-bit representation, or of converting the transmitted 32-bit representation to a native non-32-bit floating point number.
You could restrict your scope to machines which have a 32-bit floating point representation, but then you will need to assume that both machines have the same number and order of bits dedicated to sign, exponent and mantissa. That's likely to be the case, since IEEE-754 format is almost universal these days, but C++ does not insist on it and it is at least conceivable that there is a machine which implements 1/8/23-bit floating point numbers with the sign bit at the low-order end instead of the high-order end.
In short, endianness is only one of the possible incompatibilities between binary floating point formats. Reducing every floating point number to two integers, however, avoids having to deal with other incompatibilities (other than radix).
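A minimal sketch of that frexp/ldexp round trip: the float is reduced to two fixed-size integers, which can then be byte-swapped and written with the same machinery used for the other uint32_t fields. The function names are hypothetical; zero survives the round trip, but NaN and infinity would need extra handling.

#include <cmath>
#include <cstdint>
#include <cstdio>

void encode_float(float value, int32_t &mant_out, int32_t &exp_out) {
    int exp;
    double m = std::frexp(value, &exp);        // m in [0.5, 1) (or 0), value = m * 2^exp
    mant_out = (int32_t)(m * 2147483648.0);    // scale the mantissa to a 31-bit integer (exact for float inputs)
    exp_out  = exp;
}

float decode_float(int32_t mant, int32_t exp) {
    double m = (double)mant / 2147483648.0;    // undo the scaling
    return (float)std::ldexp(m, exp);          // m * 2^exp
}

int main() {
    float original = 3.14159274f;
    int32_t m, e;
    encode_float(original, m, e);
    float restored = decode_float(m, e);
    std::printf("%.9g -> %.9g exact=%d\n", original, restored, original == restored);
}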
I'm writing an algorithm to round a floating point number. The input will be a 64-bit IEEE 754 double, very close to X.5, where X is an integer less than 32. The first solution that came to my mind is to use a bit mask to mask off the least significant bits, as they represent very small fractions 2^-n (given that the exponent is not large).
But the problem is: should I do that? Are there any other ways to accomplish the same thing? I feel that using bit operations on floating point values is controversial. Thanks!
The language I'm using is C++, by the way.
Edit:
Thanks, guys, for your comments. I appreciate it! Let's say I have a float number that can be 1.4999999... or 21.50000012.... I want to round it to 1.5 or 21.5. My goal is to round any number to its nearest X.5 form, since that can be stored exactly in an IEEE 754 floating point number.
If your compiler guarantees that you are using IEEE 754 floating-point, I would recommend that you round according to the method delineated in this blog post: add, and then immediately subtract, a large constant so as to send the value into the binade of floating-point numbers where the ULP is 0.5. You won't find any faster method, and it does not involve any bit manipulation.
The appropriate constant to round a number between 0 and 32 to the nearest half-unit for IEEE 754 double precision is 2251799813685248.0 (2^51).
Summary: use x = x + 2251799813685248.0 - 2251799813685248.0;.
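A minimal demonstration of that trick, assuming the default round-to-nearest mode and a compiler that is not reassociating floating point (no fast-math):

#include <cstdio>

double round_to_half(double x) {
    volatile double t = x + 2251799813685248.0;   // 2^51: lands in the binade where the ULP is 0.5
    return t - 2251799813685248.0;                // volatile keeps the compiler from folding the two steps
}

int main() {
    std::printf("%g %g\n", round_to_half(1.4999999), round_to_half(21.50000012));   // 1.5 21.5
}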
You can use any of the functions round(), floor(), ceil(), rint(), nearbyint(), and trunc(). They all round in different ways, and all are standard C99. The only thing you need to do is to link against the standard math library by specifying -lm as a compiler flag.
As for trying to achieve rounding by bit manipulation, I would stay away from that: a) it will be much slower than using the functions above (they generally use hardware facilities where possible), b) it is reinventing the wheel with a lot of potential for bugs, and c) the newer C standards don't like you doing bit manipulation on floating point types: they have the so-called strict aliasing rules that disallow you from simply casting a double* to a uint64_t*. You would either need to do your bit manipulation by casting to an unsigned char* and manipulating the IEEE number byte by byte, or you would have to use memcpy() to copy the bit representation from a double variable into a uint64_t and back again. A lot of hassle for something already available in the form of standardized functions and hardware support.
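As a short illustration of the memcpy approach mentioned above (a sketch of how to inspect the bits legally, not a recommendation to do the rounding this way):

#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
    double d = 1.5;
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);                     // legal; *(uint64_t*)&d violates strict aliasing
    std::printf("0x%016llx\n", (unsigned long long)bits);    // 0x3ff8000000000000
}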
You want to round x to the nearest value of the form d.5. For a general number you write:
round(x+0.5)-0.5
For a number close to d.5, less than 0.25 away, you can use Pascal's offering:
round(2*x)*0.5
If you're looking for a bit trick and are guaranteed to have doubles in the ranges you describe, then you could do something like this (inline as you see fit):
#include <cstdint>
#include <cstring>

void RoundNearestHalf(double &d) {
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);                             // read the bits without violating strict aliasing
    unsigned const maskshift = unsigned((bits >> 52) - 1023);        // unbiased exponent
    uint64_t const setmask   = 0x0008000000000000ull >> maskshift;   // the bit worth one half at this exponent
    uint64_t const clearmask = ~0x0007FFFFFFFFFFFFull >> maskshift;  // keeps everything at or above that bit
    bits |= setmask;                                                 // force the halves place to 1
    bits &= clearmask;                                               // clear all lower bits
    std::memcpy(&d, &bits, sizeof bits);
}
maskshift is the unbiased exponent. For the input range, we know this will be non-negative and no more than 4 (the trick will work for higher values too, but no more than 51). We use this value to make a setmask which sets the 2^-1 (one-half) place in the mantissa, and clearmask which clears all bits in the mantissa of lower value than 2^-1. The result is d rounded to the nearest half.
Note that it would be worth profiling this against other implementations, perhaps those using the standard library, to determine whether or not it's actually faster.
I can't speak about C++ for sure, but in C99 the use of the IEEE 754 standard for floating point is optional rather than required (it is specified in a conditionally normative annex). In C99, if the __STDC_IEC_559__ macro is defined, then the implementation declares that IEC 559 (which is more or less IEEE 754) is used for floating point.
I think it should be pointed out that there are functions to handle many types of rounding for you.
I am trying to convert an 80-bit extended precision floating point number (in a buffer) to double.
The buffer basically contains the content of an x87 register.
This question helped me get started as I wasn't all that familiar with the IEEE standard.
Anyway, I am struggling to find useful info on subnormal (or denormalized) numbers in the 80-bit format.
What I know is that unlike float32 or float64 it doesn't have a hidden bit in the mantissa (no implicit leading 1), so one way to know if a number is normalized is to check whether the highest bit in the mantissa is set. That leaves me with the following question:
From what wikipedia tells me, float32 and float64 indicate a subnormal number with a (biased) exponent of 0 and a non-zero mantissa.
What does that tell me in an 80-bit float?
Can 80-bit floats with a mantissa < 1.0 even have a non-zero exponent?
Alternatively, can 80-bit floats with an exponent of 0 even have a mantissa >= 1.0?
EDIT: I guess the question boils down to:
Can I expect the FPU to sanitize exponent and highest mantissa bit in x87 registers?
If not, what kind of number should the conversion result in? Should I ignore the exponent altogether in that case? Or is it qNaN?
EDIT:
I read the FPU section in the Intel manual (Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 1: Basic Architecture) which was less scary than I had feared. As it turns out the following values are not defined:
exponent == 0 + mantissa with the highest bit set
exponent != 0 + mantissa without the highest bit set
It doesn't mention if these values can appear in the wild, nor if they are internally converted.
So I actually dusted off Ollydbg and manually set bits in the x87 registers.
I crafted ST(0) to contain all bits set in the exponent and a mantissa of 0. Then I made it execute
FSTP QWORD [ESP]
FLD QWORD [ESP]
The value stored at [ESP] was converted to a signaling NaN.
After the FLD, ST(0) contained a quiet NaN.
I guess that answers my question. I accepted J-16 SDiZ's solution because it's the most straightforward one (although it doesn't explicitly explain some of the finer details).
Anyway, case solved. Thanks, everybody.
Try the SoftFloat library; it has floatx80_to_float32, floatx80_to_float64 and floatx80_to_float128. Detect the native format and act accordingly.
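If you prefer not to pull in SoftFloat, here is a rough hand-rolled sketch (my own, not part of the answer above) that decodes a normal or subnormal 80-bit value from a 10-byte buffer into a double via ldexp. NaNs, infinities and the unsupported encodings discussed below are not handled, and it assumes a little-endian host such as x86.

#include <cstdint>
#include <cstring>
#include <cmath>
#include <cstdio>

double extended_to_double(const unsigned char buf[10]) {
    uint64_t mantissa;
    std::memcpy(&mantissa, buf, 8);            // 64-bit significand with explicit integer bit (little-endian host)
    uint16_t se;
    std::memcpy(&se, buf + 8, 2);              // sign bit + 15-bit biased exponent
    int sign = (se & 0x8000) ? -1 : 1;
    int exponent = se & 0x7FFF;

    if (exponent == 0 && mantissa == 0) return sign * 0.0;
    // Value = significand * 2^(exponent - 16383 - 63); exponent 0 (subnormal)
    // uses the same scale as exponent 1 in this format.
    int e = (exponent == 0 ? 1 : exponent) - 16383 - 63;
    return sign * std::ldexp((double)mantissa, e);
}

int main() {
    // 1.5 in 80-bit extended: exponent 0x3FFF, significand 0xC000000000000000.
    unsigned char buf[10] = {0,0,0,0,0,0,0,0xC0, 0xFF,0x3F};
    std::printf("%g\n", extended_to_double(buf));   // 1.5
}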
The difficulty of finding information on subnormal 80-bit numbers might be because the 8087 does not make use of any special denormalization for them. I found this on MSDN's page on Type float (C):
The values listed in this table apply only to normalized
floating-point numbers; denormalized floating-point numbers have a
smaller minimum value. Note that numbers retained in 80x87 registers
are always represented in 80-bit normalized form; numbers can only be
represented in denormalized form when stored in 32-bit or 64-bit
floating-point variables (variables of type float and type long).
Edit
The above might be true for how Microsoft makes use of the FPU's registers. I found another source that indicates this:
FPU Data types:
The 80x87 FPU generally stores values in a normalized format. When a
floating point number is normalized, the H.O. bit is always one. In
the 32 and 64 bit floating point formats, the 80x87 does not actually
store this bit, the 80x87 always assumes that it is one. Therefore, 32
and 64 bit floating point numbers are always normalized. In the
extended precision 80 bit floating point format, the 80x87 does not
assume that the H.O. bit of the mantissa is one, the H.O. bit of the
number appears as part of the string of bits.
Normalized values provide the greatest precision for a given number of
bits. However, there are a large number of non-normalized values which
we can represent with the 80 bit format. These values are very close
to zero and represent the set of values whose mantissa H.O. bit is not
zero. The 80x87 FPUs support a special form of 80 bit known as
denormalized values.
A floating point type represents a number by storing its significant digits and its exponent separately, in separate binary fields, so that it fits in 16, 32, 64 or 128 bits.
A fixed point type stores numbers in two words: one representing the integer part, another representing the part past the radix point, in negative powers of two: 2^-1, 2^-2, 2^-3, etc.
Floats are better because they have a wider range in an exponent sense, but not if one wants to store numbers with more precision for a certain range, for example only using integers from -16 to 16, thus using more bits to hold digits past the radix point.
In terms of performance, which one performs best, or are there cases where one is faster than the other?
In video game programming, does everybody use floating point because the FPU makes it faster, or because the performance drop is just negligible, or do they make their own fixed-point type?
Why isn't there any fixed-point type in C/C++?
That definition covers a very limited subset of fixed point implementations.
It would be more correct to say that in fixed point only the mantissa is stored and the exponent is a constant determined a priori. There is no requirement for the binary point to fall inside the mantissa, and definitely no requirement that it fall on a word boundary. For example, all of the following are "fixed point":
64-bit mantissa, scaled by 2^-32 (this fits the definition listed in the question)
64-bit mantissa, scaled by 2^-33 (now the integer and fractional parts cannot be separated by an octet boundary)
32-bit mantissa, scaled by 2^4 (now there is no fractional part)
32-bit mantissa, scaled by 2^-40 (now there is no integer part)
GPUs tend to use fixed point with no integer part (typically a 32-bit mantissa scaled by 2^-32). Therefore APIs such as OpenGL and Direct3D often use floating-point types which are capable of holding these values. However, manipulating the integer mantissa is often more efficient, so these APIs allow specifying coordinates (in texture space, color space, etc.) this way as well.
As for your claim that C++ doesn't have a fixed point type, I disagree. All integer types in C++ are fixed point types. The exponent is often assumed to be zero, but this isn't required and I have quite a bit of fixed-point DSP code implemented in C++ this way.
At the code level, fixed-point arithmetic is simply integer arithmetic with an implied denominator.
For many simple arithmetic operations, fixed-point and integer operations are essentially the same. However, there are some operations for which the intermediate values must be represented with a higher number of bits and then rounded off. For example, to multiply two 16-bit fixed-point numbers, the result must be temporarily stored in 32 bits before renormalizing (or saturating) back to 16-bit fixed point, as in the sketch below.
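A small sketch of that 16-bit example, using a Q8.8 layout (8 integer bits, 8 fraction bits) purely as an illustrative choice:

#include <cstdint>
#include <cstdio>

typedef int16_t q8_8;                          // fixed point with 8 fractional bits

q8_8 q8_8_mul(q8_8 a, q8_8 b) {
    int32_t wide = (int32_t)a * (int32_t)b;    // 32-bit intermediate product (Q16.16)
    return (q8_8)(wide >> 8);                  // renormalize back to Q8.8 (truncating)
}

int main() {
    q8_8 x = (q8_8)(1.5  * 256);               // 1.5 in Q8.8
    q8_8 y = (q8_8)(2.25 * 256);               // 2.25 in Q8.8
    std::printf("%f\n", q8_8_mul(x, y) / 256.0);   // 3.375
}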
When the software does not take advantage of vectorization (such as CPU-based SIMD or GPGPU), integer and fixed-point arithmetic is faster than the FPU. When vectorization is used, the efficiency of the vectorization matters a lot more, such that the performance difference between fixed point and floating point is moot.
Some architectures provide hardware implementations of certain math functions, such as sin, cos, atan, sqrt, for floating-point types only. Some architectures do not provide any hardware implementation at all. In both cases, specialized math software libraries may provide those functions using only integer or fixed-point arithmetic. Often, such libraries will provide multiple levels of precision, for example, answers which are only accurate up to N bits of precision, which is less than the full precision of the representation. The limited-precision versions may be faster than the highest-precision version.
Fixed point is widely used in DSP and embedded systems, where the target processor often has no FPU and fixed point can be implemented reasonably efficiently using an integer ALU.
In terms of performance, that is likely to vary depending on the target architecture and application. Obviously if there is no FPU, then fixed point will be considerably faster. When you have an FPU it will depend on the application too. For example, performing some functions such as sqrt() or log() will be much faster when directly supported in the instruction set rather than implemented algorithmically.
There is no built-in fixed point type in C or C++, I imagine, because they (or at least C) were envisaged as systems-level languages and the need for fixed point is somewhat domain-specific, and also perhaps because on a general purpose processor there is typically no direct hardware support for fixed point.
In C++, defining a fixed-point data type class with suitable operator overloads and associated math functions can easily overcome this shortcoming. However, there are good and bad solutions to this problem. A good example can be found here: http://www.drdobbs.com/cpp/207000448. The link to the code in that article is broken, but I tracked it down to ftp://66.77.27.238/sourcecode/ddj/2008/0804.zip
You need to be careful when discussing "precision" in this context.
For the same number of bits in representation the maximum fixed point value has more significant bits than any floating point value (because the floating point format has to give some bits away to the exponent), but the minimum fixed point value has fewer than any non-denormalized floating point value (because the fixed point value wastes most of its mantissa in leading zeros).
Also depending on the way you divide the fixed point number up, the floating point value may be able to represent smaller numbers meaning that it has a more precise representation of "tiny but non-zero".
And so on.
The difference between floating point and integer math depends on the CPU you have in mind. On Intel chips the difference is not big in clock ticks. Integer math is still faster because there are multiple integer ALUs that can work in parallel. Compilers are also smart enough to use special address calculation instructions to optimize an add/multiply into a single instruction. Conversion counts as an operation too, so just choose your type and stick with it.
In C++ you can build your own type for fixed point math. You just define a struct with one int, define the appropriate operator overloads, and make them do what they normally do plus a shift to put the binary point back in the right position, as sketched below.
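A bare-bones sketch of such a type, using a Q16.16 layout as an arbitrary illustrative choice:

#include <cstdint>
#include <cstdio>

struct Fixed {
    int32_t raw;                                              // value * 2^16
    static Fixed fromDouble(double d) { return { (int32_t)(d * 65536.0) }; }
    double toDouble() const { return raw / 65536.0; }

    Fixed operator+(Fixed o) const { return { raw + o.raw }; }
    Fixed operator-(Fixed o) const { return { raw - o.raw }; }
    Fixed operator*(Fixed o) const {                          // widen, then shift the point back
        return { (int32_t)(((int64_t)raw * o.raw) >> 16) };
    }
};

int main() {
    Fixed a = Fixed::fromDouble(3.25), b = Fixed::fromDouble(0.5);
    std::printf("%f\n", (a * b + a).toDouble());              // 3.25*0.5 + 3.25 = 4.875
}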
You don't use float in games because it is faster or slower; you use it because it is easier to implement algorithms in floating point than in fixed point. You are assuming the reason has to do with computing speed, and that is not the reason; it has to do with ease of programming.
For example, you may define the width of the screen/viewport as going from 0.0 to 1.0, the height of the screen from 0.0 to 1.0, the depth of the world from 0.0 to 1.0, and so on. Matrix math, etc., makes things really easy to implement. Do all of the math that way up to the point where you need to compute real pixels on a real screen size, say 800x400. Project the ray from the eye to the point on the object in the world and compute where it pierces the screen, using the 0 to 1 math, then multiply x by 800 and y by 400 and place that pixel.
Floating point does not store the exponent and mantissa in separate words, and the mantissa is a goofy size: whatever is left over after the exponent and sign, like 23 bits, not 16 or 32 or 64 bits.
Floating point math at its core uses fixed point logic, with extra logic and extra steps required. By definition, compared apples to apples, fixed point math is cheaper because you don't have to manipulate the data on the way into the ALU and don't have to manipulate the data on the way out (normalize). When you add in IEEE and all of its garbage, that adds even more logic, more clock cycles, etc. (properly signed infinity, quiet and signaling NaNs, different results for the same operation if there is an exception handler enabled). As someone pointed out in a comment, on a real system where you can do fixed and float in parallel, you can take advantage of some or all of the processors and recover some clocks that way. With both float and fixed, the clock rate can be increased by using vast quantities of chip real estate; fixed will remain cheaper, but float can approach fixed speeds using these kinds of tricks, as well as parallel operation.
One issue not covered in the answers is power consumption. Though it depends highly on the specific hardware architecture, usually an FPU consumes much more energy than a CPU's ALU, so if you target mobile applications where power consumption is important, it is worth considering a fixed point implementation of the algorithm.
It depends on what you're working on. If you're using fixed point then you lose precision; you have to select the number of places after the radix point (which may not always be good enough). In floating point you don't need to worry about this, as the precision offered is nearly always good enough for the task in hand - it uses a standard-form (scientific notation) representation of the number.
The pros and cons come down to speed and resources. On modern 32-bit and 64-bit platforms there is really no need to use fixed point. Most systems come with built-in FPUs that are hardwired and optimised for floating point operations. Furthermore, most modern CPUs come with SIMD instruction sets, which help optimise vector-based methods via vectorisation and unrolling. So fixed point only comes with a downside.
On embedded systems and small microcontrollers (8-bit and 16-bit) you may have neither an FPU nor extended instruction sets, in which case you may be forced to use fixed point methods or limited floating point routines that are not very fast. So in these circumstances fixed point will be a better - or even your only - choice.