what does 754 mean in IEEE 754 floating point number? - ieee-754

There are lots of IEEE standards, and almost all languages guarantee an implementation of IEEE 754 binary floating-point numbers.

I think it's just the standard's number in the IEEE numbering scheme, the same way IRC is specified by RFC 1459.

Related

Why dealing with floating point endianness conversion in C/C++ is difficult?

Swapping a bunch of bytes seems an easy job, but I always prefer to use libraries when they are available. So I stumbled across Boost.Endian, and I was surprised to discover that it deals with integers but not floating-point numbers. What is so special about floating-point endianness? (I read the Boost.Endian explanation, but it doesn't satisfy my curiosity.)
In particular I would like to understand this: if Boost.Endian already provides functions for swapping 4- and 8-byte integers, why not simply apply those functions to floats and doubles?
So I stumbled across Boost.Endian, and I was surprised to discover that it deals with integers but not floating-point numbers. What is so special about floating-point endianness?
The documentation explains why:
An attempt was made to support four-byte floats and eight-byte doubles, limited to IEEE 754 (also known as ISO/IEC/IEEE 60559) floating point and further limited to systems where floating point endianness does not differ from integer endianness. Even with those limitations, support for floating point types was not reliable and was removed. For example, simply reversing the endianness of a floating point number can result in a signaling-NAN. For all practical purposes, binary serialization and endianness for integers are one and the same problem. That is not true for floating point numbers, so binary serialization interfaces and formats for floating point does not fit well in an endian-based library.
So, the problems are not related to the language, but to the underlying floating point format that the system uses.
The signaling NaN mentioned there as an example can cause the process to abort on POSIX systems. That is typically an undesirable outcome of an endianness conversion.

How can one test floating point representation for file storage? [duplicate]

This question already has answers here:
How to check if C++ compiler uses IEEE 754 floating point standard
(2 answers)
Closed 7 years ago.
I have scientific data dumped into files. At the moment I've just dumped them with the same representation as they are in memory. I have documented that they are IEEE754 but I would love to have this asserted in the code so that if it gets ported to a weird architecture and separated from my documentation (research codes get passed around) it errors on compilation. At the moment I have
static_assert(sizeof(double) == 8, "message");
Is there a way to test IEEE754? And can that be static asserted?
In C, check if this is defined:
__STDC_IEC_559__
In C++, check if this is true:
std::numeric_limits<double>::is_iec559
IEC 559, short for International Electrotechnical Commission standard 559, is the same as IEEE 754.
The answer for C++ is here: How to check if C++ compiler uses IEEE 754 floating point standard
For C, Annex F of the current C Standard specifies that the preprocessor constant __STDC_IEC_559__ will be pre-defined to the value 1 if the platform conforms to the IEEE 754 specification for floating point arithmetic. But older C compilers may not pre-define it even if floats are indeed IEEE 754.
Yet this is not enough for either language: this conformity only guarantees the semantics of IEEE 754, not the binary representation, and since you are dumping the binary representation to a file, you would also need to handle the endianness issue. It becomes even more complicated as endianness for integers may differ from the endianness for floats.
In the end, it is much better to use a textual representation to store your floating point values if you wish to achieve portability between various platforms, present and future. Of course, you will need to use the maximum precision for this representation.
Another solution is provided by http://hdfgroup.org, which handles this very problem efficiently for large quantities of data.
You could use std::numeric_limits<double>'s is_iec559 - see here

Why is IEEE-754 Floating Point not exchangable between platforms?

It has been asserted that (even accounting for byte endian-ness) IEEE754 floating point is not guaranteed to be exchangeable between platforms.
So:
Why, theoretically, is IEEE floating point not exchangeable between platforms?
Are any of these concerns valid for modern hardware platforms (e.g. i686, x64, arm)?
If the concerns are valid, can you please demonstrate an example where this is the case (C or C++ is preferred)?
Motivation: Several GPS manufacturers exchange their binary formats for (e.g.) latitude, longitude and raw data in "IEEE-754 compliant floating point values". So, I don't have control to choose a text format or other "portable" format. Hence, my question concerns when the differences may or may not occur.
IEEE 754 clause 3.4 specifies binary interchange format encodings. Given a floating-point format (below), the interchange format puts the sign bit in the most significant bit, biased exponent bits in the next most significant bits, and the significand encoding in the least significant bits. A mapping from bits to bytes is not specified, so a system could use little-endian, big-endian, or other ordering.
Clause 3.6 specifies format parameters for various format widths, including 64-bit binary, for which there is one sign bit, 11 exponent field bits, and 52 significand field bits. This clause also specifies the exponent bias.
Clauses 3.3 and 3.4 specify the data represented by this format.
So, to interchange IEEE-754 floating-point data, it seems systems need only to agree on two things: which format to use (e.g., 64-bit binary) and how to get the bits back and forth (e.g., how to map bits to bytes for writing to a file or a network message).

Floating point Endianness?

I'm writing a client and a server for a real-time offshore simulator, and, as I have to send a lot of data through a socket, I'm using binary data to maximize the amount of data I can send. I already know about integer endianness, and how to use htonl and ntohl to circumvent endianness issues, but my application, as almost all simulation software, deals with a lot of floats.
My question is: Is there some issue of endianness when dealing with binary formats of floating point numbers? I know that all the machines where my code will run use IEEE implementation of floating points, but is there some endianness issue when dealing with floats?
Since I only have access to machines with the same endian, I cannot test this by myself. So, I'll be glad if someone can help me with this.
According to Wikipedia,
Floating-point and endianness
On some machines, while integers were represented in little-endian form, floating point numbers were represented in big-endian form. Because there are many floating point formats, and a lack of a standard "network" representation, no standard for transferring floating point values has been made. This means that floating point data written on one machine may not be readable on another, and this is the case even if both use IEEE 754 floating point arithmetic since the endianness of the memory representation is not part of the IEEE specification.
Yes, floating point can be endianness dependent. See Converting float values from big endian to little endian for more info, and be sure to read the comments.
EDIT: THE FOLLOWING IS WRONG ANSWER (leaving so that people know that this 'somewhat popular' view is wrong, please read the accepted answer and comments on this answer)
--WRONG ANSWER BEGIN--
There is no such thing as floating-point endianness or integer endianness, etc. It's just binary representation endianness.
Either a machine is little-endian, or it's big-endian. That means it will represent the MSb/MSB in the binary representation of any datatype as either its first or last bit/byte.
That's it.
--WRONG ANSWER END---

What is the binary format of a floating point number used by C++ on Intel based systems?

I am interested to learn about the binary format for a single or a double type used by C++ on Intel based systems.
I have avoided the use of floating point numbers in cases where the data needs to potentially be read or written by another system (i.e. files or networking). I do realise that I could use fixed point numbers instead, and that fixed point is more accurate, but I am interested to learn about the floating point format.
Wikipedia has a reasonable summary - see http://en.wikipedia.org/wiki/IEEE_754.
But if you want to transfer numbers between systems you should avoid doing it in binary format. Either use middleware like CORBA (only joking, folks), Tibco etc., or fall back on that old favourite, textual representation.
This should get you started : http://docs.sun.com/source/806-3568/ncg_goldberg.html. (:
Floating-point format is determined by the processor, not the language or compiler. These days almost all processors (including all Intel desktop machines) either have no floating-point unit or have one that complies with IEEE 754. You get two or three different sizes (Intel with SSE offers 32, 64, and 80 bits) and each one has a sign bit, an exponent, and a significand. The number represented is usually given by this formula:
sign * (2**(E-k)) * (1 + S / (2**k'))
where k' is the number of bits in the significand and k is a constant around the middle range of exponents. There are special representations for zero (plus and minus zero) as well as infinities and other "not a number" (NaN) values.
There are definite quirks; for example, the fraction 1/10 cannot be represented exactly as a binary IEEE standard floating-point number. For this reason the IEEE standard also provides for a decimal representation, but this is used primarily by handheld calculators and not by general-purpose computers.
Recommended reading: David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic
As other posters have noted, there is plenty of information about on the IEEE format used by every modern processor, but that is not where your problems will arise.
You can rely on any modern system using IEEE format, but you will need to watch for byte ordering. Look up "endianness" on Wikipedia (or somewhere else). Intel systems are little-endian, a lot of RISC processors are big-endian. Swapping between the two is trivial, but you need to know what type you have.
Traditionally, people use big-endian formats for transmission. Sometimes people include a header indicating the byte order they are using.
If you want absolute portability, the simplest thing is to use a text representation. However, that can get pretty verbose for floating point numbers if you want to capture the full precision, e.g. 0.1234567890123456e+123.
Intel's representation is IEEE 754 compliant.
You can find the details at http://download.intel.com/technology/itj/q41999/pdf/ia64fpbf.pdf .
Note that decimal floating-point constants may convert to different floating-point binary values on different systems (even with different compilers on the same system). The difference would be slight -- maybe only as large as 2^-54 for a double -- but is a difference nonetheless.
Use hexadecimal floating-point constants if you want to guarantee the same floating-point binary value on any platform.