C/C++ - converting a float to byte array such that bytes preserve ordering

Is there a way to convert a float to an unsigned char fbytes[8] such that:
If float value f1 < f2, then fbytes1[i] should NEVER be greater than fbytes2[i] for 0 <= i <= 7?
The intention behind this is to serialize and store a floating point number and still be able to do comparisons by comparing byte by byte. E.g. even if the float is stored as raw bytes, I can sort rows by simple byte comparison and still preserve order.

If you ignore NaNs (which always compare unequal to everything, including themselves), this is simple. Otherwise, it is impossible.
First, convert to a big-endian byte array directly.
The sign bit is the high bit of the first byte. The following bits are the exponent, and then the mantissa.
If the sign bit is not set, set it so that all positive numbers sort after all negative numbers.
Otherwise, apply the ~ operator to all bytes (or maybe do this while it's all packed in a single integer - why do you need bytes anyway?). This will reverse the sort order within the negative numbers, as well as moving them below the positive numbers.
You may also want to coalesce zeros (if the only bit set anywhere is the sign bit, clear it).
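As a rough sketch of that recipe (assuming an IEEE-754 double occupying 8 bytes, NaNs ignored; the function name encode_ordered is just for illustration), the whole transform can be done on a single 64-bit integer and then written out big-endian:

#include <cstdint>
#include <cstring>

// Emit 8 bytes whose memcmp order matches the numeric order of the doubles.
void encode_ordered(double d, unsigned char out[8]) {
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);            // raw IEEE-754 bit pattern

    if (bits & 0x8000000000000000ULL)
        bits = ~bits;                               // negative: invert everything
    else
        bits |= 0x8000000000000000ULL;              // positive: just set the sign bit

    for (int i = 0; i < 8; ++i)                     // big-endian: most significant byte first
        out[i] = static_cast<unsigned char>(bits >> (56 - 8 * i));
}

Note that +0.0 and -0.0 still get different encodings here; coalesce them before encoding if you need byte-wise equality for equal values.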

If your floating-point implementation uses IEEE-754 (most do), you can take advantage of a subtle design point: if you treat the bits of an IEEE floating-point value as a sign-magnitude representation of an integral value, the usual ordering comparisons on that value will give exactly the same order as the corresponding comparisons on the floating-point value, and provide a consistent ordering when NaNs are present (positive NaNs are greater than infinity and negative NaNs are less than -infinity). So check the sign bits: if they're different, the positive one is greater; if they're the same, compare the rest as if they were an integral value.
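A minimal sketch of that comparison, assuming 32-bit IEEE-754 floats (the name compare_ieee and the -1/0/+1 return convention are just for this example):

#include <cstdint>
#include <cstring>

// Order two floats by treating their bit patterns as sign-magnitude integers.
// Gives the usual ordering for ordinary values and a consistent total order
// for NaNs (note: -0.0 sorts below +0.0 here, unlike operator<).
int compare_ieee(float a, float b) {
    std::uint32_t ua, ub;
    std::memcpy(&ua, &a, sizeof ua);
    std::memcpy(&ub, &b, sizeof ub);

    bool neg_a = (ua & 0x80000000u) != 0;
    bool neg_b = (ub & 0x80000000u) != 0;
    if (neg_a != neg_b)                       // different signs: the positive one is greater
        return neg_a ? -1 : 1;

    std::uint32_t ma = ua & 0x7FFFFFFFu;      // magnitude (exponent + mantissa)
    std::uint32_t mb = ub & 0x7FFFFFFFu;
    if (ma == mb) return 0;
    if (!neg_a) return ma < mb ? -1 : 1;      // positive: larger magnitude is greater
    return ma < mb ? 1 : -1;                  // negative: larger magnitude is smaller
}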

Can the bitwise AND of some positive integers be negative?
Let's say I have the numbers 1, 2, 3, ..., N. For example, take N = 7: I want to find a subarray whose bitwise AND gives a negative result.
Taking 4, 5, 6, 7 gives me 4 (100), but what subarray (for any N) could give a negative result?
(In most commonly used encodings) the sign is stored in the first bit.
For the first bit to be 1 after an AND (meaning a negative number), it has to be 1 in both operands beforehand. So only ANDing two negative numbers can produce a negative number.
This is true both for integers and for floats (IEEE 754), i.e. the most common int and float implementations, but basically anything that can store negative numbers has to store the sign somewhere... and as positive numbers are the "default", it's minus that gets marked as 1.
Can the bitwise AND of some positive integers be negative?
No, it can't.
A negative number has the top bit set. All positive numbers have the top bit unset. A zero bit AND a zero bit results in a zero bit. This applies to any number of bitwise AND operations performed on any number of positive integers.
And it also applies if you include zero.
In fact, if you perform a bitwise AND on a collection of integers, the result will only be negative if all of the integers are less than zero.
Note that the above is for two's complement binary representations. All modern mainstream computer instruction sets use two's complement to represent integers.
With one's complement you actually have two zeros, +0 and -0. But if you treat -0 as negative then the same logic applies.
Bitwise AND'ing of floating point number representations ... and treating the results as numbers ... doesn't make a lot of sense. So I haven't considered that.
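A quick demonstration of the point, assuming the usual two's complement int (main is just a throwaway test):

#include <cassert>
#include <cstdio>

int main() {
    int acc = ~0;                         // all bits set (-1 in two's complement)
    for (int i = 4; i <= 7; ++i)          // the 4, 5, 6, 7 example from the question
        acc &= i;
    std::printf("4 & 5 & 6 & 7 = %d\n", acc);   // prints 4

    // Every non-negative operand has a 0 sign bit, so the result's sign bit is 0 too.
    assert(acc >= 0);
}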

Why are comparison operators so quick?

According to the Alexandrescu article https://www.facebook.com/notes/facebook-engineering/three-optimization-tips-for-c/10151361643253920/
The speed hierarchy of operations is:
comparisons
(u)int add, subtract, bitops, shift
floating point add, sub (separate unit!)
indexed array access (caveat: cache effects)
(u)int32 mul
FP mul
FP division, remainder
(u)int division, remainder
I don't understand why comparisons are so quick.
How is it possible to quickly compare two large numbers? What is the algorithm?
These comparisons are quick because they are easy. The article in question is talking about comparisons of numeric values. In general, these fall into two types: comparing one integer with another integer in the same format, and comparing two floating point numbers.
In the case of integer comparison, the integer format used will be such that it's possible to compare two values in a very straightforward way, looking at specific bits in a certain order (sign bits, high-order bits, then low-order bits).
In the case of floating point numbers, the vast majority of values will be represented in a normalised form, so you can just compare, in order:
the corresponding sign bits, then if they are equal
the corresponding exponents, then if they are equal
the corresponding mantissas (high to low order bits, bitwise).
These sorts of comparison are so common and so important in general processors that they will be optimised for speed.
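A small illustration of why this works out for floating point, assuming 32-bit IEEE-754 floats (bits_of is just a helper for this sketch): for two positive, non-NaN values, comparing sign/exponent/mantissa in that order is the same as comparing the raw bit patterns as unsigned integers, which is exactly the kind of comparison hardware does cheaply.

#include <cassert>
#include <cstdint>
#include <cstring>

static std::uint32_t bits_of(float f) {      // raw bit pattern of a float
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}

int main() {
    float a = 1.5f, b = 2.25f;
    // Comparing as floats and comparing the bit patterns agree (both positive, non-NaN).
    assert((a < b) == (bits_of(a) < bits_of(b)));
}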

What do the C and C++ standards say about bit-level integer representation and manipulation?

I know the C and C++ standards don't dictate a particular representation for numbers (could be two's complement, sign-and-magnitude, etc.). But I don't know the standards well enough (and couldn't find if it's stated) to know if there are any particular restrictions/guarantees/reserved representations made when working with bits. Particularly:
If all the bits in an integer type are zero, does the integer as a whole represent zero?
If any bit in an integer type is one, does the integer as a whole represent non-zero? (if this is a "yes" then some representations like sign-and-magnitude would be additionally restricted)
Is there a guaranteed way to check if any bit is not set?
Is there a guaranteed way to check if any bit is set? (#3 and #4 kind of depend on #1 and #2, because I know how to set, for example, the 5th bit (see #5) in some variable x, and I'd like to check a variable y to see if its 5th bit is 1. I would like to know if if (x & y) will work, because as I understand it, this relies on the value of the representation and not on whether or not that bit is actually 1 or 0.)
Is there a guaranteed way to set the left-most and/or right-most bits? (At least a simpler way than taking a char c with all bits true (set by c = c | ~c) and doing c = c << (CHAR_BIT - 1) for setting the high bit and c = c ^ (c << 1) for the low bit, assuming I'm not making any assumptions I shouldn't be, given these questions.)
If the answer to #1 is "no" how could one iterate over the bits in an integer type and check if each one was a 1 or a 0?
I guess my overall question is: are there any restrictions/guarantees/reserved representations made by the C and C++ standards regarding bits and integers, despite the fact that an integer's representation is not mandated (and if the C and C++ standards differ in this regard, what's their difference)?
I came up with these questions while doing my homework which required me to do some bit manipulating (note these aren't questions from my homework, these are much more "abstract").
Edit: As to what I refer to as "bits," I mean "value forming" bits and am not including "padding" bits.
(1) If all the bits in an integer type are zero, does the integer as a whole represent zero?
Yes, the bit pattern consisting of all zeroes always represents 0:
The representations of integral types shall define values by use of a pure binary numeration system.49 [§3.9.1/7]
49 A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral power of 2, except perhaps for the bit with the highest position.
(2) If any bit in an integer type is one, does the integer as a whole represent non-zero? (if this is a "yes" then some representations like sign-and-magnitude would be additionally restricted)
No. In fact, signed magnitude is specifically allowed:
[ Example: this International Standard permits 2’s complement, 1’s complement and signed magnitude representations for integral types. —end
example ] [§3.9.1/7]
(3) Is there a guaranteed way to check if any bit is not set?
I believe the answer to this is "no," if you consider signed types. It is equivalent to equality testing with a bit pattern of all ones, which is only possible if you have a way to produce a signed number with bit pattern of all ones. For an unsigned number this representation is guaranteed, but casting from unsigned to signed is undefined if the number is unrepresentable:
If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined. [§4.7/3]
(4) Is there a guaranteed way to check if any bit is set?
I don't think so, because signed magnitude is allowed—0 would compare equal to −0. But it should be possible with unsigned numbers.
(5) Is there a guaranteed way to set the left-most and/or right-most bits?
Again, I believe the answer is "yes" for unsigned numbers, but "no" for signed numbers. Shifts are undefined for negative signed numbers:
Otherwise, if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined. [§5.8/2]
You use the term "all bits" repeatedly, but you do not clarify what "all bits" you are referring to. Object representation of integer types in C/C++ might include value-forming bits and padding bits. The only integer type that is guaranteed not to have padding bits is [signed/unsigned] char.
The language always guaranteed that if all value-forming bits are zero, then the represented integer value is also zero.
As for padding bits, things are/were a bit more complicated. The original specification of the C language (C89/90 as well as the original C99) did not guarantee that setting all object bits to zero produced a valid integer representation. It could have produced an invalid trap representation. I.e. in the original C (and even in C99 at first), using memset(..., 0, ...) on integer types did not guarantee that the objects would receive valid zero values (with the exception of [signed/unsigned] char). This was changed in later specifications, namely in one of the Technical Corrigenda to C99. Now an all-zero bit pattern in an integer object (and that involves all bits, including padding ones) is required to represent a valid zero value.
I.e. in modern C it is legal to use memset(..., 0, ...) to set any integer object to zero, but this guarantee only arrived after the original C99.
You already got some answers about the representation of integer values. There is exactly one way that is guaranteed to give you all the individual bits of any object that is represented in memory: view it as an array of unsigned char. This is the only integral type that has no padding bits and is guaranteed to have no trap representation. So casting a pointer T* to your object to unsigned char* will always work, as long as you only access the first sizeof(T) bytes. By that you can inspect and set all bytes (and thus bits) to your liking.
If you are interested in more details, here I have written something up about the anatomy of integer types in C. C++ might differ a bit from that; in particular, type punning through a union as described there doesn't seem to be well defined in C++.
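A minimal example of the unsigned char view described above (the output depends on the platform's byte order):

#include <cstddef>
#include <cstdio>

int main() {
    int value = 0x12345678;
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&value);

    for (std::size_t i = 0; i < sizeof value; ++i)   // every byte of the object representation
        std::printf("byte %zu: 0x%02X\n", i, static_cast<unsigned>(p[i]));
}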
Q: If any bit in an integer type is one, does the integer as a whole represent non-zero? (if this is a "yes" then some representations like sign-and-magnitude would be additionally restricted)
No. The standards for C and C++ don't rule out signed magnitude or one's complement, both of which have +0 and -0. +0 and -0 do have to compare equal, but they do not have to have the same representation.
Good luck finding a machine nowadays that uses signed magnitude or one's complement.
If you want your brain to explode, consider this: if you interpret an int or long or long long as an array of unsigned char (which is the most reasonable thing to do if you want to see all the bits), you know that the order of bytes is not defined, for example "big-endian" vs. "little-endian". We all (hopefully) know that.
But it is worse: each bit of an int could be stored in any of the bits of the array of char. So there are 32! ways in which the bits of a 32-bit integer could be mapped to an array of four 8-bit unsigned chars by a truly bizarre implementation. Fortunately, I haven't encountered more than two ways myself (and I know of one more ordering in a real computer).
If all the bits in an integer type are zero, does the integer as a whole represent zero?
Edit: since you have now clarified that you are not concerned with the padding bits, the answer to this is actually "yes". But I leave the original:
Not necessarily, it could be a trap representation. See C99 6.2.6.1:
For unsigned integer types other than unsigned char, the bits of the object representation shall be divided into two groups: value bits and padding bits (there need not be any of the latter)
The presence of padding bits allows for the possibility that all 0 is a trap representation. (As noted by Keith Thompson in the comment below, the more recent C11 makes explicit that such a representation is not a trap representation).
and
The values of any padding bits are unspecified
and
44) Some combinations of padding bits might generate trap representations
If you restrict the question to value and sign bits, the answer is yes, due to 6.2.6.2:
If there are N value bits, each bit shall represent a different power of 2 between 1 and 2^(N−1), so that objects of that type shall be capable of representing values from 0 to 2^N − 1 using a pure binary representation; this shall be known as the value representation.
and
If the sign bit is zero, it shall not affect the resulting value.
If any bit in an integer type is one, does the integer as a whole represent non-zero? (if this is a "yes" then some representations like sign-and-magnitude would be additionally restricted)
Not necessarily, and in fact sign-and-magnitude is explicitly supported in 6.2.6.2.
Is there a guaranteed way to check if any bit is not set?
If you do not care about padding and sign bits, you could just compare to 0, but this would not work with a 1's complement representation (which is allowed) seeing as all bits 0 and all bits 1 both represent the value 0.
Otherwise: you can read the value of each byte via an unsigned char *, and compare the result to 0:
Values stored in unsigned bit-fields and objects of type unsigned char shall be represented using a pure binary notation
If you want to check a specific value bit, you could construct a suitable bitmask using (1u << n), but this will not necessarily let you inspect the sign bit.
Is there a guaranteed way to check if any bit is set?
The answer is essentially the same as to the previous question.
Is there a guaranteed way to set the left-most and/or right-most bits?
Do you mean left-most value bit? You could count the bits in INT_MAX or UINT_MAX or equivalent depending on the type, and use that to construct a value (via 1 << n) with which to OR the original value.
If the answer to #1 is "no" how could one iterate over the bits in an integer type and check if each one was a 1 or a 0?
You can do so using a bitmask which you left shift repeatedly, but you can check only the value bits this way and not the sign bit.
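A sketch of the mask-based approach from this answer, applied to unsigned int so that sign and padding bits don't interfere (the variable names are just for illustration):

#include <climits>
#include <cstdio>

int main() {
    unsigned value = 42u;

    int value_bits = 0;                           // count the value bits via UINT_MAX
    for (unsigned m = UINT_MAX; m != 0; m >>= 1)
        ++value_bits;

    unsigned high_bit = 1u << (value_bits - 1);   // left-most value bit
    unsigned low_bit  = 1u;                       // right-most value bit
    std::printf("value bits: %d, high mask: %#x, low mask: %#x\n",
                value_bits, high_bit, low_bit);

    for (unsigned mask = high_bit; mask != 0; mask >>= 1)   // iterate over the value bits
        std::printf("%c", (value & mask) ? '1' : '0');
    std::printf("\n");
}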
For the bit manipulations you could make a struct with eight 1-bit unsigned bit-fields and point a pointer to that struct at your char. That way you can easily access each bit. But the compiler will probably do masking under the hood, so it is only cleaner for the programmer, I think. You must check that your compiler doesn't change the order of the fields when doing this.
yourstruct* pChar = (yourstruct*)(&c);
pChar->Bit7 = 1;
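A fuller version of that snippet, for illustration only: the bit-field layout and the aliasing cast are implementation-defined territory, so whether Bit7 lands on the high or low bit of c (or whether this is valid at all) depends on your compiler.

#include <cstdio>

struct Bits {                                  // eight 1-bit fields, typically packed into one byte
    unsigned char Bit0 : 1; unsigned char Bit1 : 1;
    unsigned char Bit2 : 1; unsigned char Bit3 : 1;
    unsigned char Bit4 : 1; unsigned char Bit5 : 1;
    unsigned char Bit6 : 1; unsigned char Bit7 : 1;
};

int main() {
    unsigned char c = 0;
    Bits* pChar = reinterpret_cast<Bits*>(&c); // same cast as above; not strictly portable
    pChar->Bit7 = 1;
    std::printf("c = 0x%02X\n", static_cast<unsigned>(c));  // 0x80 or 0x01 depending on the ABI
}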
Let me caveat this by saying I'm addressing C and C++ in general (e.g. C90 and lower, MS Visual C++, etc.): the "greatest common denominator" (vs. the latest/greatest C11/C++11 standards).
Q: If all the bits in an integer type are zero, does the integer as a whole represent zero?
A: Yes
Q: If any bit in an integer type is one, does the integer as a whole represent non-zero? (if this is a "yes" then some representations like sign-and-magnitude would be additionally restricted)
A: Yes. This includes the sign bit, for a signed int.
I'm frankly not familiar with "magnitude"
Q: Is there a guaranteed way to check if any bit is not set?
A: "And'ing" a bitmask is always guaranteed.
Q: Is there a guaranteed way to check if any bit is set?
A: Again, "and'ing" a bitmask is always guaranteed.
Q: Is there a guaranteed way to set the left-most and/or right-most bits?
A: I believe you should always have INT_MAX available for all implementations/all architectures to determine the leftmost bit.
I'm prepared to be flamed ... but I believe the above is accurate. And I hope it helps.
IMHO...

At what point do doubles begin to lose precision?

My application needs to perform some operations: >, <, ==, !=, +, -, ++, etc. (but without division) on some numbers. Those numbers are sometimes integer, and more rarely floats.
If I use internally the "double" type (as defined by IEEE 754) even for integers, up to what point can I safely use them as if they were ints, without running into strange rounding errors (for example, both n == 5 and n == 6 being true because they round to the same number)?
Obviously the second operand of the various operations (+, -, etc.) is always an integer, and I know that with 0.000[..]01 I'll have trouble from the start.
As a bonus answer, the same question but for float.
The number of bits in an IEEE-754 double mantissa is 52, and there's an extra implied bit that is always 1. This means the largest integer up to which every integer can be represented exactly is 2^53, or 9007199254740992.
A float mantissa is 23 bits, again with an implied bit. The corresponding limit for float is 2^24, or 16777216.
If your intent is to hold integer values only, there's usually a 64-bit integer type that would be more appropriate than a double.
Edit: originally I had 2^53-1 and 2^24-1, but I realized there's no need to subtract 1 - an even number can take advantage of an implied 0 bit to the right of the mantissa.
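A quick check of those limits (both lines print "yes" on an IEEE-754 implementation):

#include <cstdio>

int main() {
    double d  = 9007199254740992.0;     // 2^53: the last point where every integer is exact in a double
    double d1 = d + 1.0;                // 2^53 + 1 is not representable; it rounds back to 2^53
    std::printf("double: d + 1 == d ? %s\n", d1 == d ? "yes" : "no");

    float f  = 16777216.0f;             // 2^24: the float equivalent
    float f1 = f + 1.0f;
    std::printf("float:  f + 1 == f ? %s\n", f1 == f ? "yes" : "no");
}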
C# Refer to:
However, do be aware that the range of the decimal type is smaller than that of a double. That is, a double can hold a larger value, but it does so by losing precision. Or, as stated on MSDN:
The decimal keyword denotes a 128-bit data type. Compared to floating-point types, the decimal type has a greater precision and a smaller range, which makes it suitable for financial and monetary calculations. The approximate range and precision for the decimal type are shown in the following table.
The primary difference between decimal and double is that decimal is a base-10 type and double is a base-2 floating-point type. That means decimal can store decimal values exactly, while double represents them with a binary fraction and is less precise. A decimal is 128 bits, so it takes twice the space of a double to store. Calculations on decimal are also slower (measure!).
If you need even larger precision, then BigInteger can be used from .NET 4 (you will need to handle the decimal point yourself). Be aware that BigInteger is immutable, so any arithmetic operation on it will create a new instance; if the numbers are large, this might be crippling for performance.
I suggest you look into exactly how much precision you need. Perhaps your algorithm can work with normalized values that can be smaller? If performance is an issue, one of the built-in floating-point types is likely to be faster.

Floating-point: "The leading 1 is 'implicit' in the significand." -- ...huh?

I'm learning about the representation of floating-point IEEE 754 numbers, and my textbook says:
To pack even more bits into the significand, IEEE 754 makes the leading 1-bit of normalized binary numbers implicit. Hence, the number is actually 24 bits long in single precision (implied 1 and 23-bit fraction), and 53 bits long in double precision (1 + 52).
I don't get what "implicit" means here... what's the difference between an explicit bit and an implicit bit? Don't all numbers have the bit, regardless of their sign?
Yes, all normalised numbers (other than the zeroes) have that bit set to one (a), so they make it implicit to prevent wasting space storing it.
In other words, they save that bit totally, and reuse it so that it can be used to increase the precision of your numbers.
Keep in mind that this is the first bit of the fraction, not the first bit of the binary pattern. The first bit of the binary pattern is the sign, followed by a few bits of exponent, followed by the fraction itself.
For example, a single precision number is (sign, exponent, fraction):
<1> <---8--> <----------23--------->   <- bit widths
 s  eeeeeeee fffffffffffffffffffffff
If you look at the way the number is calculated, it's:
(-1)^sign x 1.fraction x 2^(exponent - bias)
So the fractional part used for calculating that value is 1.fffff...fff (in binary).
(a) There is actually a class of numbers (the denormalised ones and the zeroes) for which that property does not hold true. These numbers all have a biased exponent of zero but the vast majority of numbers follow the rule.
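To make the implicit bit concrete, here is a small sketch (assuming a 32-bit IEEE-754 float and a normalised input) that pulls the three fields apart and rebuilds the value with the hidden 1 put back in:

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float x = 6.5f;                                   // 1.101 (binary) * 2^2
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);

    std::uint32_t sign     = bits >> 31;
    std::uint32_t exponent = (bits >> 23) & 0xFF;     // biased by 127
    std::uint32_t fraction = bits & 0x7FFFFF;         // the 23 stored fraction bits

    // (-1)^sign x 1.fraction x 2^(exponent - bias): the leading "1." is the implicit bit.
    double significand = 1.0 + fraction / 8388608.0;  // 1 + fraction / 2^23
    double value = (sign ? -1.0 : 1.0) *
                   std::ldexp(significand, static_cast<int>(exponent) - 127);
    std::printf("reconstructed: %g\n", value);        // prints 6.5
}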
Here is what they are saying. The first non-zero bit is always going to be 1. So there is no need for the binary representation to include that bit, since you know what it is. So they don't. They tell you where that first 1 is, and then they give the bits after it. So there is a 1 that is not explicitly in the binary representation, whose location is implicit from the fact that they told you where it was.
It may also be helpful to note that we are dealing in binary representations of a number. The reason that the first digit of a normalized binary number (that is, no leading zeroes) has to be 1 is that 1 is the only non-zero value available to us in this representation. So, the same would not be true for, say, base-three representations.