value changes while reinterpret_cast - c++

While converting an integer to a float using reinterpret_cast, the content of memory changes.
For example,
float fDest = 0;
__int32 nTempDest = -4808638;
fDest = *reinterpret_cast<float*>(&nTempDest);
The hexadecimal representation of the value of nTempDest is '42 a0 b6 ff', but after the reinterpret_cast the content of fDest is '42 a0 f6 ff'.
Can anyone explain why the third byte changed from b6 to f6?

In pure C++ this is in fact undefined behavior. Nevertheless there is an explanation for what you see.
I assume that the hexadecimal representations you give are from a bytewise view of memory. Obviously you are on a little endian architecture. So the 32-bit quantity we are starting from is 0xffb6a042, which is indeed the two's complement representation of -4808638.
As an IEC 60559 single-precision floating point number (also 32 bits), 0xffb6a042 is a negative, signaling NaN. NaNs in this representation have the form (in binary)
s1111111 1qxxxxxx xxxxxxxx xxxxxxxx
Here s is the sign, the x are arbitrary and q=1 for a quiet NaN and q=0 for a signaling NaN.
Now you are using the signaling NaN when you assign it to fDest. This would raise a floating point invalid exception if floating point signaling is active. By default such exceptions are simply ignored, and signaling NaN values are 'quieted' when propagated on.
So: In assigning to fDest, the NaN is propagated and the implementation converts it to a quiet NaN by setting bit 22. This is the change you observe.
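You can check that on the integer pattern directly; a minimal sketch (assuming the little-endian byte view above, so '42 a0 b6 ff' is the 32-bit value 0xffb6a042):

#include <cstdint>
#include <cstdio>

int main() {
    std::uint32_t snan = 0xffb6a042u;          // signaling NaN: bit 22 is clear
    std::uint32_t qnan = snan | (1u << 22);    // quieting sets bit 22
    // prints ffb6a042 -> fff6a042, i.e. bytes 42 a0 b6 ff -> 42 a0 f6 ff
    std::printf("%08x -> %08x\n", static_cast<unsigned>(snan), static_cast<unsigned>(qnan));
    return 0;
}

That is exactly the b6 -> f6 change in the third byte.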

Your code produces an Undefined Behavior(UB).
reinterpret_cast only gives the guarantee that if you cast from one pointer type to another and cast it back to the original pointer type, you get the original data. Anything other than that produces UB [Note 1].
This is UB because you cannot rely on the fact that:
sizeof(float) == sizeof(nTempDest)
This is not guaranteed to be true on all implementations, and reading a float through a pointer that actually points at an __int32 also falls foul of the strict aliasing rule; either way, what you get is Undefined Behavior.
[Note 1] There are exceptions to this rule; if you need to rely on these corner cases, you are swimming in rough waters, so be absolutely sure of what you are doing.
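If you want the bit pattern reinterpreted without undefined behavior, the usual portable route is std::memcpy (or std::bit_cast in C++20); a minimal sketch, assuming sizeof(float) == sizeof(__int32) == 4:

#include <cstdint>
#include <cstring>

float int_bits_to_float(std::uint32_t bits) {
    static_assert(sizeof(float) == sizeof(bits), "unexpected float size");
    float f;
    std::memcpy(&f, &bits, sizeof f);  // copies the bytes; no aliasing violation
    return f;
}

memcpy leaves the bytes in memory untouched; the quieting described in the other answer can still happen once the value is actually loaded into a floating-point register.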

Related

Z3 - Floating point arithmetic API function Z3_mk_fpa_to_ubv

I am playing around with Z3-4.6.0 C++ for the first time. Sorry for the noob questions.
My question has 2 parts.
If I have a floating point number, and I use Z3_mk_fpa_to_ubv(...) function to create an unsigned bit-vector.
How much precision is lost?
If the precision is not lost, can I use this new unsigned bit-vector as a regular bit-vector and apply all operations defined for it for e.g., Z3_mk_bvadd(....)?
I know I can use Z3_mk_fpa_to_ieee_bv(....) for graceful, IEEE-754 compliant conversion. Afterwards I can add, subtract, etc. the bit-vectors.
Just being curious.
Thank you very much.
I'm afraid you're misinterpreting the role of these functions. A good reference to keep open while working with SMTLib floats is: http://smtlib.cs.uiowa.edu/papers/BTRW15.pdf
mk_fpa_to_ubv
This function corresponds to the FPToUInt function in the cited paper. It's defined as follows:
(The NaN choice above is misleading: It should be read as "undefined.")
Note that the precision loss can be huge here, depending on what the FP value is and the bit-width of the vector. Imagine converting a double-precision floating point value to an 8-bit word: You're smashing values in the range ±2.23×10^−308 to ±1.80×10^308 to a mere 256 different values. This means a large number of conversions simply will go through massive rounding cancelations.
You should think of this as "casting" in C like languages:
unsigned char c;
double f;
c = (unsigned char) f;   // value conversion; major precision loss
This is the essence of conversion from double-precision to unsigned byte, which will suffer major precision loss. In the other direction, if you convert to a really large bit-vector (say one that has a thousand bits), then your conversion will still be losing precision per the rounding mode, though you'll be able to cover all the integer values precisely in the range. So, it really depends on what BV-type you convert to and the rounding mode you choose.
mk_fpa_to_ieee_bv
This function has nothing to do with "preserving" the value. So asking "precision loss" here is irrelevant. What it does is that it gives you the underlying bit-vector representation of the floating-point value, per the IEEE-754 spec. The wikipedia article has a good discussion on this representation: https://en.wikipedia.org/wiki/Double-precision_floating-point_format#IEEE_754_double-precision_binary_floating-point_format:_binary64
In particular, if you interpret the output of this function as a two's complement integer value, you'll get a completely irrelevant value that has nothing to do with the value of the floating-point number itself. (Also, this conversion is not unique since NaN has multiple corresponding bit-vector patterns.)
Summary
Long story short, conversions from floats to bit-vectors will suffer from precision loss not only due to losing the "fractional" part due to rounding, but also due to the limited range, unless you pick a very-large bit-vector size. The IEEE-754 representation conversion does not preserve value, and thus doing arithmetic on values converted via this function is more or less meaningless.

Why is unsigned integer overflow defined behavior but signed integer overflow isn't?

Unsigned integer overflow is well defined by both the C and C++ standards. For example, the C99 standard (§6.2.5/9) states
A computation involving unsigned operands can never overflow,
because a result that cannot be represented by the resulting unsigned integer type is
reduced modulo the number that is one greater than the largest value that can be
represented by the resulting type.
However, both standards state that signed integer overflow is undefined behavior. Again, from the C99 standard (§3.4.3/1)
An example of undefined behavior is the behavior on integer overflow
Is there an historical or (even better!) a technical reason for this discrepancy?
The historical reason is that most C implementations (compilers) just used whatever overflow behaviour was easiest to implement with the integer representation it used. C implementations usually used the same representation used by the CPU - so the overflow behavior followed from the integer representation used by the CPU.
In practice, it is only the representations for signed values that may differ according to the implementation: one's complement, two's complement, sign-magnitude. For an unsigned type there is no reason for the standard to allow variation because there is only one obvious binary representation (the standard only allows binary representation).
Relevant quotes:
C99 6.2.6.1:3:
Values stored in unsigned bit-fields and objects of type unsigned char shall be represented using a pure binary notation.
C99 6.2.6.2:2:
If the sign bit is one, the value shall be modified in one of the following ways:
— the corresponding value with sign bit 0 is negated (sign and magnitude);
— the sign bit has the value −(2N) (two’s complement);
— the sign bit has the value −(2N − 1) (one’s complement).
Nowadays, all processors use two's complement representation, but signed arithmetic overflow remains undefined and compiler makers want it to remain undefined because they use this undefinedness to help with optimization. See for instance this blog post by Ian Lance Taylor or this complaint by Agner Fog, and the answers to his bug report.
Aside from Pascal's good answer (which I'm sure is the main motivation), it is also possible that some processors cause an exception on signed integer overflow, which of course would cause problems if the compiler had to "arrange for another behaviour" (e.g. use extra instructions to check for potential overflow and calculate differently in that case).
It is also worth noting that "undefined behaviour" doesn't mean "doesn't work". It means that the implementation is allowed to do whatever it likes in that situation. This includes doing "the right thing" as well as "calling the police" or "crashing". Most compilers, when possible, will choose "do the right thing", assuming that is relatively easy to define (in this case, it is). However, if you are having overflows in the calculations, it is important to understand what that actually results in, and that the compiler MAY do something other than what you expect (and that this may vary depending on compiler version, optimisation settings, etc).
First of all, please note that C11 3.4.3, like all examples and footnotes, is not normative text and is therefore not relevant to cite!
The relevant text that states that overflow of integers and floats is undefined behavior is this:
C11 6.5/5
If an exceptional condition occurs during the evaluation of an
expression (that is, if the result is not mathematically defined or
not in the range of representable values for its type), the behavior
is undefined.
A clarification regarding the behavior of unsigned integer types specifically can be found here:
C11 6.2.5/9
The range of nonnegative values of a signed integer type is a subrange
of the corresponding unsigned integer type, and the representation of
the same value in each type is the same. A computation involving
unsigned operands can never overflow, because a result that cannot be
represented by the resulting unsigned integer type is reduced modulo
the number that is one greater than the largest value that can be
represented by the resulting type.
This makes unsigned integer types a special case.
Also note that there is an exception if any type is converted to a signed type and the old value can no longer be represented. The behavior is then merely implementation-defined, although a signal may be raised.
C11 6.3.1.3
6.3.1.3 Signed and unsigned integers
When a value with integer
type is converted to another integer type other than _Bool, if the
value can be represented by the new type, it is unchanged.
Otherwise, if the new type is unsigned, the value is converted by
repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type.
Otherwise, the new type is signed and the value
cannot be represented in it; either the result is
implementation-defined or an implementation-defined signal is raised.
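To make the difference concrete, here is a small sketch of both directions (the unsigned-to-signed result shown in the comment is what a typical two's complement implementation gives, not something the standard guarantees):

#include <climits>
#include <cstdio>

int main() {
    int neg = -1;
    unsigned u = (unsigned)neg;     // well defined: -1 + (UINT_MAX + 1) == UINT_MAX
    std::printf("%u\n", u);         // e.g. 4294967295 with 32-bit unsigned
    int back = (int)UINT_MAX;       // out of range: implementation-defined result
    std::printf("%d\n", back);      // typically -1 on two's complement machines
    return 0;
}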
In addition to the other issues mentioned, having unsigned math wrap makes the unsigned integer types behave as abstract algebraic groups (meaning that, among other things, for any pair of values X and Y, there will exist some other value Z such that X+Z will, if properly cast, equal Y, and Y-Z will, if properly cast, equal X). If unsigned values were merely storage-location types and not intermediate-expression types (e.g. if there were no unsigned equivalent of the largest integer type, and arithmetic operations on unsigned types behaved as though they were first converted to larger signed types), then there wouldn't be as much need for defined wrapping behavior, but it's difficult to do calculations in a type which doesn't have e.g. an additive inverse.
This helps in situations where wrap-around behavior is actually useful - for example with TCP sequence numbers or certain algorithms, such as hash calculation. It may also help in situations where it's necessary to detect overflow, since performing calculations and checking whether they overflowed is often easier than checking in advance whether they would overflow, especially if the calculations involve the largest available integer type.
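For example, an after-the-fact overflow check that relies on the guaranteed modulo wrap-around might look like this (a sketch, not part of the original answer):

#include <cstdint>

// Unsigned addition wraps modulo 2^32, so a wrapped sum is always smaller
// than either operand; that makes after-the-fact detection trivial.
bool add_overflows(std::uint32_t a, std::uint32_t b, std::uint32_t& sum)
{
    sum = a + b;      // never UB: wraps if the mathematical result doesn't fit
    return sum < a;   // true exactly when the addition wrapped
}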
Perhaps another reason for why unsigned arithmetic is defined is because unsigned numbers form integers modulo 2^n, where n is the width of the unsigned number. Unsigned numbers are simply integers represented using binary digits instead of decimal digits. Performing the standard operations in a modulus system is well understood.
The OP's quote refers to this fact, but also highlights the fact that there is only one, unambiguous, logical way to represent unsigned integers in binary. By contrast, Signed numbers are most often represented using two's complement but other choices are possible as described in the standard (section 6.2.6.2).
Two's complement representation allows certain operations to make more sense in binary format. E.g., incrementing negative numbers is the same as for positive numbers (except under overflow conditions). Some operations at the machine level can be the same for signed and unsigned numbers. However, when interpreting the result of those operations, some cases don't make sense - positive and negative overflow. Furthermore, the overflow results differ depending on the underlying signed representation.
The most technical reason of all, is simply that trying to capture overflow in an unsigned integer requires more moving parts from you (exception handling) and the processor (exception throwing).
C and C++ won't make you pay for that unless you ask for it by using a signed integer. This isn't a hard-and-fast rule, as you'll see near the end, but it is just how they proceed for unsigned integers. In my opinion, this makes signed integers the odd one out, not unsigned, but it's fine that they offer this fundamental difference, as the programmer can still perform well-defined signed operations with overflow. But to do so, you must cast for it.
Because:
unsigned integers have well defined overflow and underflow
casts from signed -> unsigned int are well defined, [uint's name]_MAX + 1 is conceptually added to negative values, to map them to the extended positive number range
casts from unsigned -> signed int are well defined, [uint's name]_MAX + 1 is conceptually deducted from positive values beyond the signed type's max, to map them to negative numbers
You can always perform arithmetic operations with well-defined overflow and underflow behavior, where signed integers are your starting point, albeit in a round-about way, by casting to unsigned integer first then back once finished.
int32_t x = 10;
int32_t y = -50;
// writes -60 into z, this is well defined
int32_t z = int32_t(uint32_t(y) - uint32_t(x));
Casts between signed and unsigned integer types of the same width are free, if the CPU is using two's complement (nearly all do). If for some reason the platform you're targeting doesn't use two's complement for signed integers, you will pay a small conversion price when casting between uint32 and int32.
But be wary when using bit widths smaller than int
Usually, if you are relying on unsigned overflow, you are using a smaller word width, 8-bit or 16-bit. These will promote to signed int at the drop of a hat (C has absolutely insane implicit integer conversion rules; this is one of C's biggest hidden gotchas). Consider:
unsigned char a = 0;
unsigned char b = 1;
printf("%i", a - b); // outputs -1, not 255 as you'd expect
To avoid this, you should always cast to the type you want when you are relying on that type's width, even in the middle of an operation where you think it's unnecessary. This will cast the temporary and get you the signedness AND truncate the value so you get what you expected. It's almost always free to cast, and in fact, your compiler might thank you for doing so as it can then optimize on your intentions more aggressively.
unsigned char a = 0;
unsigned char b = 1;
printf("%i", (unsigned char)(a - b)); // cast turns -1 to 255, outputs 255

Why int i=400*400/400 gives result 72, is datatype circular?

I think first 400*400=160000 is converted to 28928 by starting from 0 and going around 160000 times in a circular fashion for the int type (say sizeof(int) = 2 bytes), assuming it wraps around like a circle.
And then 28928 is divided by 400, the floor of which gives 72, and the result varies with the type of the variable. Is my assumption correct, or is there any other explanation?
Assuming you're using a compiler horrifically old enough that int is only 16 bits, then yes, your analysis is correct.*
400 * 400 = 160000
// Integer overflow wrap-around.
160000 % 2^16 = 28928
// Integer Division
28928 / 400 = 72 (rounded down)
Of course, for larger datatypes, this overflow won't happen so you'll get back 400.
*This wrap-around behavior is guaranteed only for unsigned integer types. For signed integers, it is technically undefined behavior in C and C++.
In many cases, signed integers will still exhibit the same wrap-around behavior. But you just can't count on it. (So your example with a signed 16-bit integer isn't guaranteed to hold.)
Although rare, here are some examples of where signed integer overflow does not wrap around as expected:
Why does integer overflow on x86 with GCC cause an infinite loop?
Compiler optimization causing program to run slower
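A tiny illustration of the kind of surprise those links describe (hypothetical; what actually happens depends on the compiler and optimization level):

#include <climits>
#include <cstdio>

int main() {
    int x = INT_MAX;
    // An optimizing compiler may assume signed overflow never happens and
    // fold this comparison to "true", even though x + 1 wraps to INT_MIN
    // on most two's complement hardware.
    if (x + 1 > x)
        std::printf("compiler assumed no overflow\n");
    else
        std::printf("wrapped around\n");
    return 0;
}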
It certainly seems like you guessed correctly.
If int is a 16-bit type, then it'll behave exactly as you described. Your operation happens sequentially, and 400 * 400 produces 160000, which is 10 0111 0001 0000 0000 in binary.
When you store this in a 16-bit register, the top "10" gets chopped off and you end up with 0111 0001 0000 0000 (28,928)... and you guessed the rest.
Which compiler/platform are you building this on? Typical desktop would be at least 32-bit so you wouldn't see this issue.
UPDATE #1
NOTE: This is what explains your behavior with YOUR specific compiler. As so many others were quick to point out, DO NOT take this answer to assume all compilers behave this way. But YOUR specific one certainly does.
To complete the answer from the comments below, the reason you are seeing this behavior is that most major compilers optimize for speed in these cases and do not add safety checks after simple arithmetic operations. So as I outlined above, the hardware simply doesn't have room to store those extra bits and that's why you are seeing "circular" behavior.
The first thing you have to know is, in C, integer overflows are undefined behavior.
(C99, 6.5.5p5) "If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined."
C says it very clear and repeats it here:
(C99, 3.4.3p3) "EXAMPLE An example of undefined behavior is the behavior on integer overflow."
Note that integer overflow only concerns signed integers, as unsigned integers never overflow:
(C99, 6.2.5p9) "A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type."
Your declaration is this one:
int i = 400 * 400 / 400;
Assuming int is 16-bit in your platform and the signed representation is two's complement, 400 * 400 is equal to 160000 which is not representable as an int, INT_MAX value being 32767. We are in presence of an integer overflow and the implementation can do whatever it wants.
Usually, in this specific example, the compiler will do one of the two solutions below:
Consider that the overflow wraps around modulo the word size, like with unsigned integers; then the result of 400 * 400 / 400 is 72.
Take advantage of the undefined behavior to reduce the expression 400 * 400 / 400 to 400. This is done by good compilers usually when the optimization options are enabled.
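You can reproduce the first option on a platform with 32-bit int by truncating the intermediate result to 16 bits yourself; a sketch (the out-of-range conversion to int16_t is implementation-defined, but on two's complement machines it behaves like the wrap described above):

#include <cstdint>
#include <cstdio>

int main() {
    std::int16_t product = static_cast<std::int16_t>(400 * 400); // 160000 truncated to 28928
    std::printf("%d\n", product / 400);                          // prints 72
    return 0;
}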
Note that integer overflow being undefined behavior is specific to the C language. In most languages (Java for instance) it wraps around modulo the word size, like unsigned integers in C.
In gcc, to have overflow always wrap, there is the -fno-strict-overflow option that can be enabled (it is disabled by default). This is for example the choice of the Linux kernel; they compile with this option to avoid bad surprises. But this also prevents the compiler from performing all the optimizations it otherwise could.

Trap representation

What is a "trap representation" in C (some examples might help)? Does this apply to C++?
Given this code...
float f=3.5;
int *pi = (int*)&f;
... and assuming that sizeof(int) == sizeof(float), do f and *pi have the same binary representation/pattern?
A trap representation is a catch-all term used by C99 (IIRC not by C89) to describe bit patterns that fit into the space occupied by a type, but trigger undefined behavior if used as a value of that type. The definition is in section 6.2.6.1p5 (with tentacles into all of 6.2.6) and I'm not going to quote it here because it's long and confusing. A type for which such bit patterns exist is said to "have" trap representations. No type is required to have any trap representations, but the only type that the standard guarantees will not have trap representations is unsigned char (6.2.6.1p5, 6.2.6.2p1).
The standard gives two hypothetical examples of trap representations, neither of which correspond to anything that any real CPU has done for many years, so I'm not going to confuse you with them. A good example of a trap representation (also the only thing that qualifies as a hardware-level trap representation on any CPU you are likely to encounter) is a signaling NaN in a floating-point type. C99 Annex F (section 2.1) explicitly leaves the behavior of signaling NaNs undefined, even though IEC 60559 specifies their behavior in detail.
It's worth mentioning that, while pointer types are allowed to have trap representations, null pointers are not trap representations. Null pointers only cause undefined behavior if they are dereferenced or offset; other operations on them (most importantly, comparisons and copies) are well-defined. Trap representations cause undefined behavior if you merely read them using the type that has the trap representation. (Whether invalid but non-null pointers are, or should be, considered trap representations is a subject of debate. The CPU doesn't treat them that way, but the compiler might.)
The code you show has undefined behavior, but this is because of the pointer-aliasing rules, not because of trap representations. This is how to convert a float into the int with the same representation (assuming, as you say, sizeof(float) == sizeof(int))
int extract_int(float f)
{
union { int i; float f; } u;
u.f = f;
return u.i;
}
This code has unspecified (not undefined) behavior in C99, which basically means the standard doesn't define what integer value is produced, but you do get some valid integer value, it's not a trap representation, and the compiler is not allowed to optimize on the assumption that you have not done this. (Section 6.2.6.1, para 7. My copy of C99 might include technical corrigenda — my recollection is that this was undefined in the original publication but was changed to unspecified in a TC.)
It is undefined behaviour to alias a float with a pointer-to-int.
In general, any non-trap IEEE-754 floating point value can be represented as an integer on some platforms without any issue. However, there are floating point values that can result in unexpected behavior if you assume that all floating point values have a unique integer representation and you happen to force the FPU to load that value.
(The example is taken from Byte Swapping Floating Point Types.)
For instance, when working with floating point data that you need to marshal between CPUs with different endianness, you might think of doing the following:
double swap(double)
Unfortunately, if the compiler loads the input into an FPU register and it's a trap representation, the FPU can write it back out with an equivalent trap representation that happens to be a different bit representation.
In other words, there are some floating point values that do not have a corresponding bit representation if you don't convert correctly (by correct I mean through a union, memcpy via char * or other standard mechanism).
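One way to sidestep the problem is to never interpret the bytes as a double at all while swapping; a minimal sketch (swap_double_bytes is a hypothetical helper, not from the original answer):

#include <algorithm>

// Swap a serialized double in place, working only on unsigned char.
// unsigned char has no trap representations, so every bit pattern
// survives the round trip intact.
void swap_double_bytes(unsigned char *buf)
{
    std::reverse(buf, buf + sizeof(double));
}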

What happens in C++ when an integer type is cast to a floating point type or vice-versa?

Do the underlying bits just get "reinterpreted" as a floating point value? Or is there a run-time conversion to produce the nearest floating point value?
Is endianness a factor on any platforms (i.e., endianness of floats differs from ints)?
How do different width types behave (e.g., int to float vs. int to double)?
What does the language standard guarantee about the safety of such casts/conversions? By cast, I mean a static_cast or C-style cast.
What about the inverse float to int conversion (or double to int)? If a float holds a small magnitude value (e.g., 2), does the bit pattern have the same meaning when interpreted as an int?
Do the underlying bits just get "reinterpreted" as a floating point value?
No, the value is converted according to the rules in the standard.
is there a run-time conversion to produce the nearest floating point value?
Yes there's a run-time conversion.
For floating point -> integer, the value is truncated, provided that the source value is in range of the integer type. If it is not, behaviour is undefined. At least I think that it's the source value, not the result, that matters. I'd have to look it up to be sure. The boundary case if the target type is char, say, would be CHAR_MAX + 0.5. I think it's undefined to cast that to char, but as I say I'm not certain.
For integer -> floating point, the result is the exact same value if possible, or else is one of the two floating point values either side of the integer value. Not necessarily the nearer of the two.
Is endianness a factor on any platforms (i.e., endianness of floats differs from ints)?
No, never. The conversions are defined in terms of values, not storage representations.
How do different width types behave (e.g., int to float vs. int to double)?
All that matters is the ranges and precisions of the types. Assuming 32 bit ints and IEEE 32 bit floats, it's possible for an int->float conversion to be imprecise. Assuming also 64 bit IEEE doubles, it is not possible for an int->double conversion to be imprecise, because all int values can be exactly represented as a double.
What does the language standard guarantee about the safety of such casts/conversions? By cast, I mean a static_cast or C-style cast.
As indicated above, it's safe except in the case where a floating point value is converted to an integer type, and the value is outside the range of the destination type.
If a float holds a small magnitude value (e.g., 2), does the bit pattern have the same meaning when interpreted as an int?
No, it does not. The IEEE 32 bit representation of 2 is 0x40000000.
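A quick way to see both facts at once, sketched with memcpy to read the stored representation safely:

#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
    float two = 2.0f;
    std::uint32_t bits;
    std::memcpy(&bits, &two, sizeof bits);                  // inspect the stored pattern
    std::printf("value cast: %d\n", static_cast<int>(two)); // 2  (value conversion)
    std::printf("bit pattern: 0x%08x\n", static_cast<unsigned>(bits)); // 0x40000000
    return 0;
}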
For reference, this is what ISO-IEC 14882-2003 says
4.9 Floating-integral conversions
An rvalue of a floating point type can be converted to an rvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type. [Note: If the destination type is bool, see 4.12.]
An rvalue of an integer type or of an enumeration type can be converted to an rvalue of a floating point type. The result is exact if possible. Otherwise, it is an implementation-defined choice of either the next lower or higher representable value. [Note: loss of precision occurs if the integral value cannot be represented exactly as a value of the floating type.] If the source type is bool, the value false is converted to zero and the value true is converted to one.
Reference: What Every Computer Scientist Should Know About Floating-Point Arithmetic
Other highly valuable references on the subject of fast float to int conversions:
Know your FPU
Let's Go To The (Floating) Point
Know your FPU: Fixing Floating Fast
Have a good read!
There are normally run-time conversions, as the bit representations are not generally compatible (with the exception that binary 0 is normally both 0 and 0.0). The C and C++ standards deal only with value, not representation, and specify generally reasonable behavior. Remember that a large int value will not normally be exactly representable in a float, and a large float value cannot be represented by an int.
Therefore:
All conversions are by value, not bit patterns. Don't worry about the bit patterns.
Don't worry about endianness, either, since that's a matter of bitwise representation.
Converting int to float can lose precision if the integer value is large in absolute value; it is less likely to with double, since double is more precise, and can represent many more exact numbers. (The details depend on what representations the system is actually using.)
The language definitions say nothing about bit patterns.
Converting from float to int is also a matter of values, not bit patterns. An exact floating-point 2.0 will convert to an integral 2 because that's how the implementation is set up, not because of bit patterns.
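To make the precision point above concrete, a minimal sketch (assuming 32-bit int and IEEE-754 float/double; the rounded float value shown in the comment is what round-to-nearest typically gives, the standard only promises one of the two neighbouring values):

#include <climits>
#include <cstdio>

int main() {
    int big = INT_MAX;                    // 2147483647 with 32-bit int
    float f = static_cast<float>(big);    // float has only 24 bits of significand
    double d = static_cast<double>(big);  // double holds every 32-bit int exactly
    std::printf("%d -> %.1f (float), %.1f (double)\n", big, f, d);
    // typically: 2147483647 -> 2147483648.0 (float), 2147483647.0 (double)
    return 0;
}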
When you convert an integer to a float, you are not liable to lose any precision unless you are dealing with extremely large integers.
When you convert a float to an int you are essentially truncating toward zero, so it just drops the bits after the decimal point.
For more information on floating point read: http://www.lahey.com/float.htm
The IEEE single-precision format has 24 bits of mantissa, 8 bits of exponent, and a sign bit. The internal floating-point registers in Intel microprocessors such as the Pentium have 64 bits of mantissa, 15 bits of exponent and a sign bit. This allows intermediate calculations to be performed with much less loss of precision than many other implementations. The down side of this is that, depending upon how intermediate values are kept in registers, calculations that look the same can give different results.
So if your integer needs more than 24 significant bits (the 23 stored mantissa bits plus the hidden leading bit), then you are likely to lose some precision in the conversion.
Reinterpreted? The term "reinterpretation" usually refers to raw memory reinterpretation. It is, of course, impossible to meaningfully reinterpret an integer value as a floating-point value (and vice versa) since their physical representations are generally completely different.
When you cast the types, a run-time conversion is being performed (as opposed to reinterpretation). The conversion is normally not just conceptual, it requires an actual run-time effort, because of the difference in physical representation. There are no language-defined relationships between the bit patterns of source and target values. Endianness plays no role in it either.
When you convert an integer value to a floating-point type, the original value is converted exactly if it can be represented exactly by the target type. Otherwise, the value will be changed by the conversion process.
When you convert a floating-point value to integer type, the fractional part is simply discarded (i.e. not the nearest value is taken, but the number is rounded towards zero). If the result does not fit into the target integer type, the behavior is undefined.
Note also, that floating-point to integer conversions (and the reverse) are standard conversions and formally require no explicit cast whatsoever. People might sometimes use an explicit cast to suppress compiler warnings.
If you cast the value itself, it will get converted (so in a float -> int conversion 3.14 becomes 3).
But if you cast the pointer, then you will actually 'reinterpret' the underlying bits. So if you do something like this:
double d = 3.14;
int x = *reinterpret_cast<int *>(&d);
x will have a 'random' value that is based on the representation of floating point.
Converting FP to integral type is nontrivial and not even completely defined.
Typically your FPU implements a hardware instruction to convert from IEEE format to int. That instruction might take parameters (implemented in hardware) controlling rounding. Your ABI probably specifies round-to-nearest-even. If you're on X86+SSE, it's "probably not too slow," but I can't find a reference with one Google search.
As with anything FP, there are corner cases. It would be nice if infinity were mapped to (TYPE)_MAX but that is alas not typically the case — the result of int x = INFINITY; is undefined.
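For completeness, here is the difference between the truncation a cast performs and a conversion that honours the current rounding mode, sketched with std::lrintf (the results in the comments assume the default round-to-nearest-even mode):

#include <cmath>
#include <cstdio>

int main() {
    float f = 1.5f;
    std::printf("cast:   %d\n", static_cast<int>(f));  // 1: casts truncate toward zero
    std::printf("lrintf: %ld\n", std::lrintf(f));      // 2: uses the current rounding mode
    return 0;
}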