Type conversions without loss of precision - c++

I've just recently noticed some of the code in the framework I am working with converts certain variables to doubles and then back when they are accessed by the framework. In the C++11 standard, is this guaranteed to work without loss of precision for any integral types? If so, which? Are there any additional types that are universally safe for this kind of conversion in common implementations?
Also, is there any way to check at compile time that a conversion is safe in this way? Essentially I would like something like:
static_assert(T(double(T type))==type);

T(double(T_value))==T_value is guaranteed when the range of integral type T is a subrange of the range of exact integral values of type double.
Since no implementation of double has 16 bits or fewer of mantissa, and since as far as I know there's no extant C++ implementation with more than 16 bits per byte (the CHAR_BIT constant from <limits.h>), this guarantee holds for char and its explicitly signed and unsigned variants.
Typically a double has a 53-bit significand (IEEE 754 binary64), which is enough for the guarantee to hold also for 32-bit integral types, but not for 64-bit ones.

Well, I (mostly) figured out the second part of my question:
#include <limits>
static_assert(T(double(std::numeric_limits<T>::max()))==std::numeric_limits<T>::max(),"ERROR MESSAGE.");
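For reference, here is a slightly more general sketch (C++11) that checks both extremes of T; the helper name is just illustrative:

#include <limits>

template <typename T>
constexpr bool survives_double_roundtrip()
{
    // True when both extremes of T come back unchanged from a trip through double.
    return T(double(std::numeric_limits<T>::max())) == std::numeric_limits<T>::max()
        && T(double(std::numeric_limits<T>::min())) == std::numeric_limits<T>::min();
}

static_assert(survives_double_roundtrip<short>(), "short does not survive a double round-trip");
static_assert(survives_double_roundtrip<int>(),   "int does not survive a double round-trip");

// For a 64-bit type on a typical platform the check does not pass: converting its max back
// from double overflows, which is not allowed in a constant expression, so compilation fails.
// static_assert(survives_double_roundtrip<long long>(), "...");

int main() {}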

Related

In C++, what happens when I use static_cast<char> on an integer value outside the [-128, 127] range?

In code compiled on i386 Linux using g++, I have used a static_cast<char>() cast on a value that might exceed the valid range of [-128, 127] for a char. There were no errors or exceptions, and so I used the code in production.
The problem is that now I don't know how this code might behave when a value outside this range is thrown at it. There is no problem if the data is modified or truncated; I only need to know how this modification behaves on this particular platform.
Also, what would happen if a C-style cast ((char)value) had been used? Would it behave differently?
In your case this is an explicit type conversion, or, to be more precise, an integral conversion.
The standard says about this (4.7):
If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
So the result in your case is implementation-defined. On the other hand, I have not yet seen a compiler that does anything other than truncate the larger value to the smaller one, i.e. one that actually exploits the implementation-defined latitude mentioned above.
So it should be fairly safe to just cast your int/short to char.
I don't know the rules for a C-style cast by heart, and I try to avoid them because it is not easy to say which rule will kick in.
This is dealt with in §4.7 of the standard (integral conversions).
The answer depends on whether, in the implementation in question, char is signed or unsigned. If it is unsigned, then modulo arithmetic is applied. §4.7/2 of C++11 states: "If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type)." This means that if the input integer is not negative, the normal bit truncation you expect will arise. If it is negative, the same will apply if negative numbers are represented by 2's complement; otherwise the conversion will be bit-altering.
If char is signed, §4.7/3 of C++11 applies: "If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined." So it is up to the documentation for the particular implementation you use. Having said that, on 2's complement systems (i.e. all those in normal use) I have not seen a case where anything other than normal bit truncation occurs for char types: apart from anything else, by virtue of §3.9.1/1 of the C++11 standard all character types (char, unsigned char and signed char) must have the same object representation and alignment.
The effect of a C-style cast, an explicit static_cast and an implicit narrowing conversion is the same.
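To make the above concrete, here is a small sketch of what a typical 2's complement implementation with an 8-bit char does (the exact value in the signed case is, strictly, implementation-defined):

#include <iostream>

int main()
{
    int value = 300;                                      // 0x12C, does not fit in 8 bits

    unsigned char u = static_cast<unsigned char>(value);  // modulo 2^8: 300 % 256 == 44
    char c = static_cast<char>(value);                    // implementation-defined if char is signed;
                                                          // typical compilers keep the low 8 bits
    char k = (char)value;                                 // C-style cast performs the same conversion

    std::cout << static_cast<int>(u) << ' '
              << static_cast<int>(c) << ' '
              << static_cast<int>(k) << '\n';             // typically prints "44 44 44"
}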
Technically, the language spec imposes a plain base-2 representation for unsigned types, and for plain base-2 it is pretty obvious what extension and truncation do.
For signed types, however, the spec is more "tolerant", allowing potentially different kinds of processors to use different ways of representing signed numbers. And since the same number may have different representations on different platforms, it is practically impossible to describe what happens to it when bits are added or removed.
For this reason, the language specification stays vague, saying only that "the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined".
In other words, compiler vendors are expected to do the best they can to keep the numeric value, but when that cannot be done, they are free to do whatever is most efficient for them.

Difference between object and value representation by example

N3797::3.9/4 [basic.types]:
The object representation of an object of type T is the sequence of N unsigned char objects taken up by the object of type T, where N equals sizeof(T). The value representation of an object is the set of bits that hold the value of type T. For trivially copyable types, the value representation is a set of bits in the object representation that determines a value, which is one discrete element of an implementation-defined set of values.
N3797::3.9.1 [basic.fundamental] says:
For narrow character types, all bits of the object representation participate in the value representation.
Consider the following struct:
struct A
{
    char a;
    int b;
};
I think that for A not all bits of the object representation participate in the value representation, because of the padding added by the implementation. But what about other fundamental types?
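Here is a small sketch of the padding point; the sizes and offsets are implementation-dependent, and the comments show what a common implementation with 4-byte int and 4-byte alignment gives:

#include <cstddef>
#include <iostream>

struct A
{
    char a;
    int b;
};

int main()
{
    std::cout << sizeof(char) << ' ' << sizeof(int) << ' ' << sizeof(A) << '\n'; // typically "1 4 8"
    std::cout << offsetof(A, b) << '\n';  // typically 4: three padding bytes follow 'a'
}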
The Standard says:
N3797::3.9.1 [basic.fundamental]
For narrow character types, all bits of the object representation participate in the value representation.
These requirements do not hold for other types.
I can't imagine why it doesn't hold for say int or long. What's the reason? Could you clarify?
An example might be the Unisys mainframes, where an int has 48 bits, but only 40 participate in the value representation (and INT_MAX is 2^39-1); the others must be 0. I imagine that any machine with a tagged architecture would have similar issues.
EDIT:
Just some further information: the Unisys mainframes are probably the only remaining architectures which are really exotic: the Unisys Libra (ex-Burroughs) have a 48-bit word, use signed magnitude for integers, and have a tagged architecture, where the data itself contains information concerning its type. The Unisys Dorado are the ex-Univac: 36-bit one's complement (but no reserved bits for tagging) and 9-bit chars.
From what I understand, however, Unisys is phasing them out (or has phased them out in the last year) in favor of Intel-based systems. Once they disappear, pretty much all systems will be 2's complement, 32 or 64 bits, and all but the IBM mainframes will use IEEE floating point (and IBM is moving or has moved in that direction as well). So there won't be any motivation for the standard to continue with special wording to support them; in the end, in a couple of years at least, C/C++ could probably follow the Java path and impose a representation on all of its basic data types.
This is probably meant to give the compiler headroom for optimizations on some platforms.
Consider, for example, a 64-bit platform where handling non-64-bit values incurs a large penalty; there it would make sense to have e.g. short use only 16 bits for its value representation, while still occupying 64 bits of storage for its object representation.
A similar rationale applies to the fastest minimum-width integer types (int_fast16_t and friends) mandated by <cstdint>: sometimes larger types are not slower, but faster to use.
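A quick illustration of that last point; the widths of the fast types are implementation-defined, and the comments reflect a typical x86-64 Linux/glibc setup:

#include <cstdint>
#include <iostream>

int main()
{
    // int_fast16_t only promises "at least 16 bits, and fast to use".
    std::cout << sizeof(std::int16_t) << '\n';        // 2 (where the exact-width type exists)
    std::cout << sizeof(std::int_fast16_t) << '\n';   // often 8 on x86-64 glibc: bigger, but not slower
}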
As far as I understand, at least one case for this is dealing with trap representations, usually on exotic architectures. This issue is covered in N2631: Resolving the difference between C and C++ with regards to object representation of integers. It is very long, but I will quote some sections (the author is James Kanze, so if we are lucky maybe he will drop by and comment further), with emphasis mine.
In recent discussions in comp.lang.c++, it became clear that C and C++ have different requirements concerning the object representation of integers, and that at least one real implementation of C does not meet the C++ requirements. The purpose of this paper is to suggest wording to align the C++ standard with C.
It should be noted that the issue only concerns some fairly “exotic” hardware. In this regard, it raises a somewhat larger issue
and:
If C compatibility is desired, it seems to me that the simplest and surest way of attaining this is by incorporating the exact words from the C standard, in place of the current wording. I thus propose that we adopt the wording from the C standard, as follows
and:
Certain object representations need not represent a value of the object type. If the stored value of an object has such a representation and is read by an lvalue expression that does not have character type, the behavior is undefined. If such a representation is produced by a side effect that modifies all or any part of the object by an lvalue expression that does not have character type, the behavior is undefined. Such a representation is called a trap representation.
and:
For signed integer types [...] Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two), or with sign bit and all value bits 1 (for one's complement), is a trap representation or a normal value. In the case of sign and magnitude and one's complement, if this representation is a normal value it is called a negative zero.

What does 'Natural Size' really mean in C++?

I understand that the 'natural size' is the width of integer that is processed most efficiently by a particular hardware. When using short in an array or in arithmetic operations, the short integer must first be converted into int.
Q: What exactly determines this 'natural size'?
I am not looking for simple answers such as
If it has a 32-bit architecture, its natural size is 32 bits
I want to understand why this is most efficient, and why a short must be converted before doing arithmetic operations on it.
Bonus Q: What happens when arithmetic operations are conducted on a long integer?
Generally speaking, each computer architecture is designed such that certain type sizes provide the most efficient numerical operations. The specific size then depends on the architecture, and the compiler will select an appropriate size. More detailed explanations as to why hardware designers selected certain sizes for particular hardware would be out of scope for Stack Overflow.
A short must be promoted to int before performing integral operations because that's the way it was in C, and C++ inherited that behavior with little or no reason to change it (doing so could have broken existing code). I'm not sure of the reason it was originally added in C, but one could speculate that it's related to "default int", where the compiler assumed int if no type was specified.
Bonus A: from 5/9 (expressions) we learn: Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:
And then of interest specifically:
[floating-point rules that don't matter here]
Otherwise, the integral promotions (4.5) shall be performed on both operands.
Then, if either operand is unsigned long the other shall be converted to unsigned long.
Otherwise, if one operand is a long int and the other unsigned int, then if a long int can represent all the values of an unsigned int, the unsigned int shall be converted to a long int; otherwise both operands shall be converted to unsigned long int.
Otherwise, if either operand is long, the other shall be converted to long.
In summary the compiler tries to use the "best" type it can to do binary operations, with int being the smallest size used.
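A small sketch of those promotion rules in action (C++11, on a common platform where int is wider than short and char):

#include <type_traits>

// Arithmetic on two shorts (or two chars) is carried out in int, per the integral promotions.
static_assert(std::is_same<decltype(short() + short()), int>::value, "short + short yields int");
static_assert(std::is_same<decltype(char() + char()), int>::value,   "char + char yields int");

// The signed/unsigned cases depend on the platform's type sizes: on an LP64 system
// decltype(1L + 1U) is long, while on an ILP32 system it is unsigned long.

int main() {}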
the 'natural size' is the width of integer that is processed most efficiently by a particular hardware.
Not really. Consider the x64 architecture. Arithmetic on any size from 8 to 64 bits will be essentially the same speed. So why have all x64 compilers settled on a 32-bit int? Well, because there was a lot of code out there which was originally written for 32-bit processors, and a lot of it implicitly relied on ints being 32-bits. And given the near-uselessness of a type which can represent values up to nine quintillion, the extra four bytes per integer would have been virtually unused. So we've decided that 32-bit ints are "natural" for this 64-bit platform.
Compare the 80286 architecture: only 16 bits in a register. Performing 32-bit integer addition on such a platform basically requires splitting it into two 16-bit additions. Doing virtually anything with such a value involves splitting it up, with an attendant slowdown. The 80286's "natural integer size" is most definitely not 32 bits.
So really, "natural" comes down to considerations like processing efficiency, memory usage, and programmer-friendliness. It is not an acid test. It is very much a matter of subjective judgment on the part of the architecture/compiler designer.
What exactly determines this 'natural size'?
For some processors (e.g. 32-bit ARM, and most DSP-style processors), it's determined by the architecture; the processor registers are a particular size, and arithmetic can only be done on values of that size.
Others (e.g. Intel x64) are more flexible, and there's no single "natural" size; it's up to the compiler designers to choose a size, a compromise between efficiency, range of values, and memory usage.
why this is most efficient
If the processor requires values to be a particular size for arithmetic, then choosing another size will force you to convert the values to the required size - probably for a cost.
why a short must be converted before doing arithmetic operations on it
Presumably, that was a good match for the behaviour of commonly-used processors when C was developed, half a century ago. C++ inherited the promotion rules from C. I can't really comment on exactly why it was deemed a good idea, since I wasn't born then.
What happens when arithmetic operations are conducted on a long integer?
If the processor registers are large enough to hold a long, then the arithmetic will be much the same as for int. Otherwise, the operations will have to be broken down into several operations on values split between multiple registers.
I understand that the 'natural size' is the width of integer that is processed most efficiently by a particular hardware.
That's an excellent start.
Q: What exactly determines this 'natural size'?
The paragraph above is the definition of "natural size". Nothing else determines it.
I want to understand why this is most efficient
By definition.
and why a short must be converted before doing arithmetic operations on it.
It is so because the C language definition says so. There are no deep architectural reasons (there could have been some when C was invented).
Bonus Q: What happens when arithmetic operations are conducted on a long integer?
A bunch of electrons rushes through dirty sand and meets a bunch of holes. (No, really. Ask a vague question...)

How to portably check extremal values for SuSv3 data types?

By SuSv3, ssize_t is required to be a signed integer type. If I want to check if a value I calculate is larger than the maximal value allowed for such a data type, I could compare it to INT_MAX, which isn't nice.
Is there a more portable way this comparison can be done - a macro/function f that works as in f(<typedef'ed datatype>) = the maximum value allowed for that type on this system - or a short sequence of such operations to the same effect?
System:
Ubuntu 12.04.
glibc 2.15
Kernel 3.2.0
P.S.: When googling this, I first thought that the gcc extension 'typeof' sounded promising; but it seemed to not help here (or does it?). This is to say I'm fine with anything that might be a gcc extension/attribute/etc.
For an unsigned arithmetic type, (type)-1 is the maximum value. Since you don't know what the relative size of types is, cast to uintmax_t:
#define UNSIGNED_TYPE_MAX(t) ((uintmax_t)(t)-1)
if ((uintmax_t)x > UNSIGNED_TYPE_MAX(size_t)) puts("too large");
There is no such shortcut for signed types. In fact, I don't think there's any way of determining the largest value of a signed type in strictly portable C89 or C99 without using the corresponding constant, such as SSIZE_MAX for ssize_t. C99 specifies constants in stdint.h for the limits of each arithmetic type it defines. For types defined in POSIX but not in standard C, there are many values in limits.h; note that they are the limits of what can be valid values for the type's intended purpose, rather than the limits of what can fit in the type. For example, if size_t is a 32-bit type, then SIZE_MAX is guaranteed to be 2^32-1, whereas SSIZE_MAX could be less than 2^31-1 if the implementation doesn't support any byte count larger than that.
With the added assumption that integers are represented in binary and there are no padding bits, which is safe if you're limiting yourself to POSIX (where CHAR_BIT is always 8), you can deduce the maximum value by computing the size of the type: there is one sign bit in a signed type, and everything else is a value bit.
#define SIGNED_TYPE_MAX(t) (((uintmax_t)1 << (sizeof(t) * CHAR_BIT - 1)) - 1)
Note that things like “double until it stops growing” or “shove in the bit pattern 0111…111” are dodgy. The C standard says that overflow is undefined behavior for signed types, and GCC takes advantage of this to perform optimizations on signed operations that can yield the wrong value if an overflow happens. For example, it might perform the computation in a larger register, so that the overflow turns out not to happen at all.
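For completeness, here is a minimal sketch putting the two macros from this answer together in one translation unit; it assumes a POSIX system (for ssize_t and 8-bit bytes), and the computed value is just an arbitrary example:

#include <limits.h>    /* CHAR_BIT */
#include <stdint.h>    /* uintmax_t */
#include <stdio.h>
#include <sys/types.h> /* ssize_t (POSIX) */

#define UNSIGNED_TYPE_MAX(t) ((uintmax_t)(t)-1)
#define SIGNED_TYPE_MAX(t)   (((uintmax_t)1 << (sizeof(t) * CHAR_BIT - 1)) - 1)

int main()
{
    uintmax_t computed = 5000000000u;              /* some value we calculated elsewhere */

    if (computed > SIGNED_TYPE_MAX(ssize_t))
        puts("too large for ssize_t");

    printf("size_t max:  %ju\n", UNSIGNED_TYPE_MAX(size_t));
    printf("ssize_t max: %ju\n", SIGNED_TYPE_MAX(ssize_t));
}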

How to guarantee a C++ type's number of bits

I am looking to typedef my own arithmetic types (e.g. Byte8, Int16, Int32, Float754, etc) with the intention of ensuring they comprise a specific number of bits (and in the case of the float, adhere to the IEEE754 format). How can I do this in a completely cross-platform way?
I have seen snippets of the C/C++ standards here and there and there is a lot of:
"type is at least x bytes"
and not very much of:
"type is exactly x bytes".
Given that typedef unsigned short int Int16 may not necessarily result in a 16-bit Int16, is there a cross-platform way to guarantee my types will have specific sizes?
You can use the exact-width integer types int8_t, int16_t, int32_t, int64_t declared in <cstdint>. This way the sizes are fixed on all platforms that provide them.
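A brief sketch of that suggestion; it compiles only on platforms that actually provide the optional exact-width types:

#include <climits>   // CHAR_BIT
#include <cstdint>

typedef std::int16_t Int16;   // the Int16 from the question, now exactly 16 bits wherever it exists

static_assert(sizeof(Int16) * CHAR_BIT == 16, "Int16 is 16 bits");
static_assert(sizeof(std::int32_t) * CHAR_BIT == 32, "int32_t is 32 bits");

int main() {}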
The only available way to truly guarantee an exact number of bits is to use a bit-field:
struct X {
    int abc : 14; // exactly 14 bits, regardless of platform
};
There is some upper limit on the size you can specify this way -- at least 16 bits for int, and 32 bits for long (but a modern platform may easily allow up to 64 bits for either). Note, however, that while this guarantees that arithmetic on X::abc will use (or at least emulate) exactly 14 bits, it does not guarantee that the size of a struct X is the minimum number of bytes necessary to provide 14 bits (e.g., given 8-bit bytes, its size could easily be 4 or 8 instead of the 2 that are absolutely necessary).
The C and C++ standards both now include a specification for fixed-size types (e.g., int8_t, int16_t), but no guarantee that they'll be present. They're required if the platform provides the right type, but otherwise won't be present. If memory serves, these are also required to use a 2's complement representation, so a platform with a 16-bit 1's complement integer type (for example) still won't define int16_t.
Have a look at the types declared in stdint.h. This is part of the standard library, so it is expected (though technically not guaranteed) to be available everywhere. Among the types declared here are int8_t, uint8_t, int16_t, uint16_t, int32_t, uint32_t, int64_t, and uint64_t. Local implementations will map these types to the appropriate-width types for the given compiler and architecture.
This is not possible.
There are platforms where char is 16 or even 32 bits.
Note that I'm not saying such platforms exist only in theory: this is a real and quite concrete possibility (e.g. DSPs).
On that kind of hardware there is simply no way to use only 8 bits for an operation; if you need 8-bit modular arithmetic, for example, the only way is to do the masking yourself.
The C language doesn't provide this kind of emulation for you...
With C++ you could try to build a class that behaves like the expected native elementary type in most cases (with the exclusion of sizeof, obviously). The result, however, will have truly horrible performance.
I can think of no use case in which forcing the hardware against its nature this way would be a good idea.
It is possible to use C++ templates at compile time to check for, and create, types that fit your requirements, specifically types whose sizeof() is exactly the size you want.
Take a look at this code: Compile time "if".
Do note that if the requested type is not available, it is entirely possible that your program will simply not compile; whether that is acceptable depends on your use case.
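For illustration, here is a minimal C++11 sketch of the idea using std::conditional rather than the code behind the link; the name exact_size_int is just made up here:

#include <cstddef>
#include <type_traits>

// Pick the first fundamental signed type whose sizeof matches; refuse to compile otherwise.
template <std::size_t Bytes>
struct exact_size_int
{
    typedef typename std::conditional<sizeof(signed char) == Bytes, signed char,
            typename std::conditional<sizeof(short)       == Bytes, short,
            typename std::conditional<sizeof(int)         == Bytes, int,
            typename std::conditional<sizeof(long)        == Bytes, long,
            typename std::conditional<sizeof(long long)   == Bytes, long long,
            void>::type>::type>::type>::type>::type type;

    static_assert(!std::is_same<type, void>::value,
                  "no integer type of the requested size on this platform");
};

typedef exact_size_int<2>::type Int16;   // typically short on common platforms
typedef exact_size_int<4>::type Int32;   // typically int

int main() {}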