How does one safely static_cast between unsigned int and int? - c++

I have an 8-character string representing a hexadecimal number and I need to convert it to an int. This conversion has to preserve the bit pattern for strings "80000000" and higher, i.e., those numbers should come out negative. Unfortunately, the naive solution:
int hex_str_to_int(const string hexStr)
{
    stringstream strm;
    strm << hex << hexStr;
    unsigned int val = 0;
    strm >> val;
    return static_cast<int>(val);
}
doesn't work for my compiler if val > INT_MAX (the returned value is 0). Changing the type of val to int also results in a 0 for the larger numbers. I've tried several different solutions from various answers here on SO and haven't been successful yet.
Here's what I do know:
I'm using HP's C++ compiler on OpenVMS (using, I believe, an Itanium processor).
sizeof(int) will be at least 4 on every architecture my code will run on.
Casting from a number > INT_MAX to int is implementation-defined. On my machine, it usually results in a 0 but interestingly casting from long to int results in INT_MAX when the value is too big.
This is surprisingly difficult to do correctly, or at least it has been for me. Does anyone know of a portable solution to this?
Update:
Changing static_cast to reinterpret_cast results in a compiler error. A comment prompted me to try a C-style cast: return (int)val in the code above, and it worked. On this machine. Will that still be safe on other architectures?

Quoting the C++03 standard, §4.7/3 (Integral Conversions):
If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
Because the result is implementation-defined, by definition it is impossible for there to be a truly portable solution.

While there are ways to do this using casts and conversions, most rely on behavior that is undefined or implementation-defined but happens to work on some machines / with some compilers. Instead of relying on undefined behavior, copy the data:
int signed_val;
std::memcpy (&signed_val, &val, sizeof(int));
return signed_val;
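For reference, here is a minimal sketch of the copy-based conversion as a complete function (assuming int and unsigned int have the same size, which holds on the platforms in question; the function name is my own):

#include <cstring>

int unsigned_to_int(unsigned int val)
{
    static_assert(sizeof(int) == sizeof(unsigned int), "size mismatch");
    int signed_val;
    std::memcpy(&signed_val, &val, sizeof signed_val); // copy the bit pattern verbatim
    return signed_val;
}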

You can negate an unsigned two's complement number by taking the complement and adding one. So let's do that for negatives:
if (val < 0x80000000) // positive values need no conversion
    return val;
if (val == 0x80000000) // complement-and-add would overflow, so special-case this
    return INT_MIN;
else
    return -(int)(~val + 1);
This assumes that your ints are represented with a 32-bit two's complement representation (or have a similar range). It does not rely on any undefined behavior related to signed integer overflow (note that unsigned integer overflow is well-defined, although it should not happen here either!).
Note that if your ints are not 32-bit, things get more complex. You may need to use something like ~(~0U >> 1) instead of 0x80000000. Further, if your ints are not two's complement, you may have overflow issues with certain values (for example, on a ones' complement machine, -0x80000000 cannot be represented in a 32-bit signed integer). However, non-two's-complement machines are very rare today, so this is unlikely to be a problem.
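Put together as a complete function, a sketch assuming 32-bit int (the function name bits_to_int32 is my own):

#include <climits>

int bits_to_int32(unsigned int val)
{
    if (val < 0x80000000u)   // positive values need no conversion
        return (int)val;
    if (val == 0x80000000u)  // complement-and-add-one would overflow int
        return INT_MIN;
    return -(int)(~val + 1); // negate via complement-and-add-one
}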

Here's another solution that worked for me:
if (val <= INT_MAX) {
    return static_cast<int>(val);
}
else {
    int ret = static_cast<int>(val & ~INT_MIN);
    return ret | INT_MIN;
}
If I mask off the high bit, I avoid overflow when casting. I can then OR it back safely.
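A quick sanity check of that trick (a sketch assuming 32-bit, two's complement int; the input value is just an example):

#include <cassert>
#include <climits>

int main()
{
    unsigned int val = 0xFFFFFFFEu;             // the bit pattern of -2
    int ret = static_cast<int>(val & ~INT_MIN); // 0x7FFFFFFE, fits in int
    assert((ret | INT_MIN) == -2);              // high bit OR'd back in
    return 0;
}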

C++20 will have std::bit_cast that copies bits verbatim:
#include <bit>
#include <cassert>
#include <iostream>

int main()
{
    int i = -42;
    auto u = std::bit_cast<unsigned>(i);
    // Prints 4294967254 on two's complement platforms where int is 32 bits
    std::cout << u << "\n";
    auto roundtripped = std::bit_cast<int>(u);
    assert(roundtripped == i);
    std::cout << roundtripped << "\n"; // Prints -42
    return 0;
}
cppreference shows an example of how one can implement their own bit_cast in terms of memcpy (under Notes).
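A rough sketch of such a memcpy-based fallback, along the lines of the cppreference Notes example (the name bit_cast_fallback is my own):

#include <cstring>
#include <type_traits>

template <class To, class From>
To bit_cast_fallback(const From& src) noexcept
{
    static_assert(sizeof(To) == sizeof(From), "sizes must match");
    static_assert(std::is_trivially_copyable<From>::value, "From must be trivially copyable");
    static_assert(std::is_trivially_copyable<To>::value, "To must be trivially copyable");
    To dst;
    std::memcpy(&dst, &src, sizeof(To));
    return dst;
}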
While OpenVMS is not likely to gain C++20 support anytime soon, I hope this answer helps someone arriving at the same question via internet search.

unsigned int u = ~0U;
int s = *reinterpret_cast<int*>(&u); // -1
Contrariwise:
int s = -1;
unsigned int u = *reinterpret_cast<unsigned int*>(&s); // all ones


Remove undefined behavior from overflow of signed integers in constant expressions?

EDIT: In the actual example it appears that negative overflow can happen as well, so I've added an example demonstrating the error there.
I'm using C++20 and trying to convert a library which relies on signed integer overflow in Java and C# into C++ code. I'm also trying to generate the tables it uses at compile time, and allow those to be available at compile time.
In my code I get errors in reference to code that looks like this (Minimal example to reproduce the error, the solution to this will solve my problem as well):
#include <cstdint>
#include <iostream>

constexpr auto foo(){
    std::int64_t a = 2;
    std::int64_t very_large_constant = 0x598CD327003817B5L;
    std::int64_t x = a * very_large_constant;
    return x;
}

int main(){
    std::cout << foo() << std::endl;
    return 0;
}
https://godbolt.org/z/TvM45vd8d
Negative overflow version
#include <cstdint>
#include <iostream>

constexpr auto foo(){
    std::int64_t a = -2;
    std::int64_t very_large_constant = 0x598CD327003817B5L;
    std::int64_t x = a * very_large_constant;
    return x;
}

int main(){
    std::cout << foo() << std::endl;
    return 0;
}
https://godbolt.org/z/7zoE9r18E
I get "12905529061151879018 is outside the range representable by long long" and the same for -12905529061151879018, respectively.
I understand that undefined behavior is not allowed in constant expressions; I also recognize that GCC and MSVC do not error here, and that you can pass a flag to make Clang compile this anyway. But what am I supposed to do to actually solve this issue without switching compilers or applying the flag that ignores invalid constexpr?
Is there some way I can define the behavior I expect and want to happen here?
Signed integers have two's complement layout in any implementation that you could name, and since C++20 they are guaranteed to use two's complement layout.
This means that you can perform your math on unsigned integers and get well-defined overflow behavior that matches what you want your signed integers to do.
#include <cstdint>

constexpr auto foo(){
    std::uint64_t a = 2;
    std::uint64_t very_large_constant = 0x598CD327003817B5L;
    std::uint64_t x = a * very_large_constant;
    return static_cast<std::int64_t>(x);
}
You cannot do this with signed integers. However, there are some things you can rely on in C++20:
Unsigned integer overflow is well-defined.
Signed integers are required to be represented as 2's complement.
Conversions between corresponding signed and unsigned integers preserve the bit pattern.
So you can do all of your overflow-based math using explicitly unsigned types and literals, then cast them to signed values when you need to. This conversion is required to leave the bits unchanged.
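For the negative-overflow version, the same approach applies: represent -2 by its unsigned equivalent, do the math in uint64_t, and cast at the end. A sketch (foo_negative is my own name; the final cast is value-preserving mod 2^64 in C++20):

#include <cstdint>

constexpr auto foo_negative(){
    std::uint64_t a = static_cast<std::uint64_t>(std::int64_t{-2}); // wraps mod 2^64
    std::uint64_t very_large_constant = 0x598CD327003817B5L;
    std::uint64_t x = a * very_large_constant;
    return static_cast<std::int64_t>(x);
}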

24-bit to 32-bit conversion in C++

I need to convert a 24-bit integer (two's complement) to a 32-bit integer in C++. I have found a solution here, which is given as
int interpret24bitAsInt32(unsigned char* byteArray)
{
    return (
          (byteArray[0] << 24)
        | (byteArray[1] << 16)
        | (byteArray[2] << 8)
    ) >> 8;
}
Though I found it is working, I have the following concern about the piece of code.
byteArray[0] is only 8 bits, so how are operations like byteArray[0] << 24 possible?
It would be possible if the compiler up-converts the byteArray elements to integers and does the operation. This may be the reason it is working now. But my question is whether this behaviour is guaranteed in all compilers and explicitly mentioned in the standard? It is not trivial to me, as we are not explicitly giving the compiler any clue that the target is a 32-bit integer!
Also, please let me know whether any improvement such as vectorization is possible to increase the speed (maybe using C++11), as I need to convert a huge amount of 24-bit data to 32-bit.
#include <cstdint>

int32_t interpret24bitAsInt32(unsigned char* byteArray)
{
    int32_t number =
          (((int32_t)byteArray[0]) << 16)
        | (((int32_t)byteArray[1]) << 8)
        | byteArray[2];
    if (number >= ((int32_t)1) << 23)
        //return (uint32_t)number | 0xFF000000u;
        return number - 16777216;
    return number;
}
This function should do what you want without invoking undefined behavior by shifting a 1 into the sign bit of an int.
The int32_t cast is only necessary if sizeof(int) < 4, otherwise the default integer promotion to int happens.
If someone does not like the if: It does not get translated to a conditional jump by the compiler (gcc 9.2): https://godbolt.org/z/JDnJM2
It leaves a cmovg.
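A quick sanity check of the function above (the byte values are hypothetical test inputs):

#include <cassert>

int main()
{
    unsigned char minus_one[3] = { 0xFF, 0xFF, 0xFF }; // 24-bit -1
    unsigned char max_pos[3]   = { 0x7F, 0xFF, 0xFF }; // largest positive 24-bit value
    assert(interpret24bitAsInt32(minus_one) == -1);
    assert(interpret24bitAsInt32(max_pos) == 0x7FFFFF);
    return 0;
}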
[expr.shift]/1 The operands shall be of integral or unscoped enumeration type and integral promotions are performed. The type of the result is that of the promoted left operand...
[conv.prom] 7.6 Integral promotions
1 A prvalue of an integer type other than bool, char16_t, char32_t, or wchar_t whose integer conversion rank (7.15) is less than the rank of int can be converted to a prvalue of type int if int can represent all the values of the source type; otherwise, the source prvalue can be converted to a prvalue of type unsigned int.
So yes, the standard requires that an argument of a shift operator, that has the type unsigned char, be promoted to int before the evaluation.
That said, the technique in your code relies on int a) being 32 bits large, and b) using two's-complement to represent negative values. Neither of which is guaranteed by the standard, though it's common with modern systems.
A version without a branch, but with a multiplication:
int32_t interpret24bitAsInt32(unsigned char* bytes) {
    unsigned char msb = UINT8_C(0xFF) * (bytes[0] >> 7);
    uint32_t number =
          (uint32_t(msb) << 24)      // the cast keeps the shift out of int's sign bit
        | (uint32_t(bytes[0]) << 16)
        | (uint32_t(bytes[1]) << 8)
        | bytes[2];
    return number;
}
You need to test if omitting the branch really gives you a performance advantage, though!
Adapted from older code of me which did this for 10 bit numbers. Test before use!
Oh, and it still relies upon implementation defined behaviour with regards to the conversion uint32_t to int32_t. If you want to go down that rabbit hole, have fun but be warned.
Or, much simpler: use the trick from mch's answer, and also use shifts instead of multiplication:
int32_t interpret24bitAsInt32(unsigned char* bytes) {
    int32_t const number =
          (bytes[0] << INT32_C(16))
        | (bytes[1] << INT32_C(8))
        | bytes[2];
    int32_t const correction =
        (bytes[0] >> UINT8_C(7)) << INT32_C(24);
    return number - correction;
}
Test case
There is indeed integral promotion for types smaller than int when arithmetic operators are applied.
So, assuming sizeof(char) < sizeof(int), in
byteArray[0] << 24
byteArray[0] is promoted to int and the bit-shift is done on int.
The first issue is that int may be as small as 16 bits.
The second issue (before C++20) is that int is signed, and bitwise shifts can easily lead to implementation-defined behavior or UB (and you have both for negative 24-bit numbers).
In C++20, the behavior of bitwise shifts has been simplified (made fully defined) and the problematic UB has been removed too: the leading 1 bits of a negative number are kept by neg >> 8.
So before C++20, you have to do something like:
std::int32_t interpret24bitAsInt32(const unsigned char* byteArray)
{
    const std::int32_t res =
          (std::int32_t(byteArray[0]) << 16)
        | (byteArray[1] << 8)
        | byteArray[2];
    const std::int32_t int24Max = (std::int32_t(1) << 23) - 1;
    return res <= int24Max ?
        res :                     // positive 24-bit numbers
        res - 2 * (int24Max + 1); // negative numbers: subtract 2^24
}
Integral promotions [conv.prom] are performed on the operands of a shift expression [expr.shift]/1. In your case, that means that your values of type unsigned char will be converted to type int before << is applied [conv.prom]/1. Thus, the C++ standard guarantees that the operands be "up-converted".
However, the standard only guarantees that int has at least 16 bits. There is also no guarantee that unsigned char has exactly 8 bits (it may have more). Thus, it is not guaranteed that int is always large enough to represent the result of these left shifts. If int does not happen to be large enough, the resulting signed integer overflow will invoke undefined behavior [expr]/4. Chances are that int has 32 bits on your target platform and, thus, everything works out in the end.
If you need to work with a guaranteed, fixed number of bits, I would generally recommend using fixed-width integer types, for example:
std::int32_t interpret24bitAsInt32(const std::uint8_t* byteArray)
{
    return
        static_cast<std::int32_t>(
            (std::uint32_t(byteArray[0]) << 24) |
            (std::uint32_t(byteArray[1]) << 16) |
            (std::uint32_t(byteArray[2]) << 8)
        ) >> 8;
}
Note that right shift of a negative value is currently implementation-defined [expr.shift]/3. Thus, it is not strictly guaranteed that this code will end up performing sign extension on a negative number. However, your compiler is required to document what exactly right-shifting a negative integer does [defns.impl.defined] (i.e., you can go and make sure it does what you need). And I have never heard of a compiler that does not implement right shift of a negative value as an arithmetic shift in practice. Also, it looks like C++20 is going to mandate arithmetic shift behavior…

How to get negative remainder with remainder operator on size_t?

Consider the following code sample:
#include <iostream>
#include <string>

int main()
{
    std::string str("someString"); // length 10
    int num = -11;
    std::cout << num % str.length() << std::endl;
}
Running this code on http://cpp.sh, I get 5 as a result, while I was expecting it to be -1.
I know that this happens because the type of str.length() is size_t, an implementation-dependent unsigned integer type, and because the implicit conversions applied to the operands of binary operators cause num to be converted from a signed int to an unsigned size_t; this causes the negative value to become a positive one and messes up the result of the operation.
One could think of addressing the problem with an explicit cast to int:
num % (int)str.length()
This might work but it's not guaranteed, for instance in the case of a string with length larger than the maximum value of int. One could reduce the risk using a larger type, like long long, but what if size_t is unsigned long long? Same problem.
How would you address this problem in a portable and robust way?
Since C++11, you can just cast the result of length to std::string::difference_type.
To address "But what if the size is too big?":
That won't happen on 64 bit platforms and even if you are on a smaller one: When was the last time you actually had a string that took up more than half of total RAM? Unless you are doing really specific stuff (which you would know), using the difference_type is just fine; quit fighting ghosts.
Alternatively, just use int64_t, that's certainly big enough. (Though maybe looping over one on some 32 bit processors is slower than int32_t, I don't know. Won't matter for that single modulus operation though.)
(Fun fact: Even some prominent committee members consider littering the standard library with unsigned types a mistake, for reference see
this panel at 9:50, 42:40, 1:02:50 )
Pre-C++11, the sign of % with negative operands was implementation-defined; for well-defined behavior, use std::div plus one of the casts described above.
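A minimal sketch of the suggested fix applied to the original example (assuming the length fits in the signed type, as discussed above):

#include <iostream>
#include <string>

int main()
{
    std::string str("someString"); // length 10
    int num = -11;
    auto len = static_cast<std::string::difference_type>(str.length());
    std::cout << num % len << std::endl; // prints -1; since C++11, % takes the sign of the dividend
}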
We know that
-a % b == -(a % b)
So you could write something like this:
#include <cstdlib> // std::llabs

template<typename T, typename T2>
constexpr T safeModulo(T a, T2 b)
{
    return (a >= 0 ? 1 : -1) * static_cast<T>(std::llabs(a) % b);
}
This won't overflow in 99.98% of the cases, because consider this
safeModulo(num, str.length());
If std::size_t is implemented as an unsigned long long, then T2 -> unsigned long long and T -> int.
As pointed out in the comments, using std::llabs instead of std::abs is important, because if a is the smallest possible value of int, removing the sign will overflow. Promoting a to a long long just before won't result in this problem, as long long has a larger range of values.
Now static_cast<int>(std::llabs(a) % b) will always result in a value no larger in magnitude than a, so casting it to int will never overflow/underflow. Even if a gets promoted to an unsigned long long, it doesn't matter, because a is already "unsigned" from std::llabs(a), and so the value is unchanged (i.e., it didn't overflow/underflow).
Because of the property stated above, if a is negative, multiply the result with -1 and you get the correct result.
The only case where it results in undefined behavior is when a is std::numeric_limits<long long>::min(), as removing the sign overflows a, resulting in undefined behavior. There is probably another way to implement the function, I'll think about it.
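Hypothetical usage against the earlier example:

#include <cstdlib>
#include <iostream>
#include <string>

int main()
{
    std::string str("someString"); // length 10
    std::cout << safeModulo(-11, str.length()) << std::endl; // prints -1
}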

Efficient unsigned-to-signed cast avoiding implementation-defined behavior

I want to define a function that takes an unsigned int as argument and returns an int congruent modulo UINT_MAX+1 to the argument.
A first attempt might look like this:
int unsigned_to_signed(unsigned n)
{
return static_cast<int>(n);
}
But as any language lawyer knows, casting from unsigned to signed for values larger than INT_MAX is implementation-defined.
I want to implement this such that (a) it only relies on behavior mandated by the spec; and (b) it compiles into a no-op on any modern machine and optimizing compiler.
As for bizarre machines... If there is no signed int congruent modulo UINT_MAX+1 to the unsigned int, let's say I want to throw an exception. If there is more than one (I am not sure this is possible), let's say I want the largest one.
OK, second attempt:
int unsigned_to_signed(unsigned n)
{
    int int_n = static_cast<int>(n);
    if (n == static_cast<unsigned>(int_n))
        return int_n;
    // else do something long and complicated
}
I do not much care about the efficiency when I am not on a typical twos-complement system, since in my humble opinion that is unlikely. And if my code becomes a bottleneck on the omnipresent sign-magnitude systems of 2050, well, I bet someone can figure that out and optimize it then.
Now, this second attempt is pretty close to what I want. Although the cast to int is implementation-defined for some inputs, the cast back to unsigned is guaranteed by the standard to preserve the value modulo UINT_MAX+1. So the conditional does check exactly what I want, and it will compile into nothing on any system I am likely to encounter.
However... I am still casting to int without first checking whether it will invoke implementation-defined behavior. On some hypothetical system in 2050 it could do who-knows-what. So let's say I want to avoid that.
Question: What should my "third attempt" look like?
To recap, I want to:
Cast from unsigned int to signed int
Preserve the value mod UINT_MAX+1
Invoke only standard-mandated behavior
Compile into a no-op on a typical twos-complement machine with optimizing compiler
[Update]
Let me give an example to show why this is not a trivial question.
Consider a hypothetical C++ implementation with the following properties:
sizeof(int) equals 4
sizeof(unsigned) equals 4
INT_MAX equals 32767
INT_MIN equals -2^32 + 32768
UINT_MAX equals 2^32 - 1
Arithmetic on int is modulo 2^32 (into the range INT_MIN through INT_MAX)
std::numeric_limits<int>::is_modulo is true
Casting unsigned n to int preserves the value for 0 <= n <= 32767 and yields zero otherwise
On this hypothetical implementation, there is exactly one int value congruent (mod UINT_MAX+1) to each unsigned value. So my question would be well-defined.
I claim that this hypothetical C++ implementation fully conforms to the C++98, C++03, and C++11 specifications. I admit I have not memorized every word of all of them... But I believe I have read the relevant sections carefully. So if you want me to accept your answer, you either must (a) cite a spec that rules out this hypothetical implementation or (b) handle it correctly.
Indeed, a correct answer must handle every hypothetical implementation permitted by the standard. That is what "invoke only standard-mandated behavior" means, by definition.
Incidentally, note that std::numeric_limits<int>::is_modulo is utterly useless here for multiple reasons. For one thing, it can be true even if unsigned-to-signed casts do not work for large unsigned values. For another, it can be true even on one's-complement or sign-magnitude systems, if arithmetic is simply modulo the entire integer range. And so on. If your answer depends on is_modulo, it's wrong.
[Update 2]
hvd's answer taught me something: My hypothetical C++ implementation for integers is not permitted by modern C. The C99 and C11 standards are very specific about the representation of signed integers; indeed, they only permit two's complement, ones' complement, and sign-magnitude (section 6.2.6.2 paragraph 2).
But C++ is not C. As it turns out, this fact lies at the very heart of my question.
The original C++98 standard was based on the much older C89, which says (section 3.1.2.5):
For each of the signed integer types, there is a corresponding (but
different) unsigned integer type (designated with the keyword
unsigned) that uses the same amount of storage (including sign
information) and has the same alignment requirements. The range of
nonnegative values of a signed integer type is a subrange of the
corresponding unsigned integer type, and the representation of the
same value in each type is the same.
C89 says nothing about only having one sign bit or only allowing twos-complement/ones-complement/sign-magnitude.
The C++98 standard adopted this language nearly verbatim (section 3.9.1 paragraph (3)):
For each of the signed integer types, there exists a corresponding
(but different) unsigned integer type: "unsigned char", "unsigned
short int", "unsigned int", and "unsigned long int", each of
which occupies the same amount of storage and has the same alignment
requirements (3.9) as the corresponding signed integer type ; that
is, each signed integer type has the same object representation as
its corresponding unsigned integer type. The range of nonnegative
values of a signed integer type is a subrange of the corresponding
unsigned integer type, and the value representation of each
corresponding signed/unsigned type shall be the same.
The C++03 standard uses essentially identical language, as does C++11.
No standard C++ spec constrains its signed integer representations to any C spec, as far as I can tell. And there is nothing mandating a single sign bit or anything of the kind. All it says is that non-negative signed integers must be a subrange of the corresponding unsigned.
So, again I claim that INT_MAX=32767 with INT_MIN=-2^32+32768 is permitted. If your answer assumes otherwise, it is incorrect unless you cite a C++ standard proving me wrong.
Expanding on user71404's answer:
int f(unsigned x)
{
    if (x <= INT_MAX)
        return static_cast<int>(x);

    if (x >= INT_MIN)
        return static_cast<int>(x - INT_MIN) + INT_MIN;

    throw x; // Or whatever else you like
}
If x >= INT_MIN (keep the promotion rules in mind, INT_MIN gets converted to unsigned), then x - INT_MIN <= INT_MAX, so this won't have any overflow.
If that is not obvious, take a look at the claim "If x >= -4u, then x + 4 <= 3.", and keep in mind that INT_MAX will be equal to at least the mathematical value of -INT_MIN - 1.
On the most common systems, where !(x <= INT_MAX) implies x >= INT_MIN, the optimizer should be able (and on my system, is able) to remove the second check, determine that the two return statements can be compiled to the same code, and remove the first check too. Generated assembly listing:
__Z1fj:
LFB6:
        .cfi_startproc
        movl    4(%esp), %eax
        ret
        .cfi_endproc
The hypothetical implementation in your question:
INT_MAX equals 32767
INT_MIN equals -2^32 + 32768
is not possible, so does not need special consideration. INT_MIN will be equal to either -INT_MAX, or to -INT_MAX - 1. This follows from C's representation of integer types (6.2.6.2), which requires n bits to be value bits, one bit to be a sign bit, and only allows one single trap representation (not including representations that are invalid because of padding bits), namely the one that would otherwise represent negative zero / -INT_MAX - 1. C++ doesn't allow any integer representations beyond what C allows.
Update: Microsoft's compiler apparently does not notice that x > 10 and x >= 11 test the same thing. It only generates the desired code if x >= INT_MIN is replaced with x > INT_MIN - 1u, which it can detect as the negation of x <= INT_MAX (on this platform).
[Update from questioner (Nemo), elaborating on our discussion below]
I now believe this answer works in all cases, but for complicated reasons. I am likely to award the bounty to this solution, but I want to capture all the gory details in case anybody cares.
Let's start with C++11, section 18.3.3:
Table 31 describes the header <climits>.
...
The contents are the same as the Standard C library header <limits.h>.
Here, "Standard C" means C99, whose specification severely constrains the representation of signed integers. They are just like unsigned integers, but with one bit dedicated to "sign" and zero or more bits dedicated to "padding". The padding bits do not contribute to the value of the integer, and the sign bit contributes only as twos-complement, ones-complement, or sign-magnitude.
Since C++11 inherits the <climits> macros from C99, INT_MIN is either -INT_MAX or -INT_MAX-1, and hvd's code is guaranteed to work. (Note that, due to the padding, INT_MAX could be much less than UINT_MAX/2... But thanks to the way signed->unsigned casts work, this answer handles that fine.)
C++03/C++98 is trickier. It uses the same wording to inherit <climits> from "Standard C", but now "Standard C" means C89/C90.
All of these -- C++98, C++03, C89/C90 -- have the wording I give in my question, but also include this (C++03 section 3.9.1 paragraph 7):
The representations of integral types shall define values by use of a
pure binary numeration system.(44) [Example: this International
Standard permits 2’s complement, 1’s complement and signed magnitude
representations for integral types.]
Footnote (44) defines "pure binary numeration system":
A positional representation for integers that uses the binary digits 0
and 1, in which the values represented by successive bits are
additive, begin with 1, and are multiplied by successive integral
power of 2, except perhaps for the bit with the highest position.
What is interesting about this wording is that it contradicts itself, because the definition of "pure binary numeration system" does not permit a sign/magnitude representation! It does allow the high bit to have, say, the value -2^(n-1) (two's complement) or -(2^(n-1)-1) (ones' complement). But there is no value for the high bit that results in sign/magnitude.
Anyway, my "hypothetical implementation" does not qualify as "pure binary" under this definition, so it is ruled out.
However, the fact that the high bit is special means we can imagine it contributing any value at all: a small positive value, a huge positive value, a small negative value, or a huge negative value. (If the sign bit can contribute -(2^(n-1)-1), why not -(2^(n-1)-2)? etc.)
So, let's imagine a signed integer representation that assigns a wacky value to the "sign" bit.
A small positive value for the sign bit would result in a positive range for int (possibly as large as unsigned), and hvd's code handles that just fine.
A huge positive value for the sign bit would result in int having a larger maximum than unsigned, which is forbidden.
A huge negative value for the sign bit would result in int representing a non-contiguous range of values, and other wording in the spec rules that out.
Finally, how about a sign bit that contributes a small negative quantity? Could we have a 1 in the "sign bit" contribute, say, -37 to the value of the int? So then INT_MAX would be (say) 2^31-1 and INT_MIN would be -37?
This would result in some numbers having two representations... But ones-complement gives two representations to zero, and that is allowed according to the "Example". Nowhere does the spec say that zero is the only integer that might have two representations. So I think this new hypothetical is allowed by the spec.
Indeed, any negative value from -1 down to -INT_MAX-1 appears to be permissible as a value for the "sign bit", but nothing smaller (lest the range be non-contiguous). In other words, INT_MIN might be anything from -INT_MAX-1 to -1.
Now, guess what? For the second cast in hvd's code to avoid implementation-defined behavior, we just need x - (unsigned)INT_MIN less than or equal to INT_MAX. We just showed INT_MIN is at least -INT_MAX-1. Obviously, x is at most UINT_MAX. Casting a negative number to unsigned is the same as adding UINT_MAX+1. Put it all together:
x - (unsigned)INT_MIN <= INT_MAX
if and only if
UINT_MAX - (INT_MIN + UINT_MAX + 1) <= INT_MAX
-INT_MIN-1 <= INT_MAX
-INT_MIN <= INT_MAX+1
INT_MIN >= -INT_MAX-1
That last is what we just showed, so even in this perverse case, the code actually works.
That exhausts all of the possibilities, thus ending this extremely academic exercise.
Bottom line: There is some seriously under-specified behavior for signed integers in C89/C90 that got inherited by C++98/C++03. It is fixed in C99, and C++11 indirectly inherits the fix by incorporating <limits.h> from C99. But even C++11 retains the self-contradictory "pure binary representation" wording...
This code relies only on behavior mandated by the spec, so requirement (a) is easily satisfied:
int unsigned_to_signed(unsigned n)
{
    int result = INT_MAX;
    if (n > INT_MAX && n < INT_MIN)
        throw runtime_error("no signed int for this number");
    for (unsigned i = INT_MAX; i != n; --i)
        --result;
    return result;
}
It's not so easy with requirement (b). This compiles into a no-op with gcc 4.6.3 (-Os, -O2, -O3) and with clang 3.0 (-Os, -O, -O2, -O3). Intel 12.1.0 refuses to optimize this. And I have no info about Visual C.
The original answer solved the problem only for unsigned => int. What if we want to solve the general problem of "some unsigned type" to its corresponding signed type? Furthermore, the original answer was excellent at citing sections of the standard and analyzing some corner cases, but it did not really help me get a feel for why it worked, so this answer will try to give a strong conceptual basis. This answer will try to help explain "why", and use modern C++ features to try to simplify the code.
C++20 answer
The problem has simplified dramatically with P0907: Signed Integers are Two’s Complement and the final wording P1236 that was voted into the C++20 standard. Now, the answer is as simple as possible:
#include <concepts>
#include <type_traits>

template<std::unsigned_integral T>
constexpr auto cast_to_signed_integer(T const value) {
    return static_cast<std::make_signed_t<T>>(value);
}
That's it. A static_cast (or C-style cast) is finally guaranteed to do the thing you need for this question, and the thing many programmers thought it always did.
C++17 answer
In C++17, things are much more complicated. We have to deal with three possible integer representations (two's complement, ones' complement, and sign-magnitude). Even in the case where we know it must be two's complement because we checked the range of possible values, the conversion of a value outside the range of the signed integer to that signed integer still gives us an implementation-defined result. We have to use tricks like we have seen in other answers.
First, here is the code for how to solve the problem generically:
template<typename T, typename = std::enable_if_t<std::is_unsigned_v<T>>>
constexpr auto cast_to_signed_integer(T const value) {
    using result = std::make_signed_t<T>;
    using result_limits = std::numeric_limits<result>;
    if constexpr (result_limits::min() + 1 != -result_limits::max()) {
        if (value == static_cast<T>(result_limits::max()) + 1) {
            throw std::runtime_error("Cannot convert the maximum possible unsigned to a signed value on this system");
        }
    }
    if (value <= result_limits::max()) {
        return static_cast<result>(value);
    } else {
        using promoted_unsigned = std::conditional_t<sizeof(T) <= sizeof(unsigned), unsigned, T>;
        using promoted_signed = std::make_signed_t<promoted_unsigned>;
        constexpr auto shift_by_window = [](auto x) {
            // static_cast to avoid conversion warning
            return x - static_cast<decltype(x)>(result_limits::max()) - 1;
        };
        return static_cast<result>(
            shift_by_window( // shift values from common range to negative range
                static_cast<promoted_signed>(
                    shift_by_window( // shift large values into common range
                        static_cast<promoted_unsigned>(value) // cast to avoid promotion to int
                    )
                )
            )
        );
    }
}
This has a few more casts than the accepted answer, and that is to ensure there are no signed / unsigned mismatch warnings from your compiler and to properly handle integer promotion rules.
We first have a special case for systems that are not two's complement (and thus we must handle the maximum possible value specially because it doesn't have anything to map to). After that, we get to the real algorithm.
The second top-level condition is straightforward: we know the value is less than or equal to the maximum value, so it fits in the result type. The third condition is a little more complicated even with the comments, so some examples would probably help understand why each statement is necessary.
Conceptual basis: the number line
First, what is this window concept? Consider the following number line:
| signed |
<.........................>
| unsigned |
It turns out that for two's complement integers, you can divide the subset of the number line that can be reached by either type into three equally sized categories:
- => signed only
= => both
+ => unsigned only
<..-------=======+++++++..>
This can be easily proven by considering the representation. An unsigned integer starts at 0 and uses all of the bits to increase the value in powers of 2. A signed integer is exactly the same for all of the bits except the sign bit, which is worth -(2^position) instead of 2^position. This means that for all n - 1 bits, they represent the same values. Then, unsigned integers have one more normal bit, which doubles the total number of values (in other words, there are just as many values with that bit set as without it set). The same logic holds for signed integers, except that all the values with that bit set are negative.
The other two legal integer representations, ones' complement and sign-magnitude, have all of the same values as two's complement integers except for one: the most negative value. C++ defines everything about integer types, except for reinterpret_cast (and the C++20 std::bit_cast), in terms of the range of representable values, not in terms of the bit representation. This means that our analysis will hold for each of these three representations as long as we do not ever try to create the trap representation. The unsigned value that would map to this missing value is a rather unfortunate one: the one right in the middle of the unsigned values. Fortunately, our first condition checks (at compile time) whether such a representation exists, and then handles it specially with a runtime check.
The first condition handles the case where we are in the = section, which means that we are in the overlapping region where the values in one can be represented in the other without change. The shift_by_window function in the code moves all values down by the size of each of these segments (we have to subtract the max value then subtract 1 to avoid arithmetic overflow issues). If we are outside of that region (we are in the + region), we need to jump down by one window size. This puts us in the overlapping range, which means we can safely convert from unsigned to signed because there is no change in value. However, we are not done yet because we have mapped two unsigned values to each signed value. Therefore, we need to shift down to the next window (the - region) so that we have a unique mapping again.
Now, does this give us a result congruent mod UINT_MAX + 1, as requested in the question? UINT_MAX + 1 is equivalent to 2^n, where n is the number of bits in the value representation. The value we use for our window size is equal to 2^(n - 1) (the final index in a sequence of values is one less than the size). We subtract that value twice, which means we subtract 2 * 2^(n - 1) which is equal to 2^n. Adding and subtracting x is a no-op in arithmetic mod x, so we have not affected the original value mod 2^n.
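For instance, tracing the familiar 32-bit case: with value = 0xFFFFFFFF (UINT_MAX, congruent to -1 mod 2^32), the first shift gives 0xFFFFFFFF - 0x7FFFFFFF - 1 = 0x7FFFFFFF, which now fits in int; the value-preserving cast yields INT_MAX; and the second shift gives INT_MAX - INT_MAX - 1 = -1, the congruent result.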
Properly handling integer promotions
Because this is a generic function and not just int and unsigned, we also have to concern ourselves with integral promotion rules. There are two possibly interesting cases: one in which short is smaller than int and one in which short is the same size as int.
Example: short smaller than int
If short is smaller than int (common on modern platforms) then we also know that unsigned short can fit in an int, which means that any operations on it will actually happen in int, so we explicitly cast to the promoted type to avoid this. Our final statement is pretty abstract and becomes easier to understand if we substitute in real values. For our first interesting case, with no loss of generality let us consider a 16-bit short and a 17-bit int (which is still allowed under the new rules, and would just mean that at least one of those two integer types has some padding bits):
constexpr auto shift_by_window = [](auto x) {
    return x - static_cast<decltype(x)>(32767) - 1;
};
return static_cast<int16_t>(
    shift_by_window(
        static_cast<int17_t>(
            shift_by_window(
                static_cast<uint17_t>(value)
            )
        )
    )
);
Solving for the greatest possible 16-bit unsigned value
constexpr auto shift_by_window = [](auto x) {
    return x - static_cast<decltype(x)>(32767) - 1;
};
return int16_t(
    shift_by_window(
        int17_t(
            shift_by_window(
                uint17_t(65535)
            )
        )
    )
);
Simplifies to
return int16_t(
    int17_t(
        uint17_t(65535) - uint17_t(32767) - 1
    ) -
    int17_t(32767) -
    1
);
Simplifies to
return int16_t(
    int17_t(uint17_t(32767)) -
    int17_t(32767) -
    1
);
Simplifies to
return int16_t(
    int17_t(32767) -
    int17_t(32767) -
    1
);
Simplifies to
return int16_t(-1);
We put in the largest possible unsigned and get back -1, success!
Example: short same size as int
If short is the same size as int (uncommon on modern platforms), the integral promotion rules are slightly different. In this case, short promotes to int and unsigned short promotes to unsigned. Fortunately, we explicitly cast each result to the type we want to do the calculation in, so we end up with no problematic promotions. With no loss of generality let us consider a 16-bit short and a 16-bit int:
constexpr auto shift_by_window = [](auto x) {
    return x - static_cast<decltype(x)>(32767) - 1;
};
return static_cast<int16_t>(
    shift_by_window(
        static_cast<int16_t>(
            shift_by_window(
                static_cast<uint16_t>(value)
            )
        )
    )
);
Solving for the greatest possible 16-bit unsigned value
auto x = int16_t(
    uint16_t(65535) - uint16_t(32767) - 1
);
return int16_t(
    x - int16_t(32767) - 1
);
Simplifies to
return int16_t(
    int16_t(32767) - int16_t(32767) - 1
);
Simplifies to
return int16_t(-1);
We put in the largest possible unsigned and get back -1, success!
What if I just care about int and unsigned and don't care about warnings, like the original question?
constexpr int cast_to_signed_integer(unsigned const value) {
    using result_limits = std::numeric_limits<int>;
    if constexpr (result_limits::min() + 1 != -result_limits::max()) {
        if (value == static_cast<unsigned>(result_limits::max()) + 1) {
            throw std::runtime_error("Cannot convert the maximum possible unsigned to a signed value on this system");
        }
    }
    if (value <= result_limits::max()) {
        return static_cast<int>(value);
    } else {
        constexpr int window = result_limits::min();
        return static_cast<int>(value + window) + window;
    }
}
See it live
https://godbolt.org/z/74hY81
Here we see that clang, gcc, and icc generate no code for cast and cast_to_signed_integer_basic at -O2 and -O3, and MSVC generates no code at /O2, so the solution is optimal.
You can explicitly tell the compiler what you want to do:
int unsigned_to_signed(unsigned n) {
    if (n > INT_MAX) {
        if (n <= UINT_MAX + INT_MIN) {
            throw "no result";
        }
        return static_cast<int>(n + INT_MIN) - (UINT_MAX + INT_MIN + 1);
    } else {
        return static_cast<int>(n);
    }
}
Compiles with gcc 4.7.2 for x86_64-linux (g++ -O -S test.cpp) to
_Z18unsigned_to_signedj:
movl %edi, %eax
ret
If x is our input...
If x > INT_MAX, we want to find a constant k such that 0 < x - k*INT_MAX < INT_MAX.
This is easy -- unsigned int k = x / INT_MAX;. Then, let unsigned int x2 = x - k*INT_MAX;
We can now cast x2 to int safely. Let int x3 = static_cast<int>(x2);
We now want to subtract something like UINT_MAX - k * INT_MAX + 1 from x3, if k > 0.
Now, on a two's complement system, so long as x > INT_MAX, this works out to:
unsigned int k = x / INT_MAX;
x -= k*INT_MAX;
int r = int(x);
r += k*INT_MAX;
r -= UINT_MAX+1;
Note that UINT_MAX+1 is guaranteed to be zero in C++, the conversion to int was a no-op, and we subtracted k*INT_MAX and then added it back on as "the same value". So an acceptable optimizer should be able to erase all that tomfoolery!
That leaves the problem of whether x > INT_MAX or not. Well, we create two branches, one with x > INT_MAX, and one without. The one without does a straight cast, which the compiler optimizes to a no-op. The one with... does a no-op after the optimizer is done. The smart optimizer realizes both branches do the same thing, and drops the branch.
Issues: if UINT_MAX is really large relative to INT_MAX, the above might not work. I am assuming that k*INT_MAX <= UINT_MAX+1 implicitly.
We could probably attack this with some enums like:
enum { divisor = UINT_MAX/INT_MAX, remainder = UINT_MAX-divisor*INT_MAX };
which work out to 2 and 1 on a two's complement system, I believe (are we guaranteed that that math works? that's tricky...), and do logic based on these that easily optimizes away on non-two's-complement systems...
This also opens up the exception case. It is only possible if UINT_MAX is much larger than (INT_MAX - INT_MIN), so you can put your exception code in an if block asking exactly that question somehow, and it won't slow you down on a traditional system.
I'm not exactly sure how to construct those compile-time constants to deal correctly with that.
std::numeric_limits<int>::is_modulo is a compile-time constant, so you can use it for template specialization. Problem solved, at least if the compiler plays along with inlining.
#include <limits>
#include <stdexcept>
#include <string>

#ifdef TESTING_SF
    bool const testing_sf = true;
#else
    bool const testing_sf = false;
#endif

// C++ "extensions"
namespace cppx {
    using std::runtime_error;
    using std::string;

    inline bool hopefully( bool const c ) { return c; }
    inline bool throw_x( string const& s ) { throw runtime_error( s ); }
}  // namespace cppx

// C++ "portability perversions"
namespace cppp {
    using cppx::hopefully;
    using cppx::throw_x;
    using std::numeric_limits;

    namespace detail {
        template< bool isTwosComplement >
        int signed_from( unsigned const n )
        {
            if( n <= unsigned( numeric_limits<int>::max() ) )
            {
                return static_cast<int>( n );
            }

            unsigned const u_max = unsigned( -1 );
            unsigned const u_half = u_max/2 + 1;

            if( n == u_half )
            {
                throw_x( "signed_from: unsupported value (negative max)" );
            }

            int const i_quarter = static_cast<int>( u_half/2 );
            int const int_n1 = static_cast<int>( n - u_half );
            int const int_n2 = int_n1 - i_quarter;
            int const int_n3 = int_n2 - i_quarter;

            hopefully( n == static_cast<unsigned>( int_n3 ) )
                || throw_x( "signed_from: range error" );

            return int_n3;
        }

        template<>
        inline int signed_from<true>( unsigned const n )
        {
            return static_cast<int>( n );
        }
    }  // namespace detail

    inline int signed_from( unsigned const n )
    {
        bool const is_modulo = numeric_limits< int >::is_modulo;
        return detail::signed_from< is_modulo && !testing_sf >( n );
    }
}  // namespace cppp

#include <iostream>
using namespace std;

int main()
{
    int const x = cppp::signed_from( -42u );
    wcout << x << endl;
}
EDIT: Fixed up code to avoid a possible trap on non-modular-int machines (only one is known to exist, namely the archaically configured versions of the Unisys Clearpath). For simplicity this is done by not supporting the value -2^(n-1), where n is the number of int value bits, on such a machine (i.e., on the Clearpath). In practice this value will not be supported by the machine either (i.e., with sign-and-magnitude or ones' complement representation).
The int type is only guaranteed to be at least two bytes, so INT_MIN and INT_MAX may differ across platforms.
Fundamental types
<climits> header
My money is on using memcpy. Any decent compiler knows to optimise it away:
#include <stdio.h>
#include <memory.h>
#include <limits.h>
static inline int unsigned_to_signed(unsigned n)
{
    int result;
    memcpy( &result, &n, sizeof(result));
    return result;
}

int main(int argc, const char * argv[])
{
    unsigned int x = UINT_MAX - 1;
    int xx = unsigned_to_signed(x);
    return xx;
}
For me (Xcode 8.3.2, Apple LLVM 8.1, -O3), that produces:
_main: ## #main
Lfunc_begin0:
.loc 1 21 0 ## /Users/Someone/main.c:21:0
.cfi_startproc
## BB#0:
pushq %rbp
Ltmp0:
.cfi_def_cfa_offset 16
Ltmp1:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp2:
.cfi_def_cfa_register %rbp
##DEBUG_VALUE: main:argc <- %EDI
##DEBUG_VALUE: main:argv <- %RSI
Ltmp3:
##DEBUG_VALUE: main:x <- 2147483646
##DEBUG_VALUE: main:xx <- 2147483646
.loc 1 24 5 prologue_end ## /Users/Someone/main.c:24:5
movl $-2, %eax
popq %rbp
retq
Ltmp4:
Lfunc_end0:
.cfi_endproc

Packing 32bit floats into 30 bits (c++)

Here are the goals I'm trying to achieve:
I need to pack 32 bit IEEE floats into 30 bits.
I want to do this by decreasing the size of mantissa by 2 bits.
The operation itself should be as fast as possible.
I'm aware that some precision will be lost, and this is acceptable.
It would be an advantage if this operation did not ruin special cases like SNaN, QNaN, infinities, etc. But I'm ready to sacrifice this for speed.
I guess this question consists of two parts:
1) Can I just simply clear the least significant bits of mantissa? I've tried this, and so far it works, but maybe I'm asking for trouble... Something like:
float f;
int packed = (*(int*)&f) & ~3;
// later
f = *(float*)&packed;
2) If there are cases where 1) will fail, then what would be the fastest way to achieve this?
Thanks in advance
You actually violate the strict aliasing rules (section 3.10 of the C++ standard) with these reinterpret casts. This will probably blow up in your face when you turn on the compiler optimizations.
C++ standard, section 3.10 paragraph 15 says:
If a program attempts to access the stored value of an object through an lvalue of other than one of the following types the behavior is undefined
the dynamic type of the object,
a cv-qualified version of the dynamic type of the object,
a type similar to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union),
a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
a char or unsigned char type.
Specifically, 3.10/15 doesn't allow us to access a float object via an lvalue of type unsigned int. I actually got bitten myself by this. The program I wrote stopped working after turning on optimizations. Apparently, GCC didn't expect an lvalue of type float to alias an lvalue of type int which is a fair assumption by 3.10/15. The instructions got shuffled around by the optimizer under the as-if rule exploiting 3.10/15 and it stopped working.
Under the following assumptions
float really corresponds to a 32bit IEEE-float,
sizeof(float)==sizeof(int)
unsigned int has no padding bits or trap representations
you should be able to do it like this:
#include <cstring>

/// returns a 30 bit number
unsigned int pack_float(float x) {
    unsigned int r;
    std::memcpy(&r, &x, sizeof r);
    return r >> 2;
}

float unpack_float(unsigned int x) {
    x <<= 2;
    float r;
    std::memcpy(&r, &x, sizeof r);
    return r;
}
This doesn't suffer from the "3.10-violation" and is typically very fast. At least GCC treats memcpy as an intrinsic function. In case you don't need the functions to work with NaNs, infinities or numbers with extremely high magnitude you can even improve accuracy by replacing "r >> 2" with "(r+1) >> 2":
unsigned int pack_float(float x) {
    unsigned int r;
    std::memcpy(&r, &x, sizeof r);
    return (r+1) >> 2;
}
This works even if it changes the exponent due to a mantissa overflow because the IEEE-754 coding maps consecutive floating point values to consecutive integers (ignoring +/- zero). This mapping actually approximates a logarithm quite well.
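A small round-trip demo of the functions above (assuming 32-bit IEEE-754 float; the test value is arbitrary):

#include <cstdio>

int main()
{
    float x = 3.14159265f;
    unsigned int p = pack_float(x);      // 30-bit payload
    float y = unpack_float(p);           // two mantissa LSBs dropped (or rounded)
    std::printf("%.8f -> %.8f\n", x, y); // the values differ only in the lowest mantissa bits
    return 0;
}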
Blindly dropping the 2 LSBs of the float may fail for a small number of unusual NaN encodings.
A NaN is encoded as exponent=255, mantissa!=0, but IEEE-754 doesn't say anything about which mantissa values should be used. If the mantissa value is <= 3, you could turn a NaN into an infinity!
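If that corner case matters, one hedged workaround (my own sketch, assuming the IEEE-754 binary32 layout) is to force a mantissa bit that survives the shift before dropping the LSBs:

#include <cstring>

unsigned int pack_float_nan_safe(float x) { // hypothetical variant of pack_float
    unsigned int r;
    std::memcpy(&r, &x, sizeof r);
    bool is_nan = (r & 0x7F800000u) == 0x7F800000u && (r & 0x007FFFFFu) != 0;
    if (is_nan)
        r |= 0x00400000u; // set the quiet bit so the mantissa stays nonzero after >> 2
    return r >> 2;
}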
You should encapsulate it in a struct, so that you don't accidentally mix the usage of the tagged float with regular "unsigned int":
#include <iostream>
using namespace std;

struct TypedFloat {
private:
    union {
        unsigned int raw : 32;
        struct {
            unsigned int num : 30;
            unsigned int type : 2;
        };
    };
public:
    TypedFloat(unsigned int type=0) : num(0), type(type) {}

    operator float() const {
        unsigned int tmp = num << 2;
        return reinterpret_cast<float&>(tmp);
    }
    void operator=(float newnum) {
        num = reinterpret_cast<int&>(newnum) >> 2;
    }
    unsigned int getType() const { return type; }
    void setType(unsigned int type) { this->type = type; }
};

int main() {
    const unsigned int TYPE_A = 1;
    TypedFloat a(TYPE_A);
    a = 3.4;
    cout << a + 5.4 << endl;
    float b = a;
    cout << a << endl;
    cout << b << endl;
    cout << a.getType() << endl;
    return 0;
}
I can't guarantee its portability though.
How much precision do you need? If 16-bit float is enough (sufficient for some types of graphics), then ILM's 16-bit float ("half"), part of OpenEXR is great, obeys all kinds of rules (http://www.openexr.com/), and you'll have plenty of space left over after you pack it into a struct.
On the other hand, if you know the approximate range of values they're going to take, you should consider fixed point. They're more useful than most people realize.
I can't select any of the answers as the definite one, because most of them have valid information, but not quite what I was looking for. So I'll just summarize my conclusions.
The method for conversion I've posted in my question's part 1) is clearly wrong by the C++ standard, so other methods to extract a float's bits should be used.
And most important... as far as I understand from reading the responses and other sources about IEEE-754 floats, it's OK to drop the least significant bits from the mantissa. It will mostly affect only precision, with one exception: sNaN. Since sNaN is represented by the exponent set to 255 and a mantissa != 0, there can be a situation where the mantissa would be <= 3, and dropping the last two bits would convert an sNaN to +/-Infinity. But since sNaNs are not generated by floating point operations on the CPU, it's safe in a controlled environment.