With regard to those definitions found in stdint.h, I wish to test a function for converting vectors of int8_t or vectors of int64_t to vectors of std::string.
Here are my tests:
TEST(TestAlgorithms, toStringForInt8)
{
    std::vector<int8_t> input = boost::assign::list_of(-128)(0)(127);
    Container container(input);
    EXPECT_TRUE(boost::apply_visitor(ToString(), container) == boost::assign::list_of("-128")("0")("127"));
}

TEST(TestAlgorithms, toStringForInt64)
{
    std::vector<int64_t> input = boost::assign::list_of(-9223372036854775808)(0)(9223372036854775807);
    Container container(input);
    EXPECT_TRUE(boost::apply_visitor(ToString(), container) == boost::assign::list_of("-9223372036854775808")("0")("9223372036854775807"));
}
However, I am getting a warning in visual studio for the line:
std::vector<int64_t> input = boost::assign::list_of(-9223372036854775808)(0)(9223372036854775807);
as follows:
warning C4146: unary minus operator applied to unsigned type, result still unsigned
If I change -9223372036854775808 to -9223372036854775807, the warning disappears.
What is the issue here? With my original code, the test still passes.
It's as if the compiler is saying that -9223372036854775808 is not a valid number, because the - and the digits are treated separately.
You could try -9223372036854775807 - 1 or use std::numeric_limits<int64_t>::min() instead.
The issue is that integer literals are not negative; so -42 is not a literal with a negative value, but rather the - operator applied to the literal 42.
In this case, 9223372036854775808 is out of the range of int64_t, so it will be given an unsigned type. Due to the magic of modular arithmetic, you can still negate it, assign it to int64_t, and end up with the result you expect; but the compiler will warn you about the unsigned negation (if you tell it to) since that can often be the result of an error.
You could avoid the warning (and make the code more obviously correct) by using std::numeric_limits<int64_t>::min() instead.
Change -9223372036854775808 to -9223372036854775807-1.
The issue is that -9223372036854775808 isn't -9223372036854775808 but rather -(9223372036854775808) and 9223372036854775808 cannot fit into a signed 64-bit type (decimal integer constants by default are a signed type), so it instead becomes unsigned. Applying negation with - to an unsigned type is suspicious, hence the warning.
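For example, a minimal sketch of the warning-free alternatives (any of these could replace the problematic literal; assumes <cstdint> and <limits> are included):
#include <cstdint>   // INT64_MIN
#include <limits>    // std::numeric_limits

// Three equivalent ways to write the 64-bit minimum without the C4146 warning:
std::int64_t a = -9223372036854775807LL - 1;                // arithmetic form
std::int64_t b = std::numeric_limits<std::int64_t>::min();
std::int64_t c = INT64_MIN;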
Related
I want to check whether the reverse of a signed int value x lies between INT_MIN and INT_MAX. To do this, I reverse x twice and check whether the result equals the original x; if it does, the reversed value lies within the range, otherwise it does not.
But online compilers give a runtime error, while my g++ compiler works fine and gives the correct output. Can anybody tell me the reason?
int reverse(int x) {
    int tx = x, rx = 0, ans;
    while (tx != 0) {
        // rx * 10 written as repeated addition, plus the last digit of tx
        rx = rx + rx + rx + rx + rx + rx + rx + rx + rx + rx + tx % 10;
        tx /= 10;
    }
    ans = tx = rx;
    rx = 0;
    while (tx != 0) {
        rx = rx * 10 + tx % 10;
        tx /= 10;
    }
    // trimming trailing zeros from x before comparing
    while (x % 10 == 0 && x != 0) x /= 10;
    if (rx != x) {
        return 0;
    } else {
        return ans;
    }
}
ERROR:
Line 6: Char 23: runtime error: signed integer overflow: 1929264870 + 964632435 cannot be represented in type 'int' (solution.cpp)
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior prog_joined.cpp:15:23
According to cppreference:
Overflows
...
When signed integer arithmetic operation overflows (the result does not fit in the result type), the behavior is undefined: it may wrap around according to the rules of the representation (typically 2's complement), it may trap on some platforms or due to compiler options (e.g. -ftrapv in GCC and Clang), or may be completely optimized out by the compiler.
So the online compiler may come with stricter checking, while your local GCC chose to "wrap" on overflow.
Instead, if you want wrapping behavior for sure, you can promote your operands to 64-bit width, perform the addition, and then convert the result back to 32-bit width. According to cppreference, this is well defined since C++20.
Numeric conversions
Unlike the promotions, numeric conversions may change the values, with potential loss of precision.
Integral conversions
...
if the destination type is signed, the value does not change if the source integer can be represented in the destination type. Otherwise the result is implementation-defined (until C++20) / the unique value of the destination type equal to the source value modulo 2^n, where n is the number of bits used to represent the destination type (since C++20).
Example code on godbolt
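For illustration, here is a minimal sketch of that promote-then-narrow idea (not necessarily the godbolt code itself), keeping the question's return-0-on-failure convention:
#include <cstdint>
#include <climits>

int reverse(int x) {
    std::int64_t rx = 0;                 // wide accumulator: cannot overflow for any int input
    while (x != 0) {
        rx = rx * 10 + x % 10;
        x /= 10;
    }
    // Reject results outside the 32-bit range instead of relying on wrapping.
    if (rx < INT_MIN || rx > INT_MAX)
        return 0;
    return static_cast<int>(rx);         // in range, so the conversion is value-preserving
}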
I'm not sure what's wrong with your algorithm, but let me suggest an alternative that avoids doing math and the possibility of integer overflows.
If you want to find out if, say, 2147483641 is a valid integer when reversed (e.g. 1463847412), you can do it entirely with string comparisons after converting the initial value to a string and reversing that string.
The basic algorithm for a non-negative value is this:
convert the integer to a string (let's call this string s)
convert INT_MAX to a string (let's call this string s_max)
reverse s. Handle leading zeros and the minus sign appropriately. That is, 120 reversed is "21" and -456 reversed is "-654". Call the result of this conversion rev.
If rev.length() < s_max.length(), then rev is valid as an integer.
If rev.length() > s_max.length(), then rev is not valid as an integer.
If rev.length() == s_max.length(), then the reversed string is the same length as INT_MAX as a string, so a lexical comparison suffices: isValid = (rev <= s_max), or equivalently strcmp(rev.c_str(), s_max.c_str()) <= 0.
The algorithm for a negative value is identical except replace INT_MAX with INT_MIN.
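A sketch of that approach (the helper name reversedFits is illustrative, not from any particular library):
#include <algorithm>
#include <climits>
#include <string>

// Returns true if the digits of x, reversed, still fit in an int.
bool reversedFits(int x) {
    bool neg = x < 0;
    std::string s = std::to_string(x);
    if (neg) s.erase(0, 1);                   // drop the '-' before reversing
    std::reverse(s.begin(), s.end());
    s.erase(0, s.find_first_not_of('0'));     // trim leading zeros: "021" -> "21"
    if (s.empty()) s = "0";
    std::string s_max = std::to_string(neg ? INT_MIN : INT_MAX);
    if (neg) s_max.erase(0, 1);               // compare magnitudes only
    if (s.length() != s_max.length()) return s.length() < s_max.length();
    return s <= s_max;                        // same length: lexical compare suffices
}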
I'm new to C++, so sorry if this sounds like a dumb question. I'm attempting to assign the 64-bit minimum (-9223372036854775808) to an int64_t (from cstdint), however I'm getting the following warning:
main.cpp:5:27: warning: integer literal is too large to be represented in a signed integer type, interpreting as unsigned [-Wimplicitly-unsigned-literal]
int64_t int_64_min = -9223372036854775808;
The code is as follows:
#include <iostream>
#include <cstdint>
int main() {
    int64_t int_64_min = -9223372036854775808;
    std::cout << int_64_min << std::endl;
    std::cout << INT64_MIN << std::endl;
    return 0;
}
Am I missing something obvious?
This is exactly why the INT_MIN family of macros are often defined as -INT_MAX - 1 on a 2's complement platform (which is virtually ubiquitous, and compulsory from C++20).
There is no such thing as a negative literal in C++: just the unary negation of a positive number which produces a compile time evaluable constant expression.
9223372036854775808 is too big to fit into a 64 bit signed integral type, so the behavior of negating this and assigning to a signed 64 bit integral type is implementation defined.
Writing -9223372036854775807 - 1 is probably what your C++ standard library implementation does. If I were you, I'd use INT64_MIN directly or std::numeric_limits<std::int64_t>::min().
The problem is that -9223372036854775808 is not a literal. 9223372036854775808 is a literal, and -9223372036854775808 is an expression consisting of that literal with a unary - applied to it.
All unsuffixed decimal literals are of type int, long int, or long long int. Since 9223372036854775808 is 2^63, it's bigger than the maximum value of long long int (assuming long long int is 64 bits, which it almost certainly is).
The simplest solution is to use the INT64_MIN macro defined in <cstdint>.
Aside from that, the problem is that there are no literals for negative values. Another solution is to replace -9223372036854775808 by -9223372036854775807-1 -- or by (-9223372036854775807-1) to avoid ambiguity. (The <cstdint> header very likely does something similar).
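For illustration, a plausible shape for that definition (a sketch of the common pattern, not quoted from any particular implementation):
// Mirrors the -MAX - 1 trick: every intermediate value stays within long long range.
#define INT64_MAX 9223372036854775807LL
#define INT64_MIN (-INT64_MAX - 1)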
Consider this C++ code in VS13:
long long Number;
Number = 10000000000*900;
Number = 10000000*214;
Number = 10000000*215;
Number = (long long) 10000000*215;
When you compile it you get warning C4307: '*' : integral constant overflow for line 4. When you run the code there is effectively an integral overflow. The other lines are OK. Am I overlooking something?
The 10000000 constant (literal) is by default treated as an int constant, as it fits the int type:
The type of the integer literal is the first type in which the value
can fit, from the list of types which depends on which numeric base
and which integer-suffix was used
(http://en.cppreference.com/w/cpp/language/integer_literal)
Therefore the multiplication is done within int type (no integer promotion happens). You can consider the line
Number = 10000000*215;
as if it was
const int tmp1 = 10000000;
const int tmp2 = 215;
Number = tmp1*tmp2;
Obviously this will produce an overflow, while the third and the last lines will not. For the second line, the compiler understands that 10000000000 fits neither int nor long, and therefore uses long long for it.
You can use the ll suffix to force a constant to be long long:
Number = 10000000ll*215;
Literal numbers are, by default, understood to be of type int. In Visual Studio, that's a 32-bit integer. Multiplying two ints together results in an int.
Therefore this expression:
10000000*215 // (hex value is 0x80266580.)
is already going to overflow, since the expected value can't be expressed as a positive 32-bit int. The compiler will interpret it as -2144967296, which is completely different from 2150000000.
Hence, to force the expression to be evaluated as 64-bit, at least one of the operands has to be 64-bit, and this works nicely:
Number = 10000000LL * 215; // LL qualifies the literal number as a "long long"
It forces the whole expression (long long multiplied by int) to be treated as a long long. Hence, no overflow.
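A minimal sketch contrasting the two forms (assuming a 32-bit int, as in Visual Studio; the first initializer reproduces warning C4307 and wraps):
#include <iostream>

int main() {
    long long bad  = 10000000 * 215;    // int * int: overflows before the widening assignment
    long long good = 10000000LL * 215;  // long long * int: evaluated in 64 bits
    std::cout << bad << "\n";           // e.g. -2144967296
    std::cout << good << "\n";          // 2150000000
    return 0;
}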
-2147483648 is the smallest integer for an integer type with 32 bits, but it seems that it overflows in the if(...) statement:
if (-2147483648 > 0)
    std::cout << "true";
else
    std::cout << "false";
This will print true in my testing. However, if we cast -2147483648 to int, the result is different:
if (int(-2147483648) > 0)
    std::cout << "true";
else
    std::cout << "false";
This will print false.
I'm confused. Can anyone give an explanation on this?
Update 02-05-2012:
Thanks for your comments. In my compiler, the size of int is 4 bytes. I'm using VC for some simple testing. I've changed the description in my question.
There are a lot of very good replies in this post. AndreyT gave a very detailed explanation of how the compiler behaves on such input, and how this minimum integer was implemented. qPCR4vir, on the other hand, gave some related "curiosities" and how integers are represented. So impressive!
-2147483648 is not a "number". C++ language does not support negative literal values.
-2147483648 is actually an expression: a positive literal value 2147483648 with the unary - operator in front of it. The value 2147483648 is apparently too large for the positive side of the int range on your platform. If type long int had a greater range on your platform, the compiler would have to automatically assume that 2147483648 has type long int. (In C++11 the compiler would also have to consider long long int.) This would make the compiler evaluate -2147483648 in the domain of the larger type, and the result would be negative, as one would expect.
However, apparently in your case the range of long int is the same as range of int, and in general there's no integer type with greater range than int on your platform. This formally means that positive constant 2147483648 overflows all available signed integer types, which in turn means that the behavior of your program is undefined. (It is a bit strange that the language specification opts for undefined behavior in such cases, instead of requiring a diagnostic message, but that's the way it is.)
In practice, taking into account that the behavior is undefined, 2147483648 might get interpreted as some implementation-dependent negative value which happens to turn positive after having unary - applied to it. Alternatively, some implementations might decide to attempt using unsigned types to represent the value (for example, in C89/90 compilers were required to use unsigned long int, but not in C99 or C++). Implementations are allowed to do anything, since the behavior is undefined anyway.
As a side note, this is the reason why constants like INT_MIN are typically defined as
#define INT_MIN (-2147483647 - 1)
instead of the seemingly more straightforward
#define INT_MIN -2147483648
The latter would not work as intended.
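A small sketch of the difference, assuming a compiler that (like the VC2012 build discussed in the next answer) gives the literal 2147483648 an unsigned 32-bit type:
#define BROKEN_INT_MIN -2147483648        // unary minus applied to an unsigned value
#define GOOD_INT_MIN   (-2147483647 - 1)  // every intermediate value fits in int

const bool surprising = BROKEN_INT_MIN > 0;  // true on such a compiler: the comparison is unsigned
const bool expected   = GOOD_INT_MIN   > 0;  // false, as intended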
The compiler (VC2012) promotes to the "minimum" integer type that can hold the value. In the first case, signed int (and long int) cannot hold 2147483648 (before the sign is applied), but unsigned int can: 2147483648 gets type unsigned int.
In the second case you force a conversion from that unsigned value to int.
const bool i= (-2147483648 > 0) ; // --> true
warning C4146: unary minus operator applied to unsigned type, result still unsigned
Here are related "curiosities":
const bool b = (-2147483647 > 0);       // false
const bool i = (-2147483648 > 0);       // true:  result still unsigned
const bool c = (INT_MIN - 1 > 0);       // true:  '-' int constant overflow
const bool f = (2147483647 > 0);        // true
const bool g = (2147483648 > 0);        // true
const bool d = (INT_MAX + 1 > 0);       // false: '+' int constant overflow
const bool j = (int(-2147483648) > 0);  // false
const bool h = (int(2147483648) > 0);   // false
const bool m = (-2147483648L > 0);      // true
const bool o = (-2147483648LL > 0);     // false
C++11 standard:
2.14.2 Integer literals [lex.icon]
…
An integer literal is a sequence of digits that has no period or
exponent part. An integer literal may have a prefix that specifies its
base and a suffix that specifies its type.
…
The type of an integer literal is the first of the corresponding list
in which its value can be represented.
If an integer literal cannot be represented by any type in its list
and an extended integer type (3.9.1) can represent its value, it may
have that extended integer type. If all of the types in the list for
the literal are signed, the extended integer type shall be signed. If
all of the types in the list for the literal are unsigned, the
extended integer type shall be unsigned. If the list contains both
signed and unsigned types, the extended integer type may be signed or
unsigned. A program is ill-formed if one of its translation units
contains an integer literal that cannot be represented by any of the
allowed types.
And these are the promotion rules for integers in the standard.
4.5 Integral promotions [conv.prom]
A prvalue of an integer type other than bool, char16_t, char32_t, or
wchar_t whose integer conversion rank (4.13) is less than the rank of
int can be converted to a prvalue of type int if int can represent all
the values of the source type; otherwise, the source prvalue can be
converted to a prvalue of type unsigned int.
In short, 2147483648 overflows to -2147483648, and (-(-2147483648) > 0) is true.
In binary, 2147483648 is 1000 0000 0000 0000 0000 0000 0000 0000: only the most significant bit is set.
In addition, in the case of signed binary calculations, the most significant bit ("MSB") is the sign bit. This question may help explain why.
Because -2147483648 is actually 2147483648 with negation (-) applied to it, the number isn't what you'd expect. It is actually the equivalent of this pseudocode: operator -(2147483648)
Now, assuming your compiler has sizeof(int) equal to 4 and CHAR_BIT defined as 8, 2147483648 overflows the maximum signed value of an integer (2147483647). So what is the maximum plus one? Let's work that out with a 4-bit, two's complement integer: the maximum is 0111, which is 7, and the maximum plus one is 1000, which is 8.
Wait! 8 overflows our 4-bit integer! What do we do? Use its unsigned representation of 1000 and interpret the bits as a signed integer. This representation is -8, and applying two's complement negation to it yields 8 again, which, as we all know, is greater than 0.
This is why <limits.h> (and <climits>) commonly define INT_MIN as ((-2147483647) - 1): the maximum signed integer (0x7FFFFFFF) is negated (0x80000001), then decremented (0x80000000).
I encountered a strange thing when programming in C++. It's about a simple multiplication.
Code:
unsigned __int64 a1 = 255*256*256*256;
unsigned __int64 a2 = 255 << 24; // same as the above
std::cerr << "a1 is: " << a1 << std::endl;
std::cerr << "a2 is: " << a2 << std::endl;
Interestingly, the result is:
a1 is: 18446744073692774400
a2 is: 18446744073692774400
whereas it should be (a calculator confirms):
4278190080
Can anybody tell me how could it be possible?
255*256*256*256
all operands are int, so you are overflowing int. Overflow of a signed integer is undefined behavior in C and C++.
EDIT:
note that the expression 255 << 24 in your second declaration also invokes undefined behavior if your int type is 32-bit. 255 x (2^24) is 4278190080, which cannot be represented in a 32-bit int (the maximum value is usually 2147483647 for a 32-bit int in two's complement representation).
C and C++ both say for E1 << E2 that if E1 is of a signed type and positive and that E1 x (2^E2) cannot be represented in the type of E1, the program invokes undefined behavior. Here ^ is the mathematical power operator.
Your literals are int. This means that all the operations are actually performed on int, and promptly overflow. This overflowed value, when converted to an unsigned 64bit int, is the value you observe.
It is perhaps worth explaining what happened to produce the number 18446744073692774400. Technically speaking, the expressions you wrote trigger "undefined behavior" and so the compiler could have produced anything as the result; however, assuming int is a 32-bit type, which it almost always is nowadays, you'll get the same "wrong" answer if you write
uint64_t x = (int) (255u*256u*256u*256u);
and that expression does not trigger undefined behavior. (The conversion from unsigned int to int involves implementation-defined behavior, but as nobody has produced a ones-complement or sign-and-magnitude CPU in many years, all implementations you are likely to encounter define it exactly the same way.) I have written the cast in C style because everything I'm saying here applies equally to C and C++.
First off, let's look at the multiplication. I'm writing the right hand side in hex because it's easier to see what's going on that way.
255u * 256u = 0x0000FF00u
255u * 256u * 256u = 0x00FF0000u
255u * 256u * 256u * 256u = 0xFF000000u (= 4278190080)
That last result, 0xFF000000u, has the highest bit of a 32-bit number set. Casting that value to a signed 32-bit type therefore causes it to become negative as-if 2^32 had been subtracted from it (that's the implementation-defined operation I mentioned above).
(int) (255u*256u*256u*256u) = 0xFF000000 = -16777216
I write the hexadecimal number there, sans u suffix, to emphasize that the bit pattern of the value does not change when you convert it to a signed type; it is only reinterpreted.
Now, when you assign -16777216 to a uint64_t variable, it is back-converted to unsigned as-if by adding 2^64. (Unlike the unsigned-to-signed conversion, this semantic is prescribed by the standard.) This does change the bit pattern, setting all of the high 32 bits of the number to 1 instead of 0 as you had expected:
(uint64_t) (int) (255u*256u*256u*256u) = 0xFFFFFFFFFF000000u
And if you write 0xFFFFFFFFFF000000 in decimal, you get 18446744073692774400.
As a closing piece of advice, whenever you get an "impossible" integer from C or C++, try printing it out in hexadecimal; it's much easier to see oddities of twos-complement fixed-width arithmetic that way.
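For example, a quick sketch of that hexadecimal inspection:
#include <cstdint>
#include <iostream>

int main() {
    uint64_t x = (uint64_t)(int)(255u * 256u * 256u * 256u);
    std::cout << x << "\n";              // 18446744073692774400
    std::cout << std::hex << x << "\n";  // ffffffffff000000: the high 32 bits are all 1
    return 0;
}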
The answer is simple: it overflowed.
The overflow occurs in int arithmetic; when you assign the result to an unsigned 64-bit integer, it is converted to 18446744073692774400 instead of 4278190080.
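A sketch of the fix: make at least one operand 64-bit so the arithmetic never happens in int (standard unsigned long long is used here in place of the MSVC-specific unsigned __int64):
#include <iostream>

int main() {
    unsigned long long a1 = 255ULL * 256 * 256 * 256;  // multiplication done in 64 bits
    unsigned long long a2 = 255ULL << 24;              // shift done on a 64-bit value
    std::cout << "a1 is: " << a1 << "\n";              // 4278190080
    std::cout << "a2 is: " << a2 << "\n";              // 4278190080
    return 0;
}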