Unsigned long long overflow error? - c++

I have been having some strange issues with unsigned long long.
It happens when I initialize an unsigned long long (I originally used size_t, but the problem is reproducible with unsigned long long). I set it to 2^31, but for some reason it comes out as 18446744071562067968, or 2^64 - 2^31. Keep in mind I am compiling for x64:
unsigned long long a = 1 << 31;
cout << a;
//Outputs 18446744071562067968, Expected 2147483648
I thought the limit of unsigned long long was 2^64 - 1? So why can 2^31 not be stored? 2^30 works just fine. sizeof(a) returns 8, which is 64 bits if I am not mistaken, consistent with a limit of 2^64 - 1.
I am compiling on Visual C++ 2013 Express Desktop.
My only guess is that it is some type of overflow error because it doesn't fit a normal long type.

What you're seeing is sign extension: 1 << 31 is evaluated in int arithmetic and produces a negative value, and that negative int is then sign-extended when it is converted to unsigned long long.
To fix it you need to make the value unsigned to begin with, something like this:
#include <iostream>
#include <iomanip>

int main()
{
    unsigned long long a = 1ull << 31ull;
    std::cout << a << "\n";
    std::cout << std::hex << a << "\n";
    return 0;
}
If you have the warning level set high enough (/W4) you'd see a warning about the signed/unsigned mismatch.
Just to be complete, you don't need to qualify both arguments, just the left operand is fine, so unsigned long long a = 1u << 31; would work. I just prefer to be as explicit as possible.
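For illustration, here is a minimal sketch (my own, not part of the original answer) that reproduces the sign extension directly, assuming a 32-bit int where 1 << 31 typically ends up as INT_MIN:
#include <climits>
#include <iostream>

int main()
{
    int negative = INT_MIN;                      // the value 1 << 31 typically produces with a 32-bit int
    unsigned long long sign_extended = negative; // sign extension: 2^64 - 2^31
    unsigned long long correct = 1ull << 31;     // unsigned from the start: 2^31

    std::cout << sign_extended << "\n";          // 18446744071562067968
    std::cout << correct << "\n";                // 2147483648
    return 0;
}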

Related

Casting from long double to unsigned long long appears broken in the MSVC C++ compiler

Consider the following code:
#include <iostream>
using namespace std;

int main(int argc, char *argv[])
{
    long double test = 0xFFFFFFFFFFFFFFFF;
    cout << "1: " << test << endl;
    unsigned long long test2 = test;
    cout << "2: " << test2 << endl;
    cout << "3: " << (unsigned long long)test << endl;
    return 0;
}
Compiling this code with GCC g++ (7.5.0) and running produces the following output as expected:
1: 1.84467e+19
2: 18446744073709551615
3: 18446744073709551615
However compiling this with the Microsoft Visual C++ compiler (16.8.31019.35, both 64-bit and 32-bit) and running produces the following output:
1: 1.84467e+19
2: 9223372036854775808
3: 9223372036854775808
When casting a value to an unsigned long long, the MSVC compiler won't give a value larger than the max of a (signed) long long.
Am I doing something wrong? 
Am I running into a compiler limitation that I do not know about?
Does anyone know of a possible workaround to this problem?
Because an MSVC long double is really just a double (as pointed out by #drescherjm in the comments), it does not have enough precision to contain the exact value of 0xFFFFFFFFFFFFFFFF. When this value is stored in the long double it gets "rounded" to a value that is larger than 0xFFFFFFFFFFFFFFFF. This then causes undefined behaviour when converting to an unsigned long long.
You are seeing undefined behaviour because, as pointed out in the comments, a long double is the same as a double in MSVC and the 'converted' value of your 0xFFFFFFFFFFFFFFFF (or ULLONG_MAX) actually gets 'rounded' to a slightly (but significantly) larger value, as can be seen in the following code:
#include <iostream>
#include <iomanip>
using namespace std;

int main(int argc, char* argv[])
{
    long double test = 0xFFFFFFFFFFFFFFFF;
    cout << 0xFFFFFFFFFFFFFFFFuLL << endl;
    cout << fixed << setprecision(16);
    cout << test << endl;
    return 0;
}
Output:
18446744073709551615
18446744073709551616.0000000000000000
Thus, when converting that floating-point value back to an unsigned long long, you are falling foul of the conversion rules specified in this Microsoft document:
For conversion to unsigned long or unsigned long long, the result of converting an out-of-range value may be some value other than the highest or lowest representable value. Whether the result is a sentinel or saturated value or not depends on the compiler options and target architecture. Future compiler releases may return a saturated or sentinel value instead.
This UB can be further 'verified' (for want of a better term) by switching to the clang-cl compiler that can be used from within Visual Studio. For your original code, this then gives 0 for the values on both the "2" and "3" output lines.
Assuming that the clang (LLVM) compiler is not bound by the aforementioned "Microsoft Rules," we can, instead, fall back on the C++ Standard:
7.10 Floating-integral conversions [conv.fpint]
A prvalue of a floating-point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.
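If you need to convert a floating-point value that might be out of range without invoking that undefined behaviour, one option is to clamp it first. A minimal sketch with a hypothetical helper (to_ull_saturating is my own name, and it assumes unsigned long long is 64 bits):
#include <cmath>
#include <iostream>
#include <limits>

// Hypothetical helper: clamps before converting so the cast never sees an
// out-of-range value (which would be undefined behaviour).
unsigned long long to_ull_saturating(long double v)
{
    // 2^64 as a long double; every finite value in [0, 2^64) is
    // representable in unsigned long long after truncation.
    const long double max_plus_one = 18446744073709551616.0L;
    if (std::isnan(v) || v <= 0.0L)
        return 0;
    if (v >= max_plus_one)
        return std::numeric_limits<unsigned long long>::max();
    return static_cast<unsigned long long>(v);
}

int main()
{
    long double test = 0xFFFFFFFFFFFFFFFF;        // rounds up to 2^64 when long double is a double
    std::cout << to_ull_saturating(test) << "\n"; // 18446744073709551615
    return 0;
}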

The value held by 'short int' gets overflowed, but not with 'auto'?

#include <iostream>
using namespace std;

int main()
{
    unsigned long maximum = 0;
    unsigned long values[] = {60000, 50, 20, 40, 0};
    for (short value : values) {
        cout << "Current value:" << value << "\n";
        if (value > maximum)
            maximum = value;
    }
    cout << "Maximum value is: " << maximum;
    cout << '\n';
    return 0;
}
Outputs are:
Current value:-5536
Current value:50
Current value:20
Current value:40
Current value:0
Maximum value is: 18446744073709546080
I know I should not use short inside the for loop and should use auto instead, but I was just wondering: what is going on here?
I'm using Ubuntu with g++ 9.3.0 I believe.
The issue is with short value when element 60000 is reached.
That's too big to fit into a short on your platform, so your short is overflowed, with implementation-defined results.
What seems to be happening in your case is that 60000 wraps round to the negative value -5536, which is then converted (in a well-defined way) to an unsigned long, in your case giving 2^64 - 5536: that's equal to the maximum displayed by your program.
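As a minimal sketch of that conversion in isolation (my own demo, assuming a 16-bit short and a 64-bit unsigned long as on the asker's platform):
#include <iostream>

int main()
{
    short wrapped = -5536;            // what 60000 becomes after the narrowing conversion
    unsigned long widened = wrapped;  // well-defined modular conversion: 2^64 - 5536
    std::cout << widened << "\n";     // 18446744073709546080
    return 0;
}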
One fix is to use the idiomatic
for(auto&& value: values){
The problem is pretty simple: a 2-byte short int can hold only values between -32,768 and 32,767; anything beyond that overflows it. You've given it 60000, which is obviously out of range for a short int.
When you use auto here, value is deduced as the element type of the array, which can hold such a large number (note that the exact type depends on the platform you're running the program on).
In my case, the value is deduced as unsigned long, which ranges from 0 to 4,294,967,295.
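For completeness, a minimal sketch of what the corrected loop might look like (my reconstruction of the fix, using the same array):
#include <iostream>

int main()
{
    unsigned long maximum = 0;
    unsigned long values[] = {60000, 50, 20, 40, 0};
    for (auto value : values) {   // value is deduced as unsigned long, so nothing narrows
        if (value > maximum)
            maximum = value;
    }
    std::cout << "Maximum value is: " << maximum << "\n";   // Maximum value is: 60000
    return 0;
}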

Get two big int in c++ and store them in array

I want to know if there is a way to add two big integers like
562159862489621563489 + 51456235896321475268
without putting them in a string in C++.
You can use types like long long or unsigned long long, but be aware of integer overflow, and note that the actual biggest number you can store is platform dependent.
Have a look at
std::cout << std::numeric_limits<long long>::max() << std::endl;
std::cout << std::numeric_limits<unsigned long long>::max() << std::endl;
If this is not enough, it may be worth looking at an arbitrary-precision integer library. Note that both example numbers above are larger than 2^64 - 1, so they will not fit even in an unsigned long long; see the Boost.Multiprecision example further below, or the digit-array sketch that follows.
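Since the question mentions storing the numbers in an array, here is a minimal sketch (my own illustration, not from the original answer) of schoolbook addition on arrays of decimal digits, least significant digit first:
#include <iostream>
#include <vector>

// Adds two numbers stored as vectors of decimal digits, least significant first.
std::vector<int> add_digits(const std::vector<int>& a, const std::vector<int>& b)
{
    std::vector<int> result;
    int carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        int sum = carry;
        if (i < a.size()) sum += a[i];
        if (i < b.size()) sum += b[i];
        result.push_back(sum % 10);
        carry = sum / 10;
    }
    return result;
}

int main()
{
    // 189 + 23 = 212, stored least significant digit first
    std::vector<int> a = {9, 8, 1};
    std::vector<int> b = {3, 2};
    std::vector<int> sum = add_digits(a, b);
    for (auto it = sum.rbegin(); it != sum.rend(); ++it)
        std::cout << *it;
    std::cout << "\n";   // prints 212
    return 0;
}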

C++ literal integer type

Do literal expressions have types too?
long long int a = 2147483647+1 ;
long long int b = 2147483648+1 ;
std::cout << a << ',' << b ; // -2147483648,2147483649
Yes, literal numbers have types. The type of an unsuffixed decimal integer literal is the first of int, long, long long in which the integer can be represented. The type of binary, hex and octal literals is selected similarly but with unsigned types in the list as well.
You can force the use of unsigned types by using a U suffix. If you use a single L in the suffix then the type will be at least long but it might be long long if it cannot be represented as a long. If you use LL, then the type must be long long (unless the implementation has extended types wider than long long).
The consequence is that if int is a 32-bit type and long is 64 bits, then 2147483647 has type int while 2147483648 has type long. That means that 2147483647+1 will overflow (which is undefined behaviour), while 2147483648+1 is simply 2147483649L.
This is defined by §2.13.2 ([lex.icon]) of the C++ standard, and the above description is a summary of the table of integer literal types in that section.
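As a small sketch of those rules (my own, assuming a 32-bit int and a 64-bit long, as in the example above):
#include <type_traits>

// Assumes 32-bit int and 64-bit long; the exact types are implementation-dependent.
static_assert(std::is_same<decltype(2147483647), int>::value,  "fits in int");
static_assert(std::is_same<decltype(2147483648), long>::value, "first type that fits is long");
static_assert(std::is_same<decltype(0x80000000), unsigned int>::value,
              "hex literals may also pick unsigned types");
static_assert(std::is_same<decltype(1LL), long long>::value,   "LL suffix forces long long");

int main() { return 0; }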
It's important to remember that the type of the destination of the assignment does not influence in any way the value of the expression on the right-hand side of the assignment. If you want to force a computation to have a long long result you need to force some argument of the computation to be long long; just assigning to a long long variable isn't enough:
long long a = 2147483647 + 1LL;
std::cout << a << '\n';
produces
2147483648
int a = INT_MAX;
long long int b = a + 1; // adds 1 to a (as an int, which overflows) and then converts the result to long long int
long long int c = a; ++c; // converts a to long long int and then increments the result by 1
cout << a << std::endl; // 2147483647
cout << b << std::endl; // -2147483648
cout << c << std::endl; // 2147483648
cout << 2147483647 + 1 << std::endl; // -2147483648 (by default an integer literal is assumed to be int)
cout << 2147483647LL + 1 << std::endl; // 2147483648 (forces the integer literal to be interpreted as a long long int)
You can find more information about integer literals in [lex.icon] of the C++ standard.

Adding numbers larger than long long in C++

I want to add two numbers, each being the largest value that a long long integer can hold, and print the result. If I don't store the sum in a variable and just print it using cout, will my computer be able to print it? The code would be something like this:
cout<<theLastValueOfLongLong + theLastValueOfLongLong;
I am assuming that long long int is the largest built-in integer type.
If you don't want to overflow, then you need to use a big-integer library, such as Boost.Multiprecision. You can then perform arbitrary-precision integer and floating-point operations, such as:
#include <iostream>
#include <limits>
#include <boost/multiprecision/cpp_int.hpp>

int main()
{
    using namespace boost::multiprecision;

    cpp_int i; // multi-precision integer
    i = std::numeric_limits<long long>::max();
    std::cout << "Max long long: " << i << std::endl;
    std::cout << "Sum: " << i + i << std::endl;
}
In particular, Boost.Multiprecision is extremely easy to use and integrates "naturally" with C++ streams, allowing you to treat the type almost like a built-in one.
No. It first evaluates theLastValueOfLongLong + theLastValueOfLongLong (which overflows), and only then passes the result to cout's operator<< overload for long long.
It's the same as:
long long temp = theLastValueOfLongLong + theLastValueOfLongLong;
cout << temp;
temp will contain the result of the addition, which is undefined because of the overflow, and cout will then print that result, whatever its value happens to be.
Since long long is signed, the addition overflows. This is Undefined Behavior and anything may happen. It's unlikely to format your hard disk, especially in this simple case.
Once Undefined Behavior happens, you can't even count on std::cout working after that.
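If the goal is simply to add two values that might overflow, a defensive sketch (my own, not from the answers above; checked_add is a hypothetical helper) is to test against the limits before adding:
#include <iostream>
#include <limits>
#include <stdexcept>

// Throws instead of invoking undefined behaviour when the sum would overflow.
long long checked_add(long long a, long long b)
{
    if (b > 0 && a > std::numeric_limits<long long>::max() - b)
        throw std::overflow_error("addition would overflow");
    if (b < 0 && a < std::numeric_limits<long long>::min() - b)
        throw std::overflow_error("addition would underflow");
    return a + b;
}

int main()
{
    long long big = std::numeric_limits<long long>::max();
    try {
        std::cout << checked_add(big, big) << "\n";
    } catch (const std::overflow_error& e) {
        std::cout << "overflow detected: " << e.what() << "\n";
    }
    return 0;
}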