Do literal expressions have types too?
long long int a = 2147483647+1 ;
long long int b = 2147483648+1 ;
std::cout << a << ',' << b ; // -2147483648,2147483649
Yes, literal numbers have types. The type of an unsuffixed decimal integer literal is the first of int, long, long long in which the integer can be represented. The type of binary, hex and octal literals is selected similarly but with unsigned types in the list as well.
You can force the use of unsigned types by using a U suffix. If you use a single L in the suffix then the type will be at least long but it might be long long if it cannot be represented as a long. If you use LL, then the type must be long long (unless the implementation has extended types wider than long long).
The consequence is that if int is a 32-bit type and long is 64 bits, then 2147483647 has type int while 2147483648 has type long. That means that 2147483647+1 will overflow (which is undefined behaviour), while 2147483648+1 is simply 2147483649L.
This is specified by [lex.icon] in the C++ standard (the exact section and table numbers vary between editions); the description above summarizes the table of integer-literal types given there.
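A quick way to check this on your own implementation is a sketch like the following, which assumes a 32-bit int and a 64-bit long (typical for 64-bit Linux; on Windows, where long is 32 bits, 2147483648 would be a long long instead):

#include <type_traits>

// decltype reveals which type each literal was given (assumptions as above).
static_assert(std::is_same<decltype(2147483647), int>::value,
              "fits in int, so it is an int");
static_assert(std::is_same<decltype(2147483648), long>::value,
              "too big for a 32-bit int, so it becomes long");
static_assert(std::is_same<decltype(0x80000000), unsigned int>::value,
              "hex literals may also pick unsigned types");
static_assert(std::is_same<decltype(2147483648LL), long long>::value,
              "the LL suffix forces at least long long");

int main() {}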
It's important to remember that the type of the destination of the assignment does not influence in any way the value of the expression on the right-hand side of the assignment. If you want to force a computation to have a long long result you need to force some argument of the computation to be long long; just assigning to a long long variable isn't enough:
long long a = 2147483647 + 1LL;
std::cout << a << '\n';
produces
2147483648
int a = INT_MAX;
long long int b = a + 1;   // adds 1 to a in int (overflow: undefined behaviour, typically wraps), then converts the result to long long int
long long int c = a; ++c;  // converts a to long long int first, then increments the 64-bit result
std::cout << a << std::endl;                // 2147483647
std::cout << b << std::endl;                // -2147483648
std::cout << c << std::endl;                // 2147483648
std::cout << 2147483647 + 1 << std::endl;   // -2147483648 (an unsuffixed decimal literal this small is an int, so the addition overflows)
std::cout << 2147483647LL + 1 << std::endl; // 2147483648 (the LL suffix forces the literal, and hence the addition, to be long long int)
You can find more information about integer literals on cppreference.
Related
I want to know how unsigned integers work.
#include <iostream>
using namespace std;
int main() {
    int a = 50;      // basic integer data type
    unsigned int b;  // unsigned means the variable only holds positive values
    cout << "How many hours do you work? ";
    cin >> b;
    cout << "Your wage is " << b << " pesos.";
}
But when I enter -5, the output is
Your wage is 4294967291 pesos.
And when I enter -1, the output is
Your wage is 4294967295 pesos.
Supposedly, unsigned integers hold positive values. How does this happen?
And I only know basic C++. I don't understand bits.
Conversion from signed to unsigned integer is done by the following rule (§[conv.integral]):
A prvalue of an integer type can be converted to a prvalue of another integer type. [...] If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type).
In your case, unsigned is apparently a 32-bit type, so -5 is being reduced modulo 2^32. Since 2^32 = 4,294,967,296, you get 4,294,967,296 - 5 = 4,294,967,291.
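A minimal sketch of that rule in isolation (assuming a 32-bit unsigned int, as in your program):

#include <iostream>

int main()
{
    int hours = -5;
    unsigned int b = hours;  // converted modulo 2^32: 4294967296 - 5
    std::cout << b << '\n';  // prints 4294967291 on such an implementation
}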
I've asked a similar question but after more research I came across something I cannot understand, and hopefully someone can explain what's causing this behavior:
// We wish to store a integral type, in this case 2 bytes long.
signed short Short = -390;
// In this case, signed short is required to be 2 bytes:
assert(sizeof(Short) == 2);
cout << "Short: " << Short << endl; // output: -390
signed long long Long = Short;
// in this case, signed long long is required to be 8 bytes long
assert(sizeof(Long) == 8);
cout << "Long: " << Long << endl; // output: -390
// enough bytes to store the signed short:
unsigned char Bytes[sizeof(Short)];
// Store Long in the byte array:
for (unsigned int i = 0; i < sizeof(Short); ++i)
    Bytes[i] = (Long >> (i * 8)) & 0xff;
// Read the value from the byte array:
signed long long Long2 = (Bytes[0] << 0) + (Bytes[1] << 8);
cout << Long2 << endl; // output: 65146
signed short Short2 = static_cast<signed short>(Long2);
cout << Short2 << endl; // output: -390
output:
-390
-390
65146
-390
Can someone explain what's going on here? Is this undefined behavior? Why?
It has to do with the way negative numbers are stored: in two's complement, a negative number has its most significant bit set to 1.
signed long long Long = Short;
This automatically does a value conversion for you. It isn't just copying bits from one object to the other; it converts the value, so the 64-bit result is sign-extended: the upper bits are all 1s and the whole pattern is the two's-complement representation of -390.
signed long long Long2 = (Bytes[0] << 0) + (Bytes[1] << 8);
Here you only put back the low two bytes, which hold the bottom 16 bits of that two's-complement pattern. The upper bytes of Long2 stay zero, so the value is read as positive: the bottom 16 bits of -390, taken as an unsigned 16-bit value, are 2^16 - 390 = 65146, and that is exactly what you see.
signed short Short2 = static_cast<signed short>(Long2);
This is a narrowing conversion: 65146 doesn't fit into a signed 16-bit integer, so (on a two's-complement implementation) the bit pattern is kept and bit 15 becomes the sign bit, making the result negative. It is no coincidence that the value it represents is -390.
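If the goal is to round-trip the value through a byte array, one option is to reassemble it into a 16-bit type first, so that bit 15 is treated as a sign bit again. A sketch of that idea, assuming two's complement and the same low-byte-first layout the question builds:

#include <cstdint>
#include <iostream>

int main()
{
    std::int16_t value = -390;

    // Split into two bytes, low byte first (the same layout as in the question).
    unsigned char bytes[2];
    bytes[0] = static_cast<std::uint16_t>(value) & 0xff;         // 0x7A
    bytes[1] = (static_cast<std::uint16_t>(value) >> 8) & 0xff;  // 0xFE

    // Reassembling in int gives the positive 65146; converting that back to a
    // 16-bit signed type reinterprets bit 15 as the sign bit, giving -390 again.
    std::int16_t back = static_cast<std::int16_t>(bytes[0] | (bytes[1] << 8));
    std::cout << back << '\n';  // -390
}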
I have the following C++ code:
#include <iostream>
using namespace std;
int main()
{
    long long int currentDt = 467510400*1000000;
    long long int addedDt = 467510400*1000000;

    if (currentDt-addedDt >= 0 && currentDt-addedDt <= 30*24*3600*1000000)
    {
        cout << "1" << endl;
        cout << currentDt-addedDt << endl;
    }
    if (currentDt-addedDt > 30*24*3600*1000000 && currentDt-addedDt <= 60*24*3600*1000000)
    {
        cout << "2" << endl;
        cout << currentDt-addedDt << endl;
    }
    if (currentDt-addedDt > 60*24*3600*1000000 && currentDt-addedDt <= 90*24*3600*1000000)
    {
        cout << "3" << endl;
        cout << currentDt-addedDt << endl;
    }
    return 0;
}
Firstly, I get a warning for integer overflow, which strikes me as odd because the number 467510400*1000000 falls well within the range of a long long int, does it not? Secondly, I get the following output:
1
0
3
0
If in both cases currentDt-addedDt evaluates to 0, how could the third if statement possibly evaluate to true?
467510400*1000000 is within the range of long long, but it's not within the range of int. Since both literals have type int, the product also has type int, and that multiplication overflows. Just assigning the result to a long long doesn't change the value that gets assigned, for the same reason that in:
double d = 1 / 2;
d will hold 0.0 and not 0.5.
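The same analogy, spelled out as a small sketch (the variable names are just for illustration):

#include <iostream>

int main()
{
    double d1 = 1 / 2;    // the division happens in int first: 0, then converted to 0.0
    double d2 = 1.0 / 2;  // one double operand makes it a double division: 0.5
    std::cout << d1 << ' ' << d2 << '\n';  // prints: 0 0.5
}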
You need to make one of the operands a larger integral type, either with a cast or with a literal suffix. For example:
long long int addedDt = 467510400LL * 1000000;
long long int currentDt = 467510400ll*1000000ll;
long long int addedDt = 467510400ll*1000000ll;
Note the two lowercase letter "l"s following the digits: they make your constants long long. C++ normally interprets a string of digits in source code as a plain int.
The problem you are having is that all of your integer literals are int. When you multiply them they overflow, giving you the unexpected behavior. To correct this, make them long long literals: 467510400ll * 1000000ll.
It's because 60*24*3600*1000000 is computed in int and overflows; with a typical 32-bit int it ends up as -25526272.
Use
60LL*24LL*3600LL*1000000LL
instead (note the LL suffixes).
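For example (a minimal sketch, assuming a 32-bit int; the overflowing line is undefined behaviour and the compiler will warn about it, but typical implementations wrap as shown):

#include <iostream>

int main()
{
    long long wrapped = 60 * 24 * 3600 * 1000000;    // computed in int: overflows
    long long correct = 60LL * 24 * 3600 * 1000000;  // computed in long long throughout
    std::cout << wrapped << '\n';  // typically -25526272
    std::cout << correct << '\n';  // 5184000000000
}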
You have tagged this with C++.
My minimal change to your code would use the C++ static_cast to promote at least one of the literal numbers (in any overflow-generating expression) to an int64_t (found in the <cstdint> header).
Example:
// currentDt-addedDt is 0, so both comparisons below are true
if (currentDt-addedDt >= 0
    &&
    currentDt-addedDt <= 30*24*3600*static_cast<int64_t>(1000000))
    //                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(For test 1 the whole condition is true; for tests 2 and 3 it is false.)
Once the compiler sees the static_cast, the usual arithmetic conversions promote the rest of the multiplication (and the comparison) to int64_t, so the expression no longer overflows and no warning is generated.
Yes, it adds a lot of chars for being, in some sense, 'minimal'.
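An alternative that keeps the if conditions short is to compute each threshold once, in 64 bits, and give it a name. A sketch; the constant names are made up for illustration:

#include <cstdint>
#include <iostream>

// Each product is computed entirely in int64_t, because the cast on the
// leftmost factor makes every following multiplication 64-bit.
const std::int64_t days30us = static_cast<std::int64_t>(30) * 24 * 3600 * 1000000;
const std::int64_t days60us = static_cast<std::int64_t>(60) * 24 * 3600 * 1000000;
const std::int64_t days90us = static_cast<std::int64_t>(90) * 24 * 3600 * 1000000;

int main()
{
    std::cout << days30us << ' ' << days60us << ' ' << days90us << '\n';
    // 2592000000000 5184000000000 7776000000000
}

The tests then become comparisons against days30us and friends, with no casts inside the conditions.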
This question already has answers here: Arithmetic right shift gives bogus result? (2 answers). Closed 7 years ago.
I wanted to check that some big calculated memory needs (stored in an unsigned long long) would be roughly compatible with the memory model used to compile my code.
I assumed that right-shifting the needs by the number of bits in a pointer would result in 0 if and only if memory needs would fit in the virtual address space (independently of practical OS limitations).
Unfortunately, I found out some unexpected results when shifting a 64 bit number by 64 bits on some compilers.
Small demo:
const int ubits = sizeof (unsigned)*8; // number of bits, assuming 8 per byte
const int ullbits = sizeof (unsigned long long)*8;
cout << ubits << " bits for an unsigned\n";
cout << ullbits << " bits for a unsigned long long \n";
unsigned utest=numeric_limits<unsigned>::max(); // some big numbers
unsigned long long ulltest=numeric_limits<unsigned long long>::max();
cout << "unsigned "<<utest << " rshift by " << ubits << " = "
<< (utest>>ubits)<<endl;
cout << "unsigned long long "<<ulltest << " rshift by " << ullbits << " = "
<< (ulltest>>ullbits)<<endl;
I expected both displayed right-shift results to be 0.
This works as expected with gcc.
But with MSVC 13:
in 32-bit debug: the 32-bit right shift on the unsigned has NO EFFECT (it displays the original number), but the 64-bit shift of the unsigned long long is 0 as expected.
in 64-bit debug: the right shift has NO EFFECT in both cases.
in 32- and 64-bit release: the right shift is 0 as expected in both cases.
I'd like to know if this is a compiler bug, or if this is undefined behaviour.
According to the C++ Standard (5.8 Shift operators):
... The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand.
The same is written in the C Standard (6.5.7 Bitwise shift operators):
3 The integer promotions are performed on each of the operands. The type of the result is that of the promoted left operand. If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
So this is not a compiler bug: shifting a 64-bit value by 64 (or a 32-bit value by 32) is undefined behaviour, and the compiler is free to return 0, return the original value, or anything else, which is why the debug and release builds disagree.
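For the original goal of checking whether a size fits in the pointer width, one way to avoid the undefined shift is to handle the full-width case separately. A sketch of that idea; the helper name is made up:

#include <climits>
#include <iostream>

// Returns true if `needs` fits in a pointer-sized address space, without ever
// shifting by the full bit width of `needs` (which would be undefined).
bool fits_in_address_space(unsigned long long needs)
{
    const unsigned ptr_bits = sizeof(void*) * CHAR_BIT;
    if (ptr_bits >= sizeof(needs) * CHAR_BIT)
        return true;                  // the pointer is at least as wide as the value
    return (needs >> ptr_bits) == 0;  // here ptr_bits < width of needs, so the shift is defined
}

int main()
{
    std::cout << fits_in_address_space(1ull << 20) << '\n';  // 1 on 32- and 64-bit targets
}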
I have been having some strange issues with unsigned long long.
It happens when I set an unsigned long long (I used size_t, however the problem is repeatable with u-l-l). I have set it to 2^31, however for some reason it reverts to 18446744071562067968, or 2^64 - 2^31. Keep in mind I am using an x64 compilation:
unsigned long long a = 1 << 31;
cout << a;
//Outputs 18446744071562067968, Expected 2147483648
I thought the limits of u-l-l were 2^64-1? So why can 2^31 not be stored? 2^30 works just fine. sizeof(a) returns 8, which is 64 bits if I am not mistaken, consistent with a limit of 2^64-1.
I am compiling on Visual C++ 2013 Express Desktop.
My only guess is that it is some type of overflow error because it doesn't fit a normal long type.
What you're seeing is sign extension when the negative integer value is assigned to the unsigned long long: 1 << 31 is evaluated in int, which on this platform yields the negative value INT_MIN, and converting that negative value to unsigned long long fills the upper 32 bits with ones, giving 2^64 - 2^31.
To fix it you need to make the value unsigned to begin with, something like this:
#include <iostream>
#include <iomanip>
int main()
{
    unsigned long long a = 1ull << 31ull;
    std::cout << a << "\n";
    std::cout << std::hex << a << "\n";
    return 0;
}
If you have the warning level set high enough (/W4) you'd see a warning about the signed/unsigned mismatch.
Just to be complete, you don't need to qualify both operands; making the left operand unsigned is enough, so unsigned long long a = 1u << 31; would also work. I just prefer to be as explicit as possible.
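A quick check of both spellings (a sketch, assuming 32-bit int and unsigned int):

#include <iostream>

int main()
{
    unsigned long long a = 1ull << 31;  // both operands qualified
    unsigned long long b = 1u << 31;    // only the left operand qualified
    std::cout << std::hex << a << ' ' << b << '\n';  // 80000000 80000000
}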