Since the literal 0xffffffff needs 32 bits (eight hexadecimal digits), it can be represented as an unsigned int but not as a signed int, so its type is unsigned int. But what happens with the negative of an unsigned integer?
#include <iostream>
#include <limits>

int main()
{
    int N[] = {0, 0, 0};
    if (std::numeric_limits<long int>::digits == 63 and
        std::numeric_limits<int>::digits == 31 and
        std::numeric_limits<unsigned int>::digits == 32)
    {
        for (long int i = -0xffffffff; i; --i)
        {
            N[i] = 1;
        }
    }
    else
    {
        N[1] = 1;
    }
    std::cout << N[0] << N[1] << N[2];
}
output: 010
There is no such thing as a negative unsigned integer, by definition.
When you go below the lower bound of an unsigned integer, the value "wraps around", starting again from the highest representable value. (The same happens in the other direction.)
This mechanism is also triggered when a negative "signed" value is converted to an unsigned one.
So, the signed value -1 is converted to the unsigned value $maximumUnsignedValue. Similarly, the signed value -$maximumSignedValue is converted to the unsigned value $maximumUnsignedValue - $maximumSignedValue + 1.
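A minimal sketch (assuming 32-bit int and unsigned int) that demonstrates both conversions, and also shows why -0xffffffff in the program above evaluates to 1:

#include <iostream>
#include <limits>

int main()
{
    constexpr unsigned int maxU = std::numeric_limits<unsigned int>::max();
    constexpr int maxS = std::numeric_limits<int>::max();

    // -1 wraps around to the maximum unsigned value.
    std::cout << static_cast<unsigned int>(-1) << '\n';    // 4294967295
    // -maxS wraps around to maxU - maxS + 1.
    std::cout << static_cast<unsigned int>(-maxS) << '\n'; // 2147483649
    std::cout << maxU - maxS + 1 << '\n';                  // 2147483649
    // Unary minus on an unsigned int wraps the same way:
    // 2^32 - 0xffffffff == 1, which is why the loop above runs once.
    std::cout << -0xffffffff << '\n';                      // 1
}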
Related
Please take a look at this simple program:
#include <iostream>
#include <vector>
using namespace std;
int main() {
    vector<int> a;
    std::cout << "vector size " << a.size() << std::endl;
    int b = -1;
    if (b < a.size())
        std::cout << "Less";
    else
        std::cout << "Greater";
    return 0;
}
I'm confused by the fact that it outputs "Greater", even though it's obvious that -1 is less than 0. I understand that the size method returns an unsigned value, but the comparison is still applied to -1 and 0. So what's going on? Can anyone explain this?
Because the size of a vector is an unsigned integral type. You are comparing an unsigned type with a signed one, and the negative signed integer is converted to unsigned, which corresponds to a large unsigned value.
This code sample shows the same behaviour that you are seeing:
#include <iostream>

int main()
{
    std::cout << std::boolalpha;
    unsigned int a = 0;
    int b = -1;
    std::cout << (b < a) << "\n";
}
output:
false
The signature for vector::size() is:
size_type size() const noexcept;
size_type is an unsigned integral type. When an unsigned and a signed integer are compared, the signed one is converted to unsigned. Here, -1 is negative, so it wraps around, effectively yielding the maximum representable value of size_type. Hence it compares as greater than zero.
-1 converted to unsigned is a larger value than zero: in the signed representation the high bit indicates a negative number, but unsigned arithmetic uses that bit to extend the range of representable values, so it no longer acts as a sign bit. The comparison is therefore performed as (unsigned int)-1 < 0u, which is false.
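If you want a comparison that respects the mathematical values, one option (assuming a C++20 compiler) is std::cmp_less from <utility>, which compares signed and unsigned operands without converting the signed one to unsigned. A minimal sketch:

#include <iostream>
#include <utility>
#include <vector>

int main()
{
    std::vector<int> a;
    int b = -1;

    // std::cmp_less compares the mathematical values, so -1 is
    // genuinely less than the unsigned size 0.
    if (std::cmp_less(b, a.size()))
        std::cout << "Less";    // this branch is taken
    else
        std::cout << "Greater";
}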
The following are different programs/scenarios using unsigned int, with their respective outputs. I don't know why some of them don't work as intended.
Expected output: 2
Program 1:
#include <iostream>

int main()
{
    int value = -2;
    std::cout << (unsigned int)value;
    return 0;
}
// OUTPUT: 4294967294
Program 2:
#include <iostream>

int main()
{
    int value;
    value = -2;
    std::cout << (unsigned int)value;
    return 0;
}
// OUTPUT: 4294967294
Program 3:
#include <iostream>

int main()
{
    int value;
    std::cin >> value; // 2
    std::cout << (unsigned int)value;
    return 0;
}
// OUTPUT: 2
Can someone explain why Program 1 and Program 2 don't work? Sorry, I'm new at coding.
You are expecting the cast from int to unsigned int to simply change the sign of a negative value while keeping its magnitude. But that isn't how it works in C or C++. Unsigned integers follow modular arithmetic, so assigning or initializing from negative values such as -1 or -2 wraps around to the largest and second-largest unsigned values, and so on. So, for example, these two are equivalent:
unsigned int n = -1;
unsigned int m = -2;
and
unsigned int n = std::numeric_limits<unsigned int>::max();
unsigned int m = std::numeric_limits<unsigned int>::max() - 1;
Also note that there is no substantial difference between programs 1 and 2: it all comes down to the sign of the value used to initialize or assign to the unsigned integer.
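As a quick check, here is a minimal sketch (assuming 32-bit unsigned int) showing that initialization and assignment wrap identically:

#include <iostream>

int main()
{
    unsigned int initialized = -2; // initialization from a negative value
    unsigned int assigned;
    assigned = -2;                 // assignment of a negative value
    // Both print 4294967294: the wraparound happens in the conversion,
    // not in how the variable received its value.
    std::cout << initialized << ' ' << assigned << '\n';
}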
Casting a value from signed to unsigned changes how the individual bits of the value are interpreted. Let's have a look at a simple example with an 8-bit value like char and unsigned char.
The values of a signed character typically range from -128 to 127. Including the 0, these are 256 (2^8) values. Usually the highest bit indicates whether the value is negative or positive, so only the remaining 7 bits are available for the magnitude.
An unsigned character can't take any negative values because there is no bit to determine whether the value should be negative or positive. Therefore its values range from 0 to 255.
When all bits are set (1111 1111), the unsigned character has the value 255. The plain signed character, however, treats the highest bit as an indicator of a negative value; in two's complement, this bit pattern means -1.
This is the reason the cast from int to unsigned int doesn't do what you expected it to do, but it does exactly what it's supposed to do.
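To make the reinterpretation visible, here is a minimal sketch using std::bitset (assuming 8-bit char):

#include <bitset>
#include <iostream>

int main()
{
    signed char s = -1;
    unsigned char u = static_cast<unsigned char>(s);

    // Same bit pattern (1111 1111), two interpretations: -1 vs 255.
    std::cout << std::bitset<8>(u) << '\n';                                 // 11111111
    std::cout << static_cast<int>(s) << ' ' << static_cast<int>(u) << '\n'; // -1 255
}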
EDIT
If you just want to switch from negative to positive values, write yourself a simple function like this:

#include <cstdint>

uint32_t makeUnsigned(int32_t toCast)
{
    if (toCast < 0)
        toCast *= -1; // note: overflows (undefined behaviour) for INT32_MIN
    return static_cast<uint32_t>(toCast);
}
This way you will convert your incoming int to an unsigned int with a maximum value of 2^31 - 1 (negating INT32_MIN itself overflows, as noted in the comment).
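If INT32_MIN must be handled too, one possible workaround (a sketch; magnitudeOf is a made-up name) is to convert first and then negate in unsigned arithmetic, where wraparound is well defined:

#include <cstdint>
#include <iostream>

uint32_t magnitudeOf(int32_t toCast)
{
    // Convert first, then negate in unsigned arithmetic; unsigned
    // wraparound is well defined, so INT32_MIN works correctly.
    uint32_t u = static_cast<uint32_t>(toCast);
    return toCast < 0 ? 0u - u : u;
}

int main()
{
    std::cout << magnitudeOf(-2) << '\n';        // 2
    std::cout << magnitudeOf(INT32_MIN) << '\n'; // 2147483648
}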
I try
long long int l = 42343254325322343224;
but to no avail. Why does it tell me "integer constant is too long"? I am using the long long int type, which should be able to hold more than 19 digits. Am I doing something wrong here, or is there a special secret I do not know of just yet?
Because, on my x86_64 system, it's more than 2^64:
// 42343254325322343224
// maximum for an 8-byte long long int (2^64): 18446744073709551616
// (2^64 - 1 is the maximum representable unsigned value)
std::cout << sizeof(long long int); // 8
You shouldn't confuse the number of decimal digits with the number of bits necessary to represent a number.
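std::numeric_limits makes the distinction explicit; a minimal sketch (assuming a 64-bit long long):

#include <iostream>
#include <limits>

int main()
{
    // 63 value bits for the signed type, but only 18 decimal digits are
    // guaranteed to be representable; the literal above has 20 digits.
    std::cout << std::numeric_limits<long long int>::digits << '\n';   // 63
    std::cout << std::numeric_limits<long long int>::digits10 << '\n'; // 18
}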
Take a look at Boost.Multiprecision.
It defines templates and classes to handle larger numbers.
Here is the example from the Boost tutorial:
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

using namespace boost::multiprecision;

int main()
{
    int128_t v = 1;
    // Do some fixed precision arithmetic:
    for (unsigned i = 1; i <= 20; ++i)
        v *= i;
    std::cout << v << std::endl; // prints 20!

    // Repeat at arbitrary precision:
    cpp_int u = 1;
    for (unsigned i = 1; i <= 100; ++i)
        u *= i;
    std::cout << u << std::endl; // prints 100!
}
It seems that the value of the integer literal exceeds the maximum value of type long long int.
Try the following program to determine the maximum values of types long long int and unsigned long long int:
#include <iostream>
#include <limits>

int main()
{
    std::cout << std::numeric_limits<long long int>::max() << std::endl;
    std::cout << std::numeric_limits<unsigned long long int>::max() << std::endl;
    return 0;
}
I have gotten the following results at www.ideone.com:
9223372036854775807
18446744073709551615
You can compare these with the value you specified:
42343254325322343224
Take into account that, in the general case, there is no need to specify the suffix ll for an integer decimal literal that is so big it can be stored only in type long long int. The compiler itself will determine the most appropriate type (int, long int, or long long int) for an integral decimal literal.
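A short sketch illustrating that rule (assuming C++17 and a platform with 32-bit int):

#include <iostream>
#include <type_traits>

int main()
{
    // An unsuffixed decimal literal takes the first of int, long int,
    // long long int whose range can hold its value.
    static_assert(std::is_same_v<decltype(2147483647), int>);
    static_assert(std::is_same_v<decltype(2147483648), long int>
               || std::is_same_v<decltype(2147483648), long long int>);
    std::cout << "literal types are selected automatically\n";
}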