Is there a C++ cross-platform library that provides me with a portable maximum integer number?
I want to declare:
const int MAX_NUM = /* call some library here */;
I use MSVC 2008 unmanaged.
In the C++ standard library header <limits>, you will find:
std::numeric_limits<int>::max()
This tells you the maximum value that can be stored in a variable of type int. numeric_limits is a class template, and you can instantiate it with any of the numeric types to get the maximum value they can hold.
The numeric_limits class template has a lot of other information about numeric types as well.
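For the original question, a minimal sketch (using the MAX_NUM name from the question) would be:

#include <limits>

const int MAX_NUM = std::numeric_limits<int>::max();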
See limits.h (C) or climits (C++). In this case you would want the INT_MAX constant.
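A minimal sketch using the macro instead:

#include <climits>

const int MAX_NUM = INT_MAX;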
I know it's an old question, but maybe someone can use this solution:
int size = 0; // Fill all bits with zero (0)
size = ~size; // Negate all bits, so all bits are set to one (1)
So far the result is -1, since size is a signed int.
size = (unsigned int)size >> 1; // Shift the bits of size one position to the right.
On most implementations, the bits shifted in from the left are copies of the sign bit when the variable is signed and negative, and 0 when it is unsigned or signed and positive (before C++20 the standard leaves right-shifting a negative signed value implementation-defined).
As size is signed and negative, we would shift in the sign bit, which is 1, and that does not help much. So we cast to unsigned int, forcing a 0 to be shifted in instead, which sets the sign bit to 0 while leaving all the other bits at 1.
cout << size << endl; // Prints size, which is now set to the maximum positive value.
We could also use a mask and XOR, but then we would have to know the exact bit size of the variable. By shifting bits in at the front, we never need to know how many bits an int has on a given machine or compiler, nor do we need to include extra libraries.
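Put together as a complete program, the trick above might look like this (a sketch; it relies on the usual two's complement representation):

#include <iostream>

int main() {
    int size = 0;                   // all bits zero
    size = ~size;                   // all bits one, i.e. -1 on two's complement
    size = (unsigned int)size >> 1; // shift as unsigned so a 0 comes in from the left
    std::cout << size << std::endl; // prints the maximum positive int value
    return 0;
}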
I know the answer has already been given, but back in the day I used to do
int max = (unsigned int)-1;
Will it give the same result as std::numeric_limits<int>::max()?
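One way to check on a particular compiler is to print both values side by side (a sketch). Note that (unsigned int)-1 is UINT_MAX, and converting that back to int does not fit, so the stored result is implementation-defined before C++20 (typically -1 rather than INT_MAX):

#include <iostream>
#include <limits>

int main() {
    int from_cast = (unsigned int)-1;                  // implementation-defined; usually -1
    int from_limits = std::numeric_limits<int>::max(); // always INT_MAX
    std::cout << from_cast << '\n' << from_limits << '\n';
    return 0;
}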
On HP-UX with the aCC compiler:
#include <iostream>
#include <limits>
using namespace std;

int main() {
    if (sizeof(int) == sizeof(long)) {
        cout << "sizeof int == sizeof long" << endl;
    } else {
        cout << "sizeof int != sizeof long" << endl;
    }
    if (numeric_limits<int>::max() == numeric_limits<long>::max()) {
        cout << "INT_MAX == LONG_MAX" << endl;
    } else {
        cout << "INT_MAX != LONG_MAX" << endl;
    }
    cout << "Maximum value for int: " << numeric_limits<int>::max() << endl;
    cout << "Maximum value for long: " << numeric_limits<long>::max() << endl;
    return 0;
}
It prints:
sizeof int == sizeof long
INT_MAX != LONG_MAX
I checked that both int and long are 4 bytes.
The manpage limits(5) says that INT_MAX and LONG_MAX are both 2147483647:
http://nixdoc.net/man-pages/HP-UX/man5/limits.5.html
So my conclusion is that std::numeric_limits<type>::max() is not portable.
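If you want to cross-check a suspect implementation yourself, one quick diagnostic (a sketch) is to compare the <climits> macros directly against numeric_limits:

#include <climits>
#include <iostream>
#include <limits>

int main() {
    std::cout << "INT_MAX:  " << INT_MAX  << " vs " << std::numeric_limits<int>::max()  << '\n';
    std::cout << "LONG_MAX: " << LONG_MAX << " vs " << std::numeric_limits<long>::max() << '\n';
    return 0;
}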
Please take a look at this simple program:
#include <iostream>
#include <vector>
using namespace std;
int main() {
    vector<int> a;
    std::cout << "vector size " << a.size() << std::endl;
    int b = -1;
    if (b < a.size())
        std::cout << "Less";
    else
        std::cout << "Greater";
    return 0;
}
I'm confused by the fact that it outputs "Greater", even though it's obvious that -1 is less than 0. I understand that the size method returns an unsigned value, but the comparison is still between -1 and 0. So what's going on? Can anyone explain this?
Because the size of a vector is an unsigned integral type. You are comparing an unsigned type with a signed one, so the negative signed integer is converted to unsigned, and its two's complement bit pattern corresponds to a very large unsigned value.
This code sample shows the same behaviour that you are seeing:
#include <iostream>
int main()
{
    std::cout << std::boolalpha;
    unsigned int a = 0;
    int b = -1;
    std::cout << (b < a) << "\n";
}
output:
false
The signature for vector::size() is:
size_type size() const noexcept;
size_type is an unsigned integral type. When an unsigned and a signed integer are compared, the signed one is converted to unsigned. Here, -1 is negative, so it wraps around, effectively yielding the largest value representable by size_type. Hence it compares as greater than zero.
-1 converted to unsigned is a larger value than zero: in the signed representation the high bit marks the number as negative, but an unsigned comparison treats that bit as part of the value, extending the range of representable numbers instead of acting as a sign bit. The comparison is therefore performed as (unsigned int)-1 < 0, which is false.
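One way to avoid the surprise (a sketch, not the only option) is to handle the sign explicitly before comparing against the unsigned size; in C++20 you can instead use std::cmp_less from <utility>, which handles mixed signedness correctly:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> a;
    int b = -1;
    // A negative b would convert to a huge unsigned value,
    // so test the sign first, then compare with matching types.
    if (b < 0 || static_cast<std::size_t>(b) < a.size())
        std::cout << "Less";
    else
        std::cout << "Greater";
    return 0;
}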
I am a beginner in C++, and I just finished reading chapter 1 of the C++ Primer. So I tried the problem of computing the largest prime factor, and I found that my program works well for numbers up to a size of about 10e9 but fails beyond that, e.g. 600851475143: it always returns a weird number, e.g. 2147483647, when I feed any large number into it. I know a similar question has been asked many times; I just wonder why this happens to me. Thanks in advance.
P.S. I guess the reason has to do with some part of my program not being capable of handling such large numbers.
#include <iostream>
int main()
{
    int val = 0, temp = 0;
    std::cout << "Please enter: " << std::endl;
    std::cin >> val;
    for (int num = 0; num != 1; val = num) {
        num = val / 2;
        temp = val;
        while (val % num != 0)
            --num;
    }
    std::cout << temp << std::endl;
    return 0;
}
Your int type is 32 bits (as on most systems). The largest value a two's complement signed 32-bit value can store is 2^31 - 1, or 2147483647. Take a look at man limits.h if you want to know the constants defining the limits of your types, and/or use larger types (e.g. unsigned would double your range at basically no cost; uint64_t from stdint.h/inttypes.h would expand it by a factor of roughly 2^33, about 8.6 billion, and only cost something meaningful on 32-bit systems).
2147483647 isn't a weird number; it's INT_MAX, which is defined in the <climits> header. This happens when you reach the maximum capacity of an int.
You can use a bigger data type for that purpose, such as unsigned long long int (or std::size_t on a 64-bit platform), whose maximum value is at least 18446744073709551615.
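As a hedged sketch of that advice, here is the loop from the question with a 64-bit type substituted (the algorithm itself is unchanged and still assumes the input is at least 2):

#include <iostream>

int main()
{
    long long val = 0, temp = 0; // long long is at least 64 bits
    std::cout << "Please enter: " << std::endl;
    std::cin >> val;
    for (long long num = 0; num != 1; val = num) {
        num = val / 2;
        temp = val;
        while (val % num != 0)
            --num;
    }
    std::cout << temp << std::endl;
    return 0;
}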
I need to know whether an integer is 32 bits long or not; that is, whether it is exactly 32 bits long (8 hexadecimal characters). How could I achieve this in C++? Should I work with the hexadecimal representation or with the unsigned int one?
My code is as follows:
mistream.open("myfile.txt");
if (mistream)
{
    for (int i = 0; i < longArray; i++)
    {
        mistream >> hex >> datos[i];
    }
}
mistream.close();
Where mistream is of type ifstream, and datos is an unsigned int array
Thank you
std::numeric_limits<unsigned>::digits
is a static integer constant (or constexpr in C++11) giving the number of bits (since unsigned is stored in base 2, it gives binary digits).
You need to #include <limits> to get this, and you'll notice here that this gives the same value as Thomas' answer (while also being generalizable to other primitive types)
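For example (a sketch; the exact numbers depend on the platform):

#include <iostream>
#include <limits>

int main() {
    std::cout << std::numeric_limits<unsigned>::digits << '\n'; // e.g. 32
    std::cout << std::numeric_limits<int>::digits << '\n';      // e.g. 31 (the sign bit is not counted)
    return 0;
}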
For reference (you changed your question after I answered), every integer of a given type (eg, unsigned) in a given program is exactly the same size.
What you're now asking is not the size of the integer in bits, because that never varies, but whether the top bit is set. You can test this trivially with
bool isTopBitSet(uint32_t v) {
    return v & 0x80000000u;
}
(replace the unsigned hex literal with something like T{1} << (std::numeric_limits<T>::digits-1) if you want to generalise to unsigned T other than uint32_t).
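A sketch of that generalisation might look like this (assuming T is an unsigned integer type):

#include <limits>

template <typename T>
bool isTopBitSet(T v) {
    // Build a mask with only the highest value bit of T set, then test it.
    return (v & (T{1} << (std::numeric_limits<T>::digits - 1))) != 0;
}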
As already hinted in a comment by #chux, you can use a combination of the sizeof operator and the CHAR_BIT macro constant. The former tells you (at compile-time) the size (in multiples of sizeof(char) aka bytes) of its argument type. The latter is the number of bits to the byte (usually 8).
You can encapsulate this nicely into a function template.
#include <climits>  // CHAR_BIT
#include <cstddef>  // std::size_t
#include <iostream> // std::cout, std::endl

template <typename T>
constexpr std::size_t
bit_size() noexcept
{
    return sizeof(T) * CHAR_BIT;
}

int
main()
{
    std::cout << bit_size<int>() << std::endl;
    std::cout << bit_size<long>() << std::endl;
}
On my implementation, it outputs 32 and 64.
Since the function is constexpr, you can use it in static contexts, such as static_assert(bit_size<int>() >= 32, "too small");.
Try this:
#include <climits>
unsigned int bits_per_byte = CHAR_BIT;
unsigned int bits_per_integer = CHAR_BIT * sizeof(int);
The identifier CHAR_BIT represents the number of bits in a char.
The sizeof operator returns the number of char-sized locations occupied by the integer.
Multiplying them gives us the number of bits in an integer.
OP said "if it's exactly 32 bits long (8 hexadecimal characters)" and further with ".. interested in knowing if the value is between power(2, 31) and power(2, 32) - 1". So it is a little fuzzy on negative 32-bit numbers.
Certainly OP wants to know the result based on the value and not the type.
bool integer_is_32_bits_long(int x) { // INT_MAX comes from <climits>
    return
        // cope with a 32-bit int
        ((INT_MAX == 0x7FFFFFFF) && (x < 0)) ||
        // int wider than 32 bits
        ((INT_MAX > 0x7FFFFFFF) && (x >= 0x80000000) && (x <= 0xFFFFFFFF));
}
Of course if int is 16-bit, then the result is always false.
I want to know if it's exactly 32 bits long (8 hexadecimal characters)
I am interested in knowing if the value is between power(2, 31) and power(2, 32) - 1
So you want to know if the upper bit is set? Then you can simply test if the number is negative:
bool upperBitSet(int x)
{
    return x < 0;
}
For unsigned numbers, you can simply shift left and back right and then check if you lost data:
bool upperBitSet(unsigned x)
{
    return (x << 1 >> 1) != x;
}
The simplest way probably is to check if the 32nd bit is set:
bool isReally32bitsLong(uint32_t in) {
    return (in >> 31) != 0;
}

bool isExactly32BitsLong(uint64_t in) {
    return ((in >> 31) != 0) && ((in >> 32) == 0);
}
int sum;
The largest number represented by an int, N, is 2^0 + 2^1 + ... + 2^(sizeof(int)*8-1). What happens if I set sum = N + N? I'm somewhat new to programming, just so you know.
If the result of an int addition exceeds the range of values that can be represented in an int (INT_MIN .. INT_MAX), you have an overflow.
For signed integer types, the behavior of integer overflow is undefined.
On many implementations, you'll usually get a result that's consistent with ignoring all but the low-order N bits of the mathematical result (where N is the number of bits in an int) -- but the language does not guarantee that.
Furthermore, a compiler is permitted to assume that your code's behavior is defined, and to perform optimizations based on that assumption.
For example, with clang++ 3.0, this program:
#include <climits>
#include <iostream>

int max() { return INT_MAX; }

int main()
{
    int x = max();
    int y = x + 1;
    if (x < y) {
        std::cout << x << " < " << y << "\n";
    }
}
prints nothing when compiled with -O0, but prints
2147483647 < -2147483648
when compiled with -O1 or higher.
Summary: Don't do that.
(Incidentally, the largest representable value of type int is more simply expressed as 2^(N-1) - 1, where N is the width in bits of type int -- or even more simply as INT_MAX. For a typical system with 32-bit int, that's 2^31 - 1, or 2147483647. You're also assuming that sizeof is in units of 8 bits; in fact it's in units of CHAR_BIT bits, where CHAR_BIT is at least 8, but can (very rarely) be larger.)
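If you do need to add two non-negative ints that might be near the limit, one common approach (a sketch) is to test against INT_MAX before adding, or to widen the operands first:

#include <climits>
#include <iostream>

int main() {
    int n = INT_MAX;
    // Safe pre-check for non-negative operands: n + n > INT_MAX  <=>  n > INT_MAX - n.
    if (n > INT_MAX - n) {
        std::cout << "n + n would overflow int\n";
    } else {
        std::cout << n + n << '\n';
    }
    // Alternatively, widen before adding; long long is at least 64 bits.
    long long wide = static_cast<long long>(n) + n;
    std::cout << wide << '\n';
    return 0;
}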
You will have an integer overflow, as the number will be too big for an int. If you want to do this, you will have to declare sum with a wider type such as long long (a plain long is not guaranteed to be wider than int).
Hope this helps.