Finding SHRT_MAX on systems without <limits.h> or <values.h> - c++

I am reading The C++ Answer Book by Tony L Hansen. It says somewhere that the value of SHRT_MAX (the largest value of a short) can be derived as follows:
const CHAR_BIT= 8;
#define BITS(type) (CHAR_BIT*(int)sizeof(type))
#define HIBIT(type) ((type)(1<< (BITS(type)-1)))
#define TYPE_MAX(type) ((type)~HIBIT(type));
const SHRT_MAX= TYPE_MAX(short);
Could someone explain in simple words what is happening in the above 5 lines?

const CHAR_BIT= 8;
Assuming int is added here (and below): CHAR_BIT is the number of bits in a char. Its value is assumed here without checking.
#define BITS(type) (CHAR_BIT*(int)sizeof(type))
BITS(type) is the number of bits in type. If sizeof(short) == 2, then BITS(short) is 8*2.
Note that C++ does not guarantee that all bits in integer types other than char contribute to the value, but the code below assumes they do nonetheless.
#define HIBIT(type) ((type)(1<< (BITS(type)-1)))
If BITS(short) == 16, then HIBIT(short) is ((short)(1<<15)). This is implementation-dependent, but assumed to have the sign bit set, and all value bits zero.
#define TYPE_MAX(type) ((type)~HIBIT(type));
If HIBIT(short) is (short)32768, then TYPE_MAX(short) is (short)~(short)32768. This is assumed to have the sign bit cleared, and all value bits set.
const SHRT_MAX= TYPE_MAX(short);
If all assumptions are met, if this indeed has all value bits set, but not the sign bit, then this is the highest value representable in short.
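Putting those assumptions together, here is a compilable sketch (with the implicit-int declarations spelled out, and CHAR_BIT renamed so it cannot collide with the standard macro); on a typical platform with 8-bit bytes and a 16-bit two's complement short it prints 16, -32768 and 32767:
#include <cstdio>

const int MY_CHAR_BIT = 8;   // assumed, not checked, exactly as in the book
#define BITS(type) (MY_CHAR_BIT * (int)sizeof(type))
#define HIBIT(type) ((type)(1 << (BITS(type) - 1)))
#define TYPE_MAX(type) ((type)~HIBIT(type))

int main() {
    std::printf("BITS(short)     = %d\n", BITS(short));            // 16
    std::printf("HIBIT(short)    = %d\n", (int)HIBIT(short));      // -32768 (sign bit only)
    std::printf("TYPE_MAX(short) = %d\n", (int)TYPE_MAX(short));   // 32767
    return 0;
}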
It's possible to get the maximum value more reliably in modern C++ when you know that:
the maximum value for an unsigned type is trivially obtainable
the maximum value for a signed type is assuredly either equal to the maximum value of the corresponding unsigned type, or that value right-shifted until it's in the signed type's range
a conversion of an out-of-range value to a signed type does not have undefined behaviour, but instead gives an implementation-defined value in the signed type's range:
template <typename S, typename U>
constexpr S get_max_value(U u) {
    S s = u;                 // implementation-defined result if u is out of range for S
    while (s < 0 || s != u)  // keep halving until the value fits in S
        s = u >>= 1;
    return u;
}
constexpr unsigned short USHRT_MAX = -1;
constexpr short SHRT_MAX = get_max_value<short>(USHRT_MAX);
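Using those definitions, a quick compile-time sanity check is possible (purely illustrative, and only meaningful under the common assumption of a 16-bit short):
static_assert(sizeof(short) != 2 || SHRT_MAX == 32767,
              "unexpected SHRT_MAX for a 16-bit short");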

Reformatting a bit:
const CHAR_BIT = 8;
Invalid code in C++, it looks like old C code. Let's assume that const int was meant.
#define BITS(type) (CHAR_BIT * (int)sizeof(type))
Returns the number of bits that a type takes assuming 8-bit bytes, because sizeof returns the number of bytes of the object representation of type.
#define HIBIT(type) ((type) (1 << (BITS(type) - 1)))
Assuming type is a signed integer in two's complement, this would return an integer of that type with the highest bit set. For instance, for an 8-bit integer, you would get 1 << (8 - 1) == 1 << 7 == 0b10000000, which as a signed 8-bit value is -128.
#define TYPE_MAX(type) ((type) ~HIBIT(type));
The bitwise not of the previous thing, i.e. flips each bit. Following the same example as before, you would get ~0b10000000 == 0b01111111 == 127.
const SHRT_MAX = TYPE_MAX(short);
Again invalid, both in C and C++: in C++ because of the missing int, and in C because CHAR_BIT as declared here is a const variable rather than a constant expression, so it cannot initialize an object at file scope. Let's assume const int was meant. Uses the previous macros to get the maximum of the short type.
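To make the 8-bit example used above concrete, a two-line sketch (assuming a two's complement signed char):
signed char hi = (signed char)(1 << 7);   // bit pattern 10000000, value -128
signed char mx = (signed char)~hi;        // bit pattern 01111111, value  127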

Taking it one line at a time:
const CHAR_BIT= 8;
Declare and initialize CHAR_BIT as a variable of type const int with
value 8. (This relies on int being the default type, which is wrong: implicit int
is not valid in C++ or in C99 and later, so the type really has to be specified.)
#define BITS(type) (CHAR_BIT* (int)sizeof(type))
Preprocessor macro, converting a type to the number of bits in that
type. (The asterisk isn’t making anything a pointer, it’s for
multiplication. Would be clearer if the author had put a space before
it.)
#define HIBIT(type) ((type)(1<< (BITS(type)-1)))
Macro, converting a type to a number of that type with the highest bit
set to one and all other bits zero.
#define TYPE_MAX(type) ((type)~HIBIT(type));
Macro, inverting HIBIT so the highest bit is zero and all others are
one. This will be the maximum value of type if it’s a signed type and
the machine uses two’s complement. The semicolon shouldn’t be there, but
it will work in this code.
const SHRT_MAX= TYPE_MAX(short);
Uses the above macros to compute the maximum value of a short.

Why does (unsigned int = -1) show the largest value that it can store? [duplicate]

In C or C++ it is said that the maximum number a size_t (an unsigned integer data type) can hold is the same as casting -1 to that data type. For example, see Invalid Value for size_t.
Why?
I mean (talking about 32-bit ints), AFAIK the most significant bit holds the sign in a signed data type (that is, bit 0x80000000 forms a negative number). Then 1 is 0x00000001, and 0x7FFFFFFF is the greatest positive number an int data type can hold.
Then, AFAIK, the binary representation of -1 as an int should be 0x80000001 (perhaps I'm wrong). Why/how is this binary value converted to something completely different (0xFFFFFFFF) when casting ints to unsigned? Or how is it possible to form a binary -1 out of 0xFFFFFFFF?
I have no doubt that in C ((unsigned int)-1) == 0xFFFFFFFF or ((int)0xFFFFFFFF) == -1 is as true as 1 + 1 == 2; I'm just wondering why.
C and C++ can run on many different architectures and machine types. Consequently, they can have different representations of numbers, two's complement and ones' complement being the most common. In general you should not rely on a particular representation in your program.
For unsigned integer types (size_t being one of those), the C standard (and the C++ standard too, I think) specifies precise overflow rules. In short, if SIZE_MAX is the maximum value of the type size_t, then the expression
(size_t) (SIZE_MAX + 1)
is guaranteed to be 0, and therefore, you can be sure that (size_t) -1 is equal to SIZE_MAX. The same holds true for other unsigned types.
Note that the above holds true:
for all unsigned types,
even if the underlying machine doesn't represent numbers in Two's complement. In this case, the compiler has to make sure the identity holds true.
Also, the above means that you can't rely on specific representations for signed types.
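That rule is easy to check directly (a small sketch; SIZE_MAX comes from <cstdint>):
#include <cstddef>
#include <cstdint>
#include <cassert>

int main() {
    // -1 converted to an unsigned type yields its maximum value, by the modulo rule.
    static_assert(static_cast<std::size_t>(-1) == SIZE_MAX, "modulo conversion rule");

    std::size_t wrapped = SIZE_MAX;
    ++wrapped;                  // unsigned overflow is well defined: wraps to 0
    assert(wrapped == 0);
    return 0;
}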
Edit: In order to answer some of the comments:
Let's say we have a code snippet like:
int i = -1;
long j = i;
There is a type conversion in the assignment to j. Assuming that int and long have different sizes (most [all?] 64-bit systems), the bit-patterns at memory locations for i and j are going to be different, because they have different sizes. The compiler makes sure that the values of i and j are -1.
Similarly, when we do:
size_t s = (size_t) -1;
There is a type conversion going on. The -1 is of type int. It has a bit-pattern, but that is irrelevant for this example because when the conversion to size_t takes place due to the cast, the compiler will translate the value according to the rules for the type (size_t in this case). Thus, even if int and size_t have different sizes, the standard guarantees that the value stored in s above will be the maximum value that size_t can take.
If we do:
long j = LONG_MAX;
int i = j;
If LONG_MAX is greater than INT_MAX, then the value in i is implementation-defined (C89, section 3.2.1.2).
It's called two's complement. To make a negative number, invert all the bits then add 1. So to convert 1 to -1, invert it to 0xFFFFFFFE, then add 1 to make 0xFFFFFFFF.
As to why it's done this way, Wikipedia says:
The two's-complement system has the advantage of not requiring that the addition and subtraction circuitry examine the signs of the operands to determine whether to add or subtract. This property makes the system both simpler to implement and capable of easily handling higher precision arithmetic.
Your first question, about why (unsigned)-1 gives the largest possible unsigned value, is only accidentally related to two's complement. The reason -1 cast to an unsigned type gives the largest value possible for that type is because the standard says the unsigned types "follow the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer."
Now, for 2's complement, the representation of the largest possible unsigned value and -1 happen to be the same -- but even if the hardware uses another representation (e.g. 1's complement or sign/magnitude), converting -1 to an unsigned type must still produce the largest possible value for that type.
Two's complement is very nice for doing subtraction just like addition :)
  11111110   (254 or -2)
+ 00000001   (  1)
----------
  11111111   (255 or -1)

  11111111   (255 or -1)
+ 00000001   (  1)
----------
 100000000   (0 + 256)
That is two's complement encoding.
The main bonus is that you get the same encoding whether you are using an unsigned or signed int. If you subtract 1 from 0 the integer simply wraps around. Therefore 1 less than 0 is 0xFFFFFFFF.
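For example (assuming a 32-bit unsigned int; unsigned wrap-around is well defined):
unsigned int u = 0;
u -= 1;            // wraps modulo 2^32
// u is now 0xFFFFFFFF, i.e. UINT_MAX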
Because of the bit pattern: for an int, -1 is FFFFFFFF in hexadecimal, or 11111111111111111111111111111111 in binary.
In an int, the first (most significant) bit signifies whether the value is negative.
In an unsigned int, that first bit is just another value bit, because an unsigned int cannot be negative. So the extra bit makes an unsigned int able to store bigger numbers.
Thus for an unsigned int, 11111111111111111111111111111111 (binary) or FFFFFFFF (hexadecimal) is the biggest number it can store.
Unsigned ints are not recommended where values might go negative, because the result wraps around to the biggest numbers.

(-2147483648 > 0) returns true in C++?

-2147483648 is the smallest integer for an integer type with 32 bits, but it seems that it will overflow in this if (...) statement:
if (-2147483648 > 0)
    std::cout << "true";
else
    std::cout << "false";
This will print true in my testing. However, if we cast -2147483648 to int, the result will be different:
if (int(-2147483648) > 0)
    std::cout << "true";
else
    std::cout << "false";
This will print false.
I'm confused. Can anyone give an explanation on this?
Update 02-05-2012:
Thanks for your comments; in my compiler the size of int is 4 bytes. I'm using VC for some simple testing. I've changed the description in my question.
There are a lot of very good replies in this post. AndreyT gave a very detailed explanation of how the compiler behaves on such input and how this minimum integer was implemented. qPCR4vir, on the other hand, gave some related "curiosities" about how integers are represented. So impressive!
-2147483648 is not a single "number": the C++ language does not have negative literal values.
-2147483648 is actually an expression: the positive literal value 2147483648 with the unary - operator in front of it. The value 2147483648 is apparently too large for the positive side of the int range on your platform. If type long int had a greater range on your platform, the compiler would have to automatically assume that 2147483648 has long int type. (In C++11 the compiler would also have to consider long long int type.) This would make the compiler evaluate -2147483648 in the domain of the larger type and the result would be negative, as one would expect.
However, apparently in your case the range of long int is the same as range of int, and in general there's no integer type with greater range than int on your platform. This formally means that positive constant 2147483648 overflows all available signed integer types, which in turn means that the behavior of your program is undefined. (It is a bit strange that the language specification opts for undefined behavior in such cases, instead of requiring a diagnostic message, but that's the way it is.)
In practice, taking into account that the behavior is undefined, 2147483648 might get interpreted as some implementation-dependent negative value which happens to turn positive after having unary - applied to it. Alternatively, some implementations might decide to attempt using unsigned types to represent the value (for example, in C89/90 compilers were required to use unsigned long int, but not in C99 or C++). Implementations are allowed to do anything, since the behavior is undefined anyway.
As a side note, this is the reason why constants like INT_MIN are typically defined as
#define INT_MIN (-2147483647 - 1)
instead of the seemingly more straightforward
#define INT_MIN -2147483648
The latter would not work as intended.
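Even on a modern compiler, where the literal 2147483648 simply gets a wider signed type and the value comes out right, the naive definition still does not have type int; the following sketch (illustrative names, assuming a C++11 compiler with 32-bit int and 64-bit long long) makes that visible through overload resolution:
#include <iostream>

#define NAIVE_INT_MIN -2147483648        // unary minus applied to a non-int literal
#define GOOD_INT_MIN  (-2147483647 - 1)  // an int-typed constant when int is 32 bits

void which(int)       { std::cout << "int overload\n"; }
void which(long long) { std::cout << "long long overload\n"; }

int main() {
    which(GOOD_INT_MIN);   // prints "int overload"
    which(NAIVE_INT_MIN);  // prints "long long overload", despite the equal value
    return 0;
}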
The compiler (VC2012) promotes the literal to the "smallest" integer type that can hold the value. In the first case, signed int (and long int) cannot hold 2147483648 (before the sign is applied), but unsigned int can, so 2147483648 gets type unsigned int.
In the second case you force a conversion from that unsigned value to int.
const bool i= (-2147483648 > 0) ; // --> true
warning C4146: unary minus operator applied to unsigned type, result still unsigned
Here are related "curiosities":
const bool b= (-2147483647 > 0) ; // false
const bool i= (-2147483648 > 0) ; // true : result still unsigned
const bool c= ( INT_MIN-1 > 0) ; // true :'-' int constant overflow
const bool f= ( 2147483647 > 0) ; // true
const bool g= ( 2147483648 > 0) ; // true
const bool d= ( INT_MAX+1 > 0) ; // false:'+' int constant overflow
const bool j= ( int(-2147483648)> 0) ; // false :
const bool h= ( int(2147483648) > 0) ; // false
const bool m= (-2147483648L > 0) ; // true
const bool o= (-2147483648LL > 0) ; // false
C++11 standard:
2.14.2 Integer literals [lex.icon]
…
An integer literal is a sequence of digits that has no period or
exponent part. An integer literal may have a prefix that specifies its
base and a suffix that specifies its type.
…
The type of an integer literal is the first of the corresponding list
in which its value can be represented.
If an integer literal cannot be represented by any type in its list
and an extended integer type (3.9.1) can represent its value, it may
have that extended integer type. If all of the types in the list for
the literal are signed, the extended integer type shall be signed. If
all of the types in the list for the literal are unsigned, the
extended integer type shall be unsigned. If the list contains both
signed and unsigned types, the extended integer type may be signed or
unsigned. A program is ill-formed if one of its translation units
contains an integer literal that cannot be represented by any of the
allowed types.
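The effect of that rule can be observed directly with decltype (a sketch assuming a C++11 compiler where int is 32 bits):
#include <type_traits>

static_assert(std::is_same<decltype(2147483647), int>::value,
              "fits in int, so it is an int");
static_assert(std::is_same<decltype(2147483648), long>::value ||
              std::is_same<decltype(2147483648), long long>::value,
              "too large for int, so it moves down the list");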
And these are the promotion rules for integers in the standard.
4.5 Integral promotions [conv.prom]
A prvalue of an integer type other than bool, char16_t, char32_t, or
wchar_t whose integer conversion rank (4.13) is less than the rank of
int can be converted to a prvalue of type int if int can represent all
the values of the source type; otherwise, the source prvalue can be
converted to a prvalue of type unsigned int.
In short, 2147483648 overflows to -2147483648, and (-(-2147483648) > 0) is true.
In binary, 2147483648 is 1000 0000 0000 0000 0000 0000 0000 0000: only the most significant bit is set.
In addition, in the case of signed binary calculations, the most significant bit ("MSB") is the sign bit. This question may help explain why.
Because -2147483648 is actually 2147483648 with negation (-) applied to it, the number isn't what you'd expect. It is actually the equivalent of this pseudocode: operator -(2147483648)
Now, assuming your compiler has sizeof(int) equal to 4 and CHAR_BIT defined as 8, 2147483648 overflows the maximum signed value of an int (2147483647). So what is the maximum plus one? Let's work that out with a 4-bit, two's complement integer: the maximum is 7 (0111), and 0111 + 0001 = 1000, which is 8.
Wait! 8 overflows this integer! What do we do? Use its unsigned representation of 1000 and interpret the bits as a signed integer, which gives -8. Applying the two's complement negation to that then leaves us with 8, which, as we all know, is greater than 0.
This is why <limits.h> (and <climits>) commonly define INT_MIN as ((-2147483647) - 1) - so that the maximum signed integer (0x7FFFFFFF) is negated (0x80000001), then decremented (0x80000000).

Using sizeof to get maximum power of two

Is there a way in C/C++ to compute the maximum power of two that is representable by a certain data type using the sizeof operator?
For example, say I have an unsigned short int. Its values can range between 0 and 65535.
Therefore the maximum power of two that an unsigned short int can contain is 32768.
I pass this unsigned short int to a function and I have (at the moment) an algorithm that looks like this:
if (ushortParam > 32768) {
    ushortParam = 32768;  // Bad hardcoded literals
}
However, in the future, I may want to change the variable type to incorporate larger powers of two. Is there a type-independent formula using sizeof() that can achieve the following:
if (param > /*Some function...*/ sizeof(param))
{
    param = /*Some function...*/ sizeof(param);
}
Note the parameter will never require floating-point precision - integers only.
Setting the most significant bit of a variable of that parameter's size will give you the highest power of 2:
1 << (8*sizeof(param)-1)
What about:
const T max_power_of_two = (std::numeric_limits<T>::max() >> 1) + 1;
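Wrapped in a small helper, that expression can even be checked at compile time (illustrative; max_power_of_two is a made-up name, and the asserts assume the usual fixed-width types):
#include <limits>
#include <cstdint>

template <typename T>
constexpr T max_power_of_two() {
    return (std::numeric_limits<T>::max() >> 1) + 1;
}

static_assert(max_power_of_two<std::uint16_t>() == 32768u, "matches the question's value");
static_assert(max_power_of_two<std::int32_t>() == 1073741824, "2^30 for a signed 32-bit int");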
To get the highest power of 2 representable by a certain integer type you may use limits.h instead of the sizeof operator. For instance:
#include <stdlib.h>
#include <stdio.h>
#include <limits.h>

int main() {
    int max = INT_MAX;
    int hmax = max >> 1;
    int mpow2 = max ^ hmax;
    printf("The maximum representable integer is %d\n", max);
    printf("The maximum representable power of 2 is %d\n", mpow2);
    return 0;
}
This should always work, as the right shift of a positive integer is always defined. Quoting from the C standard, section 6.5.7.5 (Bitwise shift operators):
The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1
has an unsigned type or if E1 has a signed type and a nonnegative
value, the value of the result is the integral part of the quotient of
E1 divided by the quantity, 2 raised to the power E2.
If the use of sizeof is mandatory you can use:
1 << (CHAR_BIT*sizeof(param)-1)
for unsigned integer types and:
1 << (CHAR_BIT*sizeof(param)-2)
for signed integer types. The lines above will work only for integer types without padding bits. The part of the C standard that ensures these lines work is section 6.2.6.2. In particular:
For unsigned integer types other than unsigned char, the bits of the object representation shall be divided into two groups: value bits and padding bits (there need not be any of the latter). If there are N value bits, each bit shall represent a different power of 2 between 1 and 2^(N-1), so that objects of that type shall be capable of representing values from 0 to 2^N - 1 using a pure binary representation; this shall be known as the value representation.
guarantees the first method to work while:
For signed integer types, the bits of the object representation shall
be divided into three groups: value bits, padding bits, and the sign
bit. There need not be any padding bits; there shall be exactly one
sign bit.
...
A valid (non-trap) object representation of a signed integer type
where the sign bit is zero is a valid object representation of the
corresponding unsigned type, and shall represent the same value.
explains why the second line gives the right answer.
The accepted answer will probably work on Posix platforms, but is not general C/C++. It assumes that CHAR_BIT is 8, doesn't specify the type, and assumes that the type has no padding bits.
Here are more general versions for any/all unsigned integer types and don't require including any headers, dependencies, etc.:
#define MAX_VAL(UNSIGNED_TYPE) ((UNSIGNED_TYPE) -1)
#define MAX_POW2(UNSIGNED_TYPE) (~(MAX_VAL(UNSIGNED_TYPE) >> 1))
#define MAX_POW2_VER2(UNSIGNED_TYPE) (MAX_VAL(UNSIGNED_TYPE) ^ (MAX_VAL(UNSIGNED_TYPE) >> 1))
#define MAX_POW2_VER3(UNSIGNED_TYPE) ((MAX_VAL(UNSIGNED_TYPE) >> 1) + 1)
The standards, even C90, guarantee that casting -1 to an unsigned type always yields the maximum value that type can represent. From there, all of the bitwise operators above are well defined.
http://c0x.coding-guidelines.com/6.3.1.3.html
6.3.1.3 Signed and unsigned integers
682 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
683 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
684 Otherwise, the new type is signed and the value cannot be represented in it;
685 either the result is implementation-defined or an implementation-defined signal is raised.
The maximum value of an unsigned type is one less than a power of 2 and has all value bits set. The above expressions result in the highest bit alone being set, which is the maximum power of 2 that the type can represent.
http://c0x.coding-guidelines.com/6.2.6.2.html
6.2.6.2 Integer types
593 For unsigned integer types other than unsigned char, the bits of the object representation shall be divided into two groups: value bits and padding bits (there need not be any of the latter).
594 If there are N value bits, each bit shall represent a different power of 2 between 1 and 2^(N - 1), so that objects of that type shall be capable of representing values from 0 to 2^N − 1 using a pure binary representation;
595 this shall be known as the value representation.
596 The values of any padding bits are unspecified.
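A usage sketch of those macros (assuming the usual <cstdint> fixed-width types and a 32-bit int, so no integer promotion interferes):
#include <cstdint>

static_assert(MAX_VAL(std::uint32_t) == 4294967295u, "all value bits set");
static_assert(MAX_POW2(std::uint32_t) == 2147483648u, "highest bit alone set");
static_assert(MAX_POW2_VER2(std::uint64_t) == ((std::uint64_t)1 << 63), "same idea, 64-bit");
static_assert(MAX_POW2_VER3(std::uint64_t) == ((std::uint64_t)1 << 63), "same idea, 64-bit");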

Testing for a maximum unsigned value

Is this the correct way to test for a maximum unsigned value in C and C++ code:
if (foo == -1)
{
    // at max possible value
}
where foo is an unsigned int, an unsigned short, and so on.
For C++, I believe you should preferably use the numeric_limits template from the <limits> header:
if (foo == std::numeric_limits<unsigned int>::max())
    /* ... */
For C, others have already pointed out the <limits.h> header and UINT_MAX.
Apparently, "solutions which are allowed to name the type are easy", so you can have :
template<class T>
inline bool is_max_value(const T t)
{
return t == std::numeric_limits<T>::max();
}
[...]
if (is_max_value(foo))
/* ... */
I suppose you are asking this question because at a certain point you don't know the concrete type of your variable foo; otherwise you would naturally use UINT_MAX etc.
For C, your approach is the right one only for types with a conversion rank of int or higher. This is because, before being compared, an unsigned short value, for example, is first converted to int if all its values fit, or to unsigned int otherwise. So your value foo would then be compared either to -1 or to UINT_MAX, which is not what you expect.
I don't see an easy way of implementing the test that you want in C, since basically using foo in any kind of expression would promote it to int.
With gcc's typeof extension this is easily possible. You'd just have to do something like
if (foo == (typeof(foo))-1)
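To see the promotion problem concretely (a sketch in C++ assuming a 16-bit unsigned short and a 32-bit int):
unsigned short foo = 65535;                 // the maximum value of unsigned short here
bool naive = (foo == -1);                   // false: foo is promoted to int 65535
bool typed = (foo == (unsigned short)-1);   // true: the cast yields 65535 before promotion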
As already noted, you should probably use if (foo == std::numeric_limits<unsigned int>::max()) to get the value.
However for completeness, in C++ -1 is "probably" guaranteed to be the max unsigned value when converted to unsigned (this wouldn't be the case if there were unused bit patterns at the upper end of the unsigned value range).
See 4.7/2:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). ]
Note that specifically for the unsigned int case, due to the rules in 5/9 it appears that if either operand is unsigned, the other will be converted to unsigned automatically so you don't even need to cast the -1 (if I'm reading the standard correctly). In the case of unsigned short you'll need a direct check or explicit cast because of the automatic integral promotion induced by the ==.
Using #include <limits.h> you could just do
if (foo == UINT_MAX)
If foo is an unsigned int, it holds values in the range [0, 4,294,967,295] (if 32-bit).
More: http://en.wikipedia.org/wiki/Limits.h
Edit: in C, if you do
#include <limits.h>
#include <stdio.h>

int main() {
    unsigned int x = -1;
    printf("%u", x);
    return 0;
}
you will get the result 4294967295 (on a 32-bit system), and that is because internally -1 is represented as 11111111111111111111111111111111 in two's complement. But because the type is unsigned, there is no "sign bit", so the value is interpreted in the range [0, 2^n - 1].
Also see: http://en.wikipedia.org/wiki/Two%27s_complement
See the other answers for the C++ part: std::numeric_limits<unsigned int>::max()
I would define a constant that holds the maximum value required by the design of your code. Using "-1" is confusing. Imagine that someone in the future changes the type from unsigned int to int; it will mess up your code.
Here's an attempt at doing this in C. It depends on the implementation not having padding bits:
#define IS_MAX_UNSIGNED(x) ( (sizeof(x)>=sizeof(int)) ? ((x)==-1) : \
                             ((x)==(1<<CHAR_BIT*sizeof(x))-1) )
Or, if you can modify the variable, just do something like:
if (!(x++, x--)) { /* x is at max possible value */ }
(x++ wraps the stored value to 0 exactly when x was at its maximum; x-- then yields that wrapped value for the test and restores the original.)
Edit: And if you don't care about possible implementation-defined extended integer types:
#define IS_MAX_UNSIGNED(x) ( (sizeof(x)>=sizeof(int)) ? ((x)==-1) : \
                             (sizeof(x)==sizeof(short)) ? ((x)==USHRT_MAX) : \
                             (sizeof(x)==1 ? ((x)==UCHAR_MAX) : 42) )
You could use sizeof(char) in the last line, of course, but I consider it a code smell and would typically catch it grepping for code smells, so I just wrote 1. Of course you could also just remove the last conditional entirely.