definition of UINT_MAX macro - c++

I would like to know if there is a particular reason to define the macro UINT_MAX as (2147483647 * 2U + 1U) rather than directly as its value (4294967295U) in the climits header file.
Thank you all.

As far as the compiled code is concerned, there would be no difference, because the compiler would evaluate both constant expressions to produce the same value at compile time.
Defining UINT_MAX in terms of INT_MAX lets you reuse a constant that you have already defined:
#define UINT_MAX (INT_MAX * 2U + 1U)
In fact, this is very much what clang's header does, reusing an internal constant __INT_MAX__ for both INT_MAX and UINT_MAX:
#define INT_MAX __INT_MAX__
#define UINT_MAX (__INT_MAX__ *2U +1U)
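
As a quick sanity check, a minimal sketch, assuming the usual representation where UINT_MAX == 2 * INT_MAX + 1 and int is 32 bits wide:
#include <climits>

// Both spellings are integer constant expressions evaluating to the same value;
// the second assertion additionally assumes a 32-bit int.
static_assert(UINT_MAX == INT_MAX * 2U + 1U, "UINT_MAX equals 2 * INT_MAX + 1");
static_assert(UINT_MAX == 4294967295U, "holds where int is 32 bits");

int main() {}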

Related

Compare a long variable with INT_MAX + 1

// Code here
long a = 42;
if (a > INT_MAX + 1)
When I do this comparison, a > INT_MAX + 1 actually evaluates to true, which confuses me.
The reason seems to be that INT_MAX + 1 overflows. But why? INT_MAX should be just a macro defined as a constant like 2^31 - 1, therefore INT_MAX + 1 should be just another constant value, 2^31. And since a is long, during compilation the compiler should also implicitly convert INT_MAX + 1 to long, which is wider than int and should not overflow.
I cannot understand why it actually overflows.
Could anybody help me? Thanks a lot.
therefore INT_MAX + 1 should be just another constant value
It is an arithmetic expression. More specifically, it is an addition operation. The addition overflows, and the behaviour of the program is undefined.
therefore during compilation the compiler should also implicitly convert INT_MAX + 1 to long
It does. But that conversion applies to the result, and the addition has already been evaluated (and overflowed) as int before it happens.
You can fix the expression by using a - 1 > INT_MAX, although that also has a failure case when a is LONG_MIN. Another approach is to convert one of the operands of the addition to a larger type (if a larger type exists on the system).
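A minimal sketch of the first rearrangement, with the LONG_MIN edge case guarded (the values here are just illustrative):
#include <climits>
#include <iostream>

int main()
{
    long a = 42;
    // a - 1 > INT_MAX is equivalent to a > INT_MAX + 1, but the subtraction
    // stays in range as long as a is not LONG_MIN.
    if (a != LONG_MIN && a - 1 > INT_MAX)
        std::cout << "a exceeds INT_MAX + 1\n";
    else
        std::cout << "a does not exceed INT_MAX + 1\n";
}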
You can do:
(long long)INT_MAX + 1
in order to treat the values as 64-bit before the addition takes place, avoiding the overflow.
Keep in mind that long is 32-bit on some compilers (MSVC). long long, I believe, is guaranteed to be at least 64 bits.
INT_MAX + 1 is evaluated as an int before the comparison. It overflows and causes undefined behavior. Some implementations evaluate it to INT_MIN using wrap-around logic. In some cases, that can be useful. You can read more about it at https://en.wikipedia.org/wiki/Integer_overflow.
If sizeof(long) is greater than sizeof(int) on your platform, you can get the expected result by using
if(a > INT_MAX + 1L)
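A small sketch of that approach, assuming long is wider than int (for example, a 64-bit long with a 32-bit int):
#include <climits>
#include <iostream>

int main()
{
    long a = 42;
    // The 1L operand makes the addition happen in long, so INT_MAX + 1L
    // does not overflow when long is wider than int.
    std::cout << (a > INT_MAX + 1L) << '\n';   // prints 0
}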
The only thing you have to do is create another variable of type long, assign INT_MAX to it, and add 1 afterwards (this again assumes long is wider than int). Here is the code for that:
long a = 42;
long b = INT_MAX;
b = b + 1;
if (a > b) {
    cout << "long greater" << b;
}

Preprocessor "invalid integer constant expression" comparing int to double

Somewhere in my code I have the preprocessor definition
#define ZOOM_FACTOR 1
In another place I have
#ifdef ZOOM_FACTOR
#if (ZOOM_FACTOR == 1)
#define FONT_SIZE 8
#else
#define FONT_SIZE 12
#endif
#else
#define FONT_SIZE 8
#endif
The problem is that when I change the ZOOM_FACTOR value to a floating-point value, for example 1.5, I get compile error C1017: invalid integer constant expression.
Does anyone know why I am getting this error, and is there any way to compare an integer and a floating-point number within a preprocessor directive?
The error is because the language does not permit it.
As per the C++ standard, [cpp.cond]/1:
The expression that controls conditional inclusion shall be an integral constant expression.
Instead of defining ZOOM_FACTOR as the floating-point value 1.5, why not define it as a multiple of that value? For example, multiply it by a constant such as 2 and then make your comparisons, as sketched below.
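A sketch of that workaround, using a hypothetical ZOOM_FACTOR_X2 macro that stores the factor multiplied by 2, so 3 stands for a zoom of 1.5:
#define ZOOM_FACTOR_X2 3   // 2 would mean a zoom of 1.0, 3 means 1.5

#if (ZOOM_FACTOR_X2 == 2)
#define FONT_SIZE 8
#else
#define FONT_SIZE 12
#endif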

Code does not pick up the declared macro

In the code below, the output values are not as defined in the macro. Is that because the values have to be available before the preprocessor stage?
#define INT_MAX 100
#include <iostream>
using namespace std;
int main()
{
    int x = INT_MAX;
    x++;
    cout << x << INT_MAX;
}
Result is -2147483648
There is a macro named INT_MAX defined in limits.h. I assume that iostream includes limits.h and overrides your own definition of INT_MAX.
This causes an integer overflow at x++, because INT_MAX is the largest value that can be represented by an int.
What is happening is that after you define INT_MAX yourself, you include iostream. That pulls in limits.h, which redefines INT_MAX to the maximum value representable by int (see http://www.cplusplus.com/reference/climits). Incrementing an int holding that maximum value is undefined behaviour, but it wraps around to the minimum possible int value on most CPU architecture/compiler combinations.
Depending on your compiler warning level, you should be getting a warning about INT_MAX being redefined. If you define it to 100 after the include statement, you should get 101.
Redefining macros provided by the standard library tends to lead to confusion, so I recommend picking a different name for your macro.
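A sketch of that suggestion, with a hypothetical MY_INT_CAP macro in place of the redefined INT_MAX; the program then prints 101 followed by 100:
#include <iostream>
using namespace std;

#define MY_INT_CAP 100   // not clobbered by <limits.h>, unlike INT_MAX

int main()
{
    int x = MY_INT_CAP;
    x++;
    cout << x << MY_INT_CAP;   // prints 101100
}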

How to detect negative number assigned to size_t?

This declaration compiles without warnings in g++ -pedantic -Wall (version 4.6.3):
std::size_t foo = -42;
Less visibly bogus is declaring a function with a size_t argument, and calling it with a negative value. Can such a function protect against an inadvertent negative argument (which appears as umpteen quintillion, obeying §4.7/2)?
Incomplete answers:
Just changing size_t to (signed) long discards the semantics and other advantages of size_t.
Changing it to ssize_t is merely POSIX, not Standard.
Changing it to ptrdiff_t is brittle and sometimes broken.
Testing for huge values (high-order bit set, etc) is arbitrary.
The problem with issuing a warning for this is that it's not undefined behavior according to the standard. If you convert a signed value to an unsigned type of the same size (or larger), you can later convert that back to a signed value of the original signed type and get the original value¹ on any standards-compliant compiler.
In addition, using negative values converted to size_t is fairly common practice for various error conditions -- many system calls return an unsigned (size_t or off_t) value for success or a -1 (converted to unsigned) for an error. So adding such a warning to the compiler would cause spurious warnings for much existing code. POSIX attempts to codify this with ssize_t, but that breaks calls that may be successful with a return value greater than the maximum signed value for ssize_t.
1"original value" here actually means "a bit pattern that compares as equal to the original bit pattern when compared as that signed type" -- padding bits might not be preserved, and if the signed representation has redundant encodings (eg, -0 and +0 in a sign-magnitude representation) it might be 'canonicalized'
The following excerpt is from a private library.
#include <limits.h>
#if __STDC__ == 1 && __STDC_VERSION__ >= 199901L || \
defined __GNUC__ || defined _MSC_VER
/* Has long long. */
#ifdef __GNUC__
#define CORE_1ULL __extension__ 1ULL
#else
#define CORE_1ULL 1ULL
#endif
#define CORE_IS_POS(x) ((x) && ((x) & CORE_1ULL << (sizeof (x)*CHAR_BIT - 1)) == 0)
#define CORE_IS_NEG(x) (((x) & CORE_1ULL << (sizeof (x)*CHAR_BIT - 1)) != 0)
#else
#define CORE_IS_POS(x) ((x) && ((x) & 1UL << (sizeof (x)*CHAR_BIT - 1)) == 0)
#define CORE_IS_NEG(x) (((x) & 1UL << (sizeof (x)*CHAR_BIT - 1)) != 0)
#endif
#define CORE_IS_ZPOS(x) (!(x) || CORE_IS_POS(x))
#define CORE_IS_ZNEG(x) (!(x) || CORE_IS_NEG(x))
This should work with all unsigned types.
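A self-contained usage sketch of the same idea in C++ (the names looks_negative and take are illustrative, not part of the library above): a function taking size_t can treat an argument with the high-order bit set as a probable converted negative.
#include <climits>
#include <cstddef>
#include <iostream>

// Same test as CORE_IS_NEG above: is the high-order bit of the value set?
constexpr bool looks_negative(std::size_t x)
{
    return (x & (std::size_t{1} << (sizeof(x) * CHAR_BIT - 1))) != 0;
}

void take(std::size_t n)
{
    if (looks_negative(n))
        std::cerr << "argument looks like a converted negative: " << n << '\n';
}

int main()
{
    take(static_cast<std::size_t>(-42));   // flagged on typical implementations
    take(42);                              // passes silently
}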

size guarantee for integral/arithmetic types in C and C++

I know that the C++ standard explicitly guarantees the size of only char, signed char and unsigned char. It also guarantees that, say, short is at least as big as char, int at least as big as short, etc., but it gives no explicit guarantees about the absolute value of, say, sizeof(int). This was the info in my head, and I lived happily with it. Some time ago, however, I came across a comment on SO (can't find it) that in C long is guaranteed to be at least 4 bytes, and that this requirement is "inherited" by C++. Is that the case? If so, what other implicit guarantees do we have for the sizes of arithmetic types in C++? Please note that I am absolutely not interested in practical guarantees across different platforms in this question, just theoretical ones.
18.2.2 guarantees that <climits> has the same contents as the C library header <limits.h>.
The ISO C90 standard is tricky to get hold of, which is a shame considering that C++ relies on it, but the section "Numerical limits" (numbered 2.2.4.2 in a random draft I tracked down on one occasion and have lying around) gives minimum values for the INT_MAX etc. constants in <limits.h>. For example ULONG_MAX must be at least 4294967295, from which we deduce that the width of long is at least 32 bits.
There are similar restrictions in the C99 standard, but of course those aren't the ones referenced by C++03.
This does not guarantee that long is at least 4 bytes, since in C and C++ "byte" is basically defined to mean "char", and it is not guaranteed that CHAR_BIT is 8 in C or C++. CHAR_BIT == 8 is guaranteed by both POSIX and Windows.
Don't know about C++. In C you have
Annex E
(informative)
Implementation limits
[#1] The contents of the header <limits.h> are given below, in alphabetical order. The minimum magnitudes shown shall be replaced by implementation-defined magnitudes with the same sign. The values shall all be constant expressions suitable for use in #if preprocessing directives. The components are described further in 5.2.4.2.1.
#define CHAR_BIT 8
#define CHAR_MAX UCHAR_MAX or SCHAR_MAX
#define CHAR_MIN 0 or SCHAR_MIN
#define INT_MAX +32767
#define INT_MIN -32767
#define LONG_MAX +2147483647
#define LONG_MIN -2147483647
#define LLONG_MAX +9223372036854775807
#define LLONG_MIN -9223372036854775807
#define MB_LEN_MAX 1
#define SCHAR_MAX +127
#define SCHAR_MIN -127
#define SHRT_MAX +32767
#define SHRT_MIN -32767
#define UCHAR_MAX 255
#define USHRT_MAX 65535
#define UINT_MAX 65535
#define ULONG_MAX 4294967295
#define ULLONG_MAX 18446744073709551615
So char <= short <= int <= long <= long long
and
CHAR_BIT * sizeof (char) >= 8
CHAR_BIT * sizeof (short) >= 16
CHAR_BIT * sizeof (int) >= 16
CHAR_BIT * sizeof (long) >= 32
CHAR_BIT * sizeof (long long) >= 64
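These minimums can be checked at compile time; a small C++ sketch (the assertions should hold on any conforming implementation):
#include <climits>

static_assert(CHAR_BIT >= 8, "char has at least 8 bits");
static_assert(CHAR_BIT * sizeof(short) >= 16, "short has at least 16 bits");
static_assert(CHAR_BIT * sizeof(int) >= 16, "int has at least 16 bits");
static_assert(CHAR_BIT * sizeof(long) >= 32, "long has at least 32 bits");
static_assert(CHAR_BIT * sizeof(long long) >= 64, "long long has at least 64 bits");

int main() {}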
Yes, C++ type sizes are inherited from C89.
I can't find the specification right now. But it's in the Bible.
Be aware that the guaranteed ranges of these types are one narrower than on most machines:
signed char is guaranteed -127 ... +127, but most two's complement machines have -128 ... +127
Likewise for the larger types.
There are several inaccuracies in what you read. These inaccuracies were either present in the source, or maybe you remembered it all incorrectly.
Firstly, a pedantic remark about one peculiar difference between C and C++. The C language does not make any guarantees about the relative sizes of integer types (in bytes); it only makes guarantees about their relative ranges. It is true that the range of int is always at least as large as the range of short, and so on. However, it is formally allowed by the C standard to have sizeof(short) > sizeof(int). In such a case the extra bits in short would serve as padding bits, not used for value representation. Obviously, this is something that is merely allowed by the legal language in the standard, not something anyone is likely to encounter in practice.
In C++, on the other hand, the language specification makes guarantees about both the relative ranges and the relative sizes of the types, so in C++, in addition to the range relationship inherited from C, it is guaranteed that sizeof(int) is greater than or equal to sizeof(short).
Secondly, the C language standard guarantees a minimum range for each integer type (these guarantees are present in both C and C++). Knowing the minimum range for a given type, you can always say how many value-forming bits this type is required to have (as a minimum number of bits). For example, it is true that type long is required to have at least 32 value-forming bits in order to satisfy its range requirements. If you want to recalculate that into bytes, it will depend on what you understand under the term byte. If you are talking specifically about 8-bit bytes, then indeed type long will always consist of at least four 8-bit bytes. However, that does not mean that sizeof(long) is always at least 4, since in C/C++ terminology the term byte refers to char objects, and char objects are not limited to 8 bits. It is quite possible to have a 32-bit char type in some implementation, meaning that sizeof(long) in C/C++ bytes can legally be 1, for example.
The C standard does not explicitly say that long has to be at least 4 bytes, but it does specify a minimum range for the different integral types, which implies a minimum size.
For example, the minimum range of an unsigned long is 0 to 4,294,967,295. You need at least 32 bits to represent every single number in that range. So yes, the standard guarantee (indirectly) that a long is at least 32 bits.
C++ inherits the data types from C, so you have to go look at the C standard. The C++ standard actually refers to parts of the C standard in this case.
Just be careful about the fact that some machines have chars that are more than 8 bits. For example, IIRC on the TI C5x, a long is 32 bits, but sizeof(long)==2 because chars, shorts and ints are all 16 bits with sizeof(char)==1.