C++ maximum non-negative int

Is the following going to work as expected on all platforms, sizes of int, etc? Or is there a more accepted way of doing it? (I made the following up.)
#define MAX_NON_NEGATIVE_INT ((int)(((unsigned int)-1) / 2))
I won't insult your intelligence by explaining what it's doing!
Edit: I should have mentioned that I cannot use any standard classes, because I'm running without the C runtime.

There is a standard way to do this:
#include <limits>
#include <iostream>
std::cout << std::numeric_limits<unsigned int>::max();
Being standard, this is guaranteed to be portable across all platforms.

If you don't want to use defines (and you want a standard way of calculating the limits), then do this:
#include <limits>
std::numeric_limits<int>::min()
These are the defines you will find in limits.h on a typical implementation with 32-bit int (the exact values are implementation-defined):
#define INT_MIN (-2147483647 - 1) /* minimum (signed) int value */
#define INT_MAX 2147483647 /* maximum (signed) int value */
#define UINT_MAX 0xffffffff /* maximum unsigned int value */
These are the defines from BaseTsd.h:
#define MAXUINT ((UINT)~((UINT)0))
#define MAXINT ((INT)(MAXUINT >> 1))
#define MININT ((INT)~MAXINT)

#include <climits>
INT_MAX

You can have a look at the std::numeric_limits class template, provided by the standard <limits> header.

I would modify what you supplied just slightly, since you are coding C++ and not C.
const int MAXINT = (int)(((unsigned int)-1) >> 1), MININT = -MAXINT - 1;
I prefer the right shift over the divide by 2, though they do the same thing, because bit shifting is more suggestive of the bit mangling used to generate MAXINT.
MAXINT yields the same thing as you'd get by using
#include <limits>
const int OFFICIALMAXINT = numeric_limits<int>::max();
MININT yields the same thing as you'd get by using
#include <limits>
const int OFFICIALMININT = numeric_limits<int>::min();
Hardcoding these values, as some above suggested, is a baaad idea.
I prefer the bit mangling, because I know it is always correct and I don't have to rely on remembering the library and the syntax of the call, but it does come down to a matter of preference.
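For illustration, here is a minimal standalone sketch of the same bit mangling (it relies on a two's-complement int, which the -MAXINT - 1 trick assumes anyway), with optional compile-time checks against <limits> for builds where that header is usable:
#include <limits> // only needed for the optional static_assert checks below
// Derive the extremes of int without calling into any runtime library.
constexpr int MAXINT = (int)(((unsigned int)-1) >> 1); // all bits set, shifted right once
constexpr int MININT = -MAXINT - 1;                    // two's-complement minimum
// Optional sanity checks, evaluated entirely at compile time.
static_assert(MAXINT == std::numeric_limits<int>::max(), "MAXINT mismatch");
static_assert(MININT == std::numeric_limits<int>::min(), "MININT mismatch");
int main() { return 0; }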

Related

How to do type punning correctly in C++

Let's say I have this code:
//Version 1
#include <iostream>
#include <cstdint>
int main()
{
uint32_t bits{0x3dfcb924}; //bits describe "0.1234" as IEEE 754 floating point
float num {*((float*) &bits)};
std::cout << num << std::endl;
}
All I want is to interpret the bits from the bits variable as a float. I came to understand that this is called "type punning".
The above code currently works on my machine with GCC 10 on Linux.
I have used this method to "reinterpret bits" for quite some time. However, recently I learned about the "strict aliasing rule" from this post:
What is the strict aliasing rule?
What I took away from there: accessing an object through a pointer to a different type (for example, reading a uint32_t object through a float*) produces undefined behaviour. So... is my code example above undefined behaviour?
I searched for a way to do it "correctly" and came across this post:
What is the modern, correct way to do type punning in C++?
The accepted answer just tells us to "use std::memcpy", or "std::bit_cast" if the compiler supports it (mine doesn't).
I have also searched some other forums and read through some lengthy discussions (most of which were above my level of knowledge) but most of them agreed: Just use std::memcpy.
So... do I do it like this instead?
//Version 2
#include <iostream>
#include <cstdint>
#include <cstring>
int main()
{
uint32_t bits{0x3dfcb924};
float num {};
std::memcpy(&num, &bits, sizeof(bits));
std::cout << num << std::endl;
}
Here, &num and &bits are implicitly converted to a void-pointer, right? Is that ok?
Still... is version 1 REALLY undefined behaviour? I seem to recall some source (which I unfortunately can't link here because I can't find it again) saying that the strict aliasing rule only applies when you try to convert to a class type, and that reinterpreting between fundamental types is fine. Is this true or total nonsense?
Also... in version 1 I use C-style casting to convert a uint32_t* to a float*.
I recently learned that a C-style cast will just attempt the various C++ casts in a certain order (https://en.cppreference.com/w/cpp/language/explicit_cast). Also, I heard I should generally avoid C-style casts for that reason.
So IF version 1 was fine, would it be better to just do it like this instead?
//Version 3
#include <iostream>
#include <cstdint>
int main()
{
uint32_t bits{0x3dfcb924};
float num {*reinterpret_cast<float*>(&bits)};
std::cout << num << std::endl;
}
From my understanding, reinterpret_cast is used to convert a pointer to type A into a pointer to type B, "reinterpreting" the underlying bits in the process, which is exactly what I want to do. I believed that version 1 did exactly this anyway, since the C-style cast will detect that and effectively perform a reinterpret_cast. If that were the case, Version 1 and Version 3 would be identical, since they both do a reinterpret_cast, only that Version 3 does so explicitly. Is that correct?
So... which one should I use? Version 1, Version 2 or Version 3? And why?
All three versions seem to work on my machine by the way.
EDIT: Forgot to mention... if Version 3 WAS undefined behaviour, what is the point of reinterpret_cast then anyway? I looked at this post:
When to use reinterpret_cast?
But I didn't really find an answer that I understood. So... what is reinterpret_cast good for then?
None of them. Use std::bit_cast (C++20) instead. UB is UB. You can't trust that it will work "next time".
#include <iostream>
#include <cstdint>
#include <bit>
int main() {
uint32_t bits{0x3dfcb924}; //bits describe "0.1234" as IEEE 754 floating point
float num = std::bit_cast<float>(bits);
std::cout << num << std::endl;
}
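If your compiler does not provide C++20's std::bit_cast yet, a small memcpy-based helper (hypothetical name bit_cast_via_memcpy) gives you the same well-defined behaviour as Version 2, just behind a reusable interface:
#include <cstdint>
#include <cstring>
#include <iostream>
#include <type_traits>
// Hypothetical pre-C++20 stand-in for std::bit_cast, built on std::memcpy.
template <typename To, typename From>
To bit_cast_via_memcpy(const From& from)
{
    static_assert(sizeof(To) == sizeof(From), "sizes must match");
    static_assert(std::is_trivially_copyable<From>::value &&
                  std::is_trivially_copyable<To>::value,
                  "types must be trivially copyable");
    To to;
    std::memcpy(&to, &from, sizeof(To)); // copies the object representation, no aliasing issues
    return to;
}
int main()
{
    uint32_t bits{0x3dfcb924};
    std::cout << bit_cast_via_memcpy<float>(bits) << std::endl; // prints ~0.1234
}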
You could use a union like this:
#include <iostream>
#include <cstdint>
union floatbits {
uint32_t bits;
float fp;
};
int main()
{
floatbits fb {};
fb.bits = 0x3dfcb924; // float 0.1234
std::cout << fb.fp << std::endl;
}
Of course, the whole concept depends on uint32_t being the same size as float, which may not always be true. Be aware, too, that reading a union member other than the one last written is itself undefined behaviour in standard C++ (unlike in C), even though most compilers support it as an extension.

Why can the std::cout display value less than the minimum value of a float? [duplicate]

When I run this code:
#include <limits>
#include <cstdio>
#define T double
int main()
{
static const T val = std::numeric_limits<T>::min();
printf( "%g/2 = %g\n", val, val/2 );
}
I would expect to see an unpredictable result.
But I get the correct answer:
(16:53) > clang++ test_division.cpp -o test_division
(16:54) > ./test_division
2.22507e-308/2 = 1.11254e-308
How is this possible?
Because min gives you the smallest normalized value. You can still have smaller denormalized values (see http://en.wikipedia.org/wiki/Denormalized_number).
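A minimal sketch of the difference: min() is the smallest positive normalized double, denorm_min() is the smallest positive subnormal one, and dividing min() by 2 lands in that subnormal range:
#include <cstdio>
#include <limits>
int main()
{
    // Smallest positive normalized double vs. smallest positive subnormal double.
    std::printf("min        = %g\n", std::numeric_limits<double>::min());
    std::printf("denorm_min = %g\n", std::numeric_limits<double>::denorm_min());
    std::printf("min / 2    = %g\n", std::numeric_limits<double>::min() / 2); // subnormal, still representable
}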
Historical reasons. std::numeric_limits was originally built
around the contents of <limits.h> (where you have e.g.
INT_MIN) and <float.h> (where you have e.g. DBL_MIN).
These two files were (I suspect) designed by different people;
people doing floating point don't need a separate most positive
and most negative value, because the most negative is always the
negation of the most positive, but they do need to know the
smallest value greater than 0. Regretfully, the values have the
same pattern for the name, and std::numeric_limits ended up
defining the semantics of min differently depending on
std::numeric_limits<>::is_integer.
This makes template programming more awkward; you keep having to
do things like std::numeric_limits<T>::is_integer ? std::numeric_limits<T>::min() : -std::numeric_limits<T>::max()
so C++11 adds std::numeric_limits<>::lowest(), which does
exactly what you'd expect.
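A short sketch of the difference between min() and lowest():
#include <iostream>
#include <limits>
int main()
{
    // For integer types min() and lowest() agree; for floating point they do not.
    std::cout << std::numeric_limits<int>::min() << ' '
              << std::numeric_limits<int>::lowest() << '\n';    // both are INT_MIN
    std::cout << std::numeric_limits<double>::min() << ' '
              << std::numeric_limits<double>::lowest() << '\n'; // smallest positive vs. most negative
}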

Code is unable to pick the macro declared

In the code below, the output values are not the ones defined in the macro. Is that because the values have to be available before the preprocessing stage?
#define INT_MAX 100
#include <iostream>
using namespace std;
int main()
{
int x = INT_MAX;
x++;
cout << x << INT_MAX;
}
Result is -2147483648
There is a macro named INT_MAX defined in limits.h. I assume that iostream includes limits.h and overwrites your own definition of INT_MAX.
This causes an integer overflow at x++ because INT_MAX is the largest value that can be represented by an integer.
What is happening is that after you are defining INT_MAX yourself, you are including iostream. That pulls in limits.h which redefines INT_MAX to be the maximum available 32-bit int - see http://www.cplusplus.com/reference/climits. Incrementing an int with the maximum value is undefined, but wraps around to the minimum possible int value on most CPU architecture/compiler combinations.
Depending on your compiler warning level, you should be getting a warning about INT_MAX being redefined. If you define it to 100 after the include statement, you should get 101.
Redefining macros provided by the standard library tends to lead to confusion, so I recommend you to pick a different name for your macro.
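For example, a minimal sketch with a differently named macro (hypothetical name MY_INT_MAX), defined after the include so nothing can redefine it, behaves as expected:
#include <iostream>
#define MY_INT_MAX 100 // defined after the include, so the standard headers cannot overwrite it
int main()
{
    int x = MY_INT_MAX;
    x++;
    std::cout << x << ' ' << MY_INT_MAX << '\n'; // prints 101 100
}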

Maximum value that can be stored in an integer type in C++

I have the following program in C++
#include <iostream>
#include <cstdio>
#include <cstring>
#include <cstdlib>
#include <limits>
using namespace std;
int main()
{
printf("range of int: %d to %d", SHRT_MIN, SHRT_MAX);
int a = 1000006;
printf("\n Integer a is equal to %d", a);
return 0;
}
My question is - How is a able to store an integer larger than the MAX limit?
See http://en.cppreference.com/w/cpp/header/climits and http://en.cppreference.com/w/cpp/types/numeric_limits
SHRT_MAX is the maximum value for an object of type short int, but a is of type int, so the appropriate constant would be INT_MAX. SHRT_MAX is typically 32767 (2¹⁵ - 1), while INT_MAX is typically 2147483647 (2³¹ - 1) on common 32- and 64-bit platforms.
Also, as pointed out in a comment above, you might also rather want to run
#include <limits>
#include <iostream>
int main() {
std::cout << "type\tlowest\thighest\n";
std::cout << "int\t"
<< std::numeric_limits<int>::lowest() << '\t'
<< std::numeric_limits<int>::max() << '\n';
return 0;
}
in some cases (see INT_[MIN|MAX] limit macros vs numeric_limits<T> ) to determine these values (code copied from reference page mentioned above).
On a side note, if for some reason the width of the integer types is relevant to your code, you might also want to consider looking at http://en.cppreference.com/w/cpp/types/integer and http://en.cppreference.com/w/cpp/header/cstdint for fixed width integer types (see also Is there any reason not to use fixed width integer types (e.g. uint8_t)? for a discussion).
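For instance, a small sketch printing the guaranteed ranges of two fixed width types from <cstdint>:
#include <cstdint>
#include <iostream>
#include <limits>
int main()
{
    // Fixed width types have the same range on every platform that provides them.
    std::cout << "int32_t: " << std::numeric_limits<std::int32_t>::min() << " to "
              << std::numeric_limits<std::int32_t>::max() << '\n';
    std::cout << "int64_t: " << std::numeric_limits<std::int64_t>::min() << " to "
              << std::numeric_limits<std::int64_t>::max() << '\n';
}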
An integer type variable is a variable that can only hold whole numbers (e.g. -2, -1, 0, 1, 2). C++ has several integer types available for use: char, short, int, long, and long long, each in signed and unsigned flavours. The main difference between these integer types is that they have varying sizes.
Your variable is of type int ( not short )
SHRT_MIN   -32768             minimum value for a variable of type short
SHRT_MAX   32767              maximum value for a variable of type short
INT_MIN    -2147483647 - 1    minimum value for a variable of type int
INT_MAX    2147483647         maximum value for a variable of type int
And a is able to store 1000006 because 1000006 < 2147483647 (INT_MAX), so there is no issue :)

What is 1LL or 2LL in C and C++?

I was looking at some of the solutions in Google Code Jam and some people used this things that I had never seen before. For example,
2LL*r+1LL
What does 2LL and 1LL mean?
Their includes look like this:
#include <math.h>
#include <algorithm>
#define _USE_MATH_DEFINES
or
#include <cmath>
The LL makes the integer literal of type long long.
So 2LL, is a 2 of type long long.
Without the LL, the literal would only be of type int.
This matters when you're doing stuff like this:
1 << 40
1LL << 40
With just the literal 1 (assuming int is 32 bits wide), you shift beyond the width of the integer type, which is undefined behavior.
With 1LL, you set the type to long long beforehand, and now it will properly yield 2^40.
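A minimal sketch showing the difference:
#include <iostream>
int main()
{
    // long long is at least 64 bits wide, so shifting by 40 is well defined.
    long long big = 1LL << 40;
    std::cout << big << '\n'; // prints 1099511627776, i.e. 2^40
    // By contrast, (1 << 40) with a 32-bit int would be undefined behavior.
}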