Disclaimer: I'm new to programming, working my way through C++ Primer Plus, 6th ed. I'm working through Listing 3.1.
#include <iostream>
#include <climits>

int main()
{
    using namespace std;
    int n_int = INT_MAX;
    cout << "int is " << sizeof n_int << " bytes." << endl;
    return 0;
}
So I get that this creates a variable and sets it to the max int value. However, is there any reason why I shouldn't, or can't, go:
cout << "int is " << sizeof (INT_MAX) << " bytes." << endl;
As it gives the correct size. But when I try it with SHRT_MAX it returns 4 bytes, when I'd hoped it would return 2. Again, LLONG_MAX correctly returns 8 bytes, yet LONG_MAX also returns 8, which I didn't expect.
Any clarification would be great.
The values defined in <climits> are macros that expand to integer literals. The type of an integer literal is the smallest integer type that can hold the value, but no smaller than int.
So INT_MAX will have type int, and so sizeof INT_MAX is the same as sizeof (int). However, SHRT_MAX will also have type int, and so sizeof SHRT_MAX will not necessarily equal sizeof (short).
Related
I have a weird problem about working with integers in C++.
I wrote a simple program that sets a value to a variable and then prints it, but it is not working as expected.
My program has only two lines of code:
uint8_t aa = 5;
cout << "value is " << aa << endl;
The output of this program is "value is " -- i.e., it prints a blank for aa.
When I change uint8_t to uint16_t the above code works like a charm.
I use Ubuntu 12.04 (Precise Pangolin), 64-bit, and my compiler version is:
gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
It doesn't really print a blank, but most probably the ASCII character with code 5, which is non-printable (or invisible). There are a number of invisible ASCII character codes, most of them below 32; 32 itself is actually the blank (space).
You have to convert aa to unsigned int to output the numeric value, since ostream& operator<<(ostream&, unsigned char) tries to output the visible character value.
uint8_t aa=5;
cout << "value is " << unsigned(aa) << endl;
Adding a unary + operator before a variable of any primitive numeric type will print its numerical value instead of the character (in the case of char-like types):
uint8_t aa = 5;
cout << "value is " << +aa << endl; // value is 5
uint8_t will most likely be a typedef for unsigned char. The ostream class has a special overload for unsigned char, i.e. it prints the character with the number 5, which is non-printable, hence the empty space.
Making use of ADL (Argument-dependent name lookup):
#include <cstdint>
#include <iostream>
#include <type_traits>

namespace numerical_chars {
inline std::ostream &operator<<(std::ostream &os, char c) {
    return std::is_signed<char>::value ? os << static_cast<int>(c)
                                       : os << static_cast<unsigned int>(c);
}

inline std::ostream &operator<<(std::ostream &os, signed char c) {
    return os << static_cast<int>(c);
}

inline std::ostream &operator<<(std::ostream &os, unsigned char c) {
    return os << static_cast<unsigned int>(c);
}
}

int main() {
    using namespace std;

    uint8_t i = 42;

    {
        cout << i << endl;
    }

    {
        using namespace numerical_chars;
        cout << i << endl;
    }
}
output:
*
42
A custom stream manipulator would also be possible.
The unary plus operator is a neat idiom too (cout << +i << endl).
It's because the output operator treats the uint8_t like a char (uint8_t is usually just an alias for unsigned char), so it prints the character with ASCII code 5 rather than the number 5.
See e.g. this reference.
cout is treating aa as a char with ASCII value 5, which is an unprintable character; try casting it to int before printing.
The operator<<() overload between std::ostream and char is a non-member function. You can explicitly use the member function to treat a char (or a uint8_t) as an int.
#include <iostream>
#include <cstdint>

int main()
{
    uint8_t aa = 5;
    std::cout << "value is ";
    std::cout.operator<<(aa);
    std::cout << std::endl;
    return 0;
}
Output:
value is 5
As others have said, the problem occurs because the standard stream treats signed char and unsigned char as single characters, not as numbers.
Here is my solution with minimal code changes:
uint8_t aa = 5;
cout << "value is " << aa + 0 << endl;
Adding "+ 0" is safe with any number, including floating point. For integer types it changes the type of the result to int if sizeof(aa) < sizeof(int), and leaves the type unchanged if sizeof(aa) >= sizeof(int).
This solution is also good for preparing int8_t to be printed to stream while some other solutions are not so good:
int8_t aa = -120;
cout << "value is " << aa + 0 << endl;
cout << "bad value is " << unsigned(aa) << endl;
Output:
value is -120
bad value is 4294967176
P.S. Solution with ADL given by pepper_chico and πάντα ῥεῖ is really beautiful.
Consider the following:
#include <iostream>
#include <cstdint>
#include <cstdlib>

int main() {
    std::cout << std::hex
              << "0x" << std::strtoull("0xFFFFFFFFFFFFFFFF", 0, 16) << std::endl
              << "0x" << uint64_t(double(std::strtoull("0xFFFFFFFFFFFFFFFF", 0, 16))) << std::endl
              << "0x" << uint64_t(double(uint64_t(0xFFFFFFFFFFFFFFFF))) << std::endl;
    return 0;
}
Which prints:
0xffffffffffffffff
0x0
0xffffffffffffffff
The first number is just the result of converting ULLONG_MAX, from a string to a uint64_t, which works as expected.
However, if I cast the result to double and then back to uint64_t, then it prints 0, the second number.
Normally, I would attribute this to the precision inaccuracy of floats, but what further puzzles me, is that if I cast the ULLONG_MAX from uint64_t to double and then back to uint64_t, the result is correct (third number).
Why the discrepancy between the second and the third result?
EDIT (by @Radoslaw Cybulski)
For another what-is-going-on-here try this code:
#include <iostream>
#include <cstdint>
#include <cstdlib>

using namespace std;

int main() {
    uint64_t z1 = std::strtoull("0xFFFFFFFFFFFFFFFF", 0, 16);
    uint64_t z2 = 0xFFFFFFFFFFFFFFFFull;
    std::cout << z1 << " " << uint64_t(double(z1)) << "\n";
    std::cout << z2 << " " << uint64_t(double(z2)) << "\n";
    return 0;
}
which happily prints:
18446744073709551615 0
18446744073709551615 18446744073709551615
The number that is closest to 0xFFFFFFFFFFFFFFFF, and is representable by double (assuming 64 bit IEEE) is 18446744073709551616. You'll find that this is a bigger number than 0xFFFFFFFFFFFFFFFF. As such, the number is outside the representable range of uint64_t.
Of the conversion back to integer, the standard says (quoting latest draft):
[conv.fpint]
A prvalue of a floating-point type can be converted to a prvalue of an integer type.
The conversion truncates; that is, the fractional part is discarded.
The behavior is undefined if the truncated value cannot be represented in the destination type.
Why the discrepancy between the second and the third result?
Because the behaviour of the program is undefined.
Although it is mostly pointless to analyse reasons for differences in undefined behaviour (the scope of variation is limitless), my guess at the reason for the discrepancy in this case is that in one case the value is a compile-time constant, while in the other there is a call to a library function that is invoked at runtime.
I was wondering if there is any way by which one can find out what's the limit of char's in C++ on the lines of those provided for int (std::numeric_limits<int>::min())?
std::numeric_limits<char>::min() should work.
If you are printing the value, make sure you convert it to an integer first. This is because, by default, the C++ I/O streams print 8-bit integer values as their ASCII counterpart (edit: they don't really convert like that -- see the comment by @MSalters).
code:
static const auto min_signed_char = std::numeric_limits<char>::min();
std::cout << "char min numerical value: "
          << static_cast<int>(min_signed_char) << "\n";
Second edit (addressing comment by @MSalters):
Also, your min_signed_char suggests that char is signed. That is an incorrect assumption - char has the same range as either signed char or unsigned char.
While a char has the same bit size (one byte), it doesn't have the same range:
The code:
#include <limits>
#include <iostream>

int main(int argc, char* argv[])
{
    std::cout << "min char: "
              << static_cast<int>(std::numeric_limits<char>::min()) << "\n";
    std::cout << "min unsigned char: "
              << static_cast<int>(std::numeric_limits<unsigned char>::min()) << "\n";
}
produces the output:
min char: -128
min unsigned char: 0
That is, while the size of the ranges is the same (8 bits), the ranges themselves do depend on the sign.
I have not run it through enough tests yet, but for some reason, with certain non-negative inputs, this function will sometimes return a negative value. I have done a lot of manual testing with different values in a calculator, but I have yet to reproduce this behavior there.
I was wondering if someone would take a look at see if I am missing something.
float calcPop(int popRand1, int popRand2, int popRand3, float pERand, float pSRand)
{
    return ((((((23000 * popRand1) * popRand2) * pERand) * pSRand) * popRand3) / 8);
}
The variables are all contain randomly generated values:
popRand1: between 1 and 30
popRand2: between 10 and 30
popRand3: between 50 and 100
pSRand: between 1 and 1000
pERand: between 1.0f and 5500.0f which is then multiplied by 0.001f before being passed to the function above
Edit:
Alright, so after following the execution a bit more closely, it is not the fault of this function directly. It produces a hugely positive float, which then flips negative when I use this code later on:
pPMax = (int)pPStore;
pPStore is a float that holds popCalc's return.
So the question now is: how do I stop the formula from doing this? Testing even with very high values in a calculator has never displayed this behavior. Is there something in how the compiler processes the order of operations that is causing this, or are my values simply going too high?
In this case it seems that when you convert back to an int after the function returns, it is possible that you exceed the maximum value of an int. My suggestion is to use a type that can represent a greater range of values.
#include <iostream>
#include <limits>
#include <boost/multiprecision/cpp_int.hpp>

int main(int argc, char* argv[])
{
    std::cout << "int min: " << std::numeric_limits<int>::min() << std::endl;
    std::cout << "int max: " << std::numeric_limits<int>::max() << std::endl;
    std::cout << "long min: " << std::numeric_limits<long>::min() << std::endl;
    std::cout << "long max: " << std::numeric_limits<long>::max() << std::endl;
    std::cout << "long long min: " << std::numeric_limits<long long>::min() << std::endl;
    std::cout << "long long max: " << std::numeric_limits<long long>::max() << std::endl;

    boost::multiprecision::cpp_int bigint = 113850000000;
    int smallint = 113850000000; // does not fit in 32-bit int; result is implementation-defined

    std::cout << bigint << std::endl;
    std::cout << smallint << std::endl;

    std::cin.get();
    return 0;
}
As you can see here, there are other types which have a bigger range. If these do not suffice I believe the latest boost version has just the thing for you.
Throw an exception:
if (pPStore > static_cast<float>(INT_MAX)) {
    throw std::overflow_error("exceeds integer size");
} else {
    pPMax = static_cast<int>(pPStore);
}
or use float instead of int.
When you multiply the maximum values of each term together you get a value around 1.42312e+12, which is somewhat larger than a 32-bit integer can hold, so let's see what the standard has to say about floating-point-to-integer conversions, in 4.9/1:

A prvalue of a floating point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.
So we learn that for a large segment of the possible result values your function can generate, the conversion back to a 32-bit integer is undefined, which includes producing negative numbers.
You have a few options here. You could use a 64 bit integer type (long or long long possibly) to hold the value instead of truncating down to int.
Alternately you could scale down the results of your function by a factor of around 1000 or so, to keep the maximal results within the range of values that a 32 bit integer could hold.