#include <iostream>
#include <climits>
#include <cinttypes>
using namespace std;
int main()
{
uint16_t i = 0;
cout << USHRT_MAX << '\n' << i - 1 << '\n';
return 0;
}
Output
65535
-1
I expected two equal outputs, but they weren't equal. Isn't this non-standard-compliant behaviour?
*System: Windows7
*Compile Option: g++ -o $(FileNameNoExt) $(FileName) -std=c++11 -Wall -Wextra
When C++ sees the expression
i - 1
it promotes i to int (the usual integer promotion), so the result of the expression is an int; hence the output of -1.
To fix this, either cast the overall result of the expression back to uint16_t, or do something like
i--;
to modify i in-place, then print i.
Hope this helps!
i is promoted to an int before the evaluation of i - 1, so the expression i - 1 is itself evaluated as a signed integer (int). Try:
cout << USHRT_MAX << '\n' << (uint16_t)(i - 1) << '\n';
Related
Hex value of 6378624653 is : 0x17C32168D
But this code prints : 0x7C32168D
#include <cstdio>
int main()
{
int x = 6378624653;
printf("0x%x", x);
}
Can anyone explain why this happens, and what should I do to get the right output?
The result you got means that an object of type int cannot store a value as large as 6378624653.
Try the following test program.
#include <iostream>
#include <limits>
int main()
{
std::cout << std::numeric_limits<int>::max() << '\n';
std::cout << 6378624653 << '\n';
std::cout << std::numeric_limits<unsigned int>::max() << '\n';
}
and see what the maximum value is that can be stored in an object of type int. With most compilers you will get the following output:
2147483647
6378624653
4294967295
That is, even objects of type unsigned int cannot store a value as large as 6378624653.
You should declare the variable x as having the type unsigned long long int.
Here is a demonstration program.
#include <cstdio>
int main()
{
unsigned long long int x = 6378624653;
printf( "%#llx\n", x );
}
The program output is
0x17c32168d
This program, built with -std=c++20 flag:
#include <iostream>
using namespace std;
int main() {
auto [n, m] = minmax(3, 4);
cout << n << " " << m << endl;
}
produces the expected result 3 4 when no -Ox optimization flags are used. With optimization flags it outputs 0 0. I tried it with multiple gcc versions with the -O1, -O2 and -O3 flags.
Clang 13 works fine, but clang 10 and 11 output 0 4198864 at optimization level -O2 and higher. Icc works fine. What is happening here?
The code is here: https://godbolt.org/z/Wd4ex8bej
The overload of std::minmax taking two arguments returns a pair of references to the arguments. The lifetime of the arguments, however, ends at the end of the full expression, since they are temporaries.
Therefore the output line is reading dangling references, causing your program to have undefined behavior.
Instead you can use std::tie to receive the results by value:
#include <iostream>
#include <tuple>
#include <algorithm>
int main() {
int n, m;
std::tie(n,m) = std::minmax(3, 4);
std::cout << n << " " << m << std::endl;
}
Or you can use the std::initializer_list overload of std::minmax, which returns a pair of values:
#include <iostream>
#include <algorithm>
int main() {
auto [n, m] = std::minmax({3, 4});
std::cout << n << " " << m << std::endl;
}
I can't understand why this loop prints "INFINITE". If the string length is 1, how can length()-2 result in a big integer?
for(int i=0;i<s.length()-2;i++)
{
cout<<"INFINITE"<<endl;
}
std::string::length() returns a size_t, which is an unsigned integer type. You are experiencing unsigned wraparound. In pseudocode:
0 - 1 = maximum value of the unsigned type
In your case specifically it is:
(size_t)1 - 2 = SIZE_MAX
where SIZE_MAX equals 2^32 - 1 on 32-bit platforms and 2^64 - 1 on 64-bit platforms.
std::string::length() returns a std::string::size_type.
std::string::size_type is specified to be the same type as allocator_traits<>::size_type (of the string's allocator).
This is specified to be an unsigned type.
Hence, the number will wrap (defined behaviour) and become huge. Precisely how huge will depend on the architecture.
You can test it on your architecture with this little program:
#include <limits>
#include <iostream>
#include <string>
#include <type_traits>
#include <iomanip>
int main() {
using size_type = std::string::size_type;
std::cout << "unsigned : " << std::boolalpha << std::is_unsigned<size_type>::value << std::endl;
std::cout << "size : " << std::numeric_limits<size_type>::digits << " bits" << std::endl;
std::cout << "npos : " << std::hex << std::string::npos << std::endl;
}
in the case of apple x64:
unsigned : true
size : 64 bits
npos : ffffffffffffffff
Behold my code:
#include <iostream>
int main()
{
uint8_t no_value = 0xFF;
std::cout << "novalue: " << no_value << std::endl;
return 0;
}
Why does this output: novalue: ▒
(On my terminal it shows up as a garbage character; screenshot omitted.)
I was expecting -1. After all, if we convert FF from hex in Windows Calculator, we get -1 (screenshots omitted).
uint8_t is most likely typedef-ed to unsigned char. When you pass this to the << operator, the overload for char is selected, which causes your 0xFF value to be interpreted as an ASCII character code, displaying the "garbage".
If you really want to see -1, you should try this:
#include <iostream>
#include <stdint.h>
int main()
{
uint8_t no_value = 0xFF;
std::cout << "novalue (cast): " << (int)(int8_t)no_value << std::endl;
return 0;
}
Note that I first cast to int8_t, which causes your previously unsigned value to instead be interpreted as a signed value. This is where 255 becomes -1. Then I cast to int, so that << understands it to mean "integer" instead of "character".
Your confusion comes from the fact that Windows Calculator doesn't give you options for signed/unsigned -- it always considers values signed. So when you used a uint8_t, you made the value unsigned.
Try this
#include <iostream>
int main()
{
uint8_t no_value = 0x41;
std::cout << "novalue: " << no_value << std::endl;
return 0;
}
You will get this output:
novalue: A
uint8_t is probably the same thing as unsigned char.
std::cout with chars will output the char itself, not the char's numeric value.
#include <iostream>
#include <string>
#include <vector>
using namespace std;
int main()
{
string str("0");
vector<int> vec;
int upper_bound = str.size()-(3-vec.size());
int i = 0;
if ( i < upper_bound ) // Line 1
cout << "i < upper_bound - 1\n";
else
cout << "pass\n";
if ( i < (str.size()-(3-vec.size())) ) // Line 2
cout << "i < upper_bound - 2\n";
else
cout << "pass\n";
return 0;
}
Output is as follows:
pass
i < upper_bound - 2
Question> Why Line 1 and Line 2 print different results?
Mathematically, str.size()-(3-vec.size()) is 1-(3-0), which is -2. However, since these are unsigned values, the result is also unsigned, and so has a large positive value.
Converting this to int to initialise upper_bound gives an implementation-defined result (well-defined wraparound since C++20), but in practice will give you -2; so the first test passes.
The second test compares with the large unsigned value rather than -2.
The problem is that the expression str.size()-(3-vec.size()) has an unsigned type, so it can never be negative. It is unsigned because 1) str.size() and vec.size() have unsigned types, according to the definitions of std::string::size_type and std::vector<int>::size_type, and 2) the usual arithmetic conversions.
In the first expression you explicitly assigned an unsigned expression to an object of type int. So, as the sign bit is set, the value of this object is negative.
To understand this try for example to execute these statements
std::cout << -1 << std::endl;
std::cout << -1u << std::endl;
or
std::cout << 0u -1 << std::endl;
Here -1u and 0u - 1 have type unsigned int and the same bit combination as -1.
When comparing signed and unsigned types, the compiler converts the signed operand to unsigned. That creates the weird result.