Is there a way to find out the limits of char in C++, along the lines of what's provided for int (std::numeric_limits<int>::min())?
std::numeric_limits<char>::min() should work.
If you are printing the value, make sure you convert it to an integer first. By default, the C++ I/O streams print 8-bit integer types as characters rather than as numbers (edit: they don't really convert like that -- see the comment by @MSalters).
code:
static const auto min_signed_char = std::numeric_limits<char>::min();
std::cout << "char min numerical value: "
          << static_cast<int>(min_signed_char) << "\n";
Second edit (addressing the comment by @MSalters):
Also, your min_signed_char suggests that char is signed. That is an incorrect assumption - char has the same range as either signed char or unsigned char.
While plain char has the same size (one byte) as both signed char and unsigned char, it doesn't necessarily have the same range as both:
The code:
#include <limits>
#include <iostream>
int main(int argc, char* argv[])
{
    std::cout << "min char: "
              << static_cast<int>(std::numeric_limits<char>::min()) << "\n";
    std::cout << "min unsigned char: "
              << static_cast<int>(std::numeric_limits<unsigned char>::min()) << "\n";
}
produces the output:
min char: -128
min unsigned char: 0
That is, while the width of the types is the same (8 bits), the ranges themselves depend on the signedness. (On this platform char is signed; on a platform where it is unsigned, the first line would print 0.)
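Whether plain char uses the signed or the unsigned range is implementation-defined. A minimal check to see which one your compiler picked:
#include <iostream>
#include <limits>

int main()
{
    // Reports whether plain char behaves like signed char or unsigned char.
    std::cout << "char is "
              << (std::numeric_limits<char>::is_signed ? "signed" : "unsigned")
              << " on this platform\n";
}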
Related
Behold my code:
#include <cstdint>
#include <iostream>

int main()
{
    uint8_t no_value = 0xFF;
    std::cout << "novalue: " << no_value << std::endl;
    return 0;
}
Why does this output: novalue: ▒
I was expecting -1.
After all, 0xFF interpreted as a signed 8-bit value is -1.
uint8_t is most likely typedef-ed to unsigned char. When you pass it to operator<<, the overload for unsigned char is selected, which causes your 0xFF value to be interpreted as a character code and printed as "garbage".
If you really want to see -1, you should try this:
#include <iostream>
#include <stdint.h>
int main()
{
    uint8_t no_value = 0xFF;
    std::cout << "novalue (cast): " << (int)(int8_t)no_value << std::endl;
    return 0;
}
Note that I first cast to int8_t, which causes your previously unsigned value to be interpreted as a signed value; this is where 255 becomes -1. Then I cast to int, so that << understands it to mean "integer" instead of "character".
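The same double cast written with C++-style casts, as a sketch:
#include <cstdint>
#include <iostream>

int main()
{
    uint8_t no_value = 0xFF;

    // Reinterpret the value as signed (255 -> -1), then widen to int
    // so that operator<< prints a number instead of a character.
    std::cout << "novalue (cast): "
              << static_cast<int>(static_cast<int8_t>(no_value)) << std::endl;
    return 0;
}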
Your confusion comes from the fact that Windows calculator doesn't give you options for signed / unsigned -- it always considers values signed. But by using uint8_t, you made the value unsigned.
Try this
#include <cstdint>
#include <iostream>

int main()
{
    uint8_t no_value = 0x41;
    std::cout << "novalue: " << no_value << std::endl;
    return 0;
}
You will get this output:
novalue: A
uint8_t is probably the same thing as unsigned char.
std::cout will output a char as the character itself, not as its numeric value.
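So to see the number, widen it before printing; a minimal sketch:
#include <cstdint>
#include <iostream>

int main()
{
    uint8_t no_value = 0xFF;
    // Casting to int selects the numeric overload of operator<<.
    std::cout << "novalue: " << static_cast<int>(no_value) << std::endl; // prints 255
    return 0;
}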
I have a weird problem about working with integers in C++.
I wrote a simple program that sets a value to a variable and then prints it, but it is not working as expected.
My program has only two lines of code:
uint8_t aa = 5;
cout << "value is " << aa << endl;
The output of this program is value is
I.e., it prints blank for aa.
When I change uint8_t to uint16_t the above code works like a charm.
I use Ubuntu 12.04 (Precise Pangolin), 64-bit, and my compiler version is:
gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
It doesn't really print a blank; most probably it prints the ASCII character with value 5, which is non-printable (invisible). There are a number of non-printable ASCII character codes, most of them below value 32 (32 itself being the blank/space).
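You can confirm that character code 5 is a control character with std::isprint; a small sketch:
#include <cctype>
#include <iostream>

int main()
{
    // ASCII 5 (ENQ) is a control code, so nothing visible is written for it.
    std::cout << "isprint(5):   " << (std::isprint(5) ? "yes" : "no") << '\n';   // no
    std::cout << "isprint('5'): " << (std::isprint('5') ? "yes" : "no") << '\n'; // yes
    return 0;
}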
You have to convert aa to unsigned int to output its numeric value, since ostream& operator<<(ostream&, unsigned char) prints it as a character instead.
uint8_t aa=5;
cout << "value is " << unsigned(aa) << endl;
Adding a unary + operator before the variable promotes it (narrow integer types promote to int), which gives the printable numeric value instead of the ASCII character (in the case of char types).
uint8_t aa = 5;
cout<<"value is "<< +aa <<endl; // value is 5
uint8_t will most likely be a typedef for unsigned char. The ostream class has a special overload for unsigned char, i.e. it prints the character with the number 5, which is non-printable, hence the empty space.
Making use of ADL (Argument-dependent name lookup):
#include <cstdint>
#include <iostream>
#include <type_traits>

namespace numerical_chars {
inline std::ostream &operator<<(std::ostream &os, char c) {
    return std::is_signed<char>::value ? os << static_cast<int>(c)
                                       : os << static_cast<unsigned int>(c);
}

inline std::ostream &operator<<(std::ostream &os, signed char c) {
    return os << static_cast<int>(c);
}

inline std::ostream &operator<<(std::ostream &os, unsigned char c) {
    return os << static_cast<unsigned int>(c);
}
}
int main() {
    using namespace std;

    uint8_t i = 42;

    {
        cout << i << endl;
    }

    {
        using namespace numerical_chars;
        cout << i << endl;
    }
}
output:
*
42
A custom stream manipulator would also be possible.
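For illustration, a minimal sketch of that idea using a small wrapper type (the name as_number is hypothetical) rather than a true manipulator:
#include <cstdint>
#include <iostream>

// Hypothetical wrapper: stores the value widened to int so that
// operator<< prints a number instead of a character.
struct as_number {
    int value;
    explicit as_number(char c) : value(c) {}
    explicit as_number(signed char c) : value(c) {}
    explicit as_number(unsigned char c) : value(c) {}
};

inline std::ostream &operator<<(std::ostream &os, as_number n) {
    return os << n.value;
}

int main() {
    uint8_t i = 42;
    std::cout << as_number(i) << std::endl; // prints 42, not '*'
}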
The unary plus operator is a neat idiom too (cout << +i << endl).
It's because the output operator treats the uint8_t like a char (uint8_t is usually just an alias for unsigned char), so it prints the character with code 5 (in ASCII, the most common character encoding).
cout is treating aa as a char with ASCII value 5, which is an unprintable character; try casting it to int before printing.
The operator<<() overload between std::ostream and char is a non-member function. You can explicitly use the member function to treat a char (or a uint8_t) as an int.
#include <cstdint>
#include <iostream>

int main()
{
    uint8_t aa = 5;

    std::cout << "value is ";
    std::cout.operator<<(aa);
    std::cout << std::endl;

    return 0;
}
Output:
value is 5
As others have said, the problem occurs because the standard stream treats signed char and unsigned char as single characters, not as numbers.
Here is my solution with minimal code changes:
uint8_t aa = 5;
cout << "value is " << aa + 0 << endl;
Adding "+ 0" is safe with any number, including floating point.
For integer types, it changes the type of the result to int if sizeof(aa) < sizeof(int) (integral promotion), and leaves the type unchanged if sizeof(aa) >= sizeof(int).
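You can verify the promotion at compile time; a small sketch:
#include <cstdint>
#include <type_traits>

int main()
{
    uint8_t aa = 5;
    // "aa + 0" undergoes integral promotion, so the result has type int.
    static_assert(std::is_same<decltype(aa + 0), int>::value,
                  "narrow integer types promote to int");
    (void)aa;
    return 0;
}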
This solution is also good for preparing int8_t to be printed to stream while some other solutions are not so good:
int8_t aa = -120;
cout << "value is " << aa + 0 << endl;
cout << "bad value is " << unsigned(aa) << endl;
Output:
value is -120
bad value is 4294967176
(4294967176 is 2^32 - 120: converting the negative value to unsigned wraps modulo 2^32.)
P.S. Solution with ADL given by pepper_chico and πάντα ῥεῖ is really beautiful.
I am running the following C++ code on Coliru:
#include <iostream>
#include <string>
int main()
{
    int num1 = 208;
    unsigned char uc_num1 = (unsigned char) num1;
    std::cout << "test1: " << uc_num1 << "\n";

    int num2 = 255;
    unsigned char uc_num2 = (unsigned char) num2;
    std::cout << "test2: " << uc_num2 << "\n";
}
I am getting the output:
test1: �
test2: �
This is a simplified example of my code.
Why does this not print out:
test1: 208
test2: 255
Am I misusing std::cout, or am I not doing the casting correctly?
More background
I want to convert from int to unsigned char (rather than unsigned char*). I know that all my integers will be between 0 and 255 because I am using them in the RGBA color model.
I want to use LodePNG to encode images. The library in example_encode.cpp uses unsigned chars in std::vector<unsigned char>& image:
//Example 1
//Encode from raw pixels to disk with a single function call
//The image argument has width * height RGBA pixels or width * height * 4 bytes
void encodeOneStep(const char* filename, std::vector<unsigned char>& image, unsigned width, unsigned height)
{
    //Encode the image
    unsigned error = lodepng::encode(filename, image, width, height);

    //if there's an error, display it
    if(error) std::cout << "encoder error " << error << ": " << lodepng_error_text(error) << std::endl;
}
std::cout is correct =)
Press ALT then 2 0 8 on the numeric keypad: that is the character you are printing with test1. The console might not know how to print it properly, so it outputs the question mark. The same thing happens with 255. Also, after reading the PNG and putting it in the std::vector, there is no use in writing it to the screen: it is binary data, not printable text.
If you want to see "208" and "255", you should not convert them to unsigned char first, or you should cast back to a numeric type such as int when printing, like this:
std::cout << num1 << std::endl;
std::cout << (int) uc_num1 << std::endl;
You are looking at a special case of std::cout which is not easy to understand at first.
When std::cout << uc_num1 is evaluated, the overload of operator<< is selected based on the type of the right-hand operand. Since uc_num1 is an unsigned char, the character overload is chosen and no numeric conversion is performed, because unsigned char values are assumed to be printable characters. Try this:
unsigned char uc_num3 = 65;
std::cout << uc_num3 << std::endl;
If you write std::cout << num1, then cout will realize that you are printing an int. It will then transform the int into a string and print that string for you.
You might want to read about C++ operator overloading to understand how this works, but it is not crucial at the moment; you just need to realize that std::cout can behave differently for each data type you try to print.
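For example, the same numeric value prints differently depending on its static type; a quick sketch:
#include <iostream>

int main()
{
    int n = 65;
    unsigned char c = 65;

    std::cout << n << std::endl; // prints 65 (numeric overload)
    std::cout << c << std::endl; // prints A  (character overload)
    return 0;
}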
Is char signed or unsigned on OS X?
I put the following snippet together to test, but was wondering how to tell for sure?
char a(0x80); //fill most sig bit
unsigned char b(0x80); //fill most sig bit
cout<<"char ";
(a==b)? cout<<"is not" : cout<<"is"; //compare most sig bits in diff't chars
cout<<" signed\n";
The result was: char is signed
I'd like to know how to find this out without a piece of trick code.
Check the value of std::numeric_limits<char>::is_signed
#include <iostream>
#include <limits>
int main() {
    std::cout << "char "
              << (std::numeric_limits<char>::is_signed ? "is" : "is not")
              << " signed.\n";
    return 0;
}
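An equivalent check without <limits> uses the CHAR_MIN macro from <climits>; it is 0 when plain char is unsigned and negative when it is signed:
#include <climits>
#include <iostream>

int main() {
    std::cout << "char "
              << (CHAR_MIN < 0 ? "is" : "is not")
              << " signed.\n";
    return 0;
}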
Disclaimer: I'm new to programming, working my way through C++ Primer Plus, 6th ed.
I'm working through Listing 3.1.
#include <iostream>
#include <climits>
int main()
{
    using namespace std;
    int n_int = INT_MAX;

    cout << "int is " << sizeof n_int << " bytes." << endl;
    return 0;
}
So I get that this creates a variable and sets it to the maximum int value.
However, is there any reason why I should not and can't go:
cout << "int is " << sizeof (INT_MAX) << " bytes." << endl;
It gives the correct length. But when I try it with SHRT_MAX it returns 4 bytes, when I'd hoped it would return 2.
Again, with LLONG_MAX it correctly returns 8 bytes; however, LONG_MAX also returns 8, where I expected 4.
Any clarification would be great.
The values defined in <climits> are macros that expand to integer literals. The type of an integer literal is the smallest integer type that can hold the value, but no smaller than int.
So INT_MAX has type int, and sizeof INT_MAX is the same as sizeof (int). However, SHRT_MAX also has type int, so sizeof SHRT_MAX will not necessarily equal sizeof (short). (And sizeof LONG_MAX returning 8 is not actually wrong: long itself is 8 bytes on a typical 64-bit Linux system.)
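A short sketch illustrating the difference; the values in the comments are typical for a 64-bit Linux system:
#include <climits>
#include <iostream>

int main()
{
    // SHRT_MAX expands to an integer literal; integer literals are never
    // narrower than int, so sizeof sees an int, not a short.
    std::cout << "sizeof SHRT_MAX: " << sizeof SHRT_MAX << '\n'; // 4
    std::cout << "sizeof (short):  " << sizeof(short) << '\n';   // 2
    return 0;
}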