Is char signed or unsigned on OS X in C++?

Is char signed or unsigned on OS X?
I put the following snippet together to test, but I was wondering how to tell for sure.
char a(0x80);          // fill the most significant bit
unsigned char b(0x80); // fill the most significant bit
cout << "char ";
(a == b) ? cout << "is not" : cout << "is"; // after promotion, the values compare unequal only if char is signed
cout << " signed\n";
The result was: char is signed
I'd like to know how to find this out without a piece of trick code.

Check the value of std::numeric_limits<char>::is_signed
#include <iostream>
#include <limits>

int main() {
    std::cout << "char "
              << (std::numeric_limits<char>::is_signed ? "is" : "is not")
              << " signed.\n";
    return 0;
}
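If a C++11 compiler is available, <type_traits> gives an equivalent check; a minimal sketch, shown only as an alternative to the numeric_limits version above:
#include <iostream>
#include <type_traits>

int main() {
    std::cout << "char "
              << (std::is_signed<char>::value ? "is" : "is not")
              << " signed.\n";
}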

Related

%X format specifier prints value only up to 4 bytes?

The hex value of 6378624653 is 0x17C32168D.
But this code prints 0x7C32168D:
#include <cstdio>

int main()
{
    int x = 6378624653;
    printf("0x%x", x);
}
Can anyone explain why this happens, and what should I do to get the right output?
The result you obtained means that an object of type int cannot store a value as large as 6378624653.
Try the following test program.
#include <iostream>
#include <limits>

int main()
{
    std::cout << std::numeric_limits<int>::max() << '\n';
    std::cout << 6378624653 << '\n';
    std::cout << std::numeric_limits<unsigned int>::max() << '\n';
}
and see what the maximum value is that can be stored in an object of type int. With most compilers you will get the following output:
2147483647
6378624653
4294967295
That is, even objects of type unsigned int cannot store a value as large as 6378624653.
You should declare the variable x with the type unsigned long long int.
Here is a demonstration program.
#include <cstdio>

int main()
{
    unsigned long long int x = 6378624653;
    printf( "%#llx\n", x );
}
The program output is
0x17c32168d
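As a side note, if a fixed-width 64-bit value is what is really wanted, <cinttypes> provides a matching format macro so the printf call stays portable; a small sketch under that assumption:
#include <cinttypes>
#include <cstdio>

int main()
{
    std::uint64_t x = 6378624653;  // 64 bits is wide enough for this value
    printf( "%#" PRIx64 "\n", x ); // prints 0x17c32168d
}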

sscanf into uint8 array fails

I am using sscanf to put a MAC address from a string into a uint8 array. For some reason, the uint8 array is all blank.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <iostream>
#include <string>
using namespace std;

int main()
{
    std::string mac = "00:00:00:00:00:00";
    uint8_t smac[7];
    memset(smac, 0, 7);
    sscanf(
        mac.c_str(),
        "%hhu:%hhu:%hhu:%hhu:%hhu:%hhu",
        &smac[0],
        &smac[1],
        &smac[2],
        &smac[3],
        &smac[4],
        &smac[5]
    );
    std::cout << "string: " << mac << std::endl;
    std::cout << "uint8_t: " << smac;
    return 0;
}
On most platforms uint8_t is a typedef for unsigned char. Therefore cout tries to print it as a C string, but the very first byte is zero (the string terminator), so it stops printing immediately.
A solution here would be to print all the MAC address members individually:
for (size_t c = 0; c < 6; c++) // six bytes were parsed into smac
{
    std::cout << +smac[c];
    if (c != 5)
        std::cout << ':';
}
std::cout << '\n';
The + here performs integer promotion so smac[c] will be printed as a number and not a character.
To the compiler, uint8_t and unsigned char are generally the same type. The convention when outputting a pointer to char (signed or unsigned) is to stop at the first zero byte, because that marks the end of the string.
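If the goal is to show the parsed bytes in the conventional colon-separated hex form, <iomanip> can do the padding; a minimal sketch with made-up example bytes:
#include <cstdint>
#include <iomanip>
#include <iostream>

int main()
{
    std::uint8_t smac[6] = { 0x00, 0x1a, 0x2b, 0x3c, 0x4d, 0x5e }; // example values
    std::cout << std::hex << std::setfill('0');
    for (int c = 0; c < 6; c++)
    {
        std::cout << std::setw(2) << +smac[c]; // + promotes to int, so it prints as a number
        if (c != 5)
            std::cout << ':';
    }
    std::cout << '\n'; // prints 00:1a:2b:3c:4d:5e
}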

Using std::bitset for double representation

In my application I'm trying to display the bit representation of double variables.
It works for smaller double values, but not for values around 10^30.
Code:
#include <iostream>
#include <bitset>
#include <limits>
using namespace std;

void Display(double doubleValue)
{
    bitset<sizeof(double) * 8> b(doubleValue);
    cout << "Value : " << doubleValue << endl;
    cout << "BitSet : " << b.to_string() << endl;
}
int main()
{
    Display(1000000000.0);
    Display(2000000000.0);
    Display(3000000000.0);
    Display(1000000000000000000000000000000.0);
    Display(2000000000000000000000000000000.0);
    Display(3000000000000000000000000000000.0);
    return 0;
}
Output:
/home/sujith% ./a.out
Value : 1e+09
BitSet : 0000000000000000000000000000000000111011100110101100101000000000
Value : 2e+09
BitSet : 0000000000000000000000000000000001110111001101011001010000000000
Value : 3e+09
BitSet : 0000000000000000000000000000000010110010110100000101111000000000
Value : 1e+30
BitSet : 0000000000000000000000000000000000000000000000000000000000000000
Value : 2e+30
BitSet : 0000000000000000000000000000000000000000000000000000000000000000
Value : 3e+30
BitSet : 0000000000000000000000000000000000000000000000000000000000000000
My worry is why bitset prints all 64 bits as zero for the last three values. Interestingly, cout prints the actual values as expected.
If you look at the std::bitset constructors you will see that they take either a string or an integer as argument.
That means your double value is converted to an integer first, and no standard integer type can hold such large values, which leads to undefined behavior.
If you want to get at the actual bits of the double you need a casting trick to make it work:
unsigned long long bits = *reinterpret_cast<unsigned long long*>(&doubleValue);
Note that type-punning like this is not defined by the C++ standard, but as long as sizeof(double) == sizeof(unsigned long long) it will usually work. If you want the behavior to be well-defined you have to go through char arrays and char* (or copy the bytes, as sketched below).
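The usual well-defined way to do that byte-level copy is std::memcpy; here is a minimal sketch (DisplayBits is just an illustrative name, and it assumes double and unsigned long long are both 64 bits wide):
#include <bitset>
#include <cstring>
#include <iostream>

void DisplayBits(double doubleValue)
{
    static_assert(sizeof(double) == sizeof(unsigned long long),
                  "assumes 64-bit double and 64-bit unsigned long long");
    unsigned long long bits = 0;
    std::memcpy(&bits, &doubleValue, sizeof(doubleValue)); // well-defined byte copy
    std::bitset<sizeof(double) * 8> b(bits);
    std::cout << "Value : " << doubleValue << '\n';
    std::cout << "BitSet : " << b.to_string() << '\n';
}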
Since C++11, std::bitset has a constructor taking unsigned long long, so this might work:
union udouble {
    double d;
    unsigned long long u;
};

void Display(double doubleValue)
{
    udouble ud;
    ud.d = doubleValue;
    bitset<sizeof(double) * 8> b(ud.u);
    cout << "Value : " << doubleValue << endl;
    cout << "BitSet : " << b.to_string() << endl;
}
This should give you the internal representation of a double. See the working sample code on IdeOne.
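If C++20 is available, std::bit_cast from <bit> expresses the same idea without a union or pointer cast; a brief sketch, again assuming unsigned long long is 64 bits wide:
#include <bit>
#include <bitset>
#include <iostream>

void Display(double doubleValue)
{
    auto bits = std::bit_cast<unsigned long long>(doubleValue); // well-defined in C++20
    std::bitset<sizeof(double) * 8> b(bits);
    std::cout << "Value : " << doubleValue << '\n';
    std::cout << "BitSet : " << b.to_string() << '\n';
}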

Storing the hex value FF in an unsigned 8 bit integer produces garbage instead of -1

Behold my code:
#include <cstdint>
#include <iostream>

int main()
{
    uint8_t no_value = 0xFF;
    std::cout << "novalue: " << no_value << std::endl;
    return 0;
}
Why does this output garbage (novalue: ▒) on my terminal?
I was expecting -1. After all, if we convert hex FF in the (signed) calculator, we get -1.
uint8_t is most likely typedef-ed to unsigned char. When you pass it to the << operator, the overload for char is selected, which causes your 0xFF value to be interpreted as a character code and displayed as the "garbage" you see.
If you really want to see -1, you should try this:
#include <iostream>
#include <stdint.h>

int main()
{
    uint8_t no_value = 0xFF;
    std::cout << "novalue (cast): " << (int)(int8_t)no_value << std::endl;
    return 0;
}
Note that I first cast to int8_t, which causes your previously unsigned value to be interpreted as a signed value. This is where 255 becomes -1. Then I cast to int, so that << treats it as an integer instead of a character.
Your confusion comes from the fact that the Windows calculator doesn't give you a choice between signed and unsigned; it always treats values as signed. So when you used a uint8_t, you made the value unsigned.
Try this
#include <cstdint>
#include <iostream>

int main()
{
    uint8_t no_value = 0x41;
    std::cout << "novalue: " << no_value << std::endl;
    return 0;
}
You will get this output:
novalue: A
uint8_t is probably the same thing as unsigned char, and std::cout prints a char as the character itself rather than as its numeric value.
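For completeness, the unary + trick from the sscanf answer above also forces numeric output here, but it prints the unsigned value 255; the cast through int8_t shown earlier is still needed if -1 is the result you want:
#include <cstdint>
#include <iostream>

int main()
{
    uint8_t no_value = 0xFF;
    std::cout << "novalue: " << +no_value << std::endl; // prints 255, not -1
    return 0;
}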

Limits of "char" type in C++

I was wondering if there is any way to find out the limits of char in C++, along the lines of what is provided for int (std::numeric_limits<int>::min())?
std::numeric_limits<char>::min() should work.
If you are printing the value, make sure you convert it to an integer first. By default the C++ I/O streams print 8-bit integer types as characters rather than as numbers (edit: they don't really convert like that -- see the comment by @MSalters).
code:
static const auto min_signed_char = std::numeric_limits<char>::min();
std::cout << "char min numerical value: "
          << static_cast<int>(min_signed_char) << "\n";
Second edit (addressing the comment by @MSalters): "Also, your min_signed_char suggests that char is signed. That is an incorrect assumption - char has the same range as either signed char or unsigned char."
While a char has the same size (one byte) as both signed char and unsigned char, it does not have the same range as both of them at once:
The code:
#include <limits>
#include <iostream>

int main(int argc, char* argv[])
{
    std::cout << "min char: "
              << static_cast<int>(std::numeric_limits<char>::min()) << "\n";
    std::cout << "min unsigned char: "
              << static_cast<int>(std::numeric_limits<unsigned char>::min()) << "\n";
}
produces the output:
min char: -128
min unsigned char: 0
That is, while both ranges span the same number of values (256 values in 8 bits), the actual ranges depend on whether char is signed on your platform.
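The macros from <climits> give the same information without numeric_limits, if that style is preferred; a small sketch:
#include <climits>
#include <iostream>

int main()
{
    std::cout << "CHAR_MIN: " << CHAR_MIN << "\n";   // -128 if char is signed, 0 if unsigned
    std::cout << "CHAR_MAX: " << CHAR_MAX << "\n";   // 127 if signed, 255 if unsigned
    std::cout << "UCHAR_MAX: " << UCHAR_MAX << "\n"; // 255 whenever CHAR_BIT is 8
}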