Is unsigned char('0') legal C++?

The following compiles in Visual Studio but fails to compile under g++.
int main()
{
int a = unsigned char('0');
return 0;
}
Is unsigned char() a valid way to cast in C++?

No, it's not legal.
A function-style explicit type conversion requires a simple-type-specifier, followed by a parenthesized expression-list (§5.2.3). unsigned char is not a simple-type-specifier; this is related to a question brought up by James.
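To make the rule concrete, here is a small sketch contrasting the spellings (standard behaviour; MSVC accepts the two-word form only as an extension):
int main()
{
    int a = int('0');                         // OK: int is a simple-type-specifier
    // int b = unsigned char('0');            // ill-formed: the type name is not a single word
    typedef unsigned char uchar;
    int c = uchar('0');                       // OK: a typedef-name is a simple-type-specifier
    int d = static_cast<unsigned char>('0');  // OK: static_cast accepts any type-id
    return 0;
}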
Obviously, if unsigned char were a simple-type-specifier, it would be legal. A work-around is to use an identity type trait (defined by hand below, since the standard library doesn't provide one):
template <typename T>
struct identity
{
    typedef T type;
};
And then:
int a = identity<unsigned char>::type('0');
identity<unsigned char>::type is a simple-type-specifier, and its type is simply the type of the template parameter.
Of course, you get a two-for-one with static_cast. This is the preferred casting method anyway.

The preferred method of casting in C++ is to use static_cast, like so:
int a = static_cast<unsigned char>( '0' );

Try a C-style cast, with parentheses around the type: int a = (unsigned char)('0');
or
typedef unsigned char uchar;
//inside main
int a = uchar('0');

No, it isn't: a function-style cast needs a single-word type name (a simple-type-specifier or typedef-name), and unsigned char contains a space.
A case for a C-style cast perhaps:
int main() {
    unsigned char c = (unsigned char)'0';
}

I'm pretty sure it's a Microsoft extension.

No, it isn't. But why the cast in the first place? This is perfectly valid,
int a = '0';

Why are you even trying to cast from char to unsigned char and assigning that to an int? You're putting an unsigned value into a signed int (which is legal, but uncool).
Writing '0' gives you a char with value 48 (on ASCII-based systems). You can try
int i = (int) '0';
That way, you take the char, cast it to an int, and use it. You could even say
int i = '0';
And that would do the same thing. What exactly are you trying to do?

Are you trying to get the integer 0 or the character '0' into it? On most implementations the character '0' is just the integer 48 stored in 8 bits.
As for size, the standard only guarantees that char is no larger than short int and that int is no smaller than short int; in practice that usually makes char 8 bits, short 16, and int 32 nowadays.
That is why things like 'a' + 2 yielding 'c' work, and why, given a long enough array, you can index it like array[' '] to get index 32.
If what you want is the integer value 0 from the character '0', that needs an explicit conversion, such as subtracting '0'.
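A small sketch of the difference (the numeric codes assume an ASCII-based character set):
int main()
{
    int code  = '0';        // 48: the character's code, not the digit's value
    int value = '0' - '0';  // 0: subtracting '0' yields the digit's numeric value
    char c    = 'a' + 2;    // 'c': arithmetic works on the underlying codes
    return code + value + c - 48 - 'c';  // evaluates to 0
}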

Related

Taking an index out of const char* argument

I have the following code:
int some_array[256] = { ... };
int do_stuff(const char* str)
{
int index = *str;
return some_array[index];
}
Apparently the above code causes a bug in some platforms, because *str can in fact be negative.
So I thought of two possible solutions:
1. Casting the value on assignment (unsigned int index = (unsigned char)*str;).
2. Passing const unsigned char* instead.
Edit: The rest of this question did not get a treatment, so I moved it to a new thread.
The signedness of char is indeed platform-dependent, but what you do know is that there are as many values of char as there are of unsigned char, and the conversion is injective. So you can absolutely cast the value to associate a lookup index with each character:
unsigned char idx = *str;
return arr[idx];
You should of course make sure that the arr has at least UCHAR_MAX + 1 elements. (This may cause hilarious edge cases when sizeof(unsigned long long int) == 1, which is fortunately rare.)
Characters are allowed to be signed or unsigned, depending on the platform. An assumption of unsigned range is what causes your bug.
Your do_stuff code does not treat const char* as a string representation. It uses it as a sequence of byte-sized indexes into a look-up table. Therefore, there is nothing wrong with forcing unsigned char type on the characters of your string inside do_stuff (i.e. use your solution #1). This keeps re-interpretation of char as an index localized to the implementation of do_stuff function.
Of course, this assumes that other parts of your code do treat str as a C string.
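Putting solution #1 into the function, a minimal sketch might look like this (the table size follows the UCHAR_MAX + 1 advice above, and the initializer is elided as in the question):
#include <climits>

int some_array[UCHAR_MAX + 1] = { /* ... */ };

int do_stuff(const char* str)
{
    // Force the character into unsigned char range before indexing, so a negative
    // char on signed-char platforms cannot produce a negative index.
    unsigned char index = static_cast<unsigned char>(*str);
    return some_array[index];
}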

wchar_t* to short int conversion

One of the functions in a 3rd-party class returns a wchar_t* that holds a resource ID (I don't know why it uses the wchar_t* type). I need to convert this pointer to a short int.
This method, using the AND operator, works for me, but it doesn't seem like the correct way. Is there a proper way to do this?
wchar_t* s;
short int b = (unsigned long)(s) & 0xFFFF;
wchar_t* s; // I assume this is what you meant
short int b = static_cast<short int>(reinterpret_cast<intptr_t>(s)); // intptr_t comes from <cstdint>
You could also replace short int b with auto b, and it will be deduced as short int from the type of the right-hand expression.
It returns the resource ID as a wchar_t* because that is the data type that Windows uses to carry resource identifiers. Resources can be identified by either numeric ID or by name. If numeric, the pointer itself contains the actual ID number encoded in its lower 16 bits. Otherwise it is a normal pointer to a null-terminated string elsewhere in memory. There is an IS_INTRESOURCE() macro to differentiate which is the actual case, eg:
wchar_t *s = ...;
if (IS_INTRESOURCE(s))
{
// s is a numeric ID...
WORD b = (WORD)(ULONG_PTR) s; // truncate via a pointer-sized integer rather than casting the pointer straight to WORD
...
}
else
{
// s is a null-terminated name string
...
}
Did you mean wchar_t *s; in your code?
I'd do the conversion more explicitly, going through an integer type wide enough to hold the pointer:
short int b = static_cast<short int>(reinterpret_cast<uintptr_t>(s));
If it fits your application's needs, I suggest using a data type with a fixed number of bits, e.g. uint16_t. Using short int only tells you for sure that your variable has at least 16 bits. An additional question: why not use unsigned short int instead of (signed) short int?
In general, knowing the exact number of bits makes things a little more predictable, and makes it easier to know exactly what happens when you cast or use bitmasks.
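A sketch of that suggestion (resource_id is a hypothetical helper name; as the IS_INTRESOURCE answer above explains, the low 16 bits are what carry the numeric ID):
#include <cstdint>

// Hypothetical helper: pull the 16-bit numeric resource ID out of the pointer value.
// Only meaningful when the pointer really encodes an ID rather than a name string.
std::uint16_t resource_id(wchar_t* s)
{
    return static_cast<std::uint16_t>(reinterpret_cast<std::uintptr_t>(s));
}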

how do i use fill_n() on the following array?

I have this array
unsigned char bit_table_[10][100];
What is the right way to fill it with 0?
I tried
std::fill_n(bit_table_,sizeof(bit_table_),0x00);
but VC 2010 flags it as an error.
On initialization:
unsigned char bit_table_[10][100] = {};
If it's a class member, you can initialize it in the constructor, like this:
MyClass::MyClass()
:bit_table_()
{}
Otherwise:
std::fill_n(*bit_table_, sizeof(bit_table_), 0);
The type of bit_table_ is unsigned char [10][100], which will decay (that is, the compiler allows it to be implicitly converted to) into unsigned char (*)[100], that is, a pointer to an array of 100 unsigned chars.
std::fill_n(bit_table_, ...) is then instantiated as:
std::fill_n(unsigned char (*)[100], ...), which means it expects a value of type unsigned char [100] to assign through the iterator. 0 is not convertible to that type, so the compilation fails.
Another way to think about it is that the STL functions that deal with iterators only deal with a single dimension. If you are passing in a multidimensional structure those STL functions will only deal with a single dimension.
Ultimately, you can't do this; there is no way to assign to an array type. I.e., since you can't do this:
char table[100];
char another_table[100]= { };
table= another_table;
you can't use std::fill_n on multidimensional arrays.
You can also try unsigned char bit_table_[10][100] = { 0 }; to fill it with zeros.
int main()
{
    unsigned char bit_table_[10][100] = { 0 };
    return 0;
}
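Since the standard algorithms work one dimension at a time, another option is to fill the table row by row; a small sketch that also works with C++03-era compilers such as VC 2010:
#include <algorithm>

int main()
{
    unsigned char bit_table_[10][100];
    // Fill each row (one dimension) separately.
    for (int i = 0; i < 10; ++i)
        std::fill(bit_table_[i], bit_table_[i] + 100, 0);
    return 0;
}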

For loop writing special signs

void printchars()
{
    for (int x = 128; x < 224; x++)
        write(x);
}
I want x to be a char in the write function. How can I change x so that the write function treats it as a char, but the loop treats it as an int?
What is the point of making x an int if you're just going to strip away its range? That's what makes this a very strange request. You should just make x an unsigned char: for (unsigned char x = 128; x < 224; ++x) { ....
If you just want to ensure you're calling the unsigned char template specialization of write<>, then call it like this:
write<unsigned char>(x);
If not, then you will have to use type casting:
write((unsigned char)x);
Edit: I just realized what you might be experiencing. My guess is that you originally used char but found something wrong with numbers over 127. You should probably be using unsigned char for x instead of either int or char. I edited my answer to accommodate this. char has a range of -128 to +127. unsigned char has a range of 0-255.
Cast x to a char:
write(static_cast<char>(x));
Note that it is ok for x to be a char as the loop counter as well.
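Putting the pieces together, a minimal sketch (the write function below is only a stand-in for the one in the question, assumed here to take a char):
#include <iostream>

// Stand-in for the question's write function; assumed to take a char.
void write(char c)
{
    std::cout << c;
}

void printchars()
{
    // x stays an int for the loop arithmetic and is narrowed to char only at the call site.
    for (int x = 128; x < 224; x++)
        write(static_cast<char>(x));
}

int main()
{
    printchars();
    return 0;
}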

How do I specify an integer literal of type unsigned char in C++?

I can specify an integer literal of type unsigned long as follows:
const unsigned long example = 9UL;
How do I do likewise for an unsigned char?
const unsigned char example = 9U?;
This is needed to avoid a compiler warning:
unsigned char example2 = 0;
...
min(9U?, example2);
I'm hoping to avoid the verbose workaround I currently have, so that 'unsigned char' doesn't appear in the line calling min and 9 doesn't have to be declared as a variable on a separate line:
min(static_cast<unsigned char>(9), example2);
C++11 introduced user-defined literals. They can be used like this:
#include <algorithm>
#include <iostream>

inline constexpr unsigned char operator "" _uchar( unsigned long long arg ) noexcept
{
    return static_cast< unsigned char >( arg );
}
unsigned char answer()
{
    return 42;
}
int main()
{
    std::cout << std::min( 42, answer() );       // Compile time error!
    std::cout << std::min( 42_uchar, answer() ); // OK
}
C provides no standard way to designate an integer constant with width less than that of type int.
However, stdint.h does provide the UINT8_C() macro to do something that's pretty much as close to what you're looking for as you'll get in C.
But most people just use either no suffix (to get an int constant) or a U suffix (to get an unsigned int constant). They work fine for char-sized values, and that's pretty much all you'll get from the stdint.h macro anyway.
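For what it's worth, a small sketch of the macro in this context (UINT8_C(9) expands to an ordinary integer constant whose type is still int after promotion, so it doesn't by itself fix the min deduction):
#include <algorithm>
#include <cstdint>

int main()
{
    unsigned char example2 = 0;
    // The macro gives a constant suitable for uint8_t, but deduction still needs help,
    // so the template argument is spelled out explicitly.
    unsigned char smaller = std::min<unsigned char>(UINT8_C(9), example2);
    return smaller;  // 0
}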
You can cast the constant. For example:
min(static_cast<unsigned char>(9), example2);
You can also use the constructor syntax:
typedef unsigned char uchar;
min(uchar(9), example2);
The typedef isn't required on all compilers (Visual C++, for instance, accepts unsigned char(9) as an extension), but standard C++ needs it.
If you are using Visual C++ and have no need for interoperability between compilers, you can use the ui8 suffix on a number to make it into an unsigned 8-bit constant.
min(9ui8, example2);
You can't do this with actual char constants like '9' though.
Assuming that you are using std::min, what you actually should do is explicitly specify which type min should use, like so:
unsigned char example2 = 0;
min<unsigned char>(9, example2);
Simply const unsigned char example = 0; will do fine.
I suppose '\0' would be a char literal with the value 0, but I don't see the point either.
There is no suffix for unsigned char types. Integer constants are either int or long (signed or unsigned) and in C99 long long. You can use the plain 'U' suffix without worry as long as the value is within the valid range of unsigned chars.
The question was how to "specify an integer 'literal' of type unsigned char in C++?". Not how to declare an identifier.
You can use a backslash escape with octal digits inside single quotes (e.g. '\177'). Note, though, that the result is still a literal of type char, not unsigned char.