For loop writing special signs - C++

void printchars()
{
    for (int x = 128; x < 224; x++)
        write(x);
}
I want x to be a char in the write function. How can I have x treated by the write function as a char, but as an int in the loop?

What is the point of making x an int if you're just going to strip away its range? That's what makes this a very strange request. You should just make x an unsigned char: for (unsigned char x = 128; x < 224; ++x) { ...
If you just want to ensure you're calling the unsigned char template specialization of write<>, then call it like this:
write<unsigned char>(x);
If not, then you will have to use type casting:
write((unsigned char)x);
Edit: I just realized what you might be experiencing. My guess is that you originally used char but found something wrong with numbers over 127. You should probably be using unsigned char for x instead of either int or char. I edited my answer to accommodate this. char has a range of -128 to +127. unsigned char has a range of 0-255.
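Putting that together, a minimal sketch (assuming write is your own function that accepts the character value, as in the question):
void printchars()
{
    // unsigned char covers the whole 128-223 range without going negative
    for (unsigned char x = 128; x < 224; ++x)
        write(x);
}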

Cast x to a char:
write(static_cast<char>(x));
Note that x could also be declared as unsigned char in the loop itself; a plain char may be signed and cannot represent the 128-223 range.

Related

Taking an index out of const char* argument

I have the following code:
int some_array[256] = { ... };

int do_stuff(const char* str)
{
    int index = *str;
    return some_array[index];
}
Apparently the above code causes a bug on some platforms, because *str can in fact be negative.
So I thought of two possible solutions:
Casting the value on assignment (unsigned int index = (unsigned char)*str;).
Passing const unsigned char* instead.
Edit: The rest of this question did not get a treatment, so I moved it to a new thread.
The signedness of char is indeed platform-dependent, but what you do know is that there are as many values of char as there are of unsigned char, and the conversion is injective. So you can absolutely cast the value to associate a lookup index with each character:
unsigned char idx = *str;
return arr[idx];
You should of course make sure that the arr has at least UCHAR_MAX + 1 elements. (This may cause hilarious edge cases when sizeof(unsigned long long int) == 1, which is fortunately rare.)
Characters are allowed to be signed or unsigned, depending on the platform. An assumption of unsigned range is what causes your bug.
Your do_stuff code does not treat const char* as a string representation. It uses it as a sequence of byte-sized indexes into a look-up table. Therefore, there is nothing wrong with forcing unsigned char type on the characters of your string inside do_stuff (i.e. use your solution #1). This keeps re-interpretation of char as an index localized to the implementation of do_stuff function.
Of course, this assumes that other parts of your code do treat str as a C string.
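Concretely, a sketch of solution #1 (names taken from the question; the array is sized to cover every unsigned char value, per the note above):
#include <climits>

int some_array[UCHAR_MAX + 1] = { /* ... */ };

int do_stuff(const char* str)
{
    // force the character into the 0..UCHAR_MAX range before using it as an index
    unsigned char index = static_cast<unsigned char>(*str);
    return some_array[index];
}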

basic_string of unsigned char Value Type

So, string comes with a value type of char. I want a string with a value type of unsigned char. The reason I want such a thing is that I am currently writing a program that converts large hexadecimal input to decimal, and I am using strings to calculate the result. But the range of char, which is -128 to 127, is too small; unsigned char, with its range of 0 to 255, would work perfectly instead. Consider this code:
#include <iostream>
using namespace std;

int main()
{
    typedef basic_string<unsigned char> u_string;
    u_string x = "Hello!";
    return 0;
}
But when I try to compile, it shows two errors: one is invalid conversion from 'const char*' to 'const unsigned char*', and the other is initializing argument 1 of 'std::basic_string<_CharT, _Traits, _Alloc>::basic_string'... (it goes on).
EDIT:
"Why does the problem "converts large input of hexadecimal to decimal" require initializing a u_string with a string literal?"
While calculating, each time I shift to the left of the hexadecimal number, I multiply by 16. At most the result is going to be 16 x 9 = 144, which surpasses the limit of 127 and makes the value negative.
Also, I have to initialize it like this:
x = "0"; x[0] -= '0';
Because I want it to be 0 in value. If the variable is null, then I can't perform operations on it; if it is 0, then I can.
So, what should I do?
String literals are arrays of const char, and you are assigning one to a string of unsigned char.
You have two solutions:
First, copy the characters from a standard string into yours element by element (a sketch of this follows the literal example below).
Second, write your own user-defined literal for your string class:
inline const unsigned char * operator"" _us(const char *s, std::size_t)
{
    return (const unsigned char *) s;
}
// OR
u_string operator"" _us(const char *s, std::size_t len)
{
    return u_string(s, s + len);
}
u_string x = "Hello!"_us;
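For the first option, a minimal sketch of the element-by-element copy (to_u_string is just an illustrative helper name; like the question's typedef, this relies on the implementation providing a usable char_traits<unsigned char>):
#include <string>

typedef std::basic_string<unsigned char> u_string;

u_string to_u_string(const std::string& s)
{
    // the iterator-range constructor converts each char to unsigned char as it copies
    return u_string(s.begin(), s.end());
}

u_string x = to_u_string("Hello!");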
An alternative solution would be to make your compiler treat char as unsigned. There are compiler flags for this:
MSVC: /J
GCC, Clang, ICC: -funsigned-char

C/C++ integer to hex to char array to char

Well, I have been trying to convert this integer into hex and have successfully done so, but I need to use this hex value for setting something. For this I need a single char, not a char array. Nothing else has worked without setting it manually. Maybe the problem lies in using sprintf for the conversion to hex, but either way I am sure there is a way to complete this task. What I need is for the output to be char z, but I haven't found a way to get this to work. Any help is greatly appreciated. Thanks
EDIT: This code may not make sense on its own because it is incomplete; I saw no purpose in including unrelated code. int x will never be over 100, and the whole point is to convert it into hex and write it to the memory of a setting I have. So I have been trying to figure out how to convert the integer into hex and then into a char. Non-string, as someone pointed out, even though sprintf converts it to a string stored in a char array, as I just noticed. I need to take the int, convert it to hex, and assign that to a char variable for use later on. That is where I am stuck. I do not know the best way to accomplish all that without going through a string and other things.
VOID WriteSetting(int x)
{
    char output[8];
    sprintf(output, "0x%X", x);
    char z = 0x46;
    unsigned char y = z;
}
Working Code:
VOID WriteSetting(int x)
{
    unsigned char y = (unsigned char)x;
    Settingdb.Subset.Set = y;
}
If what you want is for:
WriteSetting(70);
or
WriteSetting(0x46);
to do the same thing as
char z = 0x46;
unsigned char y = z;
then all you need to do is:
void WriteSetting(int x)
{
    unsigned char y = x;
}
Integers don't have any inherent base - they're just numbers. There's no difference at all between unsigned char y = 0x46; and unsigned char y = 70;.
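To illustrate, both literals below store exactly the same value; the base only matters when you format the number for output:
#include <cstdio>

int main()
{
    unsigned char y = 0x46;   // same stored value as...
    unsigned char z = 70;     // ...this
    printf("%d\n", y == z);              // prints 1
    printf("%d 0x%X\n", y, (unsigned)y); // prints "70 0x46"
    return 0;
}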
Have you tried
printf("0x%X", z);
While the C++ standard does not mandate exact sizes, a char is always exactly one byte, and an int is typically 4 bytes on current platforms (the standard only guarantees it is at least 16 bits). I presume that this is why you are using a char array.
So when you have an integer type and try to cram it into a single character, you'll lose precision.
P.S. A more precise explanation of sizes of data types is here: What does the C++ standard state the size of int, long type to be?
void WriteSetting(unsigned char *y, int x)
{
    if (x > 100) {
        fprintf(stderr, "x value(%d) is invalid at %s\n", x, __func__);
        return;
    }
    *y = x;  // there is probably no need for a conversion here
}
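A possible call site, reusing the question's own Settingdb structure (assumed to be visible at this point):
unsigned char setting = 0;
WriteSetting(&setting, 70);        // setting now holds 70, i.e. 0x46
Settingdb.Subset.Set = setting;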

Is unsigned char('0') legal C++

The following compiles in Visual Studio but fails to compile under g++.
int main()
{
    int a = unsigned char('0');
    return 0;
}
Is unsigned char() a valid way to cast in C++?
No, it's not legal.
A function-style explicit type conversion requires a simple-type-specifier, followed by a parenthesized expression-list. (§5.2.3) unsigned char is not a simple-type-specifier; this is related to a question brought up by James.
Obviously, if unsigned char were a simple-type-specifier, it would be legal. A work-around is to use an identity template:
template <typename T>
struct identity
{
    typedef T type;
};
And then:
int a = identity<unsigned char>::type('0');
identity<unsigned char>::type is a simple-type-specifier, and its type is simply the type of the template parameter.
Of course, you get a two-for-one with static_cast. This is the preferred casting method anyway.
The preferred method of casting in C++ is to use static_cast, like so:
int a = static_cast<unsigned char>( '0' );
Try adding parentheses: int a = (unsigned char)('0');
or
typedef unsigned char uchar;
//inside main
int a = uchar('0');
No, it isn't - a function-style cast cannot have a space in its name.
A case for a C-style cast perhaps:
int main() {
    unsigned char c = (unsigned char) '0';
}
I'm pretty sure it's a Microsoft extension.
No, it isn't. But why the cast in the first place? This is perfectly valid,
int a = '0';
Why are you even trying to cast from char to unsigned char and assigning that to an int? You're putting an unsigned value into a signed int (which is legal, but uncool).
Writing '0' gives you a char with value 48. You can try
int i = (int) '0';
That way, you take the char, cast it to an int, and use it. You could even say
int i = '0';
And that would do the same thing. What exactly are you trying to do?
Are you trying to get the integer 0 or the character '0' into it? The character '0' on most implementations is simply the integer 48, stored in 8 bits.
The only difference between a char and an int is that a char must be no larger than a short int, and an int must be at least as large as a short int. Nowadays this usually makes char 8 bits, short int 16 bits, and int 32 bits.
Things like 'a' + 2 to get 'c' work, for instance. If you have an array that is long enough, you can also index it like array[' '] to get index 32.
If you're trying to turn it into the integer value 0, that requires an actual conversion, such as subtracting '0'.
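For example, a small sketch (the 'a' + 2 case assumes an ASCII-compatible character set; the digit characters '0'-'9' are guaranteed contiguous by the standard):
#include <cassert>

int main()
{
    assert('a' + 2 == 'c');   // arithmetic on the underlying numeric codes
    int i = '0';              // i holds the character code (48 in ASCII)
    assert(i - '0' == 0);     // subtracting '0' yields the digit's integer value
    return 0;
}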

How do I specify an integer literal of type unsigned char in C++?

I can specify an integer literal of type unsigned long as follows:
const unsigned long example = 9UL;
How do I do likewise for an unsigned char?
const unsigned char example = 9U?;
This is needed to avoid compiler warning:
unsigned char example2 = 0;
...
min(9U?, example2);
I'm hoping to avoid the verbose workaround I currently have, where 'unsigned char' appears in the line calling min, without having to declare 9 as a variable on a separate line:
min(static_cast<unsigned char>(9), example2);
C++11 introduced user-defined literals. They can be used like this:
#include <algorithm>
#include <iostream>

inline constexpr unsigned char operator "" _uchar( unsigned long long arg ) noexcept
{
    return static_cast< unsigned char >( arg );
}

unsigned char answer()
{
    return 42;
}

int main()
{
    std::cout << std::min( 42, answer() );       // Compile time error!
    std::cout << std::min( 42_uchar, answer() ); // OK
}
C provides no standard way to designate an integer constant with a width less than that of type int.
However, stdint.h does provide the UINT8_C() macro to do something that's pretty much as close to what you're looking for as you'll get in C.
But most people just use either no suffix (to get an int constant) or a U suffix (to get an unsigned int constant). They work fine for char-sized values, and that's pretty much all you'll get from the stdint.h macro anyway.
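A sketch of both spellings (note that after the usual integer promotions either constant still behaves as an int/unsigned int in expressions, so this alone does not fix a std::min type mismatch):
#include <cstdint>

const std::uint8_t small_constant = UINT8_C(9);  // macro from <cstdint>/<stdint.h>
const unsigned int plain_constant = 9U;          // plain U suffix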
You can cast the constant. For example:
min(static_cast<unsigned char>(9), example2);
You can also use the constructor syntax:
typedef unsigned char uchar;
min(uchar(9), example2);
The typedef isn't required on all compilers, though strictly speaking unsigned char(9) is not a valid function-style cast; some compilers accept it as an extension.
If you are using Visual C++ and have no need for interoperability between compilers, you can use the ui8 suffix on a number to make it into an unsigned 8-bit constant.
min(9ui8, example2);
You can't do this with actual char constants like '9' though.
Assuming that you are using std::min, what you should actually do is explicitly specify which type min should use, like so:
unsigned char example2 = 0;
min<unsigned char>(9, example2);
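A complete compilable sketch of that approach:
#include <algorithm>

int main()
{
    unsigned char example2 = 0;
    // the explicit template argument makes the literal 9 convert to unsigned char
    unsigned char smallest = std::min<unsigned char>(9, example2);
    return smallest;
}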
Simply const unsigned char example = 0; will do fine.
I suppose '\0' would be a char literal with the value 0, but I don't see the point either.
There is no suffix for unsigned char types. Integer constants are either int or long (signed or unsigned) and in C99 long long. You can use the plain 'U' suffix without worry as long as the value is within the valid range of unsigned chars.
The question was how to "specify an integer 'literal' of type unsigned char in C++?". Not how to declare an identifier.
You can use a backslash escape with octal digits inside apostrophes (e.g. '\177').
The octal value is always taken to be unsigned.