Convert hex string to unsigned char - C++

I have something like:
string hex = "\x80\x01";
and want to convert it to an unsigned char array like:
unsigned char hex_char[] = "\x80\x01";
I tried strcpy, but it doesn't work since strcpy doesn't accept unsigned char.
I would appreciate any suggestions.

In practice you can just copy the values, any way you find natural.
E.g.
using Byte = unsigned char;
string hex = "\x80\x01";
vector<Byte> bytes( hex.begin(), hex.end() );
Or if you know that it will always be just two bytes,
using Byte = unsigned char;
string hex = "\x80\x01";
Byte bytes[] = { Byte(hex[0]), Byte(hex[1]) };
Formally it's a different kettle of fish, because with an 8-bit byte the value \x80 doesn't fit as a positive signed char value, so it ends up as an implementation-defined value. In practice this is not a problem, because computer evolution has converged on two's complement representation of signed integers, and I don't know of any C++ compiler that doesn't use it.

Related

If char can store a number in C++, why do we need int?

The char data type can store numbers, characters, and symbols, so what is the need for the int data type?
char c = '2';
I know how int is used, but I want to understand the conceptual reason for having both types.
Usually, int can hold larger numbers than char. On current, widely available architectures, int is 32-bit, while char is 8-bit. Furthermore, whether char is signed or unsigned is implementation-defined.
On these architectures int can hold numbers between -2147483648 and 2147483647, while a (signed) char can hold numbers between -128 and 127.

Is it possible to return an integer to the main function from an unsigned char function?

I have this unsigned char sumBinaryFigure function that calculates the sum of the digits of the binary representation of an unsigned long long number. When I call this function from the main function, for an unsigned long long it should return an integer (or another numeric data type) although the return type of the function is unsigned char. Is it possible? I tried function overloading and it didn't work. If it sounds absurd, it's not my fault.
unsigned char sumBinaryFigure(unsigned long long number)
{
    unsigned int S = 0;
    while (number)
    {
        S += number % 2;
        number /= 2;
    }
    return S;
}
When I call this function from the main function, for an unsigned long long it should return an integer although the return type of the function is unsigned char. Is it possible?
Yes. The question is not absurd, C types are just confusing. unsigned char and int both represent integers.
Your code is correct.
unsigned char is a 1-byte datatype. It can be used to represent a letter, or it can be used to represent a number.
The following statements are equivalent.
unsigned char ch = 'A';
unsigned char ch = 65;
Whether you use unsigned char as a character or integer, the machine does not care.
char does not necessarily contain a character. It can also represent small numbers.
The posted implementation of sumBinaryFigure returns a number in the range 0-255, and there is nothing wrong with that. Because a long long is almost certainly fewer than 256 bits, you don't need to worry about unsigned char not being large enough.
If I can suggest one change to your program in order to make it less confusing, change this line
unsigned int S = 0;
to this...
unsigned char S = 0;
Addendum
Just to be clear, consider the following code.
#include <stdio.h>

int main (void) {
    char ch_num = 65;    // ch_num is the byte 0100 0001
    char ch_char = 'A';  // ch_char is the byte 0100 0001
    printf ("%d\n", ch_num);  // Prints 65
    printf ("%d\n", ch_char); // Prints 65
    printf ("%c\n", ch_num);  // Prints A
    printf ("%c\n", ch_char); // Prints A
}
A char is a byte. It's a sequence of bits with no meaning except what we impose on it.
That byte can be interpreted as either a number or a character, but that decision is up to the programmer. The %c format specifier says "interpret this as a character". The %d format specifier says "interpret this as a number".
Whether it's an integer or character is decided by the output function, not the data type.
unsigned char can be converted to int without narrowing on all platforms that I can think of. You don't need to overload anything, just assign the result of the function to an int variable:
int popcnt = sumBinaryFigure(1023);
In fact, given the function's semantics, there's no way the result will not fit into an int, which is guaranteed to be at least 16 bits, meaning the minimal numeric_limits<int>::max() value is 32767. You'd need a datatype capable of storing over 32767 binary digits for overflow to be even remotely possible (int on most platforms is 32-bit).

How to initialize a signed char to unsigned values like 0xFF in C++?

There are some cases where a byte array is implemented in a library using a char type, which is a signed type for many compilers.
Is there a simple, readable and correct way to initialize a signed char with a hex value which is greater than 127 and not bigger than 255?
Currently I end up with the following, and I keep thinking that there must be something simpler:
const unsigned char ff_unsigned = 0xff;
const char ff_signed = static_cast<const char>(ff_unsigned);
I want a solution with no warnings, even when using higher compiler warning levels than the default.
The following solution e.g. creates C4310: cast truncates constant value with MSVC 2013:
const char ff_signed = char(0xff);
Yes, there is: use a character literal with the \x escape. That denotes a char with the given hexadecimal value.
For example: '\xff'.
But note that char can be signed or unsigned, and before C++20 a signed char could even use a ones' complement representation.
const char ff_signed = '\xff';
0xff is an int and '\xff' is a char. You can use
const char ff_signed = (char)0xff;
or
const char ff_signed = '\xff';

Converting element in char array to int

I have an 80-element char array and am trying to assign specific elements to an integer, and I am getting unexpected values.
Array element 40 in hex is 0xC0. When I try assigning it to an integer I get 0xFFFFFFC0 in hex, and I don't know why.
char tempArray[80]; //Read in from file, with element 40 as 0xC0
int tempInt = (int)tempArray[40]; //Output is 0xFFFFFFC0 instead of 0x000000C0
Depending on your implementation, the char type in C++ is either signed or unsigned. (The C++ standard requires an implementation to pick one or the other.)
To be on the safe side, use unsigned char in your case.
This is because char is treated as a signed number, and the promotion to int preserves the sign. Change the array from char to unsigned char to avoid it.
Because 0xC0 is negative as a char, and the cast preserves the sign when widening to int. Use unsigned char if you want to keep the direct binary value, i.e. a purely positive value.
For clarity, I always write unsigned or signed explicitly in declarations and casts. You can write the following:
unsigned char tempArray[80]; //Read in from file, with element 40 as 0xC0
unsigned int tempInt = (unsigned int)tempArray[40]; //Output is 0xC0 as expected
char may be signed, so converting from a negative char value results in a negative int value, which is usually represented in two's complement, so all the high bits of the int end up set.
Instead, either use int tempInt = 0xFF & tempArray[40], define tempArray as unsigned char, or cast through unsigned char: int tempInt = (unsigned char)tempArray[40]; (this is well-defined behaviour: the conversion to unsigned char is reduced modulo 256).

What is the purpose of Signed Char

What is the purpose of signed char if both char and signed char range from -127 to 127?
Where would we use signed char instead of plain char?
unsigned char is unsigned.
signed char is signed.
char may be unsigned or signed depending on your platform.
Use signed char when you definitely want signedness.
Possibly related: What does it mean for a char to be signed?
It is implementation-defined whether plain char uses the same representation as signed char or unsigned char. signed char was introduced because plain char was underspecified. There's also the message you send to your readers:
plain char: character data
signed char: small integers
unsigned char: raw memory
(unsigned char may also be used if you're doing a lot of bitwise operations. In practice, that tends to overlap with the raw memory use.)
First, some background for your question.
char is an integral data type that comes in two flavors:
signed char
unsigned char
Their typical ranges, as given in most books:
char: 1 byte, -128 to 127 (on platforms where plain char is signed by default)
signed char: 1 byte, -128 to 127
unsigned char: 1 byte, 0 to 255
One byte is 8 bits (bit 0 through bit 7). In the common two's complement representation, the most significant bit (bit 7) indicates the sign:
-37 is represented as 1101 1011 (the most significant bit is 1),
+37 is represented as 0010 0101 (the most significant bit is 0).
For character data this rarely matters, because ASCII codes (e.g. 'A' = 65) fit in the lower 7 bits. When you need the full 0 to 255 range in a single byte, use unsigned char, which frees the sign bit to hold value instead.
Note that on many systems, plain char behaves like signed char.
As for your question: you would use signed char when you explicitly need a small signed number.