How to convert unsigned char to unsigned int in C++?

I have the following piece of code:
const unsigned char *a = (const unsigned char *) input_items;
input_items is basically the contents of a binary file.
Now a[0] = 7, and I want to convert this value to unsigned int. But when I do the following:
unsigned int b = (unsigned int) a[0];
and print both of these values, I get:
a[0] = 7
b = 55
Why does b not contain 7 as well? How do I get b = 7?

I think I see the problem now: You print the value of the character as a character:
unsigned char a = '7';
std::cout << a << '\n';
That will print the character '7', with the ASCII value 55.
If you want to get the corresponding integer value for a digit character, you can rely on the fact that the standard requires the digit characters to be encoded consecutively, starting with '0' and ending with '9'. That means you can subtract '0' from any digit character to get its integer value:
unsigned char a = '7';
unsigned int b = a - '0';
Now b will be equal to the integer value 7.

Related

unsigned char hex to int dec conversion C++

How do I convert an unsigned char variable holding a hex value into an int variable holding the decimal value? I have:
unsigned char length_hex = 0x30; // HEX
How do I convert it to a decimal int?
int length_dec = length_hex; // With result length_dec = 48
As some people have commented, the conversion is automatic. Try:
#include <stdio.h>

int main(void)
{
    unsigned char length_hex = 0x30;
    printf("0x30 is char:%c and int:%d.", length_hex, length_hex);
    return 0;
}
and see what you get - when you convert it to a char it's '0' (the character 0), and as an int it's 48. Also, it doesn't really matter whether I store the constant 0x30 in an int or a char. Just look at:
printf("0x30 is char:%c and int:%d.",0x30,0x30);

How to use char data type as a number rather than a character?

When I use the char datatype to add two numbers, I get the sum of the ASCII code of the characters and not the numbers itself. When I researched on the internet, various sites say that the char type can indeed be used to handle one byte numbers. But in reality, I get the sum of ASCII values. Why is this happening? Below is just a sample code which illustrates the problem:
uint8_t rows,cols; //uint8_t is just a typedef for char
cin >> rows;
cout << rows + 1 << endl;
When people talk about "one-byte numbers", they're talking about 8-bit values, ranging from -128 to 127 for a char, or 0 to 255 for an unsigned char, also known as octets. These can be converted directly to larger integer types and to floats:
char eight_bit = 122;
float floating_point = eight_bit; // = 122.0
If you're trying to convert a digit value such as '1' into the numeric value it represents, there's stoi:
#include <string>
int ctoi(char c) {
    std::string temp;
    temp.push_back(c);
    return std::stoi(temp);
}
Chars store the ASCII equivalent of a character as an integer.
For example
char value = 'A'; // == int 65 in ASCII
It's best to use a small integer type to store numbers, but if you really want to work with digit characters, convert each one to its numeric value first:
char value1 = '2';
char value2 = '5';
int sum = (value1 - '0') + (value2 - '0'); // sum == 7
Note that (value1 + value2) - '0' would instead give the character '7' (ASCII 55), not the integer 7.
When you use char, you are (on most platforms) using a signed 8-bit data type.
And you get the "sum of ASCII" only because std::cout is programmed to display a char as a character.
Try
cout << static_cast<int16_t>(rows) + 1 << endl;
And you will see that you get the 'number' rather than an 'ASCII character'.
NOTE
uint8_t is not (and should not be) a typedef for plain char: whether plain char is signed is implementation-defined, while the uint* types are unsigned by definition. On most implementations, uint8_t is a typedef for unsigned char.

C/C++ Converting a 64 bit integer to char array

I have the following simple program that uses a union to convert between a 64 bit integer and its corresponding byte array:
#include <cstdint>
#include <iostream>
using namespace std;

union u
{
    uint64_t ui;
    char c[sizeof(uint64_t)];
};

int main(int argc, char *argv[])
{
    u test;
    test.ui = 0x0123456789abcdefLL;
    for(unsigned int idx = 0; idx < sizeof(uint64_t); idx++)
    {
        cout << "test.c[" << idx << "] = 0x" << hex << +test.c[idx] << endl;
    }
    return 0;
}
What I would expect as output is:
test.c[0] = 0xef
test.c[1] = 0xcd
test.c[2] = 0xab
test.c[3] = 0x89
test.c[4] = 0x67
test.c[5] = 0x45
test.c[6] = 0x23
test.c[7] = 0x1
But what I actually get is:
test.c[0] = 0xffffffef
test.c[1] = 0xffffffcd
test.c[2] = 0xffffffab
test.c[3] = 0xffffff89
test.c[4] = 0x67
test.c[5] = 0x45
test.c[6] = 0x23
test.c[7] = 0x1
I'm seeing this on Ubuntu LTS 14.04 with GCC.
I've been trying to get my head around this for some time now. Why are the first 4 elements of the char array displayed as 32 bit integers, with 0xffffff prepended to them? And why only the first 4, why not all of them?
Interestingly enough, when I use the array to write to a stream (which was the original purpose of the whole thing), the correct values are written. But comparing the array char by char obviously leads to problems, since the first 4 chars do not compare equal to 0xef, 0xcd, and so on.
Using char is not the right thing to do since it could be signed or unsigned. Use unsigned char.
union u
{
uint64_t ui;
unsigned char c[sizeof(uint64_t)];
};
char gets promoted to an int because of the prepended unary + operator. Since your chars are signed, any element with the highest bit set to 1 is interpreted as a negative number and promoted to an int with the same negative value. There are a few different ways to solve this:
Drop the +: ... << test.c[idx] << .... This will print the char as a character rather than a number, so it is probably not a good solution.
Declare c as unsigned char. It will then promote to a non-negative int.
Promote through unsigned char explicitly before passing it: ... << +(unsigned char)test.c[idx] << .... (Casting the result of + back to unsigned char would print a character again.)
Set the upper bytes of the integer to zero using binary &: ... << (+test.c[idx] & 0xFF) << .... The parentheses are needed because << binds tighter than &. This displays only the lowest-order byte no matter how the char is promoted.
Use either unsigned char or use test.c[idx] & 0xff to avoid sign extension when a char value > 0x7f is converted to int.
It is a matter of unsigned char vs signed char and how each converts to an integer.
The unary plus causes the char to be promoted to an int (integral promotion). Because you have signed chars, the value is used as such, and the other bytes reflect that.
It is not true that only those four are ints; they all are. You just don't see it from the representation, since leading zeroes are not shown.
Either use unsigned chars or & 0xff for promotion to get the desired result.

char to int conversion - what's happening here?

I want to convert a char value to an int. I am playing with following code snippets:
#include <iostream>
using namespace std;
int main() {
char a = 'A';
int i = (int)a;
//cout<<i<<endl; OUTPUT is 65 (True)
char b = '18';
int j = b;
//cout<<j<<endl; OUTPUT is 56 (HOW?)
char c = 18;
int k = c;
//cout<<c<<endl; OUTPUT is empty
//cout<<k<<endl; OUTPUT is 18 (Is this a valid conversion?)
return 0;
}
I want the third conversion, and I got the correct output, i.e. 18. But is this a valid conversion? Can anyone explain the above outputs and the conversions behind them?
char a = 'A';
int i = (int)a;
The cast is unnecessary. Assigning or initializing an object of a numeric type to a value of any other numeric type causes an implicit conversion.
The value stored in i is whatever value your implementation uses to represent the character 'A'. On almost all systems these days (except some IBM mainframes and maybe a few others), that value is going to be 65.
char b = '18';
int j = b;
A character literal with more than one character is of type int and has an implementation-defined value. It's likely that the value will be ('1' << 8) + '8', and that the conversion from int to char drops the high-order bits, leaving '8', i.e. 56. But multi-character literals are something to be avoided. (This doesn't apply to escape sequences like '\n'; though there are two characters between the single quotes, it represents a single character value.)
char c = 18;
int k = c;
char is an integer type; it can easily hold the integer value 18. Converting that value to int just preserves the value, so both c and k are integer variables whose value is 18. Printing k using
std::cout << k << "\n";
will print 18, but printing c using:
std::cout << c << "\n";
will print a non-printable control character (it happens to be Control-R).
char b = '18';
int j = b;
b, in this case initialized from the multi-character literal '18', doesn't have a consistent meaning; the result is implementation-defined. In your case it ends up holding the ASCII value 56 (equivalent to what you would get from char b = '8').
char c = 18;
int k = c;
c holds character value 18, and it's perfectly valid to convert to an int. However, it might not display very much if you display as a character. It's a non-printing control character.

How to convert a hexadecimal value contained in a char (byte) to an integer?

I just want to know how to convert a hexadecimal value contained in a char (byte) into an integer. I want to convert the color buffer from a .bmp file, which is of course in hexadecimal, into integers.
For example :
char rgb_hexa[3] = {0xA8, 0xF4, 0xD3};
After conversion :
int rgb_int[3] = {168, 244, 211};
I tried to use strtol, but it seems to work only with char *. I tried the following test, but it does not work:
char src_hexa_red = 0xA8;
char src_hexa_green = 0xF4;
char src_hexa_blue = 0xD3;
std::cout << "R=" << strtol(&src_hexa_red, (char**)NULL, 16) << ", G="
<< strtol(&src_hexa_green, (char**)NULL, 16) << ", B="
<< strtol(&src_hexa_blue, (char**)NULL, 16) << std::endl;
Can anyone help me, please? Thanks in advance.
A single char never contains hexadecimal. Nor decimal, for that matter. Strictly speaking, a char contains an integral value; the C++ standard requires it to use a binary representation for the value. The value can be interpreted as a character, but this is not always the case; there are contexts where the integral value is used directly. Hexadecimal and decimal are just ways of representing the value in text format; they only have meaning when dealing with text.
for(int i = 0; i < 3; ++i)
    rgb_int[i] = (unsigned char)rgb_hexa[i];
char is an integer type in C and C++, just like short, int and long; it's just the smallest integer type. On most platforms char is signed, and the maximum value which fits is 127. So if the hex values were at or below 127, you wouldn't have to do anything. However, in this case the hex values you have are > 127, so you have to cast them to unsigned to get the values you want.
Note that both the statements are identical to the compiler.
char rgb_hexa[3] = {0xA8, 0xF4, 0xD3};
char rgb_hexa[3] = {168, 244, 211};
You could have even used octal if you wanted
char rgb_hexa[3] = {0250, 0364, 0323};
It's all the same.
The values in the char array are already in a binary form, so you can cast them to an int, if you need them as such.
int v = (int)rgb_hexa[0];
You should be aware though that using signed char they will be sign extendend.
So 0xFA becomes 0xFFFFFFFA when converted to an int.
If you want to keep the values then you should use unsigned char and unsigned int which makes it 0x000000FA depending on how you want to use the values.
int v0 = (int)a[1];                                       // sign extended
unsigned int v1 = (unsigned int)a[1];                     // still sign extended (char -> int -> unsigned int)
unsigned int v2 = (unsigned int)((unsigned char *)a)[1];  // not sign extended
You don't need to do any conversion because hexa/decimal are just ways to represent values.
For example, 0xA8 in hexadecimal is the same value as 168 in decimal and 250 in octal. As in languages, for example, "two", "deux" and "dois" all represent the same number (2).
In your case if you want to print the values do the following:
short y = (short) x & 0x00FF; // x is the char you want to print
cout << "Value (decimal): " << y;
cout << "Value (hexa): " << hex << y;
cout << "Value (oct): " << oct << y;
Why not just do this:
#include <stdio.h>

int main(int argc, char *argv[])
{
    char rgb_hexa[3] = {0xA8, 0xF4, 0xD3};
    int rgb_int[3] = {0,};
    int i = 0;

    for(i = 0; i < 3; i++)
        rgb_int[i] = (unsigned char)rgb_hexa[i];
    for(i = 0; i < 3; i++)
        printf("%d ", rgb_int[i]);
    return 0;
}
Pretty straightforward.
For type conversion, there is static_cast:
unsigned char source = 168; // to the compiler this means the same as:
// unsigned char source = 0xA8; // because the data is stored in binary anyway
unsigned int dest = static_cast<unsigned int>(source); // the conversion
std::cout << dest << std::endl; // prints 168; streaming source directly would print a character
dest and source have the same binary value, but they are of different types.
I've used unsigned types because a signed char usually stores values from -128 to 127; see numeric limits.