I ran into a very strange problem. I think I am missing some very basic thing here. When I do this:
char buffer[1] = {0xA0};
int value=0;
value = (int)buffer[0];
printf("Array : %d\n",value);
I get the result -96, which shouldn't happen. It should give me 160, as the hex number 0xA0 is 160 in decimal. When I put small values like 0x1F in the buffer, it works fine.
Can anyone tell me what I am missing here?
On your platform, char is signed, with a range of -128 to 127.
Declare buffer as unsigned char or cast to unsigned char:
char buffer[1] = {0xA0};
int value=0;
value = (unsigned char)buffer[0];
printf("Array : %d\n",value);
I'm trying to get an int value from a file I read. The trick is that I don't know how many bytes this value spans, so I first read the length octet, then try to read as many data bytes as the length octet tells me. The issue comes when I try to put the data octets into an int variable and eventually print it: if the first data octet is 0, only the one that comes after is copied, so the int I read is wrong, as 0x00A2 is not the same as 0xA200. If I use ntohs or ntohl, then 0xA200 is wrongly decoded as 0x00A2, so that does not solve the whole problem. I am using memcpy like this:
memcpy(&dst, (const void *)src, bytes2read);
where dst is an int, src is an unsigned char *, and bytes2read is a size_t.
So what am I doing wrong? Thank you!
You cannot use memcpy to portably store bytes in an integer, because the byte order is not specified by the standard, not to mention possible padding bits. The portable way is to use bitwise operations and shifts:
unsigned char b, len;
unsigned int val = 0;
fdin >> len;                 // read the length octet
if (len > sizeof(val)) {     // ensure the value will fit into an unsigned int
    // process error: cannot fit in an int variable
    ...
}
while (len-- > 0) {          // store and shift one byte at a time
    val <<= 8;               // shift previous value to leave room for the new byte
    fdin >> b;               // read it
    val |= b;                // and store it
}
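For example, assembling an int from a buffer that has already been read into memory works the same way; this is just an illustrative sketch, with the byte values assumed to be most-significant first as in the question:
#include <cstdio>
#include <cstddef>

int main()
{
    const unsigned char src[] = { 0x00, 0xA2 };   // data octets, most significant first
    const size_t bytes2read = sizeof src;

    unsigned int val = 0;
    for (size_t i = 0; i < bytes2read; ++i)
        val = (val << 8) | src[i];                // shift in one byte at a time

    printf("0x%X\n", val);                        // prints 0xA2
}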
I'm currently working on a program in Eclipse that converts to and from base64. However, I've just noticed that char values seem to have 7 bits instead of the usual 8. For example, the character 'o' is shown in binary as 1101111 instead of 01101111, which effectively prevents me from completing my project, as I need a total of 24 bits to work with for the conversion. Is there any way to either append a 0 to the beginning of the value (I tried bit-shifting in both directions, but neither worked), or to prevent the issue altogether?
The code for the (incomplete/nonfunctional) offending method is as follows; let me know if more is required:
std::string Encoder::encode( char* src, unsigned char* dest)
{
    char ch0 = src[0];
    char ch1 = src[1];
    char ch2 = src[2];
    char sixBit1 = ch0 >> 1;
    dest[0] = ch2;
    dest[1] = ch1;
    dest[2] = ch0;
    dest[3] = '-';
}
char in C/C++ is commonly a signed 8-bit type (the standard actually leaves the signedness of plain char implementation-defined). So it is expected that you only have 7 usable bits for the value, because one bit is used to store the sign.
Try using unsigned char instead.
Either unsigned char or uint8_t from <stdint.h> should work. For maximum portability, uint_least8_t is guaranteed to exist.
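As an illustration of why the full 8 bits matter for the 24-bit block, here is a minimal sketch of packing three 8-bit bytes into four 6-bit groups using unsigned char; packSixBits is a hypothetical helper, not the original Encoder::encode:
#include <cstdint>
#include <cstdio>

// Pack three 8-bit input bytes into four 6-bit values (0..63).
void packSixBits(const unsigned char* src, unsigned char* dest)
{
    uint32_t block = (uint32_t(src[0]) << 16) | (uint32_t(src[1]) << 8) | src[2];  // 24 bits total
    dest[0] = (block >> 18) & 0x3F;
    dest[1] = (block >> 12) & 0x3F;
    dest[2] = (block >> 6)  & 0x3F;
    dest[3] = block & 0x3F;
}

int main()
{
    const unsigned char src[3] = { 'o', 'k', '!' };
    unsigned char dest[4];
    packSixBits(src, dest);
    for (int v : dest)
        printf("%d ", v);   // four values in the range 0..63
    printf("\n");
}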
For example, 130ABF (hexadecimal) is equal to 1247935 (decimal),
So my byte array is
char buf[3] = {0x13 , 0x0A , 0xBF};
and I need to retrieve the decimal value from the byte array.
Below is my sample code:
#include <iostream>
using namespace std;
int main()
{
    char buf[3] = {0x13, 0x0A, 0xBF};
    int number = buf[0]*0x10000 + buf[1]*0x100 + buf[2];
    cout << number << endl;
    return 0;
}
and the result is (wrong):
1247679
Unless I change the
char buf[3] = {0x13 , 0x0A , 0xBF};
to
int buf[3] = {0x13 , 0x0A , 0xBF};
then I get the correct result.
Unfortunately, I must declare my array as char type. Does anyone know how to solve this?
Define the array as:
unsigned char buf[3];
Remember that char could be signed.
UPDATE: To complete the answer, it is worth adding that char is a type that may be equivalent to either signed char or unsigned char; which one is not determined by the standard (it is implementation-defined).
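If you are unsure which choice your compiler makes, here is a minimal sketch to check it at run time:
#include <iostream>
#include <limits>

int main()
{
    // is_signed reports whether plain char is a signed type on this implementation.
    std::cout << "char is "
              << (std::numeric_limits<char>::is_signed ? "signed" : "unsigned")
              << " here\n";
}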
Array elements will be promoted to int before the expression is evaluated. So if your compiler treats char as signed, you get the following (assuming int is 32-bit):
int number = 19*0x10000 + 10*0x100 + (-65);
To avoid this effect you can declare your array as unsigned char arr[], or use masking plus shifts:
int number = ((buf[0] << 16) & 0xff0000)
           | ((buf[1] << 8)  & 0x00ff00)
           | ((buf[2] << 0)  & 0x0000ff);
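Here is a runnable variant of the masking approach; it masks each byte before shifting, which avoids sign extension the same way (a sketch, assuming a signed plain char as in the question):
#include <iostream>

int main()
{
    char buf[3] = { 0x13, 0x0A, (char)0xBF };

    // Masking with 0xff keeps each promoted byte in the range 0..255
    // before it is shifted into position.
    int number = ((buf[0] & 0xff) << 16)
               | ((buf[1] & 0xff) << 8)
               |  (buf[2] & 0xff);

    std::cout << number << std::endl;   // prints 1247935
}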
Since your char array is signed, when you want to initialize the last element (0xBF), you are trying to assign 191 to it while the max it can store is 127: a narrowing conversion occurs... A workaround would be the following:
unsigned char buf[3] = { 0x13, 0x0A, 0xBF };
This will prevent the narrowing conversion. Your compiler should have given you a warning about it.
I'm working on a homework assignment to print out the big- and little-endian representations of an int and a float. I'm having trouble converting to little-endian.
Here's my code:
void convertLitteE(string input)
{
    int theInt;
    stringstream stream(input);
    while (stream >> theInt)
    {
        float f = (float)theInt;
        printf("\n%d\n", theInt);
        printf("int: 0x");
        printLittle((char *) &theInt, sizeof(theInt));
        printf("\nfloat: 0x");
        printLittle((char *) &f, sizeof(f));
        printf("\n\n");
    }
}
void printLittle(char *p, int nBytes)
{
    for (int i = 0; i < nBytes; i++, p++)
    {
        printf("%02X", *p);
    }
}
When the input is 12, I get what I would expect.
output:
int: 0x0C000000
float: 0x00004041
but when the input is 1234,
output:
int: 0xFFFFFFD2040000
float: 0x0040FFFFFF9A44
but I would expect
int : 0xD2040000
float: 0x00409A44
When I step through the for loop I can see where there appears to be a garbage value and then it prints all the F's but I don't know why. I've tried this so many different ways but I can't get it to work.
Any help would be greatly appreciated.
Apparently on your system, char is a signed 8-bit type. Using unsigned 8-bit bytes, the 4-byte little-endian representation of 1234 would be 0xd2, 0x04, 0x00, 0x00. But when interpreted as a signed char on most systems, 0xd2 becomes -0x2e.
Then the call to printf promotes that char to an int with value -0x2e, and printf (which is not very type-safe) reads an unsigned int where you passed an int. This is undefined behavior, but on most systems it behaves like a static_cast, so you get the value 0xFFFFFFD2 when trying to print the first byte.
If you stick to using unsigned char instead of char in these functions, you can avoid this particular problem.
(But as #jogojapan pointed out, this entire approach is not portable at all.)
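For instance, here is a minimal sketch of the helper rewritten with unsigned char (the main function is only for illustration):
#include <cstdio>
#include <cstddef>

// Print the bytes of an object as stored in memory (little-endian first on x86).
void printLittle(const unsigned char *p, size_t nBytes)
{
    for (size_t i = 0; i < nBytes; i++, p++)
        printf("%02X", *p);   // *p is 0..255, so no sign extension can occur
}

int main()
{
    int theInt = 1234;
    printf("int: 0x");
    printLittle(reinterpret_cast<const unsigned char *>(&theInt), sizeof theInt);
    printf("\n");   // prints int: 0xD2040000 on a little-endian machine
}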
I'm trying to convert the characters to unsigned short but the value I'm getting in m_cmdCode is always 0. Some input would be very helpful.
int main()
{
    char *temp = new char[3];
    memset(temp, 1, 3);
    unsigned short m_cmdCode = (unsigned short) atoi(temp);
    printf("%d", m_cmdCode);
}
// I want m_cmdCode to be equal to 111, is it possible to do this ?
You're setting the elements of temp to the integer value 1. You want the character value '1':
memset(temp, '1', 3);
Note that you also need to NUL-terminate temp for atoi to work reliably.
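Putting both fixes together, a minimal runnable sketch (assuming the goal is the value 111):
#include <cstdio>
#include <cstring>
#include <cstdlib>

int main()
{
    char *temp = new char[4];            // one extra byte for the terminator
    memset(temp, '1', 3);                // three '1' characters, not three bytes of value 1
    temp[3] = '\0';                      // atoi needs a NUL-terminated string

    unsigned short m_cmdCode = (unsigned short) atoi(temp);
    printf("%d\n", m_cmdCode);           // prints 111

    delete[] temp;
}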
It comes from your memset: you are writing 3 bytes with the value 1, which is not the same thing as writing
strcpy(temp, "111")
(I may have reversed src and dest there.)