I'm working on a homework assignment to print out the big-endian and little-endian representations of an int and a float. I'm having trouble converting to little endian.
Here's my code:
void convertLitteE(string input)
{
    int theInt;
    stringstream stream(input);
    while (stream >> theInt)
    {
        float f = (float)theInt;
        printf("\n%d\n", theInt);
        printf("int: 0x");
        printLittle((char *) &theInt, sizeof(theInt));
        printf("\nfloat: 0x");
        printLittle((char *) &f, sizeof(f));
        printf("\n\n");
    }
}
void printLittle(char *p, int nBytes)
{
    for (int i = 0; i < nBytes; i++, p++)
    {
        printf("%02X", *p);
    }
}
When the input is 12 the output is what I would expect:
int: 0x0C000000
float: 0x00004041
But when the input is 1234 I get:
int: 0xFFFFFFD20400000
float: 0x0040FFFFFFF9A44
But I would expect:
int: 0xD2040000
float: 0x00409A44
When I step through the for loop I can see where a garbage value appears and then it prints all the F's, but I don't know why. I've tried this so many different ways, but I can't get it to work.
Any help would be greatly appreciated.
Apparently on your system, char is a signed 8-bit type. Using unsigned 8-bit bytes, the 4-byte little-endian representation of 1234 would be 0xd2, 0x04, 0x00, 0x00. But when interpreted as a signed char on most systems, 0xd2 becomes -0x2e.
The call to printf then promotes that char to an int with value -0x2e, and printf (which is not type-safe) reads an unsigned int for the %02X conversion where you passed an int. This is undefined behavior, but on most systems it behaves like a static_cast, so you get the value 0xFFFFFFD2 when trying to print the first byte.
If you stick to using unsigned char instead of char in these functions, you can avoid this particular problem.
(But as @jogojapan pointed out, this entire approach is not portable at all.)
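Here is a minimal sketch of that fix (one way to write it, not the only one): the only change is that the byte pointer is unsigned char, so each byte promotes to a non-negative int before printf formats it.

void printLittle(const unsigned char *p, int nBytes)
{
    for (int i = 0; i < nBytes; i++, p++)
    {
        printf("%02X", *p);   // *p is 0..255, so no sign extension occurs
    }
}

// at the call site, cast the address to const unsigned char * instead:
// printLittle((const unsigned char *) &theInt, sizeof(theInt));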
I'm trying to get an int value from a file I read. The trick is that I don't know how many bytes this value spans, so I first read the length octet, then try to read as many data bytes as the length octet tells me. The issue comes when I try to put the data octets into an int variable and eventually print it - if the first data octet is 0, only the one that comes after it is copied, so the int I read is wrong, as 0x00A2 is not the same as 0xA200. If I use ntohs or ntohl, then 0xA200 is decoded wrong as 0x00A2, so that does not solve the whole problem. I am using memcpy like this:
memcpy(&dst, (const void *)src, bytes2read)
where dst is int, src is unsigned char * and bytes2read is a size_t.
So what am I doing wrong? Thank you!
You cannot use memcpy to portably store bytes in an integer, because the byte order is not specified by the standard, not to mention possible padding bits. The portable way is to use bitwise operations and shifts:
unsigned char b, len;
unsigned int val = 0;
fdin >> len;                 // read the length octet
if (len > sizeof(val)) {     // ensure the value will fit into an unsigned int
    // process error: cannot fit in an unsigned int variable
    ...
}
while (len-- > 0) {          // store and shift one byte at a time
    val <<= 8;               // shift previous value to leave room for new byte
    fdin >> b;               // read it
    val |= b;                // and store it
}
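For illustration, here is the same big-endian accumulation applied to an in-memory buffer (the names src and val are just for this example). The two bytes 0x00, 0xA2 produce 0x00A2, regardless of the host's endianness:

unsigned char src[] = { 0x00, 0xA2 };   // big-endian dump of the value 162
unsigned int val = 0;
for (size_t i = 0; i < sizeof src; ++i)
    val = (val << 8) | src[i];          // val ends up as 0x00A2 == 162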
I need to write 16-bit integers to a file. fstream only writes characters, so I need to convert the integers to char - the actual integer, not the character representing the integer (i.e. 0 should be 0x00, not 0x30). I tried the following:
char * chararray = (char*)(&the_int);
However, this creates a backwards array of two characters. The individual characters are not flipped, but the order of the characters is. So I created this function:
char * inttochar(uint16_t input)
{
    int input_size = sizeof(input);
    char * chararray = (char*)(&input);
    char * output = new char[input_size + 1];   // buffer for the swapped bytes
    output[0] = '\0';
    for (int i = 0; i < input_size; i++)
    {
        output[i] = chararray[input_size - (i + 1)];
    }
    return output;
}
This seems slow. Surely there is a more efficient, less hacky way to convert it?
It's a bit hard to understand what you're asking here (perhaps it's just me, although I gather the commenters thought so too).
You write
fstream only writes characters
That's true, but doesn't necessarily mean you need to create a character array explicitly.
E.g., if you have an fstream object f (opened in binary mode), you can use the write method:
uint16_t s;
...
f.write(reinterpret_cast<const char *>(&s), sizeof(uint16_t));
As others have noted, when you serialize numbers, it often pays to use a commonly-accepted ordering. Hence, use htons (refer to the documentation for your OS's library):
uint16_t s;
...
const uint16_t ns = htons(s);
f.write(reinterpret_cast<const char *>(&ns), sizeof(uint16_t));
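Putting the two together, a minimal self-contained sketch might look like this (assuming a POSIX system, where htons comes from <arpa/inet.h>; on Windows it is declared in <winsock2.h>, and "out.bin" is just an example filename):

#include <cstdint>
#include <fstream>
#include <arpa/inet.h>   // htons

int main()
{
    std::ofstream f("out.bin", std::ios::binary);
    uint16_t s = 1234;
    const uint16_t ns = htons(s);   // convert to network (big-endian) order
    f.write(reinterpret_cast<const char *>(&ns), sizeof ns);
}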
I'm trying to display an integer on an LCD display. The way the LCD works is that you send an 8-bit ASCII character to it and it displays the character.
The code I have so far is:
unsigned char text[17] = "ABCDEFGHIJKLMNOP";
int32_t n = 123456;
lcd.printInteger(text, n);

//-----------------------------------------

void LCD::printInteger(unsigned char headLine[17], int32_t number)
{
    //......
    int8_t str[17];
    itoa(number, (char*)str, 10);
    for (int i = 0; i < 16; i++)
    {
        if (str[i] == 0x0)
            break;
        this->sendCharacter(str[i]);
        _delay_ms(2);
    }
}

void LCD::sendCharacter(uint8_t character)
{
    //....
    *this->cOutputPort = character;
    //...
}
So if I try to display 123456 on the LCD, it actually displays -7616, which obviously is not the correct integer.
I know there is probably a problem because I convert the characters to signed int8_t and then output them as unsigned uint8_t, but I have to output them in unsigned format. I don't know how I can convert the int32_t input integer to an ASCII uint8_t string.
On your architecture, int is an int16_t, not int32_t. Thus, itoa treats 123456 as -7616, because:
123456 = 0x0001_E240
-7616 = 0xFFFF_E240
They are the same if you truncate them down to 16 bits - so that's what your code is doing. Instead of using itoa, you have the following options:
calculate the ASCII representation yourself;
use ltoa(long value, char * buffer, int radix), if available, or
leverage s[n]printf if available.
For the last option you can use the following, "mostly" portable code:
void LCD::printInteger(unsigned char headLine[17], int32_t number) {
    ...
    char str[17];
    if (sizeof(int) == sizeof(int32_t))
        snprintf(str, sizeof(str), "%d", number);
    else if (sizeof(long int) == sizeof(int32_t))
        snprintf(str, sizeof(str), "%ld", number);
    else if (sizeof(long long int) == sizeof(int32_t))
        snprintf(str, sizeof(str), "%lld", number);
    ...
}
If, and only if, your platform doesn't have snprintf, you can use sprintf and remove the 2nd argument (sizeof(str)). Your go-to function should always be the n variant, as it gives you one less bullet to shoot your foot with :)
Since you're compiling with a C++ compiler that is, I assume, at least half-decent, the above should do "the right thing" in a portable way, without emitting code for the branches that can never be taken. The test conditions passed to if are compile-time constant expressions, and even some fairly old C compilers could handle those properly.
Nitpick: Don't use int8_t where a char would do. itoa, s[n]printf, etc. expect char buffers, not int8_t buffers.
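For the first option (calculating the ASCII representation yourself), a minimal sketch could look like the function below; the name int32ToAscii and the 12-byte buffer requirement are assumptions for this example, not part of the original code.

// Converts value to a decimal string in buf; buf must hold at least 12 chars
// (sign + 10 digits + terminator). Handles 0 and negative values.
void int32ToAscii(int32_t value, char *buf)
{
    char tmp[12];
    int i = 0;
    uint32_t mag = (value < 0) ? 0u - (uint32_t)value : (uint32_t)value;
    do {
        tmp[i++] = char('0' + mag % 10);   // produce digits in reverse order
        mag /= 10;
    } while (mag != 0);
    int j = 0;
    if (value < 0)
        buf[j++] = '-';
    while (i > 0)
        buf[j++] = tmp[--i];               // copy digits back in the right order
    buf[j] = '\0';
}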
I ran into a very strange problem. I think I am missing some very basic thing here. When I do this:
char buffer[1] = {0xA0};
int value=0;
value = (int)buffer[0];
printf("Array : %d\n",value);
I get the result -96, which shouldn't happen. It should give me 160, as the hex number 0xA0 means 160 in decimal. When I put small values in the buffer, like 0x1F, it works fine.
Can anyone tell me what am I missing here?
char is signed on your platform, so its range is -128 to 127.
Declare buffer as unsigned char or cast to unsigned char:
char buffer[1] = {0xA0};
int value=0;
value = (unsigned char)buffer[0];
printf("Array : %d\n",value);
I was making a function to read a file containing some dumped data (a sequence of 1-byte values). As the dumped values were 1 byte each, I read them as chars. I opened the file in binary mode, read the data as chars and cast them to int (so I get the ASCII codes). But the data read isn't correct (compared in a hex editor). Here's my code:
int** read_data(char* filename, int** data, int& height, int& width)
{
    data = new int*[height];
    int row, col;
    ifstream infile;
    infile.open(filename, ios::binary | ios::in);
    if (!infile.good())
    {
        return 0;
    }
    char* ch = new char[width];
    for (row = 0; row < height; row++)
    {
        data[row] = new int[width];
        infile.read(ch, width);
        for (col = 0; col < width; col++)
        {
            data[row][col] = int(ch[col]);
            cout << data[row][col] << " ";
        }
        cout << endl;
    }
    infile.close();
    return data;
}
Any ideas what might be wrong with this code?
My machine runs Windows, I'm using Visual Studio 2005, and the (exact) filename that I passed is:
"D:\\files\\output.dat"
EDIT: If I don't use unsigned char, the first 8 values, which are all 245, are read as -11.
I think you might have to use unsigned char and unsigned int to get correct results. In your code, the bytes you read are interpreted as signed values; I assume you did not intend that.
Your error seems to lie in the use of char* for ch: when you try to output it, chars are printed only up to the first zero value.
A plain char can be either signed or unsigned, depending on the compiler. To get a consistent (and correct) result, you can cast the value to unsigned char before assigning to the int.
data[row][col]=static_cast<unsigned char>(ch[col]);
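Equivalently, a small sketch (assuming the surrounding loop stays the same and you add #include <vector>): read into a buffer of unsigned char in the first place, so every element already promotes to a non-negative int.

vector<unsigned char> ch(width);
infile.read(reinterpret_cast<char *>(&ch[0]), width);
for (col = 0; col < width; col++)
{
    data[row][col] = ch[col];   // 0..255, no sign extension
}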