I have a CString with 'HEX' values, which looks something like this: "0030303100314a"
Now I want to store this into a bin file, so that if I open it with a hex editor, the data is displayed like in the string: 00 30 30 30 31 00 31 4a
I can't manage to prepare the data correctly. It always gets saved as plain ASCII text even though I use the "wb" mode.
Thanks in advance
OK, your misunderstanding is that "wb" means all output happens in 'binary'. That's not what it does at all. There are binary output operations, and you should open the file in binary mode to use them, but you still have to actually use those binary output operations. This also means you have to do the work of converting your hex characters to integer values.
Here's how I might do your task:
for (int i = 0; i + 1 < str.GetLength(); i += 2)
{
    int hex_value = 16 * to_hex(str[i]) + to_hex(str[i + 1]);   // two hex digits make one byte value
    fputc(hex_value, file);                                     // write the raw byte to the file opened with "wb"
}
static int to_hex(TCHAR ch)
{
    if (ch >= '0' && ch <= '9') return ch - '0';
    if (ch >= 'a' && ch <= 'f') return ch - 'a' + 10;   // your sample string contains a lowercase 'a'
    return ch - 'A' + 10;
}
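Putting it together, a minimal self-contained sketch might look like the following (I've used std::string and a placeholder file name purely for illustration; with CString you'd use GetLength() and operator[] the same way):

#include <cstddef>
#include <cstdio>
#include <string>

// Convert one hex digit ('0'-'9', 'A'-'F', 'a'-'f') to its value 0-15.
static int to_hex(char ch)
{
    if (ch >= '0' && ch <= '9') return ch - '0';
    if (ch >= 'a' && ch <= 'f') return ch - 'a' + 10;
    return ch - 'A' + 10;
}

int main()
{
    std::string str = "0030303100314a";              // the hex text to convert

    std::FILE *file = std::fopen("out.bin", "wb");   // binary mode: no newline translation
    if (!file) return 1;

    // Take the digits two at a time; each pair becomes one byte.
    for (std::size_t i = 0; i + 1 < str.size(); i += 2)
        std::fputc(16 * to_hex(str[i]) + to_hex(str[i + 1]), file);

    std::fclose(file);
    return 0;
}

Opened with a hex editor, out.bin should then show exactly 00 30 30 30 31 00 31 4a.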
The program od will take a file and convert it to various representations. For example, if you say od -x myfile.bin, the contents of myfile.bin will be printed as hexadecimal bytes, just like what you're looking for. You could make C++ do this too, but maybe it's easier to just post-process your data to look the way you want.
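If you'd rather do the check from inside a program, here's a rough sketch along the same lines (the file name is just an example) that prints each byte of a file as two hex digits:

#include <cstdio>

int main()
{
    std::FILE *f = std::fopen("myfile.bin", "rb");   // read it back in binary mode
    if (!f) return 1;

    int byte;
    int count = 0;
    while ((byte = std::fgetc(f)) != EOF)
    {
        std::printf("%02x ", byte);                  // each byte as two hex digits
        if (++count % 16 == 0) std::printf("\n");    // 16 bytes per line, like a typical hex dump
    }
    std::printf("\n");
    std::fclose(f);
    return 0;
}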
while (n != 0) {
    int temp = n % 16;   // temp stores the remainder

    // check if temp < 10
    if (temp < 10) {
        hexaDeciNum[i] = temp + 48;
        i++;
    }
    else {
        hexaDeciNum[i] = temp + 55;
        i++;
    }
    n = n / 16;
}
I found this code to convert from decimal to hex, but as you can see it uses + 48 and + 55. Does anyone know why these numbers are used? By the way, temp stores the remainder... thanks!
What the code is doing, badly, is converting a value in the range of 0 to 15 into a corresponding character for the hexadecimal representation of that value. The right way to do that is with a lookup table:
const char hex[] = "0123456789ABCDEF";
hexaDeciNum[i] = hex[temp];
One problem with the code as written is that it assumes, without saying so, that you want the output characters encoded in ASCII. That's almost always the case, but there is no need to make that assumption. The compiler knows what encoding the system that it is targeting uses, so the values in the array hex in my code will be correct for the target system, even if it doesn't use ASCII.
Another problem with the code as written is the magic numbers. They don't tell you what their purpose is. To get rid of the magic numbers, replace 48 with '0' and replace 55 with 'A' - 10. But see the previous paragraph.
In C and C++ you can convert a base-10 digit to its corresponding character by adding it to '0', so
hexaDeciNum[i] = digit + '0';
will work correctly. There is no such requirement for any other values, so that conversion to a letter is not guaranteed to work, even if you use 'A' instead of that hardcoded 65.
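For completeness, here's a small sketch of the whole conversion done with the lookup table instead of the magic numbers (the function name and the use of std::string are just illustrative choices):

#include <string>

// Convert a non-negative value to its hexadecimal text form.
std::string to_hex_string(unsigned int n)
{
    const char hex[] = "0123456789ABCDEF";
    if (n == 0) return "0";

    std::string result;
    while (n != 0)
    {
        result.insert(result.begin(), hex[n % 16]);   // low digit comes out first, so prepend it
        n /= 16;
    }
    return result;
}

This sidesteps both the magic numbers and the ASCII assumption, since the compiler fills the hex array with whatever character encoding the target system uses.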
And don't get me started on pointless comments:
// check if temp < 10
if (temp < 10)
If you look at the ASCII table you will see that the characters for the digits 0..9 start at 48. So if you take a number, e.g. 0, and add 48 to it, you get the character for that number, "0".
The same goes for the letters: if you take the number 10 and add 55 to it, you get the character "A" from the ASCII table.
I'm writing to a file one byte at a time in C++. So far I have a code like this:
if ( i == 0 ) { outputfile << '\x0';}
if ( i == 1 ) { outputfile << '\x1';}
....
if ( i == 254 ) { outputfile << '\xfe';}
if ( i == 255 ) { outputfile << '\xff';}
It works but as you can imagine that's an extra 255 lines. If I were using Python it would be as simple as:
output.write(bytes((i,)))
Is there a simpler way to write single-byte integers as raw bytes? Like a couple of lines of code at most?
I've tried using char and some conversions for the past day, but I'm not good at handling that data type at all. The file I write ends up being corrupted. Even though I get the file size right, it takes up more space on disk than it should when I try that way.
Thanks for reading
In every line, you are simply converting the int i to a char. This single line matches the logic of all your code:
outputfile << static_cast<char>(i);
However, you likely don't want the text formatting that operator << uses and should instead write:
outputfile.put( static_cast<char>(i) );
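With that, the whole 0..255 sequence collapses to a few lines. A minimal sketch, with a placeholder file name:

#include <fstream>

int main()
{
    std::ofstream outputfile("output.bin", std::ios::binary);   // binary mode, no newline translation

    for (int i = 0; i <= 255; ++i)
        outputfile.put(static_cast<char>(i));                   // write each value as one raw byte

    return 0;
}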
You don't need any of those if statements. Simply print out (char) i.
When you place a data type (like char, int, double, float) in parentheses before a variable, this will convert that value to that type.
(int) 'a' will convert the character 'a' to an integer of value 97.
(char) 97 will convert the integer 97 to the character 'a'.
There are all kinds of things you can do with type casting, and you should be aware of possible overflows, but I'll leave that as something for you to research.
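A quick sketch of those casts in action (the printed values assume an ASCII-compatible execution character set):

#include <iostream>

int main()
{
    std::cout << (int)'a' << '\n';    // prints 97
    std::cout << (char)97 << '\n';    // prints a
    std::cout << (char)105 << '\n';   // prints i, the byte value 105 shown as a character
}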
I don't have much experience with operations on, or storage of, binary data, so I would greatly appreciate it if someone could clarify some things for me.
Say I have a device where you have to store 16 bytes, i.e. you send it an array of bytes, probably preceded by header information, something like this:
unsigned char sendBuffer[255];
sendBuffer[0] = headerInfo1;
sendBuffer[1] = headerInfo1;
sendBuffer[2] = headerInfo1;
sendBuffer[3] = headerInfo1;
sendBuffer[4] = data;
sendBuffer[5] = data;
sendBuffer[6] = data;
sendBuffer[7] = data;
sendBuffer[8] = data;
...
sendBuffer[20] = data;
Let's say the send operation is easy; you just use Send(sendBuffer, length).
My question is say I want to store an integer in the device - what is the best way to do this?
I have some sample code which does it, and I was not sure whether it was OK or how it was doing it; it confused me too. I basically enter the number I want to store in a text box. Say I want to store 105 in decimal. I enter "00000000000000000000000000000105" (I am not sure how the program interprets this yet, as decimal or as hex), then there is this code:
for (int i = 0, m = 0; i < size; i += 2, m++)
{
    char ch1, ch2;
    ch1 = (char)str[i];       // str contains the number I entered above as a string, padded
    ch2 = (char)str[i + 1];

    int dig1, dig2;
    if (isdigit(ch1))                  dig1 = ch1 - '0';
    else if (ch1 >= 'A' && ch1 <= 'F') dig1 = ch1 - 'A' + 10;
    else if (ch1 >= 'a' && ch1 <= 'f') dig1 = ch1 - 'a' + 10;

    if (isdigit(ch2))                  dig2 = ch2 - '0';
    else if (ch2 >= 'A' && ch2 <= 'F') dig2 = ch2 - 'A' + 10;
    else if (ch2 >= 'a' && ch2 <= 'f') dig2 = ch2 - 'a' + 10;

    // array1 holds the data to write as a byte array; this is the 'data' part of my snippet above
    array1[m] = (char)(dig1 * 16 + dig2);
}
And this array1 is written to the device using Send as above. But when I debug, array1 contains: 0000000000000015
When I do the read, the value I get back is correct: it is 00000000000000000000000000000105. How come this works?
You're reinventing a few wheels here, but that's to be expected if you're new to C++.
std::cin >> yourInteger will read an integer, no need to convert that yourself.
Leading zeroes are usually not written out, but in a C++ integer type they're always present. E.g. int32_t always has 32 bits. If it stores 105 (0x69), it really stores 0x00000069.
So, the best way is probably to memcpy that integer to your sendBuffer. You should copy sizeof(yourInteger) bytes.
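A rough sketch of that approach (the header bytes, buffer layout and function name are placeholders, not your device's actual protocol):

#include <cstdint>
#include <cstring>

unsigned char sendBuffer[255];

void storeInteger(std::uint32_t value)
{
    sendBuffer[0] = 0x01;   // placeholder header bytes
    sendBuffer[1] = 0x02;
    sendBuffer[2] = 0x03;
    sendBuffer[3] = 0x04;

    // Copy the integer's bytes directly into the buffer after the header.
    std::memcpy(&sendBuffer[4], &value, sizeof(value));

    // Send(sendBuffer, 4 + sizeof(value));   // as in your snippet
}

Note that memcpy copies the integer in the host's byte order; if the device expects a particular endianness, you may have to place the bytes explicitly.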
Seems there are a few questions hiding in here, so some extra answers:
You say that array1 contains: 0000000000000015, not 105.
Well, it's an array, and each member is shown as an 8-bit integer in its own right.
E.g. the last value is 5 or 05, that's the same after all. Similarly, the penultimate integer is 1 or 01.
You also wrote "Say I want to store 105 in decimal. I enter 00000000000000000000000000000105". That doesn't actually store 105 decimal. It stores 105 hexadecimal, which is 261 decimal. It is the string-to-integer conversion which determines the final value. If you used base 18 (octodecimal), the string "105" would become the integer 1*18*18 + 0 + 5 = 329 (decimal), and that would be stored as 101001001 in binary.
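You can see the effect of the chosen base with strtol, which accepts any base from 2 to 36. A quick sketch:

#include <cstdio>
#include <cstdlib>

int main()
{
    const char *text = "105";
    std::printf("%ld\n", std::strtol(text, nullptr, 10));   // 105 (decimal)
    std::printf("%ld\n", std::strtol(text, nullptr, 16));   // 261 (hexadecimal 0x105)
    std::printf("%ld\n", std::strtol(text, nullptr, 18));   // 329 (base 18)
}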
This is my first time working with binary files, and I'm ending up with clumps of hair in my hands. Anyway, I have the following defined:
unsigned int cols, rows;
Those variables can be anywhere from 1 to about 500. When I get to writing them to a binary file, I'm doing this:
myFile.write(reinterpret_cast<const char *>(&cols), sizeof(cols));
myFile.write(reinterpret_cast<const char *>(&rows), sizeof(rows));
When I go back and read the file, with cols = 300, I get this as the result:
44
1
0
0
Can someone please explain to me why I'm getting that result? I can't say that there's something wrong, as I honestly think it's me who doesn't understand things. What I'd LIKE to do is store the value, as is, in the file so that when I read it back, I get that as well. And maybe I do, I just don't know it.
I'd like some explanation of how this works and how to get the data I put in read back.
You are simply looking at the four bytes of a 32 bit integer, interpreted on a little-endian platform.
300 base 10 = 0x12C
So little-endianness gives you 0x2C 0x01, and of course 0x2C=44.
Each byte in the file has 8 bits, so can represent values from 0 to 255. It's written in little-endian order, with the low byte first. So, starting at the other end, treat the numbers as digits in base 256. The value is 0 * 256^3 + 0 * 256^2 + 1 * 256^1 + 44 * 256^0 (where ^ means exponentiation, not xor).
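In other words, reassembling the value from the four bytes you listed might look like this sketch:

#include <cstdio>

int main()
{
    unsigned char bytes[4] = { 44, 1, 0, 0 };   // the bytes as they appear in the file

    // Little-endian: the first byte is the least significant "digit" in base 256.
    unsigned int value = bytes[0]
                       + bytes[1] * 256u
                       + bytes[2] * 256u * 256u
                       + bytes[3] * 256u * 256u * 256u;

    std::printf("%u\n", value);   // prints 300
}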
You have not (yet) shown how you unmarshal the data nor how you printed the text that you've cited. 44 01 00 00 looks like the byte-wise decimal representation of each of the little-endian bytes of the data you've written (decimal 300).
If you read the data back like so, it should give you the effect you want (presuming that you're okay with the limitation that the computer which writes this file has the same endianness as the one which reads it back):
unsigned int colsReadFromFile = 0;
myOtherFile.read(reinterpret_cast<char *>(&colsReadFromFile), sizeof(colsReadFromFile));
if (!myOtherFile)
{
    std::cerr << "Oh noes!" << std::endl;
}
300 in binary is 100101100, which is 9 bits long.
But each byte holds only 8 bits, so the first (low-order) byte that ends up in the file is 00101100, the low 8 bits of 1 00101100, and 00101100 is 44.
I have the following code in a project that writes the ASCII representation of a packet to a Unix tty:
int written = 0;
int start_of_data = 3;

// write data to fifo
while (length) {
    if ((written = write(fifo_fd, &packet[start_of_data], length)) == -1)
    {
        printf("Error writing to FIFO\n");
    } else {
        length -= written;
    }
}
I just want to take the data that would have been written to the socket and put it in a variable. To debug, I have just been trying to printf the first letter/digit. I have tried numerous ways to get it to print out, but I keep getting hex forms (I think).
The expected output is: 13176
and the hex value is: 31 33 31 37 36 0D 0A (if that is even hex)
Obviously my C skills are not the sharpest tools in the shed. Any help would be appreciated.
update: I am using hexdump() to get the output
These are the ASCII codes of characters: 31 is '1', 33 is '3' etc. 0D and 0A are the terminating new line characters, also known as '\r' and '\n', respectively. So if you convert the values to characters, you can print them out directly, e.g. with printf using the %c or %s format codes. As you can check from the table linked, the values you posted do represent "13176" :-)
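For example, a minimal sketch that dumps those exact byte values both ways (the buffer is hard-coded from the values you posted):

#include <cstdio>

int main()
{
    unsigned char buf[] = { 0x31, 0x33, 0x31, 0x37, 0x36, 0x0D, 0x0A };

    for (unsigned char b : buf)
        std::printf("%02X ", (unsigned)b);   // hex form: 31 33 31 37 36 0D 0A
    std::printf("\n");

    for (unsigned char b : buf)
        std::printf("%c", b);                // character form: prints 13176 followed by \r\n

    return 0;
}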