Using file streams in C++, I have read a string from a binary file into a 4-byte buffer. I know that the buffer contains "89abcdef". The buffer is such that:
buffer[0] = 89
buffer[1] = ab
buffer[2] = cd
buffer[3] = ef
Now I want to combine these bytes into the single hex number 0x89abcdef. However, this is not as simple as I thought. I tried the following code:
int num = 0;
num |= buffer[0];
num <<= 24;
cout << num << endl;
At this point, num is displayed as
ea000000
When I tried the same algorithm for the second element of the buffer:
num = 0;
num |= buffer[1];
num <<= 16;
cout << num << endl;
output:
ffcd0000
The ff in front of the cd is highly inconvenient when I want to add them all together (I was planning to make it something that looks like 00cd0000 and add it to the first num).
Could anyone help me to recover the hex number 0x89abcdef? Thanks.
Don't modify the actual number until the end:
num = buffer[0] << 24 | buffer[1] << 16 | buffer[2] << 8 | buffer[3];
buffer[0] << 24 gives you your first result, which is then combined with the second result, computed independently of the first, and so on.
Also, as pointed out, operations like this should be done on unsigned numbers, so that sign extension doesn't interfere with the result.
For all of your bitwise operations, you're going to want to use unsigned int instead of int. This way you can avoid the kinds of sign-extension problems you're seeing.
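A minimal sketch of the combined fix (assuming buffer holds the four bytes read from the file as plain char; masking each byte with 0xFF strips the sign extension before the shift):
unsigned int num = (buffer[0] & 0xFFu) << 24 | (buffer[1] & 0xFFu) << 16 | (buffer[2] & 0xFFu) << 8 | (buffer[3] & 0xFFu);
cout << hex << num << endl; // prints 89abcdef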
When I try to run the code below:
void* data = new char[SIZE]();
int16_t* num = static_cast<int16_t*>(data);
char* c = static_cast<char*>(data);
num[0] = 49;
num[1] = 50;
num[2] = 48;
for(int i = 0; i < 3; ++i)
cout << num[i] << " ";
cout << endl;
for(int i = 0; i < 6; ++i)
cout << "'" << c[i] << "' ";
cout << endl;
c[1] = 1;
cout << num[0] << endl;
I got some unexpected results:
49 50 48
'1' '' '2' '0' ''
305
So the first line of the output confirms that num[0] == 49 (int16), which in binary form is 00000000 00110001. Converted to char, the first byte should be an unprintable character and the second byte should be '1'. But the second line shows that it's the other way around.
Also, the third line shows that the attempt to change the second byte to 00000001 changed the first byte instead. I expected the result to be 00000000 00000001, but the int16 value is 305, which is 00000001 00110001.
What's going on here?
This happens because of the way the computer stores multi-byte values in its memory, a.k.a. endianness.
In big-endian notation, the most significant byte of a 16-bit variable is stored at the lower address; in little-endian notation, the least significant byte comes first.
In your case, since you are setting the values through an int16_t pointer and your machine is little-endian, each pair of bytes is stored in reverse order; thus, the '1' is printed before the unprintable character.
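As a worked illustration of the layout (a sketch of little-endian memory, matching the code above):
// num[0..2] = 49, 50, 48, i.e. 0x0031, 0x0032, 0x0030 as int16_t
// byte index: 0    1    2    3    4    5
// contents:  0x31 0x00 0x32 0x00 0x30 0x00 -> '1', NUL, '2', NUL, '0', NUL
// After c[1] = 1, the first two bytes are 0x31 0x01,
// so num[0] is 0x0131 = 305, matching the output above.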
Extra reading (Elaborating Martin Bonner's comment)
For a 16-bit variable there are only two possible byte orderings, the ones described above. For 32-bit variables there are 24 possible byte orderings in total, and at least three of them are or were actually used: big-endian, little-endian and PDP-endian (a.k.a. middle-endian).
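For example, the 32-bit value 0x0A0B0C0D is laid out in memory as:
big-endian:    0A 0B 0C 0D
little-endian: 0D 0C 0B 0A
PDP-endian:    0B 0A 0D 0C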
First of all, forgive my extremely amateur coding knowledge.
I am an intern at a company and have been assigned to create a program in C++ that swaps bytes in order to get the correct checksum value.
I am reading a list that resembles something like:
S315FFF200207F7FFFFF42A000000000001B000000647C
S315FFF2003041A00000FF7FFFFF0000001B00000064ED
S315FFF2004042480000FF7FFFFF0000001E000000464F
I have made the program convert this string to hex and then int so that it can be read correctly. I am not reading the first 12 chars or last 2 chars of each line.
My question is how do I make the converted int do a byte swap (little endian to big endian) so that it is readable to the computer?
Again I'm sorry if this is a terrible explanation.
EDIT: I essentially need to take each 16-bit word (4 hex characters) and swap its two bytes, i.e. 64C7 flipped to C764, etc. How would I do this and put it into a new array? Each line is a string right now...
EDIT2: This is part of my code as of now...
int j = 12;
for (i = 0; i < hexLength2 - 5; i++){
string convert1 = ODL.substr(j, 4);
short input_int = stoi(convert1);
short lowBit = 0x00FF & input_int;
short hiBit = 0xFF00 & input_int;
short byteSwap = (lowBit << 8) | (hiBit >> 8);
I think I may need to convert my stoi result to a short in some way...
EDIT3: Using the answer code below I get the following...
HEX: 8D --> stored to memory (myMem = unsigned short) as 141 (decimal) --> when byte swapped: -29440
What's wrong here?
for (i = 0; i < hexLength2 - 5; i++){
string convert1 = ODL.substr(j, 2);
stringstream str1;
str1 << convert1;
str1 >> hex >> myMem[k];
short input_int = myMem[k]; //byte swap
short lowBit = 0x00FF & input_int;
short hiBit = 0xFF00 & input_int;
short byteSwap = (lowBit << 8) | (hiBit >> 8);
cout << input_int << endl << "BYTE SWAP: " <<byteSwap <<"Byte Swap End" << endl;
k++;
j += 2;
}
You can always do it bitwise too (assuming a 16-bit word). For example, if you're byte-swapping an int:
short input_int = 123; // each of the ints that you have
short input_lower_half = 0x00FF & input_int;
short input_upper_half = 0xFF00 & input_int;
// size of short is 16-bits, so shift the bits halfway in each direction that they were originally
short byte_swapped_int = (input_lower_half << 8) | (input_upper_half >> 8);
EDIT: My exact attempt at using your code
unsigned short myMem[20];
int k = 0;
string ODL = "S315FFF2000000008DC7000036B400003030303030319A";
int j = 12;
for(int i = 0; i < (ODL.length()-12)/4; i++) { // not exactly sure what your loop condition was
string convert1 = ODL.substr(j, 4);
cout << "substring is: " << convert1 << endl;
stringstream str1;
str1 << convert1;
str1 >> hex >> myMem[k];
short input_int = myMem[k]; //byte swap
unsigned short lowBit = 0x00FF & input_int; // changed this to unsigned
unsigned short hiBit = 0xFF00 & input_int; // changed this to unsigned
short byteSwap = (lowBit << 8) | (hiBit >> 8);
cout << hex << input_int << " BYTE SWAPed as: " << byteSwap <<", Byte Swap End" << endl;
k++;
j += 4;
}
It only matters to change lowBit and hiBit to be unsigned, since those are the temporary values we're using.
If you're asking what I think you're asking:
First, you need to make sure you know what size your integers are. 32 bits is nice and standard, but check and make sure.
Second, cast your integer array as a char array. Now you can access and manipulate the array one byte at a time.
Third, just reverse the order of every four bytes (after your first 12-character offset): swap the first and fourth bytes, and the second and third. A sketch of this follows.
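A minimal sketch of that idea (names are illustrative; it assumes 32-bit ints sitting in a byte buffer whose length len is a multiple of 4, and it needs <utility> for std::swap):
unsigned char *bytes = reinterpret_cast<unsigned char *>(values);
for (size_t i = 0; i + 3 < len; i += 4)
{
std::swap(bytes[i], bytes[i + 3]); // first and fourth
std::swap(bytes[i + 1], bytes[i + 2]); // second and third
}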
First off, I apologize if this is a duplicate; but my Google-fu seems to be failing me today.
I'm in the middle of writing an image format module for Photoshop, and one of the save options for this format includes a 4-bit alpha channel. Of course, the data I have to convert is 8-bit/1-byte alpha, so I need to essentially take every two bytes of alpha and merge them into one.
My attempt (below), I believe, has a lot of room for improvement:
for(int x=0,w=0;x < alphaData.size();x+=2,w++)
{
short ashort=(alphaData[x] << 8)+alphaData[x+1];
alphaFinal[w]=(unsigned char)ashort;
}
alphaData and alphaFinal are vectors that contain the 8-bit alpha data and the 4-bit alpha data, respectively. I realize that reducing two bytes into the value of one is bound to result in loss of "resolution", but I can't help but think there's a better way of doing this.
For extra information, here's the loop that does the reverse (converts 4-bit alpha from the format to 8-bit for Photoshop)
alphaData serves the same purpose as above, and imgData is an unsigned char vector that holds the raw image data. (Alpha data is tacked on after the actual RGB data for the image in this particular variant of the format.)
for(int b=alphaOffset,x2=0;b < (alphaOffset+dataLength); b++,x2+=2)
{
unsigned char lo = (imgData[b] & 15);
unsigned char hi = ((imgData[b] >> 4) & 15);
alphaData[x2]=lo*17;
alphaData[x2+1]=hi*17;
}
Are you sure that it's
alphaData[x2]=lo*17;
alphaData[x2+1]=hi*17;
and not
alphaData[x2]=lo*16;
alphaData[x2+1]=hi*16;
In any case, to generate the values that work with the decoding function you have posted, you just have to reverse the operations. So multiplying by 17 becomes dividing by 17 and the shifts and masks get reordered to look like this:
for(int x=0,w=0;x < alphaData.size();x+=2,w++)
{
unsigned char alpha1 = alphaData[x] / 17;
unsigned char alpha2 = alphaData[x+1] / 17;
assert(alpha1 < 16 && alpha2 < 16); // from <cassert>
alphaFinal[w]=(alpha2 << 4) | alpha1;
}
short ashort=(alphaData[x] << 8)+alphaData[x+1];
alphaFinal[w]=(unsigned char)ashort;
You're actually losing alphaData[x] in alphaFinal. You shift alphaData[x] eight bits to the left, and the cast to unsigned char then keeps only the low eight bits, i.e. alphaData[x+1].
Also, your for loop is unsafe: if for some reason alphaData.size() is odd, you'll run out of range.
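A bounds-safe sketch of what the loop could look like (the nibble order and the division by 17 follow the decoding loop and the other answers here; treat it as an outline, not a drop-in):
for (size_t x = 0; x + 1 < alphaData.size(); x += 2)
{
unsigned char lo = alphaData[x] / 17; // low nibble, matching the decode loop
unsigned char hi = alphaData[x + 1] / 17; // high nibble
alphaFinal[x / 2] = (unsigned char)((hi << 4) | lo);
}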
What you want to do, I think, is to truncate an 8-bit value into a 4-bit one, not to combine two 8-bit values. In other words, you want to drop the four least significant bits of each alpha value, not combine two different alpha values.
So, basically, you want to right-shift by 4.
output = (input >> 4); /* truncate four bits */
in case you're not familiar with binary shifts, take this random 8-bit number:
10110110
>> 1
= 01011011
>> 1
= 00101101
>> 1
= 00010110
>> 1
= 00001011
so,
10110110
>> 4
= 00001011
and to reverse, left-shift instead...
input = (output << 4); /* expand four bits */
which, using the result from that same random 8-bit number as before, would be
00001011
<< 4
= 10110000
obviously, as you noted, 4 bits of precision is lost. But you'd be surprised how little it's noticed in a fully-composited work.
This code
for(int x=0,w=0;x < alphaData.size();x+=2,w++)
{
short ashort=(alphaData[x] << 8)+alphaData[x+1];
alphaFinal[w]=(unsigned char)ashort;
}
is broken. Given:
#include <iostream>
using std::cout;
using std::endl;
typedef unsigned char uchar;
int main() {
uchar x0 = 1; // for alphaData[x]
uchar x1 = 2; // for alphaData[x+1]
short ashort = (x0 << 8) + x1; // The value 0x0102
uchar afinal = (uchar)ashort; // truncates to 0x02.
cout << std::hex
<< "x0 = 0x" << x0 << " << 8 = 0x" << (x0 << 8) << endl
<< "x1 = 0x" << x1 << endl
<< "ashort = 0x" << ashort << endl
<< "afinal = 0x" << (unsigned int)afinal << endl
;
}
If you are saying that your source stream contains sequences of 4-bit pairs stored in 8-bit storage values, which you need to re-store as a single 8-bit value, then what you want is:
for(int x=0,w=0;x < alphaData.size();x+=2,w++)
{
unsigned char aleft = alphaData[x] & 0x0f; // 4 bits.
unsigned char aright = alphaData[x + 1] & 0x0f; // 4 bits.
alphaFinal[w] = (aleft << 4) | (aright);
}
"<<4" is equivalent to "*16", as ">>4" is equivalent to "/16".
Basically what I want to do is read a binary file and extract 4 consecutive bytes at an address, e.g. 0x8000. For example, the 4 bytes are 89 ab cd ef. I want to read these values and store them into a buffer, and then convert the buffer to an int. I have tried the following method:
ifstream *pF = new ifstream();
buffer = new char[4];
memset(buffer, 0, 4);
pF->read(buffer, 4);
When I tried
cout << buffer << endl;
nothing happens. I guarantee that there are values at this location (I can view the binary file in a hex viewer). Could anyone show me how to convert the buffer to an int and properly display it? Thank you.
Update
int number = 0;
for (int i = 0; i < 4; ++i)
{
number <<= 8;
number |= buffer[i] & 0xFF; // mask to avoid sign extension from a signed char
}
It also depends on the byte order (little-endian vs. big-endian). If your number is composed the other way, you can use number |= buffer[3 - i] & 0xFF instead.
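For example, a variant that takes the bytes in the opposite order (a sketch, reusing the same buffer):
int number = 0;
for (int i = 0; i < 4; ++i)
{
number <<= 8;
number |= buffer[3 - i] & 0xFF; // last byte becomes most significant
}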
And in order to display the int in hex, you can use
#include <iomanip>
cout << hex << number;
cout << hex << (buffer[0] & 0xFF) << (buffer[1] & 0xFF) << (buffer[2] & 0xFF) << (buffer[3] & 0xFF) << endl; // mask each char so it prints as a hex number, not a character
See http://www.cplusplus.com/reference/iostream/manipulators/hex/
I have the contents of a file assigned into a string object. For simplicity the file only has 5 bytes, which is the size of 1 integer plus another byte.
What I want to do is get the first four bytes of the string object and somehow store it into a valid integer variable by the program.
Then the program will do various operations on the integer, changing it.
Afterward I want the changed integer stored back into the first four bytes of the string object.
Could anyone tell me how I could achieve this? I would prefer to stick with the standard C++ library exclusively for this purpose. Thanks in advance for any help.
The following code snippet should illustrate a handful of things. Beware of endian differences. Play around with it. Try to understand what's going on. Add some file operations (binary read & write). The only way to really understand how to do this, is to experiment and create some tests.
#include <iostream>
#include <string>
using namespace std;
int main(int argc, char *argv[]) {
int a = 108554107; // some random number for example sake
char c[4]; // simulate std::string containing a binary int
*((int *) &c[0]) = a; // use casting to copy the data
// reassemble a into b, using indexed bytes from c
int b = 0;
b |= (c[3] & 0xff) << 24;
b |= (c[2] & 0xff) << 16;
b |= (c[1] & 0xff) << 8;
b |= c[0] & 0xff;
// show that all three are equivalent
cout << "a: " << a << " b: " << b
<< " c: " << *((int *) &c[0]) << endl;
return 0;
}
If you are reading into a std::string from that file, any zero byte could signal the end of the string, so you might end up with a string that is shorter than 5 bytes. Look into how to do binary I/O with C++ streams.
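A minimal sketch of such a binary read (the file name is illustrative; constructing the string with an explicit length keeps any embedded zero bytes):
#include <fstream>
#include <string>
std::ifstream in("data.bin", std::ios::binary);
char raw[5];
in.read(raw, 5); // read all 5 bytes, zero bytes included
std::string contents(raw, 5); // explicit length, so zero bytes survive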