Byte Swap with an array? - C++

First of all, forgive my extremely amateur coding knowledge.
I am an intern at a company and have been assigned to write a program in C++ that swaps bytes in order to get the correct checksum value.
I am reading a list that resembles something like:
S315FFF200207F7FFFFF42A000000000001B000000647C
S315FFF2003041A00000FF7FFFFF0000001B00000064ED
S315FFF2004042480000FF7FFFFF0000001E000000464F
I have made the program convert each string from hex characters to integer values so that they can be read correctly. I am not reading the first 12 characters or the last 2 characters of each line.
My question is how do I make the converted int do a byte swap (little endian to big endian) so that it is readable to the computer?
Again I'm sorry if this is a terrible explanation.
EDIT: I need to essentially take each 16-bit word (4 hex characters) and swap its two bytes, e.g. 64C7 becomes C764, and so on. How would I do this and put the results into a new array? Each line is a string right now...
EDIT2: This is part of my code as of now...
int j = 12;
for (i = 0; i < hexLength2 - 5; i++) {
    string convert1 = ODL.substr(j, 4);
    short input_int = stoi(convert1, nullptr, 16); // parse the substring as hex, not decimal
    short lowBit = 0x00FF & input_int;
    short hiBit = 0xFF00 & input_int;
    short byteSwap = (lowBit << 8) | (hiBit >> 8);
    j += 4;
}
I think I may need to have stoi parse the substring as hex and convert the result to a short in some way..
EDIT3: Using the answer code below I get the following...
HEX: 8D --> stored to memory (myMem is an unsigned short array) as 141 (decimal) --> when byte swapped: -29440
What's wrong here??
for (i = 0; i < hexLength2 - 5; i++) {
    string convert1 = ODL.substr(j, 2);
    stringstream str1;
    str1 << convert1;
    str1 >> hex >> myMem[k];
    short input_int = myMem[k]; // byte swap
    short lowBit = 0x00FF & input_int;
    short hiBit = 0xFF00 & input_int;
    short byteSwap = (lowBit << 8) | (hiBit >> 8);
    cout << input_int << endl << "BYTE SWAP: " << byteSwap << " Byte Swap End" << endl;
    k++;
    j += 2;
}

You can always do it bitwise too (assuming a 16-bit word). For example, if you're byte-swapping one of your ints:
short input_int = 123; // each of the ints that you have
short input_lower_half = 0x00FF & input_int;
short input_upper_half = 0xFF00 & input_int;
// size of short is 16 bits, so shift each half 8 bits in the opposite direction
short byte_swapped_int = (input_lower_half << 8) | (input_upper_half >> 8);
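Applied to one of the 16-bit words from the question, as a quick check (a minimal sketch; the value and the unsigned types are my choice so the result prints cleanly):
#include <iostream>

int main() {
    unsigned short word = 0x64C7; // one word from the record
    unsigned short swapped = ((word & 0x00FF) << 8) | ((word & 0xFF00) >> 8);
    std::cout << std::hex << swapped << std::endl; // prints c764
}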
EDIT: My exact attempt at using your code
unsigned short myMem[20];
int k = 0;
string ODL = "S315FFF2000000008DC7000036B400003030303030319A";
int j = 12;
for (int i = 0; i < (ODL.length() - 12) / 4; i++) { // not exactly sure what your loop condition was
    string convert1 = ODL.substr(j, 4);
    cout << "substring is: " << convert1 << endl;
    stringstream str1;
    str1 << convert1;
    str1 >> hex >> myMem[k];
    short input_int = myMem[k]; // byte swap
    unsigned short lowBit = 0x00FF & input_int; // changed this to unsigned
    unsigned short hiBit = 0xFF00 & input_int;  // changed this to unsigned
    short byteSwap = (lowBit << 8) | (hiBit >> 8);
    cout << hex << input_int << " BYTE SWAPed as: " << byteSwap << ", Byte Swap End" << endl;
    k++;
    j += 4;
}
It only matters to change lowBit and hiBit to be unsigned, since those are the temporary values we're using.
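To spell out the -29440 from EDIT3: the byte 0x8D swapped into the high byte gives 0x8D00, which is 36096; that doesn't fit in a signed 16-bit short, so printed as a short it shows up as 36096 - 65536 = -29440. A minimal sketch that stays unsigned end to end (swap16 is my own helper name, not from the code above):
#include <cstdint>
#include <iostream>

// Swap the two bytes of a 16-bit value; unsigned throughout,
// so there is no sign extension and no negative printout.
uint16_t swap16(uint16_t v) {
    return static_cast<uint16_t>((v << 8) | (v >> 8));
}

int main() {
    uint16_t v = 0x008D; // the byte from EDIT3
    std::cout << std::hex << swap16(v) << std::endl; // prints 8d00
}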

If you're asking what I think you're asking-
First, you need to know what size your integers are; 32 bits is nice and standard, but check to make sure.
Second, cast your integer array to a char array. Now you can access and manipulate the array one byte at a time.
Third, just reverse the order of every four bytes (after your initial 12-character offset): swap the first and fourth bytes, and the second and third.
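A minimal sketch of that approach (the array and its sample values are mine, taken from the record lines above):
#include <cstdint>
#include <iostream>
#include <utility>

int main() {
    uint32_t values[2] = {0x42A00000, 0x0000001B}; // two sample 32-bit words
    // View the same storage one byte at a time.
    unsigned char* bytes = reinterpret_cast<unsigned char*>(values);
    for (std::size_t i = 0; i < sizeof(values); i += 4) {
        std::swap(bytes[i], bytes[i + 3]);     // first <-> fourth
        std::swap(bytes[i + 1], bytes[i + 2]); // second <-> third
    }
    std::cout << std::hex << values[0] << ' ' << values[1] << std::endl; // a042 1b000000
}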


Convert a number given as a string input to a 32 bit little endian value

Input -> string: "208705"
Output-> BYTE array: {0x41, 0x2F, 0x03}
I have converted the string to hex format using stringstream:
string decStrToHex(string decimalString) {
    std::stringstream ss;
    ss << std::hex << stoi(decimalString);
    std::string result(ss.str());
    return "0" + result;
}
How to proceed?
How to convert from a (decimal format) string to a (constant sized hex format) BYTE array: ...
There's no direct conversion from the string to that byte array you want to have.
It looks like you want to convert a number given as a string input to a 32 bit little endian value.
Putting 208705 into a calculator and converting it to hex gives 0x00032F41, so 0x41 is the LSB and shows up last in the usual big-endian rendering.
How the byte array you are after actually looks, and how its bytes are ordered, depends on the machine architecture (see endianness).
First, you just convert the number (there are other ways but I'll take that as example):
uint32_t number;
std::istringstream iss("208705");
iss >> number;
Next step is to ensure you have a little endian representation of that number:
union LittleEndian32Bit {
    uint32_t uint;
    uint8_t bytes[4];
};
With that you can have:
LittleEndian32Bit le;
le.bytes[0] = number & 0xFF;
le.bytes[1] = (number >> 8) & 0xFF;
le.bytes[2] = (number >> 16) & 0xFF;
le.bytes[3] = (number >> 24) & 0xFF;
To output the BYTE array {0x41, 0x2F, 0x03}:
std::cout << '{';
for (size_t i = 0; i < sizeof(le.bytes); ++i) {
    if (i != 0) {
        std::cout << ", ";
    }
    std::cout << "0x" << std::hex << std::setw(2) << std::setfill('0')
              << (unsigned int)le.bytes[i];
}
std::cout << '}' << std::endl;
Well, with one more entry for the fourth byte:
{0x41, 0x2f, 0x03, 0x00}
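A complete, self-contained version of the above (only the includes and main are my additions; note that setw/setfill need <iomanip>):
#include <cstdint>
#include <iomanip>
#include <iostream>
#include <sstream>

union LittleEndian32Bit {
    uint32_t uint;
    uint8_t bytes[4];
};

int main() {
    uint32_t number;
    std::istringstream iss("208705");
    iss >> number;

    LittleEndian32Bit le;
    le.bytes[0] = number & 0xFF;
    le.bytes[1] = (number >> 8) & 0xFF;
    le.bytes[2] = (number >> 16) & 0xFF;
    le.bytes[3] = (number >> 24) & 0xFF;

    std::cout << '{';
    for (std::size_t i = 0; i < sizeof(le.bytes); ++i) {
        if (i != 0) {
            std::cout << ", ";
        }
        std::cout << "0x" << std::hex << std::setw(2) << std::setfill('0')
                  << (unsigned int)le.bytes[i];
    }
    std::cout << '}' << std::endl; // prints {0x41, 0x2f, 0x03, 0x00}
}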
For a fixed array of 3 bytes you can do something like this:
#include <array>
#include <sstream>
#include <string>

unsigned char GetByte(int i, int n)
{
    int Mask = 0xFF;
    Mask <<= n * 8;
    return (i & Mask) >> (n * 8);
}

std::array<unsigned char, 3> decStrToHex(std::string decimalString)
{
    int Val;
    std::stringstream ss(decimalString);
    ss >> Val;
    std::array<unsigned char, 3> Arr;
    for (int i = 0; i < 3; i++)
        Arr[i] = GetByte(Val, i);
    return Arr;
}
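A quick check of that version (my own test harness; add <iomanip> and <iostream> for the printing):
int main() {
    auto arr = decStrToHex("208705");
    for (unsigned char b : arr)
        std::cout << "0x" << std::hex << std::setw(2) << std::setfill('0')
                  << (unsigned int)b << ' '; // prints 0x41 0x2f 0x03
    std::cout << std::endl;
}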

Bit manipulation and > 32 bit numbers?

I am basically trying to do the following:
c[i] = ((number_to_store << pos) & 0xFF00000000) >> 32;
But this stores 0 in c[i], which is not what I expected. The following works like a charm:
c[i] = ((number_to_store << pos) & 0xFF000000) >> 24;
I am 99% sure the error has something to do with the fact that all my variables are unsigned int, yet here I am asking for 40 bits of space.
Can someone please explain the difference between numbers that fit in 32 bits and numbers that need more than 32 bits, when it comes to bit manipulation?
edit: This also gives me 0:
cout << ((((unsigned long)number_to_store << (unsigned long)pos) & (unsigned long)0xFF00000000) >> 32) << endl;
edit 2: The following works:
cout << ((((unsigned long long)number_to_store << (unsigned long long)pos) & (unsigned long long)0xFF00000000) >> 32) << endl;
Lesson learned: never expect long to be larger than int
An unsigned int is typically 32 bits; bits shifted past the top of a 32-bit value are simply lost (and shifting by the full width, as in >> 32 on a 32-bit type, is undefined behavior). As you found out, in order to keep the bits that your first shift pushes above bit 31, you must declare number_to_store as unsigned long long, which is at least 64 bits.
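A minimal sketch of the difference (the sample value is mine; assumes a platform where unsigned int is 32 bits):
#include <iostream>

int main() {
    unsigned int narrow = 0xAB000000u;       // 32 bits on this assumed platform
    unsigned long long wide = 0xAB000000ull; // at least 64 bits

    // The left shift pushes the 0xAB byte off the top of the 32-bit value...
    std::cout << std::hex << (narrow << 8) << std::endl; // 0
    // ...but the 64-bit value keeps it, so it can be masked back out.
    std::cout << std::hex << (((wide << 8) & 0xFF00000000ull) >> 32) << std::endl; // ab
}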

Two bytes into one

First off, I apologize if this is a duplicate; but my Google-fu seems to be failing me today.
I'm in the middle of writing an image format module for Photoshop, and one of the save options for this format includes a 4-bit alpha channel. Of course, the data I have to convert is 8-bit/1-byte alpha, so I need to essentially take every two bytes of alpha and merge them into one.
My attempt (below), I believe, has a lot of room for improvement:
for (int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    short ashort = (alphaData[x] << 8) + alphaData[x+1];
    alphaFinal[w] = (unsigned char)ashort;
}
alphaData and alphaFinal are vectors that contains the 8-bit alpha data and the 4-bit alpha data, respectively. I realize that reducing two bytes into the value of one, is bound to result in loss of "resolution", but I can't help but think there's a better way of doing this.
For extra information, here's the loop that does the reverse (converts 4-bit alpha from the format to 8-bit for Photoshop)
alphaData serves the same purpose as above, and imgData is an unsigned char vector that holds the raw image data. (alpha data is tacked on after the actual rgb data for the image in this particular variant of the format)
for (int b = alphaOffset, x2 = 0; b < (alphaOffset + dataLength); b++, x2 += 2)
{
    unsigned char lo = (imgData[b] & 15);
    unsigned char hi = ((imgData[b] >> 4) & 15);
    alphaData[x2] = lo * 17;
    alphaData[x2+1] = hi * 17;
}
Are you sure that it's
alphaData[x2]=lo*17;
alphaData[x2+1]=hi*17;
and not
alphaData[x2]=lo*16;
alphaData[x2+1]=hi*16;
In any case, to generate the values that work with the decoding function you have posted, you just have to reverse the operations. So multiplying by 17 becomes dividing by 17 and the shifts and masks get reordered to look like this:
for (int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    unsigned char alpha1 = alphaData[x] / 17;
    unsigned char alpha2 = alphaData[x+1] / 17;
    assert(alpha1 < 16 && alpha2 < 16); // needs <cassert>
    alphaFinal[w] = (alpha2 << 4) | alpha1;
}
short ashort=(alphaData[x] << 8)+alphaData[x+1];
alphaFinal[w]=(unsigned char)ashort;
You're actually losing alphaData[x] in alphaFinal. You shift alphaData[x] 8 bits to the left, but the cast to unsigned char then keeps only the 8 low bits, so everything you shifted up is thrown away.
Also, your for loop is unsafe: if for some reason alphaData.size() is odd, alphaData[x+1] will read out of range.
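A guarded version of the packing loop, as a sketch (the x + 1 < size() condition is my addition; the /17 packing is from the answer above):
for (size_t x = 0, w = 0; x + 1 < alphaData.size(); x += 2, w++)
{
    unsigned char alpha1 = alphaData[x] / 17;     // 0..255 -> 0..15
    unsigned char alpha2 = alphaData[x + 1] / 17;
    alphaFinal[w] = (alpha2 << 4) | alpha1;       // pack two 4-bit values into one byte
}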
What you want to do, I think, is to truncate an 8-bit value into a 4-bit one, not to combine two 8-bit values. In other words, you want to drop the four least significant bits of each alpha value, not to combine two different alpha values.
So, basically, you want to right-shift by 4.
output = (input >> 4); /* truncate four bits */
In case you're not familiar with binary shifts, take this random 8-bit number:
10110110
>> 1
= 01011011
>> 1
= 00101101
>> 1
= 00010110
>> 1
= 00001011
so,
10110110
>> 4
= 00001011
and to reverse, left-shift instead...
input = (output << 4); /* expand four bits */
which, using the result from that same random 8-bit number as before, would be
00001011
<< 4
= 10110000
obviously, as you noted, 4 bits of precision is lost. But you'd be surprised how little it's noticed in a fully-composited work.
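A round trip of that truncation, as a quick sketch (the sample value is mine):
#include <iostream>

int main() {
    unsigned char input = 0xB6;           // 10110110
    unsigned char packed = input >> 4;    // 00001011: the four high bits survive
    unsigned char restored = packed << 4; // 10110000: the four low bits are gone
    std::cout << std::hex << (int)packed << ' ' << (int)restored << std::endl; // b b0
}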
This code
for (int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    short ashort = (alphaData[x] << 8) + alphaData[x+1];
    alphaFinal[w] = (unsigned char)ashort;
}
is broken. Given:
#include <iostream>
using std::cout;
using std::endl;
typedef unsigned char uchar;

int main() {
    uchar x0 = 1; // for alphaData[x]
    uchar x1 = 2; // for alphaData[x+1]
    short ashort = (x0 << 8) + x1; // the value 0x0102
    uchar afinal = (uchar)ashort;  // truncates to 0x02
    cout << std::hex // cast the uchars so they print as numbers, not characters
         << "x0 = 0x" << (unsigned int)x0 << " << 8 = 0x" << (x0 << 8) << endl
         << "x1 = 0x" << (unsigned int)x1 << endl
         << "ashort = 0x" << ashort << endl
         << "afinal = 0x" << (unsigned int)afinal << endl
    ;
}
If you are saying that your source stream contains sequences of 4-bit pairs stored in 8-bit storage values, which you need to re-store as a single 8-bit value, then what you want is:
for (int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    unsigned char aleft = alphaData[x] & 0x0f;      // 4 bits
    unsigned char aright = alphaData[x + 1] & 0x0f; // 4 bits
    alphaFinal[w] = (aleft << 4) | (aright);
}
"<<4" is equivalent to "*16", as ">>4" is equivalent to "/16".

Copy stuff to char*

I've got a buffer of type char*, and a string. I want to place inside the buffer the string length + the string.
I wrote the following code to accomplish this, but it doesn't work: the std::cout << strlen(buffer) prints "1" no matter what string I pass to the function.
int VariableLengthRecord::pack (const std::string strToPack)
{
    int strToPackSize = strToPack.length();
    if (sizeof(strToPackSize) + strToPackSize > maxBytes - nextByte)
        return RES_RECORD_TOO_LONG; // the string is too long
    int start = nextByte;
    // Copy the string length into the buffer
    copyIntToBuffer((buffer + start), strToPackSize);
    // Copy the string into the buffer
    strcpy((buffer + start + sizeof(strToPackSize)), strToPack.c_str());
    // Move the buffer pointer
    nextByte += sizeof(strToPackSize) + strToPackSize;
    // Update buffer size
    bufferSize = nextByte;
    std::cout << "Size of buffer = " << strlen(buffer) << std::endl;
    return RES_OK;
}
void copyIntToBuffer (char* buffer, int integer)
{
    buffer[0] = integer & 0xff;
    buffer[1] = (integer >> 8) & 0xff;
    buffer[2] = (integer >> 16) & 0xff;
    buffer[3] = (integer >> 24) & 0xff;
}
strlen doesn't work on binary data, and the length field you wrote is binary: for any short string its second byte is 0, which is why you always see 1. Keep track of the real length yourself, or use strlen(buffer + 4) to measure just the text part (the whole record then occupies 5 + strlen(buffer + 4) bytes, counting the 4-byte length field and the terminating null).
Or, take advantage of the fact that you stored the length inside the buffer, and read the length from there.
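For instance, the inverse of copyIntToBuffer could look like this (a sketch; copyBufferToInt is my own name):
// Reassemble the little-endian length that copyIntToBuffer stored.
int copyBufferToInt (const char* buffer)
{
    return (unsigned char)buffer[0]
         | ((unsigned char)buffer[1] << 8)
         | ((unsigned char)buffer[2] << 16)
         | ((unsigned char)buffer[3] << 24); // fine for non-negative lengths
}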
strlen is going to walk the string until a null byte (\0) is found. You are attempting to put together a Pascal-style string. If you want to use the built-in strlen, you will need to advance the pointer by sizeof(string_length_type) bytes first.
In your case, you can't use cout to directly print the buffer, and you can't use strlen either. The problem is that you are storing binary data.
The strlen function will stop at the first 0x00 byte found in the buffer.
The cout will print garbage for non-printable values.
You will need to convert the buffer to an ASCII version of hex values before printing them.
Something like:
for (i = 0; i < BUFFER_SIZE; i++)
{
    // Cast so each byte prints as a hex number rather than a raw character.
    cout << hex << (int)(unsigned char)buffer[i];
}
cout << endl;

c++. After reading the binary file into a buffer, how to display the buffer in hex?

Basically what I want to do is read a binary file and extract 4 consecutive values at an address, e.g. 0x8000. For example, the 4 numbers are 89 ab cd ef. I want to read these values into a buffer, and then convert the buffer to int. I have tried the following method:
ifstream *pF = new ifstream();
char* buffer = new char[4];
memset(buffer, 0, 4);
pF->read(buffer, 4);
When I tried
cout << buffer << endl;
nothing is printed, although I guarantee that there are values at this location (I can view the binary file in a hex viewer). Could anyone show me how to convert the buffer to an int and display it properly? Thank you.
Update
int number = 0; // start from 0, or buffer[0] gets mixed in twice
for (int i = 0; i < 4; ++i)
{
    number <<= 8;
    number |= (unsigned char)buffer[i]; // mask off sign extension from char
}
It also depends on little-endian vs. big-endian notation. If you want to compose your number the other way around, you can use number |= (unsigned char)buffer[3 - i]
And in order to display an int in hex you can use
#include <iomanip>
cout << hex << number;
cout << hex << (int)(unsigned char)buffer[0] << (int)(unsigned char)buffer[1]
     << (int)(unsigned char)buffer[2] << (int)(unsigned char)buffer[3] << endl;
See http://www.cplusplus.com/reference/iostream/manipulators/hex/
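Putting it together, a complete sketch (the file name and the assumption that the file reaches past offset 0x8004 are mine):
#include <fstream>
#include <iomanip>
#include <iostream>

int main() {
    std::ifstream f("data.bin", std::ios::binary); // hypothetical file name
    f.seekg(0x8000);                               // jump to the address in question
    char buffer[4];
    if (!f.read(buffer, 4)) return 1;              // bail out if the read fails

    unsigned int number = 0;
    for (int i = 0; i < 4; ++i)
        number = (number << 8) | (unsigned char)buffer[i]; // big-endian assembly

    std::cout << std::hex << std::setw(8) << std::setfill('0') << number << std::endl;
    // with the bytes 89 ab cd ef this prints 89abcdef
}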