Two bytes into one - C++

First off, I apologize if this is a duplicate; but my Google-fu seems to be failing me today.
I'm in the middle of writing an image format module for Photoshop, and one of the save options for this format includes a 4-bit alpha channel. Of course, the data I have to convert is 8-bit/1-byte alpha, so I essentially need to take every two bytes of alpha and merge them into one.
My attempt (below), I believe, has a lot of room for improvement:
for (int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    short ashort = (alphaData[x] << 8) + alphaData[x+1];
    alphaFinal[w] = (unsigned char)ashort;
}
alphaData and alphaFinal are vectors that contain the 8-bit alpha data and the 4-bit alpha data, respectively. I realize that reducing two bytes into the value of one is bound to result in a loss of "resolution", but I can't help thinking there's a better way of doing this.
For extra information, here's the loop that does the reverse (converts 4-bit alpha from the format to 8-bit for Photoshop). alphaData serves the same purpose as above, and imgData is an unsigned char vector that holds the raw image data. (Alpha data is tacked on after the actual RGB data for the image in this particular variant of the format.)
for (int b = alphaOffset, x2 = 0; b < (alphaOffset + dataLength); b++, x2 += 2)
{
    unsigned char lo = (imgData[b] & 15);
    unsigned char hi = ((imgData[b] >> 4) & 15);
    alphaData[x2]   = lo * 17;
    alphaData[x2+1] = hi * 17;
}
}

Are you sure that it's
alphaData[x2]=lo*17;
alphaData[x2+1]=hi*17;
and not
alphaData[x2]=lo*16;
alphaData[x2+1]=hi*16;
In any case, to generate the values that work with the decoding function you have posted, you just have to reverse the operations. So multiplying by 17 becomes dividing by 17 and the shifts and masks get reordered to look like this:
for (int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    unsigned char alpha1 = alphaData[x] / 17;
    unsigned char alpha2 = alphaData[x+1] / 17;
    assert(alpha1 < 16 && alpha2 < 16); // assert from <cassert>
    alphaFinal[w] = (alpha2 << 4) | alpha1;
}
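As a sanity check, here's a minimal sketch (my addition, not from the original answer) showing why 17 is the right factor: 15 * 17 = 255, so dividing by 17 and multiplying by 17 round-trips the endpoints exactly, whereas 16 would top out at 240:

#include <cassert>

int main() {
    for (int a = 0; a < 256; ++a) {
        int packed   = a / 17;      // 8-bit -> 4-bit; always lands in [0, 15]
        int expanded = packed * 17; // 4-bit -> 8-bit; back in [0, 255]
        assert(packed < 16);
        assert(expanded <= 255);    // 15 * 17 == 255, while 15 * 16 would only reach 240
    }
}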

short ashort=(alphaData[x] << 8)+alphaData[x+1];
alphaFinal[w]=(unsigned char)ashort;
You're actually losing alphaData[x] in alphaFinal: you shift alphaData[x] eight bits to the left, but the cast then keeps only the eight low bits.
Also, your for loop is unsafe: if for some reason alphaData.size() is odd, you'll read out of range.

What you want to do, I think, is to truncate an 8-bit value into a 4-bit one, not to combine two 8-bit values. In other words, you want to drop the four least significant bits of each alpha value, not combine two different alpha values.
So, basically, you want to right-shift by 4.
output = (input >> 4); /* truncate four bits */
In case you're not familiar with binary shifts, take this random 8-bit number:
10110110
>> 1
= 01011011
>> 1
= 00101101
>> 1
= 00010110
>> 1
= 00001011
so,
10110110
>> 4
= 00001011
and to reverse, left-shift instead...
input = (output << 4); /* expand four bits */
which, using the result from that same random 8-bit number as before, would be
00001011
<< 4
= 10110000
Obviously, as you noted, 4 bits of precision are lost. But you'd be surprised how little it's noticed in a fully-composited work.
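Putting that together with the packing the question asks for, a sketch (assuming alphaData and alphaFinal are the vectors from the question, with the nibble order matching the decoding loop posted there):

// guard with x + 1 < size() so an odd-length input can't read out of range
for (size_t x = 0, w = 0; x + 1 < alphaData.size(); x += 2, ++w)
{
    unsigned char lo = alphaData[x]     >> 4; // first alpha -> low nibble
    unsigned char hi = alphaData[x + 1] >> 4; // second alpha -> high nibble
    alphaFinal[w] = (unsigned char)((hi << 4) | lo);
}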

This code
for (int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    short ashort = (alphaData[x] << 8) + alphaData[x+1];
    alphaFinal[w] = (unsigned char)ashort;
}
is broken. Given:
#include <iostream>
using std::cout;
using std::endl;

typedef unsigned char uchar;

int main() {
    uchar x0 = 1; // for alphaData[x]
    uchar x1 = 2; // for alphaData[x+1]
    short ashort = (x0 << 8) + x1; // the value 0x0102
    uchar afinal = (uchar)ashort;  // truncates to 0x02
    cout << std::hex
         << "x0 = 0x" << (unsigned)x0 << " << 8 = 0x" << (x0 << 8) << endl
         << "x1 = 0x" << (unsigned)x1 << endl      // cast so uchar prints as a number
         << "ashort = 0x" << ashort << endl
         << "afinal = 0x" << (unsigned)afinal << endl;
}
If you are saying that your source stream contains pairs of 4-bit values, each stored in its own 8-bit byte, which you need to re-pack into a single 8-bit value, then what you want is:
for (int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    unsigned char aleft  = alphaData[x]     & 0x0f; // 4 bits
    unsigned char aright = alphaData[x + 1] & 0x0f; // 4 bits
    alphaFinal[w] = (aleft << 4) | aright;
}
"<<4" is equivalent to "*16", as ">>4" is equivalent to "/16".

Related

C++ convert char to int

Sorry for my bad English. I need to build an app which converts hex to RGB. I have a file U1.txt with this content:
2 3
008000
FF0000
FFFFFF
FFFF00
FF0000
FFFF00
And my Code::Blocks app:
#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    int a;
    int b;
    string color;
    ifstream data("U1.txt");
    ofstream result("U1result.txt");
    data >> a;
    data >> b;
    for (int i = 0; i < a * b; i++) {
        data >> color;
        cout << color[0] * 16 + color[1] << endl;
    }
    data.close();
    result.close();
    return 0;
}
This gives me 816, but it should be 0. I think color[0] is not an integer but a char, so it multiplies by the ASCII number. I've tried many ways with atoi and c_str(), and none of them work. P.S. Don't suggest stoi(), because I need to do this homework with older C++. Thanks in advance and have a good day ;)
You can directly store the hexadecimal values in an int with std::hex.
int b;
ifstream data("U1.txt");
data >> std::hex >> b;
Since those encodings use 24 bits, you have to start out with an integer type that holds at least 24 bits. And for this kind of packing and unpacking, it really ought to be unsigned, so you don't get tangled up in sign bits. That means using std::uint_least32_t, which is the smallest unsigned type that can hold at least 32 bits. (Yes, 24 would fit better, but there is no least24 type; 32 is the best you can do).
If your compiler doesn't provide those fixed-width types (std::uint_least32_t), you can use unsigned long, which is required to be at least 32 bits wide. It could be larger; the reason for preferring std::uint_least32_t is that your compiler might have, for example, 32-bit ints, in which case unsigned int would already be wide enough. But you can't count on that, so either use the fixed-width type or use unsigned long to ensure that you have enough bits.
Since the character inputs are encoded in hexadecimal, you need to tell the input system to interpret them as hex values. So:
std::uint_least32_t value;
data >> std::hex >> value;
Now you've got the value in the low 24 bits of value. You need to pick out the individual R, G, and B parts of that value. That's straightforward. To get the low 8 bits, just mask out the higher ones:
std::cout << (value & 0xFF) << '\n';
To get the next 8 bits, shift and mask:
std::cout << ((value >> 8) & 0xFF) << '\n';
And, naturally, to get the upper 8 bits, shift and mask:
std::cout << ((value >> 16) & 0xFF) << '\n';
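Putting the pieces together, a minimal sketch (assuming the U1.txt layout from the question: the dimensions a and b, then a*b hex colors):

#include <cstdint>
#include <fstream>
#include <iostream>

int main() {
    std::ifstream data("U1.txt");
    int a, b;
    data >> a >> b;
    for (int i = 0; i < a * b; ++i) {
        std::uint_least32_t value;
        data >> std::hex >> value;                 // read e.g. "008000" as 0x008000
        std::cout << ((value >> 16) & 0xFF) << ' ' // R
                  << ((value >> 8) & 0xFF) << ' '  // G
                  << (value & 0xFF) << '\n';       // B
    }
    return 0;
}

For "008000" this prints 0 128 0.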
A rather inelegant but working answer is to subtract 48 from each of your chars, as that's where the digits start in ASCII. This is also the reason why you get 816: for "008000", color[0] and color[1] are both '0' (ASCII 48), so
48*16 + 48 = 816
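Note that subtracting 48 only covers the digits '0'-'9'; the letters 'A'-'F' in a hex color need their own offset. A sketch of a full digit-to-value helper (the name hexDigit is mine):

// map one hex character to its value 0-15; returns -1 for anything else
int hexDigit(char c) {
    if (c >= '0' && c <= '9') return c - '0';       // '0' is ASCII 48
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return -1;
}

With that, hexDigit(color[0]) * 16 + hexDigit(color[1]) gives the first byte of the color.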

Convert hexadecimal string to binary and separate into bits in C++

I need to convert a hexadecimal string to binary and then pass the bits into different variables.
For example, my input is:
std::string hex = "E136";
How do I convert the string into binary output 1110 0001 0011 0110?
After that I need to pass bit 0 to variable A, bits 1-9 to variable B, and bits 10-15 to variable C.
Thanks in advance
How do I convert the string [...]?
Start with a result value of zero; then, for each character (starting with the first, i.e. the most significant one), determine its value (in the range [0:15]), multiply the result so far by 16, and add the current value. For your given example, this results in
((((0 * 16 + v('E')) * 16 + v('1')) * 16 + v('3')) * 16 + v('6'))
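A sketch of that scheme in code, spelling out the digit-value function v() from the text (the names are mine, and the input is assumed to be valid hex):

#include <string>

unsigned v(char c) // value of one hex digit, in [0:15]
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return 10 + (c - 'a');
    return 10 + (c - 'A'); // assumes 'A'..'F'
}

unsigned long parseHex(const std::string& hex)
{
    unsigned long value = 0;
    for (char c : hex)
        value = value * 16 + v(c); // multiply the result so far by 16, add the digit
    return value;
}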
There are standard library functions that do this for you, such as std::strtoul:
char* end;
unsigned long value = strtoul(hex.c_str(), &end, 16);
// ^^ base!
The end pointer is useful for checking whether you have read the entire string:
if (*end == 0)
{
    // end of string reached
}
else
{
    // some part of the string was left; you might consider this
    // an error (could occur if e.g. "f10s12" was passed; then
    // end would point to the 's')
}
If you don't care for end checking, you can just pass nullptr instead.
Don't convert back to a string afterwards; you can get the required values by masking (&) and bit-shifting (>>), e.g. getting bits [1-9]:
uint32_t b = value >> 1 & 0x1ffU;
Working on integers is much more efficient than working on strings. Only convert back to a string when you want to print the final result (if you're using a std::ostream, operator<< already does the work for you...).
While playing with this sample, I realized that I gave a wrong recommendation:
std::setbase(2) does not work by standard. Ouch! (SO: Why doesn't std::setbase(2) switch to binary output?)
For conversion of numbers to a string of binary digits, something else must be used. I made this small sample. Though the separation of bits is covered as well, my main focus was on output with different bases (and IMHO worth another answer):
#include <algorithm>
#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <string>

std::string bits(unsigned value, unsigned w)
{
    std::string text;
    for (unsigned i = 0; i < w || value; ++i) {
        text += '0' + (value & 1); // bit -> character '0' or '1'
        value >>= 1;               // shift right one bit
    }
    // text is right to left -> must be reversed
    std::reverse(text.begin(), text.end());
    // done
    return text;
}

void print(const char *name, unsigned value)
{
    std::cout
        << name << ": "
        // decimal output
        << std::setbase(10) << std::setw(5) << value
        << " = "
        // binary output
#if 0 // OLD, WRONG:
        // std::setbase(2) is not supported by standard - Ouch!
        << "0b" << std::setw(16) << std::setfill('0') << std::setbase(2) << value
#else // NEW:
        << "0b" << bits(value, 16)
#endif // 0
        << " = "
        // hexadecimal output
        << "0x" << std::setw(4) << std::setfill('0') << std::setbase(16) << value
        << '\n';
}

int main()
{
    std::string hex = "E136";
    unsigned value = std::strtoul(hex.c_str(), nullptr, 16);
    print("hex", value);
    // bit 0 -> a
    unsigned a = value & 0x0001;
    // bits 1 ... 9 -> b
    unsigned b = (value & 0x03FE) >> 1;
    // bits 10 ... 15 -> c
    unsigned c = (value & 0xFC00) >> 10;
    // report
    print(" a ", a);
    print(" b ", b);
    print(" c ", c);
    // done
    return 0;
}
Output:
hex: 57654 = 0b1110000100110110 = 0xe136
a : 00000 = 0b0000000000000000 = 0x0000
b : 00155 = 0b0000000010011011 = 0x009b
c : 00056 = 0b0000000000111000 = 0x0038
Concerning the bit operations:
The binary bitwise AND operator (&) is used to set all unintended bits to 0; the second operand can be understood as a mask. It would be more obvious if I had used binary numbers, but binary literals are not supported in C++ before C++14. Hex codes do nearly as well, as a hex digit always represents the same pattern of 4 bits (since 16 = 2^4). After some practice, you usually learn to "see" the bits in the hex code.
About the right shift (>>), I was not quite sure. OP didn't require that the bits be moved anywhere, only that they be separated into distinct variables. So, these right-shifts might be unnecessary.
So, this question, which seemed trivial, led to a surprising enlightenment (for me).

Byte Swap with an array?

First of all, forgive my extremely amateur coding knowledge.
I am an intern at a company and have been assigned to write a program in C++ that swaps bytes in order to get the correct checksum value.
I am reading a list that resembles something like:
S315FFF200207F7FFFFF42A000000000001B000000647C
S315FFF2003041A00000FF7FFFFF0000001B00000064ED
S315FFF2004042480000FF7FFFFF0000001E000000464F
I have made the program convert this string to hex and then to int so that it can be read correctly. I am not reading the first 12 chars or the last 2 chars of each line.
My question is how do I make the converted int do a byte swap (little endian to big endian) so that it is readable to the computer?
Again I'm sorry if this is a terrible explanation.
EDIT: I need to essentially take each 16-bit word (4 hex characters) and swap its bytes, i.e.: 64C7 flipped to C764, etc. How would I do this and put it into a new array? Each line is a string right now...
EDIT2: This is part of my code as of now...
int j = 12;
for (i = 0; i < hexLength2 - 5; i++) {
    string convert1 = ODL.substr(j, 4);
    short input_int = stoi(convert1);
    short lowBit  = 0x00FF & input_int;
    short hiBit   = 0xFF00 & input_int;
    short byteSwap = (lowBit << 8) | (hiBit >> 8);
I think I may need to convert my stoi() result to a short in some way...
EDIT3: Using the answer code below I get the following...
HEX: 8D --> stored to memory (myMem = unsigned short) as 141 (decimal) --> when byte-swapped: -29440
What's wrong here??
for (i = 0; i < hexLength2 - 5; i++) {
    string convert1 = ODL.substr(j, 2);
    stringstream str1;
    str1 << convert1;
    str1 >> hex >> myMem[k];
    short input_int = myMem[k]; // byte swap
    short lowBit  = 0x00FF & input_int;
    short hiBit   = 0xFF00 & input_int;
    short byteSwap = (lowBit << 8) | (hiBit >> 8);
    cout << input_int << endl << "BYTE SWAP: " << byteSwap << " Byte Swap End" << endl;
    k++;
    j += 2;
}
You can always do it bitwise too (assuming a 16-bit word). For example, if you're byte-swapping a 16-bit value:
short input_int = 123; // each of the ints that you have
short input_lower_half = 0x00FF & input_int;
short input_upper_half = 0xFF00 & input_int;
// size of short is 16 bits, so shift the bits halfway in each direction
short byte_swapped_int = (input_lower_half << 8) | (input_upper_half >> 8);
EDIT: My exact attempt at using your code
unsigned short myMem[20];
int k = 0;
string ODL = "S315FFF2000000008DC7000036B400003030303030319A";
int j = 12;
for (int i = 0; i < (ODL.length() - 12) / 4; i++) { // not exactly sure what your loop condition was
    string convert1 = ODL.substr(j, 4);
    cout << "substring is: " << convert1 << endl;
    stringstream str1;
    str1 << convert1;
    str1 >> hex >> myMem[k];
    short input_int = myMem[k]; // byte swap
    unsigned short lowBit = 0x00FF & input_int; // changed this to unsigned
    unsigned short hiBit  = 0xFF00 & input_int; // changed this to unsigned
    short byteSwap = (lowBit << 8) | (hiBit >> 8);
    cout << hex << input_int << " BYTE SWAPed as: " << byteSwap << ", Byte Swap End" << endl;
    k++;
    j += 4;
}
It only matters to change lowBit and hiBit to be unsigned, since those are the temporary values we're using.
If you're asking what I think you're asking:
First, you need to make sure you know what size your integers are. 32 bits is nice and standard, but check and make sure.
Second, cast your integer array to a char array. Now you can access and manipulate the array one byte at a time.
Third, just reverse the order of every four bytes (after your first 12-char offset): swap the first and fourth, and the second and third.
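A sketch of that approach (assuming 32-bit values, per the first point above; the function name is mine):

#include <algorithm>
#include <cstddef>
#include <cstdint>

void swapEndian32(std::uint32_t *words, std::size_t count)
{
    char *bytes = reinterpret_cast<char*>(words); // view the ints one byte at a time
    for (std::size_t i = 0; i < count; ++i) {
        std::swap(bytes[4*i + 0], bytes[4*i + 3]); // first <-> fourth
        std::swap(bytes[4*i + 1], bytes[4*i + 2]); // second <-> third
    }
}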

Hex bitwise operation in C++

Using file streaming in C++, I have read a string from a binary file into a buffer (4 bytes). I know that the buffer contains "89abcdef". The buffer is such that:
buffer[0] = 89
buffer[1] = ab
buffer[2] = cd
buffer[3] = ef
Now, I want to recover these numbers into one single hex number 0x89abcdef. However, this is not as simple as I thought. I tried the following code:
int num = 0;
num |= buffer[0];
num <<= 24;
cout << num << endl;
At this point, num is displayed as
ea000000
When I tried the same algorithm for the second element of the buffer:
num = 0;
num |= buffer[1];
num <<= 16;
cout << num << endl;
output:
ffcd0000
The ff in front of the cd makes it highly inconvenient to add them all together (I was planning to make it something that looks like 00cd0000 and add it to the first num).
Could anyone help me to recover the hex number 0x89abcdef? Thanks.
Don't modify the actual number until the end:
num = buffer[0] << 24 | buffer[1] << 16 | buffer[2] << 8 | buffer[3];
buffer[0] << 24 gives you your first result, which is combined with the second result independently of the first, and so on.
Also, as pointed out, operations like this should be done on unsigned numbers, so that sign extension doesn't interfere with the result.
For all of your bitwise operations, you're going to want to use unsigned int instead of int. This way you can avoid the kinds of sign-extension problems you're seeing.
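A sketch of the sign-safe combination, casting each element to unsigned char before widening so that 0x89 stays 0x89 instead of sign-extending to 0xffffff89 (buffer is assumed to be the 4-byte char buffer from the question):

unsigned int num = (unsigned int)(unsigned char)buffer[0] << 24
                 | (unsigned int)(unsigned char)buffer[1] << 16
                 | (unsigned int)(unsigned char)buffer[2] << 8
                 | (unsigned int)(unsigned char)buffer[3];
// num is now 0x89abcdef for the buffer described in the question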

C++: after reading a binary file into a buffer, how to display the buffer in hex?

Basically, what I want to do is read a binary file and extract 4 consecutive values at an address, e.g. 0x8000. For example, the 4 numbers are 89 ab cd ef. I want to read these values, store them into a buffer, and then convert the buffer to int type. I have tried the following method:
ifstream *pF = new ifstream();
buffer = new char[4];
memset(buffer, 0, 4);
pF->read(buffer, 4);
When I tried
cout << buffer << endl;
Nothing happens; I guarantee that there are values at this location (I can view the binary file in a hex viewer). Could anyone show me a method to convert the buffer to int type and properly display it? Thank you.
Update
int number = 0; // start from zero, or buffer[0] gets counted twice
for (int i = 0; i < 4; ++i)
{
    number <<= 8;
    number |= (unsigned char)buffer[i]; // cast avoids sign extension
}
It also depends on little-endian and big-endian notation. If your data is in the other byte order, you can use number |= (unsigned char)buffer[3 - i] instead.
And in order to display an int in hex, you can use
#include <iomanip>
cout << hex << number;
cout << hex << (int)(unsigned char)buffer[0] << (int)(unsigned char)buffer[1] << (int)(unsigned char)buffer[2] << (int)(unsigned char)buffer[3] << endl;
See http://www.cplusplus.com/reference/iostream/manipulators/hex/
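For a zero-padded dump, here's a small sketch (the function name is mine); without the casts the bytes print as raw characters, and without setw/setfill a byte like 0x0a prints as "a" rather than "0a":

#include <iomanip>
#include <iostream>

void dumpHex(const char *buffer, int n)
{
    for (int i = 0; i < n; ++i)
        std::cout << std::hex << std::setw(2) << std::setfill('0')
                  << (int)(unsigned char)buffer[i] << ' ';
    std::cout << '\n';
}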