Create and fill a 10-bit set from two 8-bit characters - C++

We have two 8-bit characters a and b that we want to encode in a 10-bit set. We want to take the 8 bits of character a and put them in the first 8 bits of the 10-bit set, then take only the first 2 bits of character b and fill the rest.
QUESTION: Do I need to shift the 8 bits in order to concatenate the other 2?
// Online C++ compiler to run C++ program online
#include <cstdint>
#include <iostream>
#include <bitset>

struct uint10_t {
    uint16_t value : 10;
    uint16_t _ : 6;
};

uint10_t hash(char a, char b){
    uint10_t hashed;
    // Concatenate 2 bits to the other 8
    hashed.value = (a << 8) + (b & 11000000);
    return hashed;
}

int main() {
    uint10_t hashed = hash('a', 'b');
    std::bitset<10> newVal = hashed.value;
    std::cout << newVal << " " << hashed.value << std::endl;
    return 0;
}
Thanks @Scheff's Cat. My cat says hi.

Do I need to shift the 8 bits in order to concatenate the other 2?
Yes.
The bits of a have to be shifted left to make room for the two bits of b. As room is needed for two bits, a left shift by 2 is appropriate. (Before my recent update, there was a wrong left shift by 8 which I didn't notice. Shame on me.)
The bits of b have to be shifted right. The reason is that the OP wants to combine the two most significant bits of b with those of a. As these two bits have to appear as the least significant bits of the result, they have to be shifted to that position.
It should be:
hashed.value = (a << 2) + ((b & 0xc0) >> 6);
or
hashed.value = (a << 2) + ((b & 0b11000000) >> 6);
As b is of type char (which is signed or unsigned depending on the compiler), it is even better to swap the order of & and >>:
hashed.value = (a << 2) + ((b >> 6) & 0x03);
or
hashed.value = (a << 2) + ((b >> 6) & 0b11);
This ensures that any possible sign-bit extension is eliminated, which may occur if char is a signed type on the specific compiler and b has a negative value (i.e. the most significant bit is set and would be replicated in the conversion to int).
MCVE on coliru:
#include <cstdint>
#include <iostream>
#include <bitset>

struct uint10_t {
    uint16_t value : 10;
    uint16_t _ : 6;
};

uint10_t hash(char a, char b){
    uint10_t hashed;
    // Concatenate 2 bits to the other 8
    hashed.value = (a << 2) + ((b >> 6) & 0b11);
    return hashed;
}

int main() {
    uint10_t hashed = hash('a', 'b');
    std::cout << "a: " << std::bitset<8>('a') << '\n';
    std::cout << "b: " << std::bitset<8>('b') << '\n';
    std::bitset<10> newVal = hashed.value;
    std::cout << " " << newVal << " " << hashed.value << std::endl;
}
Output:
a: 01100001
b: 01100010
0110000101 389
One may wonder why the two upper bits of a are not lost although a is of type char, which is usually an 8-bit type. The reason is that integral arithmetic operations work at least on int types. Hence, a << 2 involves the implicit conversion of a to int, which has at least 16 bits.
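A minimal sketch (mine, not part of the original answer) that makes this promotion visible, assuming a C++11 compiler for decltype and static_assert:

#include <type_traits>

int main() {
    char a = 'a';
    (void)a;
    // The left operand of << is promoted to int, so no bits of 'a' are lost
    // before the result is assigned to the 10-bit field.
    static_assert(std::is_same<decltype(a << 2), int>::value,
                  "a << 2 is computed in (at least) int");
    return 0;
}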

Related

Convert hexadecimal string to binary and separate into bits in C++

I need to convert a hexadecimal string to binary and then pass the bits into different variables.
For example, my input is:
std::string hex = "E136";
How do I convert the string into binary output 1110 0001 0011 0110?
After that I need to pass bit 0 to variable A, bits 1-9 to variable B and bits 10-15 to variable C.
Thanks in advance
How do I convert the string [...]?
Start with a result value of zero, then for each character (starting at the first, i.e. the most significant one) determine its value (in the range [0:15]), multiply the result obtained so far by 16 and add the current value to it. For your given example, this results in
(((0 * 16 + v('E')) * 16 + v('1')) * 16 + v('3')) * 16 + v('6')
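A small sketch of that manual approach (the names hexDigit and hexToValue are mine, not from the original answer):

#include <cctype>
#include <stdexcept>
#include <string>

unsigned hexDigit(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    c = std::toupper(static_cast<unsigned char>(c));
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    throw std::invalid_argument("not a hex digit");
}

unsigned hexToValue(const std::string &hex)
{
    unsigned value = 0;
    for (char c : hex)
        value = value * 16 + hexDigit(c); // multiply by the base, add the digit
    return value;
}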
There are standard library functions doing the stuff for you, such as std::strtoul:
char* end;
unsigned long value = strtoul(hex.c_str(), &end, 16);
// ^^ base!
The end pointer is useful to check whether you have read the entire string:
if (*end == 0)
{
    // end of string reached
}
else
{
    // some part of the string was left; you might consider this
    // an error (could occur if e.g. "f10s12" was passed, then
    // end would point to the 's')
}
If you don't care for end checking, you can just pass nullptr instead.
Don't convert back to a string afterwards; you can get the required values by masking (&) and bit-shifting (>>), e.g. getting bits [1-9]:
uint32_t b = value >> 1 & 0x1ffU;
Working on integral values is much more efficient than working on strings. Only when you want to print the final result should you convert back to a string (if you use a std::ostream, operator<< already does the work for you...).
While playing with this sample, I realized that I gave a wrong recommendation:
std::setbase(2) does not work by standard. Ouch! (SO: Why doesn't std::setbase(2) switch to binary output?)
For conversion of numbers to strings of binary digits, something else must be used. I made this small sample. Though the separation of bits is covered as well, my main focus was on output with different bases (and IMHO worth another answer):
#include <algorithm>
#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <string>

std::string bits(unsigned value, unsigned w)
{
    std::string text;
    for (unsigned i = 0; i < w || value; ++i) {
        text += '0' + (value & 1); // bit -> character '0' or '1'
        value >>= 1; // shift right one bit
    }
    // text is right to left -> must be reversed
    std::reverse(text.begin(), text.end());
    // done
    return text;
}

void print(const char *name, unsigned value)
{
    std::cout
        << name << ": "
        // decimal output
        << std::setbase(10) << std::setw(5) << value
        << " = "
        // binary output
#if 0 // OLD, WRONG:
        // std::setbase(2) is not supported by standard - Ouch!
        << "0b" << std::setw(16) << std::setfill('0') << std::setbase(2) << value
#else // NEW:
        << "0b" << bits(value, 16)
#endif // 0
        << " = "
        // hexadecimal output
        << "0x" << std::setw(4) << std::setfill('0') << std::setbase(16) << value
        << '\n';
}

int main()
{
    std::string hex = "E136";
    unsigned value = strtoul(hex.c_str(), nullptr, 16);
    print("hex", value);
    // bit 0 -> a
    unsigned a = value & 0x0001;
    // bit 1 ... 9 -> b
    unsigned b = (value & 0x03FE) >> 1;
    // bit 10 ... 15 -> c
    unsigned c = (value & 0xFC00) >> 10;
    // report
    print(" a ", a);
    print(" b ", b);
    print(" c ", c);
    // done
    return 0;
}
Output:
hex: 57654 = 0b1110000100110110 = 0xe136
a : 00000 = 0b0000000000000000 = 0x0000
b : 00155 = 0b0000000010011011 = 0x009b
c : 00056 = 0b0000000000111000 = 0x0038
Live Demo on coliru
Concerning the bit operations:
The binary bitwise AND operator (&) is used to set all unintended bits to 0. The second value can be understood as a mask. It would be more obvious if I had used binary literals, but those require at least C++14. Hex codes do nearly as well, as a hex digit always represents the same pattern of 4 bits (since 16 = 2^4). After some practice, you usually learn to "see" the bits in the hex code.
About the right shifts (>>), I was not quite sure. OP didn't require that the bits be moved anywhere – only that they be separated into distinct variables. So, these right shifts might be unnecessary.
So, this question, which seemed to be trivial, led to a surprising enlightenment (for me).

When does the right shift operation >> shift the sign bit, and when does it not?

My question is why a >> 1 shifts the sign bit, but (a & 0xaaaaaaaa) >> 1 does not.
Code snippet
int a = 0xaaaaaaaa;
std::cout << sizeof(a) << std::endl;
getBits(a);
std::cout << sizeof(a>>1) << std::endl;
getBits(a >> 1);
std::cout << sizeof(a & 0xaaaaaaaa) << std::endl;
getBits(a & 0xaaaaaaaa);
std::cout << sizeof((a & 0xaaaaaaaa)>>1) << std::endl;
getBits((a & 0xaaaaaaaa) >> 1);
result
4
10101010101010101010101010101010
4
11010101010101010101010101010101
4
10101010101010101010101010101010
4
01010101010101010101010101010101
a >> 1 is boring. It's simply implementation defined for a signed type for negative a.
(a & 0xaaaaaaaa) >> 1 is more interesting. For the likely case of a 32-bit int, 0xaaaaaaaa does not fit into int and is therefore an unsigned int literal (an obscure rule of hexadecimal literals). So, due to the C++ type promotion rules, a is converted to an unsigned type too, and the type of the expression a & 0xaaaaaaaa is therefore unsigned.
Makes a nice question for the pub quiz.
Reference: http://en.cppreference.com/w/cpp/language/integer_literal, especially the "The type of the literal" table.
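A small sketch (mine, not from the answer) that makes the type change visible, assuming a 32-bit int:

#include <type_traits>

int main() {
    int a = 0xaaaaaaaa; // the literal does not fit into int -> it is an unsigned int
    static_assert(std::is_same<decltype(a >> 1), int>::value,
                  "plain a >> 1 stays signed, so the sign bit may be replicated");
    static_assert(std::is_same<decltype(a & 0xaaaaaaaa), unsigned int>::value,
                  "a is converted to unsigned, so the shift brings in zeros");
    return 0;
}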

How to convert hex to an IEEE 754 32-bit float in C++

I am trying to take hex values stored as int and convert them to floating-point numbers using the IEEE 754 32-bit rules. I am specifically struggling with getting the right values for the mantissa and exponent. The hex values are read from a file. I want to have four significant figures. Below is my code.
float floatizeMe(unsigned int myNumba ) {
    //// myNumba comes in as 32 bits or 8 byte
    unsigned int sign = (myNumba & 0x007fffff) >> 31;
    unsigned int exponent = ((myNumba & 0x7f800000) >> 23) - 0x7F;
    unsigned int mantissa = (myNumba & 0x007fffff);
    float value = 0;
    float mantissa2;
    cout << endl << "mantissa is : " << dec << mantissa << endl;
    unsigned int m1 = mantissa & 0x00400000 >> 23;
    unsigned int m2 = mantissa & 0x00200000 >> 22;
    unsigned int m3 = mantissa & 0x00080000 >> 21;
    unsigned int m4 = mantissa & 0x00040000 >> 20;
    mantissa2 = m1 * (2 ^ -1) + m2*(2 ^ -2) + m3*(2 ^ -3) + m4*(2 ^ -4);
    cout << "\nsign is: " << dec << sign << endl;
    cout << "exponent is : " << dec << exponent << endl;
    cout << "mantissa 2 is : " << dec << mantissa2 << endl;
    // if above this number it is negative
    if ( sign == 1)
        sign = -1;
    // if above this number it is positive
    else {
        sign = 1;
    }
    value = (-1^sign) * (1+mantissa2) * (2 ^ exponent);
    cout << dec << "Float value is: " << value << "\n\n\n";
    return value;
}
int main()
{
    ifstream myfile("input.txt");
    if (myfile.is_open())
    {
        unsigned int a, b, b1; // Hex
        float c, d, e; // Dec
        int choice;
        unsigned int ex1 = 0;
        unsigned int ex2 = 1;
        myfile >> std::hex;
        myfile >> a >> b;
        floatizeMe(a);
        myfile.close();
    }
    return 0;
}
I suspect you mean for the ^ in
mantissa2 = m1 * (2 ^ -1) + m2*(2 ^ -2) + m3*(2 ^ -3) + m4*(2 ^ -4);
to mean "to the power of". There is no such operator in C or C++. The ^ operator is the bit-wise XOR operator.
Assuming your CPU follows the IEEE standard, you can also use a union. Something like this:
union
{
    int num;
    float fnum;
} my_union;
Then store the integer values into my_union.num and read them as float by getting my_union.fnum.
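A minimal usage sketch of that union (the variable name and the sample bit pattern are mine, not from the answer):

#include <iostream>

int main() {
    union {
        int   num;
        float fnum;
    } my_union;

    my_union.num = 0x40490FDB;          // bit pattern of 3.1415927f
    std::cout << my_union.fnum << '\n'; // read the same bits back as float
    return 0;
}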
We needed to convert IEEE 754 single and double precision numbers (using 32-bit and 64-bit encodings). We were using a C compiler (Vector CANoe/Canalyzer CAPL Script) with a restricted set of functions and ended up developing the function below (it can easily be tested using any online C compiler):
#include <stdio.h>
#include <math.h>

double ConvertNumberToFloat(unsigned long number, int isDoublePrecision)
{
    int mantissaShift = isDoublePrecision ? 52 : 23;
    unsigned long exponentMask = isDoublePrecision ? 0x7FF0000000000000 : 0x7f800000;
    int bias = isDoublePrecision ? 1023 : 127;
    int signShift = isDoublePrecision ? 63 : 31;

    int sign = (number >> signShift) & 0x01;
    int exponent = ((number & exponentMask) >> mantissaShift) - bias;

    int power = -1;
    double total = 0.0;
    for ( int i = 0; i < mantissaShift; i++ )
    {
        int calc = (number >> (mantissaShift-i-1)) & 0x01;
        total += calc * pow(2.0, power);
        power--;
    }
    double value = (sign ? -1 : 1) * pow(2.0, exponent) * (total + 1.0);
    return value;
}

int main()
{
    // Single Precision
    unsigned int singleValue = 0x40490FDB; // 3.141592...
    float singlePrecision = (float)ConvertNumberToFloat(singleValue, 0);
    printf("IEEE754 Single (from 32bit 0x%08X): %.7f\n", singleValue, singlePrecision);

    // Double Precision
    unsigned long doubleValue = 0x400921FB54442D18; // 3.141592653589793...
    double doublePrecision = ConvertNumberToFloat(doubleValue, 1);
    printf("IEEE754 Double (from 64bit 0x%016lX): %.16f\n", doubleValue, doublePrecision);
}
Just do the following (but of course make sure you have the right endianness when reading bytes into the integer in the first place):
float int_bits_to_float(int32_t ieee754_bits) {
    float flt;
    *((int*) &flt) = ieee754_bits;
    return flt;
}
Works for me... this of course assumes that float has 32 bits, and is in IEEE754 format, on your architecture (which is almost always the case).
There are a number of very basic errors in your code.
The most visible is repeatedly using ^ for "power of". ^ is the XOR-operator, and for "power" you must use the function pow(base, exponent) in math.h.
Next, "I want to have four significant figures" (presumably for the mantissa), but you only extract four bits. Four bits can encode only 0..15, which is about a digit-and-a-half. To get four significant digits, you'd need at least log(10,000)/log(2) ≈ 13.288, or at least 14 bits (but preferably 17, so you get one full extra digit to get better rounding).
You extract the wrong bit for sign, and then you use it the wrong way. Yes, if it is 0 then sign = 1 and if 1 then sign = -1, but you use it in the final calculation as
value = (-1^sign) * ...
(again with a ^, although even pow does not make any sense here). You ought to have used sign * .. straight away.
exponent was declared an unsigned int, but that fails for negative values. It needs to be signed for pow(2, exponent) (corrected from your (2 ^ exponent)).
On the positive side, (1+mantissa2) is indeed correct.
With all of those points taken together, and ignoring the fact that you actually ask for only 4 significant digits, I get the following code. Note that I rearranged the initial bit shifting and extracting for convenience – I shift mantissa to the left, rather than the right, so I can test against 0 in its calculation.
(Ah, I missed this!) Using sign straight away does not work because it was declared as an unsigned int. Therefore, where you think you give it the value -1, it actually gets the value 4294967295 (more precisely: the value of UINT_MAX from limits.h).
The easiest way to get rid of this is not multiplying by sign but only test it, and negate value if it is set.
float floatizeMe (unsigned int myNumba )
{
    //// myNumba comes in as 32 bits or 8 byte
    unsigned int sign = myNumba >> 31;
    signed int exponent = ((myNumba >> 23) & 0xff) - 0x7F;
    unsigned int mantissa = myNumba << 9;
    float value = 0;
    float mantissa2;

    cout << endl << "input is : " << hex << myNumba << endl;
    cout << endl << "mantissa is : " << hex << mantissa << endl;

    value = 0.5f;
    mantissa2 = 0.0f;
    while (mantissa)
    {
        if (mantissa & 0x80000000)
            mantissa2 += value;
        mantissa <<= 1;
        value *= 0.5f;
    }

    cout << "\nsign is: " << sign << endl;
    cout << "exponent is : " << hex << exponent << endl;
    cout << "mantissa 2 is : " << mantissa2 << endl;

    /* REMOVE:
    // if above this number it is negative
    if ( sign == 1)
        sign = -1;
    // if above this number it is positive
    else {
        sign = 1;
    } */

    /* value = sign * (1.0f + mantissa2) * (pow (2, exponent)); */
    value = (1.0f + mantissa2) * (pow (2, exponent));
    if (sign) value = -value;

    cout << dec << "Float value is: " << value << "\n\n\n";
    return value;
}
With the above, you get correct results for values such as 0x3e4ccccd (0.2000000030) and 0x40490FDB (3.1415927410).
All said and done, if your input is already in IEEE-754 format (albeit in hex), then a simple cast ought to be enough.
As well as being much simpler, this also avoids any rounding/precision errors.
float value = reinterpret_cast<float&>(myNumba);
Or if you don't like the type punning, at least use std::ldexp to apply the exponent rather than your explicit maths, which is vulnerable to rounding/precision errors and overflow.
An alternate to both of these is to use a union type, as described in this answer.
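A short sketch (mine, not from the answers) combining that cast with std::frexp to inspect the parts, assuming a 32-bit IEEE 754 float:

#include <cmath>
#include <iostream>

int main() {
    unsigned int myNumba = 0x40490FDB;                // bit pattern of pi as single precision
    float value = reinterpret_cast<float&>(myNumba);  // same cast as above

    int exponent = 0;
    float fraction = std::frexp(value, &exponent);    // value == fraction * 2^exponent
    std::cout << value << " = " << fraction << " * 2^" << exponent << '\n';
    return 0;
}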

Two bytes into one

First off, I apologize if this is a duplicate; but my Google-fu seems to be failing me today.
I'm in the middle of writing an image format module for Photoshop, and one of the save options for this format includes a 4-bit alpha channel. Of course, the data I have to convert is 8-bit/1-byte alpha - so I need to essentially take every two bytes of alpha and merge them into one.
My attempt (below), I believe, has a lot of room for improvement:
for(int x=0,w=0;x < alphaData.size();x+=2,w++)
{
    short ashort=(alphaData[x] << 8)+alphaData[x+1];
    alphaFinal[w]=(unsigned char)ashort;
}
alphaData and alphaFinal are vectors that contain the 8-bit alpha data and the 4-bit alpha data, respectively. I realize that reducing two bytes into the value of one is bound to result in loss of "resolution", but I can't help but think there's a better way of doing this.
For extra information, here's the loop that does the reverse (converts 4-bit alpha from the format to 8-bit for Photoshop)
alphaData serves the same purpose as above, and imgData is an unsigned char vector that holds the raw image data. (alpha data is tacked on after the actual rgb data for the image in this particular variant of the format)
for(int b=alphaOffset,x2=0;b < (alphaOffset+dataLength); b++,x2+=2)
{
    unsigned char lo = (imgData[b] & 15);
    unsigned char hi = ((imgData[b] >> 4) & 15);
    alphaData[x2]=lo*17;
    alphaData[x2+1]=hi*17;
}
Are you sure that it's
alphaData[x2]=lo*17;
alphaData[x2+1]=hi*17;
and not
alphaData[x2]=lo*16;
alphaData[x2+1]=hi*16;
In any case, to generate the values that work with the decoding function you have posted, you just have to reverse the operations. So multiplying by 17 becomes dividing by 17 and the shifts and masks get reordered to look like this:
for(int x=0,w=0;x < alphaData.size();x+=2,w++)
{
    unsigned char alpha1 = alphaData[x] / 17;
    unsigned char alpha2 = alphaData[x+1] / 17;
    Assert(alpha1 < 16 && alpha2 < 16);
    alphaFinal[w]=(alpha2 << 4) | alpha1;
}
short ashort=(alphaData[x] << 8)+alphaData[x+1];
alphaFinal[w]=(unsigned char)ashort;
You're actually losing alphaData[x] in alphaFinal. You shift alphaData[x] 8 bits to the left and then the cast keeps only the 8 low bits.
Also, your for loop is unsafe: if for some reason alphaData.size() is odd, you'll run out of range.
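A possible fix for that bound (a sketch of mine, not part of the answer), reusing the packing from the answer above: stop while x+1 is still in range, so an odd-sized vector simply ignores the last dangling byte.

for (std::size_t x = 0, w = 0; x + 1 < alphaData.size(); x += 2, ++w)
{
    unsigned char alpha1 = alphaData[x] / 17;     // both indices are valid here,
    unsigned char alpha2 = alphaData[x + 1] / 17; // even when size() is odd
    alphaFinal[w] = (alpha2 << 4) | alpha1;
}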
What you want to do, I think, is to truncate an 8-bit value into a 4-bit one, not to combine two 8-bit values. In other words, you want to drop the four least significant bits of each alpha value, not combine two different alpha values.
So, basically, you want to right-shift by 4.
output = (input >> 4); /* truncate four bits */
In case you're not familiar with binary shifts, take this random 8-bit number:
10110110
>> 1
= 01011011
>> 1
= 00101101
>> 1
= 00010110
>> 1
= 00001011
so,
10110110
>> 4
= 00001011
and to reverse, left-shift instead...
input = (output << 4); /* expand four bits */
which, using the result from that same random 8-bit number as before, would be
00001011
<< 4
= 10110000
Obviously, as you noted, 4 bits of precision are lost. But you'd be surprised how little it's noticed in a fully composited work.
This code
for(int x=0,w=0;x < alphaData.size();x+=2,w++)
{
    short ashort=(alphaData[x] << 8)+alphaData[x+1];
    alphaFinal[w]=(unsigned char)ashort;
}
is broken. Given:
#include <iostream>
using std::cout;
using std::endl;

typedef unsigned char uchar;

int main() {
    uchar x0 = 1; // for alphaData[x]
    uchar x1 = 2; // for alphaData[x+1]
    short ashort = (x0 << 8) + x1; // The value 0x0102
    uchar afinal = (uchar)ashort;  // truncates to 0x02.
    cout << std::hex
         // cast the uchars so their numeric values are printed, not raw characters
         << "x0 = 0x" << (unsigned int)x0 << " << 8 = 0x" << (x0 << 8) << endl
         << "x1 = 0x" << (unsigned int)x1 << endl
         << "ashort = 0x" << ashort << endl
         << "afinal = 0x" << (unsigned int)afinal << endl
    ;
}
If you are saying that your source stream contains sequences of 4-bit pairs stored in 8-bit storage values, which you need to re-store as a single 8-bit value, then what you want is:
for(int x=0,w=0;x < alphaData.size();x+=2,w++)
{
    unsigned char aleft = alphaData[x] & 0x0f; // 4 bits.
    unsigned char aright = alphaData[x + 1] & 0x0f; // 4 bits.
    alphaFinal[w] = (aleft << 4) | (aright);
}
"<<4" is equivalent to "*16", as ">>4" is equivalent to "/16".

Thinking in C++ shift operators

I'm reading through a book on C++ standards: "Thinking in C++" by Bruce Eckel.
A lot of the C++ features are explained really well in this book, but I have hit a brick wall on something. Whether or not it may help me when I wish to program a game, for example, it's irking me how this works, and I really cannot get it from the explanation given.
I was wondering if anybody here could help me in explaining how this example program actually works:
printBinary.h -
void printBinary(const unsigned char val);
printBinary.cpp -
#include <iostream>
void printBinary(const unsigned char val) {
    for (int i = 7; i >= 0; i--) {
        if (val & (1 << i))
            std::cout << "1";
        else
            std::cout << "0";
    }
}
Bitwise.cpp -
#include "printBinary.h"
#include <iostream>
using namespace std;
#define PR(STR, EXPR) \
    cout << STR; printBinary(EXPR); cout << endl;

int main() {
    unsigned int getval;
    unsigned char a, b;
    cout << "Enter a number between 0 and 255: ";
    cin >> getval; a = getval;
    PR("a in binary: ", a);
    cin >> getval; b = getval;
    PR("b in binary: ", b);
    PR("a | b = ", a | b);
}
This program is supposed to explain to me how the bitwise shift operators (<<) and (>>) work, but I simply don't get it. I mean, sure, I know how it works with cin and cout, but am I stupid for not understanding this?
This piece in particular confuses me more than the rest:
if (val & ( 1 << i))
Thanks for any help
if (val & ( 1 << i))
Consider the following binary number (128):
10000000
& is bitwise "AND" - 0 & 0 = 0, 0 & 1 = 1 & 0 = 0, 1 & 1 = 1.
<< is the bitwise shift operator; it shifts the binary representation of the number to the left.
00000001 << 1 = 00000010; 00000001 << 2 = 00000100.
Write it down on a piece of paper in all iterations and see what comes out.
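A short sketch (mine, not from the answer) that prints those iterations for one sample value, so you can see which bit each test hits:

#include <iostream>

int main() {
    unsigned char val = 0xB6; // 10110110
    for (int i = 7; i >= 0; i--) {
        std::cout << "i=" << i
                  << " mask=" << (1 << i)
                  << " bit=" << ((val & (1 << i)) ? 1 : 0) << '\n';
    }
    return 0;
}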
1 << i
takes the int-representation of 1 and shifts it i bits to the left.
val & x
is a bit-wise AND between val and x (where x is 1 << i in this example).
if(x)
tests if x converted to bool is true. Any non-zero value of an integral type converted to bool is true.
<< has two different meanings in the code you showed.
if (val & (1 << i))
<< is used to bitshift, so the value 1 will be shifted left by i bits
cout << ....
The stream class overloads the operator <<, so here it has a different meaning than before.
In this case, << is a function that outputs the value on its right to cout.
if (val & ( 1 << i))
This checks whether the bit in the i-th position is set. (1 << i) is something like 00001000 for i = 3. Now if the & operation returns non-zero, that means val had the corresponding bit set.