Bit Reversal - not clear what the output is - C++

I am reading the following example:
Var1(REG1, 0U, 16U);
Var2(REG2, 0U, 8U);
UINT32 FirstReg = Getaddress1(Var1); //the dimension is 16 bit
FirstReg = ((FirstReg >> 1) & 0x5555) | ((FirstReg << 1) & 0xaaaa);
FirstReg = ((FirstReg >> 2) & 0x3333) | ((FirstReg << 2) & 0xcccc);
FirstReg = ((FirstReg >> 4) & 0x0f0f) | ((FirstReg << 4) & 0xf0f0);
FirstReg = ((FirstReg >> 8) & 0x00ff) | ((FirstReg << 8) & 0xff00);
FirstReg = (FirstReg << 8);
UINT32 SecondReg = Getaddress2(Var2);//the dimension is 8 bit
SecondReg = ((SecondReg >> 1) & 0x5555) | ((SecondReg << 1) & 0xaaaa);
SecondReg = ((SecondReg >> 2) & 0x3333) | ((SecondReg << 2) & 0xcccc);
SecondReg = ((SecondReg >> 4) & 0x0f0f) | ((SecondReg << 4) & 0xf0f0);
SecondReg = ((SecondReg >> 8) & 0x00ff) | ((SecondReg << 8) & 0xff00);
SecondReg = (SecondReg >> 8);
return (FirstReg | SecondReg);
Basically, as far as I understand, the intention is to reverse the bits read into the two UINT32 Reg variables and collect them in a single UINT32 variable.
What I don't get is whether the first bit (for example) of SecondReg will become the 17th bit of the returned value or the first one.

First, even though the algorithm works with 32-bit integers, only the 16 least significant bits are used, because they are ANDed with 16-bit masks.
So after the first part (before the last shift), FirstReg and SecondReg contain the 16 least significant bits of the original values, reversed.
Then FirstReg is shifted left by 8 bits, SecondReg is shifted right by 8 bits, and the two are ORed together. The result is a 32-bit value composed of (most significant byte to least): 0, the high-order byte of the reversed FirstReg, the low-order byte of the reversed FirstReg, and the high-order byte of the reversed SecondReg.
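To make the bit placement concrete, here is a minimal compilable sketch of my own (not the original code) that uses the same swap ladder and combines the two values the same way; it shows that bit 0 of SecondReg ends up at bit 7 of the result, while bit 0 of FirstReg ends up at bit 23:
#include <cstdint>
#include <cstdio>

// Reverse the 16 low-order bits of v using the same swap ladder as the question.
static std::uint32_t reverse16(std::uint32_t v)
{
    v = ((v >> 1) & 0x5555u) | ((v << 1) & 0xAAAAu);
    v = ((v >> 2) & 0x3333u) | ((v << 2) & 0xCCCCu);
    v = ((v >> 4) & 0x0F0Fu) | ((v << 4) & 0xF0F0u);
    v = ((v >> 8) & 0x00FFu) | ((v << 8) & 0xFF00u);
    return v;
}

int main()
{
    std::uint32_t first  = 0x0001u; // bit 0 set in the 16-bit register
    std::uint32_t second = 0x0001u; // bit 0 set in the 8-bit register
    std::uint32_t result = (reverse16(first) << 8) | (reverse16(second) >> 8);
    // first's bit 0 lands at bit 23, second's bit 0 lands at bit 7.
    std::printf("%08X\n", static_cast<unsigned>(result)); // prints 00800080
}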

Related

C++: Bitshift 4 int8_t into a normal integer (32 bit)

I had already asked a question about how to get 4 int8_t values into a 32-bit int, and I was told that I have to cast each int8_t to a uint8_t first in order to pack it into a 32-bit integer with bit shifting.
int8_t offsetX = -10;
int8_t offsetY = 120;
int8_t offsetZ = -60;
using U = std::uint8_t;
int toShader = (U(offsetX) << 24) | (U(offsetY) << 16) | (U(offsetZ) << 8) | (0 << 0);
std::cout << (int)(toShader >> 24) << " "<< (int)(toShader >> 16) << " " << (int)(toShader >> 8) << std::endl;
My Output is
-10 -2440 -624444
It's not what I expected, of course; does anyone have a solution?
In the shader I want to unpack the values again later, and that is only possible with a 32-bit integer because GLSL does not have any other integer data types.
int offsetX = data[gl_InstanceID * 3 + 2] >> 24;
int offsetY = data[gl_InstanceID * 3 + 2] >> 16 ;
int offsetZ = data[gl_InstanceID * 3 + 2] >> 8 ;
What is written inside the square brackets does not matter; the question is about shifting or casting the bits correctly after the indexing.
If any of the offsets is negative, then the left shift results in undefined behaviour.
Solution: Convert the offsets to an unsigned type first.
However, this brings another potential problem: if you convert to a full-width unsigned type, negative numbers will have very large values with set bits in the most significant bytes, and ORing those bits in will set them regardless of offsetX and offsetY. One solution is to convert to a small unsigned type (std::uint8_t); another is to mask off the unused bytes. The former is probably simpler:
using U = std::uint8_t;
int third = U(offsetX) << 24u
          | U(offsetY) << 16u
          | U(offsetZ) << 8u
          | 0u << 0u;
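For completeness, here is a sketch of how the signed values can be read back out of the packed integer on the C++ side; the masking and the conversion back through int8_t are my addition (the int8_t conversion is implementation-defined before C++20, but wraps as expected on common two's-complement platforms):
#include <cstdint>
#include <iostream>

int main()
{
    std::int8_t offsetX = -10, offsetY = 120, offsetZ = -60;
    using U = std::uint8_t;

    // Pack through the unsigned type, as in the answer above.
    std::uint32_t packed = (std::uint32_t(U(offsetX)) << 24)
                         | (std::uint32_t(U(offsetY)) << 16)
                         | (std::uint32_t(U(offsetZ)) << 8);

    // Unpack: mask out a single byte, then convert through int8_t to restore the sign.
    std::int8_t x = std::int8_t((packed >> 24) & 0xFF);
    std::int8_t y = std::int8_t((packed >> 16) & 0xFF);
    std::int8_t z = std::int8_t((packed >> 8)  & 0xFF);

    std::cout << int(x) << " " << int(y) << " " << int(z) << "\n"; // -10 120 -60
}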
I think you're forgetting to mask the bits that you care about before shifting them.
Perhaps this is what you're looking for:
int32 offsetX = (data[gl_InstanceID * 3 + 2] & 0xFF000000) >> 24;
int32 offsetY = (data[gl_InstanceID * 3 + 2] & 0x00FF0000) >> 16 ;
int32 offsetZ = (data[gl_InstanceID * 3 + 2] & 0x0000FF00) >> 8 ;
if (offsetX & 0x80) offsetX |= 0xFFFFFF00;
if (offsetY & 0x80) offsetY |= 0xFFFFFF00;
if (offsetZ & 0x80) offsetZ |= 0xFFFFFF00;
Without the bit mask, the X part will end up in offsetY, and the X and Y part in offsetZ.
On the CPU side you can use a union to avoid bit shifts, bit masking, and branches ...
int8_t x, y, z, w; // your 8-bit ints
int32_t i;         // your 32-bit int

union my_union // just a helper union for the casting
{
    int8_t  i8[4];
    int32_t i32;
} a;

// 4 x 8bit -> 32bit
a.i8[0] = x;
a.i8[1] = y;
a.i8[2] = z;
a.i8[3] = w;
i = a.i32;

// 32bit -> 4 x 8bit
a.i32 = i;
x = a.i8[0];
y = a.i8[1];
z = a.i8[2];
w = a.i8[3];
If you do not like unions, the same can be done with pointers...
Beware: on the GLSL side this is not possible (neither unions nor pointers), so you have to use bit shifts and masks as in the other answer...
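If you would rather avoid the type-punning question that unions raise in C++, a std::memcpy based sketch (my code, not from the answer) does the same copy in a fully defined way:
#include <cstdint>
#include <cstring>

int main()
{
    std::int8_t bytes[4] = { -10, 120, -60, 0 };
    std::int32_t packed;

    // 4 x 8bit -> 32bit (byte order follows the machine's endianness, just like the union).
    std::memcpy(&packed, bytes, sizeof packed);

    // 32bit -> 4 x 8bit
    std::memcpy(bytes, &packed, sizeof packed);
}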

How to count the number of set bits in a long long variable? [duplicate]

This question already has answers here:
Count the number of set bits in a 32-bit integer
(65 answers)
Closed 3 years ago.
I want to count the set bits in a long long variable. For example,
1100011001 -> 5
For the int type, I can use
a = (a & 0x5555) + ((a & 0xAAAA) >> 1);
a = (a & 0x3333) + ((a & 0xCCCC) >> 2);
a = (a & 0x0F0F) + ((a & 0xF0F0) >> 4);
a = (a & 0x00FF) + ((a & 0xFF00) >> 8);
but how do I do that in the case of long long?
Your code is for 16-bit integers.
To make it work with 32-bit integers, you need to:
make each literal in it two times wider (while preserving the pattern),
add one more line of code.
Here's the result:
a = (a & 0x55555555) + ((a & 0xAAAAAAAA) >> 1);
a = (a & 0x33333333) + ((a & 0xCCCCCCCC) >> 2);
a = (a & 0x0F0F0F0F) + ((a & 0xF0F0F0F0) >> 4);
a = (a & 0x00FF00FF) + ((a & 0xFF00FF00) >> 8);
a = (a & 0x0000FFFF) + ((a & 0xFFFF0000) >> 16);
Then, to make it work with 64-bit integers, you repeat the same procedure:
a = (a & 0x5555555555555555) + ((a & 0xAAAAAAAAAAAAAAAA) >> 1);
...
a = (a & 0x0000FFFF0000FFFF) + ((a & 0xFFFF0000FFFF0000) >> 16);
a = (a & 0x00000000FFFFFFFF) + ((a & 0xFFFFFFFF00000000) >> 32);
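Putting the whole ladder together, a complete sketch for an unsigned 64-bit value could look like this; the ULL suffixes keep the literals 64 bits wide, and the function name is mine:
#include <cstdint>

// Count set bits in a 64-bit value with the same mask-and-add ladder.
unsigned popcount64(std::uint64_t a)
{
    a = (a & 0x5555555555555555ULL) + ((a & 0xAAAAAAAAAAAAAAAAULL) >> 1);
    a = (a & 0x3333333333333333ULL) + ((a & 0xCCCCCCCCCCCCCCCCULL) >> 2);
    a = (a & 0x0F0F0F0F0F0F0F0FULL) + ((a & 0xF0F0F0F0F0F0F0F0ULL) >> 4);
    a = (a & 0x00FF00FF00FF00FFULL) + ((a & 0xFF00FF00FF00FF00ULL) >> 8);
    a = (a & 0x0000FFFF0000FFFFULL) + ((a & 0xFFFF0000FFFF0000ULL) >> 16);
    a = (a & 0x00000000FFFFFFFFULL) + ((a & 0xFFFFFFFF00000000ULL) >> 32);
    return static_cast<unsigned>(a);
}
For example, popcount64(0b1100011001) returns 5, matching the example above.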

Embed multiple ints in byte array

I am using this code:
int number; //=smth
unsigned char sendBuffer[255];
sendBuffer[0] = number & 0xFF;
sendBuffer[1] = (number >> 8) & 0xFF;
sendBuffer[2] = (number >> 16) & 0xFF;
sendBuffer[3] = (number >> 24) & 0xFF;
to put number into the byte array sendBuffer.
My question is:
Say I now want to embed two numbers in the byte array; should I proceed like this?
sendBuffer[0] = number & 0xFF;
sendBuffer[1] = (number >> 8) & 0xFF;
sendBuffer[2] = (number >> 16) & 0xFF;
sendBuffer[3] = (number >> 24) & 0xFF;
sendBuffer[4] = number2 & 0xFF;
sendBuffer[5] = (number2 >> 8) & 0xFF;
sendBuffer[6] = (number2 >> 16) & 0xFF;
sendBuffer[7] = (number2 >> 24) & 0xFF;
Will this work even if number is of size, say, 8 or 6 bytes?
(I ask because on some platforms an int may be 4 bytes or 6, right?
So I was wondering whether the above code also works when number is 6 bytes.
A further thing to note: even if it is 6 bytes but I only
store a 4-byte integer in it, will the above code work?)
I usually store this buffer in some memory on a card and I don't have problems reading it back (e.g., endianness issues; when reading, the byte array seems to come back in the order I saved it).
Finally, how to reconstruct the integer from the byte array sendBuffer?
1) Yes, proceed like that. No, it only works for 4 bytes.
There is an easier, better way to do this, although it can cause endianness issues if the buffer is sent from one computer to another that uses a different architecture. Assuming you know the type of number, overlay another array on top of sendBuffer.
unsigned char sendBuffer[255];
number_type *sendBufferNum = (number_type*) sendBuffer;
sendBufferNum[0] = number;
sendBufferNum[1] = number2;
Reading a number can be done the same way.
unsigned char receiveBuffer[255];
//read values into receiveBuffer
number_type *receiveBufferNum = (number_type*) receiveBuffer;
number_type number = receiveBufferNum[0];
number_type number2 = receiveBufferNum[1];
This only works for 32-bit (4-byte) integers. You have to write a 64-bit (8-byte) version if you are going to support larger ints.
You can reverse the process using bitwise OR.
#define BigEndianGetUInt32(ptr) ( ((uint32)((uint8*)(ptr))[0]) << 24 | \
                                  ((uint32)((uint8*)(ptr))[1]) << 16 | \
                                  ((uint32)((uint8*)(ptr))[2]) << 8  | \
                                  ((uint32)((uint8*)(ptr))[3]) )
number = BigEndianGetUInt32(sendBuffer);
number1 = BigEndianGetUInt32(sendBuffer+4);
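One thing to note: the packing code in the question writes the least significant byte first, so reading the same values back from that buffer needs the opposite byte order. A little-endian counterpart, sketched here as an inline function (the name is mine), would be:
#include <cstdint>

// Reassemble a value that was stored least-significant-byte first, as in the question's packing code.
inline std::uint32_t LittleEndianGetUInt32(const unsigned char *p)
{
    return  std::uint32_t(p[0])
         | (std::uint32_t(p[1]) << 8)
         | (std::uint32_t(p[2]) << 16)
         | (std::uint32_t(p[3]) << 24);
}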
As a side note, if you're serializing data just for the same device, you could have memcpy'ed number to sendBuffer.
memcpy(sendBuffer, &number, sizeof(number));
memcpy(sendBuffer+sizeof(number), &number1, sizeof(number1));
Will this work even if number is of size say 8 or 6 bytes?
It works, but obviously you need to add more lines to save all the bytes in the value. That's a lot of manual work, which is not very extensible. Use a programmatic approach instead:
auto num = number;
for (size_t i = 0; i < sizeof(number); i++, num >>= CHAR_BIT) // CHAR_BIT comes from <climits>
    sendBuffer[i] = num & 0xFF;
But why do that when you already have memcpy()? This way you need only 1 line, and even better, it can be extended to multiple values easily
memcpy(&sendBuffer[0], &number1, sizeof number1);
memcpy(&sendBuffer[sizeof(number1)], &number2, sizeof number2);
Finally, how to reconstruct the integer from the byte array sendBuffer?
Easy. Just shift the bytes back
number = (sendBuffer[3] << 24) | (sendBuffer[2] << 16) | (sendBuffer[1] << 8) | sendBuffer[0];
number2 = (sendBuffer[7] << 24) | (sendBuffer[6] << 16) | (sendBuffer[5] << 8) | sendBuffer[4];
Again, avoid tedious work like that and use a for loop
number = 0;
for (size_t i = sizeof(number); i-- > 0; ) // most significant byte first, matching the lines above
    number = (number << 8) | sendBuffer[i];
But memcpy also works and is highly recommended
memcpy(&number, &sendBuffer[numberIndex], sizeof number);
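For completeness, here is a small self-contained sketch of the whole round trip with a fixed, least-significant-byte-first layout (my code; the values are arbitrary examples):
#include <climits>
#include <cstddef>
#include <cstdint>

int main()
{
    std::uint32_t number  = 0x12345678u;
    std::uint32_t number2 = 0x9ABCDEF0u;
    unsigned char sendBuffer[255];

    // Pack both values, least significant byte first (same order as the question).
    for (std::size_t i = 0; i < sizeof number; ++i)
        sendBuffer[i] = (number >> (CHAR_BIT * i)) & 0xFF;
    for (std::size_t i = 0; i < sizeof number2; ++i)
        sendBuffer[sizeof number + i] = (number2 >> (CHAR_BIT * i)) & 0xFF;

    // Unpack them again in the same byte order.
    std::uint32_t back = 0, back2 = 0;
    for (std::size_t i = 0; i < sizeof back; ++i)
        back |= std::uint32_t(sendBuffer[i]) << (CHAR_BIT * i);
    for (std::size_t i = 0; i < sizeof back2; ++i)
        back2 |= std::uint32_t(sendBuffer[sizeof back + i]) << (CHAR_BIT * i);

    return (back == number && back2 == number2) ? 0 : 1; // 0 means the round trip matched
}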

OpenGL C++: How to convert a 32-bit PNG file to a 16-bit RGB565 file

I am trying to convert 32-bit PNG files to 16-bit file formats. I understand how to convert between 16-bit file formats (e.g. RGB565, RGBA4444), but I'm not sure how to go about converting from 32 bit to 16 bit.
My main questions are: How do I find out how the 32-bit PNG is stored (are 8 bits each assigned to the R, G, B, and A values)?
How do I lose precision but still keep roughly the same value?
Thanks in advance
You'd be much better off using libpng than implementing this by hand.
I am not familiar with the exact layout of a 32-bit PNG pixel, but assuming it is relatively consistent with other formats, you probably want to do something like this:
// Get the pixel from the png:
unsigned int pngPixel = getPngPixel();
unsigned char r = (pngPixel & 0xFF000000) >> 24;
unsigned char g = (pngPixel & 0x00FF0000) >> 16;
unsigned char b = (pngPixel & 0x0000FF00) >> 8;
unsigned char a = (pngPixel & 0x000000FF);
// You can collapse this to one line, but for clarity...
// Mask off the least significant bits, then shift each field into place.
unsigned short rgb565Pixel = (r & 0xF8) << 8; // top 5 bits of red -> bits 15..11
rgb565Pixel |= (g & 0xFC) << 3;               // top 6 bits of green -> bits 10..5
rgb565Pixel |= (b & 0xF8) >> 3;               // top 5 bits of blue -> bits 4..0
// Again you could collapse this down to one line, but for clarity...
// Mask off the least significant bits, then shift each field into place.
unsigned short rgba4Pixel = (r & 0xF0) << 8;  // top 4 bits of red -> bits 15..12
rgba4Pixel |= (g & 0xF0) << 4;                // top 4 bits of green -> bits 11..8
rgba4Pixel |= (b & 0xF0);                     // top 4 bits of blue -> bits 7..4
rgba4Pixel |= (a & 0xF0) >> 4;                // top 4 bits of alpha -> bits 3..0
Consider this pseudocode.
One could argue that masking off the least significant bits, especially when converting from 8 bit to 4 bit, is not a very good way to convert between the two, and they would be right. You could instead use a conversion function:
unsigned int convertColor(unsigned char c, unsigned int oldMax, unsigned int newMax) {
    double oldColor = c;
    double percentOfMax = oldColor / oldMax;
    return ((unsigned int)(newMax * percentOfMax)) & newMax;
}
// now we can do this
unsigned short rgba4Pixel = convertColor(r, 0xFF, 0x0F) << 12;
rgba4Pixel |= convertColor(g, 0xFF, 0x0F) << 8;
rgba4Pixel |= convertColor(b, 0xFF, 0x0F) << 4;
rgba4Pixel |= convertColor(a, 0xFF, 0x0F);
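Assuming the same helper, RGB565 can be built the same way, with 5 bits for red and blue and 6 bits for green (a sketch, not taken from the original answer):
// Scale each 8-bit channel down to its RGB565 width, then shift it into place.
unsigned short rgb565Pixel = convertColor(r, 0xFF, 0x1F) << 11;
rgb565Pixel |= convertColor(g, 0xFF, 0x3F) << 5;
rgb565Pixel |= convertColor(b, 0xFF, 0x1F);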

DWORD to bytes using bitwise shift operators

I can't get it to work correctly.
#include <windows.h>

int main()
{
    DWORD i = 6521;
    BYTE first = i >> 32;
    BYTE second = i >> 24;
    BYTE third = i >> 16;
    BYTE fourth = i >> 8;
    i = (((DWORD)fourth) << 24) | (((DWORD)third) << 16) | (((DWORD)second) << 8) | first;
}
BYTE first = (i >> 24) & 0xff;
BYTE second = (i >> 16) & 0xff;
BYTE third = (i >> 8) & 0xff;
BYTE fourth = i & 0xff ;
I think you shift your DWORD too much. By 8 bits too much :)
Your shifts are not quite correct.
BYTE first = i >> 24;
BYTE second = i << 8 >> 24;
BYTE third = i << 16 >> 24;
BYTE fourth = i << 24 >> 24;
What I am doing is shifting down 24 for the top byte, then shifting up in increments of 8 to clear the top bits and place the next byte in position for the shift down.
You could also read the value at the DWORD as a byte array (or struct) of 4 bytes and let the compiler do the work for you.
The bytes aren't always in the order that you expect, though; Neil's solution is correct. You probably want to look at "endianness" if you're having that problem.
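To tie the thread together, here is a small sketch (my code) that splits the DWORD with the corrected shifts and reassembles it, so you can check the round trip:
#include <windows.h>

int main()
{
    DWORD i = 6521;

    // Take the value apart, most significant byte first.
    BYTE first  = (i >> 24) & 0xFF;
    BYTE second = (i >> 16) & 0xFF;
    BYTE third  = (i >> 8)  & 0xFF;
    BYTE fourth = i & 0xFF;

    // Put the bytes back in the same positions they were taken from.
    DWORD j = ((DWORD)first << 24) | ((DWORD)second << 16) | ((DWORD)third << 8) | fourth;

    return (i == j) ? 0 : 1; // 0 means the reassembled value equals the original 6521
}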