I am using this code:
int number; //=smth
unsigned char sendBuffer[255];
sendBuffer[0] = number & 0xFF;
sendBuffer[1] = (number >> 8) & 0xFF;
sendBuffer[2] = (number >> 16) & 0xFF;
sendBuffer[3] = (number >> 24) & 0xFF;
to put number into the byte array sendBuffer.
My question is:
Say I now want to embed two numbers in the byte array; shall I proceed like this?
sendBuffer[0] = number & 0xFF;
sendBuffer[1] = (number >> 8) & 0xFF;
sendBuffer[2] = (number >> 16) & 0xFF;
sendBuffer[3] = (number >> 24) & 0xFF;
sendBuffer[4] = number2 & 0xFF;
sendBuffer[5] = (number2 >> 8) & 0xFF;
sendBuffer[6] = (number2 >> 16) & 0xFF;
sendBuffer[7] = (number2 >> 24) & 0xFF;
Will this work even if number is, say, 8 or 6 bytes in size?
(I ask because on some platforms an int may be 4 bytes or 6, right?
So I was wondering whether the above code also works when number is 6 bytes.
A further thing to note: even if it is 6 bytes but I only
store a 4-byte integer inside it, will the above code work?)
I usually store this buffer in the memory of a card, and I don't have problems reading it back (e.g., no endianness issues; the byte array comes back in the order I saved it).
Finally, how to reconstruct the integer from the byte array sendBuffer?
1) Yes, proceed like that. No, it only works for 4-byte integers.
There is an easier, better way to do this, although it can cause endianness issues if the buffer is sent from one machine to another that uses a different architecture. Assuming you know the type of number, overlay another array on top of sendBuffer.
unsigned char sendBuffer[255];
number_type *sendBufferNum = (number_type*) sendBuffer;
sendBufferNum[0] = number;
sendBufferNum[1] = number2;
Reading a number can be done the same way.
unsigned char receiveBuffer[255];
//read values into receiveBuffer
number_type *receiveBufferNum = (number_type*) receiveBuffer;
number_type number = receiveBufferNum[0];
number_type number2 = receiveBufferNum[1];
This only works for 32-bit (4-byte) integers. You have to write a 64-bit (8-byte) version if you are going to support larger ints.
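For example, a minimal sketch of what the 64-bit variant of the same overlay idea could look like, assuming a fixed-width std::uint64_t from <cstdint> and that the buffer is suitably aligned (sendBufferNum64 is just an illustrative name):
#include <cstdint>
std::uint64_t number = 0, number2 = 0;   // 64-bit values to serialize
unsigned char sendBuffer[255];
// overlay a 64-bit view on the byte buffer, just like the 32-bit case above
std::uint64_t *sendBufferNum64 = reinterpret_cast<std::uint64_t*>(sendBuffer);
sendBufferNum64[0] = number;    // occupies bytes 0..7
sendBufferNum64[1] = number2;   // occupies bytes 8..15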
You can reverse the process with shifts and bitwise OR. Note that this macro assembles the bytes most-significant-first, so the indices have to match the order in which you stored the bytes.
#define BigEndianGetUInt32(ptr) ( ((uint32)((uint8*)(ptr))[0]) << 24 | \
((uint32)((uint8*)(ptr))[1]) << 16 | \
((uint32)((uint8*)(ptr))[2]) << 8 | \
((uint32)((uint8*)(ptr))[3]) )
number = BigEndianGetUInt32(sendBuffer);
number1 = BigEndianGetUInt32(sendBuffer+4);
As a side note, if you're serializing data just for the same device, you could have memcpy'ed number to sendBuffer.
memcpy(sendBuffer, &number, sizeof(number));
memcpy(sendBuffer+sizeof(number), &number1, sizeof(number1));
Will this work even if number is of size say 8 or 6 bytes?
It works, but obviously you need to add more lines to save all the bytes in the value. That's a lot of manual work and it is not very extensible. Use a programmatic approach instead:
auto num = number;
for (size_t i = 0; i < sizeof(number); i++, num >>= CHAR_BIT)  // CHAR_BIT comes from <climits>
    sendBuffer[i] = num & 0xFF;
But why do that when you already have memcpy()? This way you need only one line per value, and it extends to multiple values easily:
memcpy(&sendBuffer[0], &number1, sizeof number1);
memcpy(&sendBuffer[sizeof(number1)], &number2, sizeof number2);
Finally, how to reconstruct the integer from the byte array sendBuffer?
Easy. Just shift the bytes back:
number = (sendBuffer[3] << 24) | (sendBuffer[2] << 16) | (sendBuffer[1] << 8) | sendBuffer[0];
number2 = (sendBuffer[7] << 24) | (sendBuffer[6] << 16) | (sendBuffer[5] << 8) | sendBuffer[4];
Again, avoid tedious work like that and use a for loop (walking from the most significant byte down, to match the little-endian layout used above):
number = 0;
for (size_t i = sizeof(number); i-- > 0; )
    number = (number << 8) | sendBuffer[i];
But memcpy also works and is highly recommended:
memcpy(&number, &sendBuffer[numberIndex], sizeof number);
Related
I had already asked a question about how to get four int8_t values into a 32-bit int, and I was told that I have to cast each int8_t to a uint8_t first in order to pack it into a 32-bit integer with bit shifting.
int8_t offsetX = -10;
int8_t offsetY = 120;
int8_t offsetZ = -60;
using U = std::uint8_t;
int toShader = (U(offsetX) << 24) | (U(offsetY) << 16) | (U(offsetZ) << 8) | (0 << 0);
std::cout << (int)(toShader >> 24) << " "<< (int)(toShader >> 16) << " " << (int)(toShader >> 8) << std::endl;
My output is
-10 -2440 -624444
That's not what I expected, of course. Does anyone have a solution?
In the shader I want to unpack the values later, and that is only possible with a 32-bit integer because GLSL does not have any other data types.
int offsetX = data[gl_InstanceID * 3 + 2] >> 24;
int offsetY = data[gl_InstanceID * 3 + 2] >> 16 ;
int offsetZ = data[gl_InstanceID * 3 + 2] >> 8 ;
What is inside the square brackets does not matter; the question is about correctly shifting and casting the bits after the bracket.
If any of the offsets is negative, then the shift results in undefined behaviour.
Solution: Convert the offsets to an unsigned type first.
However, this brings another potential problem: if you convert to unsigned, then negative numbers will have very large values with set bits in the most significant bytes, and OR-ing those set bits in will clobber the other fields regardless of offsetX and offsetY. One solution is to convert into a small unsigned type (std::uint8_t), and another is to mask the unused bytes. The former is probably simpler:
using U = std::uint8_t;
int third = U(offsetX) << 24u
| U(offsetY) << 16u
| U(offsetZ) << 8u
| 0u << 0u;
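For comparison, a minimal sketch of the masking alternative mentioned above (my own illustration, assuming a 32-bit int and the same offsets as in the question):
#include <cstdint>
std::int8_t offsetX = -10, offsetY = 120, offsetZ = -60;
// do the masking and shifting in unsigned arithmetic so the sign-extension
// bits cannot clobber the other fields and nothing is shifted into the sign bit
unsigned packed = (unsigned(offsetX) & 0xFFu) << 24
                | (unsigned(offsetY) & 0xFFu) << 16
                | (unsigned(offsetZ) & 0xFFu) << 8
                | 0u;
int third = int(packed);   // same bit pattern on two's-complement targets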
I think you're forgetting to mask the bits that you care about before shifting them.
Perhaps this is what you're looking for:
int32 offsetX = (data[gl_InstanceID * 3 + 2] & 0xFF000000) >> 24;
int32 offsetY = (data[gl_InstanceID * 3 + 2] & 0x00FF0000) >> 16 ;
int32 offsetZ = (data[gl_InstanceID * 3 + 2] & 0x0000FF00) >> 8 ;
if (offsetX & 0x80) offsetX |= 0xFFFFFF00;
if (offsetY & 0x80) offsetY |= 0xFFFFFF00;
if (offsetZ & 0x80) offsetZ |= 0xFFFFFF00;
Without the bit mask, the X part will end up in offsetY, and the X and Y part in offsetZ.
On the CPU side you can use a union to avoid bit shifts, bit masks, and branches ...
int8_t x,y,z,w; // your 8bit ints
int32_t i; // your 32bit int
union my_union // just helper union for the casting
{
int8_t i8[4];
int32_t i32;
} a;
// 4x8bit -> 32bit
a.i8[0]=x;
a.i8[1]=y;
a.i8[2]=z;
a.i8[3]=w;
i=a.i32;
// 32bit -> 4x8bit
a.i32=i;
x=a.i8[0];
y=a.i8[1];
z=a.i8[2];
w=a.i8[3];
If you do not like unions, the same can be done with pointers (see the sketch below)...
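A minimal sketch of the pointer variant (my own illustration; note that this kind of type punning can fall foul of strict-aliasing rules, so memcpy is the conservative alternative):
#include <cstdint>
std::int8_t x = 0, y = 0, z = 0, w = 0;  // your 8bit ints
std::int32_t i = 0;                      // your 32bit int
// view the 32-bit int as four bytes through a pointer
std::int8_t *p = reinterpret_cast<std::int8_t*>(&i);
// 4x8bit -> 32bit
p[0] = x; p[1] = y; p[2] = z; p[3] = w;
// 32bit -> 4x8bit
x = p[0]; y = p[1]; z = p[2]; w = p[3];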
Beware, on the GLSL side this is not possible (neither unions nor pointers exist there), so you have to use bit shifts and masks as in the other answers...
A toy program that splits an integer into 4 bytes and later combines those bytes to get back the input value produces the wrong result. The program works for positive integers, but I am interested in signed integers. I need help.
Expected Output: -12345
Actual Output: -57
int main()
{
int j,i = -12345;
char b[4];
b[0] = (i >> 24) & 0xFF;
b[1] = (i >> 16) & 0xFF;
b[2] = (i >> 8) & 0xFF;
b[3] = (i >> 0) & 0xFF;
j = (int)((b[0] << 24) | (b[1] << 16) | (b[2] << 8) | (b[3] << 0));
std::cout << j;
return 0;
}
There are actually two problems that lead to your "error".
The first is that the elements of b are plain char, and it is implementation-defined whether char is signed or unsigned. If char is signed (as it is on most platforms), a byte pattern such as 0xC7 is stored as a negative value, and when that value is promoted to int for the shift and OR it is sign-extended, filling the upper bits with ones.
The second problem is that left-shifting a negative value, as in b[0] << 24 when b[0] holds -1, is undefined behaviour.
When you bring all that together it will almost certainly result in wrong values.
In general, whenever you feel the need to reach for a C-style cast (like the (int) cast around your combining expression) when programming in C++, you should take that as a sign that you're doing something wrong.
One possible way to solve your problem: always work with explicit unsigned data types.
First you need to copy the original int value to an unsigned int:
unsigned ui;
memcpy(&ui, &i, sizeof ui);
Then use ui instead of i when doing the "split". And explicitly use unsigned char:
unsigned char b[sizeof(unsigned)] = { 0 };
b[0] = (ui >> 24) & 0xFF;
b[1] = (ui >> 16) & 0xFF;
b[2] = (ui >> 8) & 0xFF;
b[3] = (ui >> 0) & 0xFF;
Then to put it all back, again use an explicit unsigned type, and copy it to the resulting variable:
unsigned uj = ((unsigned)b[0] << 24) | ((unsigned)b[1] << 16) | ((unsigned)b[2] << 8) | (unsigned)b[3];
memcpy(&j, &uj, sizeof j);
I suggest using unsigned data types here to avoid possible problems that can come from sign-extension during conversion.
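Putting those pieces together, a minimal sketch of the whole round trip (assuming a 32-bit int; this is just the steps above assembled into one program):
#include <cstring>
#include <iostream>
int main()
{
    int i = -12345, j = 0;
    unsigned ui;
    std::memcpy(&ui, &i, sizeof ui);       // reinterpret the bits as unsigned
    unsigned char b[sizeof(unsigned)] = { 0 };
    b[0] = (ui >> 24) & 0xFF;              // split, most significant byte first
    b[1] = (ui >> 16) & 0xFF;
    b[2] = (ui >> 8) & 0xFF;
    b[3] = (ui >> 0) & 0xFF;
    unsigned uj = ((unsigned)b[0] << 24) | ((unsigned)b[1] << 16)
                | ((unsigned)b[2] << 8) | (unsigned)b[3];
    std::memcpy(&j, &uj, sizeof j);        // copy the bits back into the signed int
    std::cout << j << '\n';                // prints -12345
    return 0;
}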
Your code works only for positive numbers! i is negative, and because its bytes land in (signed) chars, sign extension when you recombine them produces the wrong result.
Try:
int main()
{
int j, i = -12345;
const char* bytes = reinterpret_cast<const char*>(&i);
j = *reinterpret_cast<const int*>(bytes);
std::cout << j;
return 0;
}
I am sorry if my question is confusing, but here is an example of what I want to do.
Let's say I have an unsigned long int = 1265985549.
in binary I can write this as 01001011011101010110100000001101
now I want to split this 32-bit binary number into 4-bit groups like this and work on each group separately
0100 1011 0111 0101 0110 1000 0000 1101
any help would be appreciated.
You can get a 4-bit nibble at position k using bit operations, like this:
uint32_t nibble(uint32_t val, int k) {
return (val >> (4*k)) & 0x0F;
}
Now you can get the individual nibbles in a loop (note that k = 0 is the least significant nibble, so they come out in reverse of the written order), like this:
uint32_t val = 1265985549;
for (int k = 0; k != 8 ; k++) {
uint32_t n = nibble(val, k);
cout << n << endl;
}
short nibble0 = (i >> 0) & 15;
short nibble1 = (i >> 4) & 15;
short nibble2 = (i >> 8) & 15;
short nibble3 = (i >> 12) & 15;
etc
Based on the comment explaining the actual use for this, here's another way to count how many nibbles have odd parity (not tested):
// compute parities of nibbles
x ^= x >> 2;
x ^= x >> 1;
x &= 0x11111111;
// add the parities
x = (x + (x >> 4)) & 0x0F0F0F0F;
int count = x * 0x01010101 >> 24;
The first part is just a regular "xor all the bits" type of parity calculation (where "all bits" refers to all the bits in a nibble, not in the entire integer); the second part is based on the usual parallel bit-count algorithm, skipping some steps that are unnecessary because certain bits are always zero and so don't have to be added.
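As a quick illustration (my own sketch, assuming a 32-bit unsigned value), the same steps wrapped in a function and applied to the number from the earlier example:
#include <cstdint>
#include <iostream>
// counts how many 4-bit nibbles of x contain an odd number of set bits
int count_odd_parity_nibbles(std::uint32_t x)
{
    x ^= x >> 2;
    x ^= x >> 1;
    x &= 0x11111111;                   // parity of each nibble now sits in its lowest bit
    x = (x + (x >> 4)) & 0x0F0F0F0F;   // sum pairs of nibble parities into bytes
    return x * 0x01010101 >> 24;       // add the byte sums together
}
int main()
{
    // the nibbles of 1265985549 are 4 B 7 5 6 8 0 D; five of them have odd parity
    std::cout << count_odd_parity_nibbles(1265985549u) << '\n';   // prints 5
}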
How do I count the number of occurrences of 1 in an 8-bit string, such as 10110001?
The bit string is taken from the user, like 10110001.
What type of array should be used to store this bit string in C?
Short and simple: use std::bitset (C++).
#include <iostream>
#include <bitset>
int main()
{
std::bitset<8> mybitstring;
std::cin >> mybitstring;
std::cout << mybitstring.count(); // returns the number of set bits
}
Don't use an array at all; use a std::string. This gives you the possibility of better error handling. With a bitset you can write code like:
bitset<8> b;
if ( cin >> b ) {
cout << b << endl;
}
else {
cout << "error" << endl;
}
but there is no way of finding out which character caused the error.
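For example, a minimal sketch of the std::string approach with per-character validation (the details, such as treating anything other than '0' or '1' as an error, are my own illustration):
#include <cstddef>
#include <iostream>
#include <string>
int main()
{
    std::string bits;
    std::cin >> bits;
    int ones = 0;
    for (std::size_t i = 0; i < bits.size(); ++i) {
        if (bits[i] == '1')
            ++ones;
        else if (bits[i] != '0') {
            // unlike the bitset version, here you know exactly which character failed
            std::cout << "bad character '" << bits[i] << "' at position " << i << '\n';
            return 1;
        }
    }
    std::cout << ones << '\n';   // number of 1 bits
}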
You'd probably use an unsigned int to store those bits in C.
If you're using GCC then you can use __builtin_popcount to count the one bits:
Built-in Function: int __builtin_popcount (unsigned int x)
Returns the number of 1-bits in x.
This should resolve to a single instruction on CPUs that support it too.
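A minimal usage sketch (GCC/Clang; here the example bit string is simply written as a binary literal, which is just one way to get it into an unsigned int):
#include <iostream>
int main()
{
    unsigned int x = 0b10110001u;                 // the example bit string 10110001
    std::cout << __builtin_popcount(x) << '\n';   // prints 4
}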
From Hacker's Delight:
For machines that don't have this instruction, a good way to count the number
of 1-bits is to first set each 2-bit field equal to the sum of the two single
bits that were originally in the field, and then sum adjacent 2-bit fields,
putting the results in each 4-bit field, and so on.
so, if x is an integer:
x = (x & 0x55555555) + ((x >> 1) & 0x55555555);
x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
x = (x & 0x0F0F0F0F) + ((x >> 4) & 0x0F0F0F0F);
x = (x & 0x00FF00FF) + ((x >> 8) & 0x00FF00FF);
x = (x & 0x0000FFFF) + ((x >>16) & 0x0000FFFF);
x will now contain the number of 1 bits. Just adapt the algorithm to 8-bit values.
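For example, a sketch of the 8-bit adaptation (my own adaptation of the same idea):
#include <cstdint>
// count the 1 bits in a single byte
int popcount8(std::uint8_t b)
{
    unsigned x = b;
    x = (x & 0x55) + ((x >> 1) & 0x55);   // sums of adjacent 1-bit fields
    x = (x & 0x33) + ((x >> 2) & 0x33);   // sums of adjacent 2-bit fields
    x = (x & 0x0F) + ((x >> 4) & 0x0F);   // sums of adjacent 4-bit fields
    return int(x);
}
// popcount8(0xB1) == 4, and 0xB1 is 10110001 in binary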
I can't get it to work correctly.
#include <windows.h>
int main()
{
DWORD i = 6521;
BYTE first = i >> 32;
BYTE second = i >> 24;
BYTE third = i >> 16;
BYTE fourth = i >> 8;
i = (((DWORD)fourth) << 24) | (((DWORD)third) << 16) | (((DWORD)second) << 8) | first;
}
BYTE first = (i >> 24) & 0xff;
BYTE second = (i >> 16) & 0xff;
BYTE third = (i >> 8) & 0xff;
BYTE fourth = i & 0xff ;
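With the bytes extracted like that, the recombination has to put them back in the same order; a sketch mirroring the extraction above:
i = (((DWORD)first) << 24) | (((DWORD)second) << 16) | (((DWORD)third) << 8) | fourth;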
I think you shift your DWORD too much. By 8 bits too much :)
Your shifts are not quite correct.
BYTE first = i >> 24;
BYTE second = i << 8 >> 24;
BYTE third = i << 16 >> 24;
BYTE fourth = i << 24 >> 24;
What I am doing is shifting down 24 for the top byte, then shifting up in increments of 8 to clear the top bits and place the next byte in position for the shift down.
You could read the value at dword as a byte array (or struct) of 4 bytes to do this as well, and let the compiler do the work for you.
The bytes aren't always in the order that you expect, though Neil's solution is correct. You probably want to look at "endianness" if you're having that problem.