Network Data Packing - C++

I was searching for a way to efficiently pack my data in order to send it over a network.
I found a topic which suggested a way: http://www.sdltutorials.com/cpp-tip-packing-data
I've also seen it used in commercial applications, so I decided to give it a try, but the results weren't what I expected.
First of all, the whole point of "packing" your data is to save bytes, but I don't think the algorithm mentioned above saves any bytes at all.
Without packing, the server would send 4 bytes (Data); after packing, the server sends a character array that is also 4 bytes long, so it seems pointless.
Aside from that, why would someone add 0xFF? It doesn't do anything at all.
The code snippet found in the tutorial mentioned above:
unsigned char Buffer[3];
unsigned int Data = 1024;
unsigned int UpackedData;
Buffer[0] = (Data >> 24) & 0xFF;
Buffer[1] = (Data >> 12) & 0xFF;
Buffer[2] = (Data >> 8) & 0xFF;
Buffer[3] = (Data ) & 0xFF;
UnpackedData = (Buffer[0] << 24) | (Buffer[1] << 12) | (Buffer[2] << 8) | (Buffer[3] & 0xFF);
Result:
0040 // the 4-byte character array
1024 // the 4-byte integer

The & 0xFF is to make sure it's between 0 and 255.
I wouldn't place too much credence in that posting; aside from your objection, the code contains an obvious mistake. Buffer is only 3 elements long, but the code stores data in 4 elements.
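For reference, a corrected version of that snippet might look like the sketch below (note the 4-element buffer, and that the shift of 12 in the original looks like a typo for 16):
#include <cstdint>

int main()
{
    unsigned char Buffer[4];              // 4 bytes are needed for a 32-bit value
    uint32_t Data = 1024;

    Buffer[0] = (Data >> 24) & 0xFF;      // most significant byte first
    Buffer[1] = (Data >> 16) & 0xFF;
    Buffer[2] = (Data >>  8) & 0xFF;
    Buffer[3] =  Data        & 0xFF;

    uint32_t UnpackedData = (uint32_t(Buffer[0]) << 24)
                          | (uint32_t(Buffer[1]) << 16)
                          | (uint32_t(Buffer[2]) <<  8)
                          |  uint32_t(Buffer[3]);

    return UnpackedData == Data ? 0 : 1;  // the round trip restores 1024
}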

For integers, a simple method I have often found useful is BER encoding. Basically, for an unsigned integer you write 7 bits per byte, using the 8th bit to mark whether another byte is needed:
void berPack(unsigned x, std::vector<unsigned char>& out)
{
    while (x >= 128)
    {
        out.push_back(128 + (x & 127)); // write 7 bits, 8th = 1 -> more bytes follow
        x >>= 7;
    }
    out.push_back(x); // write the last bits (8th = 0 -> this ends the number)
}
For a signed integer you encode the sign in the least significant bit and then use the same encoding as before:
void berPack(int x, std::vector<unsigned char>& out)
{
    if (x < 0) berPack((unsigned(-x) << 1) + 1, out);
    else       berPack(unsigned(x) << 1, out);
}
With this approach small numbers will use less space. Another advantage is that this encoding is already architecture-neutral (i.e. the data will be understood correctly regardless of the endianness of the system), and the same format can handle different integer sizes, so you can send data from a 32-bit system to a 64-bit system without problems (assuming, of course, that the values themselves do not overflow).
The price to pay is that, for example, unsigned values from 268435456 (1 << 28) to 4294967295 ((1 << 32) - 1) will require 5 bytes instead of the 4 bytes of standard fixed-size packing.
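For completeness, a matching decoder for the unsigned case could look like the sketch below (berUnpack is a made-up name; it assumes the same convention as above: low 7 bits first, 8th bit set means another byte follows, and pos is advanced past the decoded value):
#include <cstddef>
#include <vector>

unsigned berUnpack(const std::vector<unsigned char>& in, std::size_t& pos)
{
    unsigned x = 0;
    int shift = 0;
    while (in[pos] & 128)                      // continuation bit set -> more bytes follow
    {
        x |= unsigned(in[pos] & 127) << shift; // low 7 bits of this byte
        shift += 7;
        ++pos;
    }
    x |= unsigned(in[pos]) << shift;           // last byte (continuation bit clear)
    ++pos;
    return x;
}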

Another reason for packing is to enforce a consistent structure, so that data written by one machine can be reliably read by another.
It's not "adding"; it's performing a bitwise AND in order to keep only the LSB (least-significant byte). But it doesn't look necessary here.

Reversing Bitwise Operator Left Shift

Update!!!
I'm trying to reverse a bitwise left-shift. I have this code:
int n1;
int n2;
int n3;
n1 = n1 + 128;
n2 = n2 + 128;
n3 = n3 + 128;
int size = n1 + ((n3 << 8) + n2 << 8); // size = 9999
How could I get n1, n2, n3 back, given the result 9999?
Offsetting by 128 is sometimes used in Java to work around byte being signed. Adding 128 ensures that the result (which will be an int) is non-negative again. As long as it is symmetric with the encoder (the corresponding writeMessage), it is a valid way to encode unsigned bytes.
After that, the bytes are reassembled into something bigger. That would not work right with signed bytes. By the way I think a pair of parentheses is missing, and the expression should be n1 + ((n3 << 8) + n2 << 8), or clearer: n1 + (n2 << 8) + (n3 << 16)
An alternative is using byteValue & 0xFF to get rid of the leading ones added by sign-extension. That has the benefit that the raw bytes are used "as themselves", without a strange offset.
The inverse is extracting the bytes and offsetting them by 128 again (adding or subtracting 128 would actually do the same thing, but for symmetry it makes more sense to subtract here), for example:
byte n1 = (byte)((size & 0xFF) - 128);
byte n2 = (byte)((size >> 8 & 0xFF) - 128);
byte n3 = (byte)((size >> 16 & 0xFF) - 128);
The & 0xFF operation is not strictly necessary (the final cast to byte gets rid of the high bits), but makes it clear that bytes are being extracted.
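To make the round trip concrete: 9999 is 0x00270F, so size & 0xFF gives 0x0F (15), (size >> 8) & 0xFF gives 0x27 (39), and (size >> 16) & 0xFF gives 0x00 (0). Subtracting 128 yields n1 = -113, n2 = -89, n3 = -128; re-applying the +128 offsets and the shifts gives 15 + (39 << 8) + (0 << 16) = 9999 again.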
It's called bit shifting. It's shifting the bits (at the binary level) to the left by 8.
128 << 8 = 32768
Say you have 16, 10000 in binary and shift it left by 2; 16 << 2 = 64 (1000000 binary)
This is common for encryption, data compression, or even dealing with something as simple as color values (when you want separate RGB components of a color represented as a single integer)
In your example, it sounds like multiple values were combined into a single integer to conserve space when sending a packet. Bit shifting is a way of extracting those individual values from a larger number. But that's just a guess.
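As a concrete illustration of the colour-value case mentioned above, here is a minimal sketch (packRGB/unpackRGB are made-up names; the 0xRRGGBB layout is just one common convention):
#include <cstdint>

// Pack three 8-bit RGB components into one integer, and pull them back out.
uint32_t packRGB(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
}

void unpackRGB(uint32_t rgb, uint8_t& r, uint8_t& g, uint8_t& b)
{
    r = (rgb >> 16) & 0xFF;
    g = (rgb >>  8) & 0xFF;
    b =  rgb        & 0xFF;
}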

16-bit to 10-bit conversion code explanation

I came across the following code to convert 16-bit numbers to 10-bit numbers and store it inside an integer. Could anyone maybe explain to me what exactly is happening with the AND 0x03?
// Convert the data to 10-bits
int xAccl = (((data[1] & 0x03) * 256) + data[0]);
if (xAccl > 511) {
    xAccl -= 1024;
}
Link to where I got the code: https://www.instructables.com/id/Measurement-of-Acceleration-Using-ADXL345-and-Ardu/
The bitwise operator & will make a mask, so in this case, it voids the 6 highest bits of the integer.
Basically, this code does a modulo % 1024 (for unsigned values).
data[1] takes the 2nd byte; & 0x03 masks that byte with binary 11 - so: takes 2 bits; * 256 is the same as << 8 - i.e. pushes those 2 bits into the 9th and 10th positions; adding data[0] to that combines these two bytes (personally I'd have used |, not +).
So xAccl is now the bottom 10 bits, with data[0] as the low byte and the low 2 bits of data[1] as the high bits.
The > 511 check is essentially a sign check; it says "if the 10th bit is set, treat the entire thing as a negative integer, as though we'd used 10-bit two's complement rules".
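Putting it together, the conversion can be wrapped in a small helper; this is just a sketch (to10Bit is a made-up name, with lo and hi standing for data[0] and data[1]):
#include <cstdint>

// Combine the low byte and the low 2 bits of the high byte into a signed
// 10-bit reading, mirroring the snippet above.
int16_t to10Bit(uint8_t lo, uint8_t hi)
{
    int value = ((hi & 0x03) << 8) | lo;  // 0 .. 1023
    if (value > 511)                      // 10th bit set -> negative in two's complement
        value -= 1024;
    return int16_t(value);
}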

Split parts of a uint32_t hex value into smaller parts in C++

I have a uint32_t as follows:
uint32_t midiData=0x9FCC00;
I need to separate this uint32_t into smaller parts so that 9 becomes its own entity, F becomes its own entity, and CC becomes its own entity. If you're wondering what I am doing, I am trying to break up the parts of a MIDI message so that they are easier to manage in my program.
I found this solution, but the problem is I don't know how to apply it to the CC section, and I am not sure that this method works in C++.
Here is what I have so far:
uint32_t midiData=0x9FCC00;
uint32_t status = 0x0FFFFF & midiData; // Retrieve 9
uint32_t channel = (0xF0FFFF & midiData)>>4; //Retrieve F
uint32_t note = (0xFF00FF & midiData) >> 8; //Retrieve CC
Is this correct for C++? Reason I ask is cause I have never used C++ before and its syntax of using the > and < has always confused me (thus why I tend to avoid it).
You can use bit shift operator >> and bit masking operator & in C++ as well.
There are, however, some issues on how you use it:
Operator v1 & v2 gives a number built from those bits that are set in both v1 and v2, so that, for example, 0x12 & 0xF0 gives 0x10, not 0x02. Further, the bit shift operator takes a number of bits, and a single digit of a hex number (usually called a nibble) consists of 4 bits (0x0..0xF requires 4 bits). So, if you have 0x12 and want to get 0x01, you have to write 0x12 >> 4.
Hence, your shifts need to be adapted, too:
#define BITS_OF_A_NIBBLE 4
unsigned char status = (midiData & 0x00F00000) >> (5*BITS_OF_A_NIBBLE);
unsigned char channel = (midiData & 0x000F0000) >> (4*BITS_OF_A_NIBBLE);
unsigned char note = (midiData & 0x0000FF00) >> (2*BITS_OF_A_NIBBLE);
unsigned char theRest = (midiData & 0x000000FF);
You have it backwards, in a way.
In boolean logic (the & is a bitwise-AND), ANDing something with 0 will exclude it. Knowing that F in hex is 1111 in binary, a line like 0x9FCC00 & 0x0FFFFF will give you all the hex digits EXCEPT the 9, the opposite of what you want.
So, for status:
uint32_t status = 0xF00000 & midiData; // Retrieve 9
Actually, this will give you 0x900000. If you want 0x9 (also 9 in decimal), you need to bitshift the result over.
Now, the right bitshift operator (say, X >> 4) means move X 4 bits to the right; dividing by 16. That is 4 bits, not 4 hex digits. 1 hex digit == 4 bits, so to get 9 from 0x900000, you need 0x900000 >> 20.
So, to put them together, to get a status of 9:
uint32_t status = (0xF00000 & midiData) >> 20;
A similar process will get you the remaining values you want.
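For example, a sketch of the remaining fields using the same mask-then-shift pattern (with the variable names from the question):
uint32_t channel = (0x0F0000 & midiData) >> 16; // Retrieve F
uint32_t note    = (0x00FF00 & midiData) >> 8;  // Retrieve CC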
In general I'd recommend shift first, then mask - it's less error prone:
uint8_t cmd = (midiData >> 16) & 0xff;
uint8_t note = (midiData >> 8) & 0x7f; // MSB can't be set
uint8_t velocity = (midiData >> 0) & 0x7f; // ditto
and then split the cmd variable:
uint8_t status = (cmd & 0xf0); // range 0x00 .. 0xf0
uint8_t channel = (cmd & 0x0f); // range 0 .. 15
I personally wouldn't bother mapping the status value back into the range 0 .. 15 - it's commonly understood that e.g. 0x90 is a "note on", and not the plain value 9.

Wave file header endianness

I am writing code in C++ to read in a wave file. I am following the wave file specification I found here.
In the following code I am reading in the chunk size, which is stored in bytes 4, 5, 6 and 7.
According to the specification, this int is stored in little endian in these 4 bytes.
So if these 4 bytes held the unsigned value 2, I would think they would be as follows:
4 5 6 7
00000010 00000000 00000000 00000000
So if I am trying to read these 4 bytes as an int on Windows, I don't need to do anything, correct? Since Windows is little endian. So this is what I did:
unsigned int chunk_size = (hbytes[4] << 24) + (hbytes[5] << 16) + (hbytes[6] << 8) + hbytes[7];
but that didn't work; it gave me an incorrect value. When I swapped the order of the bytes, it did work:
unsigned int chunk_size = (hbytes[7] << 24) + (hbytes[6] << 16) + (hbytes[5] << 8) + hbytes[4];
Is this information I have about wave files correct? Is this int stored as little endian? Or are my assumptions about endianness incorrect?
You got everything right except the procedure to convert a little-endian stream.
Your diagram is correct: if the 4-byte field holds a 2, then the first byte (hbytes[4]) is 2 and the remaining bytes are 0. Why would you then want to left shift that byte by 24? The byte you want to left shift by 24 is the high-order byte, hbytes[7].
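If you want to keep the explicit byte assembly but make the intent clearer, a small helper along the lines of the sketch below works regardless of the host's byte order (readLE32 is a made-up name; it assumes hbytes holds unsigned byte values):
#include <cstddef>
#include <cstdint>

// Assemble a 32-bit little-endian value starting at hbytes[offset].
uint32_t readLE32(const unsigned char* hbytes, std::size_t offset)
{
    return  uint32_t(hbytes[offset])
         | (uint32_t(hbytes[offset + 1]) << 8)
         | (uint32_t(hbytes[offset + 2]) << 16)
         | (uint32_t(hbytes[offset + 3]) << 24);
}

// usage: unsigned int chunk_size = readLE32(hbytes, 4);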

How to optimize C++/C code for a large number of integers

I have written the below mentioned code. The code checks the first bit of every byte. If the first bit of a byte is equal to 0, then it concatenates this value with the previous byte and stores it in a different variable var1. Here pos points to the bytes of an integer. An integer in my implementation is uint64_t and can occupy up to 8 bytes.
uint64_t func(char* pos)
{
    uint64_t var1 = 0; int i = 0;
    while ((pos[i] >> 7) == 0)
    {
        var1 = (var1 << 7) | (pos[i]);
        i++;
    }
    return var1;
}
I am repeatedly calling func(), a trillion times for trillions of integers, so it runs slowly. Is there a way to optimize this code?
EDIT: Thanks to Joe Z; it's indeed a form of uleb128 unpacking.
I have only tested this minimally; I am happy to fix glitches with it. With modern processors, you want to bias your code heavily toward easily predicted branches. And, if you can safely read the next 10 bytes of input, there's nothing to be saved by guarding their reads by conditional branches. That leads me to the following code:
// fast uleb128 decode
// assumes you can read all 10 bytes at *data safely.
// assumes standard uleb128 format, with LSB first, and
// ... bit 7 indicating "more data in next byte"
uint64_t unpack( const uint8_t *const data )
{
uint64_t value = ((data[0] & 0x7F ) << 0)
| ((data[1] & 0x7F ) << 7)
| ((data[2] & 0x7F ) << 14)
| ((data[3] & 0x7F ) << 21)
| ((data[4] & 0x7Full) << 28)
| ((data[5] & 0x7Full) << 35)
| ((data[6] & 0x7Full) << 42)
| ((data[7] & 0x7Full) << 49)
| ((data[8] & 0x7Full) << 56)
| ((data[9] & 0x7Full) << 63);
if ((data[0] & 0x80) == 0) value &= 0x000000000000007Full; else
if ((data[1] & 0x80) == 0) value &= 0x0000000000003FFFull; else
if ((data[2] & 0x80) == 0) value &= 0x00000000001FFFFFull; else
if ((data[3] & 0x80) == 0) value &= 0x000000000FFFFFFFull; else
if ((data[4] & 0x80) == 0) value &= 0x00000007FFFFFFFFull; else
if ((data[5] & 0x80) == 0) value &= 0x000003FFFFFFFFFFull; else
if ((data[6] & 0x80) == 0) value &= 0x0001FFFFFFFFFFFFull; else
if ((data[7] & 0x80) == 0) value &= 0x00FFFFFFFFFFFFFFull; else
if ((data[8] & 0x80) == 0) value &= 0x7FFFFFFFFFFFFFFFull;
return value;
}
The basic idea is that small values are common (and so most of the if-statements won't be reached), but assembling the 64-bit value that needs to be masked is something that can be efficiently pipelined. With a good branch predictor, I think the above code should work pretty well. You might also try removing the else keywords (without changing anything else) to see if that makes a difference. Branch predictors are subtle beasts, and the exact character of your data also matters. If nothing else, you should be able to see that the else keywords are optional from a logic standpoint, and are there only to guide the compiler's code generation and provide an avenue for optimizing the hardware's branch predictor behavior.
Ultimately, whether or not this approach is effective depends on the distribution of your dataset. If you try out this function, I would be interested to know how it turns out. This particular function focuses on standard uleb128, where the value gets sent LSB first, and bit 7 == 1 means that the data continues.
There are SIMD approaches, but none of them lend themselves readily to 7-bit data.
Also, if you can mark this inline in a header, then that may also help. It all depends on how many places this gets called from, and whether those places are in a different source file. In general, though, inlining when possible is highly recommended.
Your code is problematic
uint64_t func(const unsigned char* pos)
{
    uint64_t var1 = 0; int i = 0;
    while ((pos[i] >> 7) == 0)
    {
        var1 = (var1 << 7) | (pos[i]);
        i++;
    }
    return var1;
}
First a minor thing: i should be unsigned.
Second: You don't assert that you don't read beyond the boundary of pos. E.g. if all values of your pos array are 0, then you will reach pos[size] where size is the size of the array, hence you invoke undefined behaviour. You should pass the size of your array to the function and check that i is smaller than this size.
Third: If pos[i] has the most significant bit equal to zero for i = 0, ..., k with k > 10, then previous work gets discarded (as you push the old values out of var1).
The third point actually helps us:
uint64_t func(const unsigned char* pos, size_t size)
{
    size_t i(0);
    while ( i < size && (pos[i] >> 7) == 0 )
    {
        ++i;
    }
    // At this point, i is either equal to size or
    // i is the index of the first pos value you don't want to use.
    // Therefore we want to use the values
    //     pos[i-10], pos[i-9], ..., pos[i-1]
    // if i is less than 10, we obviously need to ignore some of the values
    const size_t start = (i >= 10) ? (i - 10) : 0;
    uint64_t var1 = 0;
    for ( size_t j(start); j < i; ++j )
    {
        var1 <<= 7;
        var1 += pos[j];
    }
    return var1;
}
In conclusion: We separated logic and got rid of all discarded entries. The speed-up depends on the actual data you have. If lots of entries are discarded then you save a lot of writes to var1 with this approach.
Another thing: Mostly, if one function is called massively, the best optimization you can do is to call it less. Perhaps you can come up with an additional condition that makes the call to this function unnecessary.
Keep in mind that if you actually use 10 values, the first value ends up being truncated.
64 bits means that only 9 values with their full 7 bits of information can be represented, leaving exactly one bit for the tenth. You might want to switch to uint128_t.
A small optimization would be:
while ((pos[i] & 0x80) == 0)
Bitwise and is generally faster than a shift. This of course depends on the platform, and it's also possible that the compiler will do this optimization itself.
Can you change the encoding?
Google came across the same problem, and Jeff Dean describes a really cool solution on slide 55 of his presentation:
http://research.google.com/people/jeff/WSDM09-keynote.pdf
http://videolectures.net/wsdm09_dean_cblirs/
The basic idea is that reading the first bit of several bytes is poorly supported on modern architectures. Instead, let's take 8 of these bits, and pack them as a single byte preceding the data. We then use the prefix byte to index into a 256-item lookup table, which holds masks describing how to extract numbers from the rest of the data.
I believe it's how protocol buffers are currently encoded.
Can you change your encoding? As you've discovered, using a bit on each byte to indicate if there's another byte following really sucks for processing efficiency.
A better way to do it is to model UTF-8, which encodes the length of the full int into the first byte:
0xxxxxxx // one byte with 7 bits of data
110xxxxx 10xxxxxx // two bytes with 11 bits of data
1110xxxx 10xxxxxx 10xxxxxx // three bytes with 16 bits of data
11110xxx 10xxxxxx 10xxxxxx 10xxxxxx // four bytes with 21 bits of data
// etc.
But UTF-8 has special properties to make it easier to distinguish from ASCII. This bloats the data and you don't care about ASCII, so you'd modify it to look like this:
0xxxxxxx // one byte with 7 bits of data
10xxxxxx xxxxxxxx // two bytes with 14 bits of data.
110xxxxx xxxxxxxx xxxxxxxx // three bytes with 21 bits of data
1110xxxx xxxxxxxx xxxxxxxx xxxxxxxx // four bytes with 28 bits of data
// etc.
This has the same compression level as your method (up to 64 bits = 9 bytes), but is significantly easier for a CPU to process.
From this you can build a lookup table for the first byte which gives you a mask and length:
// byte_counts[255] contains the number of additional
// bytes if the first byte has a value of 255.
uint8_t const byte_counts[256]; // a global constant.
// byte_masks[255] contains a mask for the useful bits in
// the first byte, if the first byte has a value of 255.
uint8_t const byte_masks[256]; // a global constant.
And then to decode:
// the resulting value.
uint64_t v = 0;
// mask off the data bits in the first byte.
v = *data & byte_masks[*data];
// read in the rest.
switch(byte_counts[*data])
{
case 3: v = v << 8 | *++data;
case 2: v = v << 8 | *++data;
case 1: v = v << 8 | *++data;
case 0: return v;
default:
// If you're on VC++, this'll make it take one less branch.
// Better make sure you've got all the valid inputs covered, though!
__assume(0);
}
No matter the size of the integer, this hits only one branch point: the switch, which will likely be put into a jump table. You can potentially optimize it even further for ILP by not letting each case fall through.
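The answer declares byte_counts and byte_masks but doesn't show how to fill them. A minimal sketch, assuming the tables are plain (non-const) globals filled once at startup and covering only the 1- to 4-byte forms the switch above handles, might look like this (buildTables is a made-up name):
#include <cstdint>

uint8_t byte_counts[256];
uint8_t byte_masks[256];

void buildTables()
{
    for (int b = 0; b < 256; ++b)
    {
        if      ((b & 0x80) == 0x00) { byte_counts[b] = 0; byte_masks[b] = 0x7F; } // 0xxxxxxx
        else if ((b & 0xC0) == 0x80) { byte_counts[b] = 1; byte_masks[b] = 0x3F; } // 10xxxxxx
        else if ((b & 0xE0) == 0xC0) { byte_counts[b] = 2; byte_masks[b] = 0x1F; } // 110xxxxx
        else if ((b & 0xF0) == 0xE0) { byte_counts[b] = 3; byte_masks[b] = 0x0F; } // 1110xxxx
        else                         { byte_counts[b] = 4; byte_masks[b] = 0x07; } // 11110xxx: would need extra switch cases
    }
}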
First, rather than shifting, you can do a bitwise test on the relevant bit. Second, you can use a pointer, rather than indexing (but the compiler should do this optimization itself). Thus:
uint64_t
readUnsignedVarLength( unsigned char const* pos )
{
    uint64_t results = 0;
    while ( (*pos & 0x80) == 0 ) {
        results = (results << 7) | *pos;
        ++ pos;
    }
    return results;
}
At least, this corresponds to what your code does. For variable length encoding of unsigned integers, it is incorrect, since 1) variable length encodings are little endian, and your code is big endian, and 2) your code doesn't OR in the high order byte. Finally, the Wiki page suggests that you've got the test inverted. (I know this format mainly from BER encoding and Google protocol buffers, both of which set bit 7 to indicate that another byte will follow.)
The routine I use is:
uint64_t
readUnsignedVarLen( unsigned char const* source )
{
    int shift = 0;
    uint64_t results = 0;
    uint8_t tmp = *source ++;
    while ( ( tmp & 0x80 ) != 0 ) {
        results |= uint64_t( tmp & 0x7F ) << shift;
        shift += 7;
        tmp = *source ++;
    }
    return results | ( uint64_t( tmp ) << shift );
}
For the rest, this wasn't written with performance in mind, but I doubt that you could do significantly better. An alternative solution would be to pick up all of the bytes first, then process them in reverse order:
uint64_t
readUnsignedVarLen( unsigned char const* source )
{
    unsigned char buffer[10];
    unsigned char* p = std::begin( buffer );
    while ( p != std::end( buffer ) && (*source & 0x80) != 0 ) {
        *p = *source & 0x7F;
        ++ p;
        ++ source;
    }
    assert( p != std::end( buffer ) );
    *p = *source;
    ++ p;
    uint64_t results = 0;
    while ( p != std::begin( buffer ) ) {
        -- p;
        results = (results << 7) + *p;
    }
    return results;
}
The necessity of checking for buffer overrun will likely make this slightly slower, but on some architectures, shifting by a constant is significantly faster than shifting by a variable, so this could be faster on them. Globally, however, don't expect miracles. The motivation for using variable length integers is to reduce data size, at a cost in runtime for decoding and encoding.