Efficiency of extracting a bit from a byte - C++

I'm extracting the 8th bit from a byte in C. Here is my example.
register unsigned char byte;
int pos = 7;
int x = (byte >> pos) & 1;    // Method I
int y = (byte & 0x80) >> pos; // Method II
Both techniques will result in the same output, but is one of the methods more efficient than the other?

Both will be the same. Both AND and SHR are single-cycle instructions on Intel CPUs.

Bitwise operations are basically among the fastest things a CPU can do. I'd imagine that any difference would be incredibly minor, such that it doesn't really matter.

If you know the bit you are extracting at compile-time, then either method should have approximately the same speed:
unsigned char val;
const int pos = 4;
...
int x = (val >> pos) & 1;
int y = (val & 0x10) >> pos;
However if you are calculating the position at runtime and not as a constant, doing the AND last should be faster:
unsigned char val;
int pos;
...
/* requires only a shift and AND */
int x = (val >> pos) & 1;
/* requires two shifts and AND */
int y = (val & (1 << pos)) >> pos;
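For completeness, here's a minimal sketch (my own test harness, not part of the original question; the helper names are made up for illustration) that exercises both forms at a runtime position and checks they agree:
#include <cassert>

// Both forms extract bit 'pos' (0-based) of 'byte'.
int extract_shift_then_and(unsigned char byte, int pos) {
    return (byte >> pos) & 1;            // one shift, one AND
}

int extract_and_then_shift(unsigned char byte, int pos) {
    return (byte & (1 << pos)) >> pos;   // builds the mask at runtime: two shifts, one AND
}

int main() {
    unsigned char byte = 0xA5;           // 10100101
    for (int pos = 0; pos < 8; ++pos)
        assert(extract_shift_then_and(byte, pos) == extract_and_then_shift(byte, pos));
}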

Related

Getting shorts from an integer

I'm supposed to pack some shorts into a 32 bit integer. It's a homework assignment that will lead into a larger idea of compression/decompression.
I don't have any problems understanding how to pack the shorts into an integer, but I am struggling to understand how to get each short value stored within the integer.
So, for example, I store the values 2, 4, 6, 8 into the integer. That means I want to print them in the same order I input them.
How do you go about getting these values out from the integer?
EDIT: Short in this context refers to an unsigned two-byte integer.
As Craig corrected me, short is a 16-bit variable, therefore only 2 shorts can fit in one int, so here's my answer for retrieving the shorts:
2|4
0000000000000010|0000000000000100
00000000000000100000000000000100
131076
Denoting first as the left-most variable and last as the right-most variable, getting the short values back would look like this:
int x = 131076; //00000000000000100000000000000100 in binary
short last = x & 65535; // 65535= 1111111111111111
short first= (x >> 16) & 65535;
and here's my previous answer fixed for compressing chars (8 bit variables):
Let's assume the first char is the one that starts at the MSB and the last one is the one that ends at the LSB:
2|4|6|8
00000010|00000100|00000110|00001000
00000010000001000000011000001000
33818120
So, in this example the first char is 2 (0010), followed by 4 (0100), 6 (0110) and last: 8 (1000).
So to get the packed numbers back, one could use this code:
int x = 33818120; //00000010000001000000011000001000 in binary
char last = x & 255; // 255 = 11111111
char third = (x >> 8) & 255;
char second = (x >> 16) & 255;
char first = (x >> 24) & 255;
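For reference, a minimal sketch of the packing side (my own illustration with a hypothetical pack4 helper), assuming the first value goes into the most significant byte as above:
#include <cstdint>
#include <cassert>

// Pack four 8-bit values into one 32-bit integer, first value in the most significant byte.
uint32_t pack4(uint8_t first, uint8_t second, uint8_t third, uint8_t last) {
    return ((uint32_t)first << 24) | ((uint32_t)second << 16) |
           ((uint32_t)third << 8)  | (uint32_t)last;
}

int main() {
    assert(pack4(2, 4, 6, 8) == 33818120u); // matches the worked example above
}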
This would be more interesting with char [as you'd get 4]. But, you can only pack two shorts into a single int. So, I'm a bit mystified by this as to what the instructor was thinking.
Consider a union:
union combo {
int u_int;
short u_short[2];
char u_char[4];
};
int
getint1(short s1,short s2)
{
union combo combo;
combo.u_short[0] = s1;
combo.u_short[1] = s2;
return combo.u_int;
}
short
getshort1(int val,int which)
{
union combo combo;
combo.u_int = val;
return combo.u_short[which];
}
Now consider encoding/decoding with shifts:
unsigned int
getint2(unsigned short s1,unsigned short s2)
{
unsigned int val;
val = s1;
val <<= 16;
val |= s2;
return val;
}
unsigned short
getshort2(unsigned int val,int which)
{
val >>= (which * 16);
return val & 0xFFFF;
}
The unsigned code above will probably do what you want.
But the next version uses signed values and probably won't work as well, because you may have mixed signs between s1/s2, and the sign extension during encoding/decoding may present problems:
int
getint3(short s1,short s2)
{
int val;
val = s1;
val <<= 16;
val |= s2;
return val;
}
short
getshort3(int val,int which)
{
val >>= (which * 16);
return val & 0xFFFF;
}
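A quick usage sketch of the unsigned shift version (my own example values; it assumes getint2/getshort2 from above are in scope), remembering that which = 1 returns the half packed from s1:
#include <cassert>

int main() {
    unsigned int packed = getint2(2, 4);   // s1 = 2 goes into the high 16 bits, s2 = 4 into the low 16
    assert(packed == 131076u);             // 0x00020004, the same value as the shorts example above
    assert(getshort2(packed, 0) == 4);     // which = 0 -> low half (s2)
    assert(getshort2(packed, 1) == 2);     // which = 1 -> high half (s1)
}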

C++ write a number on two bytes

I am new to low-level C++, and I find it a bit hard to understand how to manipulate bits. I am trying to do the following for use in a compression algorithm I am making:
unsigned int num = ...; // we want to store this number
unsigned int num_size = 3; // the maximum size of the number in bits; can be anything from 1 to 32
unsigned int pos = 7; // the starting bit position within the first byte; can be anything from 1 to 8
char a;
char b;
If num_size is 3 and pos is 7, for example, we must store num on the 7th and 8th bits of a and on the 1st bit of b.
How about just?
a = num << (pos-1);
b = ((num << (pos-1)) & 0xFF00) >> 8;
To read num back just
num = ((unsigned int)a + ((unsigned int)b << 8)) >> (pos - 1);
Note, this doesn't do any sanity checks, such as whether all the relevant bits fit in a and b, you'll have to do that yourself.
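To make the idea concrete, here is a small self-contained sketch (my own test values, using unsigned char so sign extension can't interfere) of the round trip with num = 5, num_size = 3, pos = 7:
#include <cassert>

int main() {
    unsigned int num = 5;   // 0b101, fits in num_size = 3 bits
    unsigned int pos = 7;   // start on the 7th bit of a (1-based)

    unsigned char a = num << (pos - 1);                    // low part lands in bits 7..8 of a
    unsigned char b = ((num << (pos - 1)) & 0xFF00) >> 8;  // the overflow spills into the 1st bit of b

    unsigned int back = ((unsigned int)a + ((unsigned int)b << 8)) >> (pos - 1);
    assert(back == num);
}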
For this specific test case, the highest number that fits into two unsigned chars is actually 65535.
#include <iostream>
unsigned char high(int input)
{
return (input >> 8) & 0xFF;
}
unsigned char low(int input)
{
return input & 0xFF;
}
int value(unsigned char low, unsigned char high)
{
return low | (high << 8);
}
int main()
{
int num = 65535;
unsigned char l = low(num);
unsigned char h = high(num);
int val = value(l, h);
std::cout << "l: " << (int)l << " h: " << (int)h << " val: " << val << "\n";
}

Extract n most significant non-zero bits from int in C++ without loops

I want to extract the n most significant bits from an integer in C++ and convert those n bits to an integer.
For example
int a=1200;
// its binary representation within 32 bit word-size is
// 00000000000000000000010010110000
Now I want to extract the 4 most significant non-zero bits from that representation, i.e. 1001
00000000000000000000010010110000
                     ^^^^
and convert them again to an integer (binary 1001 = 9 in decimal).
How is this possible with a simple C++ function, without loops?
Some processors have an instruction to count the leading binary zeros of an integer, and some compilers have intrinsics that allow you to use that instruction. For example, using GCC:
uint32_t significant_bits(uint32_t value, unsigned bits) {
unsigned leading_zeros = __builtin_clz(value);
unsigned highest_bit = 32 - leading_zeros;
unsigned lowest_bit = highest_bit - bits;
return value >> lowest_bit;
}
For simplicity, I left out checks that the requested number of bits are available. For Microsoft's compiler, the intrinsic is called __lzcnt.
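A quick sanity check of the intrinsic-based version (my own example, reusing the question's value; it assumes significant_bits from above is in scope):
#include <cstdint>
#include <cassert>

int main() {
    assert(significant_bits(1200, 4) == 9); // top 4 significant bits of 0b10010110000 are 1001
}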
If your compiler doesn't provide that intrinsic, and your processor doesn't have a suitable instruction, then one way to count the zeros quickly is with a binary search:
unsigned leading_zeros(uint32_t value) {
unsigned count = 0;
if ((value & 0xffff0000u) == 0) {
count += 16;
value <<= 16;
}
if ((value & 0xff000000u) == 0) {
count += 8;
value <<= 8;
}
if ((value & 0xf0000000u) == 0) {
count += 4;
value <<= 4;
}
if ((value & 0xc0000000u) == 0) {
count += 2;
value <<= 2;
}
if ((value & 0x80000000u) == 0) {
count += 1;
}
return count;
}
It's not fast, but (int)(log(x)/log(2) + .5) + 1 will tell you the position of the most significant non-zero bit. Finishing the algorithm from there is fairly straight-forward.
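A minimal sketch of finishing that idea (my own completion with a hypothetical name, using std::log2; beware of floating-point rounding for values near exact powers of two):
#include <cmath>
#include <cassert>

// Extract the n most significant non-zero bits of x (x must be > 0).
unsigned top_bits_via_log(unsigned x, unsigned n) {
    unsigned msb_pos = (unsigned)std::log2((double)x) + 1; // 1-based position of the most significant bit
    return msb_pos > n ? x >> (msb_pos - n) : x;
}

int main() {
    assert(top_bits_via_log(1200, 4) == 9);
}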
This seems to work (done in C# with UInt32 then ported so apologies to Bjarne):
unsigned int input = 1200;
unsigned int most_significant_bits_to_get = 4;
// shift + OR smears the most significant bit over all the lower bits
unsigned int m1 = input | input >> 8 | input >> 16 | input >> 24;
unsigned int m2 = m1 | m1 >> 2 | m1 >> 4 | m1 >> 6;
unsigned int m3 = m2 | m2 >> 1;
unsigned int nbitsmask = m3 ^ m3 >> most_significant_bits_to_get;
unsigned int v = nbitsmask;
unsigned int c = 32; // c will be the number of zero bits on the right
v &= -((int)v);
if (v>0) c--;
if ((v & 0x0000FFFF) >0) c -= 16;
if ((v & 0x00FF00FF) >0) c -= 8;
if ((v & 0x0F0F0F0F) >0 ) c -= 4;
if ((v & 0x33333333) >0) c -= 2;
if ((v & 0x55555555) >0) c -= 1;
unsigned int result = (input & nbitsmask) >> c;
I assumed you meant using only integer math.
I used some code from @OliCharlesworth's link; you could remove the conditionals too by using the LUT-for-trailing-zeroes code there.

Does anyone have an easy solution to parsing Exp-Golomb codes using C++?

I'm trying to decode the SDP sprop-parameter-sets values for an H.264 video stream, and I've found that accessing some of the values involves parsing Exp-Golomb encoded data. My method has the base64-decoded sprop-parameter-sets data in a byte array, which I am now bit-walking, but I have come up to the first piece of Exp-Golomb encoded data and am looking for a suitable code extract to parse these values.
Exp-Golomb codes of what order?
If you need to parse an H.264 bit stream (I mean the transport layer), you can write simple functions to access specified bits in the endless bit stream. Bits are indexed from left to right.
inline u_dword get_bit(const u_byte * const base, u_dword offset)
{
return ((*(base + (offset >> 0x3))) >> (0x7 - (offset & 0x7))) & 0x1;
}
This function implements decoding of exp-Golomb codes of zero range (used in H.264).
u_dword DecodeUGolomb(const u_byte * const base, u_dword * const offset)
{
u_dword zeros = 0;
// calculate zero bits. Will be optimized.
while (0 == get_bit(base, (*offset)++)) zeros++;
// insert first 1 bit
u_dword info = 1 << zeros;
for (s_dword i = zeros - 1; i >= 0; i--)
{
info |= get_bit(base, (*offset)++) << i;
}
return (info - 1);
}
u_dword means unsigned 4 bytes integer.
u_byte means unsigned 1 byte integer.
Note that the first byte of each NAL unit is a specified structure with a forbidden bit, NAL reference idc, and NAL unit type.
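A small usage sketch (my own example data), assuming typedefs like the ones below precede the get_bit/DecodeUGolomb definitions above: the byte 0xA6 (10100110) contains the Exp-Golomb codes for 0, 1 and 2 back to back:
#include <cstdint>
#include <cassert>

typedef uint32_t u_dword;
typedef uint8_t  u_byte;
typedef int32_t  s_dword;

int main() {
    const u_byte data[] = { 0xA6 };  // bit stream: 1 | 010 | 011 | 0
    u_dword offset = 0;
    assert(DecodeUGolomb(data, &offset) == 0);
    assert(DecodeUGolomb(data, &offset) == 1);
    assert(DecodeUGolomb(data, &offset) == 2);
}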
The accepted answer is not a correct implementation; it gives wrong output. Here is a correct implementation, following the pseudocode in "Sec 9.1 Parsing process for Exp-Golomb codes" of spec T-REC-H.264-201304:
int32_t getBitByPos(unsigned char *buffer, int32_t pos) {
// pos is 1-based here: pos 1 is the most significant bit of buffer[0]
return (buffer[(pos - 1) / 8] >> (7 - (pos - 1) % 8)) & 0x01;
}
uint32_t decodeGolomb(unsigned char *byteStream, uint32_t *index) {
uint32_t leadingZeroBits = -1;
uint32_t codeNum = 0;
uint32_t pos = *index;
if (byteStream == NULL || pos == 0 ) {
printf("Invalid input\n");
return 0;
}
for (int32_t b = 0; !b; leadingZeroBits++)
b = getBitByPos(byteStream, pos++);
for (int32_t b = leadingZeroBits; b > 0; b--)
codeNum = codeNum | (getBitByPos(byteStream, pos++) << (b - 1));
*index = pos;
return ((1 << leadingZeroBits) - 1 + codeNum);
}
I wrote a C++ JPEG-LS compression library that uses Golomb codes. I don't know if Exp-Golomb codes are exactly the same. The library is open source and can be found at http://charls.codeplex.com. I use a lookup table to decode Golomb codes <= 8 bits in length. Let me know if you have problems finding your way around.
Revised with a function to get N bits from the stream; this works for parsing H.264 NALs:
inline uint32_t get_bit(const uint8_t * const base, uint32_t offset)
{
return ((*(base + (offset >> 0x3))) >> (0x7 - (offset & 0x7))) & 0x1;
}
inline uint32_t get_bits(const uint8_t * const base, uint32_t * const offset, uint8_t bits)
{
uint32_t value = 0;
for (int i = 0; i < bits; i++)
{
value = (value << 1) | (get_bit(base, (*offset)++) ? 1 : 0);
}
return value;
}
// This function implement decoding of exp-Golomb codes of zero range (used in H.264).
uint32_t DecodeUGolomb(const uint8_t * const base, uint32_t * const offset)
{
uint32_t zeros = 0;
// calculate zero bits. Will be optimized.
while (0 == get_bit(base, (*offset)++)) zeros++;
// insert first 1 bit
uint32_t info = 1 << zeros;
for (int32_t i = zeros - 1; i >= 0; i--)
{
info |= get_bit(base, (*offset)++) << i;
}
return (info - 1);
}
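For instance, a usage sketch (my own example byte; it assumes get_bits from above is in scope) that pulls apart the NAL unit header mentioned earlier:
#include <cstdint>
#include <cassert>

int main() {
    const uint8_t nal[] = { 0x67 };   // a typical SPS NAL header byte
    uint32_t offset = 0;
    uint32_t forbidden_zero_bit = get_bits(nal, &offset, 1); // 0
    uint32_t nal_ref_idc        = get_bits(nal, &offset, 2); // 3
    uint32_t nal_unit_type      = get_bits(nal, &offset, 5); // 7 = SPS
    assert(forbidden_zero_bit == 0 && nal_ref_idc == 3 && nal_unit_type == 7);
}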

Big Endian and Little Endian for Files in C++

I am trying to write some processor independent code to write some files in big endian. I have a sample of code below and I can't understand why it doesn't work. All it is supposed to do is let byte store each byte of data one by one in big endian order. In my actual program I would then write the individual byte out to a file, so I get the same byte order in the file regardless of processor architecture.
#include <iostream>
int main (int argc, char * const argv[]) {
long data = 0x12345678;
long bitmask = (0xFF << (sizeof(long) - 1) * 8);
char byte = 0;
for(long i = 0; i < sizeof(long); i++) {
byte = data & bitmask;
data <<= 8;
}
return 0;
}
For some reason byte always has the value 0. This confuses me; I am looking at the debugger and see this:
data = 00010010001101000101011001111000
bitmask = 11111111000000000000000000000000
I would think that data & bitmask would give 00010010, but it just makes byte 00000000 every time! How can this be? I have written some code for the little endian order and this works great, see below:
#include <iostream>
int main (int argc, char * const argv[]) {
long data = 0x12345678;
long bitmask = 0xFF;
char byte = 0;
for(long i = 0; i < sizeof(long); i++) {
byte = data & bitmask;
data >>= 8;
}
return 0;
}
Why does the little endian one work and the big endian not? Thanks for any help :-)
You should use the standard functions ntohl() and kin for this. They operate on explicitly sized types (i.e. uint16_t and uint32_t) rather than the compiler-specific long, which is necessary for portability.
Some platforms provide 64-bit versions in <endian.h>
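As a sketch of that approach (assuming a POSIX-style <arpa/inet.h>; on Windows the same functions come from <winsock2.h>), convert to network byte order, which is big endian, before writing:
#include <arpa/inet.h>  // htonl / htons
#include <cstdint>
#include <fstream>

int main() {
    uint32_t data = 0x12345678;
    uint32_t be = htonl(data);  // network (big-endian) byte order regardless of host endianness
    std::ofstream out("out.bin", std::ios::binary);
    out.write(reinterpret_cast<const char*>(&be), sizeof be);
}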
In your example, data is 0x12345678.
Your first assignment to byte is therefore:
byte = 0x12000000;
which won't fit in a byte, so it gets truncated to zero.
try:
byte = (data & bitmask) >> ((sizeof(long) - 1) * 8);
You're getting the shifting all wrong.
#include <iostream>
int main (int argc, char * const argv[]) {
long data = 0x12345678;
int shift = (sizeof(long) - 1) * 8;
const unsigned long mask = 0xff;
char byte = 0;
for (long i = 0; i < sizeof(long); i++, shift -= 8) {
byte = (data & (mask << shift)) >> shift;
}
return 0;
}
Now, I wouldn't recommend you do things this way. I would recommend instead writing some nice conversion functions. Many compilers have these as builtins. So you can write your functions to do it the hard way, then switch them to just forward to the compiler builtin when you figure out what it is.
#include <tr1/cstdint> // To get uint16_t, uint32_t and so on.
inline void to_bigendian(uint16_t val, char bytes[2])
{
bytes[0] = (val >> 8) & 0xffu;
bytes[1] = val & 0xffu;
}
inline void to_bigendian(uint32_t val, char bytes[4])
{
bytes[0] = (val >> 24) & 0xffu;
bytes[1] = (val >> 16) & 0xffu;
bytes[2] = (val >> 8) & 0xffu;
bytes[3] = val & 0xffu;
}
This code is simpler and easier to understand than your loop. It's also faster. And lastly, it is recognized by some compilers and automatically turned into the single byte swap operation that would be required on most CPUs.
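A brief usage sketch (my own example) writing the converted bytes to a file, assuming the to_bigendian overloads above:
#include <cstdint>
#include <fstream>

int main() {
    char bytes[4];
    to_bigendian(uint32_t(0x12345678), bytes);   // bytes = { 0x12, 0x34, 0x56, 0x78 }
    std::ofstream out("data.bin", std::ios::binary);
    out.write(bytes, sizeof bytes);              // file contents are big endian on any host
}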
Because you are masking off the top byte of the integer and then not shifting it back down 24 bits ...
Change your loop to:
for(long i = 0; i < sizeof(long); i++) {
byte = (data & bitmask) >> 24;
data <<= 8;
}