Optimize generating a parent bitmask from child bitmasks - C++

Given a 64 bit child mask input, for example:
10000000 01000000 00100000 00010000 00001000 00000100 00000010 00000000
The 8 bit parent mask would be:
11111110
A single bit in the parent mask maps to 8 bits in the child mask, and a parent bit is set to 1 when any of its 8 child bits is set to 1. A simple algorithm to calculate this would be the following:
unsigned __int64 childMask = 0x8040201008040200; // The number above in hex
unsigned __int8 parentMask = 0;
for (int i = 0; i < 8; i++)
{
    const unsigned __int8 child = childMask >> (8 * i);
    parentMask |= (child > 0) << i;
}
I'm wondering if there are any optimizations left to do in the code above. The code will run on CUDA, where I'd like to avoid branches whenever possible. For an answer, code in C++/C will do fine. The for loop can be unrolled, but I'd rather leave that to the compiler, giving hints where necessary, for example with #pragma unroll.

A possible approach is to use __vcmpgtu4 to do the per-byte comparisons. It returns the results as packed byte masks, which can be AND-ed with 0x08040201 (0x80402010 for the high half) to turn them into the bits of the final result. Those bits then need to be summed horizontally, which does not seem to be well supported, but it can be done with plain old C-style code.
For example,
unsigned int low = childMask;
unsigned int high = childMask >> 32;
unsigned int lowmask = __vcmpgtu4(low, 0) & 0x08040201;
unsigned int highmask = __vcmpgtu4(high, 0) & 0x80402010;
unsigned int mask = lowmask | highmask;
mask |= mask >> 16;
mask |= mask >> 8;
parentMask = mask & 0xff;

This solution based on classical bit-twiddling techniques may be faster than the accepted answer on at least some GPU architectures supported by CUDA, since __vcmp* intrinsics are not fast on all of them.
Since GPUs are basically 32-bit architectures, the 64-bit childMask is processed as two halves, hi and lo.
The processing consists of three steps. In the first step, we set each non-zero byte to 0x80 and each zero byte to 0x00; in other words, we set the most significant bit of each byte if and only if the byte is non-zero. One method is to use a modified version of a null-byte detection algorithm Alan Mycroft devised in the 1980s, which is often used for C-string processing. Alternatively, we can use the fact that hadd (~0, x) has the most significant bit set only if x != 0, where hadd is a halving add: hadd (a, b) = (a + b) / 2, computed without overflow in the intermediate result. An efficient implementation was published by Peter L. Montgomery in 2000.
In the second step, we collect the most significant bits of each byte into the highest nibble. For this, we need to move bit 7 to bit 28, bit 15 to bit 29, bit 23 to bit 30, and bit 31 to bit 31, corresponding to shift factors of 21, 14, 7, and 0. To avoid separate shifts, we combine the shift factors into a single "magic" multiplier, then multiply by it, thus performing all shifts in parallel.
In the third step we combine the nibbles containing the result and move them into the correct bit position. For the hi word, that means moving the nibble in bits <31:28> into bits <7:4>, and for the lo word it means moving the nibble in bits <31:28> into bits <3:0>. This combination can be performed either with bit-wise OR or with addition; which variant is faster may depend on the target architecture.
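As a quick illustration of step 2 (my addition, not part of the original answer): suppose bytes 0, 1, and 3 of one half are non-zero after step 1, so the MSBs sit at bits 7, 15, and 31:
#include <cassert>
#include <cstdint>
int main()
{
    const uint32_t MAGICMUL = (1 << 21) | (1 << 14) | (1 << 7) | (1 << 0);
    uint32_t msbs = 0x80008080u;                // step-1 output: MSBs at bits 7, 15, 31
    uint32_t nibble = (msbs * MAGICMUL) >> 28;  // step-2 magic multiply
    assert(nibble == 0xB);                      // 0b1011: bytes 0, 1, 3 were non-zero
    return 0;
}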
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#define USE_HAROLDS_SOLUTION (0)
#define USE_MYCROFT_ZEROBYTE (0)
#define USE_TWO_MASKS (1)
#define USE_ADD_COMBINATION (1)
uint8_t parentMask (uint64_t childMask)
{
#if USE_TWO_MASKS
    const uint32_t LSB_MASK = 0x01010101;
#endif // USE_TWO_MASKS
    const uint32_t MSB_MASK = 0x80808080;
    const uint32_t MAGICMUL = (1 << 21) | (1 << 14) | (1 << 7) | (1 << 0);
    uint32_t lo, hi;
    /* split 64-bit argument into two halves for 32-bit GPU architecture */
    lo = (uint32_t)(childMask >>  0);
    hi = (uint32_t)(childMask >> 32);
#if USE_MYCROFT_ZEROBYTE
    /* Set most significant bit in each byte that is not zero. Adapted from Alan
       Mycroft's null-byte detection algorithm (newsgroup comp.lang.c, 1987/04/08,
       https://groups.google.com/forum/#!original/comp.lang.c/2HtQXvg7iKc/xOJeipH6KLMJ):
       null_byte(x) = ((x - 0x01010101) & (~x & 0x80808080))
    */
#if USE_TWO_MASKS
    lo = (((lo | MSB_MASK) - LSB_MASK) | lo) & MSB_MASK;
    hi = (((hi | MSB_MASK) - LSB_MASK) | hi) & MSB_MASK;
#else // USE_TWO_MASKS
    lo = (((lo & ~MSB_MASK) + ~MSB_MASK) | lo) & MSB_MASK;
    hi = (((hi & ~MSB_MASK) + ~MSB_MASK) | hi) & MSB_MASK;
#endif // USE_TWO_MASKS
#else // USE_MYCROFT_ZEROBYTE
    /* Set most significant bit in each byte that is not zero. Use hadd(~0,x).
       Peter L. Montgomery's observation (newsgroup comp.arch, 2000/02/11,
       https://groups.google.com/d/msg/comp.arch/gXFuGZtZKag/_5yrz2zDbe4J):
       (A+B)/2 = (A AND B) + (A XOR B)/2.
    */
#if USE_TWO_MASKS
    lo = (((~lo & ~LSB_MASK) >> 1) + lo) & MSB_MASK;
    hi = (((~hi & ~LSB_MASK) >> 1) + hi) & MSB_MASK;
#else // USE_TWO_MASKS
    lo = (((~lo >> 1) & ~MSB_MASK) + lo) & MSB_MASK;
    hi = (((~hi >> 1) & ~MSB_MASK) + hi) & MSB_MASK;
#endif // USE_TWO_MASKS
#endif // USE_MYCROFT_ZEROBYTE
    /* collect most significant bit of each byte in most significant nibble */
    lo = lo * MAGICMUL;
    hi = hi * MAGICMUL;
    /* combine nibbles with results for high and low half into final result */
#if USE_ADD_COMBINATION
    return (uint8_t)((hi >> 24) + (lo >> 28));
#else // USE_ADD_COMBINATION
    return (uint8_t)((hi >> 24) | (lo >> 28));
#endif // USE_ADD_COMBINATION
}
uint8_t parentMask_ref (uint64_t childMask)
{
    uint8_t parentMask = 0;
    for (uint32_t i = 0; i < 8; i++) {
        uint8_t child = childMask >> (8 * i);
        parentMask |= (child > 0) << i;
    }
    return parentMask;
}
uint32_t build_mask (uint32_t a)
{
    return ((a & 0x80808080) >> 7) * 0xff;
}
uint32_t vcmpgtu4 (uint32_t a, uint32_t b)
{
    uint32_t r;
    r = ((a & ~b) + (((a ^ ~b) >> 1) & 0x7f7f7f7f));
    r = build_mask (r);
    return r;
}
uint8_t parentMask_harold (uint64_t childMask)
{
    uint32_t low = childMask;
    uint32_t high = childMask >> 32;
    uint32_t lowmask = vcmpgtu4 (low, 0) & 0x08040201;
    uint32_t highmask = vcmpgtu4 (high, 0) & 0x80402010;
    uint32_t mask = lowmask | highmask;
    mask |= mask >> 16;
    mask |= mask >> 8;
    return (uint8_t)mask;
}
/*
From: geo <gmars...#gmail.com>
Newsgroups: sci.math,comp.lang.c,comp.lang.fortran
Subject: 64-bit KISS RNGs
Date: Sat, 28 Feb 2009 04:30:48 -0800 (PST)
This 64-bit KISS RNG has three components, each nearly
good enough to serve alone. The components are:
Multiply-With-Carry (MWC), period (2^121+2^63-1)
Xorshift (XSH), period 2^64-1
Congruential (CNG), period 2^64
*/
static uint64_t kiss64_x = 1234567890987654321ULL;
static uint64_t kiss64_c = 123456123456123456ULL;
static uint64_t kiss64_y = 362436362436362436ULL;
static uint64_t kiss64_z = 1066149217761810ULL;
static uint64_t kiss64_t;
#define MWC64 (kiss64_t = (kiss64_x << 58) + kiss64_c, \
kiss64_c = (kiss64_x >> 6), kiss64_x += kiss64_t, \
kiss64_c += (kiss64_x < kiss64_t), kiss64_x)
#define XSH64 (kiss64_y ^= (kiss64_y << 13), kiss64_y ^= (kiss64_y >> 17), \
kiss64_y ^= (kiss64_y << 43))
#define CNG64 (kiss64_z = 6906969069ULL * kiss64_z + 1234567ULL)
#define KISS64 (MWC64 + XSH64 + CNG64)
int main (void)
{
    uint64_t childMask, count = 0;
    uint8_t res, ref;
    do {
        childMask = KISS64;
        ref = parentMask_ref (childMask);
#if USE_HAROLDS_SOLUTION
        res = parentMask_harold (childMask);
#else // USE_HAROLDS_SOLUTION
        res = parentMask (childMask);
#endif // USE_HAROLDS_SOLUTION
        if (res != ref) {
            printf ("\narg=%016llx res=%02x ref=%02x\n", childMask, res, ref);
            return EXIT_FAILURE;
        }
        if (!(count & 0xffffff)) printf ("\r%llu", count);
        count++;
    } while (1);
    return EXIT_SUCCESS;
}

Related

Multiply two uint64_ts and store result to uint64_t doesn't seem to work?

I am trying to multiply two uint64_ts and store the result to a uint64_t. I found an existing answer on Stack Overflow which splits the inputs into their four uint32_t parts and joins the result later:
https://stackoverflow.com/a/28904636/1107474
I have created a full example using the code and pasted it below.
However, for 37 x 5 I am getting the result 0 instead of 185?
#include <cstdint>
#include <iostream>
int main()
{
    uint64_t a = 37; // Input 1
    uint64_t b = 5;  // Input 2
    uint64_t a_lo = (uint32_t)a;
    uint64_t a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b;
    uint64_t b_hi = b >> 32;
    uint64_t a_x_b_hi = a_hi * b_hi;
    uint64_t a_x_b_mid = a_hi * b_lo;
    uint64_t b_x_a_mid = b_hi * a_lo;
    uint64_t a_x_b_lo = a_lo * b_lo;
    uint64_t carry_bit = ((uint64_t)(uint32_t)a_x_b_mid +
                          (uint64_t)(uint32_t)b_x_a_mid +
                          (a_x_b_lo >> 32)) >> 32;
    uint64_t multhi = a_x_b_hi +
                      (a_x_b_mid >> 32) + (b_x_a_mid >> 32) +
                      carry_bit;
    std::cout << multhi << std::endl; // Outputs 0 instead of 185?
}
I'm merging your code with another answer in the original link.
#include <cstdint>
#include <iostream>
int main()
{
    uint64_t a = 37; // Input 1
    uint64_t b = 5;  // Input 2
    uint64_t a_lo = (uint32_t)a;
    uint64_t a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b;
    uint64_t b_hi = b >> 32;
    uint64_t a_x_b_hi = a_hi * b_hi;
    uint64_t a_x_b_mid = a_hi * b_lo;
    uint64_t b_x_a_mid = b_hi * a_lo;
    uint64_t a_x_b_lo = a_lo * b_lo;
    /*
       This is implementing schoolbook multiplication:

                x1 x0
         X      y1 y0
         -------------
                   00   LOW PART
         -------------
                00
             10 10      MIDDLE PART
         +      01
         -------------
             01
         + 11 11        HIGH PART
         -------------
    */
    // 64-bit product + two 32-bit values
    uint64_t middle = a_x_b_mid + (a_x_b_lo >> 32) + uint32_t(b_x_a_mid);
    // 64-bit product + two 32-bit values
    uint64_t carry = a_x_b_hi + (middle >> 32) + (b_x_a_mid >> 32);
    // Add LOW PART and lower half of MIDDLE PART
    uint64_t result = (middle << 32) | uint32_t(a_x_b_lo);
    std::cout << result << std::endl;
    std::cout << carry << std::endl;
}
This results in
Program stdout
185
0
Godbolt link: https://godbolt.org/z/97xhMvY53
Or you could use __uint128_t which is non-standard but widely available.
#include <cstdint>
static inline void mul64(uint64_t a, uint64_t b, uint64_t& result, uint64_t& carry) {
    __uint128_t va(a);
    __uint128_t vb(b);
    __uint128_t vr = va * vb;
    result = uint64_t(vr);
    carry = uint64_t(vr >> 64);
}
In the title of this question, you said you wanted to multiply two integers. But the code you found on that other Q&A (Getting the high part of 64 bit integer multiplication) isn't trying to do that, it's only trying to get the high half of the full product. For a 64x64 => 128-bit product, the high half is product >> 64.
37 x 5 = 185
185 >> 64 = 0
It's correctly emulating multihi = (37 * (unsigned __int128)5) >> 64, and you're forgetting about the >>64 part.
__int128 is a GNU C extension; it's much more efficient than emulating it manually with pure ISO C, but only supported on 64-bit targets by current compilers. See my answer on the same question. (ISO C23 is expected to have _BitInt(128) or whatever width you specify.)
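In code, that high-half computation is a one-liner with the GNU extension (a sketch I added, not from the answer):
#include <cstdint>
uint64_t mulhi64(uint64_t a, uint64_t b)
{
    return (uint64_t)(((unsigned __int128)a * b) >> 64); // high half of the 128-bit product
}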
In comments you were talking about floating-point mantissas. In an FP multiply, you have two n-bit mantissas (usually with their leading bits set), so the high half of the 2n-bit product will have n significant bits (more or less; maybe actually one place to the right IIRC).
Something like 37 x 5 would only happen with tiny subnormal floats, where the product would indeed underflow to zero. But in that case, it would be because you only get subnormals at the limits of the exponent range, and (37 * 2^-1022) * (5 * 2^-1022) would be 185 * 2^-2044, an exponent way too small to be represented in an FP format like IEEE binary64 aka double, where -1022 is the minimum exponent.
You're using integers where a>>63 isn't 1, in fact they're both less than 2^32 so there are no significant bits outside the low 64 bits of the full 128-bit product.

Efficiently removing lower half-byte in a byte array - C++

I have a long byte array and I want to remove the lower nibble (the lower 4 bits) of every byte and move the rest together such that the result occupies half the space as the input.
For example, if my input is 057ABC23, my output should be 07B2.
My current approach looks like this:
// in is unsigned char*
size_t outIdx = 0;
for (size_t i = 0; i < input_length; i += 8)
{
    in[outIdx++] = (in[i    ] & 0xF0) | (in[i + 1] >> 4);
    in[outIdx++] = (in[i + 2] & 0xF0) | (in[i + 3] >> 4);
    in[outIdx++] = (in[i + 4] & 0xF0) | (in[i + 5] >> 4);
    in[outIdx++] = (in[i + 6] & 0xF0) | (in[i + 7] >> 4);
}
... where I basically process 8 bytes of input in every loop, to illustrate that I can assume input_length to be divisible by 8 (even though it's probably not faster than processing only 2 bytes per loop). The operation is done in-place, overwriting the input array.
Is there a faster way to do this? For example, since I can read in 8 bytes at a time anyway, the operation could be done on 4-byte or 8-byte integers instead of individual bytes, but I cannot think of a way to do that. The compiler doesn't come up with something itself either, as I can see the output code still operates on bytes (-O3 seems to do some loop unrolling, but that's it).
I don't have control over the input, so I cannot store it differently to begin with.
There is a general technique for bit-fiddling to swap bits around. Suppose you have a 64-bit number, containing the following nibbles:
HxGxFxExDxCxBxAx
Here by x I denote a nibble whose value is unimportant (you want to delete it). The result of your bit-operation should be a 32-bit number HGFEDCBA.
First, delete all the x nibbles:
HxGxFxExDxCxBxAx & *_*_*_*_*_*_*_*_ = H_G_F_E_D_C_B_A_
Here I denote 0 by _, and binary 1111 by * for clarity.
Now, replicate your data:
H_G_F_E_D_C_B_A_ << 4 = _G_F_E_D_C_B_A__
H_G_F_E_D_C_B_A_ | _G_F_E_D_C_B_A__ = HGGFFEEDDCCBBAA_
Notice how some of your target nibbles are together. You need to retain these places, and delete duplicate data.
HGGFFEEDDCCBBAA_ & **__**__**__**__ = HG__FE__DC__BA__
From here, you can extract the result bytes directly, or do another iteration or two of the technique.
Next iteration:
HG__FE__DC__BA__ << 8 = __FE__DC__BA____
HG__FE__DC__BA__ | __FE__DC__BA____ = HGFEFEDCDCBABA__
HGFEFEDCDCBABA__ & ****____****____ = HGFE____DCBA____
Last iteration:
HGFE____DCBA____ << 16 = ____DCBA________
HGFE____DCBA____ | ____DCBA________ = HGFEDCBADCBA____
HGFEDCBADCBA____ >> 32 = ________HGFEDCBA
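Transcribed directly into code, the three iterations above might look like this (my sketch of the technique for a single 64-bit value; the in-place array version further down does essentially the same thing):
#include <cstdint>
uint32_t compact_high_nibbles(uint64_t x)
{
    x &= 0xF0F0F0F0F0F0F0F0;                  // HxGxFxExDxCxBxAx -> H_G_F_E_D_C_B_A_
    x = (x | (x << 4)) & 0xFF00FF00FF00FF00;  // -> HG__FE__DC__BA__
    x = (x | (x << 8)) & 0xFFFF0000FFFF0000;  // -> HGFE____DCBA____
    x = (x | (x << 16));                      // -> HGFEDCBADCBA____
    return (uint32_t)(x >> 32);               // -> HGFEDCBA
}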
All x86-64 (and most x86) CPUs have SSE2.
For each 16-bit lane do
t = (x & 0x00F0) | (x >> 12).
Then use the pack instruction to truncate each 16-bit lane to 8-bits.
For example, 0xABCD1234 would become 0x00CA0031 then the pack would make it 0xCA31.
#include <emmintrin.h>
void squish_32bytesTo16 (unsigned char* src, unsigned char* dst) {
    const __m128i mask = _mm_set1_epi16(0x00F0);
    __m128i src0 = _mm_loadu_si128((__m128i*)(void*)src);
    __m128i src1 = _mm_loadu_si128((__m128i*)(void*)(src + sizeof(__m128i)));
    __m128i t0 = _mm_or_si128(_mm_and_si128(src0, mask), _mm_srli_epi16(src0, 12));
    __m128i t1 = _mm_or_si128(_mm_and_si128(src1, mask), _mm_srli_epi16(src1, 12));
    _mm_storeu_si128((__m128i*)(void*)dst, _mm_packus_epi16(t0, t1));
}
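The 32-byte kernel can then be driven over a whole buffer, for example like this (a hypothetical driver I added; it assumes input_length is a multiple of 32, and in-place operation is safe because both loads happen before the store):
#include <cstddef>
void squish_buffer (unsigned char* in, std::size_t input_length) {
    for (std::size_t i = 0; i < input_length; i += 32)
        squish_32bytesTo16(in + i, in + i / 2); // output occupies half the space
}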
Just to put the resulting code here for future reference, it now looks like this (assuming the system is little-endian and the input length is a multiple of 8 bytes):
void compress(unsigned char* in, size_t input_length)
{
    unsigned int* inUInt = reinterpret_cast<unsigned int*>(in);
    unsigned long long* inULong = reinterpret_cast<unsigned long long*>(in);
    for (size_t i = 0; i < input_length / 8; ++i)
    {
        unsigned long long value = inULong[i] & 0xF0F0F0F0F0F0F0F0;
        value = (value >> 4) | (value << 8);
        value &= 0xFF00FF00FF00FF00;
        value |= (value << 8);
        value &= 0xFFFF0000FFFF0000;
        value |= (value << 16);
        inUInt[i] = static_cast<unsigned int>(value >> 32);
    }
}
Benchmarked very roughly it's around twice as fast as the code in the question (using MSVC19 /O2).
Note that this is basically the solution anatolyg posted before (just put into code), so upvote that answer instead if you found this helpful.

Compute product of two integers as lower and higher half [duplicate]

I am looking for an efficient (optionally standard, elegant and easy to implement) solution to multiply relatively large numbers and store the result into one or several integers.
Let's say I have two 64-bit integers declared like this:
uint64_t a = xxx, b = yyy;
When I do a * b, how can I detect if the operation results in an overflow, and in that case store the carry somewhere?
Please note that I don't want to use any large-number library since I have constraints on the way I store the numbers.
1. Detecting the overflow:
x = a * b;
if (a != 0 && x / a != b) {
    // overflow handling
}
Edit: Fixed division by 0 (thanks Mark!)
2. Computing the carry is quite involved. One approach is to split both operands into half-words, then apply long multiplication to the half-words:
uint64_t hi(uint64_t x) {
    return x >> 32;
}
uint64_t lo(uint64_t x) {
    return ((1ULL << 32) - 1) & x;
}
void multiply(uint64_t a, uint64_t b) {
    // actually uint32_t would do, but the casting is annoying
    uint64_t s0, s1, s2, s3;
    uint64_t x = lo(a) * lo(b);
    s0 = lo(x);
    x = hi(a) * lo(b) + hi(x);
    s1 = lo(x);
    s2 = hi(x);
    x = s1 + lo(a) * hi(b);
    s1 = lo(x);
    x = s2 + hi(a) * hi(b) + hi(x);
    s2 = lo(x);
    s3 = hi(x);
    uint64_t result = s1 << 32 | s0;
    uint64_t carry = s3 << 32 | s2;
}
To see that none of the partial sums themselves can overflow, we consider the worst case:
x = s2 + hi(a) * hi(b) + hi(x)
Let B = 1 << 32. We then have
x <= (B - 1) + (B - 1)(B - 1) + (B - 1)
  <= B*B - 1
  <  B*B
I believe this will work - at least it handles Sjlver's test case. Aside from that, it is untested (and might not even compile, as I don't have a C++ compiler at hand anymore).
The idea is to use the following fact, which is true for integer operations:
a*b > c if and only if a > c/b
Here / is integer division, and b is assumed positive.
The pseudocode to check against overflow for positive numbers follows:
if (a > max_int64 / b) then "overflow" else "ok".
To handle zeroes and negative numbers you should add more checks.
C code for non-negative a and b follows:
if (b > 0 && a > 18446744073709551615ULL / b) {
    // overflow handling
} else {
    c = a * b;
}
Note the maximum value for an unsigned 64-bit type:
18446744073709551615 == 2^64 - 1
To calculate the carry, we can split each number into two 32-bit digits and multiply them as we would on paper. We need to split the numbers to avoid overflow.
Code follows:
// split input numbers into 32-bit digits
uint64_t a0 = a & ((1ULL << 32) - 1);
uint64_t a1 = a >> 32;
uint64_t b0 = b & ((1ULL << 32) - 1);
uint64_t b1 = b >> 32;
// d1 is the 32-bit second digit of the result, d1 = d11 + d12.
// The 64-bit sum d11 + d12 can itself wrap around; that wrap-around bit
// has weight 2^64, i.e. it contributes 2^32 to the high word.
uint64_t d11 = a1 * b0 + (a0 * b0 >> 32);
uint64_t d12 = a0 * b1;
uint64_t d1 = d11 + d12;
uint64_t c1 = (d1 < d11) ? (1ULL << 32) : 0; // carry out of the 64-bit sum
uint64_t d2 = a1 * b1 + (d1 >> 32) + c1;
uint64_t carry = d2; // high 64 bits of the product stored here
Although there have been several other answers to this question, several of them have code that is completely untested, and thus far no one has adequately compared the different possible options.
For that reason, I wrote and tested several possible implementations (the last one is based on this code from OpenBSD, discussed on Reddit here). Here's the code:
/* Multiply with overflow checking, emulating clang's builtin function
*
* __builtin_umull_overflow
*
* This code benchmarks five possible schemes for doing so.
*/
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <limits.h>
#ifndef BOOL
#define BOOL int
#endif
// Option 1, check for overflow using a wider type
// - Often fastest and the least code, especially on modern compilers
// - When long is a 64-bit int, requires compiler support for 128-bit
//   ints (requires GCC >= 3.0 or Clang)
#if LONG_BIT > 32
typedef __uint128_t long_overflow_t ;
#else
typedef uint64_t long_overflow_t;
#endif
BOOL
umull_overflow1(unsigned long lhs, unsigned long rhs, unsigned long* result)
{
    long_overflow_t prod = (long_overflow_t)lhs * (long_overflow_t)rhs;
    *result = (unsigned long) prod;
    return (prod >> LONG_BIT) != 0;
}
// Option 2, perform long multiplication using a smaller type
// - Sometimes the fastest (e.g., when multiply on longs is a library
//   call).
// - Performs at most three multiplies, and sometimes only performs one.
// - Highly portable code; works no matter how many bits unsigned long is
BOOL
umull_overflow2(unsigned long lhs, unsigned long rhs, unsigned long* result)
{
    const unsigned long HALFSIZE_MAX = (1ul << LONG_BIT/2) - 1ul;
    unsigned long lhs_high = lhs >> LONG_BIT/2;
    unsigned long lhs_low  = lhs & HALFSIZE_MAX;
    unsigned long rhs_high = rhs >> LONG_BIT/2;
    unsigned long rhs_low  = rhs & HALFSIZE_MAX;
    unsigned long bot_bits = lhs_low * rhs_low;
    if (!(lhs_high || rhs_high)) {
        *result = bot_bits;
        return 0;
    }
    BOOL overflowed = lhs_high && rhs_high;
    unsigned long mid_bits1 = lhs_low * rhs_high;
    unsigned long mid_bits2 = lhs_high * rhs_low;
    *result = bot_bits + ((mid_bits1 + mid_bits2) << LONG_BIT/2);
    return overflowed || *result < bot_bits
        || (mid_bits1 >> LONG_BIT/2) != 0
        || (mid_bits2 >> LONG_BIT/2) != 0;
}
// Option 3, perform long multiplication using a smaller type (this code is
// very similar to option 2, but calculates overflow using a different but
// equivalent method).
// - Sometimes the fastest (e.g., when multiply on longs is a library
//   call; clang likes this code).
// - Performs at most three multiplies, and sometimes only performs one.
// - Highly portable code; works no matter how many bits unsigned long is
BOOL
umull_overflow3(unsigned long lhs, unsigned long rhs, unsigned long* result)
{
    const unsigned long HALFSIZE_MAX = (1ul << LONG_BIT/2) - 1ul;
    unsigned long lhs_high = lhs >> LONG_BIT/2;
    unsigned long lhs_low  = lhs & HALFSIZE_MAX;
    unsigned long rhs_high = rhs >> LONG_BIT/2;
    unsigned long rhs_low  = rhs & HALFSIZE_MAX;
    unsigned long lowbits = lhs_low * rhs_low;
    if (!(lhs_high || rhs_high)) {
        *result = lowbits;
        return 0;
    }
    BOOL overflowed = lhs_high && rhs_high;
    unsigned long midbits1 = lhs_low * rhs_high;
    unsigned long midbits2 = lhs_high * rhs_low;
    unsigned long midbits = midbits1 + midbits2;
    overflowed = overflowed || midbits < midbits1 || midbits > HALFSIZE_MAX;
    unsigned long product = lowbits + (midbits << LONG_BIT/2);
    overflowed = overflowed || product < lowbits;
    *result = product;
    return overflowed;
}
// Option 4, checks for overflow using division
// - Checks for overflow using division
// - Division is slow, especially if it is a library call
BOOL
umull_overflow4(unsigned long lhs, unsigned long rhs, unsigned long* result)
{
    *result = lhs * rhs;
    return rhs > 0 && (SIZE_MAX / rhs) < lhs;
}
// Option 5, checks for overflow using division
// - Checks for overflow using division
// - Avoids division when the numbers are "small enough" to trivially
// rule out overflow
// - Division is slow, especially if it is a library call
BOOL
umull_overflow5(unsigned long lhs, unsigned long rhs, unsigned long* result)
{
    const unsigned long MUL_NO_OVERFLOW = (1ul << LONG_BIT/2) - 1ul;
    *result = lhs * rhs;
    return (lhs >= MUL_NO_OVERFLOW || rhs >= MUL_NO_OVERFLOW) &&
           rhs > 0 && SIZE_MAX / rhs < lhs;
}
#ifndef umull_overflow
#define umull_overflow umull_overflow2
#endif
/*
* This benchmark code performs a multiply at all bit sizes,
* essentially assuming that sizes are logarithmically distributed.
*/
int main()
{
    unsigned long i, j, k;
    int count = 0;
    unsigned long mult;
    unsigned long total = 0;
    for (k = 0; k < 0x40000000 / LONG_BIT / LONG_BIT; ++k)
        for (i = 0; i != LONG_MAX; i = i*2+1)
            for (j = 0; j != LONG_MAX; j = j*2+1) {
                count += umull_overflow(i+k, j+k, &mult);
                total += mult;
            }
    printf("%d overflows (total %lu)\n", count, total);
}
Here are the results, testing with various compilers and systems I have (in this case, all testing was done on OS X, but results should be similar on BSD or Linux systems):
+------------------+----------+----------+----------+----------+----------+
| | Option 1 | Option 2 | Option 3 | Option 4 | Option 5 |
| | BigInt | LngMult1 | LngMult2 | Div | OptDiv |
+------------------+----------+----------+----------+----------+----------+
| Clang 3.5 i386 | 1.610 | 3.217 | 3.129 | 4.405 | 4.398 |
| GCC 4.9.0 i386 | 1.488 | 3.469 | 5.853 | 4.704 | 4.712 |
| GCC 4.2.1 i386 | 2.842 | 4.022 | 3.629 | 4.160 | 4.696 |
| GCC 4.2.1 PPC32 | 8.227 | 7.756 | 7.242 | 20.632 | 20.481 |
| GCC 3.3 PPC32 | 5.684 | 9.804 | 11.525 | 21.734 | 22.517 |
+------------------+----------+----------+----------+----------+----------+
| Clang 3.5 x86_64 | 1.584 | 2.472 | 2.449 | 9.246 | 7.280 |
| GCC 4.9 x86_64 | 1.414 | 2.623 | 4.327 | 9.047 | 7.538 |
| GCC 4.2.1 x86_64 | 2.143 | 2.618 | 2.750 | 9.510 | 7.389 |
| GCC 4.2.1 PPC64 | 13.178 | 8.994 | 8.567 | 37.504 | 29.851 |
+------------------+----------+----------+----------+----------+----------+
Based on these results, we can draw a few conclusions:
Clearly, the division-based approach, although simple and portable, is slow.
No technique is a clear winner in all cases.
On modern compilers, the use-a-larger-int approach is best, if you can use it.
On older compilers, the long-multiplication approach is best.
Surprisingly, GCC 4.9.0 has performance regressions over GCC 4.2.1, and GCC 4.2.1 has performance regressions over GCC 3.3.
A version that also works when a == 0:
x = a * b;
if (a != 0 && x / a != b) {
    // overflow handling
}
Easy and fast with clang and gcc:
unsigned long long a, b, result;
if (__builtin_umulll_overflow(a, b, &result)) {
    // overflow!!
}
This will use hardware support for overflow detection where available. Being a compiler extension, it can even handle signed integer overflow (replace umul with smul), even though that is undefined behavior in C++.
If you need not just to detect overflow but also to capture the carry, you're best off breaking your numbers down into 32-bit parts. The code is a nightmare; what follows is just a sketch:
#include <stdint.h>
uint64_t mul(uint64_t a, uint64_t b) {
    uint32_t ah = a >> 32;
    uint32_t al = a; // truncates: now a = al + 2**32 * ah
    uint32_t bh = b >> 32;
    uint32_t bl = b; // truncates: now b = bl + 2**32 * bh
    // a * b = 2**64 * ah * bh + 2**32 * (ah * bl + bh * al) + al * bl
    uint64_t partial = (uint64_t) al * (uint64_t) bl;
    uint64_t mid1    = (uint64_t) ah * (uint64_t) bl;
    uint64_t mid2    = (uint64_t) al * (uint64_t) bh;
    uint64_t carry   = (uint64_t) ah * (uint64_t) bh;
    // add high parts of mid1 and mid2 to carry
    // add low parts of mid1 and mid2 to partial, carrying
    // any carry bits into carry...
}
The problem is not just the partial products but the fact that any of the sums can overflow.
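For reference, the remaining steps might be completed like this (my completion of the sketch, not the answerer's code):
#include <cstdint>
uint64_t mul_full(uint64_t a, uint64_t b, uint64_t *carry_out) {
    uint32_t ah = a >> 32, al = (uint32_t)a;
    uint32_t bh = b >> 32, bl = (uint32_t)b;
    uint64_t partial = (uint64_t)al * bl;
    uint64_t mid1    = (uint64_t)ah * bl;
    uint64_t mid2    = (uint64_t)al * bh;
    uint64_t carry   = (uint64_t)ah * bh;
    uint64_t mid = mid1 + mid2;           // may wrap around 2**64 ...
    if (mid < mid1)
        carry += 1ULL << 32;              // ... which is worth 2**32 in the high word
    uint64_t low = partial + (mid << 32); // may also wrap ...
    if (low < partial)
        carry += 1;                       // ... which is worth 1 in the high word
    *carry_out = carry + (mid >> 32);
    return low;                           // low 64 bits of the product
}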
If I had to do this for real, I would write an extended-multiply routine in the local assembly language. That is, for example, multiply two 64-bit integers to get a 128-bit result, which is stored in two 64-bit registers. All reasonable hardware provides this functionality in a single native multiply instruction—it's not just accessible from C.
This is one of those rare cases where the solution that's most elegant and easy to program is actually to use assembly language. But it's certainly not portable :-(
The GNU Portability Library (Gnulib) contains a module intprops, which has macros that efficiently test whether arithmetic operations would overflow.
For example, if an overflow in multiplication would occur, INT_MULTIPLY_OVERFLOW (a, b) would yield 1.
Perhaps the best way to solve this problem is to have a function which multiplies two UInt64 values and returns a pair of UInt64: an upper part and a lower part of the UInt128 result. Here is the solution, including a function which displays the result in hex. You may prefer a C++ solution, but I have a working Swift solution that shows how to manage the problem:
func hex128 (_ hi: UInt64, _ lo: UInt64) -> String
{
    let s: String = String(format: "%08X", hi >> 32)
                  + String(format: "%08X", hi & 0xFFFFFFFF)
                  + String(format: "%08X", lo >> 32)
                  + String(format: "%08X", lo & 0xFFFFFFFF)
    return (s)
}
func mul64to128 (_ multiplier: UInt64, _ multiplicand: UInt64)
    -> (result_hi: UInt64, result_lo: UInt64)
{
    let x: UInt64 = multiplier
    let x_lo: UInt64 = (x & 0xffffffff)
    let x_hi: UInt64 = x >> 32
    let y: UInt64 = multiplicand
    let y_lo: UInt64 = (y & 0xffffffff)
    let y_hi: UInt64 = y >> 32
    let mul_lo: UInt64 = (x_lo * y_lo)
    let mul_hi: UInt64 = (x_hi * y_lo) + (mul_lo >> 32)
    let mul_carry: UInt64 = (x_lo * y_hi) + (mul_hi & 0xffffffff)
    let result_hi: UInt64 = (x_hi * y_hi) + (mul_hi >> 32) + (mul_carry >> 32)
    let result_lo: UInt64 = (mul_carry << 32) + (mul_lo & 0xffffffff)
    return (result_hi, result_lo)
}
Here is an example to verify that the function works:
var c: UInt64 = 0
var d: UInt64 = 0
(c, d) = mul64to128(0x1234567890123456, 0x9876543210987654)
// 0AD77D742CE3C72E45FD10D81D28D038 is the result of the above example
print(hex128(c, d))
(c, d) = mul64to128(0xFFFFFFFFFFFFFFFF, 0xFFFFFFFFFFFFFFFF)
// FFFFFFFFFFFFFFFE0000000000000001 is the result of the above example
print(hex128(c, d))
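A direct C++ port of mul64to128 might look like this (my translation of the Swift code above, same algorithm and test vector):
#include <cstdint>
#include <cstdio>
void mul64to128(uint64_t x, uint64_t y, uint64_t& result_hi, uint64_t& result_lo)
{
    uint64_t x_lo = x & 0xffffffff, x_hi = x >> 32;
    uint64_t y_lo = y & 0xffffffff, y_hi = y >> 32;
    uint64_t mul_lo = x_lo * y_lo;
    uint64_t mul_hi = x_hi * y_lo + (mul_lo >> 32);
    uint64_t mul_carry = x_lo * y_hi + (mul_hi & 0xffffffff);
    result_hi = x_hi * y_hi + (mul_hi >> 32) + (mul_carry >> 32);
    result_lo = (mul_carry << 32) + (mul_lo & 0xffffffff);
}
int main()
{
    uint64_t hi, lo;
    mul64to128(0x1234567890123456, 0x9876543210987654, hi, lo);
    std::printf("%016llX%016llX\n", (unsigned long long)hi, (unsigned long long)lo);
    // prints 0AD77D742CE3C72E45FD10D81D28D038, matching the Swift example
    return 0;
}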
There is a simple (and often very fast) solution which has not been mentioned yet. It is based on the fact that an n-bit times m-bit multiplication never overflows a product width of n+m bits or more, but overflows any result width smaller than n+m-1 bits.
Because my old description might have been too difficult to read for some people, I'll try again:
What you need is to check the sum of the leading zeros of both operands. It is very easy to prove mathematically.
Let x be n bits and y be m bits, and let the result type be k bits wide. Because the product can be at most n+m bits large, it can overflow. Say x*y is actually p bits long (without leading zeros); then the product's leading zeros, counted in an (n+m)-bit register, are clz(x * y) = n+m - p. clz behaves similarly to a logarithm, hence:
clz(x * y) = clz(x) + clz(y) + c with c = either 1 or 0.
(thank you for the c = 1 advice in the comment!)
It overflows when k < p <= n+m <=> n+m - k > n+m - p = clz(x * y).
Now we can use this algorithm:
if max(clz(x * y)) = clz(x) + clz(y) + 1 < (n+m - k) --> overflow
if max(clz(x * y)) = clz(x) + clz(y) + 1 == (n+m - k) --> overflow if c = 0
else --> no overflow
How do we check for overflow in the middle case? I assume you have a multiplication instruction. Then we can easily use it to inspect the leading zeros of the result, i.e.:
if clz(x * y / 2) == (n+m - k) <=> msb(x * y / 2) == 1 --> overflow
else --> no overflow
You do the multiplication by treating x/2 as a fixed-point value and y as a normal integer:
msb(x * y/2) = msb(floor(x * y / 2))
floor(x * y/2) = floor(x/2) * y + (lsb(x) * floor(y/2)) = (x >> 1)*y + (x & 1)*(y >> 1)
(this result never overflows in the case clz(x) + clz(y) + 1 == (n+m - k))
The trick is using builtins/intrinsics. In GCC it looks this way:
static inline int clz(uint32_t a) {
    if (a == 0) return 32; // needed because __builtin_clz(0) is undefined
    return __builtin_clz(a);
}
/** @return one if a 32-bit overflow occurs when unsigned-unsigned-multiplying
    f1 with f2, otherwise zero. */
static inline _Bool chk_mul_ov(uint32_t f1, uint32_t f2) {
    int lzsum = clz(f1) + clz(f2); // leading zero sum
    return
        lzsum < sizeof(f1)*8 - 1 || (  // if too small, overflow guaranteed
        lzsum == sizeof(f1)*8 - 1 &&   // if borderline case, do further check
        (int32_t)((f1 >> 1)*f2 + (f1 & 1)*(f2 >> 1)) < 0 // check product right-shifted by one
    );
}
...
if (chk_mul_ov(f1, f2)) {
    // error handling
}
...
This is just an example for n = m = k = 32-bit unsigned-unsigned multiplication. You can generalize it to signed-unsigned or signed-signed multiplication. Not even a multiple-bit shift is required (because some microcontrollers implement one-bit shifts only, but sometimes support "product divided by two" with a single instruction, like the Atmega!). However, if no count-leading-zeros instruction exists but long multiplication does, this might not be better.
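For instance, a 64-bit variant of the same check might look like this (my sketch, assuming __builtin_clzll is available; not from the original answer):
#include <cstdint>
static inline int clz64(uint64_t a) {
    return (a == 0) ? 64 : __builtin_clzll(a); // __builtin_clzll(0) is undefined
}
static inline bool chk_mul_ov64(uint64_t f1, uint64_t f2) {
    int lzsum = clz64(f1) + clz64(f2);
    return lzsum < 63 ||                 // too small: overflow guaranteed
           (lzsum == 63 &&               // borderline: check msb of x*y/2
            (int64_t)((f1 >> 1) * f2 + (f1 & 1) * (f2 >> 1)) < 0);
}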
Other compilers have their own ways of specifying intrinsics for CLZ operations.
Compared to checking the upper half of the multiplication, the clz method should scale better (in the worst case) than using a highly optimized 128-bit multiplication to check for 64-bit overflow: multiplication has superlinear cost, while counting bits has only linear cost.
This code worked out of the box for me when I tried it.
I've been working on this problem these days, and I have to say that I'm impressed by the number of times I have seen people saying the best way to know if there has been an overflow is to divide the result; that's totally inefficient and unnecessary. The point of this function is that it must be as fast as possible.
There are two options for the overflow detection:
1. If possible, create the result variable twice as big as the multipliers, for example:
struct INT32struct {INT16 high, low;}; // note: this member order assumes a big-endian layout
typedef union
{
    struct INT32struct s;
    INT32 ll;
} INT32union;

INT16 mulFunction(INT16 a, INT16 b)
{
    INT32union result;
    result.ll = a * b; // 32-bit result
    if (result.s.high > 0)
        Overflow();
    return result.s.low;
}
You will know inmediately if there has been an overflow, and the code is the fastest possible without writing it in machine code. Depending on the compiler this code can be improved in machine code.
2. If it is impossible to create a result variable twice as big as the multipliers, then you should play with if conditions to determine the best path. Continuing with the example:
INT32 mulFunction(INT32 a, INT32 b)
{
    INT32union s_a, s_b, res1, res2, result;
    s_a.ll = abs(a);
    s_b.ll = abs(b);
    if (s_a.s.high > 0 && s_b.s.high > 0)
    {
        Overflow();
    }
    else if (s_a.s.high > 0)
    {
        res1.ll = (INT32)s_a.s.high * s_b.s.low;
        res2.ll = (INT32)s_a.s.low * s_b.s.low;
        if (res1.s.high == 0)
        {
            result.ll = (INT32)res1.s.low + res2.s.high;
            if (result.s.high == 0)
            {
                result.ll = ((INT32)result.s.low << 16) + res2.s.low;
                if (((a >> 31) ^ (b >> 31)) != 0)
                {
                    result.ll = -result.ll;
                }
                return result.ll;
            }
            else
            {
                Overflow();
            }
        }
        else
        {
            Overflow();
        }
    }
    else if (s_b.s.high > 0)
    {
        // Same code, swapping a and b
    }
    else
    {
        return (INT32)s_a.s.low * s_b.s.low;
    }
}
I hope this code helps you to have a quite efficient program, and I hope the code is clear; if not, I'll add some comments.
Best regards.
Here is a trick for detecting whether multiplication of two unsigned integers overflows.
We make the observation that if we multiply an N-bit-wide binary number with an M-bit-wide binary number, the product does not have more than N + M bits.
For instance, if we are asked to multiply a three-bit number with a twenty-nine bit number, we know that this doesn't overflow thirty-two bits.
#include <stdlib.h>
#include <stdio.h>
int might_be_mul_oflow(unsigned long a, unsigned long b)
{
    if (!a || !b)
        return 0;
    a = a | (a >> 1) | (a >> 2) | (a >> 4) | (a >> 8) | (a >> 16) | (a >> 32);
    b = b | (b >> 1) | (b >> 2) | (b >> 4) | (b >> 8) | (b >> 16) | (b >> 32);
    for (;;) {
        unsigned long na = a << 1;
        if (na <= a)
            break;
        a = na;
    }
    return (a & b) ? 1 : 0;
}
int main(int argc, char **argv)
{
    unsigned long a, b;
    char *endptr;
    if (argc < 3) {
        printf("supply two unsigned long integers in C form\n");
        return EXIT_FAILURE;
    }
    a = strtoul(argv[1], &endptr, 0);
    if (*endptr != 0) {
        printf("%s is garbage\n", argv[1]);
        return EXIT_FAILURE;
    }
    b = strtoul(argv[2], &endptr, 0);
    if (*endptr != 0) {
        printf("%s is garbage\n", argv[2]);
        return EXIT_FAILURE;
    }
    if (might_be_mul_oflow(a, b))
        printf("might be multiplication overflow\n");
    {
        unsigned long c = a * b;
        printf("%lu * %lu = %lu\n", a, b, c);
        if (a != 0 && c / a != b)
            printf("confirmed multiplication overflow\n");
    }
    return 0;
}
A smattering of tests (on a 64-bit system):
$ ./uflow 0x3 0x3FFFFFFFFFFFFFFF
3 * 4611686018427387903 = 13835058055282163709
$ ./uflow 0x7 0x3FFFFFFFFFFFFFFF
might be multiplication overflow
7 * 4611686018427387903 = 13835058055282163705
confirmed multiplication overflow
$ ./uflow 0x4 0x3FFFFFFFFFFFFFFF
might be multiplication overflow
4 * 4611686018427387903 = 18446744073709551612
$ ./uflow 0x5 0x3FFFFFFFFFFFFFFF
might be multiplication overflow
5 * 4611686018427387903 = 4611686018427387899
confirmed multiplication overflow
The steps in might_be_mul_oflow are almost certainly slower than just doing the division test, at least on mainstream processors used in desktop workstations, servers and mobile devices. On chips without good division support, it could be useful.
It occurs to me that there is another way to do this early rejection test:
1. We start with a pair of numbers arng and brng, initialized to 0x7FFF...FFFF and 1.
2. If a <= arng and b <= brng, we can conclude that there is no overflow.
3. Otherwise, we shift arng to the right and shift brng to the left, adding one bit to brng, so that they are 0x3FFF...FFFF and 3.
4. If arng is zero, finish; otherwise repeat at step 2.
The function now looks like:
int might_be_mul_oflow(unsigned long a, unsigned long b)
{
    if (!a || !b)
        return 0;
    {
        unsigned long arng = ULONG_MAX >> 1;
        unsigned long brng = 1;
        while (arng != 0) {
            if (a <= arng && b <= brng)
                return 0;
            arng >>= 1;
            brng <<= 1;
            brng |= 1;
        }
        return 1;
    }
}
When you're using, e.g., 64-bit variables, implement "number of significant bits" as nsb(var) = 64 - clz(var).
clz(var) counts the leading zeros in var; it is a builtin for GCC and Clang, or is probably available with inline assembly for your CPU.
Now use the fact that nsb(a * b) <= nsb(a) + nsb(b) to check for overflow. When it is smaller, it is always exactly 1 smaller.
Ref GCC: Built-in Function: int __builtin_clz (unsigned int x)
Returns the number of leading 0-bits in x, starting at the most significant bit position. If x is 0, the result is undefined.
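Put into code, the idea might look like this (my sketch; the borderline case nsb(a) + nsb(b) == 65 still needs an exact test, done here with a single division):
#include <cstdint>
static inline int nsb(uint64_t v) {
    return v ? 64 - __builtin_clzll(v) : 0; // number of significant bits
}
bool mul_overflows(uint64_t a, uint64_t b) {
    if (a == 0 || b == 0)
        return false;
    int bits = nsb(a) + nsb(b);   // product has bits or bits-1 significant bits
    if (bits <= 64) return false; // cannot overflow 64 bits
    if (bits >= 66) return true;  // must overflow
    return (a * b) / a != b;      // borderline case: verify exactly
}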
I was thinking about this today and stumbled upon this question; my thoughts led me to this result. TL;DR: while I find it "elegant" in that it only uses a few lines of code (it could easily be a one-liner), and involves some mild math that simplifies to something conceptually simple, this is mostly "interesting" and I haven't tested it.
If you think of an unsigned integer as being a single digit with radix 2^n where n is the number of bits in the integer, then you can map those numbers to radians around the unit circle, e.g.
radians(x) = x * (2 * pi * rad / 2^n)
When the integer overflows, it is equivalent to wrapping around the circle. So calculating the carry is equivalent to calculating the number of times multiplication would wrap around the circle. To calculate the number of times we wrap around the circle we divide radians(x) by 2pi radians. e.g.
wrap(x) = radians(x) / (2*pi*rad)
= (x * (2*pi*rad / 2^n)) / (2*pi*rad / 1)
= (x * (2*pi*rad / 2^n)) * (1 / 2*pi*rad)
= x * 1 / 2^n
= x / 2^n
Which simplifies to
wrap(x) = x / 2^n
This makes sense. The number of times a number, for example, 15 with radix 10, wraps around is 15 / 10 = 1.5, or one and a half times. However, we can't use 2 digits here (assuming we are limited to a single 2^64 digit).
Say we have a * b with radix R. We can calculate the carry by considering that
wrap(a * b) = (a * b) / R
a * wrap(b) = a * (b / R) = (a * b) / R
so wrap(a * b) = a * wrap(b), and
carry = floor(a * wrap(b))
Take for example a = 9 and b = 5, which are factors of 45 (i.e. 9 * 5 = 45).
wrap(5) = 5 / 10 = 0.5
a * wrap(5) = 9 * 0.5 = 4.5
carry = floor(9 * wrap(5)) = floor(4.5) = 4
Note that if the carry was 0, then we would not have had overflow, for example if a = 2, b=2.
In C/C++ (if the compiler and architecture supports it) we have to use long double.
Thus we have:
long double wrap = b / 18446744073709551616.0L; // this is b / 2^64
unsigned long carry = (unsigned long)(a * wrap); // floor(a * wrap(b))
bool overflow = carry > 0;
unsigned long c = a * b;
c here is the lower significant "digit", i.e. in base 10 9 * 9 = 81, carry = 8, and c = 1.
This was interesting to me in theory, so I thought I'd share it, but one major caveat is floating-point precision. Using long double, there may be rounding errors for some numbers when we calculate the wrap variable, depending on how many significant digits your compiler/architecture uses for long doubles; I believe it should be 20 or more to be sure. Another issue with this result is that it may not perform as well as some of the other solutions, simply because it uses floating point and division.
If you just want to detect overflow, how about converting to double, doing the multiplication, and then:
if |x| < 2^53, convert to int64;
if |x| < 2^63, redo the multiplication using int64;
otherwise, produce whatever error you want?
This seems to work:
#include <stdint.h>
#include <math.h>
int64_t safemult(int64_t a, int64_t b) {
    double dx;
    dx = (double)a * (double)b;
    if ( fabs(dx) < (double)9007199254740992 ) // 2^53
        return (int64_t)dx;
    if ( (double)INT64_MAX < fabs(dx) )
        return INT64_MAX;
    return a*b;
}

Bit shifts and their logical operators

The program below swaps the last (least significant) byte and the penultimate byte of a variable i of type int. I'm trying to understand why the programmer wrote this:
i = (i & LEADING_TWO_BYTES_MASK) | ((i & PENULTIMATE_BYTE_MASK) >> 8) | ((i & LAST_BYTE_MASK) << 8);
Can anyone explain to me in plain English what's going on in the program below?
#include <stdio.h>
#include <cstdlib>
#define LAST_BYTE_MASK 255                // 11111111
#define PENULTIMATE_BYTE_MASK 65280       // 1111111100000000
#define LEADING_TWO_BYTES_MASK 4294901760 // 11111111111111110000000000000000
int main(){
    unsigned int i = 0;
    printf("i = ");
    scanf("%u", &i);
    i = (i & LEADING_TWO_BYTES_MASK) | ((i & PENULTIMATE_BYTE_MASK) >> 8) | ((i & LAST_BYTE_MASK) << 8);
    printf("i = %u", i);
    system("pause");
}
Since you asked for plain english: He swaps the first and second bytes of an integer.
The expression is indeed a bit convoluted but in essence the author does this:
// Mask out relevant bytes
unsigned higher_order_bytes = i & LEADING_TWO_BYTES_MASK;
unsigned first_byte = i & LAST_BYTE_MASK;
unsigned second_byte = i & PENULTIMATE_BYTE_MASK;
// Switch positions:
unsigned first_to_second = first_byte << 8;
unsigned second_to_first = second_byte >> 8;
// Concatenate back together:
unsigned result = higher_order_bytes | first_to_second | second_to_first;
Incidentally, defining the masks using hexadecimal notation is more readable than using decimal. Furthermore, using #define here is misguided. Both C and C++ have const:
unsigned const LEADING_TWO_BYTES_MASK = 0xFFFF0000;
unsigned const PENULTIMATE_BYTE_MASK = 0xFF00;
unsigned const LAST_BYTE_MASK = 0xFF;
To understand this code you need to know what &, | and bit shifts are doing on the bit level.
It's more instructive to define your masks in hexadecimal rather than decimal, because then they correspond directly to the binary representations and it's easy to see which bits are on and off:
#define LAST 0xFF // all bits in the first byte are 1
#define PEN 0xFF00 // all bits in the second byte are 1
#define LEAD 0xFFFF0000 // all bits in the third and fourth bytes are 1
Then
i = (i & LEAD)         // leave the two most significant bytes of the 32-bit integer the same
  | ((i & PEN) >> 8)   // take the 3rd byte and shift it 8 bits right
  | ((i & LAST) << 8); // take the 4th byte and shift it 8 bits left
So the expression is swapping the two least significant bytes while leaving the two most significant bytes the same.
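For example, with a concrete value (a hypothetical demo, not from the original post):
#include <cstdio>
int main() {
    unsigned int i = 0x12345678;
    i = (i & 0xFFFF0000u) | ((i & 0xFF00u) >> 8) | ((i & 0xFFu) << 8);
    std::printf("%X\n", i); // prints 12347856: bytes 0x56 and 0x78 swapped
    return 0;
}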

Given an array of uint8_t what is a good way to extract any subsequence of bits as a uint32_t?

I have run into an interesting problem lately:
Let's say I have an array of bytes (uint8_t to be exact) of length at least one. Now I need a function that will get a subsequence of bits from this array, starting with bit X (zero-based index, inclusive) and having length L, and will return this as a uint32_t. If L is smaller than 32, the remaining high bits should be zero.
Although this is not very hard to solve, my current thoughts on how to do it seem a bit cumbersome to me. I'm thinking of a table of all the possible masks for a given byte (start with bit 0-7, take 1-8 bits) and then constructing the number one byte at a time using this table.
Can somebody come up with a nicer solution? Note that I cannot use Boost or the STL for this - and no, it is not homework, it's a problem I ran into at work and we do not use Boost or the STL in the code where this thing goes. You can assume that 0 < L <= 32 and that the byte array is large enough to hold the subsequence.
One example of correct input/output:
array: 00110011 1010 1010 11110011 01 101100
subsequence: X = 12 (zero based index), L = 14
resulting uint32_t = 00000000 00000000 00 101011 11001101
Only the first and last bytes in the subsequence will involve some bit slicing to get the required bits out, while the intermediate bytes can be shifted in whole into the result. Here's some sample code, absolutely untested -- it does what I described, but some of the bit indices could be off by one:
uint32_t extract(const uint8_t bytes[], int X, int L)
{
    uint32_t result;
    int startByte = X / 8,               /* starting byte number */
        startBit  = 7 - X % 8,           /* index of the first bit within the starting byte, from LSB */
        endByte   = (X + L - 1) / 8,     /* ending byte number */
        endBit    = 7 - (X + L - 1) % 8; /* index of the last bit within the ending byte, from LSB */
    /* Special case where start and end are within the same byte:
       just get bits from startBit down to endBit */
    if (startByte == endByte) {
        uint8_t byte = bytes[startByte];
        result = (byte >> endBit) & ((1u << L) - 1);
    }
    /* All other cases: get ending bits of the starting byte,
       all other bytes in between,
       starting bits of the ending byte */
    else {
        uint8_t byte = bytes[startByte];
        result = byte & ((1u << (startBit + 1)) - 1);
        for (int i = startByte + 1; i < endByte; i++)
            result = (result << 8) | bytes[i];
        byte = bytes[endByte];
        result = (result << (8 - endBit)) | (byte >> endBit);
    }
    return result;
}
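As a quick check (my test, not part of the answer), this reproduces the example from the question:
#include <cassert>
#include <cstdint>
int main()
{
    // 00110011 10101010 11110011 01101100, X = 12, L = 14 -> 0x2BCD
    const uint8_t bytes[4] = { 0x33, 0xAA, 0xF3, 0x6C };
    assert(extract(bytes, 12, 14) == 0x2BCD);
    return 0;
}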
Take a look at std::bitset and boost::dynamic_bitset.
I would be thinking something like loading 64 bits into a uint64_t and then shifting left and right to lose the uninteresting bits.
uint32_t extract_bits(const uint8_t* bytes, int start, int count)
{
    /* step to the byte containing the first bit so the shift amounts
       below stay within 0..63; note this reads 8 bytes from that byte on */
    bytes += start / 8;
    start %= 8;
    /* assemble the 8 bytes MSB-first to match the bit numbering of the
       stream; a raw pointer cast would be endian-dependent and would also
       break alignment and strict-aliasing rules */
    uint64_t hold = 0;
    for (int i = 0; i < 8; i++)
        hold = (hold << 8) | bytes[i];
    hold <<= start;       /* drop the leading uninteresting bits */
    hold >>= 64 - count;  /* right-align the count bits we want */
    return (uint32_t)hold;
}
For the sake of completeness, I'm adding my solution, inspired by the comments and answers here. Thanks to all who bothered to think about the problem.
static const uint8_t firstByteMasks[8] = { 0xFF, 0x7F, 0x3F, 0x1F, 0x0F, 0x07, 0x03, 0x01 };
uint32_t getBits( const uint8_t *buf, const uint32_t bitoff, const uint32_t len, const uint32_t bitcount )
{
    uint64_t result = 0;
    int32_t startByte = bitoff / 8;                  // starting byte number
    int32_t endByte = ((bitoff + bitcount) - 1) / 8; // ending byte number
    int32_t rightShift = 16 - ((bitoff + bitcount) % 8);
    if ( endByte >= len ) return -1;
    if ( rightShift == 16 ) rightShift = 8;
    result = buf[startByte] & firstByteMasks[bitoff % 8];
    result = result << 8;
    for ( int32_t i = startByte + 1; i <= endByte; i++ )
    {
        result |= buf[i];
        result = result << 8;
    }
    result = result >> rightShift;
    return (uint32_t)result;
}
A few notes: I tested the code and it seems to work just fine; however, there may be bugs. If I find any, I will update the code here. Also, there are probably better solutions!
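As a final sanity check (my test, not part of the original answer), the function reproduces the example from the question:
#include <cassert>
#include <cstdint>
int main()
{
    // 00110011 10101010 11110011 01101100, X = 12, L = 14 -> 0x2BCD
    const uint8_t buf[4] = { 0x33, 0xAA, 0xF3, 0x6C };
    assert(getBits(buf, 12, 4, 14) == 0x2BCD);
    return 0;
}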