Construct bitset from array of integers - C++

It's easy to construct a bitset<64> from a uint64_t:
uint64_t flags = ...;
std::bitset<64> bs{flags};
But is there a good way to construct a bitset<64 * N> from a uint64_t[N], such that flags[0] would refer to the lowest 64 bits?
uint64_t flags[3];
// ... some assignments
std::bitset<192> bs{flags}; // this very unhelpfully compiles
// yet is totally invalid
Or am I stuck having to call set() in a loop?

std::bitset has no range constructor, so you will have to loop, but setting every bit individually with std::bitset::set() is slower than it needs to be. std::bitset supports binary operators, so you can at least set 64 bits in bulk:
std::bitset<192> bs;
for (int i = 2; i >= 0; --i) {
    bs <<= 64;
    bs |= flags[i];
}
Update: In the comments, @icando raises the valid concern that bit shifts are O(N) operations for std::bitsets. For very large bitsets, this will ultimately eat up the performance gain of bulk processing. In my benchmarks, the break-even point for a std::bitset<N * 64>, compared to a simple loop that sets the bits individually and does not mutate the input data:
int pos = 0;
for (auto f : flags) {
    for (int b = 0; b < 64; ++b) {
        bs.set(pos++, f >> b & 1);
    }
}
is somewhere around N == 200 (GCC 4.9 on x86-64 with libstdc++ and -O2). Clang performs somewhat worse, breaking even around N == 160. GCC with -O3 pushes it up to N == 250.
Taking the lower end, this means that if you want to work with bitsets of 10000 bits or larger, this approach may not be for you. On 32-bit platforms (such as common ARMs), the threshold will probably be lower, so keep that in mind when you work with 5000-bit bitsets on such platforms. I would argue, however, that somewhere far before this point you should have asked yourself whether a bitset is really the right choice of container.

If initializing from a range is important, you might consider using std::vector<bool> instead; it does have a constructor taking a pair of iterators.
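For illustration, a minimal sketch of that route (the helper name to_bits is mine, not a library function): expand the words bit by bit and let the vector grow. One could equally hand the constructor any iterator pair that produces bools.

#include <cstdint>
#include <vector>

std::vector<bool> to_bits(const std::uint64_t* flags, std::size_t n)
{
    std::vector<bool> bits;
    bits.reserve(n * 64);
    for (std::size_t i = 0; i < n; ++i)
        for (int b = 0; b < 64; ++b)
            bits.push_back(((flags[i] >> b) & 1) != 0); // lowest bits first
    return bits;
}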

Is it possible to micro-optimize "x = max(a,b); y = min(a,b);"?

I had an algorithm that started out like
int sumLargest2(int *arr, size_t n)
{
    int largest(max(arr[0], arr[1])), secondLargest(min(arr[0], arr[1]));
    // ...
and I realized that this is probably not optimal, because calling max and then min is repetitious: once you have found the maximum, the information required to know the minimum is already there. So I figured out that I could do
int largest = max(arr[0], arr[1]);
int secondLargest = arr[0] == largest ? arr[1] : arr[0];
to shave off the useless invocation of min, but I'm not sure that actually saves any operations. Are there any fancy bit-shifting algorithms that can do the equivalent of
int largest(max(arr[0], arr[1])), secondLargest(min(arr[0], arr[1]));
?
In C++, you can use std::minmax to produce a std::pair of the minimum and the maximum. This is particularly easy in combination with std::tie:
#include <algorithm>
#include <utility>
int largest, secondLargest;
std::tie(secondLargest, largest) = std::minmax(arr[0], arr[1]);
GCC, at least, is capable of optimizing the call to minmax into a single comparison, identical to the result of the C code below.
In C, you could write the test out yourself:
int largest, secondLargest;
if (arr[0] < arr[1]) {
    largest = arr[1];
    secondLargest = arr[0];
} else {
    largest = arr[0];
    secondLargest = arr[1];
}
How about:
int largestIndex = arr[1] > arr[0];
int largest = arr[largestIndex];
int secondLargest = arr[1 - largestIndex];
The first line relies on the implicit conversion of a boolean result to 1 in the case of true and 0 in the case of false.
I'm going to assume that you'd rather solve the larger problem... That is, getting the sum of the largest two numbers in an array.
What you are trying to do is a std::partial_sort().
Let's implement it.
#include <algorithm>  // std::partial_sort
#include <functional> // std::greater

int sumLargest2(int *arr, size_t n) {
    int *first = arr;
    int *middle = arr + 2;
    int *last = arr + n;
    std::partial_sort(first, middle, last, std::greater<int>());
    return arr[0] + arr[1];
}
And if you're unable to modify arr, then I'd recommend looking into std::partial_sort_copy().
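For illustration, a sketch of that variant (assuming n >= 2; the function name is mine): std::partial_sort_copy writes the two largest elements into a separate buffer, leaving arr untouched.

#include <algorithm>
#include <cstddef>
#include <functional>

int sumLargest2Const(const int* arr, std::size_t n)
{
    int top[2];
    // Copies out only the two largest elements; arr itself is not modified.
    std::partial_sort_copy(arr, arr + n, top, top + 2, std::greater<int>());
    return top[0] + top[1];
}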
x = max(a, b);
y = a + b - x;
It won't necessarily be faster, but it will be different.
Also beware of overflows.
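If overflow is the concern, one hedged variant is to do the intermediate arithmetic in unsigned types, where wraparound is well-defined, so a + b - max(a, b) still recovers min(a, b) on two's-complement targets:

#include <algorithm>

// Sketch: unsigned wraparound keeps the identity intact even if a + b
// overflows the range of int.
void maxmin_no_overflow(int a, int b, int& x, int& y)
{
    x = std::max(a, b);
    y = (int)((unsigned)a + (unsigned)b - (unsigned)x);
}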
If your intention is to reduce the function calls needed to find the min and max, you can try std::minmax_element. It is available since C++11.
auto result = std::minmax_element(arr, arr + n);
std::cout << "min: " << *result.first << "\n";
std::cout << "max: " << *result.second << "\n";
If you just want to find the bigger of two values, go:
if (a > b)
{
    largest = a;
    second = b;
}
else
{
    largest = b;
    second = a;
}
No function calls, one comparison, two assignments.
I'm assuming C++...
Short answer: use std::minmax and compile with the right optimizations and the right instruction-set parameters.
Long ugly answer: the compiler cannot make all the assumptions necessary to make this really, really fast. You can. In this case, you can change the algorithm to process all data first, and you can force alignment on the data. Doing all this, you can use intrinsics to make it faster.
Although I haven't tested it in this particular case, I've seen enormous performance improvements using these guidelines.
Since you're not passing 2 integers to the function, I'm assuming you're using an array and want to iterate it somehow. You now have a choice to make: make 2 arrays and use min/max, or use 1 array with both a and b. This decision alone can already influence the performance.
If you have 2 arrays, these can be allocated on 32-byte boundaries with aligned malloc's and then processed using intrinsics. If you are going for real, raw performance - this is the way to go.
For example, let's assume you have AVX2. (NOTE: I'm not sure if you do, and you SHOULD check this using CPUID!) Go to the cheat sheet here: https://software.intel.com/sites/landingpage/IntrinsicsGuide/ and pick your poison.
The intrinsics you're looking for are in this case probably:
_mm256_min_epi32
_mm256_max_epi32
_mm256_stream_load_si256
If you have to do this for the entire array, you probably want to keep all the stuff in a single __m256i register before merging the individual items. E.g.: do a min/max per 256-bit vector, and when the loop is done, extract the 32-bit items and do a min/max on those.
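As a rough illustration of these intrinsics, here is a sketch rather than a tuned implementation: it assumes AVX2 support, 32-byte-aligned input and output arrays, and a length that is a multiple of 8 (the function name is mine).

#include <immintrin.h>
#include <cstddef>

void minmax_avx2(const int* a, const int* b, int* mins, int* maxs, std::size_t n)
{
    for (std::size_t i = 0; i < n; i += 8) {
        // Aligned 256-bit loads: 8 ints per register.
        __m256i va = _mm256_load_si256(reinterpret_cast<const __m256i*>(a + i));
        __m256i vb = _mm256_load_si256(reinterpret_cast<const __m256i*>(b + i));
        // Element-wise min/max over all 8 lanes at once.
        _mm256_store_si256(reinterpret_cast<__m256i*>(mins + i), _mm256_min_epi32(va, vb));
        _mm256_store_si256(reinterpret_cast<__m256i*>(maxs + i), _mm256_max_epi32(va, vb));
    }
}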
Long nicer answer: So ... as for the compiler. Compilers do attempt to optimize these kinds of things, but run into problems.
If you have 2 different arrays that you process, the compiler has to know that they are different in order to be able to optimize it. This is the reason why stuff like restrict exists, which tells the compiler exactly this little thing you probably already knew while writing the code.
Also, the compiler doesn't know your memory is aligned, so it has to check this and branch... for each call. We don't want this, which means we want it to inline its stuff. So add inline, put it in a header file, and that's that. You can also use alignment attributes to give the compiler a hint.
Your compiler also didn't get the hint that the int* won't change over time. If it cannot change, it's a good idea to tell the compiler so using the const keyword.
A compiler uses an instruction set to do the compilation. Normally, it already uses SSE, but AVX2 can help a lot (as I've shown with the intrinsics above). If you can compile with those flags, make sure to use them - they help a lot.
Run in release mode, compile with optimizations on 'fast', and see what happens under the hood. If you do all this, you should see vpmax... instructions appearing in the inner loops, which means that the compiler uses these vector instructions just fine.
I don't know what else you want to do in the loop... if you use all these instructions you should hit the memory speed on big arrays.
How about a time-space trade-off?
#include <utility>

template<typename T>
std::pair<T, T> minmax(T const& a, T const& b)
{
    return b < a ? std::make_pair(b, a) : std::make_pair(a, b);
}

// main
std::pair<int, int> values = minmax(a[0], a[1]);
int largest = values.second;
int secondLargest = values.first;

What is the performance of std::bitset?

I recently asked a question on Programmers regarding reasons to use manual bit manipulation of primitive types over std::bitset.
From that discussion I have concluded that the main reason is its comparatively poorer performance, although I'm not aware of any measured basis for this opinion. So the next question is:
what is the performance hit, if any, likely to be incurred by using std::bitset over bit-manipulation of a primitive?
The question is intentionally broad, because after looking online I haven't been able to find anything, so I'll take what I can get. Basically I'm after a resource that provides some profiling of std::bitset vs 'pre-bitset' alternatives to the same problems on some common machine architecture using GCC, Clang and/or VC++. There is a very comprehensive paper which attempts to answer this question for bit vectors:
http://www.cs.up.ac.za/cs/vpieterse/pub/PieterseEtAl_SAICSIT2010.pdf
Unfortunately, it either predates std::bitset or considers it out of scope, so it focuses on vector/dynamic-array implementations instead.
I really just want to know whether std::bitset is better than the alternatives for the use cases it is intended to solve. I already know that it is easier and clearer than bit-fiddling on an integer, but is it as fast?
Update
It's been ages since I posted this one, but:
I already know that it is easier and clearer than bit-fiddling on an integer, but is it as fast?
If you are using bitset in a way that does actually make it clearer and cleaner than bit-fiddling, like checking one bit at a time instead of using a bit mask, then you inevitably lose all the benefits that bitwise operations provide, like being able to check whether 64 bits are set at once against a mask, or using FFS instructions to quickly determine which bit is set among 64 bits.
I'm not sure that bitset incurs a penalty in every possible use (e.g. using its bitwise operator&), but if you use it like a fixed-size boolean array, which is pretty much the way I always see people using it, then you generally lose all the benefits described above. We unfortunately can't get that level of expressiveness of just accessing one bit at a time with operator[] and have the optimizer figure out all the bitwise manipulations, FFS, FFZ, and so forth going on for us, at least not since the last time I checked (otherwise bitset would be one of my favorite structures).
Now if you are going to use bitset<N> interchangeably with, say, uint64_t bits[N/64], accessing both the same way using bitwise operations, it might be on par (I haven't checked since this ancient post). But then you lose many of the benefits of using bitset in the first place.
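To make that contrast concrete, here is a small sketch of the same query written both ways (the function names are mine):

#include <bitset>
#include <cstddef>

// "Boolean array" style: one indexed access per bit; the per-access indexing
// cost dominates the actual bit test.
bool any_set_slow(const std::bitset<64>& b, const std::bitset<64>& mask)
{
    for (std::size_t i = 0; i < 64; ++i)
        if (mask[i] && b[i]) return true;
    return false;
}

// Bulk style, keeping the bitwise benefits: one AND over the whole set.
bool any_set_fast(const std::bitset<64>& b, const std::bitset<64>& mask)
{
    return (b & mask).any();
}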
for_each method
In the past I got into some misunderstandings, I think, when I proposed a for_each method to iterate through things like vector<bool>, deque, and bitset. The point of such a method is to utilize the internal knowledge of the container to iterate through elements more efficiently while invoking a functor, just as some associative containers offer a find method of their own instead of using std::find to do a better than linear-time search.
For example, you can iterate through all set bits of a vector<bool> or bitset if you had internal knowledge of these containers by checking for 64 elements at a time using a 64-bit mask when 64 contiguous indices are occupied, and likewise use FFS instructions when that's not the case.
But an iterator design having to do this type of scalar logic in operator++ would inevitably have to do something considerably more expensive, just by the nature in which iterators are designed in these peculiar cases. bitset lacks iterators outright, which often pushes people who want to avoid bitwise logic to use operator[] to check each bit individually in a sequential loop that just wants to find out which bits are set. That, too, is not nearly as efficient as what a for_each method implementation could do.
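std::bitset does not expose its storage, so here is a sketch of such a for_each over a raw word array instead; __builtin_ctzll is GCC/Clang's count-trailing-zeros builtin, playing the role of the FFS instruction mentioned above.

#include <cstddef>
#include <cstdint>

template <typename F>
void for_each_set_bit(const std::uint64_t* words, std::size_t nwords, F f)
{
    for (std::size_t w = 0; w < nwords; ++w) {
        std::uint64_t word = words[w];
        while (word) {
            f(w * 64 + __builtin_ctzll(word)); // index of lowest set bit
            word &= word - 1;                  // clear lowest set bit
        }
    }
}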
Double/Nested Iterators
Another alternative to the for_each container-specific method proposed above would be to use double/nested iterators: that is, an outer iterator which points to a sub-range of a different type of iterator. Client code example:
for (auto outer_it = bitset.nbegin(); outer_it != bitset.nend(); ++outer_it)
{
    for (auto inner_it = outer_it->first; inner_it != outer_it->last; ++inner_it)
    {
        // do something with *inner_it (bit index)
    }
}
While not conforming to the flat type of iterator design available now in standard containers, this can allow some very interesting optimizations. As an example, imagine a case like this:
bitset<64> bits = 0x1fbf; // 0b1111110111111;
In that case, the outer iterator can, with just a few bitwise iterations (FFZ/or/complement), deduce that the first range of bits to process would be bits [0, 6), at which point we can iterate through that sub-range very cheaply through the inner/nested iterator (it would just increment an integer, making ++inner_it equivalent to just ++int). Then when we increment the outer iterator, it can then very quickly, and again with a few bitwise instructions, determine that the next range would be [7, 13). After we iterate through that sub-range, we're done. Take this as another example:
bitset<16> bits = 0xffff;
In such a case, the first and last sub-range would be [0, 16), and the bitset could determine that with a single bitwise instruction at which point we can iterate through all set bits and then we're done.
This type of nested iterator design would map particularly well to vector<bool>, deque, and bitset as well as other data structures people might create like unrolled lists.
I say that in a way that goes beyond just armchair speculation, since I have a set of data structures which resemble the likes of deque and which are actually on par with sequential iteration of vector (still noticeably slower for random access, especially if we're just storing a bunch of primitives and doing trivial processing). However, to achieve times comparable to vector for sequential iteration, I had to use these types of techniques (the for_each method and double/nested iterators) to reduce the amount of processing and branching going on in each iteration. I could not rival the times otherwise, using just the flat iterator design and/or operator[]. And I'm certainly not smarter than the standard library implementers, but the fact that I came up with a deque-like container which can be sequentially iterated much faster strongly suggests to me that this is an issue with the standard interface design of iterators, which in these peculiar cases comes with overhead the optimizer cannot optimize away.
Old Answer
I'm one of those who would give you a similar performance answer, but I'll try to give you something a bit more in-depth than "just because". It is something I came across through actual profiling and timing, not merely distrust and paranoia.
One of the biggest problems with bitset and vector<bool> is that their interface design is "too convenient" if you want to use them like an array of booleans. Optimizers are great at obliterating all that structure you establish to provide safety, reduce maintenance cost, make changes less intrusive, etc. They do an especially fine job with selecting instructions and allocating the minimal number of registers to make such code run as fast as the not-so-safe, not-so-easy-to-maintain/change alternatives.
The part that makes the bitset interface "too convenient" at the cost of efficiency is the random-access operator[], as well as the iterator design for vector<bool>. When you access one of these at index n, the code has to first figure out which byte the nth bit belongs to, and then the sub-index of the bit within that byte. That first phase typically involves a division/right-shift along with a modulo/bitwise AND, which is more costly than the actual bit operation you're trying to perform.
The iterator design for vector<bool> faces a similarly awkward dilemma: it either has to branch into different code every 8+ iterations, or pay the kind of indexing cost described above. If the former is done, the logic becomes asymmetrical across iterations, and iterator designs tend to take a performance hit in those rare cases. To exemplify: if vector<bool> had a for_each method of its own, it could process, say, 64 elements at once by masking them against a 64-bit mask when all of them are set, without checking each bit individually, and it could likewise use FFS to figure out ranges all at once. An iterator design inevitably has to do this in a scalar fashion, or store more state which has to be redundantly checked on every iteration.
For random access, optimizers can't seem to optimize away this indexing overhead of figuring out which byte and relative bit to access (perhaps a bit too runtime-dependent) when it's not needed, and you tend to see significant performance gains with more manual code that processes bits sequentially with advance knowledge of which byte/word/dword/qword it's working on. It's somewhat of an unfair comparison, but the difficulty with std::bitset is that there's no way to make a fair comparison in cases where the code knows which byte it wants to access in advance, and more often than not you tend to have that info in advance. It's an apples-to-oranges comparison in the random-access case, but you often only need oranges.
Perhaps that wouldn't be the case if the interface design involved a bitset where operator[] returned a proxy, requiring a two-index access pattern to use. For example, in such a case, you would set bits 6 and 7 by writing bitset[0][6] = true; bitset[0][7] = true; with a template parameter to indicate the size of the proxy (64 bits, e.g.). A good optimizer may be able to take such a design and make it rival the manual, old-school way of doing the bit manipulation by hand, by translating that into: bitset |= 0xC0;
Another design that might help is if bitsets provided a for_each_bit kind of method, passing a bit proxy to the functor you provide. That might actually be able to rival the manual method.
std::deque has a similar interface problem. Its performance shouldn't be that much slower than std::vector for sequential access, yet unfortunately we access it sequentially using operator[], which is designed for random access, or through an iterator, and the internal representation of deques simply doesn't map very efficiently to an iterator-based design. If deque provided a for_each kind of method of its own, then it could potentially get a lot closer to std::vector's sequential-access performance. These are some of the rare cases where the Sequence interface design comes with some efficiency overhead that optimizers often can't obliterate. Often good optimizers can make convenience come free of runtime cost in a production build, but unfortunately not in all cases.
Sorry!
Also sorry, in retrospect I wandered a bit with this post talking about vector<bool> and deque in addition to bitset. It's because we had a codebase where the use of these three, and particularly iterating through them or using them with random-access, were often hotspots.
Apples to Oranges
As emphasized in the old answer, comparing straightforward usage of bitset to primitive types with low-level bitwise logic is comparing apples to oranges. It's not like bitset is implemented very inefficiently for what it does. If you genuinely need to access a bunch of bits with a random access pattern which, for some reason or other, needs to check and set just one bit a time, then it might be ideally implemented for such a purpose. But my point is that almost all use cases I've encountered didn't require that, and when it's not required, the old school way involving bitwise operations tends to be significantly more efficient.
Did a short test profiling std::bitset vs bool arrays for sequential and random access - you can too:
#include <iostream>
#include <bitset>
#include <cstdlib> // rand
#include <ctime>   // timer

inline unsigned long get_time_in_ms()
{
    return (unsigned long)((double(clock()) / CLOCKS_PER_SEC) * 1000);
}

void one_sec_delay()
{
    unsigned long end_time = get_time_in_ms() + 1000;
    while (get_time_in_ms() < end_time)
    {
    }
}

int main(int argc, char **argv)
{
    srand(get_time_in_ms());

    using namespace std;

    bitset<5000000> bits;
    bool *bools = new bool[5000000];

    unsigned long current_time, difference1, difference2;
    double total;

    one_sec_delay();
    total = 0;
    current_time = get_time_in_ms();

    // bool array: random access
    for (unsigned int num = 0; num != 200000000; ++num)
    {
        bools[rand() % 5000000] = rand() % 2;
    }
    difference1 = get_time_in_ms() - current_time;

    // bool array: sequential access
    current_time = get_time_in_ms();
    for (unsigned int num2 = 0; num2 != 100; ++num2)
    {
        for (unsigned int num = 0; num != 5000000; ++num)
        {
            total += bools[num];
        }
    }
    difference2 = get_time_in_ms() - current_time;

    cout << "Bool:" << endl << "sum total = " << total
         << ", random access time = " << difference1
         << ", sequential access time = " << difference2 << endl << endl;

    one_sec_delay();
    total = 0;
    current_time = get_time_in_ms();

    // bitset: random access
    for (unsigned int num = 0; num != 200000000; ++num)
    {
        bits[rand() % 5000000] = rand() % 2;
    }
    difference1 = get_time_in_ms() - current_time;

    // bitset: sequential access
    current_time = get_time_in_ms();
    for (unsigned int num2 = 0; num2 != 100; ++num2)
    {
        for (unsigned int num = 0; num != 5000000; ++num)
        {
            total += bits[num];
        }
    }
    difference2 = get_time_in_ms() - current_time;

    cout << "Bitset:" << endl << "sum total = " << total
         << ", random access time = " << difference1
         << ", sequential access time = " << difference2 << endl << endl;

    delete[] bools;
    cin.get();
    return 0;
}
Please note: the outputting of the sum total is necessary so the compiler doesn't optimise out the for loop - which some do if the result of the loop isn't used.
Under GCC x64 with the following flags: -O2;-Wall;-march=native;-fomit-frame-pointer;-std=c++11;
I get the following results:
Bool array:
random access time = 4695, sequential access time = 390
Bitset:
random access time = 5382, sequential access time = 749
Not a great answer here, but rather a related anecdote:
A few years ago I was working on real-time software and we ran into scheduling problems. There was a module which was way over time-budget, and this was very surprising because the module was only responsible for some mapping and packing/unpacking of bits into/from 32-bit words.
It turned out that the module was using std::bitset. We replaced this with manual operations and the execution time decreased from 3 milliseconds to 25 microseconds. That was a significant performance issue and a significant improvement.
The point is, the performance issues caused by this class can be very real.
In addition to what the other answers said about the performance of access, there may also be a significant space overhead: Typical bitset<> implementations simply use the longest integer type to back their bits. Thus, the following code
#include <bitset>
#include <stdio.h>
struct Bitfield {
    unsigned char a:1, b:1, c:1, d:1, e:1, f:1, g:1, h:1;
};

struct Bitset {
    std::bitset<8> bits;
};

int main() {
    printf("sizeof(Bitfield) = %zd\n", sizeof(Bitfield));
    printf("sizeof(Bitset) = %zd\n", sizeof(Bitset));
    printf("sizeof(std::bitset<1>) = %zd\n", sizeof(std::bitset<1>));
}
produces the following output on my machine:
sizeof(Bitfield) = 1
sizeof(Bitset) = 8
sizeof(std::bitset<1>) = 8
As you can see, my compiler allocates a whopping 64 bits to store a single bit; with the bitfield approach, I only need to round up to eight bits.
This factor eight in space usage can become important if you have a lot of small bitsets.
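A small illustration of that factor (the struct names are mine); the sizes in the comments assume the 8-byte std::bitset<1> measured above:

#include <bitset>

struct Wasteful { std::bitset<1> flags[64]; }; // 64 words: 512 bytes here
struct Compact  { std::bitset<64> flags; };    // one word: 8 bytes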
Rhetorical question: Why is std::bitset written in such an inefficient way?
Answer: It is not.
Another rhetorical question: What is the difference between:
std::bitset<128> a = src;
a[i] = true;
a = a << 64;
and
std::bitset<129> a = src;
a[i] = true;
a = a << 63;
Answer: a 50x difference in performance: http://quick-bench.com/iRokweQ6JqF2Il-T-9JSmR0bdyw (128 bits is exactly two 64-bit words, so all operations stay word-aligned; 129 bits needs a third word whose unused bits must be trimmed after every mutating operation, and the shifts no longer line up with word boundaries.)
You need to be very careful what you ask for: bitset supports a lot of things, but each has its own cost. With correct handling you will have exactly the same behavior as raw code:
void f(std::bitset<64>& b, int i)
{
    b |= 1L << i;
    b = b << 15;
}

void f(unsigned long& b, int i)
{
    b |= 1L << i;
    b = b << 15;
}
Both generate same assembly: https://godbolt.org/g/PUUUyd (64 bit GCC)
Another thing is that bitset is more portable, but this has a cost too:
void h(std::bitset<64>& b, unsigned i)
{
    b = b << i;
}

void h(unsigned long& b, unsigned i)
{
    b = b << i;
}
If i >= 64, the bitset simply becomes zero, while in the unsigned case we have UB.
void h(std::bitset<64>& b, unsigned i)
{
    if (i < 64) b = b << i;
}

void h(unsigned long& b, unsigned i)
{
    if (i < 64) b = b << i;
}
With a check preventing the UB, both generate the same code.
Another place where they differ is set() versus operator[]. The former is safe, meaning you will never get UB, but it costs you a branch. operator[] has UB if you use a wrong index, but is as fast as var |= 1L << i; - provided, of course, that the std::bitset does not need more bits than the biggest integer type available on the system, because otherwise the implementation needs to split the value to find the correct element in its internal table. This means that for std::bitset<N>, the size N is very important for performance: if it is bigger or smaller than the optimal one, you will pay the cost of it.
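A short sketch of that difference (the function name is mine):

#include <bitset>
#include <cstddef>

void set_vs_index(std::bitset<64>& b, std::size_t i)
{
    b.set(i, true); // range-checked: throws std::out_of_range if i >= 64
    b[i] = true;    // unchecked: UB if i >= 64, but as fast as b |= 1UL << i
}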
Overall I find that the best way is to use something like this:
constexpr size_t minBitSet = sizeof(std::bitset<1>) * 8;

template<size_t N>
using fasterBitSet = std::bitset<minBitSet * ((N + minBitSet - 1) / minBitSet)>;
This will remove the cost of trimming the excess bits: http://quick-bench.com/Di1tE0vyhFNQERvucAHLaOgucAY
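A usage sketch; the concrete rounded size in the comment assumes the common 64-bit word under the hood:

fasterBitSet<100> bits; // instantiates std::bitset<128> on a 64-bit libstdc++
static_assert(fasterBitSet<100>().size() % minBitSet == 0,
              "size is rounded up to a whole number of words");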

Which is the fastest way of generating a vector of random bits?

I have the following question. Suppose I want to generate a std::vector<bool>, or std::vector<unsigned int>, or even a C-style array, that contains only 0s and 1s, uniformly distributed (i.e., each run should produce something like 0110 or 1011; you get the idea). There are two approaches:
generate each element using an std::uniform_int_distribution(0,1)
generate a random integer between 0 and 2^n-1 (where n is the number of bits), then use a bitset<n>
I am talking here about large vectors, and did some timing but didn't get any clear insight.
Does anyone know which method is more efficient? In my opinion the second approach should be better, but I am not convinced.
If performance is a concern and your output set needs to be large, I'd skip vector<bool>. While compact, it is slower when accessing an individual member.
Either use a std::vector<unsigned int> and call .resize(N) on it to make sure it's large enough, or allocate a C-style array via malloc. Either of these will have performant access if you use [index] to access.
Secondly, get your own RNG; many system ones suck. E.g. MSVC uses an LCG, albeit shifted right by 16 bits. Even so, http://en.wikipedia.org/wiki/Linear_congruential_generator notes that they should be avoided if you really care. My recommendation is an Xor-Shift RNG, because it's (a) blindingly fast and (b) designed by George Marsaglia, who was a bit of an RNG expert.
The other advantage is that you can get your own RNG to return a full 32-bit unsigned int per call, versus MSVC's rand() (and possibly others) which only provides 15 bits per call. You would therefore make more than twice as many calls to rand() as to your own generator.
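For reference, a minimal sketch of such a generator (Marsaglia's xorshift32 with his published 13/17/5 shift triple; the seed must be nonzero), which also provides the nextRandom() used by the code below:

#include <cstdint>

static std::uint32_t rng_state = 2463534242u; // any nonzero seed

unsigned int nextRandom()
{
    std::uint32_t x = rng_state;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return rng_state = x;
}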
To actually fill the array, you'd do something like:
void fillarray(unsigned int *data, unsigned int count)
{
    unsigned int randomValue;
    for (unsigned int index = 0; index < count; index++)
    {
        // Refill with 32 fresh random bits every 32 elements.
        if ((index & 31) == 0)
        {
            randomValue = nextRandom();
        }
        // Peel off one bit per element.
        data[index] = randomValue & 1;
        randomValue >>= 1;
    }
}

C++: building iterator from bits

I have a bitmap and would like to return an iterator over the positions of set bits. Right now I just walk the whole bitmap, and if a bit is set, I provide the next position. I believe this could be done more efficiently: for example, by statically building an array for each combination of bits in a single byte and returning a vector of positions. This can't be done for a whole int, because the array would be too big. But maybe there are better solutions? Do you know any smart algorithms for this?
I can suggest several ideas.
It turns out modern CPUs have dedicated instructions for finding the next set bit in a 32- or 64-bit word.
I very much like your idea of constructing the iterator for the whole bitmap from prepared, efficient per-byte mini-iterators; this is really cool and I'm surprised I've never seen it before!
If your bitmap is very sparse, you may represent it in some other form, such as a balanced tree, where iteration algorithms are quite well-known.
If your bitmap is sparse but with dense regions (that sounds exotic, but I've encountered situations where this was exactly the case), use a balanced tree of small (32-bit or 64-bit) bitmaps and use a combined iteration algorithm for a tree and for bits of a word.
To avoid the memory overhead of an explicit tree, use an implicit one, like in the canonical heapsort algorithm. After your bitset is ready and will not be mutated, build a "pyramid" on top of it where level(N+1)[i] = level(N)[2*i] | level(N)[2*i+1]. This will allow you to rapidly skip uninhabited regions of the bitset, and iteration will be done in a fashion similar to iterating over a regular binary tree. You might as well build a pyramid of inhabitance starting from bytes, etc.: it all depends on how sparse your bitset is.
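A minimal sketch of building one pyramid level under that rule, written for clarity rather than speed (a real implementation would work a word at a time; the function name is mine):

#include <cstddef>
#include <vector>

std::vector<bool> build_level(const std::vector<bool>& level)
{
    // parent[i] = level[2*i] | level[2*i+1]
    std::vector<bool> parent((level.size() + 1) / 2, false);
    for (std::size_t i = 0; i < level.size(); ++i)
        if (level[i]) parent[i / 2] = true;
    return parent;
}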
There are well-known bit tricks for finding the number of leading zeros in a word; see, for example, the code of Java's standard libraries.
You might gain a lot of performance by using a passive iterator instead of an active one, i.e. instead of begin() and operator++(), provide a foreach(F) function for your bitset, where F has an operator(). If you need passive iteration with early termination, make F's operator() return a boolean that denotes whether termination is requested.
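A sketch of that interface (the names are illustrative): the functor returns false to request termination.

#include <cstddef>
#include <vector>

template <typename F>
void foreach_set_bit(const std::vector<bool>& bits, F f)
{
    for (std::size_t i = 0; i < bits.size(); ++i)
        if (bits[i] && !f(i)) // stop when the callback returns false
            return;
}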
EDIT: I couldn't resist trying out your approach with preparing iterators for bytes. I wrote a code generator in C# 2.0 that produces code of the following form:
IEnumerable<int> bits(byte[] bytes) {
    for (int i = 0; i < bytes.Length; ++i) {
        int oi = 8 * i;
        switch (bytes[i]) {
            ....
            case 74: yield return oi + 1; yield return oi + 4; yield return oi + 6; break;
            ....
        }
    }
}
I compared its performance for counting the bits of a random 50%-filled byte array (10 MB) with the performance of code that does not use iterators at all and consists of two loops:
for (int i = 0; i < bytes.Length; ++i) {
    byte b = bytes[i];
    for (int j = 7; j >= 0; --j) {
        if (((int)b & (1 << j)) != 0) s++;
    }
}
The second code snippet is only about 1.66 times faster than the first (~1.5 s vs. ~2.5 s). I think that sparser bit arrays might even make the first code outperform the second.