High Performance Computing: the use of shared_array vs atomics? - c++

I'm curious if anyone here has knowledge of the efficiency of atomics, specifically std::atomic<int>. My problem goes as follows:
I have a data set, say data = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}, that is passed into an algorithm algo(begin(data), end(data)). algo partitions the data into chunks and executes each chunk asynchronously, so algo would perform its operation on, say, 4 different chunks:
{1, 2, 3}
{4, 5, 6}
{7, 8, 9}
{10, 11, 12}
In each separate partition I need to return, at the end of that partition, the count of elements that satisfy a predicate op:
// partition lambda function
{
    // 'it' corresponds to the position in its respective partition
    if (op(*it))
        count++;
    // return the count at the end of this partition
    return count;
}
The problem is that I'm going to run into a data race just incrementing one variable with 4 chunks executing asynchronously. I was thinking of two possible solutions:
Use a std::atomic
The problem here is that I know very little about C++'s atomics, and from what I've heard they can be inefficient. Is this true? What results should I expect to see from using atomics to keep track of a count?
Use a shared array, where the size is the partition count
I know my shared arrays pretty well, so this idea doesn't seem too bad, but I'm unsure how it would hold up when a very small chunk size is given, which would make the shared array that keeps track of the count at the end of each partition quite large. It would be useful, however, as the algorithm doesn't have to wait for anything to finish before incrementing; it simply places its respective count in the shared array.
So with both ideas in mind, I could possibly implement it as:
// partition lambda function, count is now atomic
{
    // 'it' corresponds to the position in its respective partition
    if (op(*it))
        count++;
    // return the count at the end of this partition
    return count.load();
}
// partition lambda function, count is placed in a shared array that will be
// accessed later instead of returned
{
    int count = 0;
    // 'it' corresponds to the position in its respective partition
    if (op(*it))
        count++;
    // total count at the end of each partition
    // (ignore the fact that partition_id = 0 wouldn't work)
    shared_arr[partition_id] = shared_arr[partition_id - 1] + count;
}
Any ideas on atomic vs. shared array?
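To make the two ideas concrete, here is a minimal, self-contained sketch of both; the use of std::async and the fixed chunking scheme are stand-ins for illustration, not what algo actually does:

#include <atomic>
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> data{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
    auto op = [](int x) { return x % 2 == 0; }; // example predicate
    const int partitions = 4;
    const int chunk = static_cast<int>(data.size()) / partitions;

    // Idea 1: every chunk bumps one shared std::atomic<int>.
    std::atomic<int> atomic_count{0};
    {
        std::vector<std::future<void>> tasks;
        for (int p = 0; p < partitions; ++p)
            tasks.push_back(std::async(std::launch::async, [&, p] {
                for (int i = p * chunk; i < (p + 1) * chunk; ++i)
                    if (op(data[i]))
                        atomic_count.fetch_add(1, std::memory_order_relaxed);
            }));
    } // the futures' destructors join the tasks

    // Idea 2: one plain slot per partition, no sharing, summed afterwards.
    std::vector<int> counts(partitions, 0);
    {
        std::vector<std::future<void>> tasks;
        for (int p = 0; p < partitions; ++p)
            tasks.push_back(std::async(std::launch::async, [&, p] {
                int local = 0; // counting locally also avoids false sharing on counts[]
                for (int i = p * chunk; i < (p + 1) * chunk; ++i)
                    if (op(data[i]))
                        ++local;
                counts[p] = local;
            }));
    }
    const int array_count = std::accumulate(counts.begin(), counts.end(), 0);

    std::cout << atomic_count << ' ' << array_count << '\n'; // both print 6
}

Both produce the same total; the difference is whether the increment itself is shared or only the final per-partition result.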

Related

Why is finding elements most efficient in arrays in C++?

I need a fast STL container for finding if an element exists in it, so I tested arrays, vectors, sets, and unordered sets. I thought that sets were optimized for finding elements, because of unique and ordered values, but the fastest for 10 million iterations are:
arrays (0.3 secs)
vectors (1.7 secs)
unordered sets (1.9 secs)
sets (3 secs)
Here is the code:
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <set>
#include <unordered_set>
#include <vector>

int main() {
    using std::cout, std::endl, std::set, std::unordered_set, std::vector, std::find;
    int i;
    const long ITERATIONS = 10000000;

    int a[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
    for (int i = 0; i < ITERATIONS; i++) {
        if (find(a, a + 16, rand() % 64) == a + 16) {}
        else {}
    }

    vector<int> v{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
    for (i = 0; i < ITERATIONS; i++) {
        if (find(v.begin(), v.end(), rand() % 64) == v.end()) {}
        else {}
    }

    set<int> s({0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15});
    for (i = 0; i < ITERATIONS; i++) {
        if (find(s.begin(), s.end(), rand() % 64) == s.end()) {}
        else {}
    }

    unordered_set<int> us({0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15});
    for (i = 0; i < ITERATIONS; i++) {
        if (find(us.begin(), us.end(), rand() % 64) == us.end()) {}
        else {}
    }
}
Please remember that in C and C++ there is the as-if rule!
This means the compiler can transform the code by any means (even by dropping code) as long as the observable result of running the code remains unchanged.
Here is a godbolt of your code.
Now note what the compiler did for if (find(a, a + 16, rand() % 64) == a + 16) {}:
.L206:
call rand
sub ebx, 1
jne .L206
Basically, the compiler noticed that the result is not used and removed everything except the call to rand(), which has side effects (visible changes in results).
The same happens for std::vector:
.L207:
call rand
sub ebx, 1
jne .L207
And even for std::set and std::unordered_set the compiler was able to perform the same optimization. The difference you are seeing (you didn't specify how you measured it) is just the result of initializing all of these variables, which is time-consuming for the more complex containers.
Writing a good performance test is hard and should be approached with caution.
There is also a second problem with your question: the time complexity of the given code.
Searching an array and searching a std::set or std::unordered_set scale differently with the size of the data. For a small data set a simple array will be fastest, thanks to its simple implementation and optimal memory access pattern. As the data size grows, the time std::find takes on an array grows as O(n); on the other hand, the time std::set takes to find an item grows as O(log n), and for std::unordered_set it is constant, O(1). So for a small amount of data the array will be fastest, for medium sizes std::set is the winner, and if the amount of data is large std::unordered_set will be best.
Take a look at this benchmark example, which uses Google Benchmark.
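For illustration, here is a minimal sketch of what such a benchmark can look like (assuming the Google Benchmark library is installed and linked). The key part is benchmark::DoNotOptimize, which stops the as-if rule from deleting the search the way it did in the question's loops:

#include <benchmark/benchmark.h>
#include <algorithm>
#include <cstdlib>

// The search result is fed to DoNotOptimize, so the compiler must actually
// perform the std::find instead of reducing the loop to bare rand() calls.
static void BM_FindArray(benchmark::State& state) {
    int a[]{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
    for (auto _ : state) {
        auto it = std::find(a, a + 16, rand() % 64);
        benchmark::DoNotOptimize(it);
    }
}
BENCHMARK(BM_FindArray);
BENCHMARK_MAIN();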
You are not measuring efficiency, you are measuring performance. And doing so badly.
The effect of address space randomization, or just different usernames or other variables in the environment, can have up to about a 40% effect on speed. That's more than the difference between -O0 and -O2. You are only measuring one single system with one single address space layout 10000000 times. That makes the value nearly meaningless.
And yet you still managed to figure out that for 16 ints, any attempt to be cleverer than just looking at all of them will perform worse. A simple linear search through a single cache line (or two if the layout is bad, which is more likely) is simply the best way.
Now try again with 10000000 ints and run the binary 1000 times. Even better, use a layout randomizer to truly exclude accidents of the layout from the timing.
Note: IIRC the limit below which sorting an array with bubble sort is best is somewhere between 32 and 64 ints, and find is even simpler.

What is the fastest way to see if an array has two common elements?

Suppose that we have a very long array of, say, int to make the problem simpler.
What is the fastest way (or just a fast way, if it's not the fastest) in C++ to see if an array has more than one common element?
To clarify, this function should return this:
[2, 5, 4, 3] => false
[2, 8, 2, 5, 7, 3, 4] => true
[8, 8, 5] => true
[1, 2, 3, 4, 1, 7, 1, 1, 7, 1, 2, 2, 3, 4] => true
[9, 1, 12] => false
One strategy is to loop through the array and, for each array element, loop through the array again to check. However, this can be very costly (literally O(n^2)). Is there any better way?
(✠Update below) Insert the array elements into a std::unordered_set, and if an insertion fails, it means you have duplicates.
Something like the following:
#include <iostream>
#include <vector>
#include <unordered_set>

bool has_duplicates(const std::vector<int>& vec)
{
    std::unordered_set<int> set;
    for (int ele : vec)
        if (const auto [iter, inserted] = set.emplace(ele); !inserted)
            return true; // has duplicates!
    return false;
}

int main()
{
    std::vector<int> vec1{ 1, 2, 3 };
    std::cout << std::boolalpha << has_duplicates(vec1) << '\n'; // false

    std::vector<int> vec2{ 12, 3, 2, 3 };
    std::cout << std::boolalpha << has_duplicates(vec2) << '\n'; // true
}
✠Update: As discussed in the comments, this may or may not be the fastest solution. In the OP's case, as explained in Marcus Müller's answer, an O(N·log(N)) method would be better, which we can achieve by sorting the array and checking it for dupes.
Here is a quick benchmark that I made for the two cases, "UnorderedSetInsertion" and "ArraySort", compiled with GCC 10.3, C++20, -O3.
This is nearly just a sorting problem, except that you can abort the sorting once you've hit a single equality and return true.
So, if you're memory-limited (that's often the case; programs are frequently not actually time-limited), an in-place sorting algorithm that aborts when it encounters two identical elements will do: std::sort with a comparator function that raises an exception when it encounters equality. Complexity would be O(N·log(N)), but let's be honest here: the fact that this is probably less indirect in memory addressing than the creation of a tree-like bucket structure might help. In that sense, I can only recommend you actually compare this to JeJo's solution – that looks pretty reasonable, too!
The thing here is that there's very likely not a one-size-fits-all solution: what is fastest will depend on the number of integers we're talking about. Even quadratic complexity might be better than any of our "clever" answers if it keeps memory access nice and linear – I'm almost certain your speed here is not bounded by your CPU, but by the amount of data you need to shuffle to and from RAM.
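A minimal sketch of the sort-based route (plain sort-then-scan rather than the abort-on-equality comparator described above):

#include <algorithm>
#include <vector>

// Sort a copy, then look for an adjacent equal pair; O(N·log(N)) overall.
bool has_duplicates_sorting(std::vector<int> vec) // by value: sorting is destructive
{
    std::sort(vec.begin(), vec.end());
    return std::adjacent_find(vec.begin(), vec.end()) != vec.end();
}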
How about binning the data (i.e., creating a histogram) and checking the mode of the resulting bins? A mode greater than 1 indicates a repeated value.
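A quick sketch of that idea, assuming a hash map as the bins:

#include <unordered_map>
#include <vector>

// Bump a bin per value; any bin reaching 2 means a repeated value.
bool has_duplicates_histogram(const std::vector<int>& vec)
{
    std::unordered_map<int, int> bins;
    for (int x : vec)
        if (++bins[x] > 1)
            return true;
    return false;
}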

Virtually defragment fragmented memory as if it were contiguous in C++

Is there a way, or is it possible, to take e.g. 10 memory regions (e.g. pointers with given sizes) and create a sort of overlay such that they can be handled/treated as contiguous?
The use case would be something like reconstructing a message out of "n" frames without copying them around.
Of course the "n" frames are appended/prepended with a header which should be stripped in order to reconstruct the information. Moreover, a variable could e.g. be split across two consecutive frames.
A few more details for future help:
Otter's solution is quite nice, but it lacks the possibility to lay a structure on top of multiple boost::join-ed blocks.
Of course a std::copy of the joined block will create a contiguous copy of all the involved fragmented regions, but in my case I would like it to stay "virtual" due to performance constraints.
Regards,
boost::range::join is a great helper here (link). When working with random access ranges it will also produce a random access range with quick access to elements. As the manual says, "The resultant range will have the lowest common traversal of the two ranges supplied as parameters."
Also, when working with plain memory, boost::make_iterator_range could help.
Take a look at this short example.
#include <iostream>
#include <boost/range/join.hpp>
#include <boost/range/iterator_range.hpp>

int main()
{
    int arr1[] = { 0, 1, 2, 3, 4, 5 }; // let's join these 3 plain memory arrays
    int arr2[] = { 6, 7, 8, 9 };
    int arr3[] = { 10, 11, 12 };

    int* mem1 = arr1; // let's make the example more complicated,
    int* mem2 = arr2; // because int arr1[] with a known size would be recognized
    int* mem3 = arr3; // as a range by boost::join on its own

    auto res1 = boost::range::join(boost::make_iterator_range(mem1, mem1 + 6),
                                   boost::make_iterator_range(mem2, mem2 + 4)); // join 2 ranges via pointer arithmetic
    auto res2 = boost::range::join(res1, // join the previously joined range
                                   boost::make_iterator_range(mem3, mem3 + 3));

    for (auto& r : res2) // the resulting range is iterable
    {
        std::cout << r << "\n";
    }
    std::cout << res2[12]; // outputs '12'; note that this result
                           // was ultimately obtained via pointer arithmetic
                           // applied to mem3
}

Pick a random PWM pin in Arduino

I want to pick a random PWM pin each time a loop repeats. The PWM-capable pins on the Arduino UNO are pins 3, 5, 6, 9, 10, 11. I tried rnd(), but it gives me values from the whole linear range, same with TrueRandom.Random(1, 9).
Well, there are at least two ways to do it.
The first (and probably best) way is to load those values into an array of size six, generate a number in the range zero through five, and get the value from that position in the array.
In other words, pseudo-code such as:
values = [3, 5, 6, 9, 10, 11]
num = values[randomInclusive(0..5)]
In terms of actually implementing that pseudo-code, I'd look at something like:
int getRandomPwmPin() {
    static const int candidate[] = {3, 5, 6, 9, 10, 11};
    static const int count = sizeof(candidate) / sizeof(*candidate);
    return candidate[TrueRandom.random(0, count)];
}
There's also the naive way of doing it, which is to generate numbers in a range and simply throw away those that don't meet your specification (i.e., go back and get another one). This is actually an inferior method, as it may take longer to get a suitable number under some circumstances. Technically, it could even take an infinitely(a) long time if suitable values never appear.
This would be along the lines of (pseudo-code):
num = -1 // force entry into loop
while num is not one of 3, 5, 6, 9, 10, 11:
    num = randomInclusive(3..11)
which becomes:
int getRandomPwmPin() {
    int value;
    do {
        value = TrueRandom.random(3, 12);
    } while ((value == 4) || (value == 7) || (value == 8));
    return value;
}
As stated, the former solution is probably the best one. I include the latter only for informational purposes.
(a) Yes, I know. Over a long enough time frame, statistics pretty much guarantees you'll get a useful value. Stop being a pedant about my hyperbole :-)
The trick is to make a list of pins and then pick an entry from the list at random:
int pins[] = {3, 5, 6, 11, 10, 9};
int choice = rnd(); // in range 0-5
int pin = pins[choice];
See Generating random integer from a range for how to get a number in a range.

Data structure for matching sets

I have an application where I have a number of sets. A set might be
{4, 7, 12, 18}
The numbers are unique and all less than 50.
I then have several data items:
1 {1, 2, 4, 7, 8, 12, 18, 23, 29}
2 {3, 4, 6, 7, 15, 23, 34, 38}
3 {4, 7, 12, 18}
4 {1, 4, 7, 12, 13, 14, 15, 16, 17, 18}
5 {2, 4, 6, 7, 13, 15}
Data items 1, 3 and 4 match the set because they contain all items in the set.
I need to design a data structure that is super fast at identifying whether a data item includes all the members of one of the sets (i.e., whether the data item is a superset of the set). My best estimates at the moment suggest that there will be fewer than 50,000 sets.
My current implementation has my sets and data as unsigned 64-bit integers, with the sets stored in a list. To check a data item, I iterate through the list doing a ((set & data) == set) comparison. It works and it's space-efficient, but it's slow (O(n)), and I'd be happy to trade some memory for some performance. Does anyone have any better ideas about how to organize this?
Edit:
Thanks very much for all the answers. It looks like I need to provide some more information about the problem. I get the sets first, and I then get the data items one by one. I need to check whether each data item matches one of the sets.
The sets are very likely to be 'clumpy'; for example, for a given problem, 1, 3 and 9 might be contained in 95% of the sets. I can predict this to some degree in advance (but not well).
For those suggesting memoization: this is the data structure for a memoized function. The sets represent general solutions that have already been computed, and the data items are new inputs to the function. By matching a data item to a general solution I can avoid a whole lot of processing.
I see another solution which is dual to yours (i.e., testing a data item against every set): using a binary tree where each node tests whether a specific item is included or not.
For instance, if you had the sets A = { 2, 3 }, B = { 4 } and C = { 1, 3 }, you'd have the following tree:
_NOT_HAVE_[1]___HAVE____
| |
_____[2]_____ _____[2]_____
| | | |
__[3]__ __[3]__ __[3]__ __[3]__
| | | | | | | |
[4] [4] [4] [4] [4] [4] [4] [4]
/ \ / \ / \ / \ / \ / \ / \ / \
. B . B . B . B B C B A A A A
C B C B
C
After making the tree, you'd simply need to make 50 comparisons, or however many items you can have in a set.
For instance, for { 1, 4 }, you branch through the tree: right (the set has 1), left (it doesn't have 2), left, right, and you get [ B ], meaning only set B is included in { 1, 4 }.
This is basically called a "Binary Decision Diagram". If you are offended by the redundancy in the nodes (as you should be, because 2^50 is a lot of nodes...), then you should consider the reduced form, which is called a "Reduced, Ordered Binary Decision Diagram" (ROBDD) and is a commonly used data structure. In this version, nodes are merged when they are redundant, and you no longer have a binary tree but a directed acyclic graph.
The Wikipedia page on ROBDDs can provide you with more information, as well as links to libraries which implement this data structure for various languages.
I can't prove it, but I'm fairly certain that there is no solution that can easily beat the O(n) bound. Your problem is "too general": every set has m = 50 properties (namely, property k is that it contains the number k), and the point is that all these properties are independent of each other. There aren't any clever combinations of properties that can predict the presence of other properties. Sorting doesn't work because the problem is very symmetric: any permutation of your 50 numbers gives the same problem but screws up any kind of ordering. Unless your input has a hidden structure, you're out of luck.
However, there is some room for speed/memory tradeoffs. Namely, you can precompute the answers for small queries. Let Q be a query set and supersets(Q) be the collection of sets that contain Q, i.e. the solution to your problem. Then your problem has the following key property:
Q ⊆ P => supersets(Q) ⊇ supersets(P)
In other words, the results for P = {1,3,4} are a subcollection of the results for Q = {1,3}.
Now, precompute all answers for small queries. For demonstration, let's take all queries of size <= 3. You'll get a table
supersets({1})
supersets({2})
...
supersets({50})
supersets({1,2})
supersets({2,3})
...
supersets({1,2,3})
supersets({1,2,4})
...
supersets({48,49,50})
with O(m^3) entries. To compute, say, supersets({1,2,3,4}), you look up supersets({1,2,3}) and run your linear algorithm on this collection. The point is that, on average, supersets({1,2,3}) will not contain the full n = 50,000 sets, but only a fraction n/2^3 = 6250 of them, giving an 8-fold increase in speed.
(This is a generalization of the "reverse index" method that other answers suggested.)
Depending on your data set, memory use will be rather terrible, though. But you might be able to omit some rows, or speed up the algorithm by noting that a query like {1,2,3,4} can be computed from several different precomputed answers, like supersets({1,2,3}) and supersets({1,2,4}), and using the smallest of these.
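A minimal sketch of the size-1 tables (all names here are illustrative); the size-2 and size-3 tables generalize this:

#include <cstdint>
#include <vector>

std::vector<uint64_t> sets;                            // the ~50,000 sets as bitmasks
std::vector<std::vector<uint32_t>> supersets_of_1(50); // supersets({k}) as set indices

void precompute() {
    for (uint32_t i = 0; i < sets.size(); ++i)
        for (int k = 0; k < 50; ++k)
            if (sets[i] >> k & 1)
                supersets_of_1[k].push_back(i);
}

// supersets(Q): pick any element k of Q, then run the linear scan only over
// the precomputed collection supersets({k}) instead of over all sets.
std::vector<uint32_t> supersets(uint64_t q, int k) {
    std::vector<uint32_t> result;
    for (uint32_t i : supersets_of_1[k])
        if ((sets[i] & q) == q) // set i contains all of Q
            result.push_back(i);
    return result;
}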
If you're going to improve performance, you're going to have to do something fancy to reduce the number of set comparisons you make.
Maybe you can partition the data items so that you have all those where 1 is the smallest element in one group, and all those where 2 is the smallest item in another group, and so on.
When it comes to searching, you find the smallest value in the search set, and look at the group where that value is present.
Or, perhaps, group them into 50 groups by 'this data item contains N' for N = 1..50.
When it comes to searching, you find the size of each group that holds each element of the set, and then search just the smallest group.
The concern with this - especially the latter - is that the overhead of reducing the search time might outweigh the performance benefit from the reduced search space.
You could use an inverted index of your data items. For your example:
1 {1, 2, 4, 7, 8, 12, 18, 23, 29}
2 {3, 4, 6, 7, 15, 23, 34, 38}
3 {4, 7, 12, 18}
4 {1, 4, 7, 12, 13, 14, 15, 16, 17, 18}
5 {2, 4, 6, 7, 13, 15}
the inverted index will be
1: {1, 4}
2: {1, 5}
3: {2}
4: {1, 2, 3, 4, 5}
5: {}
6: {2, 5}
...
So, for any particular set {x_0, x_1, ..., x_i} you need to intersect the index sets for x_0, x_1, and the rest. For example, for the set {2,3,4} you need to intersect {1,5} with {2} and with {1,2,3,4,5}. Because you can keep all the sets in the inverted index sorted, each intersection costs no more than the length of the shorter of the sets being intersected.
There could be an issue here if you have very 'popular' items (like 4 in our example) with a huge index set.
Some words about intersecting: you could use sorted lists in the inverted index and intersect the sets pairwise (in increasing order of length). Or, as you have no more than 50K items, you could use compressed bit sets (about 6 KB per number, less for sparse bit sets; about 50 of them, so not too greedy) and intersect the bit sets bitwise. For sparse bit sets that should be efficient, I think.
A possible way to divvy up the list of bitmaps would be to create an array of Compiled Nibble Indicators (CNIs).
Let's say one of your 64-bit bitmaps has bits 0 through 4 set.
In hex we can look at it as 0x000000000000001F.
Now let's transform that into a simpler and smaller representation. Each 4-bit nibble either has at least one bit set or it doesn't. If it does, we represent it as a 1; if not, as a 0.
So the hex value reduces to the bit pattern 0000000000000011, as the right-hand 2 nibbles are the only ones that have bits set in them. Create an array that holds 65536 values, and use them as the heads of linked lists, or of a set of large arrays...
Compile each of your bitmaps into its compact CNI. Add it to the correct list, until all of the lists have been compiled.
Then take your needle and compile it into its CNI form. Use that value to subscript to the head of the list. All bitmaps in that list have a possibility of being a match.
All bitmaps in the other lists cannot match.
That is a way to divvy them up.
Now, in practice I doubt a linked list would meet your performance requirements.
If you write a function to compile a bitmap into its CNI, you could use it as a basis to sort your array by CNI. Then have your array of 65536 heads simply subscript into the original array as the start of a range.
Another technique would be to compile only part of the 64-bit bitmap, so you have fewer heads. Analysis of your patterns should give you an idea of which nibbles are most effective in partitioning them.
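A minimal sketch of the nibble compilation step (the function name is mine, not from the answer):

#include <cstdint>

// Collapse each of the 16 nibbles of a 64-bit bitmap into a single
// "occupied" bit, yielding the 16-bit CNI used to pick a list head.
uint16_t compile_cni(uint64_t bitmap)
{
    uint16_t cni = 0;
    for (int n = 0; n < 16; ++n)
        if ((bitmap >> (4 * n)) & 0xFu)    // does nibble n contain any set bit?
            cni |= uint16_t(1u << n);
    return cni;
}
// compile_cni(0x000000000000001F) == 0b0000000000000011, as described above.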
Good luck to you, and please let us know what you finally end up doing.
Evil.
The indexes of the sets that match the search criterion resemble the sets themselves. Instead of having unique indexes less than 50, we have unique indexes less than 50,000. Since you don't mind using a bit of memory, you can precompute the matching sets in a 50-element array of 50,000-bit integers. Then you index into the precomputed matches and basically just do your ((set & data) == set), but on the 50,000-bit numbers which represent the matching sets. Here's what I mean.
#include <cstdint>
#include <cstring>
#include <iostream>

enum
{
    max_sets = 50000, // should be >= 64
    num_boxes = max_sets / 64 + 1,
    max_entry = 50
};

uint64_t sets_containing[max_entry][num_boxes];

#define _(x) (uint64_t(1) << x)

uint64_t sets[] =
{
    _(1) | _(2) | _(4) | _(7) | _(8) | _(12) | _(18) | _(23) | _(29),
    _(3) | _(4) | _(6) | _(7) | _(15) | _(23) | _(34) | _(38),
    _(4) | _(7) | _(12) | _(18),
    _(1) | _(4) | _(7) | _(12) | _(13) | _(14) | _(15) | _(16) | _(17) | _(18),
    _(2) | _(4) | _(6) | _(7) | _(13) | _(15),
    0,
};

void big_and_equals(uint64_t lhs[num_boxes], uint64_t rhs[num_boxes])
{
    static int comparison_counter = 0;
    for (int i = 0; i < num_boxes; ++i, ++comparison_counter)
    {
        lhs[i] &= rhs[i];
    }
    std::cout
        << "performed "
        << comparison_counter
        << " comparisons"
        << std::endl;
}

int main()
{
    // Precompute matches
    memset(sets_containing, 0, sizeof(uint64_t) * max_entry * num_boxes);
    int set_number = 0;
    for (uint64_t* p = &sets[0]; *p; ++p, ++set_number)
    {
        int entry = 0;
        for (uint64_t set = *p; set; set >>= 1, ++entry)
        {
            if (set & 1)
            {
                std::cout
                    << "sets_containing["
                    << entry
                    << "]["
                    << (set_number / 64)
                    << "] gets bit "
                    << set_number % 64
                    << std::endl;
                uint64_t& flag_location =
                    sets_containing[entry][set_number / 64];
                flag_location |= _(set_number % 64);
            }
        }
    }
    // Perform search for a key
    int key[] = {4, 7, 12, 18};
    uint64_t answer[num_boxes];
    memset(answer, 0xff, sizeof(uint64_t) * num_boxes);
    for (int i = 0; i < sizeof(key) / sizeof(key[0]); ++i)
    {
        big_and_equals(answer, sets_containing[key[i]]);
    }
    // Display the matches
    for (int set_number = 0; set_number < max_sets; ++set_number)
    {
        if (answer[set_number / 64] & _(set_number % 64))
        {
            std::cout
                << "set "
                << set_number
                << " matches"
                << std::endl;
        }
    }
    return 0;
}
Running this program yields:
sets_containing[1][0] gets bit 0
sets_containing[2][0] gets bit 0
sets_containing[4][0] gets bit 0
sets_containing[7][0] gets bit 0
sets_containing[8][0] gets bit 0
sets_containing[12][0] gets bit 0
sets_containing[18][0] gets bit 0
sets_containing[23][0] gets bit 0
sets_containing[29][0] gets bit 0
sets_containing[3][0] gets bit 1
sets_containing[4][0] gets bit 1
sets_containing[6][0] gets bit 1
sets_containing[7][0] gets bit 1
sets_containing[15][0] gets bit 1
sets_containing[23][0] gets bit 1
sets_containing[34][0] gets bit 1
sets_containing[38][0] gets bit 1
sets_containing[4][0] gets bit 2
sets_containing[7][0] gets bit 2
sets_containing[12][0] gets bit 2
sets_containing[18][0] gets bit 2
sets_containing[1][0] gets bit 3
sets_containing[4][0] gets bit 3
sets_containing[7][0] gets bit 3
sets_containing[12][0] gets bit 3
sets_containing[13][0] gets bit 3
sets_containing[14][0] gets bit 3
sets_containing[15][0] gets bit 3
sets_containing[16][0] gets bit 3
sets_containing[17][0] gets bit 3
sets_containing[18][0] gets bit 3
sets_containing[2][0] gets bit 4
sets_containing[4][0] gets bit 4
sets_containing[6][0] gets bit 4
sets_containing[7][0] gets bit 4
sets_containing[13][0] gets bit 4
sets_containing[15][0] gets bit 4
performed 782 comparisons
performed 1564 comparisons
performed 2346 comparisons
performed 3128 comparisons
set 0 matches
set 2 matches
set 3 matches
3128 uint64_t comparisons beats 50,000 comparisons, so you win. Even in the worst case, which would be a key with all 50 items, you only have to do num_boxes * max_entry comparisons, which in this case is 39,100. Still better than 50,000.
Since the numbers are less than 50, you could build a one-to-one hash using a 64-bit integer and then use bitwise operations to test the sets in O(1) time. The hash creation would also be O(1). I think either an XOR followed by a test for zero, or an AND followed by a test for equality, would work. (If I understood the problem correctly.)
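A minimal sketch of that encoding and test (function names are illustrative):

#include <cstdint>
#include <initializer_list>

// Hash a set of numbers < 50 into one uint64_t, one bit per value.
uint64_t encode(std::initializer_list<int> xs)
{
    uint64_t mask = 0;
    for (int x : xs)
        mask |= uint64_t(1) << x;
    return mask;
}

// Test "data contains all of set" with a single AND plus a comparison.
bool contains_all(uint64_t data, uint64_t set)
{
    return (data & set) == set;            // AND followed by a test for equality
    // equivalently: ((data & set) ^ set) == 0  (XOR followed by a test for zero)
}

// contains_all(encode({1, 2, 4, 7, 8, 12, 18, 23, 29}), encode({4, 7, 12, 18}))
// returns true, matching data item 1 from the question.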
Put your sets into an array (not a linked list) and SORT THEM. The sorting criterion can be either 1) the number of elements in the set (i.e., the number of 1-bits in the set representation), or 2) the lowest element in the set. For example, let A = {7, 10, 16} and B = {11, 17}. Then B < A under criterion 1), and A < B under criterion 2). Sorting is O(n log n), but I assume that you can afford some preprocessing time, i.e., that the search structure is static.
When a new data item arrives, you can use binary search (logarithmic time) to find the starting candidate set in the array. Then you search linearly through the array, testing the data item against each set, until the data item becomes "greater" than the set.
You should choose your sorting criterion based on the spread of your sets. If all sets have 0 as their lowest element, you shouldn't choose criterion 2); vice versa, if the distribution of set cardinalities is not uniform, you shouldn't choose criterion 1).
Yet another, more robust sorting criterion would be to compute the span of elements in each set and sort according to that. For example, the lowest element in set A is 7 and the highest is 16, so you would encode its span as 0x1007; similarly, B's span would be 0x110B. Sort the sets according to the "span code" and again use binary search to find all sets with the same "span code" as your data item.
Computing the "span code" is slow in ordinary C, but it can be done fast if you resort to assembly; most CPUs have instructions that find the most/least significant set bit.
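On GCC/Clang those bit-scan instructions are reachable without assembly via builtins; a hedged sketch:

#include <cstdint>

// Pack (highest element, lowest element) of a non-empty set bitmask into a
// 16-bit "span code", using builtins that map to the CPU's bit-scan instructions.
uint16_t span_code(uint64_t set_bits)
{
    int lo = __builtin_ctzll(set_bits);      // least significant set bit
    int hi = 63 - __builtin_clzll(set_bits); // most significant set bit
    return uint16_t((hi << 8) | lo);
}
// For A = {7, 10, 16}: span_code gives 0x1007; for B = {11, 17}: 0x110B.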
This is not a real answer, more an observation: this problem looks like it could be efficiently parallelized or even distributed, which would at least reduce the running time to O(n / number of cores).
You can build a reverse index of "haystack" lists that contain each element:
#include <algorithm>
#include <iostream>
#include <set>
#include <unordered_map>
#include <vector>

std::set<int> needle; // {4, 7, 12, 18}

std::vector<std::set<int>> haystacks;
// A list of each of your data sets:
// 1 {1, 2, 4, 7, 8, 12, 18, 23, 29}
// 2 {3, 4, 6, 7, 15, 23, 34, 38}
// 3 {4, 7, 12, 18}
// 4 {1, 4, 7, 12, 13, 14, 15, 16, 17, 18}
// 5 {2, 4, 6, 7, 13, 15}

std::unordered_map<int, std::set<int>> element_haystacks;
// element_haystacks maps each integer to the sets that contain it
// (the key is the integers from the haystacks sets, and
// the set values are the indexes into the 'haystacks' vector):
// 1 -> {1, 4}             Element 1 is in sets 1 and 4.
// 2 -> {1, 5}             Element 2 is in sets 1 and 5.
// 3 -> {2}                Element 3 is in set 2.
// 4 -> {1, 2, 3, 4, 5}    Element 4 is in sets 1 through 5.

std::set<int> answer_sets; // The haystack sets that contain your needle.

void find_answers()
{
    bool first = true;
    for (int element : needle) {
        const std::set<int>& new_answer = element_haystacks[element];
        if (first) { // seed with the first element's haystack list
            answer_sets = new_answer;
            first = false;
            continue;
        }
        std::set<int> existing_answer;
        std::swap(existing_answer, answer_sets);
        // Remove all answers that don't occur in the new element list.
        std::set_intersection(existing_answer.begin(), existing_answer.end(),
                              new_answer.begin(), new_answer.end(),
                              std::inserter(answer_sets, answer_sets.begin()));
        if (answer_sets.empty()) break; // No matches :(
    }
    // answer_sets now lists the haystack ids that include all needle elements.
    for (int haystack_id : answer_sets)
        std::cout << "set: " << haystack_id << '\n';
}
If I'm not mistaken, this will have a max runtime of O(k*m), where k is the avg number of sets that an integer belongs to and m is the avg size of the needle set (< 50). Unfortunately, it'll have a significant memory overhead due to building the reverse mapping (element_haystacks).
I'm sure you could improve this a bit if you stored sorted vectors instead of sets, and element_haystacks could be a 50-element vector instead of a hash map.
I'm surprised no one has mentioned that the STL contains an algorithm to handle this sort of thing for you. Hence, you should use includes (note that it requires both ranges to be sorted). As the documentation describes, it performs at most 2*(N+M)-1 comparisons, for a worst-case performance of O(M+N).
Hence:
bool isContained = std::includes(myVector.begin(), myVector.end(), another.begin(), another.end());
If you need O(log N) time, I'll have to yield to the other responders.
Another idea is to completely prehunt your elephants.
Setup
Create a 64 bit X 50,000 element bit array.
Analyze your search set, and set the corresponding bits in each row.
Save the bit map to disk, so it can be reloaded as needed.
Searching
Load the element bit array into memory.
Create a bit map array, 1 X 50,000. Set all of the values to 1. This is the search bit array.
Take your needle and walk through each value. Use it as a subscript into the element bit array. Take the corresponding bit row, then AND it into the search array.
Do that for all the values in your needle, and your search bit array will hold a 1 for every match.
Reconstruct
Walk through the search bit array; for each 1, you can use the element bit array to reconstruct the original values.
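A minimal sketch of the setup and search steps (sizes and names are illustrative):

#include <array>
#include <cstdint>
#include <vector>

constexpr int kMaxSets = 50000;
constexpr int kWords   = kMaxSets / 64 + 1; // 50,000 bits as 64-bit words
using BitRow = std::array<uint64_t, kWords>;

std::vector<BitRow> element_bits(50);         // Setup: one 50,000-bit row per element

void setup(const std::vector<uint64_t>& sets) // each stored set as a 50-bit mask
{
    for (size_t i = 0; i < sets.size(); ++i)
        for (int e = 0; e < 50; ++e)
            if (sets[i] >> e & 1)             // set i contains element e:
                element_bits[e][i / 64] |= uint64_t(1) << (i % 64);
}

BitRow search(const std::vector<int>& needle)
{
    BitRow hits;
    hits.fill(~uint64_t(0));                  // search bit array, all 1s
    for (int e : needle)                      // AND in each element's row
        for (int w = 0; w < kWords; ++w)
            hits[w] &= element_bits[e][w];
    return hits;                              // surviving 1-bits mark the matches
}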
How many data items do you have? Are they really all unique? Could you cache popular data items, or use a bucket/radix sort before the run to group repeated items together?
Here is an indexing approach:
1) Divide the 50-bit field into e.g. 10 5-bit sub-fields. If you really have 50K sets then 3 17-bit chunks might be nearer the mark.
2) For each set, choose a single subfield. A good choice is the sub-field where that set has the most bits set, with ties broken almost arbitrarily - e.g. use the leftmost such sub-field.
3) For each possible bit-pattern in each sub-field note down the list of sets which are allocated to that sub-field and match that pattern, considering only the sub-field.
4) Given a new data item, divide it into its 5-bit chunks and look each one up in its own lookup table to get a list of sets to test against. If your data is completely random, you get a factor of two speedup or more, depending on how many bits are set in the densest sub-field of each set. If an adversary gets to make up random data for you, perhaps they will find data items that almost but not quite match loads of sets, and you won't do very well at all.
Possibly there is scope for taking advantage of any structure in your sets by numbering the bits so that sets tend to have two or more bits in their best sub-field; e.g. do cluster analysis on the bits, treating them as similar if they tend to appear together in sets. Or, if you can predict patterns in the data items, alter the allocation of sets to sub-fields in step 2) to reduce the number of expected false matches.
Addition:
How many tables would you need to guarantee that any 2 bits always fall into the same table? If you look at the combinatorial definition in http://en.wikipedia.org/wiki/Projective_plane, you can see that there is a way to extract collections of 7 bits from 57 (= 1 + 7 + 49) bits in 57 different ways, so that for any two bits at least one collection contains both of them. Probably not very useful, but it's still an answer.