Perfect shuffle and unshuffle, with no auxiliary array - C++

Before any concrete question, please note that my goal is not to shuffle the array randomly, but to perform a perfect shuffle as an ideal dealer might do with a deck of cards, that is, splitting the deck in half and performing one shuffle pass (interleaving a card from one half with a card from the other half). (This is actually exercise 11.3, page 445, from Algorithms in C, third edition, by Sedgewick.)
So I'm not interested in algorithms like the Fisher-Yates shuffle, etc.
That said, my goal is to avoid using any auxiliary array when performing the shuffle. The code I was able to come up with is the following:
template<typename T>
void two_way_shuffle(vector<T>& data, int l, int r)
{
    int n = (r - l) + 1;
    int m = (r + l) / 2;
    if (n % 2 == 0) ++m;
    T s;
    int p{l + 1};
    for (; p <= r; p += 2, m++)
    {
        s = data[m]; //Make space
        //Shift the elements
        for (int i{m}; i > p; i--)
            data[i] = data[i - 1];
        //Put the element in the final position
        data[p] = s;
    }
}
template<typename T>
void two_way_unshuffle(vector<T>& data, int l, int r)
{
    int n = (r - l) + 1;
    if (n % 2 != 0) {
        --r;
        --n;
    }
    int m = (r + l) / 2;
    T s;
    int p{l + 1}, w{r - 1};
    for (; p <= w; p++, w--)
    {
        s = data[w];
        for (int i{w}; i > p; i--)
            data[i] = data[i - 1];
        data[p] = s;
    }
    //Complete the operation
    for (p = l + 1, w = m; p < w; p++, w--)
        swap(data[p], data[w]);
}
The basic idea behind the shuffling operation is to keep track of the position 'p' in the array where the next element should be inserted, copy that element from position 'm' of the array (leaving one free slot), then shift the elements from left to right so that the free slot ends up exactly at position 'p'; once that is done, the code just moves the previously saved value 's' into data[p]. Pretty simple, and it seems to work properly (some shuffle examples):
FROM: 12345678
TO: 15263748
FROM: IS THIS WORKING CORRECTLY OR NOT?
TO: ICSO RTRHEICST LWYO ROKRI NNGO T?
FROM: SHUFFLING TEST
TO: SNHGU FTFELSIT
My real problem is in the unshuffle operation. I believe that the swap sequence in the last for loop can be avoided, but I'm not able to find a clever way to do that. The problem is that the main unshuffle loop performs a shift operation which eventually leaves the elements of the first half of the array in the wrong order (thus requiring the swaps in the second for loop).
I'm almost sure there must be a clever way to do the job without such a code complication.
Do you have any suggestion on what may be improved in the unshuffle algorithm? Or any other suggestion on what I've done?
Thanks

I think your code keeps too many variables. When you shuffle the array, you are treating a range [lo, hi] of decreasing size, where you swap the rightmost entry hi with the block of elements to the left [lo, hi). You can express the algorithm just in terms of these two variables:
0 1 2 3 4 5
------- swap [1, 3) and [3]
0 3 1 2 4 5
---- swap [3, 4) and [4]
0 3 1 4 2 5
- [5, 5) is empty: break
This is the same for an odd number of elements, except that the empty range goes out of bound, which is okay, since we don't access it. The operations here are: shift block right, place old hi element at lo position, increase lo and hi with strides 2 and 1.
When you unshuffle, you must revert the steps: Start lo and hi at the last element for even-sized and one element beyond the array for odd-sized array. Then do all steps in reverse order: decrease lo and hi first, then shift the block left and place the old lo at hi position. Stop when lo reaches 1. (It will reach 1, because we've started at an odd index and we decrement by 2.)
You can test your algorithm by printing lo and hi as you go: they must be the same for shuffling and unshuffling, only in reverse order.
So:
template<typename T>
void weave(std::vector<T>& data)
{
    size_t lo = 1;
    size_t hi = (data.size() + 1) / 2;
    while (lo < hi) {
        T m = data[hi];
        for (size_t i = hi; i-- > lo; ) data[i + 1] = data[i];
        data[lo] = m;
        lo += 2;
        hi++;
    }
}
template<typename T>
void unweave(std::vector<T>& data)
{
    size_t n = data.size();
    size_t lo = n + n % 2 - 1;
    size_t hi = lo;
    while (hi--, lo-- && lo--) {
        T m = data[lo];
        for (size_t i = lo; i < hi; i++) data[i] = data[i + 1];
        data[hi] = m;
    }
}
I've removed the left and right indices, which makes the code less flexible, but hopefully clearer to read. You can put them back in and will only need them to calculate your initial values for lo and hi.
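A quick sanity check against the examples in the question (my own test harness, assuming the two templates above are in scope):
#include <cassert>
#include <string>
#include <vector>

int main()
{
    std::string s = "12345678";
    std::vector<char> v(s.begin(), s.end());
    weave(v);                                   // expect "15263748"
    assert(std::string(v.begin(), v.end()) == "15263748");
    unweave(v);                                 // must restore the original order
    assert(std::string(v.begin(), v.end()) == "12345678");
}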

You can perform both these steps with a fixed number of swaps, at the cost of a non-linear index calculation step. But for practical applications the time cost of the swap usually drives the total time, so these algorithms approach O(N).
For perfect shuffling, see https://cs.stackexchange.com/a/105263/58310 (or https://cs.stackexchange.com/q/105604/58310 for an alternate explanation)
For unshuffling, a related algorithm is described here: https://stackoverflow.com/a/55112294/10396
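To make the index-calculation idea concrete, here is a rough sketch of applying the shuffle permutation by following its cycles. This is my own illustration, not the constant-extra-space cycle-leader method from the links above: it spends one bit per element to remember which positions are already final, and it assumes an even-sized vector.
#include <cstddef>
#include <utility>
#include <vector>

template <typename T>
void cycle_shuffle(std::vector<T>& data)
{
    const std::size_t n2 = data.size();   // assumed even
    const std::size_t n = n2 / 2;
    // The element originally at index i ends up at dest(i).
    auto dest = [n](std::size_t i) { return i < n ? 2 * i : 2 * (i - n) + 1; };

    std::vector<bool> placed(n2, false);
    for (std::size_t start = 0; start < n2; ++start) {
        if (placed[start]) continue;
        T carry = data[start];
        std::size_t i = start;
        do {                               // follow one cycle of the permutation
            std::size_t d = dest(i);
            std::swap(carry, data[d]);
            placed[d] = true;
            i = d;
        } while (i != start);
    }
}
Each element is written to its final slot exactly once; the price is the dest() computation and the extra bit vector.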

Related

Efficient algorithm to produce closest triplet from 3 arrays?

I need to implement an algorithm in C++ that, when given three arrays of unequal sizes, produces triplets a,b,c (one element contributed by each array) such that max(a,b,c) - min(a,b,c) is minimized. The algorithm should produce a list of these triplets, in order of size of max(a,b,c)-min(a,b,c). The arrays are sorted.
I've implemented the following algorithm (note that I now use arrays of type double); however, it runs excruciatingly slowly (even when compiled using GCC with -O3 optimization, and other combinations of optimizations). The dataset (and, therefore, each array) has potentially tens of millions of elements. Is there a faster/more efficient method? A significant speed increase is necessary to accomplish the required task in a reasonable time frame.
void findClosest(vector<double> vec1, vector<double> vec2, vector<double> vec3){
    //calculate size of each array
    int len1 = vec1.size();
    int len2 = vec2.size();
    int len3 = vec3.size();
    int i = 0; int j = 0; int k = 0; int res_i, res_j, res_k;
    double diff = numeric_limits<double>::max(); // needs <limits>
    int iter = 0; int iter_bound = min(min(len1, len2), len3);
    while(iter < iter_bound){
        i = 0; j = 0; k = 0;
        while(i < len1 && j < len2 && k < len3){
            double minimum = min(min(vec1[i], vec2[j]), vec3[k]);
            double maximum = max(max(vec1[i], vec2[j]), vec3[k]);
            //if new difference less than previous difference, update difference,
            //store resultants
            if(fabs(maximum - minimum) < diff){ diff = maximum - minimum; res_i = i; res_j = j; res_k = k; }
            //increment minimum value
            if(vec1[i] == minimum) ++i;
            else if(vec2[j] == minimum) ++j;
            else ++k;
        }
        //"remove" triplet
        vec1.erase(vec1.begin() + res_i);
        vec2.erase(vec2.begin() + res_j);
        vec3.erase(vec3.begin() + res_k);
        --len1; --len2; --len3;
        ++iter;
    }
}
OK, you're going to need to be clever in a few ways to make this run well.
The first thing that you need is a priority queue, which is usually implemented with a heap. With that, the algorithm in pseudocode is:
Make a priority queue for possible triples in order of max - min, then how close median is to their average.
Make a pass through all 3 arrays, putting reasonable triples for every element into the priority queue
While the priority queue is not empty:
Pull a triple out
If all three of the triple are not used:
Add triple to output
Mark the triple used
else:
If you can construct reasonable triplets for unused elements:
Add them to the queue
Now for this operation to succeed, you need to efficiently find elements that are currently unused. Doing that at first is easy, just keep an array of bools where you mark off the indexes of the used values. But once a lot have been taken off, your search gets long.
The trick for that is to have a vector of bools for individual elements, a second for whether both in a pair have been used, a third for where all 4 in a quadruple have been used and so on. When you use an element just mark the individual bool, then go up the hierarchy, marking off the next level if the one you're paired with is marked off, else stopping. This additional data structure of size 2n will require an average of marking 2 bools per element used, but allows you to find the next unused index in either direction in at most O(log(n)) steps.
The resulting algorithm will be O(n log(n)).
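Here is one way the hierarchy of "used" flags could look. The class name, layout, and padding to a power of two are my own choices; it only shows marking and searching to the right, the search to the left is symmetric:
#include <cstddef>
#include <vector>

class UsedTracker {
    std::vector<std::vector<bool>> lvl;   // lvl[k][b]: block b of width 2^k is fully used
public:
    explicit UsedTracker(std::size_t n) {
        std::size_t p = 1;
        while (p < n) p *= 2;
        for (std::size_t w = p; w >= 1; w /= 2) lvl.emplace_back(w, false);
        for (std::size_t i = n; i < p; ++i) mark(i);   // padding counts as used
    }
    void mark(std::size_t i) {            // O(log n): bubble the flag upwards
        lvl[0][i] = true;
        for (std::size_t k = 1, b = i / 2; k < lvl.size(); ++k, b /= 2) {
            if (!(lvl[k - 1][2 * b] && lvl[k - 1][2 * b + 1])) break;
            lvl[k][b] = true;
        }
    }
    // Smallest unused index >= i, or lvl[0].size() if there is none. O(log n).
    std::size_t next_unused(std::size_t i) const {
        std::size_t k = 0, b = i;
        while (true) {
            if (b >= lvl[k].size()) return lvl[0].size();   // ran off the end
            if (!lvl[k][b]) break;                          // block holds an unused slot
            ++b;                                            // skip the fully used block
            while (k + 1 < lvl.size() && b % 2 == 0) { b /= 2; ++k; }
        }
        while (k > 0) {                   // descend to the leftmost unused slot
            --k; b *= 2;
            if (lvl[k][b]) ++b;
        }
        return b;
    }
};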

How to erase elements more efficiently from a vector or set?

Problem statement:
Input:
First two inputs are integers n and m. n is the number of knights fighting in the tournament (2 <= n <= 100000, 1 <= m <= n-1). m is the number of battles that will take place.
The next line contains n power levels.
The next m lines contain two integers l and r, indicating the range of knight positions to compete in the ith battle.
After each battle, all knights apart from the one with the highest power level will be eliminated.
The range for each battle is given in terms of the new positions of the knights, not the original positions.
Output:
Output m lines, the ith line containing the original positions (indices) of the knights from that battle. Each line is in ascending order.
Sample Input:
8 4
1 0 5 6 2 3 7 4
1 3
2 4
1 3
0 1
Sample Output:
1 2
4 5
3 7
0
Here is a visualisation of this process.
1 2
[(1,0),(0,1),(5,2),(6,3),(2,4),(3,5),(7,6),(4,7)]
-----------------
4 5
[(1,0),(6,3),(2,4),(3,5),(7,6),(4,7)]
-----------------
3 7
[(1,0),(6,3),(7,6),(4,7)]
-----------------
0
[(1,0),(7,6)]
-----------
[(7,6)]
I have solved this problem. My program produces the correct output; however, it is O(n*m) = O(n^2). I believe that if I erase knights more efficiently from the vector, efficiency can be increased. Would it be more efficient to erase elements using a set? I.e. erase contiguous segments rather than individual knights. Is there an alternative way to do this that is more efficient?
#include <cstdio>
#include <vector>
#include <utility>
using namespace std;

#define INPUT1(x) scanf("%d", &x)
#define INPUT2(x, y) scanf("%d%d", &x, &y)
#define OUTPUT1(x) printf("%d\n", x);

int main(int argc, char const *argv[]) {
    int n, m;
    INPUT2(n, m);
    vector< pair<int,int> > knights(n);
    for (int i = 0; i < n; i++) {
        int power;
        INPUT1(power);
        knights[i] = make_pair(power, i);
    }
    while(m--) {
        int l, r;
        INPUT2(l, r);
        int max_in_range = knights[l].first;
        for (int i = l+1; i <= r; i++) if (knights[i].first > max_in_range) {
            max_in_range = knights[i].first;
        }
        int offset = l;
        int range = r-l+1;
        while (range--) {
            if (knights[offset].first != max_in_range) {
                OUTPUT1(knights[offset].second);
                knights.erase(knights.begin()+offset);
            }
            else offset++;
        }
        printf("\n");
    }
}
Well, removing from a vector certainly isn't efficient. Removing from a set or an unordered_set would be more effective (use iterators instead of indexes).
Yet the problem will still remain O(n^2), because you have two nested whiles running n*m times.
--EDIT--
I believe I understand the question now :)
First let's calculate the complexity of your code above. Your worst case would be the case that max range in all battles is 1 (two nights for each battle) and the battles are not ordered with respect to the position. Which means you have m battles (in this case m = n-1 ~= O(n))
The first while loop runs n times
For runs for once every time which makes it n*1 = n in total
The second while loop runs once every time which makes it n again.
Deleting from vector means n-1 shifts that makes it O(n).
Thus with the complexity of the vector total complexity is O(n^2)
First of all, you don't really need the inner for loop. Take the first knight as the max in range, compare the rest in the range one-by-one and remove the defeated ones.
Now, I believe it can be done in O(n log n) using std::map. The key of the map is the position and the value is the power level of the knight.
Before proceeding: finding and removing an element in a map is logarithmic, and iterating to the next element is constant.
Finally, your code should look like:
while(m--) // n times
    strongest = map.find(first_position); // find is log(n) --> n*log(n)
    for (opponent = next of strongest;    // this will run once, since every range is 1
         opponent in range;
         opponent = next opponent)        // iterating is constant
        // removing from map is log(n) --> n * 1 * log(n)
        if strongest < opponent
            remove strongest, opponent is the new strongest
        else
            remove opponent (be careful to remove it after iterating to the next one)
Ok, now the upper bound would be O(2*n*log(n)) = O(n log n). If the ranges increase, the run time of the outer loop decreases but the number of remove operations increases. I'm sure the upper bound won't change; I'll leave calculating it as homework for you :)
A solution with a treap is pretty straightforward.
For each query, you need to split the treap by implicit key to obtain the subtree that corresponds to the [l, r] range (it takes O(log n) time).
After that, you can iterate over the subtree and find the knight with the maximum strength. After that, you just need to merge the [0, l) and [r + 1, end) parts of the treap with the node that corresponds to this knight.
It's clear that all parts of the solution except for the subtree traversal and printing work in O(log n) time per query. However, each operation reinserts only one knight and erases the rest of the range, so the size of the output (and the sum of the sizes of the subtrees) is linear in n. So the total time complexity is O(n log n).
I don't think you can solve this with standard STL containers, because there's no standard container that supports getting an iterator by index quickly and removing arbitrary elements.

Varying initializer in a 'for loop' in C++

int i = 0;
for(; i < size-1; i++) {
    int temp = arr[i];
    arr[i] = arr[i+1];
    arr[i+1] = temp;
}
Here I started with the first position of the array. What if, after the loop, I need to execute the for loop again, this time starting from the next position of the array?
Like: the first for loop starts from Array[0]
Second iteration: Array[1]
Third iteration: Array[2]
Example:
For array: 1 2 3 4 5
for i=0: 2 1 3 4 5, 2 3 1 4 5, 2 3 4 1 5, 2 3 4 5 1
for i=1: 1 3 2 4 5, 1 3 4 2 5, 1 3 4 5 2 so on.
You can nest loops inside each other, including the ability for the inner loop to access the iterator value of the outer loop. Thus:
for(int start = 0; start < size-1; start++) {
    for(int i = start; i < size-1; i++) {
        // Inner code on 'i'
    }
}
This would repeat your loop with an increasing start value, thus repeating with a higher initial value for i until you've gone through your whole list.
Suppose you have a routine to generate all possible permutations of the array elements for a given length n. Suppose the routine, after processing all n! permutations, leaves the n items of the array in their initial order.
Question: how can we build a routine to make all possible permutations of an array with (n+1) elements?
Answer:
Generate all permutations of the initial n elements, each time process the whole array; this way we have processed all n! permutations with the same last item.
Now, swap the (n+1)-st item with one of those n and repeat permuting n elements – we get another n! permutations with a new last item.
The n elements are left in their previous order, so put that last item back into its initial place and choose another one to put at the end of an array. Reiterate permuting n items.
And so on.
Remember, after each call the routine leaves the n-items array in its initial order. To retain this property at n+1 we need to make sure the same element gets finally placed at the end of an array after the (n+1)-st iteration of n! permutations.
This is how you can do that:
void ProcessAllPermutations(int arr[], int arrLen, int permLen)
{
    if(permLen == 1)
        ProcessThePermutation(arr, arrLen); // print the permutation
    else
    {
        int lastpos = permLen - 1; // last item position for swaps
        for(int pos = lastpos; pos >= 0; pos--) // pos of item to swap with the last
        {
            swap(arr[pos], arr[lastpos]); // put the chosen item at the end
            ProcessAllPermutations(arr, arrLen, permLen - 1);
            swap(arr[pos], arr[lastpos]); // put the chosen item back at pos
        }
    }
}
and here is an example of the routine running: https://ideone.com/sXp35O
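For completeness, a minimal driver of my own (ProcessThePermutation needs to be visible before the routine above; here it simply prints the current arrangement):
#include <cstdio>
#include <utility>
using std::swap;   // the unqualified swap() used above

void ProcessThePermutation(int arr[], int arrLen)
{
    for (int i = 0; i < arrLen; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

int main()
{
    int arr[] = { 1, 2, 3 };
    ProcessAllPermutations(arr, 3, 3);   // prints all 3! = 6 orderings
}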
Note, however, that this approach is highly ineffective:
It may work in a reasonable time for very small input sizes only. The number of permutations is a factorial function of the array length, and it grows faster than exponentially, which makes for a really BIG number of tests.
The routine has no short return. Even if the first or second permutation is the correct result, the routine will perform all the remaining n! unnecessary tests, too. Of course one can add a return path to break the iteration, but that would make the code somewhat ugly. And it would bring no significant gain, because the routine would still have to make n!/2 tests on average.
Each generated permutation appears deep in the last level of the recursion. Testing for a correct result requires making a call to ProcessThePermutation from within ProcessAllPermutations, so it is difficult to replace the callee with some other function. The caller function must be modified each time you need another method of testing / processing / whatever. Or one would have to provide a pointer to a processing function (a 'callback') and push it down through all the recursion, down to the place where the call happens. This might be done indirectly by a virtual function in some context object, so it would look quite nice – but the overhead of passing additional data down the recursive calls cannot be avoided.
The routine has yet another interesting property: it does not rely on the data values. Elements of the array are never compared. This may sometimes be an advantage: the routine can permute any kind of objects, even if they are not comparable. On the other hand it can not detect duplicates, so in case of equal items it will make repeated results. In a degenerate case of all n equal items the result will be n! equal sequences.
So if you ask how to generate all permutations to detect a sorted one, I must answer: DON'T.
Do learn effective sorting algorithms instead.
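For the record, here is what leaning on the standard library looks like (a minimal sketch of my own, unrelated to the permutation routine above):
#include <algorithm>
#include <vector>

int main()
{
    std::vector<int> arr{3, 1, 2, 5, 4};
    std::sort(arr.begin(), arr.end());                  // O(n log n), done
    return std::is_sorted(arr.begin(), arr.end()) ? 0 : 1;
}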

C++ Optimizing this Algorithm

After watching some Terence Tao videos, I wanted to try implementing algorithms into c++ code to find all the prime numbers up to a number n. In my first version, where I simply had every integer from 2 to n tested to see if they were divisible by anything from 2 to sqrt(n), I got the program to find the primes between 1-10,000,000 in ~52 seconds.
Attempting to optimize the program, and implementing what I now know to be the Sieve of Eratosthenes, I assumed the task would be done much faster than 51 seconds, but sadly, that wasn't the case. Even going up to 1,000,000 took a considerable amount of time (didn't time it, though)
#include <iostream>
#include <vector>
#include <cstdlib>
using namespace std;

int main()
{
    vector<int> tosieve = {};
    for (int i = 2; i < 1000001; i++)
    {
        tosieve.push_back(i);
    }
    for (int j = 0; j < tosieve.size(); j++)
    {
        for (int k = j + 1; k < tosieve.size(); k++)
        {
            if (tosieve[k] % tosieve[j] == 0)
            {
                tosieve.erase(tosieve.begin() + k);
            }
        }
    }
    //for (int f = 0; f < tosieve.size(); f++)
    //{
    //    cout << (tosieve[f]) << endl;
    //}
    cout << (tosieve.size()) << endl;
    system("pause");
}
Is it the repeated referencing of the vectors or something? Why is this so slow? Even if I'm completely overlooking something (could be, complete beginner at this :I) I would think that finding the primes between 2 and 1,000,000 with this horribly inefficient method would be faster than my original way of finding them from 2 to 10,000,000.
Hope someone has a clear answer to this - hopefully I can use whatever knowledge is gleaned in the future when optimizing programs using a lot of recursion.
The problem is that 'erase' moves every element in the vector down one, meaning it is an O(n) operation.
There are three alternative choices:
1) Just mark deleted elements as 'empty' (make them 0, for example). This will mean future passes have to pass over those empty positions, but that isn't that expensive.
2) Make a new vector, and push_back new values into there.
3) Use std::remove_if: This will move the elements down, but do it in a single pass so will be more efficient. If you use std::remove_if, then you will have to remember it doesn't resize the vector itself.
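For option 3, a small sketch (assuming composites were set to 0 in place and <algorithm> is included):
tosieve.erase(std::remove_if(tosieve.begin(), tosieve.end(),
                             [](int x) { return x == 0; }),
              tosieve.end());   // remove_if only compacts; erase() actually shrinks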
Most vector operations, including erase(), have O(n) linear time complexity.
Since you have two nested loops of size 10^6, and a vector of size 10^6, your algorithm executes up to 10^18 operations.
Cubic algorithms for such a big N will take a huge amount of time.
N = 10^6 is big enough that even quadratic algorithms are too slow.
Please read carefully about the Sieve of Eratosthenes. The fact that both the full search and your "Sieve of Eratosthenes" took about the same time means that you have implemented the second one incorrectly.
I see two performance issues here:
First of all, push_back() will have to reallocate the dynamic memory block once in a while. Use reserve():
vector<int> tosieve = {};
tosieve.reserve(1000001);
for (int i = 2; i < 1000001; i++)
{
    tosieve.push_back(i);
}
Second, erase() has to move all elements behind the one you try to remove. Set the elements to 0 instead and do a single pass over the vector at the end (untested code):
for (auto& x : tosieve) {
    for (auto y = tosieve.begin(); *y < x; ++y) // this check works only in
                                                // the case of an ordered vector
        if (*y != 0 && x % *y == 0) x = 0;
}
{ // this block will make sure that sieved will be released afterwards
    auto sieved = vector<int>{};
    for(auto x : tosieve)
        if (x != 0)           // keep only the survivors
            sieved.push_back(x);
    swap(tosieve, sieved);
} // the large memory block is released now, just keep the sieved elements.
Consider using standard algorithms instead of hand-written loops. They help you state your intent. In this case I'd use std::transform() for the outer loop of the sieve, std::any_of() for the inner loop, std::generate_n() for filling tosieve at the beginning, and std::copy_if() for filling sieved (untested code):
vector<int> tosieve = {};
tosieve.reserve(1000001);
generate_n(back_inserter(tosieve), 1000001, []() -> int {
    static int i = 2; return i++;
});
transform(begin(tosieve), end(tosieve), begin(tosieve), [&tosieve](int i) -> int {
    return any_of(begin(tosieve), begin(tosieve) + (i - 2),
                  [&i](int j) -> bool {
                      return j != 0 && i % j == 0;
                  }) ? 0 : i;
});
tosieve = [&tosieve]() -> vector<int> {
    auto sieved = vector<int>{};
    copy_if(begin(tosieve), end(tosieve), back_inserter(sieved),
            [](int i) -> bool { return i != 0; });
    return sieved;
}();
EDIT:
Yet another way to get that done:
vector<int> tosieve = {};
tosieve.reserve(1000001);
generate_n(back_inserter(tosieve), 1000001, []() -> int {
    static int i = 2; return i++;
});
tosieve = [&tosieve]() -> vector<int> {
    auto sieved = vector<int>{};
    copy_if(begin(tosieve), end(tosieve), back_inserter(sieved),
            [&tosieve](int i) -> bool {
                return !any_of(begin(tosieve), begin(tosieve) + (i - 2),
                               [&i](int j) -> bool {
                                   return i % j == 0;
                               });
            });
    return sieved;
}();
Now, instead of marking the elements we don't want and copying afterwards, we directly copy only the elements we want to keep. This is not only faster than the above suggestion, but also states the intent better.
Very interesting task you have. Thanks!
With pleasure I implemented from scratch my own versions of solving it.
I created 3 separate (independent) functions, all based on Sieve of Eratosthenes. These 3 versions are different in their complexity and speed.
Just a quick note, my simplest (slowest) version finds all primes below your desired limit of 10'000'000 within just 0.025 sec (i.e. 25 milli-seconds).
I also tested all 3 versions to find primes below 2^32 (4'294'967'296), which is solved by "simple" version within 47 seconds, by "intermediate" version within 30 seconds, by "advanced" within 12 seconds. So within just 12 seconds it finds all primes below 4 Billion (there are 203'280'221 such primes below 2^32, see OEIS sequence)!!!
For simplicity, I will describe in detail only the Simple version of the 3. Here's the code:
#include <cstdint>
#include <cstddef>
#include <vector>

using u8 = std::uint8_t;

template <typename T>
std::vector<T> GenPrimes_SieveOfEratosthenes(size_t end) {
    // https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
    if (end <= 2)
        return {};
    size_t const cnt = end >> 1;
    std::vector<u8> composites((cnt + 7) / 8);
    auto Get = [&](size_t i){ return bool((composites[i / 8] >> (i % 8)) & 1); };
    auto Set = [&](size_t i){ composites[i / 8] |= u8(1) << (i % 8); };
    std::vector<T> primes = {2};
    size_t i = 0;
    for (i = 1; i < cnt; ++i) {
        if (Get(i))
            continue;
        size_t const p = 2 * i + 1, start = (p * p) >> 1;
        primes.push_back(p);
        if (start >= cnt)
            break;
        for (size_t j = start; j < cnt; j += p)
            Set(j);
    }
    for (i = i + 1; i < cnt; ++i)
        if (!Get(i))
            primes.push_back(2 * i + 1);
    return primes;
}
This code implements the simplest but fast algorithm for finding primes, called the Sieve of Eratosthenes. As a small optimization of speed and memory, I search only over odd numbers. This odd-numbers optimization lets me store half as much memory and do half as many steps, hence it improves both speed and memory consumption exactly 2 times.
The algorithm is simple: we allocate an array of bits; this array has bit 1 at position K if K is composite, or 0 if K is probably prime. At the end, all 0 bits in the array signify definite primes (numbers that are certainly prime). Also, due to the odd-numbers optimization, this bit array stores only odd numbers, so the K-th bit actually represents the number 2 * K + 1.
Then, left to right, we go over this array of bits, and if we meet a 0 bit at position K, it means we have found a prime number P = 2 * K + 1; starting from position (P * P) / 2 we then mark every P-th bit with 1. This marks all numbers bigger than P*P that are composite because they are divisible by P.
We do this procedure only until P * P becomes greater than or equal to our limit End (we're finding all primes < End). This limit guarantees that after reaching it, ALL zero bits inside the array signify prime numbers.
The second version of the code adds only one optimization to this Simple version: it makes it multi-core (multi-threaded). But this single optimization makes the code much bigger and more complex. Basically it slices the whole range of bits across all cores, so that they write bits to memory in parallel.
I'll explain only my third, Advanced version; it is the most complex of the 3 versions. It applies not only the multi-threading optimization, but also a so-called Primorial optimization.
What is a Primorial? It is the product of the first few smallest primes; for example, I take the primorial 2 * 3 * 5 * 7 = 210.
We can see that any primorial splits the infinite range of integers into wheels by the modulus of this primorial. For example, primorial 210 splits it into the ranges [0; 210), [210; 2*210), [2*210; 3*210), etc.
Now it is easy to prove mathematically that inside all primorial ranges we can mark the same positions as composite; specifically, we can mark all numbers that are a multiple of 2, 3, 5 or 7 as composite.
We can see that out of 210 remainders there are 162 remainders that are for sure composite, and only 48 remainders are probably prime.
Hence it is enough for us to check primality of only 48/210 = 22.8% of the whole search space. This reduction of the search space makes the task more than 4x faster and 4x less memory-consuming.
One can see that my first Simple version, due to the odd-only optimization, was in fact already using the primorial-equal-to-2 optimization. Yes, if we take primorial 2 instead of primorial 210, we get exactly the first (Simple) version of the algorithm.
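As a quick cross-check of the 48-out-of-210 figure above (my own snippet, needs C++17 for std::gcd):
#include <iostream>
#include <numeric>   // std::gcd

int main()
{
    int probably_prime = 0;
    for (int r = 0; r < 210; ++r)
        if (std::gcd(r, 210) == 1)   // not divisible by 2, 3, 5 or 7
            ++probably_prime;
    std::cout << probably_prime << "\n";   // prints 48
}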
All 3 of my versions are tested for correctness and speed, although some tiny bugs may remain. Note that it is recommended not to use my code in production straight away unless it is tested thoroughly.
All 3 versions are tested for correctness by re-using each other's answers. I thoroughly test correctness by feeding all limits (end values) from 0 to 2^18. It takes some time to do this.
See main() function to figure out how to use my functions.
Try it online!
Due to the StackOverflow limit of 30K symbols per post, I can't inline the source code here, as it is almost 30K in size and together with the text above it takes more than 30K. So I'm providing the source code on a separate GitHub Gist, linked below. Note that the Try it online! link above also contains the full source code, but I reduced the search limit from 2^32 to a smaller one due to GodBolt's 3-second run-time limit.
Github Gist code
Output:
10M time 'Simple' 0.024 sec
Time 2^32 'Simple' 46.924 sec, number of primes 203280221
Time 2^32 'Intermediate' 30.999 sec
Time 2^32 'Advanced' 11.359 sec
All checked till 0
All checked till 5000
All checked till 10000
All checked till 15000
All checked till 20000
All checked till 25000

Counting numbers a AND s = a

I am writing a program to meet the following specifications:
You have a list of integers, initially the list is empty.
You have to process Q operations of three kinds:
add s: Add integer s to your list, note that an integer can exist
more than one time in the list
del s: Delete one copy of integer s from the list, it's guaranteed
that at least one copy of s will exist in the list.
cnt s: Count how many integers a are there in the list such that a
AND s = a , where AND is bitwise AND operator
Additional constraints:
1 ≤ Q ≤ 200000
0 ≤ s < 2 ^ 16
I have two approaches but both time out, as the constraints are quite large.
I used the fact that a AND s = a if and only if s has all the set bits of a, and the other bits can be arbitrarily assigned. So we can iterate over all these numbers and increase their count by one.
For example, if we have the number 10: 1010
Then the numbers 1010, 1011, 1110 and 1111 will be such that when ANDed with 1010, they will give 1010. So we increase the counts of 10, 11, 14 and 15 by 1. And for delete we subtract one from their respective counts.
Is there a faster method? Should I use a different data structure?
Let's consider two ways to solve it that are too slow, and then merge them into one solution that is guaranteed to finish in milliseconds.
Approach 1 (slow)
Allocate an array v of size 2^16. Every time you add an element, do the following:
void add(int s) {
    for (int i = 0; i < (1 << 16); ++ i) if ((s & i) == 0) {
        v[s | i] ++;
    }
}
(to delete do the same, but decrement instead of incrementing)
Then to answer cnt s you just need to return the value of v[s]. To see why, note that v[s] is incremented exactly once for every number a that is added such that a & s == a (I will leave it as an exercise to figure out why this is the case).
Approach 2 (slow)
Allocate an array v of size 2^16. When you add an element s, just increment v[s]. To query the count, do the following:
int cnt(int s) {
    int ret = 0;
    for (int i = 0; i < (1 << 16); ++ i) if ((s | i) == s) {
        ret += v[s & ~i];
    }
    return ret;
}
(x & ~y is a number that has all the bits that are set in x that are not set in y)
This is a more straightforward approach, and is very similar to what you do, but is written in a slightly different fashion. You will see why I wrote it this way when we combine the two approaches.
Both of these approaches are too slow, because in each of them one operation is constant and the other is O(s), so in the worst case, when the entire input consists of the slow operations, we spend O(Q * s), which is prohibitively slow. Now let's merge the two approaches using meet-in-the-middle to get a faster solution.
Fast approach
We will merge the two approaches in the following way: add will work similarly to the first approach, but instead of considering every number a such that a & s == a, we will only consider numbers, that differ from s only in the lowest 8 bits:
void add(int s) {
    for (int i = 0; i < (1 << 8); ++ i) if ((i & s) == 0) {
        v[s | i] ++;
    }
}
For delete do the same, but instead of incrementing elements, decrement them.
For counts we will do something similar to the second approach, but we will account for the fact that each v[a] is already accumulated for all combinations of the lowest 8 bits, so we only need to iterate over all the combinations of the higher 8 bits:
int cnt(int s) {
    int ret = 0;
    for (int i = 0; i < (1 << 8); ++ i) if ((s | (i << 8)) == s) {
        ret += v[s & ~(i << 8)];
    }
    return ret;
}
Now both add and cnt work in O(sqrt(s)), so the entire approach is O(Q * sqrt(s)), which for your constraints should be milliseconds.
Pay extra attention to overflows -- you didn't provide the upper bound on s, if it is too high, you might want to replace ints with long longs.
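To tie the snippets together, here is minimal scaffolding of my own: the shared table v that add() and cnt() index into (it needs to be visible to both functions), the del() operation the answer describes but does not spell out, and a tiny usage check.
#include <vector>

std::vector<int> v(1 << 16, 0);   // shared count table, indexed by 16-bit values

void del(int s) {                 // mirror image of add(): decrement instead
    for (int i = 0; i < (1 << 8); ++ i) if ((i & s) == 0) {
        v[s | i] --;
    }
}

int main() {
    add(10);                      // 1010
    add(3);                       // 0011
    // both 10 and 3 are subsets of 11 (1011), so cnt(11) should be 2
    return cnt(11) == 2 ? 0 : 1;
}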
One way to solve it is to break the list of queries into blocks of about sqrt(S) queries each. This is a standard approach, usually called sqrt-decomposition.
You have to store separately:
Array A[v]: how much times s is present.
Array R[v]: sum of A[i] for all i supersets of v (i.e. result of cnt(v)).
List W of all changes (add, del operations) within current block of queries.
Note: arrays A and R are valid only for all the changes from the fully processed block of queries. All the changes that happened within the currently processed block of queries are stored in W and are not yet applied to A and R.
Now we process queries block by block, for each block of queries we do:
For each query within block:
add(v): store increment for v into W list.
del(v): store decrement for v into W list.
cnt(v): return R[v] + X(W), where X(W) is the total change calculated by trivially processing all the changes in the list W.
Apply all the changes from W to array A, clear list W.
Recalculate completely array R from array A.
Note that add and del take O(1) time, and cnt takes O(|W|) = O(sqrt(S)) time. So step 1 takes O(Q sqrt(S)) time in total.
Step 2 takes O(|W|) time, which totals in O(Q) time overall.
The most important part is step 3. We need to implement it in O(S). Given that there are Q / sqrt(S) blocks, this would total in O(Q sqrt(S)) time as wanted.
Unfortunately, recalculating array R can only be done in O(S log S) time. That would mean O(Q sqrt(S) log(S)) time overall. If we choose a block size of O(sqrt(S log S)), then the overall time is O(Q sqrt(S log S)). Not perfect, but interesting nonetheless =)
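Step 3 made concrete (my own sketch of the standard superset-sum DP, with S = 2^16):
#include <cstdint>
#include <vector>

std::vector<std::int64_t> rebuild_R(const std::vector<std::int64_t>& A) {
    const int BITS = 16;
    std::vector<std::int64_t> R = A;          // start from the raw counts
    for (int bit = 0; bit < BITS; ++bit)
        for (int v = 0; v < (1 << BITS); ++v)
            if (!(v & (1 << bit)))
                R[v] += R[v | (1 << bit)];    // fold in supersets differing in this bit
    return R;
}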
Given the data structure that you described in one of the comments, you could try the following algorithm (I am giving it in pseudo-code):
count-how-many-integers(integer s) {
    sum = 0
    for i starting from s and increasing by 1 until s*2 {
        if (i AND s) == i {
            sum = sum + a[i]
        }
    }
    return sum
}
More sophisticated optimizations should be possible in the inner loop to reduce the number of times the test is performed.