Atomic int incorrectly incrementing? Intel TBB Implementation - c++

I'm implementing a multi-threaded program to validate the Collatz conjecture for a range of numbers using Intel TBB, and I am having trouble figuring out why the atomic<int> variable count (which keeps track of how many numbers were validated) is not incremented correctly.
For the relevant code listed below, I used a small interval (validating just the numbers 1-10, but the problem scales as the interval gets larger) and I consistently get a final value of 18 for count. Any ideas?
task_scheduler_init init(4);
atomic<int> count;

int main()
{
    tick_count parallel_collatz_start = tick_count::now();
    parallel_collatz();
    tick_count parallel_collatz_end = tick_count::now();
    double parallel_time = 1000 * (parallel_collatz_end - parallel_collatz_start).seconds();
}
void parallel_collatz()
{
    parallel_for
    (
        blocked_range<int>(1,10), [=](const blocked_range<int>& r)
        {
            for (int k = r.begin(); k <= r.end(); k++) { collatz(k); }
        }
    );
}
long long collatz(long long n)
{
    while (n != 1) {
        if (n % 2 == 0)
            n = (n / 2);
        else
            n = (3 * n + 1);
    }
    if (n == 1) {
        count++;
        return n;
    }
    return -1;
}

The reason is probably that the constructor uses a half-open range: blocked_range<int>(1,10) covers [1, 10), i.e. 1 inclusive, 10 exclusive, so you're validating 1-9 rather than 1-10. Additionally, you probably want to use != (or <) instead of <= in your loop condition; with <= each subrange also processes its own end element, which belongs to the next subrange, so numbers on chunk boundaries get counted twice.
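A minimal corrected sketch of the loop (assuming the same TBB interfaces the question already uses):

// Corrected sketch: cover 1..10 with a half-open range and use '<' in the loop.
void parallel_collatz()
{
    parallel_for
    (
        blocked_range<int>(1, 11),            // [1, 11) == the numbers 1..10
        [=](const blocked_range<int>& r)
        {
            for (int k = r.begin(); k < r.end(); k++)   // never process r.end()
                collatz(k);
        }
    );
}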

Related

Need optimization tips for a subset sum like problem with a big constraint

Given a number 1 <= N <= 3*10^5, count all subsets of the set {1, 2, ..., N-1} that sum up to N. This is essentially a modified version of the subset sum problem, with the modification that the target sum and the number of elements are essentially the same, and that the set/array increases linearly by 1 up to N-1.
I think I have solved this using DP with an ordered map and an inclusion/exclusion recursive algorithm, but due to the time and space complexity I can't compute more than 10000 elements.
#include <iostream>
#include <chrono>
#include <map>
#include "bigint.h"
using namespace std;

// 2d hashmap to store values from recursion; keys: i & sum; value: count
map<pair<int, int>, bigint> hmap;

bigint counter(int n, int i, int sum){
    // end case
    if(i == 0){
        if(sum == 0){
            return 1;
        }
        return 0;
    }
    // alternative end case: the sum reached zero before iterating through all of the possible combinations
    if(sum == 0){
        return 1;
    }
    // case where the result of the recursion is already in the hashmap
    if(hmap.find(make_pair(i, sum)) != hmap.end()){
        return hmap[make_pair(i, sum)];
    }
    // only proceed with further recursion if the resulting sum wouldn't be negative
    if(sum - i < 0){
        // optimization that skips unnecessary recursive branches
        return hmap[make_pair(i, sum)] = counter(n, sum, sum);
    }
    else{
        // include the number / don't include the number
        return hmap[make_pair(i, sum)] = counter(n, i - 1, sum - i) + counter(n, i - 1, sum);
    }
}
The function is called with starting values N, N-1, and N: the number of elements, the iterator (which decrements), and the sum of the recursive branch (which decreases with every included value).
This is the code that calculates the number of subsets. For an input of 3000 it takes around ~22 seconds to output the result, which is 40 digits long. Because of the long digits I had to use an arbitrary-precision library, bigint from rgroshanrg, which works fine for values less than ~10000. Testing beyond that gives me a segfault on lines 28-29, maybe due to the stored arbitrary-precision values becoming too big and conflicting in the map. I need to somehow improve this code so it can work with values beyond 10000, but I am stumped. Any ideas, or should I switch to another algorithm and data storage?
Here is a different algorithm, described in a paper by Evangelos Georgiadis, "Computing Partition Numbers q(n)":
std::vector<BigInt> RestrictedPartitionNumbers(int n)
{
    std::vector<BigInt> q(n, 0);
    // initialize q with A010815
    for (int i = 0; ; i++)
    {
        int n0 = i * (3 * i - 1) >> 1;
        if (n0 >= q.size())
            break;
        q[n0] = 1 - 2 * (i & 1);
        int n1 = i * (3 * i + 1) >> 1;
        if (n1 < q.size())
            q[n1] = 1 - 2 * (i & 1);
    }
    // construct A000009 as per "Evangelos Georgiadis, Computing Partition Numbers q(n)"
    for (size_t k = 0; k < q.size(); k++)
    {
        size_t j = 1;
        size_t m = k + 1;
        while (m < q.size())
        {
            if ((j & 1) != 0)
                q[m] += q[k] << 1;
            else
                q[m] -= q[k] << 1;
            j++;
            m = k + j * j;
        }
    }
    return q;
}
It's not the fastest algorithm out there, and this took about half a minute on my computer for n = 300000. But you only need to do it once (since it computes all partition numbers up to some bound) and it doesn't take a lot of memory (a bit over 150MB).
The results go up to but exclude n, and they assume that a number is allowed to be a partition of itself, e.g. the set {4} is a partition of the number 4. Your definition of the problem excludes that case, so you need to subtract 1 from the result.
Maybe there's a nicer way to express A010815, that part of the code isn't slow though, I just think it looks bad.
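For illustration, a small hedged usage sketch (my own, not part of the answer; BigInt stands for whatever arbitrary-precision type you plug in and is assumed to support subtraction and streaming; the "+1"/"-1" adjustments follow the remarks above):

// Count subsets of {1, ..., N-1} summing to N, i.e. partitions of N into
// distinct parts minus the trivial partition {N} itself.
int N = 3000;
std::vector<BigInt> q = RestrictedPartitionNumbers(N + 1); // entries for 0 .. N
BigInt answer = q[N] - 1;                                  // drop the {N} partition
std::cout << answer << std::endl;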

For a given number N, how do I find x, S.T product of (x and no. of factors to x) = N?

To find the factors of a number, I am using the function void primeFactors(int n):
# include <stdio.h>
# include <math.h>
# include <iostream>
# include <map>
using namespace std;

// A function to print all prime factors of a given number n
map<int,int> m;

void primeFactors(int n)
{
    // Print the number of 2s that divide n
    while (n % 2 == 0)
    {
        printf("%d ", 2);
        m[2] += 1;
        n = n / 2;
    }
    // n must be odd at this point. So we can skip one element (note i = i + 2)
    for (int i = 3; i <= sqrt(n); i = i + 2)
    {
        // While i divides n, print i and divide n
        while (n % i == 0)
        {
            int k = i;
            printf("%d ", i);
            m[k] += 1;
            n = n / i;
        }
    }
    // This condition is to handle the case when n is a prime number
    // greater than 2
    if (n > 2)
        m[n] += 1;
    printf("%d ", n);
    cout << endl;
}

/* Driver program to test above function */
int main()
{
    int n = 72;
    primeFactors(n);
    map<int,int>::iterator it;
    int to = 1;
    for(it = m.begin(); it != m.end(); ++it){
        cout << it->first << " appeared " << it->second << " times " << endl;
        to *= (it->second + 1);
    }
    cout << to << " total facts" << endl;
    return 0;
}
You can check it here. Test case n = 72.
http://ideone.com/kaabO0
How do I solve the above problem using the above algorithm? (Can it be optimized more?) I have to consider large numbers as well.
What I want to do:
Take for example N = 864: we find X = 72, since 72 * 12 (its number of factors) = 864.
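To make the goal concrete, here is a hedged brute-force sketch of that search (my own code, not part of the question; divisorCount is a name I introduced):

#include <iostream>

// Number of divisors of x via trial division (fine for moderate x).
long long divisorCount(long long x) {
    long long count = 1;
    for (long long p = 2; p * p <= x; ++p) {
        int e = 0;
        while (x % p == 0) { x /= p; ++e; }
        count *= (e + 1);
    }
    if (x > 1) count *= 2;   // one remaining prime factor
    return count;
}

int main() {
    long long N = 864;
    // x * d(x) = N implies x divides N, so only divisors of N need testing.
    for (long long x = 1; x <= N; ++x) {
        if (N % x == 0 && x * divisorCount(x) == N) {
            std::cout << "x = " << x << std::endl;   // prints 72 for N = 864
            break;
        }
    }
    return 0;
}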
There is a prime-factorization algorithm for big numbers, but it is actually not often used in programming contests.
I will explain 3 methods, and you can implement them using this algorithm.
Once you have implemented it, I suggest solving this problem.
Note: in this answer, I use the integer Q for the number of queries.
O(Q * sqrt(N)) solution per query
Your algorithm's time complexity is O(sqrt(n)) per query.
But you are implementing it with int (32-bit); since you have to handle large numbers, you should use long long integers.
Here's my implementation: http://ideone.com/gkGkkP
O(sqrt(maxn) * log(log(maxn)) + Q * sqrt(maxn) / log(maxn)) algorithm
You can reduce the number of loop iterations because composite values of i are not necessary.
So, you can only use prime numbers in the loop.
Algorithm:
Calculate all prime numbers <= sqrt(n) with Eratosthenes's sieve. The time complexity is O(sqrt(maxn) * log(log(maxn))).
In a query, loop over i (i <= sqrt(n) and i a prime number). The number of valid values of i is about sqrt(n) / log(n) by the prime number theorem, so the time complexity is O(sqrt(n) / log(n)) per query.
More efficient algorithm
There are more efficient algorithms out there, but they are not often used in programming contests.
If you look up "integer factorization algorithm" on the internet or Wikipedia, you will find algorithms like Pollard's rho or the general number field sieve.
Well, here is the code for the sieve-based approach.
# include <stdio.h>
# include <iostream>
# include <map>
using namespace std;

const long MAX_NUM = 2000000;
long prime[MAX_NUM] = {0}, primeCount = 0;
bool isNotPrime[MAX_NUM] = {1, 1}; // yes, this can be improved, but it no longer matters once sieveOfEratosthenes has finished

void sieveOfEratosthenes() {
    // #see https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
    for (long i = 2; i < MAX_NUM; i++) { // it must be i++
        if (!isNotPrime[i]) // if it is prime, put it into prime[]
            prime[primeCount++] = i;
        for (long j = 0; j < primeCount && i * prime[j] < MAX_NUM; j++) { /* for each prime[] */
            isNotPrime[i * prime[j]] = 1; // mark i * prime[j] as not prime
            if (!(i % prime[j])) // if this prime is the smallest factor of i, then break.
                // That is also the answer to why it is not i += ((i & 1) ? 2 : 1).
                // Hint: when we judge 2, prime[] = {2}, we set 2*2=4 not prime
                //       when we judge 3, prime[] = {2,3}, we set 3*2=6 and 3*3=9 not prime
                //       when we judge 4, prime[] = {2,3}, we set 4*2=8 not prime (why not set 4*3=12?)
                //       when we judge 5, prime[] = {2,3,5}, we set 5*2=10, 5*3=15, 5*5=25 not prime
                //       when we judge 6, prime[] = {2,3,5}, we set 6*2=12 not prime, then we can stop
                //       why not mark 6*3=18 and 6*5=30 as well? because 18=9*2 and 30=15*2.
                // This way each number is marked only once; it is difficult to understand but very useful.
                break;
        }
    }
}

void primeFactors(long n)
{
    map<int,int> m;
    map<int,int>::iterator it;
    for (int i = 0; prime[i] <= n; i++) // we test all primes not larger than n, like 2 3 5 7... it must be i++
    {
        while (n % prime[i] == 0)
        {
            cout << prime[i] << " ";
            m[prime[i]] += 1;
            n = n / prime[i];
        }
    }
    cout << endl;
    int to = 1;
    for (it = m.begin(); it != m.end(); ++it) {
        cout << it->first << " appeared " << it->second << " times " << endl;
        to *= (it->second + 1);
    }
    cout << to << " total facts" << endl;
}

int main()
{
    // first, initialize by computing all prime numbers; for example we define MAX_NUM = 2000000.
    // The result in prime[] should be kept around; your primeFactors will use it.
    sieveOfEratosthenes();
    // second, loop for i (i*i <= n and i is a prime number). n <= MAX_NUM
    int n = 72;
    primeFactors(n);
    n = 864;
    primeFactors(n);
    return 0;
}
My best shot at performance without going overboard with special algorithms.
The Eratosthenes sieve - the complexity of the code below is O(N*log(log(N))), helped by the fact that the inner j loop starts from i*i instead of i.
#include <vector>
using std::vector;

void erathostenes_sieve(size_t upToN, vector<size_t>& primes) {
    primes.clear();
    vector<bool> bitset(upToN+1, true); // if bitset[i] is true, then i is prime
    bitset[0] = bitset[1] = false;
    // if i is 2, it will jump to 3, otherwise it will jump over odd numbers only
    for(size_t i = 2; i <= upToN; i += ( (i & 1) ? 2 : 1)) {
        if(bitset[i]) { // i is prime
            primes.push_back(i);
            // it is enough to start the next cycle from i*i, because all the
            // other primality tests below it are already performed:
            // e.g:
            // - i*(i-1) was surely marked non-prime when we considered multiples of 2
            // - i*(i-2) was tested at (i-2) if (i-2) was prime or earlier (if non-prime)
            for(size_t j = i*i; j <= upToN; j += i) {
                bitset[j] = false; // all multiples of the prime with value of i
                                   // are marked non-prime, using **addition only**
            }
        }
    }
}
Now, factoring based on the primes (stored in a sorted vector). Before that, let's examine the myth that sqrt is expensive while a large bunch of multiplications is not.
First of all, let us note that sqrt is not that expensive anymore: on older CPUs (x86/32-bit) it used to be twice as expensive as a division (and a modulo operation is a division); on newer architectures the costs are equal. Since factorisation is all about % operations again and again, one may still consider sqrt now and then (e.g. if and when using it saves CPU time).
For example, consider the following code for N=65537 (which is the 6543rd prime), assuming primes has 10000 entries:
size_t limit = std::sqrt(N);
size_t largestPrimeGoodForN = std::distance(
    primes.begin(),
    std::upper_bound(primes.begin(), primes.end(), limit) // binary search
);
// go descendingly from limit!!!
for(int i = largestPrimeGoodForN; i >= 0; i--) {
    // factorisation loop
}
We have:
1 sqrt (roughly the cost of 1 modulo),
1 search in 10000 entries - at most 14 steps, each involving 1 comparison, 1 right-shift (division by 2) and 1 increment/decrement - so let's say a cost equal to 14-20 multiplications (if that),
1 difference because of std::distance.
So, maximal cost: 1 div and 20 muls? And I'm being generous.
On the other side:
for(int i = 0; primes[i]*primes[i] < N; i++) {
    // factorisation code
}
Looks much simpler, but as N=65537 is prime, we'll go through the whole cycle up to i=54 (where we find the first prime that causes the cycle to break) - a total of 55 multiplications.
Try this with a higher prime number and I guarantee you that the cost of 1 sqrt + 1 binary search is a better use of the CPU cycles than all the multiplications along the way in the simpler form of the cycle touted as the better-performing solution.
So, back to factorisation code:
#include <algorithm>
#include <cmath>
#include <unordered_map>

void factor(size_t N, std::unordered_map<size_t, size_t>& factorsWithMultiplicity) {
    factorsWithMultiplicity.clear();
    while( !(N & 1) ) { // while N is even, cheaper test than a '% 2'
        factorsWithMultiplicity[2]++;
        N = N >> 1; // div by 2 of an unsigned number, cheaper than the actual /2
    }
    // now that we know N is odd, we start using the primes from the sieve
    size_t limit = std::sqrt(N); // sqrt is no longer *that* expensive,
    vector<size_t> primes;
    // fill the primes up to the limit. Let's be generous, add 1 to it
    erathostenes_sieve(limit+1, primes);
    // we know that the largest prime worth checking is
    // the last element of the primes.
    for(
        size_t largestPrimeIndexGoodForN = primes.size()-1;
        largestPrimeIndexGoodForN < primes.size(); // size_t is unsigned, so after zero it will underflow
        // we'll handle the cycle index inside
    ) {
        bool wasFactor = false;
        size_t factorToTest = primes[largestPrimeIndexGoodForN];
        while( !( N % factorToTest) ) {
            wasFactor = true; // found one
            factorsWithMultiplicity[factorToTest]++;
            N /= factorToTest;
        }
        if(1 == N) { // done
            break;
        }
        if(wasFactor) { // time to resynchronize the index
            limit = std::sqrt(N);
            largestPrimeIndexGoodForN = std::distance(
                primes.begin(),
                std::upper_bound(primes.begin(), primes.end(), limit)
            );
        }
        else { // no luck this time
            largestPrimeIndexGoodForN--;
        }
    } // done the factoring cycle
    if(N > 1) { // whatever remains of N is itself prime
        factorsWithMultiplicity[N]++;
    }
}
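As a small hedged usage example (my own, not part of the answer): the divisor count the question ultimately needs is the product of (exponent + 1) over the map that factor() fills.

// Hypothetical usage of factor(): the number of divisors of 72 is (3+1)*(2+1) = 12.
std::unordered_map<size_t, size_t> f;
factor(72, f);
size_t divisors = 1;
for (const auto& kv : f)
    divisors *= (kv.second + 1);
// divisors == 12, and 72 * 12 == 864 matches the N from the question.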

Total number of ways to write a positive integer as the sum of powers of 2 in efficient time

I've been looking at Number of ways to write n as a sum of powers of 2 and it works just fine, but I was wondering how to improve the run time efficiency of that algorithm. It fails to compute anything above ~1000 in any reasonable amount of time (under 10 seconds).
I'm assuming it has something to do with breaking it down into subproblems but don't know how to go about it. I was thinking something like O(n) or O(nlogn) runtime - I'm sure it is possible somehow. I just don't know how to split up the work efficiently.
code via Chasefornone
#include<iostream>
using namespace std;

int log2(int n)
{
    int ret = 0;
    while (n >>= 1)
    {
        ++ret;
    }
    return ret;
}

int power(int x, int y)
{
    int ret = 1, i = 0;
    while (i < y)
    {
        ret *= x;
        i++;
    }
    return ret;
}

int getcount(int m, int k)
{
    if (m == 0) return 1;
    if (k < 0) return 0;
    if (k == 0) return 1;
    if (m >= power(2, k)) return getcount(m - power(2, k), k) + getcount(m, k - 1);
    else return getcount(m, k - 1);
}

int main()
{
    int m = 0;
    while (cin >> m)
    {
        int k = log2(m);
        cout << getcount(m, k) << endl;
    }
    return 0;
}
Since we're dealing with powers of some base (in this case 2), we can easily do it in O(n) time (and space, if we consider the counts of fixed size).
The key is the generating function of the partitions. Let p(n) be the number of ways to write n as a sum of powers of the base b.
Then consider the generating function

    f(X) = ∑_{n=0}^∞ p(n)·X^n
One can write f as an infinite product,

    f(X) = ∏_{k=0}^∞ 1/(1 - X^(b^k))
and if one only wants the coefficients up to some limit l, one need only consider the factors with b^k <= l.
Multiplying them in the correct order (descending), at each step one knows that only coefficients whose index is divisible by b^i are nonzero, so one needs only n/b^k + n/b^(k-1) + ... + n/b + n additions of the coefficients, in total O(n).
Code (not guarding against overflow for larger arguments):
#include <stdio.h>

unsigned long long partitionCount(unsigned n);

int main(void) {
    unsigned m;
    while(scanf("%u", &m) == 1) {
        printf("%llu\n", partitionCount(m));
    }
    return 0;
}

unsigned long long partitionCount(unsigned n) {
    if (n < 2) return 1;
    unsigned h = n / 2, k = 1;
    // find largest power of two not exceeding n
    while(k <= h) k <<= 1;
    // coefficient array
    unsigned long long arr[n+1];
    arr[0] = 1;
    for(unsigned i = 1; i <= n; ++i) {
        arr[i] = 0;
    }
    while(k) {
        for(unsigned i = k; i <= n; i += k) {
            arr[i] += arr[i-k];
        }
        k /= 2;
    }
    return arr[n];
}
It works fast enough:
$ echo "1000 end" | time ./a.out
1981471878
0.00user 0.00system 0:00.00elapsed
A generally-applicable approach to problems like this is to cache the intermediate results, e.g. as follows:
#include <iostream>
#include <map>
using namespace std;

map<pair<int,int>,int> cache;

/*
  The log2() and power() functions remain unchanged and so are omitted for brevity
*/

int getcount(int m, int k)
{
    map<pair<int,int>, int>::const_iterator it = cache.find(make_pair(m,k));
    if (it != cache.end()) {
        return it->second;
    }
    int count = -1;
    if (m == 0) {
        count = 1;
    } else if (k < 0) {
        count = 0;
    } else if (k == 0) {
        count = 1;
    } else if (m >= power(2,k)) {
        count = getcount(m - power(2,k), k) + getcount(m, k-1);
    } else {
        count = getcount(m, k-1);
    }
    cache[make_pair(m,k)] = count;
    return count;
}

/*
  The main() function remains unchanged and so is omitted for brevity
*/
The result for the original program (which I've called nAsSum0) is:
$ echo 1000 | time ./nAsSum0
1981471878
59.40user 0.00system 0:59.48elapsed 99%CPU (0avgtext+0avgdata 467200maxresident)k
0inputs+0outputs (1935major+0minor)pagefaults 0swaps
For the version with caching:
$ echo 1000 | time ./nAsSum
1981471878
0.01user 0.01system 0:00.09elapsed 32%CPU (0avgtext+0avgdata 466176maxresident)k
0inputs+0outputs (1873major+0minor)pagefaults 0swaps
... both run on a Windows 7 PC under Cygwin. Thus, the version with caching was too quick for time to measure accurately, whereas the original version took about 1 minute to run.

My Sieve of Eratosthenes takes too long

I have implemented Sieve of Eratosthenes to solve the SPOJ problem PRIME1. Though the output is fine, my submission exceeds the time limit. How can I reduce the run time?
int main()
{
    vector<int> prime_list;
    prime_list.push_back(2);
    vector<int>::iterator c;
    bool flag = true;
    unsigned int m, n;

    for(int i = 3; i <= 32000; i += 2)
    {
        flag = true;
        float s = sqrt(static_cast<float>(i));
        for(c = prime_list.begin(); c <= prime_list.end(); c++)
        {
            if(*c > s)
                break;
            if(i % (*c) == 0)
            {
                flag = false;
                break;
            }
        }
        if(flag == true)
        {
            prime_list.push_back(i);
        }
    }

    int t;
    cin >> t;
    for (int times = 0; times < t; times++)
    {
        cin >> m >> n;
        if (t) cout << endl;
        if (m < 2)
            m = 2;
        unsigned int j;
        vector<unsigned int> req_list;
        for(j = m; j <= n; j++)
        {
            req_list.push_back(j);
        }
        vector<unsigned int>::iterator k;
        flag = true;
        int p = 0;
        for(j = m; j <= n; j++)
        {
            flag = true;
            float s = sqrt(static_cast<float>(j));
            for(c = prime_list.begin(); c <= prime_list.end(); c++)
            {
                if((*c) != j)
                {
                    if((*c) > s)
                        break;
                    if(j % (*c) == 0)
                    {
                        flag = false;
                        break;
                    }
                }
            }
            if(flag == false)
            {
                req_list.erase(req_list.begin() + p);
                p--;
            }
            p++;
        }
        for(k = req_list.begin(); k < req_list.end(); k++)
        {
            cout << *k;
            cout << endl;
        }
    }
}
Your code is slow because you did not implement the Sieve of Eratosthenes algorithm. The algorithm works this way:
1) Create an array of size n-1, representing the numbers 2 to n, filled with the boolean value true (true means that the number is prime; do not forget we start counting from the number 2, i.e. array[0] is the number 2).
2) Initialize array[0] = false.
3) Set Current_number = 2.
4) Iterate through the array, increasing the index by Current_number each time and marking each visited entry false.
5) Search for the first number (except index 0) with a true value.
6) Set Current_number = index + 2.
7) Continue steps 4-6 until the search is finished.
This algorithm takes O(n log log n) time.
What you do actually takes a lot more time (O(n^2)).
Btw, in the second step of your program (where you search for prime numbers between m and n) you do not have to check whether those numbers are prime again; ideally you will have calculated them in the first phase of the algorithm.
As I see on the site you linked, the main problem is that you can't actually create an array of size n-1, because the maximum number n is 10^9, which causes memory problems if you do it in this naive way. That problem is yours to solve :)
I'd throw out what you have and start over with a really simple implementation of a sieve, and only add more complexity if really needed. Here's a possible starting point:
#include <vector>
#include <iostream>

int main() {
    int number = 32000;
    std::vector<bool> sieve(number, false);
    sieve[0] = true; // Not used for now,
    sieve[1] = true; // but you'll probably need these later.
    for(int i = 2; i < number; i++) {
        if(!sieve[i]) {
            std::cout << "\t" << i;
            for (int temp = 2*i; temp < number; temp += i)
                sieve[temp] = true;
        }
    }
    return 0;
}
For the given range (up to 32000), this runs in well under a second (with output directed to a file -- to the screen it'll generally be slower). It's up to you from there though...
I am not really sure that you have implemented the Sieve of Eratosthenes. Anyway, a couple of things that could speed up your algorithm to some extent: avoid multiple reallocations of the vector contents by preallocating space (look up std::vector<>::reserve). The sqrt operation is expensive, and you can probably avoid it altogether by modifying the tests (stop when x*x > y instead of checking x < sqrt(y)).
Then again, you will get a much better improvement by revising the actual algorithm. From a cursory look it seems as if you are iterating over all candidates and, for each one of them, trying to divide by all the known primes that could be factors. The Sieve of Eratosthenes takes a single prime and discards all multiples of that prime in a single pass.
Note that the sieve does not perform any operation to test whether a number is prime: if it was not discarded before, then it is a prime. Each non-prime number is visited only once for each unique factor. Your algorithm, on the other hand, processes every number many times (against the existing primes).
I think one way to slightly speed up your sieve is to avoid using the mod operator in this line:
if(i%(*c)==0)
Instead of the (relatively) expensive mod operation, maybe you could iterate forward in your sieve with addition.
Honestly, I don't know if this is correct. Your code is difficult to read without comments and with single letter variable names.
The way I understand the problem is that you have to generate all primes in a range [m,n].
A way to do this without having to compute all primes from [0,n], because this is most likely what's slowing you down, is to first generate all the primes in the range [0,sqrt(n)].
Then use the result to sieve in the range [m,n]. To generate the initial list of primes, implement a basic version of the sieve of Eratosthenes (Pretty much just a naive implementation from the pseudo code in the Wikipedia article will do the trick).
This should enable you to solve the problem in very little time.
Here's a simple sample implementation of the sieve of Eratosthenes:
std::vector<unsigned> sieve( unsigned n ) {
    std::vector<bool> v( n, true );    //Will be used for testing numbers
    std::vector<unsigned> p;           //Will hold the prime numbers
    for( unsigned i = 2; i < n; ++i ) {
        if( v[i] ) {                   //Found a prime number
            p.push_back(i);            //Stuff it into our list
            for( unsigned j = i + i; j < n; j += i ) {
                v[j] = false;          //Isn't a prime/Is composite
            }
        }
    }
    return p;
}
It returns a vector containing only the primes from 0 to n. Then you can use this to implement the method I mentioned. Now, I won't provide the implementation for you, but, you basically have to do the same thing as in the sieve of Eratosthenes, but instead of using all integers [2,n], you just use the result you found. Not sure if this is giving away too much?
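For readers who get stuck, here is a minimal hedged sketch of the segmented step described above (my own code, under the assumption that the sieve() function from this answer is available); it sieves the range [m, n] using only primes up to sqrt(n):

#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

std::vector<unsigned> sieve(unsigned n); // the function above

// Minimal segmented sieve sketch: prints the primes in [m, n].
void primesInRange(unsigned m, unsigned n) {
    if (m < 2) m = 2;
    // +2 guards against floating-point rounding of sqrt
    std::vector<unsigned> small = sieve((unsigned)std::sqrt((double)n) + 2);
    std::vector<bool> isPrime(n - m + 1, true);
    for (unsigned p : small) {
        // first multiple of p inside [m, n], but never p itself
        unsigned start = std::max(p * p, ((m + p - 1) / p) * p);
        for (unsigned j = start; j <= n; j += p)
            isPrime[j - m] = false;
    }
    for (unsigned j = m; j <= n; ++j)
        if (isPrime[j - m])
            std::cout << j << "\n";
}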
Since the SPOJ problem in the original question doesn't specify that it has to be solved with the Sieve of Eratosthenes, here's an alternative solution based on this article. On my six year old laptop it runs in about 15 ms for the worst single test case (n-m=100,000).
#include <set>
#include <iostream>
#include <windows.h> // for GetTickCount(), used only for timing
using namespace std;

int gcd(int a, int b) {
    while (true) {
        a = a % b;
        if(a == 0)
            return b;
        b = b % a;
        if(b == 0)
            return a;
    }
}

/**
 * Here is Rowland's formula. We define a(1) = 7, and for n >= 2 we set
 *
 *   a(n) = a(n-1) + gcd(n, a(n-1)).
 *
 * Here "gcd" means the greatest common divisor. So, for example, we find
 * a(2) = a(1) + gcd(2,7) = 8. The prime generator is then a(n) - a(n-1),
 * the so-called first differences of the original sequence.
 */
void find_primes(int start, int end, set<int>* primes) {
    int an;       // a(n)
    int anm1 = 7; // a(n-1)
    int diff;
    for (int n = start; n < end; n++) {
        an = anm1 + gcd(n, anm1);
        diff = an - anm1;
        if (diff > 1)
            primes->insert(diff);
        anm1 = an;
    }
}

int main() {
    const int end = 100000;
    const int start = 2;
    set<int> primes;
    DWORD ticks = GetTickCount();
    find_primes(start, end, &primes);
    ticks = GetTickCount() - ticks;
    cout << "Found " << primes.size() << " primes:" << endl;
    set<int>::iterator iter = primes.begin();
    for (; iter != primes.end(); ++iter)
        cout << *iter << endl;
}
Profile your code, find the hotspots, and eliminate them. (There are good profilers for both Windows and Linux.)

weighted RNG speed problem in C++

Edit: to clarify, the problem is with the second algorithm.
I have a bit of C++ code that samples cards from a 52 card deck, which works just fine:
void sample_allcards(int table[5], int holes[], int players) {
    int temp[5 + 2 * players];
    bool try_again;
    int c, n, i;

    for (i = 0; i < 5 + 2 * players; i++) {
        try_again = true;
        while (try_again == true) {
            try_again = false;
            c = fast_rand52();
            // reject collisions
            for (n = 0; n < i + 1; n++) {
                try_again = (temp[n] == c) || try_again;
            }
            temp[i] = c;
        }
    }
    copy_cards(table, temp, 5);
    copy_cards(holes, temp + 5, 2 * players);
}
I am implementing code to sample the hole cards according to a known distribution (stored as a 2d table). My code for this looks like:
void sample_allcards_weighted(double weights[][HOLE_CARDS], int table[5], int holes[], int players) {
    // weights are distribution over hole cards
    int temp[5 + 2 * players];
    int n, i;

    // table cards
    for (i = 0; i < 5; i++) {
        bool try_again = true;
        while (try_again == true) {
            try_again = false;
            int c = fast_rand52();
            // reject collisions
            for (n = 0; n < i + 1; n++) {
                try_again = (temp[n] == c) || try_again;
            }
            temp[i] = c;
        }
    }

    for (int player = 0; player < players; player++) {
        // hole cards according to distribution
        i = 5 + 2 * player;
        bool try_again = true;
        while (try_again == true) {
            try_again = false;
            // weighted-sample c1 and c2 at once
            // h is a number < 1325
            int h = weighted_randi(&weights[player][0], HOLE_CARDS);
            // i2h uses h and sets temp[i] to the 2 cards implied by h
            i2h(&temp[i], h);
            // reject collisions
            for (n = 0; n < i; n++) {
                try_again = (temp[n] == temp[i]) || (temp[n] == temp[i+1]) || try_again;
            }
        }
    }
    copy_cards(table, temp, 5);
    copy_cards(holes, temp + 5, 2 * players);
}
My problem? The weighted sampling algorithm is a factor of 10 slower. Speed is very important for my application.
Is there a way to improve the speed of my algorithm to something more reasonable? Am I doing something wrong in my implementation?
Thanks.
edit: I was asked about this function, which I should have posted, since it is key
inline int weighted_randi(double *w, int num_choices) {
    double r = fast_randd();
    double threshold = 0;
    int n;
    for (n = 0; n < num_choices; n++) {
        threshold += *w;
        if (r <= threshold) return n;
        w++;
    }
    // shouldn't get this far
    cerr << n << "\t" << threshold << "\t" << r << endl;
    assert(n < num_choices);
    return -1;
}
...and i2h() is basically just an array lookup.
Your collision-rejection loops are turning an O(n) algorithm into (I think) an O(n^2) operation.
There are two ways to select cards from a deck: shuffle and pop, or keep picking until the elements of the set are unique; you are doing the latter, which requires a considerable amount of backtracking.
I didn't look at the details of the code, just a quick scan.
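For the "shuffle and pop" route, a minimal sketch of a partial Fisher-Yates deal (names are mine; rand_below(k) is a hypothetical uniform generator over [0, k), which the question's fast_rand52() does not directly provide):

#include <algorithm>   // std::swap

// Deal 'count' distinct cards from a 52-card deck with a partial Fisher-Yates
// shuffle: no rejection, no collisions, O(count) work per deal.
void deal_cards(int out[], int count, int rand_below(int)) {
    int deck[52];
    for (int i = 0; i < 52; i++) deck[i] = i;
    for (int i = 0; i < count; i++) {
        int j = i + rand_below(52 - i);   // pick one of the remaining cards
        std::swap(deck[i], deck[j]);
        out[i] = deck[i];
    }
}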
You could gain some speed by replacing all the loops that check whether a card is taken with a bit mask, e.g. for a pool of 52 cards we prevent collisions like so:
DWORD dwMask[2] = {0}; //64 bits
//...
int nCard;

while(true)
{
    nCard = rand_52();
    if(!(dwMask[nCard >> 5] & 1 << (nCard & 31)))
    {
        dwMask[nCard >> 5] |= 1 << (nCard & 31);
        break;
    }
}
//...
My guess would be the memcpy(1326*sizeof(double)) within the retry-loop. It doesn't seem to change, so should it be copied each time?
Rather than tell you what the problem is, let me suggest how you can find it. Either 1) single-step it in the IDE, or 2) randomly halt it to see what it's doing.
That said, sampling by rejection, as you are doing, can take an unreasonably long time if you are rejecting most samples.
Your inner "try_again" for loop should stop as soon as it sets try_again to true - there's no point in doing more work after you know you need to try again.
for (n = 0; n < i && !try_again; n++) {
    try_again = (temp[n] == temp[i]) || (temp[n] == temp[i+1]);
}
The second question, about picking from a weighted set, also has an algorithmic replacement that should be less time complex. This is based on the principle that what is pre-computed does not need to be re-computed.
In an ordinary selection you have an integral number of equal bins, which makes picking a bin an O(1) operation. Your weighted_randi function has bins of real-valued length, so selection in your current version operates in O(n) time. Since you don't say (but do imply) that the vector of weights w is constant, I'll assume that it is.
You aren't interested in the width of the bins per se; you are interested in the locations of their edges, which you re-compute on every call to weighted_randi using the variable threshold. If w really is constant, pre-computing the list of edges (that is, the value of threshold for every *w) is your O(n) step, which need only be done once. If you put the results in a (naturally) ordered list, a binary search on all future calls yields O(log n) time complexity, with an increase in space of only one extra entry per weight.
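A minimal sketch of that idea (my own naming; it assumes the weights really are constant between calls and that the question's fast_randd() returns a uniform double in [0, 1)):

#include <algorithm>  // std::upper_bound
#include <vector>

double fast_randd();  // the question's uniform [0,1) generator (assumed available)

// Pre-compute the bin edges once: edges[n] = w[0] + ... + w[n].
std::vector<double> make_edges(const double *w, int num_choices) {
    std::vector<double> edges(num_choices);
    double threshold = 0;
    for (int n = 0; n < num_choices; n++) {
        threshold += w[n];
        edges[n] = threshold;
    }
    return edges;
}

// O(log n) weighted pick: binary-search for the first edge greater than r.
inline int weighted_randi_fast(const std::vector<double>& edges) {
    double r = fast_randd() * edges.back(); // scale in case the weights don't sum to exactly 1
    return (int)(std::upper_bound(edges.begin(), edges.end(), r) - edges.begin());
}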