What is the fastest way to find a prime number in a range? - c++

I have this code to find prime numbers:
void writePrimesToFile(int begin, int end, ofstream& file)
{
    bool isPrime = 0;
    for (int i = begin; i < end; i = i + 2)
    {
        isPrime = 1;
        for (int j = 2; j < i; j++)
            if (i % j == 0)
            {
                isPrime = 0;
                break;
            }
        if (isPrime)
            file << i << " \n";
    }
}
Is there a faster way to do it?
I tried googling a faster way, but it's all math and I don't understand how to turn it into code.

Is there a faster way to do it?
Yes. There are faster primality test algorithms.
What is the fastest way to find a prime number in a range?
No one knows. If someone knows, then that person is guarding a massively important secret. No one has been able to prove that any of the known techniques is the fastest possible way to test primality.
You might have asked: what is the fastest known way to find a prime number in a range?
The answer to that would be: it depends. The complexity of some algorithms grows asymptotically slower than that of others, but that is irrelevant if the input numbers are small. There are probabilistic methods that are very fast for some numbers, but they have problematic cases where they are slower than deterministic methods.
Your input numbers are small, because they are of type int and therefore have quite limited range. With small numbers, a simple algorithm may be faster than a more complex one. To find out which algorithm is fastest for your use case, you must benchmark them.
I recommend starting with Sieve of Eratosthenes since it is asymptotically faster than your naïve approach, but also easy to implement (pseudo code courtesy of wikipedia):
Input: an integer n > 1
Let A be an array of Boolean values, indexed by integers 2 to n,
initially all set to true.

for i = 2, 3, 4, ..., not exceeding √n:
    if A[i] is true:
        for j = i², i²+i, i²+2i, i²+3i, ..., not exceeding n:
            A[j] := false

Output: all i such that A[i] is true.
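A minimal C++ translation of that pseudocode might look like the sketch below. The name sievePrimesToFile is invented for illustration, and it writes every prime up to n rather than using the begin/end interface of the question.

#include <fstream>
#include <vector>

// Sketch: Sieve of Eratosthenes, writing every prime in [2, n] to `file`.
// Name and interface are illustrative, not from the original post.
void sievePrimesToFile(int n, std::ofstream& file)
{
    if (n < 2)
        return;
    std::vector<bool> isPrime(n + 1, true);       // isPrime[i] == true means "i might be prime"
    isPrime[0] = isPrime[1] = false;
    for (long long i = 2; i * i <= n; ++i)        // only up to sqrt(n)
        if (isPrime[i])
            for (long long j = i * i; j <= n; j += i)   // strike out i², i²+i, i²+2i, ...
                isPrime[j] = false;
    for (int i = 2; i <= n; ++i)
        if (isPrime[i])
            file << i << " \n";
}

To keep the original begin/end interface, you would sieve up to end and simply skip values below begin when writing them out.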

Related

Find 101 in prime numbers

I'm currently solving a problem which is pretty straightforward: I need to find all prime numbers up to N which contain 101 in them and count them.
Say if N is 1000, then the output should be 6, since there are 101, 1013, 1015, 5101, 6101 and 8101. I used the sieve algorithm to get all of the prime numbers up to N, though I don't know how I could solve it completely. I thought of std::find, but dropped that idea because the time complexity grows fast. I know I need to modify the sieve algorithm to fit my needs, but I can't find any patterns.
Any help would be appreciated.
Edit:
I'm using this algorithm:
vector<int> sieve;
vector<int> primes;
for (int i = 1; i < max + 1; ++i)
    sieve.push_back(i); // you'll learn more efficient ways to handle this later
sieve[0] = 0;
for (int i = 2; i < max + 1; ++i) { // there are lots of brace styles, this is mine
    if (sieve[i - 1] != 0) {
        primes.push_back(sieve[i - 1]);
        for (int j = 2 * sieve[i - 1]; j < max + 1; j += sieve[i - 1]) {
            sieve[j - 1] = 0;
        }
    }
}
Yes, checking every prime number for containing "101" is a waste of time. Generating all numbers containing 101 and checking whether they are prime is probably faster.
For generating the 101 numbers, let's look at the possible digit patterns, e.g. with 5 digits:
nn101
n101m
101mm
For each of these patterns, you get all the numbers by iterating n in an outer loop and m in an inner loop and doing the math to get the pattern's value (of course, you need not consider even m values, because the only even prime is 2). You are done when the value reaches N.
To check for being a prime, an easy way is to prepare a list of all primes up to M=sqrt(N), using a sieve if you like, and check whether your value is divisible by one of them.
This should run in O(N^1.5). Why? The number of patterns grows with O(log N), and each pattern yields about N/1000 candidates, so there are O(N log N) candidates overall. Each primality check divides by the primes up to M, of which there are O(M/log(M)) = O(sqrt(N)/log(sqrt(N))); building that prime list with a sieve costs only O(M log log M) and is negligible. Altogether that's O(N * log(N) * sqrt(N) / log(sqrt(N))), or simply O(N^1.5).
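A rough sketch of that pattern-generation idea is given below. countPrimesContaining101 and its helper are invented names, plain trial division stands in for the precomputed prime list described above, and a std::set removes candidates that contain 101 more than once.

#include <cstdint>
#include <set>

// Trial division; fine as a stand-in for the precomputed prime list in this sketch.
static bool isPrimeTrial(std::int64_t x)
{
    if (x < 2) return false;
    for (std::int64_t d = 2; d * d <= x; ++d)
        if (x % d == 0) return false;
    return true;
}

// Count primes <= N whose decimal representation contains "101".
// Candidates are built as hi * 10^(p+3) + 101 * 10^p + lo; the set removes
// numbers that are generated twice because they contain "101" more than once.
std::int64_t countPrimesContaining101(std::int64_t N)
{
    std::set<std::int64_t> candidates;
    for (std::int64_t lowPow = 1; 101 * lowPow <= N; lowPow *= 10)    // 10^p
        for (std::int64_t lo = 0; lo < lowPow; ++lo)                  // digits right of "101"
            for (std::int64_t hi = 0; ; ++hi)                         // digits left of "101"
            {
                std::int64_t value = hi * lowPow * 1000 + 101 * lowPow + lo;
                if (value > N) break;
                candidates.insert(value);
            }
    std::int64_t count = 0;
    for (std::int64_t v : candidates)
        if (isPrimeTrial(v)) ++count;
    return count;
}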
You don't need to modify your prime generation algorithm. When processing the primes you get from your generating algorithm, you just have to check whether each prime satisfies your condition:
e.g.
// p is the prime number
if (contains(p))
{
    // print or write to file or whatever
}
with:
bool contains(int);
a function which checks your condition
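One possible (purely illustrative) way to write that check:

#include <string>

// Returns true if the decimal representation of p contains the digit sequence "101".
bool contains(int p)
{
    return std::to_string(p).find("101") != std::string::npos;
}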

Optimize counting sort?

Given that the input will be N numbers from 0 to N (with duplicates), how I can optimize the code below for both small and big arrays:
void countingsort(int* input, int array_size)
{
    int max_element = array_size; // because no number will be > N
    int *CountArr = new int[max_element + 1]();
    for (int i = 0; i < array_size; i++)
        CountArr[input[i]]++;
    for (int j = 0, outputindex = 0; j <= max_element; j++)
        while (CountArr[j]--)
            input[outputindex++] = j;
    delete[] CountArr;
}
Having a stable sort is not a requirement.
edit: In case it's not clear, I am talking about optimizing the algorithm.
IMHO there's nothing wrong here. I highly recommend this approach when max_element is small and the numbers being sorted are non-sparse (i.e. consecutive, with no gaps) and greater than or equal to zero.
As a small tweak, I'd replace new/delete and just declare a fixed-size array on the stack, e.g. 256 entries for max_element.
int CountArr[256] = { }; // Declare and initialize with zeroes
As you bend these rules (sparse data, negative numbers), you'll start struggling with this approach. You will need to find a suitable hashing function to remap the numbers into your counting array, and the more complex the hashing becomes, the more the benefit over well-established sorting algorithms diminishes.
In terms of complexity this cannot be beaten. It's O(N) and beats standard O(N log N) sorting by exploiting the extra knowledge that 0 <= x <= N. You cannot go below O(N) because you need at least one sweep through the input array.
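Applied to the function from the question, that tweak might look roughly like this. The fixed bound of 256 is only an illustrative assumption; it must be at least as large as the biggest value that can occur.

// Sketch of the suggested tweak: counting sort with a fixed, stack-allocated
// counter array. The bound 256 is assumed for illustration only.
void countingsort_fixed(int* input, int array_size)
{
    int CountArr[256] = { };                 // declared and zero-initialized, no new/delete
    for (int i = 0; i < array_size; i++)
        CountArr[input[i]]++;
    for (int j = 0, outputindex = 0; j < 256; j++)
        while (CountArr[j]--)
            input[outputindex++] = j;
}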

find all unique triplet in given array with sum zero with in minimum execution time [duplicate]

This question already has an answer here:
Finding three elements that sum to K
I've got all unique triplets from the code below, but I want to reduce its time complexity. It consists of three for loops. So my question is: is it possible to do this with fewer loops, so that the time complexity decreases?
Thanks in advance. Let me know.
#include <cstdlib>
#include <iostream>
using namespace std;

void Triplet(int[], int, int);

void Triplet(int array[], int n, int sum)
{
    // Fix the first element and find the other two
    for (int i = 0; i < n - 2; i++)
    {
        // Fix the second element and find the third
        for (int j = i + 1; j < n - 1; j++)
        {
            // Try every candidate for the third element
            for (int k = j + 1; k < n; k++)
                if (array[i] + array[j] + array[k] == sum)
                    cout << "Result :\t" << array[i] << " + " << array[j] << " + "
                         << array[k] << " = " << sum << endl;
        }
    }
}

int main()
{
    int A[] = {-10, -20, 30, -5, 25, 15, -2, 12};
    int sum = 0;
    int arr_size = sizeof(A) / sizeof(A[0]);
    cout << "********************O(N^3) Time Complexity*****************************" << endl;
    Triplet(A, arr_size, sum);
    return 0;
}
I'm not a wiz at algorithms, but one way I can see to make your program better is to do a binary search in your third loop for the value that would complete the sum together with the two previous values. This, however, requires the data to be sorted beforehand, which obviously has some overhead depending on your sorting algorithm (std::sort has an average time complexity of O(n log n)).
You can always make use of parallel programming if you want and run your program on multiple threads, but this can get very messy.
Aside from those suggestions, it is hard to think of a better way.
You can get a slightly better complexity of O(n²·log n) easily enough if you first sort the list and then do a binary search for the third value. The sort takes O(n log n), and the triplet search takes O(n²) to enumerate all the possible pairs times O(log n) for the binary search of the third value, for a total of O(n log n + n² log n), or simply O(n²·log n).
There might be some other fancy things with binary search you can do to reduce that, but I can't see easily (at 4:00 am) anything better than that.
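A sketch of the sort-then-binary-search idea described above; it assumes the input values are distinct, as they are in the question's array, and it sorts the array in place.

#include <algorithm>
#include <iostream>

// Sketch: O(n² log n) triplet search. Sort once, then for every pair
// binary-search the tail of the array for the value that completes the sum.
void TripletSorted(int array[], int n, int sum)
{
    std::sort(array, array + n);                       // O(n log n)
    for (int i = 0; i < n - 2; i++)
        for (int j = i + 1; j < n - 1; j++)
        {
            int needed = sum - array[i] - array[j];
            // search only past j so the same element is not reused
            if (std::binary_search(array + j + 1, array + n, needed))
                std::cout << array[i] << " + " << array[j]
                          << " + " << needed << " = " << sum << std::endl;
        }
}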
When a triplet sums to zero, the third number is completely determined by the first two. Thus, you're only free to choose two of the numbers in each triple. With n possible numbers this yields at most n² triplets.
I suspect, but I'm not sure, that that's the best complexity you can do. It's not clear to me whether the number of sum-to-zero triplets, for a random sequence of signed integers, will necessarily be on the order of n². If it's less (not likely, but if), then it might be possible to do better.
Anyway, a simple way to do this with complexity on the order of n² is to first scan through the numbers, storing them in a data structure with constant-time lookup (the C++ standard library provides such). Then scan through the array as your posted code does, except vary only the first and second number of the triple. For the third number, look it up in the constant-time lookup structure already established: if it's there then you have a potential new triple, otherwise not.
For each zero-sum triple thus found, put it also in a constant time look-up structure.
This ensures the uniqueness criterion at no extra complexity.
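A rough sketch of that scheme: std::unordered_set provides the constant-time value lookups, while a std::set of normalized triples (used here for simplicity instead of a second hash structure) keeps the output unique. It assumes distinct input values, as in the question.

#include <algorithm>
#include <array>
#include <iostream>
#include <set>
#include <unordered_set>

// Sketch: O(n²) expected time. Values go into a hash set for O(1) lookup;
// found triplets are normalized and stored in a set so each is printed once.
void TripletHashed(const int array[], int n, int sum)
{
    std::unordered_set<int> values(array, array + n);
    std::set<std::array<int, 3>> seen;
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
        {
            int third = sum - array[i] - array[j];
            if (third == array[i] || third == array[j])
                continue;                               // would reuse an element
            if (values.count(third))
            {
                std::array<int, 3> t = { array[i], array[j], third };
                std::sort(t.begin(), t.end());          // normalize the triple
                if (seen.insert(t).second)
                    std::cout << t[0] << " + " << t[1] << " + " << t[2]
                              << " = " << sum << std::endl;
            }
        }
}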
In the worst case, there are C(n, 3) triplets with sum zero in an array of size n. C(n, 3) is in Θ(n³), it takes Θ(n³) time just to print the triplets. In general, you cannot get better than cubic complexity.

Non-standard sorting algorithm for random unique integers

I have an array of at least 2000 random unique integers, each in range 0 < n < 65000.
I have to sort it and then get the index of a random value in the array. Each of these operations have to be as fast as possible. For searching the binary-search seems to serve well.
For sorting I used the standard quick sorting algorithm (qsort), but I was told that with the given information the standard sorting algorithms will not be the most efficient. So the question is simple - what would be the most efficient way to sort the array, with the given information? Totally puzzled by this.
I don't know why the person who told you that would be so perversely cryptic, but indeed qsort is not the most efficient way to sort integers (or generally anything) in C++. Use std::sort instead.
It's possible that you can improve on your implementation's std::sort for the stated special case (2000 distinct random integers in the range 0-65k), but you're unlikely to do a lot better and it almost certainly won't be worth the effort. The things I can think of that might help:
use a quicksort, but with a different pivot selection or a different threshold for switching to insertion sort from what your implementation of sort uses. This is basically tinkering.
use a parallel sort of some kind. 2000 elements is so small that I suspect the time to create additional threads will immediately kill any hope of a performance improvement. But if you're doing a lot of sorts then you can average the cost of creating the threads across all of them, and only worry about the overhead of thread synchronization rather than thread creation.
That said, if you generate and sort the array, then look up just one value in it, and then generate a new array, you would be wasting effort by sorting the whole array each time. You can just run across the array counting the number of values smaller than your target value: this count is the index it would have. Use std::count_if or a short loop.
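A minimal sketch of that counting idea (the values are distinct, as stated, so the count of smaller elements is exactly the index the target would have after sorting):

#include <algorithm>

// Index the target would have in sorted order == number of smaller elements.
int indexInSortedOrder(const int* a, int n, int target)
{
    return static_cast<int>(std::count_if(a, a + n,
        [target](int x) { return x < target; }));
}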
Each of these operations have to be as fast as possible.
That is not a legitimate software engineering criterion. Almost anything can be made a minuscule bit faster with enough months or years of engineering effort -- nothing complex is ever "as fast as possible", and even if it was you wouldn't be able to prove that it cannot be faster, and even if you could there would be new hardware out there somewhere or soon to be invented for which the fastest solution is different and better. Unless you intend to spend your whole life on this task and ultimately fail, get a more realistic goal ;-)
For sorting uniformly distributed random integers, radix sort is typically the fastest algorithm; it can be faster than quicksort by a factor of 2 or more. However, it may be hard to find an optimized implementation of it, and quicksort is much more ubiquitous. Quicksort can have very bad worst-case performance, like O(N^2), so if worst-case performance is important you have to look elsewhere, maybe at introsort, which is what std::sort in C++ is typically based on.
For array lookup, a hash table is by far the fastest method. If you don't want yet another data structure, you can always pick binary search. If you have uniformly distributed numbers, interpolation search is probably the most effective method (best average performance).
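For illustration, a minimal LSD radix sort for the stated range (values below 65000, so two 8-bit passes suffice); radixSort16 is an invented name, and a tuned implementation would look quite different.

#include <cstddef>
#include <vector>

// Sketch: LSD radix sort with two 8-bit passes, enough for values in [0, 65535].
void radixSort16(std::vector<int>& a)
{
    std::vector<int> tmp(a.size());
    for (int shift = 0; shift < 16; shift += 8)
    {
        std::size_t count[257] = { };                // counting pass for this byte
        for (int x : a)
            count[((x >> shift) & 0xFF) + 1]++;
        for (int d = 0; d < 256; d++)                // prefix sums give bucket start positions
            count[d + 1] += count[d];
        for (int x : a)                              // stable scatter into tmp
            tmp[count[(x >> shift) & 0xFF]++] = x;
        a.swap(tmp);                                 // two passes, so the result ends up in a
    }
}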
Quicksort's complexity is O(n·log(n)), where n = 2000 in your case, and log₂(2000) ≈ 10.97.
You can sort in O(n) using one of these algorithms:
Counting sort
Radix sort
Bucket sort
I've compared std::sort() to counting sort for N = 100000000:
#include <iostream>
#include <vector>
#include <algorithm>
#include <cstdlib>   // rand()
#include <time.h>
#include <string.h>
using namespace std;
void countSort(int t[], int o[], int c[], int n, int k)
{
    // Count the number of each number in t[] and place that value into c[].
    for (int i = 0; i < n; i++)
        c[t[i]]++;
    // Place the number of elements less than each value i into c[].
    for (int i = 1; i <= k; i++)
        c[i] += c[i - 1];
    // Place each element of t[] into its correct sorted position in the output o[].
    for (int i = n - 1; i >= 0; i--)
    {
        o[c[t[i]] - 1] = t[i];
        --c[t[i]];
    }
}

void init(int t[], int n, int max)
{
    for (int i = 0; i < n; i++)
        t[i] = rand() % max;
}

double getSeconds(clock_t start)
{
    return (double) (clock() - start) / CLOCKS_PER_SEC;
}

void print(int t[], int n)
{
    for (int i = 0; i < n; i++)
        cout << t[i] << " ";
    cout << endl;
}

int main()
{
    const int N = 100000000;
    const int MAX = 65000;
    int *t = new int[N];
    init(t, N, MAX);
    //print(t, N);

    clock_t start = clock();
    sort(t, t + N);
    cout << "std::sort " << getSeconds(start) << endl;
    //print(t, N);

    init(t, N, MAX);
    //print(t, N);
    // o[] holds the sorted output.
    int *o = new int[N];
    // c[] holds counters.
    int *c = new int[MAX + 1];
    // Set counters to zero.
    memset(c, 0, (MAX + 1) * sizeof(*c));

    start = clock();
    countSort(t, o, c, N, MAX);
    cout << "countSort " << getSeconds(start) << endl;
    //print(o, N);

    delete[] t;
    delete[] o;
    delete[] c;
    return 0;
}
Results (in seconds):
std::sort 28.6
countSort 10.97
For N = 2000 both algorithms give 0 time.
Standard sorting algorithms, like standard nearly anything, are very good general-purpose solutions. If you know nothing about your data, if it truly consists of "random unique integers", then you might as well go with one of the standard implementations.
On the other hand, most programming problems appear in a context that tells something about data, and the additional info usually leads to more efficient problem-specific solutions.
For example, does your data appear all at once or in chunks? If it comes piecemeal you may speed things up by interleaving incremental sorting, such as dual-pivot quicksort, with data acquisition.
Since the domain of your numbers is so small, you can create an array of 65000 entries, set the index of the numbers you see to one, and then collect all numbers that are set to one as your sorted array. This will be exactly 67000 (assuming initialization of array is without cost) iterations in total.
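A minimal sketch of that flag-array idea, assuming (as stated) that the values are unique and below 65000; flagSort is an invented name.

#include <vector>

// Sketch: sort ~2000 unique values in [0, 65000) by marking presence flags.
// Works only because the values are known to be unique and small.
std::vector<int> flagSort(const std::vector<int>& input)
{
    std::vector<bool> present(65000, false);
    for (int v : input)
        present[v] = true;                 // one pass over the input
    std::vector<int> sorted;
    sorted.reserve(input.size());
    for (int v = 0; v < 65000; ++v)        // one pass over the flag array
        if (present[v])
            sorted.push_back(v);
    return sorted;
}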
Since the lists contain 2000 entries, O(n*log(n)) will probably be faster. I can think of no other O(n) algorithm for this, so I suppose you are better off with a general purpose algorithm.

Optimizing my code for finding the factors of a given integer

Here is my code, but I'd like to optimize it. I don't like the idea of testing all the numbers up to the square root of n, considering that one could be faced with finding the factors of a large number. Your answers would be of great help. Thanks in advance.
unsigned int* factor(unsigned int n)
{
    unsigned int tab[40];
    int dim = 0;
    for (int i = 2; i <= (int)sqrt(n); ++i)
    {
        while (n % i == 0)
        {
            tab[dim++] = i;
            n /= i;
        }
    }
    if (n > 1)
        tab[dim++] = n;
    return tab;
}
Here's a suggestion on how to do this in 'proper' c++ (since you tagged as c++).
PS. Almost forgot to mention: I optimized the call to sqrt away :)
See it live on http://liveworkspace.org/code/6e2fcc2f7956fafbf637b54be2db014a
#include <vector>
#include <iostream>
#include <iterator>
#include <algorithm>

typedef unsigned int uint;

std::vector<uint> factor(uint n)
{
    std::vector<uint> tab;
    for (unsigned long i = 2; i * i <= n; ++i)
    {
        while (n % i == 0)
        {
            tab.push_back(i);
            n /= i;
        }
    }
    if (n > 1)
        tab.push_back(n);
    return tab;
}

void test(uint x)
{
    auto v = factor(x);
    std::cout << x << ":\t";
    std::copy(v.begin(), v.end(), std::ostream_iterator<uint>(std::cout, ";"));
    std::cout << std::endl;
}

int main(int argc, const char *argv[])
{
    test(1);
    test(2);
    test(4);
    test(43);
    test(47);
    test(9997);
}
Output
1:
2: 2;
4: 2;2;
43: 43;
47: 47;
9997: 13;769;
There's a simple change that will cut the run time somewhat: factor out all the 2's, then only check odd numbers.
If you use
... i*i <= n; ...
It may run much faster than i <= sqrt(n)
By the way, you should try to handle factors of negative n or at least be sure you never pass a neg number
I'm afraid you cannot. There is no known method on the planet that can factorize large integers in polynomial time. However, there are some methods that can speed up your program slightly (not significantly). Search Wikipedia for more references: http://en.wikipedia.org/wiki/Integer_factorization
As seen from your solution, you basically test every candidate divisor (the while (n % i == 0) condition works like that). Especially for large numbers, you could compute the prime numbers beforehand and divide only by those. The prime number calculation could be done using the Sieve of Eratosthenes or some other efficient method.
unsigned int* factor(unsigned int n)
If unsigned int is the typical 32-bit type, the numbers are too small for any of the more advanced algorithms to pay off. The usual enhancements for the trial division are of course worthwhile.
If you move the division by 2 out of the loop and divide only by odd numbers in the loop, as mentioned by Pete Becker, you essentially halve the number of divisions needed to factor the input number, and thus speed up the function by a factor of very nearly 2.
If you carry that one step further and also eliminate the multiples of 3 from the divisors in the loop, you reduce the number of divisions and hence increase the speed by a factor close to 3 (on average; most numbers don't have any large prime factors, but are divisible by 2 or by 3, and for those the speedup is much smaller; but those numbers are quick to factor anyway. If you factor a longer range of numbers, the bulk of the time is spent factoring the few numbers with large prime divisors).
// if your compiler doesn't transform that to bit-operations, do it yourself
while (n % 2 == 0) {
    tab[dim++] = 2;
    n /= 2;
}
while (n % 3 == 0) {
    tab[dim++] = 3;
    n /= 3;
}
// d runs through 5, 7, 11, 13, 17, 19, ... skipping multiples of 2 and 3
for (int d = 5, s = 2; d * d <= n; d += s, s = 6 - s) {
    while (n % d == 0) {
        tab[dim++] = d;
        n /= d;
    }
}
If you're calling that function really often, it would be worthwhile to precompute the 6542 primes not exceeding 65535, store them in a static array, and divide only by the primes to eliminate all divisions that are a priori guaranteed to not find a divisor.
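A sketch of that precomputed-primes variant; smallPrimes and factorWithPrimes are illustrative names, and the factors are returned in a std::vector rather than the raw array of the question.

#include <vector>

// Sketch: trial division by precomputed primes only.
// smallPrimes() builds the 6542 primes <= 65535 once with a simple sieve;
// factorWithPrimes() then divides only by those primes.
static const std::vector<unsigned>& smallPrimes()
{
    static const std::vector<unsigned> primes = []() -> std::vector<unsigned> {
        std::vector<bool> composite(65536, false);
        std::vector<unsigned> p;
        for (unsigned i = 2; i < 65536; ++i)
            if (!composite[i]) {
                p.push_back(i);
                for (unsigned j = i * i; j < 65536; j += i)
                    composite[j] = true;
            }
        return p;
    }();
    return primes;
}

std::vector<unsigned> factorWithPrimes(unsigned n)
{
    std::vector<unsigned> factors;
    for (unsigned p : smallPrimes()) {
        if ((unsigned long long)p * p > n)
            break;
        while (n % p == 0) {
            factors.push_back(p);
            n /= p;
        }
    }
    if (n > 1)
        factors.push_back(n);          // whatever remains is prime
    return factors;
}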
If unsigned int happens to be larger than 32 bits, then using one of the more advanced algorithms would be profitable. You should still begin with trial divisions to find the small prime factors (whether small should mean <= 1000, <= 10000, <= 100000 or perhaps <= 1000000 would need to be tested, my gut feeling says one of the smaller values would be better on average). If after the trial division phase the factorisation is not yet complete, check whether the remaining factor is prime using e.g. a deterministic (for the range in question) variant of the Miller-Rabin test. If it's not, search a factor using your favourite advanced algorithm. For 64 bit numbers, I'd recommend Pollard's rho algorithm or an elliptic curve factorisation. Pollard's rho algorithm is easier to implement and for numbers of that magnitude finds factors in comparable time, so that's my first recommendation.
int is way too small to run into any performance problems here. I just tried to measure the time of your algorithm with Boost but couldn't get any useful output (too fast). So you shouldn't worry about ints at all.
If you use i*i, I was able to factor 1.000.000 9-digit integers in 15.097 seconds. It's good to optimize an algorithm, but instead of "wasting" time (it depends on your situation) it's important to consider whether a small improvement really is worth the effort. Sometimes you have to ask yourself if you really need to be able to process 1.000.000 ints in 10 seconds, or if 15 is fine as well.