You are given a sorted array of numbers, followed by a number of queries. For each query, if the queried number is present in the array, print its position; otherwise print -1.
Input
First line contains N and Q, the number of elements in the array and the number of queries to follow.
Second line contains N numbers, the elements of the array, with -10^9 <= ai <= 10^9, 0 < N <= 10^5, 0 < Q <= 5*10^5.
Reference: https://www.spoj.com/problems/BSEARCH1/
My code works fine in the terminal, but it exceeds the time limit on the online judge even though it takes O(NQ) time.
Here is my code:
#include <iostream>

long long binarySearch(long long arr[], long long l, long long r, long long x) {
    long long mid;
    if (r >= l) {
        mid = (l + r) / 2;
        if (arr[mid] == x) {
            if (arr[mid] == arr[mid - 1]) {
                while (arr[mid] == arr[mid - 1]) {
                    --mid;
                }
                return mid;
            }
            else {
                return mid;
            }
        }
        if (arr[mid] > x) {
            return binarySearch(arr, l, mid - 1, x);
        }
        if (arr[mid] < x) {
            return binarySearch(arr, mid + 1, r, x);
        }
    }
    return -1;
}

int main() {
    long long n, q;
    std::cin >> n >> q;
    long long array[n];
    for (long long i = 0; i < n; ++i) {
        std::cin >> array[i];
    }
    long long x;
    long long arr2[q];
    for (long long i = 0; i < q; ++i) {
        std::cin >> x;
        std::cout << binarySearch(array, 0, n - 1, x) << "\n";
    }
    return 0;
}
You don't need to reinvent the wheel. You can use the C++ standard library algorithm std::lower_bound. It does a binary search for the first location where the value could be.
You can rewrite your function as follows:
#include <algorithm>

long long binarySearch(long long arr[], long long n, long long x)
{
    // O(log n) binary search for the first element not less than x
    long long *itr = std::lower_bound(arr, arr + n, x);
    // If itr is the array end or the location doesn't contain x
    if (itr == arr + n || *itr != x) {
        return -1;
    }
    // Compute the index by pointer arithmetic
    return itr - arr;
}
I've removed one unnecessary function parameter; just pass the array size. By the way, you don't need long long for this problem. You might as well use int and save some memory.
If you still have timeout problems it might be slow input/output. Try adding the next two lines at the beginning of your main().
std::ios_base::sync_with_stdio(false); // possibly faster I/O buffering
std::cin.tie(NULL); // don't flush output stream before doing input
I think what you are trying to do is print the first position of the found element. However, if you have an array whose elements are all identical, e.g. 1,1,1,1,1,1,1, then you are basically doing O(n) work for a single query, which results in O(nq) in the worst case, where n is the length of the array and q is the number of queries.
Suggestions:
I think what you need to do is get rid of the duplicates:
Sort the array.
Create another array of pairs (e.g. std::vector<std::pair<int,int>>), with (element, first-position) as the pair.
Then do the binary search on that array; see the sketch below.
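A minimal sketch of that idea (the helper names and the pair layout are illustrative, not from the question):
#include <algorithm>
#include <utility>
#include <vector>

// Build (value, first-position) pairs from the already-sorted input,
// keeping only the first occurrence of each value.
std::vector<std::pair<long long, long long>> dedup(const std::vector<long long>& a) {
    std::vector<std::pair<long long, long long>> u;
    for (long long i = 0; i < (long long)a.size(); ++i)
        if (u.empty() || u.back().first != a[i])
            u.push_back({a[i], i});
    return u;
}

// O(log n) lookup: position of the first occurrence of x, or -1.
long long firstPos(const std::vector<std::pair<long long, long long>>& u, long long x) {
    auto it = std::lower_bound(u.begin(), u.end(), std::make_pair(x, -1LL));
    if (it == u.end() || it->first != x) return -1;
    return it->second;
}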
I think the main problem is that your implementation of binary search is recursive.
I have no idea whether your local input data is big enough to show that the implementation is unsuitable. But if you change the implementation from recursive to iterative, the performance of the code will improve; a sketch is below.
For the reasons why an iterative implementation is faster than a recursive one, check this.
Note that with a compiler that performs tail-call optimization, the slowdown of the recursive implementation can be eliminated.
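For reference, a minimal iterative version of the same search, returning the index of the first occurrence:
long long binarySearchIter(long long arr[], long long l, long long r, long long x) {
    long long result = -1;
    while (l <= r) {
        long long mid = l + (r - l) / 2;
        if (arr[mid] == x) {
            result = mid;   // remember the match...
            r = mid - 1;    // ...and keep looking to the left for the first occurrence
        } else if (arr[mid] < x) {
            l = mid + 1;
        } else {
            r = mid - 1;
        }
    }
    return result;
}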
Edit: Seems like the error is simply 9,999,999,999,999 being too big a number for an array.
I get this error "program received signal sigsegv segmentation fault" on my code.
My code does integer factorisation, and the exercise can be seen on codeabbey here. Basically, I will receive input like 1000 and output it as a product of its prime factors, 2*2*2*5*5*5 in this case.
I do the above by having a vector of prime numbers which I generate using the Sieve of Eratosthenes method.
According to the website, the number of digits of the input will not exceed 13, hence my highest number is 9,999,999,999,999. Below is my code.
#include <iostream>
#include <vector>
#include <cstring>

unsigned long long int MAX_LIMIT = 9999999999999;

std::vector<unsigned long long int> intFactorisation(unsigned long long int num) {
    std::vector<unsigned long long int> answers;
    static std::vector<unsigned long long int> primes;
    if (primes.empty()) { // generate prime numbers using sieve method
        bool *arr = new bool[MAX_LIMIT];
        memset(arr, true, MAX_LIMIT);
        for (unsigned long long int x = 2; x*x < MAX_LIMIT; x++) {
            if (arr[x] == true) {
                for (unsigned long long int y = x*x; y < MAX_LIMIT; y += x) {
                    arr[y] = false; // THIS LINE ALWAYS HAS AN ERROR!!!
                }
            }
        }
        for (unsigned long long int x = 2; x <= MAX_LIMIT; x++) {
            if (arr[x]) {
                primes.push_back(x);
            }
        }
    }
    std::vector<unsigned long long int>::iterator it = primes.begin(); // start the factorisation
    while (it != primes.end()) {
        if (num % *it == 0) {
            answers.push_back(*it);
            num /= *it;
        }
        else {
            it++;
        }
        if (num == 1) {
            break;
        }
    }
    return answers;
}

int main() {
    int maxi;
    std::cin >> maxi;
    int out[maxi];
    for (int x = 0; x < maxi; x++) {
        std::cin >> out[x];
    }
    for (auto it : out) {
        std::vector<unsigned long long int> temp = intFactorisation(it);
        for (std::vector<unsigned long long int>::iterator it = temp.begin();
             it != temp.end(); it++) {
            if (it == temp.end() - 1) {
                std::cout << *it << " ";
            }
            else {
                std::cout << *it << "*";
            }
        }
    }
}
However, for some reason the program will always terminate at arr[y] = false in the function intFactorisation. I will see a notification giving the segmentation fault message on the bottom left of my screen while using CodeBlocks.
I already used 'new' on my ridiculously large array, so the memory should be on the heap. I have tried using a smaller MAX_LIMIT such as 100,000 and my function works. Does anyone know why?
Also, I am wondering why I don't need to dereference my pointer arr. For example, arr[y] = false works, but *arr[y] or (*arr)[y] doesn't. Hopefully this can be clarified too.
Thanks for reading this and I appreciate any help.
There are two types of issues in the posted code: memory management and the algorithm.
Resource acquisition
The program exhibits three kinds of memory allocation:
Variable length array. In main, out is declared as int out[maxi], maxi being a variable, not a compile-time constant. This is a C99 feature; it has never been part of any C++ standard, even if some compilers offer it as an extension.
bool *arr = new bool[MAX_LIMIT];. Modern guidelines suggest avoiding bare new to allocate memory, preferring smart pointers, standard containers and, in general, the RAII idiom, but that's not even the major problem here. MAX_LIMIT is way too big not to cause a std::bad_alloc exception, and delete is never called.
In the same function there's also a loop which will end up accessing the (unlikely to be) allocated memory out of bounds: for (unsigned long long int x = 2; x <= MAX_LIMIT; x++) { if (arr[x]) {. x will eventually become MAX_LIMIT, but you'll run out of memory well before that.
std::vector. That would be the way to go, if only the program didn't try to fill the vector primes with all the primes up to MAX_LIMIT, which is 10^13 - 1, or almost 80 TB assuming a 64-bit type.
Algorithm
The idea is to first calculate all the possible primes and then, for each input number, check whether any of them is a factor. The problem is that the maximum possible number is really big, but the good news is that you don't need to calculate and store all the primes up to that number, just up to its square root.
Imagine having tried all the primes up to that square root (let's call it S) and having divided the original number by every factor found. If there's still a remainder greater than 1, that value isn't divisible by any of those primes and is less than or equal to the original number. It has to be a prime itself: all possible factors <= S have already been ruled out, and so has any hypothetical factor > S (what would the result of dividing by it be? Another factor smaller than S, which has already been tested).
From a design point of view, I'd also address how the primes are calculated and stored in the OP's code. The factorization function is basically written as follows.
unsigned long long int MAX_LIMIT = 9999999999999;

std::vector<unsigned long long int> intFactorisation (unsigned long long int num)
{
    static std::vector<unsigned long long int> primes;
    // ^^^^^^ ^
    if (primes.empty())
    {
        // generate prime numbers up to MAX_LIMIT using sieve method
    }
    // Use the primes...
}
The static vector could have been initialized using a separate function and also declared const, but given the close relation with the factorization function it could be better to wrap that data and functionality into a class responsible for the correct allocation and initialization of the resources.
Since the introduction of lambdas in the language, we can avoid most of the boilerplate associated with a normal functor class, so that the factorization function might be constructed as a stateful lambda returned by the following:
#include <cmath>
#include <cstdint>
#include <vector>

auto make_factorizer(uint64_t max_value)
{
    uint64_t sqrt_of_max = std::ceil(std::sqrt(max_value));
    // Assuming we have a function returning a vector of primes.
    // The returned lambda will accept the number to be factorized and
    // a reference to a vector where the factors will be stored.
    return [primes = all_primes_up_to(sqrt_of_max)](uint64_t number,
                                                    std::vector<uint64_t>& factors)
    {
        uint64_t n{number};
        factors.clear();
        // Test against all the known primes, in ascending order.
        for (auto prime : primes)
        {
            // You can stop earlier, if prime >= sqrt(n).
            if (n / prime <= prime)
                break;
            // Keep "consuming" the number with the same prime,
            // a factor can have a multiplicity greater than one.
            while (n % prime == 0)
            {
                n /= prime;
                factors.push_back(prime);
            }
        }
        // If n == 1, we have already found and stored all the factors.
        // If we have run out of primes or broken out of the loop,
        // either what remains is a prime or number was <= 1.
        if (n != 1 || n == number)
        {
            factors.push_back(n);
        }
    };
}
A live implementation is tested here.
A simpler solution could be to find the prime factors on the fly rather than listing all prime numbers up front and checking whether each is a factor of the given number.
Here is a function that finds the factors without allocating a large amount of memory:
std::vector<unsigned long long int> intFactorisation(unsigned long long int num) {
    std::vector<unsigned long long int> answers{};
    unsigned long long int Current_Prime = 2;
    bool found;
    while (num != 1 && num != 0)
    {
        // push the factor repeatedly until num is no longer divisible by it
        // e.g. for input 12 push 2,2 and exit this inner loop
        while (num % Current_Prime == 0) {
            answers.push_back(Current_Prime);
            num /= Current_Prime;
        }
        // find the next prime factor
        while (Current_Prime <= num) {
            Current_Prime++;
            found = true;
            for (unsigned long long int x = 2; x*x <= Current_Prime; x++)
            {
                if (Current_Prime % x == 0) {
                    found = false;
                    break;
                }
            }
            if (found == true)
                break;
        }
    }
    return answers;
}
I am working on the 2-sum problem, where for each target t I search for t - x (call it y) to check whether there are values x and y in the array with x + y = t.
t ranges over all values from -10000 to 10000 inclusive. I implemented an n log n solution because I did not know how to use a hash table (and most examples I see are for characters, not integers). My n log n solution is to use quicksort to sort the numbers, then binary search for t - x.
I believe my problem is that I am currently also counting duplicates. For example, in the array {1,2,3,4,5}, if t were 5, both 2 + 3 and 1 + 4 equal five, but that should be counted only once, not twice. In other words, I need to count only distinct target sums t. I believe that is what is wrong with my code. Supposedly the condition x != y should make it distinct, although I do not understand how, and even when implemented it still gives me the wrong answer.
Here is the link for the data file with test cases:
http://bit.ly/1JcLojP
The answer for 100 is 42, for 1000 it is 486, for 10000 it is 496, and for 1000000 it is 519. My output is 84, 961, and 1009, and I did not test 1 million.
For my code, you can assume binary search and quicksort are properly implemented. Quicksort was supposed to return how many times it compared things; however, I never figured out how to return two things (the comparison count and the index).
#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <fstream>
#include <string>
#include <ctime>
#include <cstdlib>
#include <cmath>
#include <cctype>
using namespace std;

long long binary_search(long long array[], long long first, long long last, long long search_key)
{
    long long index;
    if (first > last)
        index = -1;
    else
    {
        long long mid = (first + last) / 2;
        if (search_key == array[mid])
            index = mid;
        else if (search_key < array[mid])
            index = binary_search(array, first, mid - 1, search_key);
        else
            index = binary_search(array, mid + 1, last, search_key);
    } // end if
    return index;
} // end binarySearch

long long partition(long long arr[], long long l, long long h)
{
    long long i;
    long long p;
    long long firsthigh;
    p = h;
    firsthigh = l;
    long long temporary = 0;
    for (i = l; i <= h; i++)
    {
        if (arr[i] < arr[p])
        {
            long long temp2 = 0;
            temp2 = arr[i];
            arr[i] = arr[firsthigh];
            arr[firsthigh] = temp2;
            firsthigh++;
        }
    }
    temporary = arr[p];
    arr[p] = arr[firsthigh];
    arr[firsthigh] = temporary;
    return firsthigh;
}

long long quicksort(long long arr[], long long l, long long h)
{
    long long p; /* index of partition */
    if ((h - l) > 0)
    {
        p = partition(arr, l, h);
        quicksort(arr, l, p - 1);
        quicksort(arr, p + 1, h);
    }
    if (h == l)
        return 1;
    else
        return 0;
}

int main(int argc, const char *argv[])
{
    long long array[1000000] = {0};
    long long t;
    long long count = 0;
    ifstream inData;
    inData.open("/Users/SeanYeh/downloads/100.txt");
    cout << "part 1" << endl;
    for (long long i = 0; i < 100; i++)
    {
        inData >> array[i];
        //cout << array[i] << endl;
    }
    inData.close();
    cout << "part 2" << endl;
    quicksort(array, 0, 100);
    cout << "part 3" << endl;
    for (t = 10000; t >= -10000; t--)
    {
        for (int x = 0; x < 100; x++)
        {
            long long exists = binary_search(array, 0, 100, t - array[x]);
            if (exists >= 0)
            {
                count++;
            }
        }
    }
    cout << "The value of count is: " << count << endl;
}
To avoid duplicates you need to change your binary search range from [0,n] to [x+1,n]. Also, once you find that a sum exists, break out of the inner loop.
long long exists = binary_search(array, x + 1, 100, t - array[x]);
if (exists >= 0)
{
    count++;
    break;
}
If you don't want to return duplicate values don't keep searching from the beginning of the list.
Example:
list = {1,2,3,4,5}
total = 5
Start your loop at the first item (1)
Now do your binary search on the rest of the items ({2,3,4,5})
Doing this will return the value 4
Second time around, the loop starts at 2
Binary search is done on {3,4,5}
etc
A problem arises in the case where you have duplicate items, e.g. the lists
{1,1,2,3,4,5} or {1,1,1,2,3,4,5}
These will yield 2 results even with the method I've shown. If this is acceptable to you, ignore the next part of this answer; otherwise you can eliminate the duplicates by using std::set.
When you get the list, you put it in a set. The set will automatically eliminate duplicates for you and then you can continue with the method I've shown above.
Note: std::set is implemented internally as a binary search tree, so you can also get rid of your quicksort and binary search implementations, because std::set does both.
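A minimal sketch of that approach (countTargets is a hypothetical helper, not code from the question):
#include <set>

// Count targets t in [-10000, 10000] for which two distinct values
// x and y with x + y == t exist in the input.
long long countTargets(const std::set<long long>& values) {
    long long count = 0;
    for (long long t = -10000; t <= 10000; ++t) {
        for (long long x : values) {
            long long y = t - x;
            if (y != x && values.count(y)) { // distinct values only
                ++count;
                break;                       // count each t at most once
            }
        }
    }
    return count;
}
For the million-element input, an std::unordered_set would make each lookup O(1) on average instead of O(log n).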
I tried some code on a coding website to find the largest prime factor of a number, and it exceeds the time limit for the last test case, where they are probably using a large prime number. Can you please help me reduce the complexity of the following code?
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    long n;
    long int lar, fact;
    long int sqroot;
    int flag;
    cin >> n;
    lar = 2, fact = 2;
    sqroot = sqrt(n);
    flag = 0;
    while (n > 1)
    {
        if ((fact > sqroot) && (flag == 0)) // checking only up to the square root
        {
            cout << n << endl;
            break;
        }
        if (n % fact == 0)
        {
            flag = 1;
            lar = fact;
            while (n % fact == 0)
                n = n / fact;
        }
        fact++;
    }
    if (flag == 1) // don't display if fact reached the square root value
        cout << lar << endl;
}
Here I've also taken care of checking only up to the square root. Still, how can I reduce the complexity further?
You can speed things up (if not reduce the complexity) by supplying a hard-coded list of the first N primes to use for the initial values of fact, since using composite values of fact is a waste of time. After that, avoid the obviously composite values of fact (like even numbers).
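A minimal sketch of that idea (the table below holds only the first few primes; a real table would be much longer):
// Trial division that starts from a small hard-coded prime table and then
// falls back to testing odd candidates only.
unsigned long long largestPrimeFactor(unsigned long long n) {
    static const unsigned long long smallPrimes[] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31};
    unsigned long long largest = 1;
    for (unsigned long long p : smallPrimes) {
        while (n % p == 0) { largest = p; n /= p; }
    }
    // Remaining candidates: odd numbers above the table (many are composite,
    // but every remaining prime factor is still found).
    for (unsigned long long f = 37; f * f <= n; f += 2) {
        while (n % f == 0) { largest = f; n /= f; }
    }
    if (n > 1) largest = n; // whatever is left is itself prime
    return largest;
}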
You can reduce the number of tests by skipping even numbers larger than 2, and stopping sooner if you have found smaller factors. Here is a simpler and faster version:
#include <iostream>
#include <cmath>
using namespace std;

int main() {
    unsigned long long n, lar, fact, sqroot;
    cin >> n;
    lar = 0;
    while (n && n % 2 == 0) {
        lar = 2;
        n /= 2;
    }
    fact = 3;
    sqroot = sqrt(n);
    while (fact <= sqroot) {
        if (n % fact == 0) {
            lar = fact;
            do { n /= fact; } while (n % fact == 0);
            sqroot = sqrt(n);
        }
        fact += 2;
    }
    if (lar < n)
        lar = n;
    cout << lar << endl;
    return 0;
}
I am not sure how large the input numbers may become; using the larger type unsigned long long for these computations will get you farther than long. Using a precomputed array of primes would help further, but not by a large factor.
The best result I've obtained is with the function below (lpf5()). It's based on the primality() function (below), which uses the formulas 6k+1 and 6k-1 to identify candidate primes. All prime numbers >= 5 can be expressed in one of the forms p = 6k+1 or p = 6k-1 with k > 0 (but not all numbers having such a form are prime). Expanding these formulas we get a sequence like the following:
k=1 5,7
k=2 11,13
k=3 17,19
k=4 23,25*
k=5 29,31
.
.
.
k=10 59,61
k=11 65*,67
k=12 71,73
...
5,7,11,13,17,19,23,25,29,31,...,59,61,65,67,71,73,...
We observe that the difference between consecutive terms is alternately 2 and 4. The same result can be obtained with simple math: the difference between 6k+1 and 6k-1 is obviously 2, and it is easy to see that the difference between 6k+1 and 6(k+1)-1 is 4.
The function primality(x) returns x when x is prime (or 0, take care) and the first divisor found when x is not prime.
I think you may obtain a better result by inlining the primality() function inside the lpf5() function.
I've also tried inserting a table with some primes (from 1 to 383, the primes among the first 128 results of the indicated formulas) inside the primality function, but the speed difference is negligible.
Here is the code:
#include <stdio.h>
#include <math.h>

typedef long long unsigned int uint64;

uint64 lpf5(uint64 x);
uint64 primality(uint64 x);

uint64 lpf5(uint64 x)
{
    uint64 x_ = x;
    while ((x_ = primality(x)) != x)
        x = x / x_;
    return x;
}

uint64 primality(uint64 x)
{
    uint64 div = 7, f = 2, q;

    if (x < 4 || x == 5)
        return x;
    if (!(x & 1))
        return 2;
    if (!(x % 3))
        return 3;
    if (!(x % 5))
        return 5;

    q = sqrt(x);
    while (div <= q) {
        if (!(x % div)) {
            return div;
        }
        f = 6 - f;
        div += f;
    }
    return x;
}

int main(void) {
    uint64 x, k;
    do {
        printf("Input long int: ");
        if (scanf("%llu", &x) < 1)
            break;
        printf("Largest Prime Factor: %llu\n", lpf5(x));
    } while (x != 0);
    return 0;
}
Here x, y <= 10^12 and y - x <= 10^6.
I have looped from left to right and checked each number for primality. This method is very slow when x and y are somewhere around 10^11 to 10^12. Is there any faster approach?
I have stored all primes up to 10^6. Can I use them to find the primes between huge values like 10^10 and 10^12?
for (i = x; i <= y; i++)
{
    num = i;
    if (check(num))
    {
        res++;
    }
}
My check function:
int check(long long int num)
{
    long long int i;
    if (num <= 1)
        return 0;
    if (num == 2)
        return 1;
    if (num % 2 == 0)
        return 0;
    long long int sRoot = sqrt(num * 1.0);
    for (i = 3; i <= sRoot; i += 2)
    {
        if (num % i == 0)
            return 0;
    }
    return 1;
}
Use a segmented sieve of Eratosthenes.
That is, use a bit set to store the numbers between x and y, represented by x as an offset and a bit set for [0,y-x). Then sieve (eliminate multiples) for all the primes less or equal to the square root of y. Those numbers that remain in the set are prime.
With y at most 10^12 you have to sieve with primes up to at most 10^6, which will take less than a second in a proper implementation.
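A minimal sketch of such a segmented sieve (function and variable names are mine, not from any particular source):
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Count primes in [x, y], with y up to about 1e12 and y - x up to about 1e6.
uint64_t countPrimesInRange(uint64_t x, uint64_t y) {
    // 1) Simple sieve of the base primes up to sqrt(y).
    uint64_t limit = (uint64_t)std::sqrt((double)y) + 1;
    std::vector<bool> isPrime(limit + 1, true);
    std::vector<uint64_t> basePrimes;
    for (uint64_t i = 2; i <= limit; ++i) {
        if (!isPrime[i]) continue;
        basePrimes.push_back(i);
        for (uint64_t j = i * i; j <= limit; j += i) isPrime[j] = false;
    }
    // 2) Sieve the segment [x, y], stored with offset x.
    if (x < 2) x = 2;
    std::vector<bool> segment(y - x + 1, true);
    for (uint64_t p : basePrimes) {
        uint64_t start = std::max(p * p, (x + p - 1) / p * p); // first multiple of p in [x, y]
        for (uint64_t m = start; m <= y; m += p) segment[m - x] = false;
    }
    uint64_t count = 0;
    for (bool b : segment) count += b;
    return count;
}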
This resource goes through a number of prime search algorithms in increasing complexity/efficiency. Here's the description of the best one, PG7.8 (you'll have to translate it back to C++; it shouldn't be too hard):
This algorithm efficiently selects potential primes by eliminating multiples of previously identified primes from consideration and
minimizes the number of tests which must be performed to verify the
primacy of each potential prime. While the efficiency of selecting
potential primes allows the program to sift through a greater range of
numbers per second the longer the program is run, the number of tests
which need to be performed on each potential prime does continue to
rise, (but rises at a slower rate compared to other algorithms).
Together, these processes bring greater efficiency to generating prime
numbers, making the generation of even 10 digit verified primes
possible within a reasonable amount of time on a PC.
Further skip sets can be developed to eliminate the selection of potential primes which can be factored by each prime that has already
been identified. Although this process is more complex, it can be
generalized and made somewhat elegant. At the same time, we can
continue to eliminate from the set of test primes each of the primes
which the skip sets eliminate multiples of, minimizing the number of
tests which must be performed on each potential prime.
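This is not PG7.8 itself, but a minimal sketch of the "skip set" idea it describes: generate candidates with a mod-30 wheel (skipping multiples of 2, 3 and 5) and trial-divide each candidate only by the primes found so far.
#include <cstdint>
#include <vector>

// Generate primes up to n using a mod-30 wheel for the candidates
// (skips all multiples of 2, 3 and 5) and trial division by known primes.
std::vector<uint64_t> primesUpTo(uint64_t n) {
    std::vector<uint64_t> primes;
    if (n >= 2) primes.push_back(2);
    if (n >= 3) primes.push_back(3);
    if (n >= 5) primes.push_back(5);
    // Gaps between numbers coprime to 30, starting at 7: 7, 11, 13, 17, 19, 23, 29, 31, ...
    static const int gaps[] = {4, 2, 4, 2, 4, 6, 2, 6};
    uint64_t candidate = 7;
    int g = 0;
    while (candidate <= n) {
        bool isPrime = true;
        for (uint64_t p : primes) {
            if (p * p > candidate) break;
            if (candidate % p == 0) { isPrime = false; break; }
        }
        if (isPrime) primes.push_back(candidate);
        candidate += gaps[g];
        g = (g + 1) % 8;
    }
    return primes;
}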
You can use the Sieve of Eratosthenes algorithm. This page has some links to implementations in various languages: https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes.
Here is my implementation of the Sieve of Eratosthenes:
#include <string>
#include <iostream>
using namespace std;

const int k = 110000; // you can change this constant to whatever maximum int you need to calculate
long int p[k + 1];    // here we store the sieve of Eratosthenes from 2 to k
long int j;

void init_prime() // in here we set up our array
{
    for (int i = 2; i <= k; i++)
    {
        if (p[i] == 0)
        {
            j = i;
            while (j <= k)
            {
                p[j] = i; // each multiple of the prime i is marked with i
                j = j + i;
            }
        }
    }
    /*for (int i = 2; i <= k; i++)
        cout << p[i] << endl;*/ // if you uncomment this you can see the output of the initialization
}

string prime(int first, int last) // this is an example of how you can use the initialized array
{
    string result = "";
    for (int i = first; i <= last; i++)
    {
        if (p[i] == i) // i is prime exactly when p[i] == i
            result = result + to_string(i) + " ";
    }
    return result;
}

int main() // I did this code some time ago for a contest; the first input was the number of cases, so nocases means "number of cases"
{
    int nocases, first, last;
    init_prime();
    cin >> nocases;
    for (int i = 1; i <= nocases; i++)
    {
        cin >> first >> last;
        cout << prime(first, last);
    }
    return 0;
}
You can use this sieve for factorisation too, since p[j] ends up storing a prime factor of j. This is actually the fastest implementation of the sieve I managed to create that day (it can compute the sieve for this range in less than a second).
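For example, a factorisation helper built on the array above might look like this (a hedged sketch, not part of the original answer):
#include <vector>

// Factorise n (2 <= n <= k) using the sieve array filled by init_prime():
// p[n] is a prime factor of n, so divide it out until nothing is left.
std::vector<long int> factorise(long int n) {
    std::vector<long int> factors;
    while (n > 1) {
        factors.push_back(p[n]);
        n /= p[n];
    }
    return factors;
}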
I have implemented Sieve of Eratosthenes to solve the SPOJ problem PRIME1. Though the output is fine, my submission exceeds the time limit. How can I reduce the run time?
#include <iostream>
#include <vector>
#include <cmath>
using namespace std;

int main()
{
    vector<int> prime_list;
    prime_list.push_back(2);
    vector<int>::iterator c;
    bool flag = true;
    unsigned int m, n;

    for (int i = 3; i <= 32000; i += 2)
    {
        flag = true;
        float s = sqrt(static_cast<float>(i));
        for (c = prime_list.begin(); c <= prime_list.end(); c++)
        {
            if (*c > s)
                break;
            if (i % (*c) == 0)
            {
                flag = false;
                break;
            }
        }
        if (flag == true)
        {
            prime_list.push_back(i);
        }
    }

    int t;
    cin >> t;
    for (int times = 0; times < t; times++)
    {
        cin >> m >> n;
        if (t) cout << endl;
        if (m < 2)
            m = 2;
        unsigned int j;
        vector<unsigned int> req_list;
        for (j = m; j <= n; j++)
        {
            req_list.push_back(j);
        }
        vector<unsigned int>::iterator k;
        flag = true;
        int p = 0;
        for (j = m; j <= n; j++)
        {
            flag = true;
            float s = sqrt(static_cast<float>(j));
            for (c = prime_list.begin(); c <= prime_list.end(); c++)
            {
                if ((*c) != j)
                {
                    if ((*c) > s)
                        break;
                    if (j % (*c) == 0)
                    {
                        flag = false;
                        break;
                    }
                }
            }
            if (flag == false)
            {
                req_list.erase(req_list.begin() + p);
                p--;
            }
            p++;
        }
        for (k = req_list.begin(); k < req_list.end(); k++)
        {
            cout << *k;
            cout << endl;
        }
    }
}
Your code is slow because you did not implement the Sieve of Eratosthenes algorithm. The algorithm works like this:
1) Create an array of size n-1, representing the numbers 2 to n, and fill it with the boolean value true (true means the number is prime; do not forget we start counting from the number 2, i.e. array[0] is the number 2).
2) Initialize array[0] = false.
3) Set Current_number = 2.
4) Iterate through the array, increasing the index by Current_number at each step and marking the visited entries false (they are multiples of Current_number).
5) Search for the first index (except index 0) with a true value.
6) Set Current_number = index + 2.
7) Continue steps 4-6 until the search is finished.
This algorithm takes O(n log log n) time.
What you do actually takes a lot more time (O(n^2)).
By the way, in the second step (where you search for prime numbers between m and n) you do not have to check whether those numbers are prime again; ideally you will have already calculated them in the first phase of the algorithm.
As I see on the site you linked, the main problem is that you can't actually create an array of size n-1, because the maximum number n is 10^9, which causes memory problems if you do it in this naive way. That problem is yours to solve :)
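A minimal sketch of those steps, sized for the 32000 limit this problem actually needs (collecting the primes into a vector is my addition, so the second [m,n] phase can reuse them directly):
#include <vector>

// Sieve of Eratosthenes; returns the primes up to n.
std::vector<int> sievePrimes(int n) {
    std::vector<bool> isComposite(n + 1, false);
    std::vector<int> primes;
    for (int i = 2; i <= n; ++i) {
        if (isComposite[i]) continue;
        primes.push_back(i);
        for (long long j = (long long)i * i; j <= n; j += i)
            isComposite[j] = true;  // mark every multiple of i as composite
    }
    return primes;
}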
I'd throw out what you have and start over with a really simple implementation of a sieve, and only add more complexity if really needed. Here's a possible starting point:
#include <vector>
#include <iostream>

int main() {
    int number = 32000;
    std::vector<bool> sieve(number, false);
    sieve[0] = true; // Not used for now,
    sieve[1] = true; // but you'll probably need these later.
    for (int i = 2; i < number; i++) {
        if (!sieve[i]) {
            std::cout << "\t" << i;
            for (int temp = 2 * i; temp < number; temp += i)
                sieve[temp] = true;
        }
    }
    return 0;
}
For the given range (up to 32000), this runs in well under a second (with output directed to a file -- to the screen it'll generally be slower). It's up to you from there though...
I am not really sure that you have implemented the sieve of Eratosthenes. Anyway, a couple of things that could speed up your algorithm to some extent: avoid multiple reallocations of the vector contents by preallocating space (look up std::vector<>::reserve), and note that the sqrt operation is expensive; you can probably avoid it altogether by modifying the test (stop when x*x > y instead of checking x <= sqrt(y)).
Then again, you will get a much better improvement by revising the actual algorithm. From a cursory look it seems as if you are iterating over all candidates and, for each one of them, trying to divide by all the known primes that could be factors. The sieve of Eratosthenes takes a single prime and discards all multiples of that prime in a single pass.
Note that the sieve does not perform any operation to test whether a number is prime: if it was not discarded before, then it is prime. Each non-prime number is visited only once for each unique factor. Your algorithm, on the other hand, processes every number many times (against the existing primes).
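For example, the sqrt in the first loop of the posted code could be dropped by writing the inner loop along these lines (note also the != comparison; comparing iterators with <= and dereferencing end() is not valid):
for (c = prime_list.begin(); c != prime_list.end() && (*c) * (*c) <= i; ++c) {
    if (i % (*c) == 0) {
        flag = false;
        break;
    }
}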
I think one way to slightly speed up your sieve is to avoid using the mod operator in this line.
if(i%(*c)==0)
Instead of the (relatively) expensive mod operation, maybe you could iterate forward in your sieve with addition.
Honestly, I don't know if this is correct. Your code is difficult to read without comments and with single-letter variable names.
The way I understand the problem is that you have to generate all primes in a range [m,n].
A way to do this without having to compute all primes from [0,n], because this is most likely what's slowing you down, is to first generate all the primes in the range [0,sqrt(n)].
Then use the result to sieve in the range [m,n]. To generate the initial list of primes, implement a basic version of the sieve of Eratosthenes (Pretty much just a naive implementation from the pseudo code in the Wikipedia article will do the trick).
This should enable you to solve the problem in very little time.
Here's a simple sample implementation of the sieve of Eratosthenes:
std::vector<unsigned> sieve( unsigned n ) {
    std::vector<bool> v( n, true );    // Will be used for testing numbers
    std::vector<unsigned> p;           // Will hold the prime numbers
    for( unsigned i = 2; i < n; ++i ) {
        if( v[i] ) {                   // Found a prime number
            p.push_back(i);            // Stuff it into our list
            for( unsigned j = i + i; j < n; j += i ) {
                v[j] = false;          // Isn't a prime/Is composite
            }
        }
    }
    return p;
}
It returns a vector containing only the primes below n. Then you can use this to implement the method I mentioned. Now, I won't provide the implementation for you, but you basically have to do the same thing as in the sieve of Eratosthenes, except that instead of using all integers [2,n], you just use the result you found. Not sure if this is giving away too much?
Since the SPOJ problem in the original question doesn't specify that it has to be solved with the Sieve of Eratosthenes, here's an alternative solution based on this article. On my six year old laptop it runs in about 15 ms for the worst single test case (n-m=100,000).
#include <set>
#include <iostream>
using namespace std;

int gcd(int a, int b) {
    while (true) {
        a = a % b;
        if (a == 0)
            return b;
        b = b % a;
        if (b == 0)
            return a;
    }
}

/**
 * Here is Rowland's formula. We define a(1) = 7, and for n >= 2 we set
 *
 *     a(n) = a(n-1) + gcd(n, a(n-1)).
 *
 * Here "gcd" means the greatest common divisor. So, for example, we find
 * a(2) = a(1) + gcd(2,7) = 8. The prime generator is then a(n) - a(n-1),
 * the so-called first differences of the original sequence.
 */
void find_primes(int start, int end, set<int>* primes) {
    int an;        // a(n)
    int anm1 = 7;  // a(n-1)
    int diff;
    for (int n = start; n < end; n++) {
        an = anm1 + gcd(n, anm1);
        diff = an - anm1;
        if (diff > 1)
            primes->insert(diff);
        anm1 = an;
    }
}

int main() {
    const int end = 100000;
    const int start = 2;
    set<int> primes;
    find_primes(start, end, &primes);
    cout << "Found " << primes.size() << " primes:" << endl;
    set<int>::iterator iter = primes.begin();
    for (; iter != primes.end(); ++iter)
        cout << *iter << endl;
}
Profile your code, find the hotspots, and eliminate them. (See profiler links for Windows and Linux.)