Project Euler 55 - C++

I was solving this problem (https://projecteuler.net/problem=55) and I couldn't get it right. The answer was 249 and my code was giving 136. I don't know what is wrong with my code. Here is my code:
#include <stdio.h>

long long reverse(long long n)
{
    long long a=0, t=n, temp;
    while (t)
    {
        temp=t%10;
        a=a*10+temp;
        t=t/10;
    }
    return a;
}

bool palindromic(long long n)
{
    return reverse(n)==n;
}

bool lychrel(long long n)
{
    long long k=n;
    for (int i=0; i<50; i++)
    {
        k+=reverse(k);
        if (palindromic(k)) return false;
    }
    return true;
}

int main()
{
    int i, count=0;
    for (i=1; i<10000; i++)
        if (lychrel(i)) count++;
    printf("%d", count);
    return 0;
}
Thanks in advance!

When you apply k += reverse(k);, it overflows for many of the Lychrel numbers.
For example, 196 overflows after 40 iterations.
To test whether a number is a Lychrel number you need more than the 64 bits a long long provides (63 usable bits, to be precise, since it is signed). You need either big integers, or string manipulation if you want to do the "add" manually on two digit strings.
Check this answer for big integers: C++ Big Integer
If you want a quick and dirty fix, you can just test for overflow and treat any number that overflows as needing "too many" iterations. This is not a correct program (because it does not test the full 50 iterations), but it will still give you the correct result (because all the non-Lychrel numbers below ten thousand become palindromic before they overflow). It will certainly fail if you raise the limit from ten thousand to bigger numbers.
To test for overflow you can, for example, check whether k + reverse(k) < k.
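As a sketch of that quick-and-dirty variant (reusing reverse() and palindromic() from the question), the overflow test can be done against LLONG_MAX before the addition, which avoids relying on signed overflow (technically undefined behaviour):

#include <climits>

// Assumed sketch: stop as soon as the next sum would no longer fit in a
// long long and treat that number as (probably) Lychrel.
bool lychrel(long long n)
{
    long long k = n;
    for (int i = 0; i < 50; i++)
    {
        long long r = reverse(k);
        if (r > LLONG_MAX - k)      // k + r would overflow
            return true;            // give up: assume "too many" iterations
        k += r;
        if (palindromic(k))
            return false;
    }
    return true;
}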

Related

random prime number generator over one million is only giving prime numbers half the time

I want to find a random prime number over one million, but my program only outputs one maybe half the time.
When it doesn't, it just outputs nothing.
#include <cstdlib>   // rand, srand
#include <ctime>     // time
#include <iostream>
using namespace std;

unsigned long long int primeFinder(unsigned long long int base) {
    int flag = 0;
    unsigned long long int m = base / 2;
    for (int i = 2; i <= m; i++) {
        if (base % i == 0) {
            flag = 1;
            break;
        }
    }
    if (flag == 1) {
        base = base + 2;
        primeFinder(base);
    }
    else {
        return base;
    }
}

int main() {
    srand(time(NULL));
    unsigned long long int base;
    base = 1000000;
    int number = rand() % 1000000;
    base += number;
    cout << primeFinder(base);
}
I am assuming this is because of my if statement, but I am not sure how to ensure it returns a prime number. I am using Visual Studio Express 2017.
The main problem is that you do not ensure that your starting point is an odd number. Since you recursively call primeFinder() incrementing the "base" by two, this will result in endless recursion when the starting point is an even number, i.e. about half the time.
An easy way to fix this is to change the initial call as follows:
cout<<primeFinder(base|1);
Next, you only need to check for divisors up to the square root. This can be done by calculating m as follows:
unsigned long long int m = llround(sqrt(base));
Remember to #include<cmath>.
Furthermore, I would recommend restructuring primeFinder() to avoid the recursion and have a separate function to determine if a given number is a prime:
bool isPrime(unsigned long long int n)
{
    unsigned long long int m = llround(sqrt(n));
    for (int i = 2; i <= m; i++) {
        if (n%i == 0) {
            return false;
        }
    }
    return (n>1);
}

unsigned long long int primeFinder(unsigned long long int base)
{
    while(!isPrime(base))
    {
        base += 2;
    }
    return base;
}
Finally, when we have a while loop or a recursion, it is always good to know the worst case of how many iterations the program may need to perform.
In this case the question is: starting from an odd number n, what is the maximal value of the next prime, i.e. the smallest prime, p, such that p >= n?
To the rescue comes Bertrand's Postulate (which despite the name is actually a theorem), saying that p < 2*n. Thus, for a given starting value of base, primeFinder() will run at most about base/2 iterations (since we increment the guess by 2 each iteration).
Thus, the worst case complexity of primeFinder() is O(n^(3/2)).
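For completeness, a minimal driver combining the pieces above (isPrime(), the loop-based primeFinder(), and the base|1 fix) might look like this; the random starting point mirrors the original main():

#include <cmath>
#include <cstdlib>
#include <ctime>
#include <iostream>
using namespace std;

// isPrime() and primeFinder() as defined above

int main() {
    srand(time(NULL));
    unsigned long long int base = 1000000 + rand() % 1000000;
    cout << primeFinder(base | 1) << endl;   // base|1 forces an odd starting point
    return 0;
}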

Trying out a problem but code I had written is not giving output for large numbers. why?

I was trying this question.
The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143?
And I had written the following code:
#include<iostream>
#define num 600851475143
using namespace std;

int isprime(unsigned long long int n)
{
    unsigned long long int c=0;
    for(unsigned long long int i=2;i<n;i++)
    {
        if(n%i==0)
        {
            c++;
            break;
        }
    }
    if(c==0)
    {
        return 1;
    }
    else
    {
        return 0;
    }
}

int main()
{
    unsigned long long int a,i,n=num;
    while(n-- && n>1)
    {
        if(isprime(n)==1 && num%n==0)
        {
            cout<<n;
            break;
        }
    }
    return 0;
}
The problem is that the code works for 13195 and other small values, but gives no output for 600851475143. Can anyone explain why it is not working for the large value, and what changes should be made to get the correct output?
The code snippets below are C (but should work just as well in C++):
#include <stdio.h>

#define uIntPrime unsigned long long int
#define uIntPrimeFormat "llu"

uIntPrime findSmallestPrimeFactor(uIntPrime num)
{
    uIntPrime limit = num / 2 + 1;
    for(uIntPrime i=2; i<limit; i++)
    {
        if((num % i) == 0)
        {
            return i;
        }
    }
    return num;
}

uIntPrime findLargestPrimeFactor(uIntPrime num)
{
    uIntPrime largestPrimeFactor = 1; // start with the smallest possible value
    while (num > 1) {
        uIntPrime primeFactor = findSmallestPrimeFactor(num);
        if (primeFactor > largestPrimeFactor) largestPrimeFactor = primeFactor;
        num = num / primeFactor;
    }
    return largestPrimeFactor;
}
How can this work?
(First function:) Counting up from 2 means you start with the smallest prime factors. The first i that divides num evenly must be prime: any composite divisor has smaller prime factors, and those would already have been found.
(Second function:) Whenever a factor is found, it is divided out of the number, and the search for the smallest prime factor of what remains starts over. (The comparison is probably superfluous, because the factors are found in ascending order anyway, so the last one found is the largest; it is kept so the structure resembles a familiar minimum/maximum search. Proving that right or wrong with your own main routine is left to you.)
The stop condition is reached once the last factor has been extracted: dividing the value by itself leaves num equal to 1.
(There is certainly still plenty of room for speeding this up!)
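As a starting point for such a test, a minimal main() might look like this; it simply feeds the number from the question into findLargestPrimeFactor():

// appended after the two functions above
int main(void)
{
    uIntPrime n = 600851475143ULL;
    printf("largest prime factor of %" uIntPrimeFormat ": %" uIntPrimeFormat "\n",
           n, findLargestPrimeFactor(n));
    return 0;
}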

Sieve of Eratosthenes for large numbers c++

Just like this question, I am also working on the Sieve of Eratosthenes, also from the book "Programming: Principles and Practice Using C++", chapter 4. I was able to implement it correctly and it functions exactly as the exercise asks.
#include <iostream>
#include <vector>
using namespace std;

int main() {
    unsigned int amount = 0;
    cin >> amount;
    vector<int> numbers;
    for (unsigned int i = 0; i <= amount; i++) {
        numbers.push_back(i);
    }
    for (unsigned int p = 2; p < amount; p++) {
        if (numbers[p] == 0)
            continue;
        cout << p << '\n';
        for (unsigned int i = p + p; i <= amount; i += p) {
            numbers[i] = false;
        }
    }
    return 0;
}
Now, how would I be able to handle really big numbers in the amount input? The unsigned int type should allow me to enter numbers up to 2^32 = 4,294,967,296. But I can't: I run out of memory. Yes, I've done the math: storing 2^32 ints at 32 bits each takes 32/8 * 2^32 = 16 GiB of memory, and I have just 4 GiB...
What I am really doing here is setting non-primes to zero, so I could use a boolean instead. But a bool still takes 8 bits, so 1 byte each. Theoretically I could then go to the limit of unsigned int (8/8 * 2^32 = 4 GiB), using some swap space for the OS and overhead. But I have an x86_64 PC, so what about numbers larger than 2^32?
Knowing that primes are important in cryptography, surely there must be a more efficient way of doing this? And are there also ways to optimize the time needed to find all those primes?
For storage, you could use the std::vector<bool> container. Because of how it works, you trade some speed for storage: it stores one bit per boolean, so your storage becomes 8 times as efficient. You should be able to reach numbers close to 8 * 4,294,967,296 if all your RAM is available to this one program. The only other thing you need to do is use unsigned long long to make 64-bit numbers available.
Note: testing the code example below with an amount input of 8 billion gave a memory usage of approximately 975 MiB, confirming the theoretical number.
You can also gain some time by declaring the complete vector at once, without iteration: vector<bool> numbers (amount + 1, true); creates one entry per number from 0 to amount, all set to true. Now you can adjust the code to set non-primes to false instead of 0.
Furthermore, once the sieve has run up to the square root of amount, all numbers that are still true are primes. Insert if (p * p >= amount) as an additional continue condition, just after outputting the prime number, so the inner loop is skipped once it can no longer mark anything new. This is a modest improvement to the processing time as well.
Edit: the inner loop can start at p * p instead of p + p, because all smaller multiples of p have already been marked as non-prime by smaller primes.
All together you should get something like this:
#include <iostream>
#include <vector>
using namespace std;

int main() {
    unsigned long long amount = 0;
    cin >> amount;
    vector<bool> numbers (amount + 1, true); // one entry per number from 0 to amount
    for (unsigned long long p = 2; p < amount; p++) {
        if ( ! numbers[p])
            continue;
        cout << p << '\n';
        if (p * p >= amount)
            continue;
        for (unsigned long long i = p * p; i <= amount; i += p) {
            numbers[i] = false;
        }
    }
    return 0;
}
You've asked a couple of different questions.
For primes up to 2^32, sieving is appropriate, but you need to work in segments instead of in one big block. My answer here tells how to do that.
For cryptographic primes, which are very much larger, the process is to pick a number and then test it for primality, using a probabilistic test such as a Miller-Rabin test or a Baillie-Wagstaff test. This process isn't perfect, and occasionally a composite might be chosen instead of a prime, but such an occurrence is very rare.
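To illustrate the Miller-Rabin idea on numbers that still fit in 64 bits (real cryptographic primes need a big-integer type), a deterministic sketch could look like the following. The 128-bit intermediate product is a gcc/clang extension, so treat that part as an assumption rather than portable C++:

#include <iostream>
using namespace std;

typedef unsigned long long ull;

// modular multiplication through a 128-bit intermediate (gcc/clang extension)
ull mulmod(ull a, ull b, ull m) {
    return (unsigned __int128)a * b % m;
}

// modular exponentiation by repeated squaring
ull powmod(ull a, ull e, ull m) {
    ull r = 1;
    a %= m;
    while (e) {
        if (e & 1) r = mulmod(r, a, m);
        a = mulmod(a, a, m);
        e >>= 1;
    }
    return r;
}

// Deterministic Miller-Rabin: the witnesses 2..37 are known to be
// sufficient for every number that fits in 64 bits.
bool isPrimeMR(ull n) {
    if (n < 2) return false;
    const ull witnesses[] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37};
    for (ull p : witnesses)
        if (n % p == 0) return n == p;     // small primes and easy composites
    ull d = n - 1;
    int r = 0;
    while ((d & 1) == 0) { d >>= 1; ++r; } // n - 1 = d * 2^r with d odd
    for (ull a : witnesses) {
        ull x = powmod(a, d, n);
        if (x == 1 || x == n - 1) continue;
        bool composite = true;
        for (int i = 1; i < r; ++i) {
            x = mulmod(x, x, n);
            if (x == n - 1) { composite = false; break; }
        }
        if (composite) return false;
    }
    return true;
}

int main() {
    cout << boolalpha << isPrimeMR(6857) << '\n';            // true: 6857 is prime
    cout << boolalpha << isPrimeMR(600851475143ULL) << '\n'; // false: composite
    return 0;
}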

Improving brute force solution to Project Euler #25

I recently stumbled on this Project Euler Problem #25:
The 12th term, F12, is the first term to contain three digits.
What is the first term in the Fibonacci sequence to contain 1000 digits?
I only know C++98 and no other programming language. I have tried to solve it, making changes to get C++11 support.
Working:
#include <iostream>
#include <cstdio>

long len(long); //finding length

int main()
{
    /* Ques: What is the first term in fibonacci series to contain 1000 digits? */
    int ctr=2;
    unsigned long first, second, third, n;
    first=1;
    second=1;
    std::cout<<"\t **Project EULER Question 25**\n\n";
    for(int i=2;;++i)
    {
        third=first+second;
        // cout<<" "<<third;
        int x=len(third);
        // cout<<" Length: "<<x;
        // cout<<"\n";
        first=second;
        second=third;
        ctr++;
        if(x>1000) // for small values, program works properly
        {
            std::cout<< " THE ANSWER: "<< ctr;
            system("pause");
            break;
        }
    }
}

long len(long num)
{
    int ctr=1;
    while(num!=0)
    {
        num=num/10;
        if(num!=0)
        {
            ctr++;
        }
    }
    return(ctr);
}
I know this is brute force, but can I make it more efficient so that I get the answer?
Any help will be greatly appreciated.
EDIT:
By using Binet's Formula, as suggested by PaulMcKenzie and implementing it as:
#define phi (1+sqrt(5))/2

int main(void)
{
    float n= ((999 + (1/2)*log10(5))/(log10(phi))); //Line 1
    cout<<"Number is : "<<n;
    return 0;
}
Output: 4780.187012
Changing Line 1, above, to :
float n= ((999 + log10(sqrt(5)))/(log10(phi)));
OUTPUT: 4781.859375
What could possibly be the error here?
An unsigned long simply can't hold a 1000-digit number, so you get overflow in your code when first and second reach the unsigned long limit. If you want a brute force solution, consider using something like a big-integer library, or write one yourself.
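A minimal sketch of the do-it-yourself route: store each Fibonacci term as a vector of decimal digits (least significant digit first) and add them digit by digit, so the term count is the only thing that ever lives in a built-in integer:

#include <iostream>
#include <vector>
using namespace std;

// add two numbers stored as digit vectors (least significant digit first)
vector<int> addDigits(const vector<int>& a, const vector<int>& b) {
    vector<int> sum;
    int carry = 0;
    for (size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
        int d = carry;
        if (i < a.size()) d += a[i];
        if (i < b.size()) d += b[i];
        sum.push_back(d % 10);
        carry = d / 10;
    }
    return sum;
}

int main() {
    vector<int> first(1, 1), second(1, 1);   // F1 = F2 = 1
    int term = 2;                            // index of `second`
    while (second.size() < 1000) {           // size() == number of digits
        vector<int> third = addDigits(first, second);
        first = second;
        second = third;
        ++term;
    }
    cout << "THE ANSWER: " << term << '\n';
    return 0;
}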

fastest method for finding number of prime numbers between two large numbers x and y

Here x, y <= 10^12 and y - x <= 10^6.
I have looped from left to right and checked each number for primality. This method is very slow when x and y are around 10^11 and 10^12. Is there a faster approach?
I have stored all primes up to 10^6. Can I use them to find primes between huge values like 10^10 and 10^12?
for(i=x;i<=y;i++)
{
    num=i;
    if(check(num))
    {
        res++;
    }
}
My check function:
int check(long long int num)
{
    long long int i;
    if(num<=1)
        return 0;
    if(num==2)
        return 1;
    if(num%2==0)
        return 0;
    long long int sRoot = sqrt(num*1.0);
    for(i=3; i<=sRoot; i+=2)
    {
        if(num%i==0)
            return 0;
    }
    return 1;
}
Use a segmented sieve of Eratosthenes.
That is, use a bit set to store the numbers between x and y, represented by x as an offset and a bit set for [0,y-x). Then sieve (eliminate multiples) for all the primes less or equal to the square root of y. Those numbers that remain in the set are prime.
With y at most 10^12 you have to sieve with primes up to at most 10^6, which will take less than a second in a proper implementation.
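A sketch of that idea, assuming y <= 10^12 and y - x <= 10^6 as in the question, so both the base sieve and the window fit comfortably in memory:

#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>
using namespace std;

typedef unsigned long long ull;

// count the primes in [x, y] with a segmented sieve
long long countPrimesInRange(ull x, ull y) {
    if (x < 2) x = 2;
    ull limit = (ull)sqrt((double)y) + 1;

    // ordinary sieve up to sqrt(y) (at most about 10^6 for y <= 10^12)
    vector<bool> small(limit + 1, true);
    small[0] = small[1] = false;
    vector<ull> primes;
    for (ull i = 2; i <= limit; ++i) {
        if (!small[i]) continue;
        primes.push_back(i);
        for (ull j = i * i; j <= limit; j += i) small[j] = false;
    }

    // sieve the window [x, y]; index i stands for the number x + i
    vector<bool> window(y - x + 1, true);
    for (ull p : primes) {
        ull start = max(p * p, (x + p - 1) / p * p); // first multiple >= x (and >= p*p)
        for (ull m = start; m <= y; m += p) window[m - x] = false;
    }

    long long count = 0;
    for (bool isPrime : window) count += isPrime;
    return count;
}

int main() {
    // hypothetical example: primes in the top 10^6-wide window below 10^12
    cout << countPrimesInRange(999999000000ULL, 1000000000000ULL) << '\n';
    return 0;
}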
This resource goes through a number of prime search algorithms in increasing complexity/efficiency. Here's the description of the best one, PG7.8 (you'll have to translate it back to C++; it shouldn't be too hard):
This algorithm efficiently selects potential primes by eliminating multiples of previously identified primes from consideration and minimizes the number of tests which must be performed to verify the primacy of each potential prime. While the efficiency of selecting potential primes allows the program to sift through a greater range of numbers per second the longer the program is run, the number of tests which need to be performed on each potential prime does continue to rise, (but rises at a slower rate compared to other algorithms). Together, these processes bring greater efficiency to generating prime numbers, making the generation of even 10 digit verified primes possible within a reasonable amount of time on a PC.

Further skip sets can be developed to eliminate the selection of potential primes which can be factored by each prime that has already been identified. Although this process is more complex, it can be generalized and made somewhat elegant. At the same time, we can continue to eliminate from the set of test primes each of the primes which the skip sets eliminate multiples of, minimizing the number of tests which must be performed on each potential prime.
You can use the Sieve of Eratosthenes algorithm. This page has some links to implementations in various languages: https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes.
Here is my implementation of the Sieve of Eratosthenes:
#include <string>
#include <iostream>
using namespace std;

const int k = 110000; // you can change this constant to whatever maximum int you would need to calculate
long int p[k + 1];    // here we store the Sieve of Eratosthenes from 2 to k
long int j;

void init_prime() // in here we set our array
{
    for (int i = 2; i <= k; i++)
    {
        if (p[i] == 0)
        {
            j = i;
            while (j <= k)
            {
                p[j] = i;
                j = j + i;
            }
        }
    }
    /*for (int i = 2; i <= k; i++)
        cout << p[i] << endl;*/ // if you uncomment this you can see the output of initialization...
}

string prime(int first, int last) // this is an example of how you can use the initialized array
{
    string result = "";
    for (int i = first; i <= last; i++)
    {
        if (p[i] == i)
            result = result + to_string(i) + " ";
    }
    return result;
}

int main() // I wrote this code some time ago for a contest; the first input was the number of cases, then the actual input, so nocases means "number of cases"
{
    int nocases, first, last;
    init_prime();
    cin >> nocases;
    for (int i = 1; i <= nocases; i++)
    {
        cin >> first >> last;
        cout << prime(first, last);
    }
    return 0;
}
You can use this Sieve of Eratosthenes for factorization too. It is actually the fastest version of the sieve I managed to create that day (it can calculate the sieve over this range in less than a second).
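For the factorization use, a hypothetical helper built on the same p[] array could look like this: p[n] ends up holding the largest prime factor of n (each larger prime overwrites the marks of the smaller ones), so repeatedly dividing by it peels off one prime factor at a time.

string factorize(long int n) // valid for 2 <= n <= k, after init_prime() has run
{
    string result = "";
    while (n > 1)
    {
        result = result + to_string(p[n]) + " "; // p[n] is the largest prime factor of n
        n = n / p[n];
    }
    return result;
}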