I am coding on HackerRank and came across this problem:
https://www.hackerrank.com/challenges/power-calculation
My code works for small inputs and for big numbers, but it times out on the big input files. Could someone help me make it more efficient?
My code:
z = []

def modexp(a, n, m):
    bits = []
    while n:
        bits.append(n % 2)
        n /= 2
    solution = 1
    bits.reverse()
    for x in bits:
        solution = (solution * solution) % m
        if x:
            solution = (solution * a) % m
    return solution
for _ in xrange(int(input())):
    while True:
        try:
            x = raw_input()
            sum = 0
            z = x.split(' ')
            power = int(z[1])
            limit = int(z[0])
            for i in range(0, limit + 1):
                sum = sum % 100 + modexp(i % 100, power, pow(10, 2))
            if sum < 10:
                print '%02d' % sum
            if sum > 10:
                print sum % 100
        except:
            break
Sample data - input:
10
487348 808701
204397 738749
814036 784709
713222 692670
890568 452450
686541 933150
935447 202322
559883 847002
468195 111274
833627 238704
Sample output:
76
13
76
75
24
51
20
54
90
42
One can easily reduce the number of power evaluations by observing that the values k**N mod 100 are periodic in k with period 100. Thus
decompose K = M*100 + L by computing M = K/100 and L = K%100.
Then
for k = 0 to L the power modexp(k%100, N, 100) occurs M+1 times,
for k = L+1 to 99 it occurs M times in the sum.
Thus each power sum reduces to at most 100 modular power computations (99 nontrivial ones, since the k = 0 term contributes nothing).
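A minimal sketch of that decomposition (my own code, not from the original answer; it relies on Python's built-in three-argument pow for the modular power):

def power_sum_mod100(K, N):
    # Sum of k**N for k = 0..K, taken mod 100.
    # k**N mod 100 depends only on k mod 100, so just count how often each residue occurs.
    M, L = divmod(K, 100)
    total = 0
    for k in range(100):
        count = M + 1 if k <= L else M   # residues 0..L occur M+1 times, the others M times
        total += count * pow(k, N, 100)  # built-in modular exponentiation
    return total % 100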
One can reduce the effort to compute the powers even more by observing that increasing powers of the same number are periodic in the last two digits. Generally the sequence
1, a % m, a**2 % m, a**3 % m, a**4 % m, ...
becomes periodic after an initial offset that is given by the highest multiplicity of a prime factor of m. One period length is given by the value of Euler's totient function at m.
The totient of 100 = 2²·5² is phi(100) = (2-1)·2·(5-1)·5 = 40. The offset before the period sets in is at most 2 (the highest prime multiplicity in 100), so it follows that for all integers a
a**2 % 100 == a**42 % 100 = a**82 % 100 = ...
a**3 % 100 == a**43 % 100 = a**83 % 100 = ...
and so on.
This means that for N > 41 one can reduce the exponent to N = 2 + (N-2) % 40. (Indeed one can replace 40 by 20 in that reduction, since the order of every unit modulo 100 already divides 20.)
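A quick way to apply and spot-check that reduction (my own sketch, not from the original answer):

import random

def reduce_exponent(N):
    # Valid for modulus 100: phi(100) = 40, and an offset of 2 covers bases sharing a factor with 100.
    return N if N <= 41 else 2 + (N - 2) % 40

# Randomised check of the claim a**N % 100 == a**reduce_exponent(N) % 100.
for _ in range(1000):
    a = random.randrange(1000)
    N = random.randrange(42, 10**6)
    assert pow(a, N, 100) == pow(a, reduce_exponent(N), 100)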
And as a final remark that will not have much impact on running time, only on the complexity of the code:
There is a shorter way to implement modexp; this algorithm is also a standard exercise in identifying loop invariants:
def modexp(a, n, m):
    solution = 1
    apower = a
    while n:
        if n % 2:
            solution = (solution * apower) % m
        n /= 2
        apower = (apower * apower) % m
    return solution
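(For the record, the loop invariant here is that solution * apower**n stays congruent to a**n0 modulo m, where n0 is the original exponent; and for m > 1 the whole function computes the same value as Python's built-in three-argument pow(a, n, m).)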
Related
I found this problem in a competitive-programming contest which is over now, so it can be answered.
Three primes (p1, p2, p3) (not necessarily distinct) are called special if (p1 + p2 + p3) divides p1*p2*p3. We have to find the number of these special triples when the primes cannot exceed 10^6.
I tried a brute force method but it timed out. Is there any other method?
If you are timing out, then you need to do some smart searching to replace brute force. There are just short of 80,000 primes below a million so it is not surprising you timed out.
So, you need to start looking more carefully.
For example, any triple (2, p, p+2) where p+2 is also prime will meet the criteria:
2 + 3 + 5 = 10; 2 * 3 * 5 = 30; 30 / 10 = 3.
2 + 5 + 7 = 14; 2 * 5 * 7 = 70. 70 / 14 = 5.
...
2 + p + p+2 = 2(p+2); 2 * p * (p+2) = 2p(p+2); 2p(p+2) / 2(p+2) = p.
...
Are there other triples that start with 2? Are there triples that start with 3? What forms do p2 and p3 take if p1= 3? Run your program for triples up to 500 or so and look for patterns in the results. Then extrapolate those results to 10^6.
I assume you are using a Sieve to generate your initial list of primes.
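As a small concrete start (my own sketch, not part of the original answer), counting just the (2, p, p+2) family with both primes below 10**6 only needs a twin-prime scan over a sieve:

N = 10 ** 6
sieve = bytearray([1]) * (N + 1)          # sieve[k] == 1  <=>  k is prime (after the loop)
sieve[0] = sieve[1] = 0
for p in range(2, int(N ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p::p] = bytearray(len(sieve[p * p::p]))

# Each twin-prime pair (p, p+2) below N gives one special triple (2, p, p+2).
twin_triples = sum(1 for p in range(3, N - 1, 2) if sieve[p] and sieve[p + 2])
print(twin_triples)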
I've experimented with this problem since you posted it. I've not solved it, but wanted to pass along what insight I have before I move onto something else:
Generating Primes is Not the Issue
With a proper sieve algorithm, we can generate all primes under 10**6 in a fraction of a second. (Less than 1/3 of a second on my Mac mini.) Spending time optimizing prime generation beyond this is time wasted.
The Brute Force Method
If we try to generate all combinations (with repetition) of three primes in Python, e.g.:
for prime_1 in primes:
    for prime_2 in primes:
        if prime_2 < prime_1:
            continue
        for prime_3 in primes:
            if prime_3 < prime_2:
                continue
            pass
Or better yet, push the problem down to the C level via Python's itertools:
from itertools import combinations_with_replacement

for prime_1, prime_2, prime_3 in combinations_with_replacement(primes, 3):
    pass
Then our timings, doing no actual work except generating the prime triples, look like this:
primes up to    sec.
10**2            0.04
10**3            0.13
10**4           37.37
10**5            ?
You can see how quickly the time increases with each order of magnitude. Here's my example of a brute force solution:
from itertools import combinations_with_replacement

def sieve_primes(n):  # assumes n > 1
    sieve = [False, False, True] + [True, False] * ((n - 1) // 2)
    p = 3
    while p * p <= n:
        if sieve[p]:
            for i in range(p * p, n + 1, p):
                sieve[i] = False
        p += 2
    return [number for number, isPrime in enumerate(sieve) if isPrime]

primes = sieve_primes(10 ** 3)
print("Finished Sieve:", len(primes), "primes")

special = 0
for prime_1, prime_2, prime_3 in combinations_with_replacement(primes, 3):
    if (prime_1 * prime_2 * prime_3) % (prime_1 + prime_2 + prime_3) == 0:
        special += 1

print(special)
Avoid Generating Triples, but Still Brute Force
Here's an approach that avoids generating triples. We take the smallest and largest primes we generated, cube them, and loop over every number between those two cubes with a custom factoring function. This custom factoring function only returns a value for numbers that are made up of exactly three prime factors; for any number made up of more or fewer, it returns None. This should be faster than normal factoring, as the function can give up early.
Numbers that factor into exactly three primes are easy to test for specialness. We're going to pretend our custom factoring function takes no time at all and simply measure how long it takes us to loop over all the numbers in question:
smallest_factor, largest_factor = primes[0], primes[-1]

for number in range(smallest_factor**3, largest_factor**3):
    pass
Again, some timings:
primes up to    sec.
10**2             0.14
10**3           122.39
10**4             ?
Doesn't look promising. In fact, worse than our original brute force method. And our custom factoring function in reality adds a lot of time. Here's my example of this solution (copy sieve_primes() from the previous example):
def factor_number(n, count):
    size = 0
    factors = []
    for prime in primes:
        while size < count and n % prime == 0:
            factors.append(prime)
            n //= prime
            size += 1
        if n == 1 or size == count:
            break
    if n > 1 or size < count:
        return None
    return factors

primes = sieve_primes(10 ** 3)
print("Finished Sieve:", len(primes), "primes")

special = 0
smallest_factor, largest_factor = primes[0], primes[-1]
for number in range(smallest_factor**3, largest_factor**3):
    factors = factor_number(number, 3)
    if factors:
        if number % sum(factors) == 0:
            special += 1

print(special)
I am trying to solve Problem 23 of Project Euler in Python 2. I am getting 4179935, but the correct answer is 4179871.
This is the actual question
A perfect number is a number for which the sum of its proper divisors is exactly equal to the number. For example,
the sum of the proper divisors of 28 would be 1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest number that can be written as the sum of
two abundant numbers is 24. By mathematical analysis, it can be shown that all integers greater than 28123 can be
written as the sum of two abundant numbers. However, this upper limit cannot be reduced any further by analysis
even though it is known that the greatest number that cannot be expressed as the sum of two abundant numbers is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum of two abundant numbers.
import itertools

def find_non_abundant_sums(num):
    non_abundant_number = set()
    non_abundant_sums = set()
    all_numbers = set(range(1, num + 1))
    for i in range(1, num + 1):
        divisor_sum = find_divisor_sum(i)
        if divisor_sum > i:
            non_abundant_number.add(i)
    for j in itertools.combinations(non_abundant_number, 2):
        if j[0] + j[1] < num + 1:
            non_abundant_sums.add(j[0] + j[1])
    print(sorted(non_abundant_sums))
    return sum(all_numbers - non_abundant_sums)

def find_divisor_sum(num):
    divisors = set()
    for i in range(1, num):
        if num % i == 0:
            divisors.add(i)
    return sum(divisors)

print(find_non_abundant_sums(28123))
expected: 4179871
actual: 4179935
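One likely source of that discrepancy (an observation of mine, not from the original post): itertools.combinations never pairs an abundant number with itself, so sums such as 24 = 12 + 12 and 40 = 20 + 20 are never generated, and 24 + 40 = 64 is exactly the difference between 4179935 and 4179871. A minimal sketch of the fix, replacing the pairing loop with combinations_with_replacement:

for j in itertools.combinations_with_replacement(non_abundant_number, 2):
    if j[0] + j[1] < num + 1:
        non_abundant_sums.add(j[0] + j[1])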
The task: given a number N, find the smallest positive integer whose digits, when squared and summed, equal N.
Example:
Input: | Output:
5 -> 12 (1^2 + 2^2 = 5)
500 -> 18888999 (1^2 + 8^2 + 8^2 + 8^2 + 8^2 + 9^2 + 9^2 + 9^2 = 500)
I have written a pretty simple brute-force solution, but it has big performance problems:
#include <iostream>
using namespace std;

int main() {
    int n;
    bool found = true;
    unsigned long int sum = 0;
    cin >> n;
    int i = 0;
    while (found) {
        ++i;
        if (n == 0) { // The code below doesn't work if n = 0, so we assign a value to sum right away (in case n = 0)
            sum = 0;
            break;
        }
        int j = i;
        while (j != 0) { // After each iteration, j's last digit gets stripped away (j /= 10), so we stop right when j becomes 0
            sum += (j % 10) * (j % 10); // After each iteration, sum gets increased by (last digit of j)^2; (j % 10) gets the last digit of j
            j /= 10;
        }
        if (sum == n) { // If the sum of i's squared digits equals the given number n, the loop breaks and we have our result
            break;
        }
        sum = 0; // Otherwise, sum gets reset and the loop starts over
    }
    cout << i;
    return 0;
}
I am looking for a fast solution to the problem.
Use dynamic programming. If we knew the first digit of the optimal solution, then the rest would be an optimal solution for the remainder of the sum. As a result, we can guess the first digit and use a cached computation for smaller targets to get the optimum.
def digitsum(n):
    best = [0]
    for i in range(1, n + 1):
        best.append(min(int(str(d) + str(best[i - d**2]).strip('0'))
                        for d in range(1, 10)
                        if i >= d**2))
    return best[n]
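As a quick check against the examples above: digitsum(5) returns 12 and digitsum(500) returns 18888999.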
Let's try to explain David's solution. I believe his assumption is that, given an optimal solution abcd..., the optimal solution for n - a^2 would be bcd...; therefore, if we compute all the solutions from 1 up to n, we can rely on the previously computed solutions for smaller numbers as we try different subtractions.
So how can we interpret David's code?
(1) Place the solutions for the numbers 1 through n, in order, in the table best:
for i in range(1, n+1):
    best.append(...
(2) The solution for the current query, i, is the minimum over an array of choices, one for each digit d between 1 and 9 for which subtracting d^2 from i is feasible.
The minimum of the conversion to integers...
min(int(
...of the string d concatenated with the string of the solution for i - d^2 previously recorded in the table (the strip('0') removes the '0' that would otherwise be concatenated when that sub-solution is 0):
str(d) + str(best[i - d**2]).strip('0')
Let's modify the last line of David's code, to see an example of how the table works:
def digitsum(n):
    best = [0]
    for i in range(1, n + 1):
        best.append(min(int(str(d) + str(best[i - d**2]).strip('0'))
                        for d in range(1, 10)
                        if i >= d**2))
    return best  # original line was 'return best[n]'
We call digitsum(10):
=> [0, 1, 11, 111, 2, 12, 112, 1112, 22, 3, 13]
When we get to i = 5, our choices for d are 1 and 2 so the array of choices is:
min([ int(str(1) + str(best[5 - 1])), int(str(2) + str(best[5 - 4])) ])
=> min([ int( '1' + '2' ), int( '2' + '1' ) ])
And so on and so forth.
So this is in fact a well-known problem in disguise: the minimum coin change problem, in which you are given a sum and asked to pay it with the minimum number of coins. Here, instead of ones, nickels, dimes and quarters, we have coins worth 81, 64, 49, 36, ..., 1 cents.
Apparently this is a typical example used to encourage dynamic programming. Unlike the recursive approach, in which you work from the top down, in dynamic programming you work from the bottom up and "memoize" the results that will be required later. Thus it is much faster!
Here is my approach in JS; it is probably doing a very similar job to David's method.
function getMinNumber(n){
    var sls = Array(n).fill(),
        sct = [], max;
    sls.map((_, i, a) => {
        max = Math.min(9, ~~Math.sqrt(i + 1));
        sct = [];
        while (max) sct.push(a[i - max*max] ? a[i - max*max].concat(max--)
                                            : [max--]);
        a[i] = sct.reduce((p, c) => p.length < c.length ? p : c);
    });
    return sls[sls.length - 1].reverse().join("");
}

console.log(getMinNumber(500));
What we are doing is generating, from the bottom up, a lookup array called sls; this is where the memoizing happens. Then, going from 1 to n, we record the best result among several choices. For example, to find 10's partition we start with the integer part of 10's square root, which is 3, and keep it in the max variable. With 3 as one of the numbers, the other part must account for 10 - 3*3 = 1. We then look up the previously solved result for 1, which is in fact [1] at sls[0], and concat 3 to it, so the result is [3,1]. Once we are done with 3, we repeat the same job with the next smaller value, down to 1: after 3 we check 2 (result [2,2,1,1]) and then 1 (result [1,1,1,1,1,1,1,1,1,1]), compare the lengths of the results for 3, 2 and 1, and keep the shortest, which is [3,1], storing it at sls[9] (a.k.a. a[i]), the slot for 10 in our lookup array.
(Edit) This answer is not correct. The greedy approach does not work for this problem -- sorry.
I'll give my solution in a language agnostic fashion, i.e. the algorithm.
I haven't tested but I believe this should do the trick, and the complexity is proportional to the number of digits in the output:
digitSquared(n) {
    % compute the occurrences of each digit
    numberOfDigits = [0 0 0 0 0 0 0 0 0]
    for m from 9 to 1 {
        numberOfDigits[m] = n / (m*m);
        n = n % (m*m);
        if (n == 0)
            exit loop;
    }

    % assemble the final output
    output = 0
    powerOfTen = 0
    for m from 9 to 1 {
        for i from 1 to numberOfDigits[m] {
            output = output + m * 10^powerOfTen
            powerOfTen = powerOfTen + 1
        }
    }
}
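For a concrete counterexample to the greedy choice: with n = 128 the greedy pass takes one 9, one 6, one 3 and two 1s (81 + 36 + 9 + 1 + 1 = 128), giving the five-digit number 11369, whereas 64 + 64 = 128 gives the two-digit answer 88.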
I want to find the minimum set of prime numbers which would sum to a given value, e.g. 9 = 7 + 2 (not 3 + 3 + 3).
I have already generated an array of prime numbers using the sieve of Eratosthenes.
I am traversing the array in descending order to take the largest prime number smaller than or equal to the remaining value. This works great if the number is odd.
But it fails for even numbers: e.g. it gives 122 = 113 + 7 + 2, whereas the minimal decomposition is 122 = 109 + 13.
From Goldbach's conjecture we know that any even number greater than 2 can be represented as the sum of two prime numbers. So if the number is even we can directly return 2 as the answer.
But I am trying to figure out a way other than brute force to find the minimum number of primes.
Although your question didn't say so, I assume you are looking for the set of primes that has the smallest cardinality.
If n is even, then consider the primes p in order, 2, 3, 5, …; eventually n - p will be prime, so n is the sum of two primes. This process typically converges very quickly, with the smaller of the two primes seldom larger than 1000 (and usually much smaller than that).
If n is odd, and n - 2 is prime, then n is the sum of the primes 2 and n - 2.
If n is odd, and n - 2 is not prime, then n - 3 is even and can be written as the sum of two primes, as described above.
Thus you can always write any target n greater than 3 as a sum of at most three primes: just one prime if n is itself prime, and two or three as described above otherwise.
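A sketch of that decision procedure in Python (my own code, not from the answer above; is_prime is plain trial division, which is fine for moderate n and could be swapped for a sieve lookup or a Miller-Rabin test on large inputs):

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def min_primes_summing_to(n):
    # Assumes n >= 2; relies on Goldbach's conjecture holding in the searched range.
    if is_prime(n):
        return [n]                        # one prime suffices
    if n % 2 == 1 and is_prime(n - 2):
        return [2, n - 2]                 # odd n with n - 2 prime: two primes
    extra = []
    if n % 2 == 1:
        extra, n = [3], n - 3             # odd otherwise: peel off a 3, leaving an even number
    p = 2
    while not is_prime(n - p):            # Goldbach search; the smaller prime is usually tiny
        p += 1
        while not is_prime(p):
            p += 1
    return extra + [p, n - p]

print(min_primes_summing_to(122))         # -> [13, 109]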
Try this out!
Not an ideal code but if you want to have a working solution :P
primes = [2, 3, 5, 7]
D = 29
i = -1
sum_ = 0
count = 0
while sum_ != D:
    sum_ = sum_ + primes[i]
    count += 1
    if sum_ == D:
        break
    elif D - sum_ == primes[i-1]:
        count += 1
        break
    elif D - sum_ < primes[i-1] and (D - sum_ not in primes):
        sum_ = sum_ - primes[i]
        count = count - 1
        i = i - 1
print(count)
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143?
I solved this problem on Project Euler my own way, which was slow, and then I found this solution on someone's GitHub account. I can't figure out why it works. Why does repeatedly removing factors equal to the loop index end up giving the largest prime factor? Any insight?
def Euler3(n=600851475143):
    for i in range(2, 100000):
        while n % i == 0:
            n //= i
            if n == 1 or n == i:
                return i
This function works by finding successive factors of its input. The first factor it finds will necessarily be prime. After a prime factor is found, it is divided out of the original number and the process continues. By the time we've divided them all out (leaving 1, or the current factor (i)) we've got the last (largest) one.
Let's add some tracing code here:
def Euler3(n=600851475143):
    for i in range(2, 100000):
        while n % i == 0:
            n //= i
            print("Yay, %d is a factor, now we should test %d" % (i, n))
            if n == 1 or n == i:
                return i

Euler3()
The output of this is:
$ python factor.py
Yay, 71 is a factor, now we should test 8462696833
Yay, 839 is a factor, now we should test 10086647
Yay, 1471 is a factor, now we should test 6857
Yay, 6857 is a factor, now we should test 1
It is true that for a general solution the top of the range should have been the square root of n, but in Python math.sqrt returns a floating-point number, so I think the original programmer was taking a lazy shortcut. The solution will not work in general, but it was good enough for this Project Euler problem.
But the rest of the algorithm is sound.
Consider how it solves for n=20:
iteration i = 2
    while true (20 % 2 == 0)
        n = n//2 = 20//2 = 10
        if (n == 1 or n == 2): false
    while true (10 % 2 == 0)
        n = n//2 = 10//2 = 5
        if (n == 1 or n == 2): false
    while false (5 % 2 == 0)
iteration i = 3
    while false (5 % 3 == 0)
iteration i = 4
    while false (5 % 4 == 0)
iteration i = 5
    while true (5 % 5 == 0)
        n = n//5 = 5//5 = 1
        if (n == 1 or n == 5): true
            return i, which is 5, which is the largest prime factor of 20
It is just removing factors, and since it already removes multiples of prime factors (the while loop), many values of i are really just wasted effort. The only values of i that have any chance of doing something within the loop are prime values of i. The n==i test covers the case of numbers like 25 that are squares of a prime number.
The range seems too limited, though. It would not give the correct answer for 2 * (the smallest prime greater than 100,000).
No one has actually answered your question. The for loop tests each number i in turn. The test of the while loop is successful when i is a factor of n; in that case, it reduces n, then checks if it is finished by comparing i to 1 or n. The test is a while (and not just if) in case i divides n more than once.
Though clever, that's not the way integer factorization by trial division is normally written; it also won't work if n has a prime factor greater than 100000. I have an explanation on my blog. Here's my version of the function, which lists all the factors of n instead of just the largest:
def factors(n):
    fs = []
    while n % 2 == 0:
        fs += [2]
        n //= 2
    if n == 1:
        return fs
    f = 3
    while f * f <= n:
        if n % f == 0:
            fs += [f]
            n //= f
        else:
            f += 2
    return fs + [n]
This function handles 2 separately, then tries only odd factors. It also doesn't place a limit on the factor, instead stopping when the factor is greater than the square root of the remaining n, because at that point n must be prime. The factors are inserted in increasing order, so the last factor in the output list will be the largest, which is the one you want.
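For example, factors(600851475143) returns [71, 839, 1471, 6857], and the last element, 6857, is the largest prime factor asked for in the Project Euler problem.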