Primality test for numbers of the form 10^n + k

I have some numbers of the form 10^N + K, where N is about 1000 and K is really small (less than 500). I want to test these numbers for primality. Currently I'm using Fermat's test to base 2, preceded by checking for small factors (<10000).
However, this is rather slow for my purposes. Is there any algorithm quicker than that? Can this special form be exploited somehow?
Also, if two numbers differ only in K, is it possible to test these two numbers a bit more quickly?

If K has a factor of either 2 or 5 then 10^N + K is composite, so those cases can be ruled out by testing only the small number K. Large primes P all satisfy P mod 6 = 1 or 5; you can use this to eliminate 2/3 of the possible K values. With a little work you can set up a 2-4 wheel over K to avoid a lot of division:
increment <- either 2 or 4 as required
repeat
    K <- K + increment
    increment <- 6 - increment
until ((K mod 5 <> 0 and isPrime(10^N + K)) or (K > 500))
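For illustration, here is a minimal Python sketch of that wheel, starting from K = 1 and using a Fermat test to base 2 as the isPrime placeholder (the function names and the starting point are my assumptions; any stronger test can be substituted):

def fermat_base2(n):
    # Fermat test to base 2, as in the question; some composites can slip through.
    return pow(2, n - 1, n) == 1

def find_k(N, k_limit=500):
    base = 10 ** N
    K, increment = 1, 2                 # K runs over the residues 1 and 3 mod 6
    while K <= k_limit:
        if K % 5 != 0 and fermat_base2(base + K):
            return K                    # probable prime found
        K += increment
        increment = 6 - increment
    return None                         # no probable prime with K <= k_limit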
Trial factoring up to 10,000 is fine. Are you building a list of the primes up to 10,000 first? Use Eratosthenes' sieve to create the list and just read the primes off it.
Running a Fermat test to base 2 is a good start; it finds a lot of composites reasonably quickly.
After that you need to implement the probabilistic Miller-Rabin test, and run it enough times that it is more probable that your hardware has failed than that the number is composite.
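For reference, here is a minimal Miller-Rabin sketch in Python (the function name and the choice of 40 random rounds are illustrative assumptions, not part of the original answer):

import random

def miller_rabin(n, rounds=40):
    # Probabilistic primality test: returns False for composites,
    # True for probable primes (error probability at most 4**-rounds).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True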

Also, if two numbers differ only in K, is it possible to test these two numbers a bit more quickly?
To find the primes in a relatively small interval like [10^N + K1, 10^N + K2],
you can use the sieve of Eratosthenes to check for divisibility by small primes.
The remaining prime candidates can then be checked by a probabilistic test like Miller-Rabin.
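As a sketch of that idea (the helper names and the 10,000 small-prime bound are assumptions for illustration), you can sieve the offsets K in [K1, K2] with small primes, one modulo per prime, and hand the survivors to a stronger test such as Miller-Rabin:

def small_primes(limit):
    # Ordinary sieve of Eratosthenes up to limit.
    sieve = [True] * limit
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def surviving_offsets(N, k1, k2, limit=10000):
    # Cross out every K in [k1, k2] for which some small prime divides 10^N + K.
    base = 10 ** N
    alive = [True] * (k2 - k1 + 1)
    for p in small_primes(limit):
        start = (-(base + k1)) % p          # smallest index idx with p | (base + k1 + idx)
        for idx in range(start, len(alive), p):
            alive[idx] = False
    return [k1 + i for i, ok in enumerate(alive) if ok]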


Finding intersections in a given range?

Assume an array of N (N <= 100000) elements a1, a2, ..., an. You are given a range L, R in it, where 1 <= L <= R <= N, and you are required to get the number of values in that range which are divisible by at least one number from a set S, which is also given; this set can be any subset of {1,2,...,10}. A fast method is needed because there may be many queries Q (Q <= 100000), each with its own range and its own S, so looping over the values for each query would be very slow.
I thought of storing the counts of values divisible by each number of the big set {1,2,...,10} in 10 arrays of N elements each, and taking cumulative sums so that I can get the number of values divisible by any specific number in any range in O(1) time. For example, if I need the number of values divisible by at least one of 2, 3, 5, I add the counts for each of them and then remove the intersections. But I couldn't figure out how to calculate the intersections without 2^10 or 2^9 calculations each time, which would also be very slow (and possibly hugely memory consuming) because it may be done 100000 times. Any ideas?
Your idea is correct. You can use the inclusion-exclusion principle and prefix sums to find the answer. There is just one more observation you need to make.
If there's a pair of numbers a and b in the set such that a divides b, we can remove b without changing the answer to the query (indeed, if b | x, then a | x). Thus, we always get a set such that no element divides any other one.
The number of such masks is much smaller than 2^10. In fact, it's 102. Here's the code that computes it:
def good(mask):
    for i in filter(lambda b: mask & (1 << (b - 1)), range(1, 11)):
        if any(i % j == 0 for j in filter(lambda b: mask & (1 << (b - 1)), range(1, i))):
            return False
    return True

print(list(filter(good, range(1, 2 ** 10))))
Thus, the preprocessing requires approximately 100N operations and approximately 100N numbers to store (which looks reasonably small).
Moreover, there are at most 5 elements in any "good" mask (this can be checked using the code above). Thus, we can answer each query using around 2^5 operations.
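A sketch of how the pieces could fit together in Python (the function names, and the choice to key the prefix sums by the subset lcms, are my assumptions rather than part of the original answer):

from math import gcd
from itertools import combinations

def lcm(values):
    out = 1
    for v in values:
        out = out * v // gcd(out, v)
    return out

def preprocess(a):
    # prefix[d][i] = how many of a[0..i-1] are divisible by d, for every d
    # that can appear as the lcm of a subset of {1..10} (only a few dozen values).
    lcm_values = {lcm(sub) for r in range(1, 11)
                  for sub in combinations(range(1, 11), r)}
    prefix = {}
    for d in lcm_values:
        row = [0] * (len(a) + 1)
        for i, x in enumerate(a):
            row[i + 1] = row[i] + (x % d == 0)
        prefix[d] = row
    return prefix

def query(prefix, L, R, S):
    # 1-based inclusive range [L, R]; S is a subset of {1..10}.
    reduced = [b for b in S if not any(a != b and b % a == 0 for a in S)]
    total = 0
    for r in range(1, len(reduced) + 1):            # inclusion-exclusion, at most 2^5 terms
        for sub in combinations(reduced, r):
            sign = 1 if r % 2 == 1 else -1
            d = lcm(sub)
            total += sign * (prefix[d][R] - prefix[d][L - 1])
    return total

# Example: how many of a[1..4] are divisible by at least one of {2, 3, 5}?
a = [6, 7, 10, 9, 11]
print(query(preprocess(a), 1, 4, {2, 3, 5}))        # 6, 10 and 9 qualify, so it prints 3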

Algorithm of divisors

I am given a list of integers (up to 1000 of them) whose product is a given integer n.
I need to find the highest power to which any prime divides n.
For instance: 4, 7, 8 multiply to 224, and the highest power will then be 5, since 224 = 2^2 * 7 * 2^3 = 2^5 * 7.
The problem is, the 1000 integers can be as large as 2^64, hence n is very large.
What is a great algorithm to solve this ?
Difficult. I'd start by checking small primes first (in your example 4, 7, 8: the product has a factor 2^5; divide out the powers of two, leaving 1, 7, 1; then do the same for 3, 5, 7 etc., up to some bound X).
Now you need to find a larger prime p > X that appears to a higher power than the highest you have found so far. Spending lots of time finding prime factors that occur only once seems wasteful; you need primes that are factors of multiple numbers. Calculate the gcd of each pair of numbers and look at the prime factors of these gcds. There are lots of details that need working out, which is why I started with "difficult".
Worst case you can't easily find any prime that occurs twice, so you need to check if each of the numbers contains the square of a prime as factor.
As an example: You checked for factors up to 1000, and you found that the highest power of a prime was 83^3. So now you need to find a larger prime that is a fourth power or show there is none. Calculate the pairwise gcd's (greatest common divisor). A large prime could be a divisor of multiple of these gcd's coming from four different numbers, or p could be factor of three gcd's, with p^2 a factor of one number etc.
To clarify the principle: Say you have two HUGE numbers x and y, and you want to know which is the highest power of a prime which divides xy. You could factor x and y and go from there. If they are both primes or products of two large primes, say x = p or pq, and y = r or rs, this takes a long time.
Now calculate the gcd of x and y. If the greatest common divisor is z > 1, then z is a factor of x and y, so z^2 is a factor of xy. If the greatest common divisor is 1, then x and y have no common factor. Since we don't need factors that are not square, we look for squares and higher factors. For that you only need to divide by factors up to x^(1/3) or y^(1/3).
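The answer is deliberately open-ended, but its first phase, plus the start of the gcd idea, might look like this in Python (the names and the bound X = 1000 are assumptions; the hard part, factoring the pairwise gcds, is not worked out here):

from math import gcd

def small_prime_phase(numbers, X=1000):
    # Divide every prime p <= X out of the list and track the largest
    # exponent that any such prime reaches in the product of all numbers.
    best = 0
    nums = list(numbers)
    for p in range(2, X + 1):
        if any(p % q == 0 for q in range(2, p)):    # crude primality check, fine for small X
            continue
        exponent = 0
        for i, v in enumerate(nums):
            while v % p == 0:
                v //= p
                exponent += 1
            nums[i] = v
        best = max(best, exponent)
    return best, nums                               # nums now holds the unfactored cofactors

def pairwise_gcds(nums):
    # A prime > X that occurs at least twice in the product must either divide
    # one cofactor to a power >= 2 or divide the gcd of two different cofactors.
    out = []
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            g = gcd(nums[i], nums[j])
            if g > 1:
                out.append(g)
    return out

print(small_prime_phase([4, 7, 8]))                 # (5, [1, 1, 1]) since 224 = 2^5 * 7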

Is there a more efficient way to generate Palindromes which are prime if the bounds are large?

I have solved this problem from USACO training about generating prime palindromes between two limits; I have quoted the problem transcript at the end. I solved it by generating all odd palindromes below the upper limit, checking each for primality, and printing those that are prime. The solution passed the grader, but is there an even more efficient method than my naive generate-and-check approach? (I really wish to learn more efficient strategies for competitive programming.)
The number 151 is a prime palindrome because it is both a prime number and a palindrome (it is the same number when read forward as backward). Write a program that finds all prime palindromes in the range of two supplied numbers a and b (5 <= a < b <= 100,000,000); both a and b are considered to be within the range.
PROGRAM NAME: pprime
INPUT FORMAT
Line 1: Two integers, a and b
SAMPLE INPUT (file pprime.in)
5 500
OUTPUT FORMAT
The list of palindromic primes in numerical order, one per line.
SAMPLE OUTPUT (file pprime.out)
5
7
11
101
131
151
181
191
313
353
373
383
I guess I should also provide my algorithm for getting the output
Step 1. Take Input a and b
Step 2. Initialise a list of odd palindromes op
Step 3. Add 5, 7 and 11 to op
Step 4. Generate all the 3,5,7 digit odd palindromes and add to op
Step 5. Check for every element e of op
Step 5.1. If e>=a and e<=b
Step 5.1.1. If e is PRIME print e
Step 5.2. Otherwise, once e exceeds b, terminate the loop
Had the upper bound been larger this process would obviously have failed, therefore I am looking for a more efficient solution.
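For what it's worth, here is a minimal sketch (my own illustration, not the original code) of the mirroring idea behind step 4: build each odd-length palindrome from its first half, so only palindromes are ever produced and far fewer candidates reach the primality test.

def odd_length_palindromes(digits):
    # All palindromes with an odd number of digits (3, 5 or 7 here),
    # built by mirroring the first half around a middle digit.
    half_len = digits // 2
    for half in range(10 ** (half_len - 1), 10 ** half_len):
        s = str(half)
        for middle in "0123456789":
            yield int(s + middle + s[::-1])

op = [5, 7, 11]
for d in (3, 5, 7):
    # Keep only odd values; an even palindrome (other than 2) cannot be prime.
    op.extend(p for p in odd_length_palindromes(d) if p % 2 == 1)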
EDIT: I check for primes the usual way, as in
Given that the number I have to check for primality is n:
if (n == 1) return false;
if (n == 2) return true;
for (int i = 2; i * i <= n; ++i)   // i * i <= n avoids recomputing sqrt(n) on every iteration
    if (n % i == 0) return false;
return true;
Actually the way you check for primality is quite efficient for small numbers. For checking the primality of bigger numbers, you can use primality tests such as Miller-Rabin and the Baillie-PSW (BPSW) test. The thing is, those algorithms really look like magic; I've never even tried to understand how and why they work, but they do work pretty well! With the latter, I've been able to generate a 320-digit prime. You can easily find implementations of those algorithms online.

Sieve of Eratosthenes using precalculated primes

I have all the prime numbers that fit in a 32-bit unsigned int, and I want to use them to generate some 64-bit prime numbers. Using trial division is too slow, even with optimizations in logic and compilation.
I'm trying to modify Sieve of Eratosthenes to work with the predefined list, as follow:
1. the primes are in array A, from 2 to 4294967291
2. the candidates are in array B, from 2^32 to X, incremented by 1
3. find C, which is the first multiple of the current prime in the candidate range
4. from C, mark and jump by the current prime up to X
5. go to 1
The problem is step 3, which uses a modulus operation to find the prime multiple; such operations are the reason I didn't use trial division.
Is there any better way to implement step 3, or the whole algorithm?
Thank you.
Increment by 2, not 1. That's the minimal optimization you should always use - working with odds only. No need to bother with the evens.
In C++, use vector<bool> for the sieve array. It gets automatically bit-packed.
Pre-calculate your core primes with a segmented sieve. Then continue to work in segments big enough to fit in your cache, without adding new primes to the core list. For each prime p maintain an additional long long int value: its current multiple (starting from the prime's square, of course). The step value is 2p in value, or p positions in the odds-packed sieve array, where the i-th entry stands for the number o + 2i, with o the least odd number not below the range start. No need to sort by the multiples' values; the upper bound on the core primes in use rises monotonically.
sqrt(0xFFFFFFFFFF) = 1048576. PrimePi(1048576)=82025 primes is all you need in your core primes list. That's peanuts.
Integer arithmetic on long long ints should work just fine to find the modulo, and hence the smallest multiple in range, when you first start (or resume your work).
See also a related answer with pseudocode, and another with C code.
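A small Python sketch of the bookkeeping described above (Python rather than C++ for brevity; the function and variable names are my own, not from the answer). Each core prime carries its current odd multiple forward between segments, so a modulo is only needed the first time a prime is used:

def sieve_segment(core_primes, next_multiple, o, size):
    # One odds-only segment: index i stands for the odd number o + 2*i.
    # next_multiple[p] carries each prime's current odd multiple across segments.
    hi = o + 2 * size                       # one past the last odd in the segment
    sieve = [True] * size
    for p in core_primes:                   # assumed sorted ascending
        if p == 2:
            continue
        if p * p >= hi:
            break
        m = next_multiple.get(p)
        if m is None:                       # first use of p: one modulo to find its first multiple
            m = max(p * p, o + (-o) % p)
            if m % 2 == 0:                  # make sure the multiple is odd
                m += p
        while m < hi:
            sieve[(m - o) // 2] = False
            m += 2 * p                      # next odd multiple of p
        next_multiple[p] = m                # resume from here in the next segment
    return sieve

# Example: odds-only segment covering 101..199 (core primes up to sqrt(199)).
o, size = 101, 50
seg = sieve_segment([2, 3, 5, 7, 11, 13], {}, o, size)
print([o + 2 * i for i, alive in enumerate(seg) if alive])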

Fastest way to find sum of digits on big numbers

I have some big numbers (again) and I need to find whether the sum of the digits is an even number.
I tried this: finding the sum of the digits with a while loop and then checking whether that sum % 2 equals 0. It works, but it's too slow for big numbers, because I am given intervals of numbers; with an input like 1999999 19999999999 my program cannot finish within the time limit, which is 0.1 s.
What should I do? Is there a faster way to do this?
EDIT: The input 1999999 19999999999 means it starts with 1999999 and checks all the numbers, as I wrote above, up to 19999999999, and because we are talking about big numbers (< 2^30) my program is not adequate.
You don't need to sum the digits. Think about it. The sum starts with zero, which is generally regarded as even (although you can special case this if you want).
Each even digit changes nothing. If the sum was odd, it stays odd, if it was even it stays even.
Each odd digit changes the sum from even to odd, or odd to even.
So, just count the number of odd digits. If the number is even, then the sum of all the digits is even. If the number is odd, then the sum of all the digits is odd.
Now, you only need to do this for the FIRST number in your range. What you need to do next is figure out how the evenness or oddness changes as you keep adding one.
I leave this as an exercise for the reader. Homework has to involve some work!
Hint: if you find that the sum of the digits of a given number n is odd, will the sum of the digits of the number n + 1 be odd or even?
Update: as @Mark pointed out, it is not so simple... but the anomalies appear only when n + 1 is a multiple of 10, i.e. (n + 1) % 10 == 0. Then the parity does not change. However, among these cases, every 10th is an exception where the parity does still change (e.g. 199 -> 200). And so on... basically, depending on how far the trailing run of 9s in n extends, one can decide whether or not the parity changes between n and n + 1. I admit it is a bit tedious to calculate, but I am still sure it is faster than just adding up all these digits...
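To make the rule concrete, here is a small sketch (my own illustration, not from the answer): going from n to n + 1 changes the digit sum by 1 - 9*t, where t is the number of trailing 9s of n, so the parity flips exactly when t is even.

def digit_sum_parity(n):
    # 0 for an even digit sum, 1 for an odd one; equals the count of odd digits mod 2.
    return sum(int(d) for d in str(n)) % 2

def next_parity(n, parity):
    # Parity of the digit sum of n + 1, given the parity for n.
    trailing_nines = 0
    while n % 10 == 9:
        trailing_nines += 1
        n //= 10
    # digit_sum(n + 1) = digit_sum(n) + 1 - 9 * trailing_nines,
    # which flips the parity exactly when trailing_nines is even.
    return parity ^ (1 if trailing_nines % 2 == 0 else 0)

# Quick check of the rule against direct computation.
p = digit_sum_parity(190)
for n in range(190, 220):
    assert p == digit_sum_parity(n)
    p = next_parity(n, p)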
Here is a hint that may work: you don't need to sum the digits, you just need to know whether the result will be odd or even. If you start with the assumption that your total is even, even digits have no effect and odd digits toggle it (i.e. an odd number of odd digits makes the sum odd).
Depending on the language there may be a faster way to perform the calculation without adding.
Also remember -- a number is odd or even based on its last binary digit.
Example:
In ASM you could XOR the low order bit to get the correct result
In FORTH this would not work so well...