Better algorithm to find nth prime number? - primes

Until now, I've been using the Sieve of Eratosthenes to generate the first n prime numbers.
However, I was wondering: does there exist a better algorithm, or can an existing one be improved to perform better?

For sufficiently large N (e.g. more than a million or so), the best algorithm is to use an approximation (e.g. Logarithmic Integral or Riemann's R function), then use a fast prime count method such as LMO, then sieve the small remainder. This is many orders of magnitude faster than sieving.
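As a rough sketch of that structure (illustrative only, not the author's code): prime_count() below is a placeholder for a fast prime-counting routine such as primecount's LMO implementation, the window bounds are the standard Rosser/Dusart estimates, and a real implementation would use a much tighter approximation (inverse li or Riemann R) so the sieved window is far smaller.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

int64_t prime_count(int64_t x);   // assumed: fast pi(x), e.g. backed by primecount (LMO)

int64_t nth_prime(int64_t n) {    // assumes n >= 6 so the bounds below are valid
    double ln = std::log((double)n), lln = std::log(ln);
    int64_t hi = (int64_t)(n * (ln + lln));        // Rosser: p_n < n(ln n + ln ln n)
    int64_t lo = (int64_t)(n * (ln + lln - 1.0));  // Dusart: p_n > n(ln n + ln ln n - 1)
    int64_t cnt = prime_count(hi);                 // number of primes <= hi, so cnt >= n

    // Sieve only the window [lo, hi], then walk down (cnt - n) primes from hi.
    std::vector<bool> comp(hi - lo + 1, false);
    for (int64_t p = 2; p * p <= hi; ++p) {
        bool is_p = true;                          // crude check for the sieving primes;
        for (int64_t d = 2; d * d <= p; ++d)       // a real implementation would sieve these too
            if (p % d == 0) { is_p = false; break; }
        if (!is_p) continue;
        int64_t start = std::max(p * p, (lo + p - 1) / p * p);
        for (int64_t m = start; m <= hi; m += p) comp[m - lo] = true;
    }
    for (int64_t x = hi; x >= lo; --x) {
        if (!comp[x - lo]) {                       // x is prime, currently the cnt-th prime
            if (cnt == n) return x;
            --cnt;
        }
    }
    return -1;                                     // not reached for n >= 6
}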
See https://math.stackexchange.com/questions/507178/most-efficient-algorithm-for-nth-prime-deterministic-and-probabilistic
There are at least two open source implementations.
https://metacpan.org/pod/ntheory
https://github.com/kimwalisch/primecount
The latter has progressed past the first, and is also multithreaded.
Add: Will Ness has also pointed out a nice post from Daniel Fischer that provides a walkthrough of different ways to solve this: Calculating and printing the nth prime number

Related

Sieve Of Eratosthenes in O(n)

I recently came across an article that claimed it can find all primes less than n in O(n) using an efficient Sieve of Eratosthenes. However, I am unable to see how it is O(n).
https://www.geeksforgeeks.org/sieve-eratosthenes-0n-time-complexity/
Could anyone please help with that?
The normal Sieve of Eratosthenes is O(n log log n).
Paul Pritchard has done some work on sieves similar to the Sieve of Eratosthenes that run in O(n) and even in O(n / log log n). They are tricky to implement, and despite improved theoretical time complexity, the bookkeeping involved in running the sieves makes them slower than the normal Sieve of Eratosthenes.
I discuss a simple version of Pritchard's sieve at my blog.
It is a version of the Gries and Misra (1978) sieve, which is an O(n) sieve. A better description can be found here:
(external link) Sieve of Eratosthenes Having Linear Time Complexity.
For a more theoretical look at this type of sieve, from an expert in the field, see Pritchard's paper:
(external link) Linear Prime-Number Sieves: A Family Tree (1987, PDF).
Pritchard is well known for his sub-linear sieve algorithm and paper as well as other early contributions.
The version at GfG uses a lot of extra memory; the version at CP uses a little less. Both are huge compared to typical byte or bit implementations of the SoE. At 10^9 it uses over 60x more memory than a simple monolithic bit-array SoE, and runs at half the speed, even when using uint32_t types.
So in practice it is slower than a simple 4-line monolithic SoE, which is usually where we start before getting into the interesting optimizations (segmented sieves, wheels, etc.). If you actually want the factor array, then that's useful. It's also useful for learning and experimentation, though the GfG article doesn't actually do much other than give the code. The CP page does go over a bit of the history and some memory/speed analysis.
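For reference, this is the kind of minimal monolithic sieve being referred to (a sketch, with collection of the primes added so it returns something usable):

#include <cstdint>
#include <vector>

std::vector<uint32_t> sieve_to(uint32_t n) {
    std::vector<bool> comp(n + 1, false);              // comp[i] == true means i is composite
    for (uint32_t i = 2; (uint64_t)i * i <= n; ++i)
        if (!comp[i])
            for (uint64_t j = (uint64_t)i * i; j <= n; j += i)
                comp[j] = true;                        // cross off multiples starting at i*i
    std::vector<uint32_t> primes;
    for (uint32_t i = 2; i <= n; ++i)
        if (!comp[i]) primes.push_back(i);
    return primes;                                     // e.g. sieve_to(30) -> 2, 3, 5, ..., 29
}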
The algorithm at your link is a variation of Algorithm 3.3 in Paul Pritchard's paper "Linear Prime-Number Sieves: a Family Tree". The reason the algorithm is linear, i.e. O(n), is that each composite is removed exactly once: a composite c has the unique form p*f where p = lpf(c), its least prime factor, and it is removed when the outer loop variable is f and the inner loop variable j is such that p[j] = p.
Incidentally, the code is inelegant. There is no need for two arrays; the SPF (smallest prime factor) array suffices. Also, the first test (on j) in the inner loop is unnecessary.
Many other linear sieves are presented in Pritchard's paper, one of which is due to Gries and Misra, which is an entirely different algorithm. The algorithm at your link is often mis-attributed to Gries and Misra.
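For comparison, here is a sketch of the linear sieve written with only the SPF (smallest prime factor) array, the single-array form mentioned above. Each composite c = p*f with p = lpf(c) is assigned exactly once, which is what makes the whole thing O(n).

#include <cstdint>
#include <vector>

void linear_sieve(uint32_t n, std::vector<uint32_t>& spf, std::vector<uint32_t>& primes) {
    spf.assign(n + 1, 0);                      // spf[c] == 0 means c has not been reached yet
    primes.clear();
    for (uint32_t f = 2; f <= n; ++f) {
        if (spf[f] == 0) {                     // f was never crossed off, so f is prime
            spf[f] = f;
            primes.push_back(f);
        }
        for (uint32_t p : primes) {
            if (p > spf[f] || (uint64_t)p * f > n) break;
            spf[p * f] = p;                    // p is the least prime factor of p*f
        }
    }
}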

Fastest prime test (can be probabilistic)

I am looking for the fastest algorithm to check if a number is prime. The algorithm doesn't have to be deterministic as long as the chance of it failing is very small. Preferably it should be possible to control the probability of failure by some parameter like "iteration count".
It would be enough for the algorithm to work for integers <= 10^18, but it would be better if it worked for all integers representable by a C++ unsigned long long, assuming it is 64 bits (i.e. up to 18,446,744,073,709,551,615).
There are already some questions like this one, but they require the algorithm to be deterministic, while for me it's fine if it is probabilistic, as long as it's "mostly accurate".
As others said, consider Miller-Rabin tests.
Here is a link for testing numbers less than 2^64: https://www.techneon.com/
You have to test at most three different bases per candidate. To get something probabilistic but about three times faster, just check one base randomly chosen out of those three.
I believe the Miller-Rabin primality testing algorithm fits your needs perfectly.
Some resources:
Miller-Rabin Wikipedia
Implementation and extra information
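To make that concrete, here is a sketch of a Miller-Rabin test for 64-bit inputs (an illustration, not code from the linked resources). With the fixed set of the first 12 prime bases below, the test is known to be deterministic for every n < 2^64; swapping the fixed set for a few randomly chosen bases gives the faster probabilistic variant discussed above. The 128-bit type used for the modular multiplication is a GCC/Clang extension.

#include <cstdint>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (uint64_t)((__uint128_t)a * b % m);   // 128-bit intermediate avoids overflow
}

static uint64_t powmod(uint64_t a, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    a %= m;
    while (e) {
        if (e & 1) r = mulmod(r, a, m);
        a = mulmod(a, a, m);
        e >>= 1;
    }
    return r;
}

bool is_prime(uint64_t n) {
    if (n < 2) return false;
    for (uint64_t p : {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37})
        if (n % p == 0) return n == p;           // handles small n and cheap composites
    uint64_t d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; ++s; }       // write n-1 = d * 2^s with d odd
    for (uint64_t a : {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37}) {
        uint64_t x = powmod(a, d, n);
        if (x == 1 || x == n - 1) continue;      // a is not a witness for this n
        bool composite = true;
        for (int i = 1; i < s; ++i) {
            x = mulmod(x, x, n);
            if (x == n - 1) { composite = false; break; }
        }
        if (composite) return false;             // a witnesses that n is composite
    }
    return true;
}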

The fastest way for dividing large integers [duplicate]

I need to divide numbers represented as digits in byte arrays with a non-standard number of bytes. It may be 5 bytes or 1 GB or more. Division should be done on the numbers represented as byte arrays, without any conversion to native numeric types.
Divide-and-conquer division winds up being a whole lot faster than the schoolbook method for really big integers.
GMP is a state-of-the-art big-number library. For just about everything, it has several implementations of different algorithms that are each tuned for specific operand sizes.
Here is GMP's "division algorithms" documentation. The algorithm descriptions are a little bit terse, but they at least give you something to google when you want to know more.
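As an illustration of how little code this takes with GMP (a sketch: it assumes the byte arrays hold unsigned, most-significant-byte-first magnitudes and that the divisor is non-zero), mpz_import/mpz_export convert between raw byte arrays and mpz_t, and mpz_tdiv_q internally picks an appropriate division algorithm for the operand sizes:

#include <gmp.h>
#include <cstdint>
#include <vector>

std::vector<uint8_t> divide_bytes(const std::vector<uint8_t>& num,
                                  const std::vector<uint8_t>& den) {
    mpz_t n, d, q;
    mpz_inits(n, d, q, NULL);
    // order = 1: most significant word first; size = 1: one byte per word; endian = 0, nails = 0
    mpz_import(n, num.size(), 1, 1, 0, 0, num.data());
    mpz_import(d, den.size(), 1, 1, 0, 0, den.data());
    mpz_tdiv_q(q, n, d);                              // truncated quotient n / d
    std::vector<uint8_t> out((mpz_sizeinbase(q, 2) + 7) / 8);
    size_t count = 0;
    mpz_export(out.data(), &count, 1, 1, 0, 0, q);
    out.resize(count ? count : 1);                    // a zero quotient exports zero bytes
    mpz_clears(n, d, q, NULL);
    return out;
}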
Brent and Zimmermann's Modern Computer Arithmetic is a good book on the theory and implementation of big-number arithmetic. Probably worth a read if you want to know what's known.
The standard long division algorithm, which is similar to grade-school long division, is Algorithm D, described in Knuth section 4.3.1. Knuth has an extensive discussion of division in that section of his book. The upshot is that there are methods faster than Algorithm D, but they are not a whole lot faster and they are a lot more complicated.
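For intuition, here is the grade-school method in its simplest possible form, a big-endian byte array divided in place by a single-byte divisor (a sketch; Algorithm D generalizes this to multi-word divisors by estimating each quotient digit and correcting):

#include <cstdint>
#include <vector>

// Divides num (most significant byte first) by d in place and returns the remainder.
uint8_t div_bytes_by_byte(std::vector<uint8_t>& num, uint8_t d) {
    uint32_t rem = 0;
    for (size_t i = 0; i < num.size(); ++i) {
        uint32_t cur = (rem << 8) | num[i];   // "bring down" the next base-256 digit
        num[i] = (uint8_t)(cur / d);          // next quotient digit
        rem = cur % d;
    }
    return (uint8_t)rem;
}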
If you are determined to get the fastest possible algorithm, you can resort to what is known as the SRT algorithm.
All of this and more is, by the way, covered in the Wikipedia article on division algorithms.

Fast primality test with 100% certainty?

I'm using GMP (with MPIR) for arbitrary-size datatypes. I also use its primality test function, which uses the Miller-Rabin method, but it is not 100% accurate. This is what I want to fix.
I was able to confirm that the number 18446744073709551253 is prime by brute force, using the sqrt approach (trial division up to the square root).
Is there any way of checking large numbers being prime or not, with 100% accuracy?
It should not use too much memory/storage space; a few megabytes is acceptable.
It should be faster than the sqrt method I used.
It should work for numbers that are at least 64 bits in size, or larger.
Finally, it should be 100% accurate, no maybes!
What are my options ?
I could live with the brute-force method for 64-bit numbers, but out of interest I want faster and larger. Also, the 64-bit number check was too slow: 43 seconds in total!
For very large numbers, the AKS primality test is a deterministic primality test that runs in time O(log^7.5 n · log log n), where n is the number of interest. This is exponentially faster than the O(√n) algorithm. However, the algorithm has large constant factors, so it's not practical until your numbers get rather large.
Hope this helps!
As a general point, 100% certainty is not possible on a physical computer, since there is a small but finite possibility that some component has failed invisibly and that the answer given at the end is not correct. Given that fact, you can run enough probabilistic Miller-Rabin tests that the probability of the number being composite is far less than the probability that your hardware has failed. It is not difficult to test up to a 1 in 2^256 level of certainty:
boolean isPrime(num)
    limit <- 256
    certainty <- 0
    while (certainty < limit)
        if (millerRabin(num) returns notPrime)
            return false
        else
            certainty <- certainty + 2
        endif
    endwhile
    return true
end isPrime
This tests that the number is prime up to a certainty of 1 in 2^256. Each M-R test adds a factor of four to the certainty (i.e. two bits, hence the +2 above). I have seen the resulting primes called "industrial strength primes": good enough for all practical purposes, but not quite for theoretical mathematical certainty.
For small n, trial division works; the limit there is probably somewhere around 10^12. For somewhat larger n, there are various studies (see works of Gerhard Jaeschke and Zhou Zhang) that calculate the smallest pseudoprime for various collections of Miller-Rabin bases; that will take you to about 10^25. After that, things get hard.
The "big guns" of primality proving are the APRCL method (it may be called Jacobi sums or Gaussian sums) and the ECPP method (based on elliptic curves). Both are complex, so you will want to find an implementation, don't write your own. These methods can both handle numbers of several hundred digits.
The AKS method is proven polynomial time, and is easy to implement, but the constant of proportionality is very high, so it is not useful in practice.
If you can factor n-1, or even partially factor it, Pocklington's method can determine the primality of n. Pocklington's method itself is quick, but the factoring may not be.
For all of these, you want to be reasonably certain that a number is prime before you try to prove it. If your number is not prime, all these methods will correctly determine that, but first they will waste much time trying to prove that a composite number is prime.
I have implementations of AKS and Pocklington at my blog.
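As a concrete illustration of the n-1 idea (a sketch, not the blog's code): in the fully-factored special case, often called the Lucas test, n is proven prime as soon as some base a is shown to have order n-1 modulo n. The sketch uses GMP's C++ interface, since the question already uses GMP, and the caller must supply the complete set of distinct prime factors of n-1.

#include <gmpxx.h>
#include <vector>

// factors must be the complete set of distinct prime factors of n-1, and n must be
// larger than max_a. Returns true only when some base proves n prime; false means
// either "n is composite" or "no proof found among the tried bases".
bool lucas_n_minus_1(const mpz_class& n, const std::vector<mpz_class>& factors,
                     unsigned max_a = 1000) {
    mpz_class nm1 = n - 1;
    for (unsigned ai = 2; ai <= max_a; ++ai) {
        mpz_class a = ai, r;
        mpz_powm(r.get_mpz_t(), a.get_mpz_t(), nm1.get_mpz_t(), n.get_mpz_t());
        if (r != 1) return false;                  // Fermat test fails: n is composite
        bool proves = true;
        for (const mpz_class& q : factors) {
            mpz_class e = nm1 / q;
            mpz_powm(r.get_mpz_t(), a.get_mpz_t(), e.get_mpz_t(), n.get_mpz_t());
            if (r == 1) { proves = false; break; } // a's order divides (n-1)/q; try another a
        }
        if (proves) return true;                   // ord(a) = n-1, so n is prime
    }
    return false;
}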
The method of proving depends on the type of prime number you are trying to prove (for example, Mersenne primes have special methods for proving primality that work only for them) and the size in decimal digits. If you are looking at hundreds of digits, then there is only one solution, albeit an inadequate one: the AKS algorithm. It is provably faster than other primality-proving algorithms for large enough primes, but by the time it becomes useful, it takes so long that it really isn't worth the trouble.
Primality proving for big numbers is still a problem that is not yet sufficiently solved. If it were, the EFF awards would all have been claimed, and cryptography would have some problems, not because of the list of primes, but because of the methods used to find them.
I believe that, in the near future, a new algorithm for proving primality will arise that doesn't depend on a pre-generated list of primes up to the square root of n, and that doesn't do a brute-force method for making sure that all primes (and a lot of non-primes as well) under the square root are used as witnesses to n's primality. This new algorithm will probably depend on math concepts that are much simpler than those used by analytic number theory. There are patterns in the primes, that much is certain. Identifying those patterns is a different matter entirely.

How to optimize solution of nonlinear equations?

I have nonlinear equations such as:
Y = f1(X)
Y = f2(X)
...
Y = fn(X)
In general, they don't have an exact solution, therefore I use Newton's method to solve them. The method is iteration-based, and I'm looking for ways to optimize the calculations.
What are the ways to minimize calculation time? Should I avoid calculating square roots or other math functions?
Maybe I should use assembly inside the C++ code (the solution is written in C++)?
A popular approach for nonlinear least-squares problems is the Levenberg-Marquardt algorithm. It's a blend between Gauss-Newton and a gradient-descent method, combining the best of both worlds (it navigates the search space well for ill-posed problems and converges quickly). But there's lots of wiggle room in terms of the implementation. For example, if the square matrix J^T J (where J is the Jacobian matrix containing all derivatives for all equations) is sparse, you could use the iterative CG algorithm to solve the equation systems quickly instead of a direct method like a Cholesky factorization of J^T J or a QR decomposition of J.
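For illustration, a compact damped Gauss-Newton / Levenberg-Marquardt iteration might look like the sketch below. It uses Eigen for the linear algebra (an assumption; the question doesn't mention any particular library), and residual/jacobian are user-supplied callables. The damping factor lambda is exactly what blends Gauss-Newton (small lambda) with gradient descent (large lambda).

#include <Eigen/Dense>
#include <functional>

Eigen::VectorXd levenberg_marquardt(
    const std::function<Eigen::VectorXd(const Eigen::VectorXd&)>& residual,
    const std::function<Eigen::MatrixXd(const Eigen::VectorXd&)>& jacobian,
    Eigen::VectorXd x, int max_iter = 100) {
    double lambda = 1e-3;                                // damping parameter
    Eigen::VectorXd r = residual(x);
    for (int it = 0; it < max_iter; ++it) {
        Eigen::MatrixXd J = jacobian(x);
        Eigen::MatrixXd JtJ = J.transpose() * J;
        Eigen::VectorXd g = J.transpose() * r;           // gradient of 0.5 * ||r||^2
        Eigen::MatrixXd A = JtJ + lambda * Eigen::MatrixXd::Identity(x.size(), x.size());
        Eigen::VectorXd step = A.ldlt().solve(g);        // direct solve; use CG if JtJ is sparse
        Eigen::VectorXd x_new = x - step;
        Eigen::VectorXd r_new = residual(x_new);
        if (r_new.squaredNorm() < r.squaredNorm()) {     // step improved the fit
            x = x_new; r = r_new; lambda *= 0.5;         // trust the Gauss-Newton model more
        } else {
            lambda *= 2.0;                               // back off toward gradient descent
        }
        if (step.norm() < 1e-12) break;                  // converged
    }
    return x;
}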
But don't just assume that some part is slow and needs to be written in assembler. Assembler is the last thing to consider. Before you go that route you should always use a profiler to check where the bottlenecks are.
Are you talking about a number of single-parameter functions to be solved one at a time, or a system of multi-parameter equations to be solved together?
If the former, then I've often found that finding a better initial approximation (from where the Newton-Raphson loop starts) can save more execution time than polishing the loop itself, because convergence in the loop can be slow initially but is fast later. If you know nothing about the functions, then finding a decent initial approximation is hard, but it might be worth trying a few secant iterations first. You might also want to look at Brent's method.
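As a 1-D illustration of that point (a sketch with made-up names, not from the answer above): spend a few cheap secant steps to get a decent starting point, then let Newton-Raphson finish with its fast quadratic convergence.

#include <cmath>
#include <functional>

double solve(const std::function<double(double)>& f,
             const std::function<double(double)>& df,
             double x0, double x1, double tol = 1e-12) {
    // Phase 1: a handful of secant iterations (cheap, no derivative needed).
    for (int i = 0; i < 4; ++i) {
        double f0 = f(x0), f1 = f(x1);
        if (f1 == f0) break;                       // flat secant step, give up on this phase
        double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);
        x0 = x1; x1 = x2;
    }
    // Phase 2: Newton-Raphson from the improved estimate.
    double x = x1;
    for (int i = 0; i < 50; ++i) {
        double dfx = df(x);
        if (dfx == 0.0) break;                     // derivative vanished, stop
        double step = f(x) / dfx;
        x -= step;
        if (std::fabs(step) < tol) break;          // converged
    }
    return x;
}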
Consider using the Rational Root Test in parallel. If values of absolute precision are not possible, then use the results closest to zero as the best fit from which to continue with Newton's method.
Once a single root is found, you can reduce the degree of the equation by dividing it by the monomial (x - root).
Division and the rational root test are implemented here: https://github.com/ohhmm/openmind/blob/sh/omnn/math/test/Sum_test.cpp#L260
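As a sketch of that deflation step (illustrative only, not the linked code): dividing a polynomial by (x - root) is synthetic division, a single Horner-style pass over the coefficients.

#include <vector>

// coeffs holds the polynomial's coefficients, highest degree first (degree >= 1 assumed).
// Returns the quotient after dividing by (x - root); the remainder,
// coeffs.back() + root * (last carry), is ~0 when root really is a root.
std::vector<double> deflate(const std::vector<double>& coeffs, double root) {
    std::vector<double> quotient;
    quotient.reserve(coeffs.size() - 1);
    double carry = 0.0;
    for (size_t i = 0; i + 1 < coeffs.size(); ++i) {
        carry = coeffs[i] + carry * root;   // Horner step: next quotient coefficient
        quotient.push_back(carry);
    }
    return quotient;
}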