Miller-Rabin test: bug in my code (D)

I've written a Miller-Rabin primality test based on the following pseudo code:
Input: n > 2, an odd integer to be tested for primality;
       k, a parameter that determines the accuracy of the test
Output: composite if n is composite, otherwise probably prime
write n − 1 as 2^s·d with d odd by factoring powers of 2 from n − 1
LOOP: repeat k times:
    pick a randomly in the range [2, n − 1]
    x ← a^d mod n
    if x = 1 or x = n − 1 then do next LOOP
    for r = 1 .. s − 1
        x ← x^2 mod n
        if x = 1 then return composite
        if x = n − 1 then do next LOOP
    return composite
return probably prime
The code I have rarely gets past 31 (if I put it in a loop to test numbers from 2 to 100). There must be something wrong but I can't see what it is.
bool isProbablePrime(ulong n, int k) {
    if (n < 2 || n % 2 == 0)
        return n == 2;

    ulong d = n - 1;
    ulong s = 0;
    while (d % 2 == 0) {
        d /= 2;
        s++;
    }
    assert(2 ^^ s * d == n - 1);

    outer:
    foreach (_; 0 .. k) {
        ulong a = uniform(2, n);
        ulong x = (a ^^ d) % n;
        if (x == 1 || x == n - 1)
            continue;
        foreach (__; 1 .. s) {
            x = (x ^^ 2) % n;
            if (x == 1) return false;
            if (x == n - 1) continue outer;
        }
        return false;
    }
    return true;
}
I've also tried the variant
...
        foreach (__; 1 .. s) {
            x = (x ^^ 2) % n;
            if (x == 1) return false;
            if (x == n - 1) continue outer;
        }
        if (x != n - 1) return false; // this is different
...
I have a different version of the test that works correctly, but it uses modpow. I'd like to have a version that stays closer to the pseudo code that's part of the rosettacode.org task description.
Edit: Re: overflow problem. I suspected something like that. I'm still puzzled why the Ruby version doesn't have that problem. It probably handles it differently under the hood.
If I use BigInt, the code does work, but becomes a lot slower than when I use modpow. So I guess I can't get away from that. It's a pity Phobos doesn't have a modpow built-in, or I must have overlooked it.
ulong x = ((BigInt(a) ^^ d) % BigInt(n)).toLong();

In this statement
ulong x = (a ^^ d) % n;
the quantity (a ^^ d) is probably overflowing before the mod operation can take place. The modpow version wouldn't suffer from this problem, since that algorithm avoids the need for arbitrarily large intermediate values.
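For reference, the standard way around this is modular exponentiation that reduces after every multiplication, so no intermediate ever exceeds mod squared. Below is a minimal C++ sketch of the square-and-multiply idea, purely as an illustration (it is not the poster's modpow routine, and it assumes a compiler that provides unsigned __int128, e.g. GCC or Clang):

#include <cstdint>

// Computes (base^exp) % mod without ever forming base^exp.
// The unsigned __int128 casts keep the product of two 64-bit residues from overflowing.
uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)                                      // current exponent bit is set
            result = (unsigned __int128)result * base % mod;
        base = (unsigned __int128)base * base % mod;      // square for the next bit
        exp >>= 1;
    }
    return result;
}

Replacing the direct power-then-mod with a helper like this keeps every intermediate below n^2, which is why the modpow-based version works where (a ^^ d) overflows.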

Related

modulo formula in C++

I have this formula:
(n - 1)! ((n (n - 1))/2 + ((n - 1) (n - 2))/4)
2<=n<=100000
I would like to reduce the result of this formula modulo an arbitrary modulus, but for the moment let's assume it is constant, MOD = 999999997. Unfortunately I can't just calculate the result and then take the modulus, because I don't have variables larger than 2^64 at my disposal. So the main question is: which factors do I reduce by MOD to get the result % MOD?
Now let's assume that n=19. The part in brackets is equal to 247.5.
18! = 6402373705728000.
(6402373705728000 * 247.5) mod 999999997 = 921442488.
Unfortunately, if I reduce 18! first, the result will be wrong, because (18!) mod 999999997 = 724935119, and (724935119 * 247.5) mod 999999997 = 421442490.
How to solve this problem?
I think the sum can be broken down. The only tricky part here is that (n - 1)(n - 2)/4 may have a .5 fractional part, whereas n(n - 1)/2 is always an integer.
S = (n - 1)! * ((n (n - 1))/2 + ((n - 1) (n - 2))/4)
= [(n-1)! * (n*(n-1)/2)] + [(n-1)! * (n-1)(n-2)/4]
= A + B
A is easy to handle. For B, if (n-1)(n-2) % 4 == 0 then there's nothing special to do either; otherwise you can simplify it to X/2, since (n-1)(n-2) is always divisible by 2.
If n = 2 it's trivial; if n > 2 there is always a factor 2 in the product (n-1)! = 1 x 2 x 3 x ... x (n-1). In that case, simply calculate (n-1)!/2 = 1 x 3 x 4 x 5 x ... x (n-1).
Late example:
N = 19
MOD = 999999997
--> 18! % MOD = 724935119 (1)
(18!/2) % MOD = 862467558 (2)
n(n-1)/2 = 171 (3)
(n-1)(n-2)/2 = 153 (4)
--> S = (1)*(3) + (2)*(4) = 255921441723
S % MOD = 921442488
On another note, if mod is some prime number, like 1e9+7, you can just apply Fermat's little theorem to calculate multiplicative inverse as such:
(a/b) % P = [(a%P) * ((b^(P-2)) % P)] % P (with P prime and b not divisible by P)
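For illustration, here is a small self-contained C++ sketch of that inverse trick (power_mod and div_mod are made-up helper names; it assumes a 64-bit prime modulus and a compiler with unsigned __int128):

#include <cstdint>

// b^e % p by square-and-multiply; unsigned __int128 keeps the products from overflowing.
uint64_t power_mod(uint64_t b, uint64_t e, uint64_t p) {
    uint64_t r = 1 % p;
    b %= p;
    while (e > 0) {
        if (e & 1) r = (unsigned __int128)r * b % p;
        b = (unsigned __int128)b * b % p;
        e >>= 1;
    }
    return r;
}

// Fermat's little theorem: for prime p and b not divisible by p,
// b^(p-2) is the modular inverse of b, so (a/b) % p == a * b^(p-2) % p.
uint64_t div_mod(uint64_t a, uint64_t b, uint64_t p) {
    return (unsigned __int128)(a % p) * power_mod(b, p - 2, p) % p;
}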
You will have to use 2 mathematical formulas here:
(a + b) mod c == (a mod c + b mod c) mod c
and
(a * b) mod c == (a mod c * b mod c) mod c
But those are only valid for integers. The nice part here is that the formula always evaluates to an integer for n >= 2, provided you compute it as:
(((n - 1)! * n * (n - 1))/2) + (((n - 1)! * (n - 1) * (n - 2))/4)
(the 1st part is an integer, and so is the 2nd)
for n == 2, the first part boils down to 1 and the second is 0
for n > 2, either n or n-1 is even, so the first part is an integer; likewise either n-1 or n-2 is even and (n-1)! is also even, so the second part is an integer. Since your formula can be rewritten to use only additions and multiplications, it can be computed under the modulus.
Here is a possible C++ implementation (beware: unsigned long long is required):
#include <iostream>

template<class T>
class Modop {
    T mod;
public:
    Modop(T mod) : mod(mod) {}
    T add(T a, T b) {
        return ((a % mod) + (b % mod)) % mod;
    }
    T mul(T a, T b) {
        return ((a % mod) * (b % mod)) % mod;
    }
    // (n! / 2) % mod, i.e. the product 3 * 4 * ... * n (1 when n < 3)
    T fact_2(T n) {
        T cr = 1;
        for (T i = 3; i <= n; ++i) {
            cr = mul(cr, i);
        }
        return cr;
    }
};

template<class T>
T formula(T n, T mod) {
    Modop<T> op = mod;
    if (n == 2) {
        return 1;
    }
    T second, first = op.mul(op.fact_2(n - 1), op.mul(n, n - 1));
    if (n % 2 == 0) {
        second = op.mul(op.fact_2(n - 1), op.mul((n - 2) / 2, n - 1));
    }
    else {
        second = op.mul(op.fact_2(n - 1), op.mul(n - 2, (n - 1) / 2));
    }
    return op.add(first, second);
}

int main() {
    std::cout << formula(19ull, 999999997ull) << std::endl;
    return 0;
}
First of all, for n = 2 we can say that the result is 1.
Then, the expression is equal to: (n*(n-1)*(n-1)!)/2 + (((n-1)*(n-2)/2)^2)*(n-3)!.
Lemma: for every two consecutive integers, one of them is even.
By the lemma, n*(n-1) is even and (n-1)*(n-2) is also even, so we know that the answer is an integer.
First we calculate (n*(n-1)*(n-1)!)/2 modulo MOD. We can calculate n*(n-1)/2, which fits in a long long variable x, and reduce it modulo MOD:
x = (n*(n-1))/2;
x %= MOD;
After that, for i from n-1 down to 1 we do:
x = (x*i)%MOD;
Both x and i are less than MOD, so the result of the multiplication fits in a long long variable.
Likewise we do the same for (((n-1)(n-2)/2)^2)*(n-3)!.
We calculate (n-1)*(n-2)/2, which fits in a long long variable y, and reduce it modulo MOD:
y = ((n-1)*(n-2))/2;
y %= MOD;
After that we replace y with (y*y)%MOD, because y is less than MOD and y*y fits in a long long variable:
y = (y*y)%MOD;
Then, as before, for i from n-3 down to 1 we do:
y = (y*i)%MOD;
And finally the answer is (x+y)%MOD
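As a sketch, the whole procedure could look like this in C++ (formula_mod is a hypothetical helper name; every factor stays below MOD, so 64-bit products never overflow for the stated limits):

#include <cstdint>
#include <iostream>

// (n-1)! * (n(n-1)/2 + (n-1)(n-2)/4) mod MOD, computed as x + y per the steps above.
uint64_t formula_mod(uint64_t n, uint64_t MOD) {
    if (n == 2) return 1 % MOD;
    // x = (n(n-1)/2) * (n-1)!  as a running product mod MOD
    uint64_t x = (n * (n - 1) / 2) % MOD;
    for (uint64_t i = n - 1; i >= 1; --i)
        x = x * i % MOD;
    // y = ((n-1)(n-2)/2)^2 * (n-3)!  mod MOD   (n >= 3 here)
    uint64_t y = ((n - 1) * (n - 2) / 2) % MOD;
    y = y * y % MOD;
    for (uint64_t i = n - 3; i >= 1; --i)
        y = y * i % MOD;
    return (x + y) % MOD;
}

int main() {
    std::cout << formula_mod(19, 999999997) << '\n';  // 921442488, matching the worked example above
    return 0;
}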

Finding roots but not asymptotes of a function

I'm writing a program to numerically find the roots of functions with irrational roots using various methods.
For methods such as linear interpolation, you need to find the approximate range in which a root lies; for this I wrote the following code:
bool fxn1 = false;
bool fxn2 = false;
vector<float> root_list;

if (f_x(-100) < 0)
{
    fxn2 = true;
}

for (float i = -99.99; i < 100.01; i += 0.01)
{
    fxn1 = fxn2;
    if (f_x(i) < 0)
    {
        fxn2 = true;
    }
    else
    {
        fxn2 = false;
    }
    if ((fxn1 == false && fxn2 == true) || (fxn1 == true && fxn2 == false))
    {
        root_list.push_back(i - 0.01);
        root_list.push_back(i);
    }
}
However, for non-continuous functions (i.e. functions with asymptotes), this code will also trigger when the function swaps from positive to negative values on either side of an asymptote.
Is there a way to get the program to tell the difference between a root and an asymptote?
Thanks in advance
If the function, f(x), is converging on a point inside [a,b] then the half-way point (a + b) / 2 should be closer to zero than a or b.
This observation leads to the following procedure:
Let mid = (a + b) / 2
If |f(mid)| < |f(a)| AND |f(mid)| < |f(b)| Then
Algorithm has converged to a root
Else
Algorithm has converged to an asymptote
End
In this pseudo code |.| denotes floating-point absolute value.
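A minimal C++ version of that check, assuming f_x is the function being scanned and [a, b] is one of the bracketing intervals collected above:

#include <cmath>

// True if the sign change on [a, b] looks like a root rather than an asymptote:
// near a root |f| shrinks toward the midpoint, near an asymptote it blows up.
bool looks_like_root(double (*f_x)(double), double a, double b) {
    double mid = (a + b) / 2.0;
    return std::fabs(f_x(mid)) < std::fabs(f_x(a)) &&
           std::fabs(f_x(mid)) < std::fabs(f_x(b));
}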
Numerically finding a root only makes sense if the function has nice properties, and is at least continuous. What would you think about this one:
f: x -> f(x) defined by:
2 * i < x < 2 * i + 1 (i element of Z) : f(x) = x
2 * i - 1 < x < 2 * i (i element of Z) : f(x) = -x
x = i (i element of Z) : f(x) = 1
It is perfectly defined on R, is bounded on any bounded interval, has positive and negative values on any interval of size > 1, and is continuous on any non integer point, but it has no root.
It is simply because the rule that a root must exist in the open interval ]x, y[ if f(x) < 0 < f(y) or f(y) < 0 < f(x) only applies if the function is continuous on the interval.
And good luck if you want to numerically test for continuity of a function...

Cut rectangle in minimum number of squares

I'm trying to solve the following problem:
A rectangular paper sheet of M*N is to be cut down into squares such that:
The paper is cut along a line that is parallel to one of the sides of the paper.
The paper is cut such that the resultant dimensions are always integers.
The process stops when the paper can't be cut any further.
What is the minimum number of paper pieces cut such that all are squares?
Limits: 1 <= N <= 100 and 1 <= M <= 100.
Example: Let N=1 and M=2, then answer is 2 as the minimum number of squares that can be cut is 2 (the paper is cut horizontally along the smaller side in the middle).
My code:
cin >> n >> m;
int N = min(n, m);
int M = max(n, m);
int ans = 0;
while (N != M) {
    ans++;
    int x = M - N;
    int y = N;
    M = max(x, y);
    N = min(x, y);
}
if (N == M && M != 0)
    ans++;
But I can't see what's wrong with this approach, as it's giving me a wrong answer.
I think both the DP and greedy solutions are not optimal. Here is the counterexample for the DP solution:
Consider the rectangle of size 13 X 11. DP solution gives 8 as the answer. But the optimal solution has only 6 squares.
This thread has many counter examples: https://mathoverflow.net/questions/116382/tiling-a-rectangle-with-the-smallest-number-of-squares
Also, have a look at this for correct solution: http://int-e.eu/~bf3/squares/
I'd write this as a dynamic (recursive) program.
Write a function which tries to split the rectangle at some position. Call the function recursively for both parts. Try all possible splits and take the one with the minimum result.
The base case would be when both sides are equal, i.e. the input is already a square, in which case the result is 1.
function min_squares(m, n):
    // base case:
    if m == n: return 1
    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }
    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }
    return min { min_hor, min_ver }
To improve performance, you can cache the recursive results:
function min_squares(m, n):
    // base case:
    if m == n: return 1
    // check if we already cached this
    if cache contains (m, n):
        return cache(m, n)
    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }
    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }
    // put in cache and return
    result := min { min_hor, min_ver }
    cache(m, n) := result
    return result
In a concrete C++ implementation, you could use int cache[101][101] for the cache data structure, since your input size is limited to 100 per side. Put it as a static local variable, so it will automatically be initialized with zeroes. Then interpret 0 as "not cached" (as it can't be the result of any inputs).
Possible C++ implementation: http://ideone.com/HbiFOH
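In case the link goes stale, here is a self-contained sketch along those lines (not the code behind the link; min_squares is just an illustrative name, and the array is sized 101x101 so indices up to 100 are valid):

#include <algorithm>

// Minimum number of squares an m x n rectangle can be cut into with straight
// guillotine cuts, memoized as described above. Note: as pointed out elsewhere
// in this thread, this DP is not optimal for every input (e.g. 13 x 11).
int min_squares(int m, int n) {
    static int cache[101][101];          // zero-initialized; 0 means "not computed"
    if (m == n) return 1;
    int &c = cache[m][n];
    if (c != 0) return c;
    int best = m * n;                    // worst case: all 1x1 squares
    for (int i = 1; i <= n / 2; ++i)     // vertical cuts
        best = std::min(best, min_squares(m, i) + min_squares(m, n - i));
    for (int i = 1; i <= m / 2; ++i)     // horizontal cuts
        best = std::min(best, min_squares(i, n) + min_squares(m - i, n));
    return c = best;
}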
The greedy algorithm is not optimal. On a 6x5 rectangle, it uses a 5x5 square and 5 1x1 squares. The optimal solution uses 2 3x3 squares and 3 2x2 squares.
To get an optimal solution, use dynamic programming. The brute-force recursive solution tries all possible horizontal and vertical first cuts, recursively cutting the two pieces optimally. By caching (memoizing) the value of the function for each input, we get a polynomial-time dynamic program (O(m n max(m, n))).
This problem can be solved using dynamic programming.
Assume we have a rectangle with width N and height M.
If N == M, it is a square and nothing needs to be done.
Otherwise, we can divide the rectangle into two smaller ones, (N - x, M) and (x, M), so it can be solved recursively.
Similarly, we can also divide it into (N, M - x) and (N, x).
Pseudo code:
int[][] dp;
boolean[][] check;

int cutNeeded(int n, int m)
    if (n == m)
        return 1;
    if (check[n][m])
        return dp[n][m];
    check[n][m] = true;
    int result = n*m;
    for (int i = 1; i <= n/2; i++)
        int tmp = cutNeeded(n - i, m) + cutNeeded(i, m);
        result = min(tmp, result);
    for (int i = 1; i <= m/2; i++)
        int tmp = cutNeeded(n, m - i) + cutNeeded(n, i);
        result = min(tmp, result);
    return dp[n][m] = result;
Here is a greedy implementation. As @David mentioned, it is not optimal and is completely wrong in some cases, so the dynamic approach (with caching) is the best.
def greedy(m, n):
    if m == n:
        return 1
    if m < n:
        m, n = n, m
    cuts = 0
    while n:
        cuts += m / n
        m, n = n, m % n
    return cuts

print greedy(2, 7)
Here is a DP attempt in Python:
import sys

def cache(f):
    db = {}
    def wrap(*args):
        key = str(args)
        if key not in db:
            db[key] = f(*args)
        return db[key]
    return wrap

@cache
def squares(m, n):
    if m == n:
        return 1
    xcuts = sys.maxint
    ycuts = sys.maxint
    x, y = 1, 1
    while x * 2 <= n:
        xcuts = min(xcuts, squares(m, x) + squares(m, n - x))
        x += 1
    while y * 2 <= m:
        ycuts = min(ycuts, squares(y, n) + squares(m - y, n))
        y += 1
    return min(xcuts, ycuts)
This is essentially the classic integer or 0-1 knapsack problem, which can be solved using a greedy or dynamic programming approach. You may refer to: Solving the Integer Knapsack

C++ variables always coming out as zero

I'm running a simple for loop with some if statements. In this for loop, 3 variables are to be given a value depending on the loop index. It seems fairly simple; however, when I run the code, the values always come out as zero and I have no idea why this is happening. My for loop is provided below. I appreciate any suggestions.
double A[N+1];
double r;
double s;
double v;

for (int i = 2; i < N+1; i++)
{
    if (i == 2)
    {
        r = 1/2/i/(i-1);
        s = -1/2/(i*i - 1);
        v = 1/4/i/(i+1);
    }
    else if (i <= N-2 && i > 2)
    {
        r = 1/4/i/(i-1);
        s = -1/2/(i*i - 1);
        v = 1/4/i/(i+1);
    }
    else if (i <= N-4 && i > N-2)
    {
        r = 1/4/i/(i-1);
        s = 0;
        v = 1/4/i/(i+1);
    }
    else
    {
        r = 1/4/i/(i-1);
        s = 0;
        v = 0;
    }
    A[i] = r*F[i-2] + s*F[i] + v*F[i+2];
    cout << r << s << v << endl;
}
It’s happening because you’re using integer division. An example:
r = 1/2/i/(i-1);
This is the same as:
r = ((1 / 2) / i) / (i - 1);
Which is the same as:
r = (0 / i) / (i - 1);
… which is the same as:
r = 0 / (i - 1);
… which is 0.
Because 1 / 2 is 0 in integer arithmetic. To fix this, use floating point values.
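For example, the first branch of the loop could be written so the whole chain of divisions happens in floating point (just a sketch of the fix, keeping the original grouping):

// Starting with 1.0 makes every subsequent division a double division,
// so nothing is truncated to zero.
r = 1.0 / 2 / i / (i - 1);
s = -1.0 / 2 / (i * i - 1);
v = 1.0 / 4 / i / (i + 1);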
Three things:
else if(i <= N-4 && i > N-2) makes no sense, that condition cannot hold
all your divisions are integer divisions - to fix, convert one of the numbers to a double.
as a result of 1, when i = N-1 and i = N, the last branch is taken, where you force two variables to 0 anyway!
1, 2 and 4 are integers. In integerland 1/2 = 0 and 1/4 = 0
With integers, 1/2 is zero. I would suggest (for a start) changing constants like 2 into 2.0 to ensure they're treated as doubles.
You may also want to (though it may not be necessary) cast all your i variables to floating point values as well, just for completeness, such as:
r = 1.0 / 2.0 / (double)i / ((double)i - 1.0);
The fact that r is a double in no way affects the calculations done on the right of the =. It only affects the final bit (the actual assignment).
1/2, 1/4 and -1/2 will always be zero because of integer division. So try 1.0/2.0, 1.0/4.0 and -1.0/2.0 to get it sorted out quickly. But follow the basics and do not use so many magic numbers inside the code. Consider creating constants for them and using those instead.

alternative to if statement in c++

I am working on an algorithm and want to make my code more efficient. My code uses simple arithmetic and comparison statements. However, I want to replace the if statements, as they could be time consuming. This code will be run over a million times, so even the slightest improvement is appreciated. Please answer! Here is the code:
int_1024 sqcalc(int_1024 s, int_1024 f){
    f = f * 20;
    s = s - 81;
    s = s - (f * 9);
    if (s >= 0){
        return 9;
    }
    s = s + f;
    s = s + 17;
    if (s >= 0){
        return 8;
    }
    s = s + f;
    s = s + 15;
    if (s >= 0){
        return 7;
    }
    s = s + f;
    s = s + 13;
    if (s >= 0){
        return 6;
    }
    s = s + f;
    s = s + 11;
    if (s >= 0){
        return 5;
    }
    s = s + f;
    s = s + 9;
    if (s >= 0){
        return 4;
    }
    s = s + f;
    s = s + 7;
    if (s >= 0){
        return 3;
    }
    s = s + f;
    s = s + 5;
    if (s >= 0){
        return 2;
    }
    s = s + f;
    s = s + 3;
    if (s >= 0){
        return 1;
    }
    s = s + f;
    s = s + 1;
    if (s >= 0){
        return 0;
    }
}
I wish to replace the if checks, since I 'think' they make the algorithm slow. Any suggestions? int_1024 is a ttmath variable with thousands of bits, so saving on it might be a good option? Division or multiplication for such a large number might be slow, so I tried using addition, but to no avail. Help please.
I don't know if it is any faster, but it is considerably shorter (and easier to analyze).
int k[] = { 17, 15, 13, 11, 9, 7, 5, 3, 1 };
int r = 0;

f *= 20;
s -= 81;
s -= f * 9;

while (s < 0) {
    s += f;
    s += k[r];
    if (++r == 9) break;
}

if (s >= 0) return 9 - r;
Edit:
In fact, the original poster came up with a clever way to optimize this loop by pre-computing the sums of the constants in the k array and comparing s against those sums, rather than incrementally adding them to s.
Edit:
I followed moonshadow's analysis technique, but arrived at a different equation. Original TeX formatting replaced with ASCII art (I tried to get MathJax to render the TeX for me, but it wasn't working):
S[0] = s                      ; if S[0] >= 0, the result is 9 - 0
S[1] = S[0] + f + 19 - 2*1    ; if S[1] >= 0, the result is 9 - 1
S[2] = S[1] + f + 19 - 2*2    ; if S[2] >= 0, the result is 9 - 2
...
S[i] = S[i-1] + f + 19 - 2*i  ; if S[i] >= 0, the result is 9 - i
So to calculate S[n]:
S[n] = S[n-1] + f + 19 - 2*n
     = s + sum over i = 1..n of (f + 19 - 2*i)
     = s + n*(f + 19) - 2 * (sum over i = 1..n of i)
     = s + n*(f + 19) - n*(n + 1)
     = s + n*(f + 18) - n^2
So the inequality S[n] >= 0 is a quadratic in n. Assuming s < 0, we want n to be the ceiling of the smaller root of that quadratic:
n = ceil( (f + 18 - sqrt((f + 18)^2 + 4*s)) / 2 )
So the routine would look something like:
f *= 180;
s -= 81;
s -= f;
if (s >= 0) return 9;
f /= 9;
f += 18;
s *= 4;
int1024_t ff = f;
ff *= f;
ff += s;
ff = ff.Sqrt();
f -= ff;
f += f.Mod2();
return 9 - f/2;
However, I am not sure the expense of performing these operations on your big integer objects is worth implementing to replace the simple loop shown above. (Unless you expect to extend the function and would require a much longer loop.)
To be faster than the loop, the big integer square root implementation would have to always converge within 4 iterations to beat the average expected 4.5 iterations of the existing while loop. However, the ttmath implementation does not seem to calculate an integer square root. It seems to calculate a floating point square root and then round the result, which I would guess would be much slower than the loop.
First of all, I note that if the condition of the final if() is false, the return value is undefined. You probably want to fix that.
Now, the function starts with
f = f * 20;
s = s - 81;
s = s - (f * 9);
if (s >= 0){
    return 9;
}
and the rest looks incredibly repetitive. Let's see if we can use that repetition. Let's build a table of inequalities - values of s vs the eventual result:
s + (f+17) >= 0: 8
s + (f+17) + (f+15) >= 0: 7
s + (f+17) + (f+15) + (f+13) >= 0: 6
.
.
s + (f+17) + (f+15) + (f+13) + ... + (f+1) >= 0: 0
So, each line tests to see if s + some multiple of f + some constant is greater than 0. The value returned, the constant and the multiple of f all look related. Let's try expressing the relationship:
(s + ((9-n)*f) + (2*n)-1 >= 0)
Let's rearrange that so n is on one side.
(s + (9*f) - (n*f) + (2*n)-1 >= 0)
(s + (9*f) +1 >= (n*f) - (2*n))
(s + (9*f) +1 >= n*(f - 2))
n <= ((s + (9*f) +1) / (f - 2)
Now, the function has a range of return values for different inputs. In fact, we are interested in values of n in the range 0..8: the function supplied is undefined for inputs that would yield n < 0 (see above). The preamble ensures we never see inputs that would yield n > 8.
So we can just say
int_1024 sqcalc(int_1024 s, int_1024 f){
    f = f * 20;
    s = s - 81;
    s = s - (f * 9);
    if (s >= 0){
        return 9;
    }
    return (s + (9*f) + 1) / (f - 2);
}
and for all cases where the result is not undefined, the behaviour should be the same as the old version, without needing tons of conditionals or a loop.
Demonstration of accuracy is at http://ideone.com/UzMZs.
According to the OP's comment, the function is trying to find all values that satisfy the inequality:
N * ((20 * F) + N) <= S
For all N, given an F and S.
Using algebra, this comes out to:
1) N^2 + 20*F*N - S <= 0 (where N^2 is N*N or sqr(N))
The OP should plug in constants for F and S and solve for N algebraically, or search the web for "C++ find root quadratic equation".
Once a function is selected, profile the function and optimize if necessary.
I tried solving the quadratic, and it makes the function slower for larger digits. Following the answer by @user315052, I made this code:
int_1024 sqcalc(int_1024 s, int_1024 f){
    int k[] = { 0, 17, 32, 45, 56, 65, 72, 77, 80, 81 };
    f = f * 20;
    s = ((f * 9) + 81) - s;
    int i = 0;
    while (s > k[i]){
        s -= f;
        i++;
    }
    return 9 - i;
}
In this code, instead of subtracting a number and then comparing with zero, I directly compare against the number. By far, this produces the fastest results. I could do binary search though....
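For what it's worth, here is a sketch of that binary-search idea: since f > 0 after f = f*20, the quantity k[i] + i*f is strictly increasing in i, so the index where the linear loop stops (the smallest i with s <= k[i] + i*f) can be found by bisection. Shown with long long instead of int_1024 just to illustrate the shape; with only ten entries the gain over the plain loop is likely marginal.

// Hypothetical binary-search variant of the loop above.
long long sqcalc_bs(long long s, long long f) {
    static const long long k[] = { 0, 17, 32, 45, 56, 65, 72, 77, 80, 81 };
    f = f * 20;
    s = ((f * 9) + 81) - s;
    int lo = 0, hi = 9;                   // the answer index lies in [0, 9]
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (s <= k[mid] + mid * f)
            hi = mid;                     // mid already satisfies the stop condition
        else
            lo = mid + 1;                 // stop index lies further right
    }
    return 9 - lo;
}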