Solve set of modular equations in C++

I am working on the Quadratic Sieve algorithm in C++. After the Gaussian elimination step I need to solve a set of modular equations such as, for example:
(1) b + c = 0 (mod 2)
(2) a + c = 0 (mod 2)
Here the symbol = is used to mean "is congruent to". I am processing the matrix as shown here: https://math.stackexchange.com/questions/289348/matrix-processing-in-the-quadratic-sieve?rq=1. If anyone has any ideas how to implement a function that solves these equations, I would appreciate it.

You can rewrite this system in matrix notation:

M . X = S

|0 1 1|   |a|   |0|
|1 0 1| . |b| = |0|
          |c|

Then you solve it as usual using Gaussian elimination. The small difference is that you only work with the values 0 and 1, and that subtracting a row is the same as adding it (in Z/2Z, -a = a).
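For concreteness, here is a minimal sketch of this Gaussian elimination over GF(2) on an augmented matrix (the layout, the name solveMod2 and the dense vector<int> rows are illustrative choices, not from the original post; a real Quadratic Sieve would use bitsets and enumerate null-space vectors, e.g. by setting free variables to 1):

#include <cstdio>
#include <utility>
#include <vector>

// Gaussian elimination over GF(2) on the augmented matrix [M | S].
// Rows hold 0/1 entries; the last column is the right-hand side.
// Returns false if the system is inconsistent; otherwise fills x
// with one solution (all free variables set to 0).
bool solveMod2(std::vector<std::vector<int>> aug, std::vector<int>& x) {
    const int rows = aug.size();
    const int cols = aug[0].size() - 1;       // number of unknowns
    std::vector<int> pivotRow(cols, -1);
    int r = 0;
    for (int c = 0; c < cols && r < rows; ++c) {
        int sel = -1;                         // row with a 1 in column c
        for (int i = r; i < rows; ++i)
            if (aug[i][c]) { sel = i; break; }
        if (sel < 0) continue;                // column c is a free variable
        std::swap(aug[sel], aug[r]);
        // Over GF(2), subtracting a row is the same as XOR-ing it in.
        for (int i = 0; i < rows; ++i)
            if (i != r && aug[i][c])
                for (int j = c; j <= cols; ++j)
                    aug[i][j] ^= aug[r][j];
        pivotRow[c] = r++;
    }
    for (int i = r; i < rows; ++i)            // a 0 = 1 row: inconsistent
        if (aug[i][cols]) return false;
    x.assign(cols, 0);
    for (int c = 0; c < cols; ++c)
        if (pivotRow[c] >= 0) x[c] = aug[pivotRow[c]][cols];
    return true;
}

int main() {
    // The example system: b + c = 0 (mod 2), a + c = 0 (mod 2),
    // unknowns ordered (a, b, c); last column is the right-hand side.
    std::vector<std::vector<int>> aug = {{0, 1, 1, 0},
                                         {1, 0, 1, 0}};
    std::vector<int> x;
    if (solveMod2(aug, x))                    // prints the trivial solution;
        std::printf("a=%d b=%d c=%d\n",       // setting the free variable c
                    x[0], x[1], x[2]);        // to 1 gives a = b = c = 1
}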

Related

3D Reconstruction: Solving Equations for 3D Points from Uncalibrated Images

This is a pretty straightforward question (I hope). The following is from 3D Reconstruction from Multiple Images, Moons et al. (Fig 2-13, p. 348):
Projective 3D reconstruction from two uncalibrated images
Given: A set of point correspondences m1 in I1 and m2 in I2 between two uncalibrated images I1 and I2 of a static scene.
Aim: A projective 3D reconstruction ^M of the scene.
Algorithm:
(1) Compute an estimate ^F for the fundamental matrix.
(2) Compute the epipole e2 from ^F.
(3) Compute the 3x3 matrix
^A = -(1/||e2||^2) [e2]x ^F
(4) For each pair of corresponding image points m1 and m2, solve the following system of linear equations for ^M:
^p1 m1 = ^M and ^p2 m2 = ^A ^M + e2
(^p1 and ^p2 are non-zero scalars)
[I apologize for the formatting. I don't know how to put hats over characters.]
I'm pretty much OK up until step 4. But it's been 30+ years since my last linear algebra class, and even then I'm not sure I knew how to solve something like this. Any help or references would be greatly appreciated.
By the way, this is sort of a follow-on to another post of mine:
Detecting/correcting Photo Warping via Point Correspondences
This is just another way to try to solve the problem.
Given a pair of matching image points m1 and m2, the two corresponding rays from the optical centers are unlikely to intersect perfectly due to noise in the measurements. Consequently a solution to the given system should instead be found in the (linear) least-squares sense, i.e. find x = argmin_x | C x - d |^2 with (for instance):
        | I  -m1    0 |   | M  |
C x  =  |             | . | p1 |
        | A    0  -m2 |   | p2 |

and

    |  0  |
d = |     |
    | -e2 |

where I is the 3x3 identity matrix, each 0 denotes a zero block, and both blocks of d are 3-vectors.
The problem has 5 unknowns for 6 equations.
A possible alternative formulation exploits the fact that m1 and m2 are collinear with M so m1 x M = 0 and m2 x (A M + e2) = 0 yielding the linear least squares problem x = argmin_x | C x - d |^2 with:
    | [m1]x   |
C = |         |
    | [m2]x A |

and

    |    0     |
d = |          |
    | -m2 x e2 |
where [v]x is the 3 x 3 matrix of the cross product with v. The problem has 3 unknowns for 6 equations, which can be reduced to 4 by keeping only the linearly independent ones.
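As an illustration of this second formulation, here is a sketch using Eigen (the point values, A and e2 are placeholders standing in for the quantities of steps 1-3; crossMatrix is a hypothetical helper, and the SVD-based solve is just one reasonable least-squares solver):

#include <Eigen/Dense>
#include <iostream>

// Cross-product matrix [v]x, so that crossMatrix(v) * w == v.cross(w).
Eigen::Matrix3d crossMatrix(const Eigen::Vector3d& v) {
    Eigen::Matrix3d m;
    m <<      0, -v.z(),  v.y(),
          v.z(),      0, -v.x(),
         -v.y(),  v.x(),      0;
    return m;
}

int main() {
    // Placeholder inputs: homogeneous image points m1 and m2, the
    // matrix ^A and the epipole e2.
    Eigen::Vector3d m1(0.10, 0.20, 1.0), m2(0.15, 0.18, 1.0);
    Eigen::Matrix3d A = Eigen::Matrix3d::Identity();
    Eigen::Vector3d e2(0.01, 0.02, 1.0);

    // Stack the 6x3 system C * M = d built from the constraints
    // m1 x M = 0 and m2 x (A M + e2) = 0.
    Eigen::MatrixXd C(6, 3);
    C.topRows(3)    = crossMatrix(m1);
    C.bottomRows(3) = crossMatrix(m2) * A;
    Eigen::VectorXd d(6);
    d.head(3).setZero();
    d.tail(3) = -m2.cross(e2);

    // Least-squares solution via SVD; colPivHouseholderQr would also work.
    Eigen::Vector3d M =
        C.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(d);
    std::cout << "Reconstructed point M:\n" << M << "\n";
}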

Eigen use of diagonal matrix

Using Eigen, I have a Matrix3Xd (3 rows, n columns). I would like to get the squared norm of each column.
To be clearer, let's say I have
Matrix3Xd a =
1 3 2 1
2 1 1 4
I would like to get the squared norm of each column
squaredNorms =
5 10 5 17
I wanted to take advantage of matrix computation instead of going through a for loop doing the computation myself.
What I thought of was
squaredNorms = (A.transpose() * A).diagonal()
This works, but I am afraid of performance issues: A.transpose() * A will be an n x n matrix (potentially millions of elements), when I only need the diagonal.
Is Eigen clever enough to compute only the coefficients I need?
What would be the most efficient way to achieve squareNorm computation on each column?
The case of (A.transpose() * A).diagonal() is explicitly handled by Eigen to enforce lazy evaluation of the product expression nested in a diagonal-view. Therefore, only the n required diagonal coefficients will be computed.
That said, it's simpler to call A.colwise().squaredNorm(), as Eric noted.
This will do what you want.
squaredNorms = A.colwise().squaredNorm();
https://eigen.tuxfamily.org/dox/group__QuickRefPage.html
Eigen provides several reduction methods such as: minCoeff() , maxCoeff() , sum() , prod() , trace() *, norm() *, squaredNorm() *, all() , and any() . All reduction operations can be done matrix-wise, column-wise or row-wise .
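A complete toy program, assuming Eigen 3 (the zero third row just pads the 2-row example above so it fits Matrix3Xd):

#include <Eigen/Dense>
#include <iostream>

int main() {
    // 3 x 4 matrix of column vectors.
    Eigen::Matrix3Xd A(3, 4);
    A << 1, 3, 2, 1,
         2, 1, 1, 4,
         0, 0, 0, 0;

    // One squared norm per column, computed in a single pass.
    Eigen::RowVectorXd squaredNorms = A.colwise().squaredNorm();
    std::cout << squaredNorms << "\n";   // prints: 5 10 5 17
}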

What does lu_factorize return?

boost::numeric::ublas contains the function M::size_type lu_factorize(M& m). Its name suggests that it performs the LU decomposition of a given matrix m, i.e. it should produce two matrices L and U such that m = L*U. There seems to be no documentation for this function.
It is easy to deduce that it returns 0 to indicate successful decomposition, and a non-zero value when the matrix is singular. However, it is completely unclear where the result is. Taking the matrix by reference suggests that it works in place, yet it should produce two matrices (L and U), not one. So what does it do?
There is no documentation in Boost, but looking at the documentation of SciPy's lu_factor one can see that it's not uncommon to return a single result for the LU decomposition.
This is enough because, in the typical approach to LU decomposition, L's diagonal consists of ones only, as presented in this answer from Mathematics, for example.
So it is possible to fit both L and U into one matrix, putting L in the result's lower part, omitting the diagonal (which is assumed to contain only ones), and U in the upper part. For example, for a 3x3 problem the result is:
u11 u12 u13
m = l21 u22 u23
l31 l32 u33
which implies:
1 0 0
L = l21 1 0
l31 l32 1
and
u11 u12 u13
U = 0 u22 u23
0 0 u33
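As a small illustration, a sketch of unpacking the combined result into explicit factors (unpackLU is a hypothetical helper, not part of Boost's API):

#include <cstddef>
#include <boost/numeric/ublas/matrix.hpp>

// Split a factorization packed by lu_factorize into explicit L and U.
void unpackLU(const boost::numeric::ublas::matrix<double>& m,
              boost::numeric::ublas::matrix<double>& L,
              boost::numeric::ublas::matrix<double>& U) {
    const std::size_t n = m.size1();
    L.resize(n, n);
    U.resize(n, n);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            if (i > j)       { L(i, j) = m(i, j); U(i, j) = 0; }        // strictly lower: L
            else if (i == j) { L(i, j) = 1;       U(i, j) = m(i, j); }  // unit diagonal of L
            else             { L(i, j) = 0;       U(i, j) = m(i, j); }  // upper: U
        }
}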
Inspecting boost's void lu_substitute(const M& m, vector_expression<E>& e) function from the same namespace seems to confirm this. It solves the equation LUx = e, where both L and U are contained in its m argument, in two steps.
First it solves Lz = e for z, where z = Ux, using the lower part of m:
inplace_solve(m, e, unit_lower_tag ());
Then, having computed z = Ux (with e modified in place), Ux = e can be solved using the upper part of m:
inplace_solve(m, e, upper_tag ());
inplace_solve is mentioned in the documentation, and it:
Solves a system of linear equations with triangular form, i.e. A is triangular.
So everything seems to make sense.
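Putting it together, a minimal usage sketch with the pivoted overloads lu_factorize(m, pm) and lu_substitute(m, pm, b) from the same header (the 4x4 values reuse the example matrix in the next answer, and the right-hand side b is arbitrary):

#include <cstddef>
#include <iostream>
#include <boost/numeric/ublas/io.hpp>
#include <boost/numeric/ublas/lu.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>

int main() {
    namespace ublas = boost::numeric::ublas;

    const double a[4][4] = {{ 3, -1, 1,  1},
                            {-1,  3, 1, -1},
                            {-1, -1, 3,  1},
                            { 1,  1, 1,  3}};
    ublas::matrix<double> A(4, 4);
    for (std::size_t i = 0; i < 4; ++i)
        for (std::size_t j = 0; j < 4; ++j)
            A(i, j) = a[i][j];
    ublas::vector<double> b(4);
    b(0) = 1; b(1) = 2; b(2) = 3; b(3) = 4;

    // Factorize in place: A is overwritten with L (strictly below the
    // diagonal, unit diagonal implied) and U (on and above it); pm
    // records the row permutation from partial pivoting.
    ublas::permutation_matrix<std::size_t> pm(A.size1());
    if (ublas::lu_factorize(A, pm) != 0) {
        std::cerr << "matrix is singular\n";
        return 1;
    }

    // Two triangular solves in place: b now holds x with A x = b.
    ublas::lu_substitute(A, pm, b);
    std::cout << b << std::endl;
}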
Boost has no documentation for its LU factorization (into a lower triangular matrix L and an upper triangular matrix U), but the source code is shared with the public.
If the code is hard to follow, please check the webpage by Nick Higham. It has a detailed explanation. Here is an example from the link:
Let's say we need to solve Ax = b.
(1) Compute the LU factorization of the input matrix A:

A =
[ 3 -1  1  1]
[-1  3  1 -1]
[-1 -1  3  1]
[ 1  1  1  3]

L =
[   1    0  0  0]
[-1/3    1  0  0]
[-1/3 -1/2  1  0]
[ 1/3  1/2  0  1]

U =
[3  -1    1    1]
[0 8/3  4/3 -2/3]
[0   0    4    1]
[0   0    0    3]

This example looks straightforward to a human, but algorithmically it can take numerous steps, which is why LU factorization matters. The linked page also covers its relation to Gaussian elimination, Schur complements, and block implementations.
(2) Solve the triangular systems Ly = b and Ux = y, since then b = L(Ux).

(Pseudo)-Inverse of N by N matrix with zero determinant

I would like to take the inverse of an n x n matrix to use in my GraphSlam.
The issues that I encountered:
.inverse() from the Eigen library (3.1.2) doesn't allow zero values; it returns NaN
The LAPACK (3.4.2) library doesn't allow a zero determinant, but allows zero values (I used the example code from Computing the inverse of a matrix using lapack in C)
The Seldon library (5.1.2) wouldn't compile for some reason
Did anyone successfully implement an n x n matrix inversion code that allows negative values, zero values and a determinant of zero? Any good library (C++) recommendations?
I am trying to calculate the omega in the following for GraphSlam:
http://www.acastano.com/others/udacity/cs_373_autonomous_car.html
Simple example:
[ 1 -1 0 0 ]
[ -1 2 -1 0 ]
[ 0 -1 1 0 ]
[ 0 0 0 0 ]
A real example would be 170x170 and contain 0's, negative values and bigger positive values.
The simple example given above is used to debug the code.
I can calculate this in matlab (Moore-Penrose pseudoinverse) but for some reason I'm not able to program this in C++.
A = [1 -1 0 0; -1 2 -1 0; 0 -1 1 0; 0 0 0 0]
B = pinv(A)
B=
[0.56 -0.12 -0.44 0]
[-0.12 0.22 -0.11 0]
[-0.44 -0.11 0.56 0]
[0 0 0 0]
For my application I can (temporarily) remove the dimension with zeros.
So I am going to remove the 4th column and the 4th row.
I can also do that for my 170x170 matrix, the 4x4 was just an example.
A:
[ 1 -1 0 ]
[ -1 2 -1 ]
[ 0 -1 1 ]
Removing the 4th column and the 4th row gets rid of the zero row and column, but it doesn't guarantee a nonzero determinant: the matrix above is still singular.
The determinant is zero whenever the sum of each row (or each column) is zero, which I will have all the time in GraphSlam.
The LAPACK solution (Moore-Penrose inverse based) worked if the determinant was not zero (I used the example code from Computing the inverse of a matrix using lapack in C), but it failed as a "pseudoinverse" when the determinant was zero.
SOLUTION (all credit to Frank Reininghaus): use SVD (singular value decomposition)
http://sourceware.org/ml/gsl-discuss/2008-q2/msg00013.html
Works with:
Zero values (even full 0 rows and full 0 columns)
Negative values
Determinant of zero
pinv(A) for the reduced 3x3:
[0.56 -0.12 -0.44]
[-0.12 0.22 -0.11]
[-0.44 -0.11 0.56]
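For reference, a minimal Eigen-based sketch of the same idea (the linked solution uses GSL; this swaps in Eigen's JacobiSVD, and the pinv helper and its tolerance are illustrative choices, not the original code):

#include <Eigen/Dense>
#include <iostream>

// Moore-Penrose pseudoinverse via SVD: invert only the singular values
// above a tolerance, so exact and near-zero singular values map to 0.
Eigen::MatrixXd pinv(const Eigen::MatrixXd& A, double tol = 1e-10) {
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(
        A, Eigen::ComputeThinU | Eigen::ComputeThinV);
    Eigen::VectorXd s = svd.singularValues();
    Eigen::VectorXd sInv = Eigen::VectorXd::Zero(s.size());
    for (Eigen::Index i = 0; i < s.size(); ++i)
        if (s(i) > tol) sInv(i) = 1.0 / s(i);
    return svd.matrixV() * sInv.asDiagonal() * svd.matrixU().transpose();
}

int main() {
    Eigen::MatrixXd A(4, 4);
    A <<  1, -1,  0, 0,
         -1,  2, -1, 0,
          0, -1,  1, 0,
          0,  0,  0, 0;
    std::cout << pinv(A) << "\n";  // matches Matlab's pinv(A) above
}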
If all you want is to solve problems of the form Ax=b (or equivalently to compute products of the form A^-1 * b), then I recommend that you do not compute the inverse or pseudo-inverse of A, but instead solve Ax=b directly using an appropriate rank-revealing solver. For instance, using Eigen:
x = A.colPivHouseholderQr().solve(b);
x = A.jacobiSvd(ComputeThinU|ComputeThinV).solve(b);
Your Matlab command does not calculate the inverse in your case because the matrix has determinant zero. The pinv command calculates the Moore-Penrose pseudoinverse. pinv(A) has some, but not all, of the properties of inv(A).
So you are not doing the same thing in C++ and in Matlab!
Previous answer:
As in my comment, now as an answer: you must make sure that you invert invertible matrices. That means
det A != 0
Your example matrix has determinant zero; it is not an invertible matrix. I hope you aren't trying it on this one!
For example, a given matrix has determinant zero if it contains a full row or column of zero entries.
Are you sure it's because of the zero/negative values, and not because your matrix is non-invertible?
A matrix only has an inverse if its determinant is nonzero (mathworld link), and the matrix example you posted in the question has a zero determinant and so it has no inverse.
That should explain why those libraries do not allow you to take the inverse of the matrix given, but I can't say if the same reasoning holds for your full size 170x170 matrix.
If your matrices are covariance or weight matrices, you can use a "generalized Cholesky inversion" instead of SVD. The results will be more acceptable for practical use.

How to solve Linear Diophantine equations in programming?

I have read that linear equations of the form ax + by = c are called linear Diophantine equations and have an integer solution only if gcd(a, b) divides c.
These equations are of great importance in programming contests. I was just searching the Internet when I came across this problem. I think it's a variation of Diophantine equations.
Problem :
I have two persons, Person X and Person Y, both standing in the middle of a rope. Person X can jump either A or B units to the left or right in one move. Person Y can jump either C or D units to the left or right in one move. Now, given a number K, I have to find the number of possible positions on the rope in the range [-K, K] such that both persons can reach that position using their respective moves any number of times. (A, B, C, D and K are given in the question.)
My solution:
I think the problem can be solved mathematically using Diophantine equations.
I can form an equation for Person X like A x_1 + B y_1 = C_1 where C_1 belongs to [-K, K], and similarly for Person Y like C x_2 + D y_2 = C_2 where C_2 belongs to [-K, K].
Now my search space reduces to finding the number of values for which C_1 and C_2 are the same. That will be my answer for this problem.
To find those values I compute gcd(A, B) and gcd(C, D), then take the lcm of these two gcds, LCM(gcd(A,B), gcd(C,D)), and simply count the multiples of this lcm in the range [1, K].
My final answer is 2 * no_of_multiples in [1, K] + 1.
I tried using this technique in my C++ code, but it's not working (Wrong Answer).
This is my code :
http://pastebin.com/XURQzymA
My question is: can anyone please tell me whether I'm using Diophantine equations correctly?
If yes, can anyone tell me the possible cases where my logic fails?
These are some of the test cases that were given on the site with the problem statement.
A B C D K are given as input in that sequence, and the corresponding output is given on the next line:
2 4 3 6 7
3
1 2 4 5 1
3
10 12 3 9 16
5
This is the link to the original problem. I have restated it above in simpler language; you might still find it difficult, but you can check it if you want:
http://www.codechef.com/APRIL12/problems/DUMPLING/
Please give me some test cases so that I can figure out where I am going wrong.
Thanks in advance.
Solving linear Diophantine equations
ax + by = c has an integer solution exactly when gcd(a, b) divides c:
Divide a, b and c by gcd(a, b). Now gcd(a, b) == 1.
Find a solution to aU + bV = 1 using the extended Euclidean algorithm.
Multiply the equation by c. Now you have a(Uc) + b(Vc) = c.
You have found the solution x = U*c and y = V*c.
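A compact sketch of these steps in C++ (extGcd and solveDiophantine are illustrative names; overflow handling for contest-sized inputs is omitted here and addressed in the next answer):

#include <cstdio>

// Extended Euclidean algorithm: returns g = gcd(a, b) and fills x, y
// with one solution of a*x + b*y = g.
long long extGcd(long long a, long long b, long long& x, long long& y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long long x1, y1;
    long long g = extGcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

// Solve a*x + b*y = c in integers; returns false when gcd(a, b) does
// not divide c, i.e. when no solution exists.
bool solveDiophantine(long long a, long long b, long long c,
                      long long& x, long long& y) {
    long long u, v;
    long long g = extGcd(a, b, u, v);
    if (c % g != 0) return false;
    x = u * (c / g);   // scale the solution of a*u + b*v = g by c/g
    y = v * (c / g);
    return true;
}

int main() {
    long long x, y;
    if (solveDiophantine(6, 4, 10, x, y))
        std::printf("x=%lld y=%lld\n", x, y);  // one solution: x=5 y=-5
}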
The problem is that the input values are 64-bit (up to 10^18), so the LCM can be up to 128 bits large and therefore l can overflow. Since k is 64-bit, an overflowing l means the count k / (l*g2) is 0 (so the answer is 1). You need to check this case.
For instance:
unsigned long long l = g1 / g;  // cannot overflow
unsigned long long res;
if ((l * g2) / g2 != l)
{
    // overflow case - l*g2 is very large, so k/(l*g2) is 0
    res = 0;
}
else
{
    l *= g2;
    res = k / l;
}
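For completeness, a self-contained sketch of the whole counting approach with the overflow check folded in (gcd and countPositions are my names, not the pastebin code; the expected outputs come from the test cases above):

#include <cstdio>

typedef unsigned long long ull;

ull gcd(ull a, ull b) { return b == 0 ? a : gcd(b, a % b); }

// Count positions in [-K, K] reachable by both persons: these are the
// multiples of lcm(gcd(A,B), gcd(C,D)), plus position 0 itself.
ull countPositions(ull A, ull B, ull C, ull D, ull K) {
    ull g1 = gcd(A, B);
    ull g2 = gcd(C, D);
    ull g  = gcd(g1, g2);
    ull l  = g1 / g;              // first factor of the lcm; cannot overflow
    ull multiples;
    if ((l * g2) / g2 != l) {
        multiples = 0;            // lcm overflows 64 bits, so lcm > K
    } else {
        l *= g2;                  // l = lcm(g1, g2)
        multiples = K / l;
    }
    return 2 * multiples + 1;     // symmetric range plus position 0
}

int main() {
    // Test cases from the problem statement (expected: 3, 3, 5).
    std::printf("%llu\n", countPositions(2, 4, 3, 6, 7));
    std::printf("%llu\n", countPositions(1, 2, 4, 5, 1));
    std::printf("%llu\n", countPositions(10, 12, 3, 9, 16));
}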