Solving an equation: counting (x,y) pairs - C++

I'm having trouble with my math:
Assume that we have a function F(x,y) = P. My question is: what would be the most efficient way of counting the suitable (x,y) pairs for this function? I don't need the coordinates themselves, only how many there are. P is in the range [0; 10^14], and "x" and "y" are integers. Is this solved by brute force, or are there advanced tricks (math, or programming-language ones in C/C++) to solve it fast enough?
To be more concrete, the function is: x*y - ((x+y)/2) + 1.

x*y - ((x+y)/2) + 1 == P is equivalent to (2x-1)(2y-1) == (4P-3).
So, you're basically looking for the number of factorizations of 4P-3. How to factor a number in C or C++ is probably a different question, but each factorization yields a solution to the original equation. [Edit: in fact two solutions, since if A*B == C then of course (-A)*(-B) == C also].
As far as the programming languages C and C++ are concerned, just make sure you use a type that's big enough to contain 4 * 10^14. int won't do, so try long long.
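For instance, a minimal brute-force sketch of that divisor count (assuming P >= 1 so that 4P - 3 is positive; trial division runs up to sqrt(4*10^14), about 2*10^7, which is fast enough):

#include <iostream>

// Count integer solutions (x, y) of x*y - (x+y)/2 + 1 == P by counting
// divisors of N = 4P - 3. Every divisor of the odd number N is odd, so each
// ordered pair (A, B) with A*B == N gives x = (A+1)/2, y = (B+1)/2, and the
// sign-flipped pair (-A, -B) gives a second solution.
long long countSolutions(long long P) {  // assumes P >= 1
    long long N = 4 * P - 3;
    long long divisors = 0;
    for (long long d = 1; d * d <= N; ++d) {
        if (N % d == 0)
            divisors += (d * d == N) ? 1 : 2;  // count both d and N/d
    }
    return 2 * divisors;  // factor 2 for the (-A)*(-B) solutions
}

int main() {
    long long P;
    std::cin >> P;
    std::cout << countSolutions(P) << '\n';
}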

You have a two-parameter function and want to solve it for a given constant.
This is a pretty big field in mathematics, and there are probably dozens of algorithms for solving your equation. One key idea many of them use is that if you find a point where F < P and another point where F > P, then somewhere between these two points F must equal P.
One of the most basic algorithms for finding a root (or zero, which you can of course convert to by taking F' = F - P) is Newton's method. I suggest you start with that and read your way up to more advanced algorithms. This is a fairly large field of study, so happy reading!
Wikipedia has a list of root-finding algorithms that you can use as a starting place.
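As a toy illustration of the root-finding idea (not the integer-counting solution above), here is a minimal Newton iteration on the example function, solving F(x, y) = P for x with y held fixed; the starting point x0 = 1.0 is an arbitrary assumption:

#include <cmath>
#include <cstdio>

// Newton's method on g(x) = F(x, y) - P for fixed y, where
// F(x, y) = x*y - (x+y)/2 + 1, so dg/dx = y - 1/2.
double newtonSolveX(double y, double P, double x0 = 1.0) {
    double x = x0;
    for (int i = 0; i < 50; ++i) {
        double g  = x * y - (x + y) / 2.0 + 1.0 - P;  // F(x, y) - P
        double dg = y - 0.5;                          // derivative in x
        if (std::fabs(dg) < 1e-12) break;             // avoid dividing by ~0
        double next = x - g / dg;                     // Newton step
        if (std::fabs(next - x) < 1e-9) return next;  // converged
        x = next;
    }
    return x;
}

int main() {
    // For y = 3, P = 11: 2.5x - 0.5 = 11, so x = 4.6.
    std::printf("%f\n", newtonSolveX(3.0, 11.0));
}

Since F is linear in x for fixed y, the iteration converges in one step here, but the same skeleton applies to genuinely nonlinear F.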

Related

C++/ Help me understand the logic

I was told to solve this problem:
given real numbers a1, ..., an, calculate min(a1, -a1*a2, a1*a2*a3, ..., (-1)^(n+1) * a1*a2*...*an)
but I cannot understand the logic of the task. Could you tell me what I should do?
For example, what is (-1)^(n+1)? I've never seen it before.
What you should do is:
use the n real numbers of the input to calculate the n terms defined by the quoted formula (though you only need one value at a time, which is more efficient);
while doing so, keep track of the smallest number you encounter: that is the final result;
concerning the (-1)^(n+1), it is reasonable to assume (as e.g. in the comment by Weather Vane and others) that it means powers of -1 (in a lazy, unexplained, non-C++ syntax);
note that you can easily calculate each term from the previous one by a single multiplication (see the sketch below);
probably you should do all of that by writing a program, an assumption based on the fact that you are asking on StackOverflow and tagged a programming language.
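A minimal sketch of that running computation, using the relation that each term is the previous one times -a_k:

#include <algorithm>
#include <iostream>
#include <vector>

// Computes min(a1, -a1*a2, a1*a2*a3, ..., (-1)^(n+1)*a1*...*an) in one pass.
double alternatingProductMin(const std::vector<double>& a) {
    double term = a.at(0);       // first term: a1
    double best = term;
    for (std::size_t k = 1; k < a.size(); ++k) {
        term = -term * a[k];     // next term: previous one times -a_{k+1}
        best = std::min(best, term);
    }
    return best;
}

int main() {
    std::vector<double> a = {2.0, -3.0, 0.5};
    // terms: 2, -(2*-3) = 6, (2*-3*0.5) = -3, so the minimum is -3
    std::cout << alternatingProductMin(a) << '\n';
}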

Enumerate some extreme points near optimum solution

I am looking for a simple way to obtain a lot of "good" solutions to an LP problem (not a MIP) with CPLEX, and not only (one of the) optimal basic solution(s). By "good" solutions I mean that the corresponding objective values are not far from the true optimal value. Such a pool of solutions could help the decision-maker...
More precisely, given a polyhedron Ax <= b with x >= 0 and an objective function z = cx that I want to maximize: after solving the LP, I obtain the optimal value z*. Then I want to enumerate all the extreme points of the polyhedron given by the set of constraints
Ax <= b
cx >= z* - epsilon
x >= 0
where epsilon is a given tolerance.
I know that CPLEX offers a way to generate a solution pool (see here), but it will not work here because that method is for MIPs: it enumerates all the solutions of an IP (or one solution for every given set of fixed integer variables if the problem is a MIP).
An interesting, efficient approach is to visit the solutions adjacent to the optimal basic solution, i.e. all the adjacent extreme points: assuming the polyhedron is non-degenerate, for each pair of a basic variable x_B and a non-basic variable x_N, I compute the basic solution obtained when x_B leaves the basis and x_N enters it. Then I discard the solutions with cx < z* - epsilon, and for the others I repeat the procedure. [I know that I could improve this algorithm, but this is the general idea.]
The routine CPXpivot of the Callable Library could help with this pivoting operation, but I did not find an equivalent in the C++ API (Concert Technology). Does anyone know whether such an equivalent exists, or can you propose another way to answer my original problem?
Thanks a lot :) !
Rémi L.
There is one interesting way to make this suitable for use with the Cplex solution pool. Use binary variables to encode the current basis, e.g. basis[k] = 0 meaning variable (or row) k is nonbasic and basis[k] = 1 meaning it is basic. Of course we have sum(k, basis[k]) = m (the number of rows). Finally we add x[k] <= basis[k] * upperbound[k] (i.e. nonbasic implies zero -- assuming non-negative variables). When we add this to the LP model we end up with a MIP and can enumerate (all or some, optimal or near-optimal) bases using the Cplex solution pool. See here and here.
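A hedged Concert Technology sketch of that encoding; every name here (x, ub, m, objExpr, zStar, epsilon) is a placeholder for your own model's data, and the pool parameter shown is the one from recent CPLEX versions:

#include <ilcplex/ilocplex.h>

// basis[k] == 0 forces x[k] == 0 (nonbasic); exactly m indicators may be 1.
// The solution pool then enumerates near-optimal bases of the original LP.
void enumerateBases(IloModel model, IloNumVarArray x, IloNumArray ub,
                    IloInt m, IloNumExprArg objExpr,
                    IloNum zStar, IloNum epsilon) {
    IloEnv env = model.getEnv();
    IloBoolVarArray basis(env, x.getSize());

    IloExpr count(env);
    for (IloInt k = 0; k < x.getSize(); ++k) {
        model.add(x[k] <= ub[k] * basis[k]);   // nonbasic => x[k] == 0
        count += basis[k];
    }
    model.add(count == m);                     // exactly m basic variables
    count.end();

    model.add(objExpr >= zStar - epsilon);     // keep near-optimal points only

    IloCplex cplex(model);
    cplex.setParam(IloCplex::Param::MIP::Pool::Intensity, 4);  // aggressive pool search
    cplex.populate();                          // fill the solution pool
}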

Finding an optimal solution to a system of linear equations in c++

Here's the problem:
I am currently trying to create a control system which is required to find a solution to a series of complex linear equations without a unique solution.
My problem arises because there will only ever be six equations, while there may be upwards of 20 unknowns (usually far more than six). Of course, this will not yield an exact solution through standard Gaussian elimination or by reducing the matrix to reduced row echelon form.
However, I think I may be able to optimize things further and get a more accurate solution because I know that each of the unknowns cannot be smaller than zero or greater than one, but is free to take any value in between.
Of course, I am trying to write code that finds a correct solution, but in the case that multiple combinations yield satisfactory results, I want to minimize the sum of (value of unknown * efficiency constant) over all unknowns, i.e. Sigma[x_i * e_i] for i = 0 to n, though finding an accurate solution has greater priority.
Performance is also important, since this algorithm may need to run several times per second.
So, does anyone have any ideas to help me implement this?
Edit: You might just want to stick to linear programming with equality and inequality constraints, but here's an interesting exact solution that does not incorporate the constraint that your unknowns are between 0 and 1.
Here's a powerpoint discussing your problem: http://see.stanford.edu/materials/lsoeldsee263/08-min-norm.pdf
I'll translate your problem into math to make things a bit easier to figure out:
you have a 6x20 matrix A and a vector x with 20 elements. You want to minimize (x^T)e subject to Ax = y. According to the slides, if you were just minimizing the norm of x, then the answer is x = A^T (A A^T)^(-1) y. I'll take another look at this as soon as I get the chance and see what the solution is to minimizing (x^T)e (i.e. your specific problem).
Edit: I looked in the powerpoint some more and near the end there's a slide entitled "General norm minimization with equality constraints". I am going to switch the notation to match the slide's:
Your problem is that you want to minimize ||Ax-b||, where b = 0 and A is your e vector and x is the 20 unknowns. This is subject to Cx=d. Apparently the answer is:
x = (A^T A)^(-1) (A^T b - C^T (C (A^T A)^(-1) C^T)^(-1) (C (A^T A)^(-1) A^T b - d))
it's not pretty, but it's not as bad as you might think. There really aren't that many calculations. For example, (A^T A)^(-1) only needs to be calculated once, and then you can reuse the answer. And your matrices aren't that big.
Note that I didn't incorporate the constraint that the elements of x are within [0,1].
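For the simpler minimum-norm case, here is a small Eigen sketch of x = A^T (A A^T)^(-1) y; the random data is a stand-in for your six equations, and the [0,1] bounds are still ignored:

#include <Eigen/Dense>
#include <iostream>

// Minimum-norm solution of the underdetermined system Ax = y
// (6 equations, 20 unknowns): x = A^T (A A^T)^(-1) y.
int main() {
    const int rows = 6, cols = 20;
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(rows, cols);
    Eigen::VectorXd y = Eigen::VectorXd::Random(rows);

    // Solve (A A^T) z = y, then x = A^T z. LDLT is suitable because
    // A A^T is symmetric positive definite when A has full row rank.
    Eigen::MatrixXd G = A * A.transpose();
    Eigen::VectorXd z = G.ldlt().solve(y);
    Eigen::VectorXd x = A.transpose() * z;

    std::cout << "residual norm: " << (A * x - y).norm() << '\n';
}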
It looks like the solution for what I am doing is Linear Programming. It is starting to come back to me, but if I have other problems I will post them in their own dedicated questions instead of turning this one into an encyclopedia.

Linear form of function (a/b) for ampl/cplex

I am trying to solve a minimisation problem where I want to minimise the expression
a/b
where both a and b are variables. Hence this is not a linear problem...
How can I transform this function into another, linear one?
There is a detailed section on how to handle ratios in Linear Programming on the lpsolve site. It should be general enough to apply to AMPL and CPLEX as well.
There are several ways to do this, but the simplest to explain requires solving a series of linear programs. First, remove the objective and add the constraint
a <= c * b
where c is a known upper bound on the optimal ratio. Then do a binary search on c to find values c_l and c_u such that the problem is infeasible for
a <= c_l * b
but feasible for
a <= c_u * b
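A sketch of that bisection; the isFeasible(c) callback is hypothetical and stands for "build the LP with the extra constraint a <= c * b and report whether it is feasible":

#include <functional>

// Bisect on c, maintaining the invariant: infeasible at cLo, feasible at cHi.
// Returns the tightest proven-feasible bound on the ratio a/b.
double bisectRatio(const std::function<bool(double)>& isFeasible,
                   double cLo, double cHi, double tol = 1e-7) {
    while (cHi - cLo > tol) {
        double mid = 0.5 * (cLo + cHi);
        if (isFeasible(mid))
            cHi = mid;   // optimal ratio is at most mid
        else
            cLo = mid;   // optimal ratio exceeds mid
    }
    return cHi;
}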
The general form of the objective should be a linear fractional function, something like f_0(x) = (c^T x + d)/(e^T x + f). For your case, x = (a,b), c = (1,0), e = (0,1), d = f = 0.
To solve this kind of optimization problem, something called linear-fractional programming can be used: it is like a linearly constrained version of a linear fractional objective, and the Charnes-Cooper transformation is applied to turn it into an LP. You can find the main idea on Wikipedia. Many OR books cover this in more detail, e.g. pp. 53 and 165 of Boyd's "Convex Optimization" (free to download).
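For reference, a sketch of the Charnes-Cooper substitution, assuming the denominator e^T x + f is positive over the feasible set A x <= b: substitute y = x / (e^T x + f) and t = 1 / (e^T x + f). Then
minimize (c^T x + d) / (e^T x + f) subject to A x <= b
becomes the linear program
minimize c^T y + d*t subject to A y <= b*t, e^T y + f*t = 1, t >= 0,
and an optimal x is recovered as x = y / t.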

Removing unsolvable equations from an underdetermined system

My program tries to solve a system of linear equations. In order to do that, it assembles a matrix coeff_matrix and a vector value_vector, and uses Eigen to solve them like:
Eigen::VectorXd sol_vector = coeff_matrix
.colPivHouseholderQr().solve(value_vector);
The problem is that the system can be both over- and under-determined. In the former case, Eigen gives either a correct or an incorrect solution, and I check the solution using coeff_matrix * sol_vector - value_vector.
However, please consider the following system of equations:
a + b - c = 0
c - d = 0
c = 11
- c + d = 0
In this particular case, Eigen solves the three latter equations correctly but also gives values for a and b.
What I would like is for only the equations that have exactly one solution to be solved, with the remaining ones (the first equation here) retained in the system.
In other words, I'm looking for a method to find out which equations in a given system can be solved at that time, and which cannot because they have more than one solution.
Could you suggest any good way of achieving that?
Edit: please note that in most cases the matrix won't be square. I've added one more row here just to note that over-determination can happen too.
I think what you want is the singular value decomposition (SVD), which will give you exactly what you want. After the SVD, "the equations which have only one solution will be solved", and that solution is given by the pseudoinverse. It will also give you the null space (where the infinitely many solutions come from) and the left null space (where inconsistency comes from, i.e. no solution).
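A small Eigen sketch of that idea on the example system above; the rank tolerance is an assumption, not an Eigen default:

#include <Eigen/Dense>
#include <iostream>

// The pseudoinverse (minimum-norm) solution comes from svd.solve(); a
// variable is uniquely determined iff its row in the null-space basis
// (columns of V for near-zero singular values) is numerically zero.
int main() {
    Eigen::MatrixXd coeff_matrix(4, 4);
    coeff_matrix << 1, 1, -1,  0,   //  a + b - c     = 0
                    0, 0,  1, -1,   //          c - d = 0
                    0, 0,  1,  0,   //          c     = 11
                    0, 0, -1,  1;   //         -c + d = 0
    Eigen::VectorXd value_vector(4);
    value_vector << 0, 0, 11, 0;

    Eigen::JacobiSVD<Eigen::MatrixXd> svd(
        coeff_matrix, Eigen::ComputeThinU | Eigen::ComputeThinV);
    Eigen::VectorXd sol = svd.solve(value_vector);  // pseudoinverse solution

    const Eigen::VectorXd& sv = svd.singularValues();
    double tol = 1e-10 * sv(0);                     // assumed threshold
    Eigen::Index rank = (sv.array() > tol).count();
    Eigen::MatrixXd nullBasis = svd.matrixV().rightCols(sv.size() - rank);

    // Here sol is (5.5, 5.5, 11, 11) and nullBasis is ~(1, -1, 0, 0)/sqrt(2):
    // c and d are determined, a and b are not.
    std::cout << "solution:\n" << sol << "\nnull space:\n" << nullBasis << '\n';
}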
Based on the SVD comment, I was able to do something like this:
Eigen::FullPivLU<Eigen::MatrixXd> lu = coeff_matrix.fullPivLu();
Eigen::VectorXd sol_vector = lu.solve(value_vector);        // one particular solution
Eigen::VectorXd null_vector = lu.kernel().rowwise().sum();  // row sums of a null-space basis
AFAICS, the null_vector rows corresponding to single solutions are 0s, while the ones corresponding to non-determinate solutions are 1s. I can reproduce this throughout all my examples with the default threshold Eigen has.
However, I'm not sure if I'm doing something correct or just noticed a random pattern.
What you need to do is calculate the determinant of your system. If the determinant is 0, then you have an infinite number of solutions. If the determinant is very small, a solution exists, but I wouldn't trust the solution found by a computer (it will lead to numerical instabilities).
Here is a link to what is the determinant and how to calculate it: http://en.wikipedia.org/wiki/Determinant
Note that Gaussian elimination should also work: http://en.wikipedia.org/wiki/Gaussian_elimination
With this method, you end up with rows of 0s if there are an infinite number of solutions.
Edit
In case the matrix is not square, you first need to extract a square matrix. There are two cases:
You have more variables than equations: then you have either no solution, or an infinite number of them.
You have more equations than variables: in this case, find a square sub-matrix with non-zero determinant. Solve for this matrix and check the solution. If the solution doesn't fit, it means you have no solution. If it fits, it means the extra equations were linearly dependent on the extracted ones.
In both cases, before checking the dimension of the matrix, remove rows and columns containing only 0s.
As for Gaussian elimination, it should work directly with non-square matrices. However, this time you should check that the number of non-empty rows (i.e. rows with some non-0 values) is equal to the number of variables. If it's less, you have an infinite number of solutions, and if it's more, you don't have any solution.
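A sketch of that rank test using Eigen's rank-revealing LU, which amounts to the same row counting as Gaussian elimination and also works for non-square matrices; the function name is just illustrative:

#include <Eigen/Dense>
#include <iostream>

// Classify the system Ax = b by comparing rank(A) with rank([A | b]):
// rank([A | b]) > rank(A) means inconsistent; rank(A) < #variables means
// infinitely many solutions; otherwise the solution is unique.
void classifySystem(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
    Eigen::MatrixXd Ab(A.rows(), A.cols() + 1);  // augmented matrix [A | b]
    Ab << A, b;
    Eigen::Index rankA  = A.fullPivLu().rank();
    Eigen::Index rankAb = Ab.fullPivLu().rank();
    if (rankAb > rankA)
        std::cout << "no solution\n";
    else if (rankA < A.cols())
        std::cout << "infinitely many solutions\n";
    else
        std::cout << "unique solution\n";
}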