How to check, without an objective function, if the constraints are feasible? - c++

My professor gave me a binary linear programming problem, but this problem is slightly different from the optimization problems I am used to solving (i.e., there is probably no objective function to maximize or minimize).
The problem is as follows:
Given a matrix M, for entries m_ij != 0, there are corresponding x_ijk variables.
Entries m_ij = 0 can be ignored.
x_ijk is either 0 or 1, and I want to check whether 5 x_ijk variables for each m_ij (that is, x_ij1, x_ij2, x_ij3, x_ij4, and x_ij5, exactly one of which is 1 while the others are 0) are enough to satisfy some conditions (a set of inequalities).
More simply, the task is to check whether the set of constraints involving the 5 x_ijk variables for each m_ij is valid (or feasible).
I have solved some optimization problems, but I have never solved a problem without an objective function.
What should I set as my objective function here?
0? nothing?
I might be using lp_solve or CPLEX.
Thank you in advance for your advice!

That is correct: you can set an arbitrary constant value as the objective function.
Most of the solvers I have tried allow an empty objective function. Simply leave it out of your model.
Depending on the solver and the API you are using, you may have to set the coefficients of all variables in the objective to zero.
Don't worry, it will work either way.
In response to your comment: Yes, constraint programming tools can provide better performance on feasibility problems than LP solvers (such as CPLEX). I played with the IBM ILOG CPLEX CP Optimizer a few months ago; it is free for academic users. Both the LP solver and the CP solver failed on my problems, so don't expect a miracle from constraint programming.
Keep in mind that the time needed to solve a constraint program grows exponentially with the size of the problem in the worst case. Sooner or later, your problems will most likely become unsolvable with either tool.
Just for your information: in the end, the constraint programming solver will call the LP solver (for example CPLEX).
My advice is: try the tool you already have, or use the problem formulation that is more natural to you. Check whether the tool can solve your problem. Switch tools only if the current one fails and you cannot improve your model. A minimal feasibility model is sketched below.
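For illustration, here is a minimal sketch in Python using PuLP; the matrix M is a toy stand-in and the real inequalities are left as a placeholder. The point is only the constant objective and the one-of-five constraint.

import pulp

# toy stand-in for the real matrix M
M = [[1, 0],
     [0, 2]]

prob = pulp.LpProblem("feasibility_check", pulp.LpMinimize)
prob += 0  # constant objective: we only care about feasibility

x = {}
for i, row in enumerate(M):
    for j, m_ij in enumerate(row):
        if m_ij == 0:
            continue  # entries m_ij = 0 are ignored
        # one binary variable x_ijk for each k = 1..5
        for k in range(1, 6):
            x[i, j, k] = pulp.LpVariable(f"x_{i}_{j}_{k}", cat="Binary")
        # exactly one of x_ij1..x_ij5 is 1
        prob += pulp.lpSum(x[i, j, k] for k in range(1, 6)) == 1

# ... the real inequalities on the x variables would go here ...

prob.solve()
print(pulp.LpStatus[prob.status])  # "Optimal" means a feasible point exists

If the solver reports "Infeasible", the constraint set cannot be satisfied; any "Optimal" solution is simply a feasible assignment.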

Related

Is there a simple way to reduce the value of positive slack variables in MILP?

Recently, I have been learning optimization, and my optimization problem (a minimization) is encoded for a MILP solver, which tells me my model is infeasible. Hence, I introduced a few positive/negative slack variables.
Now I get a feasible solution, but the positive slack variables are way bigger than what I can accept.
So I gave penalties/weights to those variables (multiplied them by large numbers), hoping that the MILP solver would reduce the variables, but that didn't work (I got the same solution).
Is there any approach to follow, in general, when the slack is too large?
Is there a better way to pick the slack variables, in general?
A common pitfall for people new to mathematical programming/optimization is that, in many solvers, variables are non-negative by default, that is, they have an implied lower bound of 0. Your mathematical model may not specify this explicitly, so those variables might need to be declared as free (with a lower bound of -infinity).
In general, you should double-check your model (e.g., as an LP file) and compare it to the mathematical formulation.
Add both slacks to the objective with a penalty coefficient, or add upper bounds to the slacks; a sketch of both ideas follows.
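As a hedged sketch in Python with gurobipy (the variable names, the constraint, and the numbers are illustrative, not taken from the question):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("slack_demo")
x = m.addVar(lb=0, name="x")
# slacks with explicit bounds; the upper bound caps how much violation we accept
s_plus = m.addVar(lb=0, ub=5, name="s_plus")
s_minus = m.addVar(lb=0, ub=5, name="s_minus")

# relaxed version of a hypothetical equality constraint x == 10
m.addConstr(x + s_plus - s_minus == 10, name="relaxed")

PENALTY = 1000  # must dominate the other objective terms to be effective
m.setObjective(x + PENALTY * (s_plus + s_minus), GRB.MINIMIZE)
m.optimize()

If the solution still uses large slacks, either the penalty is too small relative to the rest of the objective, or that much slack really is needed for feasibility; the upper bounds turn the latter case into an explicit infeasibility you can diagnose.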

How can I best use my objective function to quickly find *a* feasible solution (Gurobi)?

I have a working ILP Gurobi model (exclusively binary variables). Reducing runtime and finding a feasible solution is of far more value to me than finding the optimal solution. Setting SolutionLimit to 1 does help. I realized that my objective function sums hundreds of thousands of variables. If I don't truly care about optimality, can I somehow simplify my objective function to reduce the burden on the solver?
Here is my current objective function:
m.setObjective(quicksum(h[x, y, c, p, t] + v[x, y, c, p, t]
                        for x in range(Nx)
                        for y in range(Ny)
                        for c in range(C)
                        for p in range(P)
                        for t in range(T)), GRB.MINIMIZE)
I don't want to nitpick, but there is no such thing as a "more optimal" solution: "optimal" is already the superlative. In case you are really only looking for a feasible solution without regard for the objective function, you should follow Erwin's advice and not set an objective function at all. It's hard to believe, though, that your current objective function is completely meaningless, so a better approach is probably to reduce the objective function to include only a few variables, and also to set a higher MIPGap to terminate the solve earlier. Both settings are sketched below.
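A minimal sketch, assuming the model object m from the question (the parameter values are illustrative):

# drop the huge summed objective entirely and stop at the first incumbent
m.setObjective(0, GRB.MINIMIZE)         # constant objective: pure feasibility
m.setParam(GRB.Param.SolutionLimit, 1)  # stop as soon as one solution is found
# alternatively, keep a reduced objective and loosen the termination criterion:
# m.setParam(GRB.Param.MIPGap, 0.5)     # accept any solution within 50% of the bound
m.optimize()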

How can I choose the right numerical solution from NEQNF?

I'm using a function (NEQNF manual page here) which I call using
call neqnf(SYSTEM_OF_EQUATIONS, x, xguess=x_GUESS, itmax = 10000)
where SYSTEM_OF_EQUATIONS is the subroutine that contains equations
f(1)=...x(2)...x(1)...
f(2)=...x(1)...x(4)...
f(3)=...x(3)...x(4)...
f(4)=...x(1)...x(5)...
f(5)=...x(1)...x(5)...
from the IMSL libraries for Fortran, which lets me solve a non-linear system of five equations in five unknowns. Because there exists more than one solution (more than one tuple of five numbers, real or complex, that solves my system), how can I choose which tuple to "use" as the solution?
I link to an online solver with a piece of my system already entered (only two equations in two unknowns; the other variables are held constant in this example), which shows that there exists more than one solution.
example
To conclude my issue: I have to choose the tuple of values that lets the other variables be positive, so an easy check is the way to pick the right tuple.
I don't think the question has anything to do with programming, but I will show how I understand the problem.
You supply an initial guess, and the method then converges to some solution via a modified Newton method.
You can choose the root by the placement of the initial guess. However, the convergence pattern can be very unpredictable (even fractal, see https://en.wikipedia.org/wiki/Newton_fractal), and it may be very difficult to select a particular root through the initial guess alone.
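To illustrate the dependence on the initial guess, here is a small sketch in Python; scipy's fsolve stands in for IMSL's NEQNF, and the toy system is not the one from the question.

import numpy as np
from scipy.optimize import fsolve

def system(x):
    # two equations, two unknowns, with two real solutions
    return [x[0]**2 + x[1]**2 - 4.0,  # circle of radius 2
            x[0] - x[1]]              # line y = x

for guess in ([1.0, 1.0], [-1.0, -1.0]):
    root = fsolve(system, guess)
    print(guess, "->", root)  # each guess converges to the nearby root
# [1.0, 1.0]   -> approximately ( sqrt(2),  sqrt(2))
# [-1.0, -1.0] -> approximately (-sqrt(2), -sqrt(2))

In the same spirit, you can run NEQNF from several initial guesses and keep the root that passes your positivity check.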

SCIP and Branch and Price

I have a general question about SCIP. I need to use SCIP as a branch-and-price framework for my problem. I code in C++, so I used the VRP example as a template. On some instances, the code stops at a fractional solution and returns it as the optimal solution. I think something is wrong: do I have to set some parameters to tell SCIP to look for an integer solution, or did I make a mistake? I believe it should not stop there, but instead branch on the fractional solution until it reaches an integer solution (one with no further negative-reduced-cost column). I also solve the subproblem optimally! Any comments?
If you define your variables to be continuous and just add a pricer, SCIP will solve the master problem to optimality (i.e., solve the restricted master, add improving columns, solve the updated restricted master, and so on, until no more improving columns are found).
There is no reason for SCIP to check whether the solution is integral, because you explicitly said that you don't mind whether the values of the variables are integral or not (by defining them to be continuous). On the other hand, if you define the variables to be of integer (or binary) type, SCIP will do exactly as described before, but at the end check whether all integral variables have an integral value and branch if this is not the case.
However, you should note that all branching rules in SCIP branch on variables, i.e., they take an integer variable with a fractional value and split its domain; a binary variable would be fixed to 0 and 1 in the two child nodes. This is typically a bad idea for branch-and-price: first of all, it's quite unbalanced. You have a huge number of variables, of which only a few will have value 1 in the end; most will be 0. Fixing a variable to 1 therefore has a high impact, while fixing it to 0 has almost no impact. But more importantly, you need to take the branching decision into account in your pricing problem. If you fixed a variable to 0, you have to keep the pricer from generating a copy of the forbidden column (which would probably improve the LP solution, because it was part of the former optimal solution). In order to do this, you might need to look for the 2nd-best (or, more generally, the k-th-best) solution. Since you are solving the pricing problems as a MIP with SCIP, you might just add a constraint forbidding this solution (a logicor (linear) constraint for binary variables, or a bounddisjunction (not linear) constraint for general integer variables); a sketch of such a "no-good" constraint follows.
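As a hedged sketch of that logicor-style cut (shown with PySCIPOpt for brevity rather than the C++ API; x is a dict of the binary pricing variables and forbidden is the 0/1 solution to exclude, both illustrative names):

from pyscipopt import quicksum

def forbid_solution(pricing_model, x, forbidden):
    # at least one variable must take a value different from the forbidden assignment
    pricing_model.addCons(
        quicksum(1 - x[i] for i in x if forbidden[i] > 0.5)
        + quicksum(x[i] for i in x if forbidden[i] < 0.5)
        >= 1, name="no_good")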
I would recommend implementing your own branching rule, which takes into account that you are doing branch-and-price and branches in a way that is more balanced and does not harm your pricing too much. For an example, check out the Ryan&Foster branching rule, which is the standard for binary problems with a set-partitioning master structure. This rule is implemented in the Binpacking as well as the Coloring example shipped with SCIP.
Please also check out the SCIP FAQ, which has a whole section about branch-and-price that also covers branching (in particular, how branching decisions can be stored and enforced by a constraint handler, which is something you need to do for Ryan&Foster branching): http://scip.zib.de/doc/html/FAQ.php
There have also been a lot of questions about branch-and-price on the SCIP mailing list:
http://listserv.zib.de/mailman/listinfo/scip/. If you want to search it, you can use Google and search for "site:listserv.zib.de scip search-string".
Finally, I would recommend having a look at the GCG project: http://www.or.rwth-aachen.de/gcg/
It extends SCIP into a generic branch-cut-and-price solver, i.e., you do not need to implement anything: you just put in an original formulation of your model, which is then reformulated via a Dantzig-Wolfe decomposition and solved by branch-cut-and-price. You can supply the structure for the reformulation, the pricing problems are solved as MIPs (as you do it, too), and there are different branching rules. GCG is also part of the SCIP Optimization Suite and can easily be built within the suite.

Optimization Routine in Fortran 90

I am doing (or trying to do) numerical optimization in Fortran 90, on a Windows 7 machine with the gfortran compiler. I have a function, pre-written by someone else, which returns the log-likelihood of a model, given a large set of parameters (about 60 in total) passed in. I am trying to replicate someone's results, so I know the final parameter values, but I want to try to re-estimate them and, eventually, extend the model and use different data. I've been trying the uobyqa.f90 routine available here, which has not been particularly successful thus far.
My questions are: First, for an optimization problem with a large number of parameters (over 60), can anyone suggest the best freely available routine? Derivatives are not available and would be costly to estimate numerically, hence trying the uobyqa routine first. Also, would implementing parallelization help significantly in solving this problem? And, if so, could anyone suggest an optimization routine that already implements parallelization using OpenMP?
Thanks!
I don't have a good suggestion for a specific optimization strategy, but the NLopt package has a few derivative-free optimizers that can handle larger numbers of variables; it is worth checking out. I've found the Fortran interface to be very easy to use. A sketch of the call pattern follows.
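For brevity, here is a sketch using NLopt's Python bindings; the Fortran interface follows the same create/configure/optimize structure. The toy loglikelihood stands in for the real pre-written 60-parameter function.

import numpy as np
import nlopt

def loglikelihood(theta):
    # toy stand-in: maximized at theta = 1
    return -float(np.sum((theta - 1.0)**2))

n = 60
opt = nlopt.opt(nlopt.LN_BOBYQA, n)  # derivative-free local optimizer
opt.set_max_objective(lambda theta, grad: loglikelihood(theta))
opt.set_xtol_rel(1e-6)
theta_hat = opt.optimize(np.zeros(n))
print("best log-likelihood:", opt.last_optimum_value())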
Do a regular (published academic) literature search on this question first.
Maybe try including "LAPACK" with your other search terms (e.g. "optimization", "uobyqa", etc.) to see relevant work by other parties.