Pyomo takes time after IPOPT solver call? - pyomo

When calling the IPOPT solver after building a concrete QP model, Pyomo runs internal code before invoking the solver. From other questions on this topic I understand that Pyomo is converting the model to a format that IPOPT can read. Unfortunately, Pyomo takes a long time before IPOPT starts solving the model. Is there a way to reduce the time spent around the actual solver call?

I believe Pyomo's default behavior is to write an *.nl file, then call IPOPT to process that file and produce a *.sol file, which Pyomo then parses back in. File I/O for creating the *.nl file can be a limiting factor for larger models. The solution would be an in-memory interface rather than writing the *.nl file, which I believe is a work in progress.
It is also possible that you have room for efficiency improvements in model construction. You can check by measuring how long it takes to reach the solve() statement vs. how long the solve() call itself takes.
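A minimal sketch of that measurement. Here build_model and solve are hypothetical stand-ins, not Pyomo APIs; replace them with your ConcreteModel construction and your SolverFactory('ipopt').solve(model) call:

```python
import time

def build_model():
    # Stand-in for your ConcreteModel construction (Var/Constraint/Objective
    # declarations); replace with your actual Pyomo code.
    return list(range(100_000))

def solve(model):
    # Stand-in for SolverFactory('ipopt').solve(model). Note that the time
    # measured around the real call includes Pyomo writing the *.nl file and
    # parsing the *.sol file, not just IPOPT's own runtime.
    return sum(model)

t0 = time.perf_counter()
model = build_model()
t1 = time.perf_counter()
result = solve(model)
t2 = time.perf_counter()
print(f"construction: {t1 - t0:.3f}s, solve() call: {t2 - t1:.3f}s")
```

If construction dominates, the fix is in your model-building code; if the solve() call dominates but IPOPT's own reported time is small, the overhead is the file-based interface.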

Related

SCIP/OR-Tools: Stop solver after certain objective value has been reached

I am using or-tools with SCIP to solve a mixed-integer linear program. In other solvers, I know there are options to stop the solver once a certain objective value has been reached, for example the BestObjStop option in Gurobi. Is there a similar option in SCIP as well? If so, is this option accessible via or-tools in C++?
Currently, there is no function in SCIP that does this directly, but there is a workaround that should work.
You can set the objective limit (function SCIPsetObjlimit) so that only solutions better than the objective stop you want are accepted. Then you set SCIP to stop as soon as it finds a solution (parameter limits/bestsol set to 1).
I am not an or-tools user, so I am not sure how this is best achieved in or-tools. This recent thread has an answer that explains how to set specific SCIP parameters from or-tools. Looking at the code on GitHub, it at least seems to me that you can set the objective limit when you call the Solve method with the right GScipParameters. Maybe an or-tools expert can improve this answer.
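The workaround above, written as a SCIP settings-file fragment. The limits/bestsol line comes straight from the answer; treating limits/objective as the settings-file counterpart of SCIPsetObjlimit is an assumption, so verify the parameter name against your SCIP version:

```
# stop as soon as one solution better than the objective limit is found
limits/bestsol = 1
# only accept solutions better than this value
# (assumed counterpart of SCIPsetObjlimit; minimization example)
limits/objective = 42.0
```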

Adding clauses directly to the z3 solver

I have an AIG (and-inverter graph) which I keep modifying and whose satisfiability I need to check incrementally using Z3. I can generate a CNF representation of the AIG and would ideally like to feed these clauses directly to the solver and call it repeatedly from my code. Is there some way that I can directly add clauses (or an AIG) to the Z3 solver through the C/C++ APIs?
Yes, you can simply add new assertions, which are internally translated into clauses.
Note that for many incremental solving problems, Z3 does not use an off-the-shelf, dedicated SAT solver, but its own SMT solver, which incorporates some of the features of SAT solvers, but not all, and which natively handles non-Boolean problems. So it is not necessarily the case that hacking the solver to inject clauses directly will translate into significantly improved performance.
Z3 also has a dedicated Boolean-only SAT solver, and if you're solving purely Boolean problems, this solver is likely much faster. You can force Z3 to use it by replacing (check-sat) with (check-sat-using sat), or by running the tactic called 'sat'. The implementation of this solver is in sat_solver.h/.cpp, which would be the prime place to start looking around if you'd like to hack it.
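As a small illustration of both points (assuming Z3's SMT-LIB front end), each CNF clause becomes one Boolean assertion, and the final command routes the problem to the dedicated SAT engine:

```
; each (assert ...) adds one clause of the CNF
(declare-const a Bool)
(declare-const b Bool)
(assert (or a (not b)))   ; clause (a OR NOT b)
(assert (or (not a) b))   ; clause (NOT a OR b)
(check-sat-using sat)     ; instead of plain (check-sat)
```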
Z3 also uses its own implementation of AIGs as a pre-processing step in some tactics; see aig_tactic.h/.cpp.

Computation time of LP Relaxation of an IP higher than optimizing the IP itself

This is a follow up of my previous question on LP Relaxation of a MIP using SCIP.
Though I'm able to compute an LP relaxation of my MIP by simply passing the MIP (in CPLEX format) to SoPlex, I observe that the computation time taken by SoPlex is higher than that of optimizing the MIP with SCIP itself (testing on smaller inputs).
Since SCIP uses SoPlex internally while solving the MIP, how is this possible?
Moreover, my LP relaxation is actually giving integer solutions, with the same objective value as the MIP. Am I making a mistake in the LP relaxation? Or is it some property of my problem/formulation?
I'm referring to the total computation time printed by the solvers (not computed myself).
This behaviour most likely comes from SCIP's presolving routines, which shrink and reformulate the input MIP. You can verify this by looking at the SCIP output after starting the optimization, where SCIP prints the number of removed variables, removed constraints, etc.
Sometimes, integer formulations allow for stronger problem reductions.
If your problem contains, e.g., binary variables, many of them might get fixed when probing is performed: SCIP tentatively fixes each binary variable to 0 and to 1, and if one of the two fixings leads to an inconsistency, the variable gets fixed to the other value.
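As a toy illustration of that probing step (a hypothetical two-variable model, not SCIP code):

```python
# Binary x, y with the constraint x + y >= 1, where y has already been
# fixed to 0 earlier in presolving.
def feasible(x, y):
    return x + y >= 1

y = 0  # already fixed
# Probe both values of x; keep only the ones that stay consistent.
candidates = [v for v in (0, 1) if feasible(v, y)]
if len(candidates) == 1:
    x = candidates[0]  # probing fixes x to the only consistent value
```

Here probing deduces x = 1, shrinking the problem before any LP is ever solved; the pure LP relaxation handed to SoPlex gets none of these reductions.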
This behavior can be explained by the different presolving steps. SCIP's presolving is usually faster and removes more rows and columns than that of SoPlex. Please have a look at the respective information in the statistics. You can display the SCIP statistics in the interactive shell by typing display statistics, whereas SoPlex prints more information with the command line parameter -q (if you're using SoPlex 2.0).
Another thing you may try is parameter tuning. Have you tested different pricers (-p3 for devex, -p4 for steepest edge) or scalers (-g1, -g3, or -g4) in SoPlex? Run SoPlex without a problem to solve and it will print the available parameters.

Optimization Routine in Fortran 90

I am doing (trying to do) numerical optimization in Fortran 90, on a Windows 7 machine with the gfortran compiler. I have a function, pre-written by someone else, which returns the log-likelihood of a function, given a large set of parameters (about 60 in total) passed in. I am trying to replicate someone's results, so I know the final parameter values, but I want to try to re-estimate them and, eventually, extend their model and use different data. I've been trying the uobyqa.f90 routine available here, which has not been particularly successful so far.
My questions are: First, for an optimization problem with a large number of parameters (over 60), can anyone suggest the best freely available routine? Derivatives are not available and would be costly to estimate numerically, hence trying the uobyqa routine first. Also, would parallelization help significantly with this problem? And, if so, could anyone suggest an optimization routine that already implements parallelization using OpenMP?
Thanks!
I don't have a good suggestion for a specific optimization strategy, but the NLopt package has a few derivative-free optimizers that can handle larger numbers of variables. Worth checking out. I've found the Fortran interface to be very easy to use.
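For intuition about what a derivative-free local optimizer does, here is a toy coordinate search in Python. It is a sketch only, far cruder than UOBYQA's quadratic models or NLopt's algorithms, but it shows the pattern: probe points, keep improvements, shrink the step:

```python
def coordinate_search(f, x0, step=0.5, tol=1e-6, max_iter=100000):
    """Toy derivative-free minimizer: probe +/- step along each coordinate,
    keep any move that lowers f, halve the step when nothing improves."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break  # move on to the next coordinate
        if not improved:
            step *= 0.5
        it += 1
    return x, fx

# Minimize a smooth bowl with its minimum at (1, -2).
x, fx = coordinate_search(lambda v: (v[0] - 1)**2 + (v[1] + 2)**2, [0.0, 0.0])
```

Each iteration costs about 2n function evaluations for n parameters, which is why, at 60+ parameters with an expensive log-likelihood, the per-evaluation cost dominates and a smarter model-based method (or parallel evaluation of the probes) pays off.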
Do a regular (published academic) literature search on this question first.
Maybe try including "LAPACK" with your other search terms (e.g. "optimization", "uobyqa", etc.) to see relevant work by other parties.

Is there any free ITERATIVE linear system solver in c++ that allows me to feed in an arbitrary initial guess?

I am looking for an iterative linear system solver to calculate a continuously changing field. For the simulation to work properly, I need to re-calculate the field (maybe several times) for every time step. Fortunately, I have a good initial guess for each time step, so it would be better if I could feed it into an iterative solver. Also, the coefficient matrix is very dense.
The problem is, I checked several iterative solvers online, like Gmm++, IML++, ITL, DUNE/ISTL and so on. They are either for sparse systems or don't provide interfaces for supplying initial guesses (I might be wrong, since I haven't had time to go through all the documentation).
So I have two questions:
1. Is there any such C++ solver available online?
2. Since the coefficient matrix can be as large as thousands × thousands, could a direct solver be quicker than an iterative solver with a really good initial guess?
Many thanks!
If you check the header for Conjugate Gradient in IML++ (http://math.nist.gov/iml++/cg.h.txt), you'll see that you can very easily provide the initial guess: the variable where you'd expect to get the solution is used as the starting point and is overwritten with the result.
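For illustration, here is a minimal conjugate-gradient sketch in Python (not IML++ itself) following the same convention: the vector you pass in is the initial guess and is refined into the solution. It assumes a symmetric positive-definite matrix; a good guess means a small initial residual and fewer iterations:

```python
def cg(A, b, x0, tol=1e-12, max_iter=1000):
    """Conjugate gradient for a symmetric positive-definite dense matrix A
    (nested lists). x0 is the initial guess, used instead of a zero start."""
    n = len(b)
    def matvec(v):
        return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    x = list(x0)                                   # start from the guess
    r = [bi - Axi for bi, Axi in zip(b, matvec(x))]  # initial residual
    p = list(r)
    rs = dot(r, r)
    for _ in range(max_iter):
        if rs < tol:
            break
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# SPD system [[4,1],[1,3]] x = [1,2]; exact solution is (1/11, 7/11).
x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], x0=[0.1, 0.6])
```

Note that plain CG requires a symmetric positive-definite matrix; for general dense systems you would use a method such as GMRES or BiCGSTAB, which accept an initial guess in the same way.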