I am trying to solve an integer linear program (ILP) with the solver IBM ILOG CPLEX in C++. The solver states that the problem is infeasible and points out the index of a violated constraint. My question concerns the identification and analysis of this constraint in C++.
A manual approach to analyzing the constraint would be to export the problem to a text file using exportModel and look up the violated constraint in that file.
Preferably, I would like to get the index of the violated constraint in C++ and get as much information about this conflict as possible.
Currently, I am using the conflict refiner but do not get any useful information out of it. Specifically, I keep an IloRangeArray of all constraints I ever add to the model, call refineConflict for this array and then use the function getConflict to query (possibly) violated constraints. The result is that all constraints I ever added are possibly violated and no constraint is proved to be violated.
How can I access the index of the one constraint reported in the error message that states that the problem is infeasible?
Also, am I using the conflict refiner incorrectly? For example, am I doing something wrong when I make copies of constraints that I add to the model in a separate array? (The copy constructor and assignment operator of certain classes in CPLEX seem to have non-standard behavior that I do not understand.)
Any help is appreciated.
I've not tried to use the conflict refiner API. Probably should look into it... but I use the conflict refiner a lot in the standalone interactive CPLEX. I am not aware of any issues with keeping copies of the constraints in your own code - I have done it before in CPLEX & Concert with C++. It may be a conceptual misunderstanding of what the conflict refiner does...
Remember that it is very rare to have a single identifiable infeasible constraint. It is much more common that there is a set of constraints that cannot be satisfied together, but if any one of that set is removed then the rest become feasible. This is usually called an "irreducible infeasible set" (IIS).
Think for example of three constraints:
a >= b + 1
b >= c + 1
c >= a + 1
Clearly these three constraints cannot be satisfied simultaneously, but take any one away and the other two are then OK. It can be very hard to decide which constraint is wrong in some cases, and really depends on a deeper understanding of the problem and its model.
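Finding such a minimal conflicting set is exactly what the conflict refiner does. As a rough illustration of the idea (plain Python with brute-force feasibility checks standing in for CPLEX; this is a conceptual sketch, not how the refiner is implemented):

```python
from itertools import combinations, product

# The three conflicting constraints from the example, as predicates.
constraints = {
    "a >= b + 1": lambda a, b, c: a >= b + 1,
    "b >= c + 1": lambda a, b, c: b >= c + 1,
    "c >= a + 1": lambda a, b, c: c >= a + 1,
}

def feasible(subset, domain=range(-5, 6)):
    """Brute-force check: does any (a, b, c) satisfy every constraint in subset?"""
    return any(all(constraints[name](a, b, c) for name in subset)
               for a, b, c in product(domain, repeat=3))

# All three together are infeasible...
assert not feasible(constraints.keys())
# ...but any two alone are satisfiable, so the full set is an
# irreducible infeasible set: no single constraint is "the" culprit.
for pair in combinations(constraints, 2):
    assert feasible(pair)
print("all three together: infeasible; any two alone: feasible")
```

This is why the refiner reports a set of constraints rather than proving one particular constraint wrong.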
Anyway, try exporting the model as an LP, MPS or SAV format file and read it into the standalone CPLEX optimiser. Then optimise it - it should also fail with a reported infeasibility. Then run the conflict refiner and then display the computed (irreducible) infeasible set:
read fred.lp
optimize
conflict
display conflict all
I find that MPS files are better at preserving the full precision of the problem and are probably more portable to try with other solvers, but LP files are much more human-readable. The SAV file format is supposed to be the most accurate copy of what CPLEX has in memory, but is very opaque and rather CPLEX-specific. If your problem is clearly infeasible the LP format is probably nicer to work with, but if the problem is borderline infeasible you may get different behaviour from the LP file. It would probably also help you a great deal to name all your variables and constraints. Maybe just do the naming in debug builds, or add a flag to control whether or not to do the additional naming.
I am a PhD candidate in data mining, and I have to create a global constraint with OR-Tools for a data mining purpose.
The problem is that there is a lack of documentation on the internet about creating your own global constraint with CP-SAT, and I don't know how to start.
It is obviously possible, but very tedious, and very complex.
Writing a new constraint implies:
- extending the proto to support the constraint
- writing the input validation
- writing the solution checker
- writing the code that loads the constraint into the CP-SAT engine
- writing the presolve rules
- writing the propagation code, which is complex as every deduction needs to be fully explained
- writing the linearization/cut generation code
The last 3 items are extremely error prone and very hard to debug, as the effects of cuts and explanations are delayed, and sometimes never used.
For these reasons, I recommend expanding the constraint into smaller ones. In fact, most of the CP constraints are expanded (alldiff, element, table, reservoir, inverse, automaton, some products, some modulos).
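To illustrate what "expanding" means, here is the classic alldiff example sketched in plain Python (no OR-Tools involved; the two functions are illustrative, not CP-SAT internals):

```python
from itertools import combinations, product

def alldiff(values):
    # The "global" constraint: all values pairwise distinct.
    return len(set(values)) == len(values)

def alldiff_expanded(values):
    # Expansion into primitive constraints: one x[i] != x[j] per pair,
    # which is how a model without a native alldiff could encode it.
    return all(values[i] != values[j]
               for i, j in combinations(range(len(values)), 2))

# Exhaustively check the expansion is equivalent on 3 variables over {0..3}.
for assignment in product(range(4), repeat=3):
    assert alldiff(assignment) == alldiff_expanded(assignment)
print("alldiff and its pairwise expansion agree on all assignments")
```

The expanded form may propagate less strongly than a dedicated global constraint, but it reuses code paths that are already tested and debugged.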
You can also submit a feature request for a new constraint. It can happen if it is useful/general enough.
Thanks
I am writing a Clojure library, and I am thinking of using clojure.spec. Is it good practice to use spec/valid? on function inputs at runtime? Or should I use destructuring along with type hints? I am concerned that there will be a performance penalty, and that it might be considered bad use of spec and generally bad practice.
It is always appropriate to check for valid inputs "where appropriate". What does that mean? That is an open question.
In general, the most "dangerous" inputs will be coming from the outside world, at the boundaries of your program. This means input from the web/browser, or perhaps from another service.
The "safest" inputs are deep inside your program, where (presumably) you have more control and confidence. For example, you would (probably) never check the inputs to the + function to ensure each arg was a number. And, even if an invalid arg slipped by (a string, perhaps), the JVM would throw an exception right away.
For sure, you should have maximal checking when running unit tests, so at least you know your code is working properly in the "normal" path. I like to use Plumatic Schema for this as it is easy to have validation always run during unit tests, but is off by default during "production" runs.
Having said that, I often place assert statements at the beginning of functions where a bad input would be difficult to recognize or would really cause hard-to-diagnose problems later on.
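The boundary-versus-interior split is language-agnostic, so here is the same idea sketched in Python (all names are made up for illustration):

```python
def handle_request(raw_age):
    """Boundary: input from the outside world gets full validation."""
    if not isinstance(raw_age, int) or not (0 <= raw_age <= 150):
        raise ValueError(f"invalid age: {raw_age!r}")
    return _age_in_months(raw_age)

def _age_in_months(age):
    # Deep inside the program: a cheap assert documents the precondition
    # and fails fast on bugs, without repeating full validation per call.
    assert 0 <= age <= 150, "caller must pass a validated age"
    return age * 12

print(handle_request(2))  # valid input passes through
```

Full validation at the edge, cheap asserts in the interior: invalid data is rejected once, and any internal bug that smuggles a bad value past the boundary still fails close to its origin.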
Both of these techniques have saved me many times!
Note that even a strongly typed language like Java won't save you here. Typed variables may ensure that an input is a number, but the value could still be invalid, like passing a negative value to the sqrt function. Other functions may have even more restrictive inputs, such as only being valid for prime numbers, etc. Types cannot capture these detailed requirements. Only good design and good testing can prevent these problems.
Types & asserts cannot prevent all problems, but they are like guard rails on the road (or seatbelts & airbags). They may not be able to prevent a crash, but they can provide early warning and reduce the severity of the consequences.
P.S. You can see how I like to balance the trade-off between safety & cost of checking in a library I wrote. Be sure to see the file _bootstrap.clj which is a trick to turn on the Plumatic Schema tests only during unit tests.
I have a LP problem with some hard constraints and some soft constraints. I know slack variables can be used to emulate soft constraints (add slack variables in soft constraints and have a penalty to objective function). But this increases the number of variables in my LP.
Is there any other way to add soft-constraints in gurobi?
Gurobi Optimizer has no special feature for soft constraints. You should add them via slack or surplus variables. And even if it did, it would simply add the slack or surplus variables to your model.
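The slack-variable transformation looks like this as a solver-agnostic Python sketch (no Gurobi involved; a brute-force grid search stands in for the LP solve, and the numbers are invented for illustration):

```python
# Soft constraint: x <= 5 "if possible". Rewrite it as the hard
# constraint x - s <= 5 with slack s >= 0, and charge PENALTY * s
# in the objective for any violation.
PENALTY = 10.0

def objective(x, profit_per_unit=3.0):
    s = max(0.0, x - 5.0)            # slack needed so that x - s <= 5 holds
    return profit_per_unit * x - PENALTY * s

# Brute-force "solve" over a grid (a real LP solver would do this exactly).
best_x = max((i * 0.5 for i in range(41)), key=objective)
print("optimal x:", best_x)
```

With the penalty rate above the marginal profit, the optimum sits exactly at the soft bound x = 5; shrink the penalty below the profit rate and the solver will happily violate the soft constraint, which is the intended behaviour.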
Too long to fit as a comment so I post it here.
One thing you may want to try is multiple or hierarchical objectives, which Gurobi allows you to have (see here).
This can be similar to having soft constraints (this might be useful).
Do not worry too much about increasing the number of variables: it is not in itself a problem in most cases.
I am learning NUnit 2.6.3 by reading the documentation, and I have a few doubts about it.
What is the difference between the classical model and the constraint model assertion?
Which model of assertions is the best one, and why?
The main difference is syntactic. It's the difference between (classic):
Assert.AreEqual("expected", someString);
And (constraint):
Assert.That(someString, Is.EqualTo("expected"));
Classic mode has been around longer and some people believe that it's more explicit and easier to follow.
Other people believe the constraint based approach is closer to the way that you might say the constraint if you were explaining it to somebody else.
If you're just getting started, then the constraint-based assertions are probably the better ones to learn, since they're the direction NUnit appears to be heading in. They're also closer to FluentAssertions. The constraint-based assertions also have more explicit support for extension through the use of the IResolveConstraint interface.
You should however probably gain an awareness of the classic assertions since there's a good chance that different places you encounter code may use either depending on what they used first.
Although the syntax is different, what they're doing is very similar, so if you understand one set of assertions, converting them back and forth is pretty straightforward.
I have an annoying dependency problem between components, and I would like to hear several ways to resolve it.
Basically I have 3 components that depend on each other almost acyclically, except for a small dependency between the first and the last component. Concretely, this is a JIT compiler, but hopefully it is a widely occurring type of abstract dependency which may happen in other circumstances.
The components are basically in sequence of flow dependency: source/AST generation, code generation and runtime. As is clear from the diagram, errors generated at runtime should be able to communicate IDs that can be correlated to source location items. The tricky part is that this ID is not necessarily an integer type (although it can be). Until now, SourceItemID was a type internal to the Source component, but now it seems it needs to be defined outside of it.
What would be optimal patterns to use here? I was thinking in maybe templatizing the runtime error type with the desired Source location id.
The simplest solution is to define all the types and common behavior that is used by your modules in an independent unit (possibly a single header), that all the real processing units use.
For minimum overhead/headaches and compatibility issues (these shared types could be useful elsewhere at some point for communication with other apps/plugins/whatever), try to keep those types POD if you can.
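A minimal sketch of that layering, in Python for brevity (in C++ the shared unit would be a small standalone header defining SourceItemID as a POD struct; all names here are illustrative):

```python
# common_types -- the independent "shared header": only plain data,
# with no dependency on any of the three components.
from dataclasses import dataclass

@dataclass(frozen=True)
class SourceItemID:
    file: str
    offset: int

# Source component: produces IDs for source locations.
def make_item(file, offset):
    return SourceItemID(file, offset)

# Runtime component: errors carry the shared ID type, so the runtime
# depends only on common_types, never back on the Source component.
class RuntimeFault(Exception):
    def __init__(self, message, item_id):
        super().__init__(message)
        self.item_id = item_id

err = RuntimeFault("division by zero", make_item("main.src", 42))
print(err.item_id)
```

The cycle is broken because both ends of the former dependency now point downward at the shared-types unit instead of at each other.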
"Templatizing" things is not trivial. It is very powerful and expressive, but if you're looking at removing dependencies, my opinion is: try to see if you can make things simpler first.