If condition in Cplex objective function - linear-programming

I am new to CPLEX. I'm solving an integer programming problem, but I have a problem with the objective function.
The problem is that I have a project with a due date D, and if the project is tardy, I incur a tardiness penalty b, so the penalty term looks like b*(cn-D), where cn is the actual completion time of the project and is a decision variable.
It must look like this:
if (cn - D) >= 0 then the penalty is b*(cn - D), otherwise the penalty is 0
I tried to use an "if-then" constraint, but it seems that it doesn't work with a decision variable.
I looked at questions similar to this one, but could not find a solution. Please help me define a correct objective function.

The standard way to model this is:
min sum(i, penalty(i)*Tardy(i))
Tardy(i) >= CompletionTime(i) - DueDate(i)
Tardy(i) >= 0
Tardy is a non-negative variable and can never become negative. The other quantities are:
penalty: a constant indicating the cost associated with job i being tardy one unit of time.
CompletionTime: a variable that holds the completion time of job i
DueDate: a constant with the due date of job i.
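For illustration, here is a minimal DOcplex (Python) sketch of this formulation; the job data are made up, and in a real model the completion-time variables would also be constrained by your scheduling constraints:

from docplex.mp.model import Model

# made-up data: due dates and tardiness penalties for 3 jobs
due = [10, 7, 12]
penalty = [2.0, 5.0, 1.0]
n = len(due)

m = Model(name="tardiness")
# CompletionTime(i): in a real model these are fixed by the scheduling constraints
completion = m.continuous_var_list(n, lb=0, name="completion")
# Tardy(i): non-negative by its lower bound, so it can never go below zero
tardy = m.continuous_var_list(n, lb=0, name="tardy")

# Tardy(i) >= CompletionTime(i) - DueDate(i)
for i in range(n):
    m.add_constraint(tardy[i] >= completion[i] - due[i])

# min sum(i, penalty(i) * Tardy(i))
m.minimize(m.sum(penalty[i] * tardy[i] for i in range(n)))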
The formulation above measures the sum. Sometimes we also want to measure the count: the number of jobs that are tardy. This is to prevent many jobs from being tardy. In the most general case one would have both the sum and the count in the objective, with different weights or penalties.
There is a virtually unlimited number of papers showing MIP formulations on scheduling models that involve tardiness. Instead of reinventing the wheel, it may be useful to consult some of them and see what others have done to formulate this.

Related

C++/ Help me understand the logic

I was told to solve this problem:
given that a1, ..., an are real numbers, I need to calculate min(a1, -a1*a2, a1*a2*a3, ..., (-1)^(n+1)*a1*a2*...*an)
but I cannot understand the logic of the task. Could you tell me what I should do?
For example, what is (-1)^(n+1)? I've never seen it before.
What you should do is:
use the n real numbers of input to ...
... calculate the n numbers defined by the quoted formula (though you only need one value at a time to be more efficient)
while doing so, keep track of the smallest number you encounter; that is the final result
concerning the (-1)^(n+1), it is reasonable to assume (as e.g. in the comment by Weather Vane and others) that it means powers of -1 (in a lazy and unexplained but non-C++ syntax)
note that you can easily calculate one value from the previous one by simple multiplication
probably you should do all of that by writing a program, an assumption based on the fact that you are asking on StackOverflow and tagged a programming language
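To make the running-product idea concrete, here is a minimal sketch (in Python rather than C++, just to keep it short; the input list is made up):

def min_alternating_product(a):
    # computes min(a1, -a1*a2, a1*a2*a3, ..., (-1)^(n+1) * a1*a2*...*an)
    best = None
    product = 1.0
    sign = 1
    for value in a:
        product *= value        # running product a1*...*ak
        term = sign * product   # (-1)^(k+1) * a1*...*ak
        sign = -sign
        best = term if best is None else min(best, term)
    return best

print(min_alternating_product([2.0, -3.0, 0.5]))  # terms are 2.0, 6.0, -3.0, so this prints -3.0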

Pyomo - How to consider all scenarios in the objective function within the ReferenceModel?

I am facing a problem in defining the objective function shown in Figure: Objective Function.
Here, p_n = 1/(number of scenarios) and W_n is the final wealth in scenario n. Scenarios are equiprobable, so p_n is known in advance.
The problem is: How can I define the sum of the cross-product of p_n and W_n over EVERY scenario, if the Deterministic ReferenceModel considers only one scenario? In other terms, given that the set of scenarios is added only after having defined the objective function in the deterministic model, how can I consider all the scenarios?
Moreover, I have to count the number of times in which the Final Wealth is lower than the Target Wealth, in a given pre-specified stage. How can I set the counter, if the Deterministic ReferenceModel considers just one scenario?
I am a new Pyomo user, and a quantitative finance student without much knowledge of Operations Research, so I apologize in advance if the questions are obvious or silly.
Thank you!

DoCPLEX Solving LP Problem Partially at a time

I am working on a linear programming problem with 800K constraints. The problem takes 20 minutes to solve, but if I solve it for half the horizon it takes just 1 minute. Is there a way in DOcplex to solve for a partial horizon and then use that solution to solve the other half of the problem, without using a for-loop?
Three suggestions:
load your problem as an LP or SAV file into the CPLEX interactive optimizer and run "display problem stats". This might show (or rule out) precision issues (an ill-conditioned problem). It will also output the number of nonzeros.
set the datacheck parameter to 2; this might detect numerical issues in the data
have you tried different LP algorithms? Using the lpmethod parameter you could try the primal, dual, or barrier algorithm to see whether one runs faster on your problem.
Reference:
https://www.ibm.com/support/knowledgecenter/SSSA5P_12.10.0/ilog.odms.cplex.help/CPLEX/Parameters/topics/LPMETHOD.html
In DOcplex:
model.parameters.read.datacheck = 2  # datacheck sits in the 'read' parameter group
model.parameters.lpmethod = 4  # 4 = barrier
From your answers, I can think of the following:
if you are solving a pure LP (is this true?), I see no point in rounding numbers (but yes, it would help in a MIP; try rounding coefficients whose fractional part is, say, less than 1e-7: 4.0000001 -> 4)
a condition number of 1e+14 denotes a serious modeling issue: a common source is mixing different objectives via weighting coefficients. Have you tried multi-objective optimization to avoid that?
Another source is big-M formulations, to which you should prefer indicator constraints. If you are not in these two cases, then try to rescale the data to keep the condition number in a smaller range...
Finally, you might try setting the Markowitz tolerance to 0.99, to add extra caution in simplex factorizations, but behavior may vary from one dataset to another...
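In DOcplex, those last suggestions would look roughly like this; the Markowitz parameter path is my assumption based on the standard CPLEX parameter tree, so check it against your version's documentation:

# extra caution in simplex factorizations (assumed path: simplex.tolerances.markowitz)
model.parameters.simplex.tolerances.markowitz = 0.99

# a tiny helper for the coefficient-rounding idea (only worthwhile in a MIP)
def clean_coefficient(c, tol=1e-7):
    return round(c) if abs(c - round(c)) < tol else c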

Difference LP/MIP and CP

What is the difference between Constraint Programming (CP) and Linear Programming (LP) or Mixed Integer Programming (MIP)? I know what LP and MIP are, but I don't understand how they differ from CP. Or is CP just the same as MIP and LP? I am a bit confused about this...
This may be a little exhaustive, but I will try to provide enough information to cover a good scope of this topic.
I'll start with an example, and the corresponding information will make more sense.
**Example**: Say we need to sequence a set of tasks on a machine. Each task i has a specific fixed processing time pi. Each task can be started after its release date ri, and must be completed before its deadline di. Tasks cannot overlap in time. Time is represented as a discrete set of time points, say {1, 2, …, H} (H stands for horizon).
MIP Model:
Variables: Binary variable xij represents whether task i starts at time period j
Constraints:
Each task starts on exactly one time point
∑j xij = 1 for all tasks i
Respect release date and deadline
xij = 0 for all tasks i and all time points j with (j < ri) or (j > di - pi)
Tasks cannot overlap
Variant 1:
∑i xij ≤ 1 for all time points j (we also need to take processing times into account; this becomes messy)
Variant 2:
introduce a binary variable bik representing whether task i comes before task k; it must be linked to xij, which also becomes messy
MIP models thus consist of a linear/quadratic objective function, linear/quadratic constraints, and binary/integer variables.
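As a rough sketch of what Variant 1 of this time-indexed MIP can look like in DOcplex (the task data are made up, and the overlap constraint is written out explicitly using the processing times):

from docplex.mp.model import Model

# made-up data: release dates, deadlines and processing times for 3 tasks
r = [0, 2, 1]
d = [6, 9, 8]
p = [3, 2, 4]
n = len(p)
H = 10  # time points 0..H-1

m = Model(name="time_indexed_mip")
# x[i, j] = 1 if task i starts at time point j
x = m.binary_var_dict([(i, j) for i in range(n) for j in range(H)], name="x")

for i in range(n):
    # each task starts at exactly one time point
    m.add_constraint(m.sum(x[i, j] for j in range(H)) == 1)
    # respect release date and deadline: no start outside the feasible window
    for j in range(H):
        if j < r[i] or j > d[i] - p[i]:
            m.add_constraint(x[i, j] == 0)

# tasks cannot overlap: at any time point j, at most one task may be running,
# i.e. may have started within its last p_i time points
for j in range(H):
    m.add_constraint(m.sum(x[i, k] for i in range(n)
                           for k in range(max(0, j - p[i] + 1), j + 1)) <= 1)

sol = m.solve()
if sol:
    for i in range(n):
        start = next(j for j in range(H) if sol.get_value(x[i, j]) > 0.5)
        print("task", i, "starts at time point", start)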
CP model:
Variables:
Let starti represent the starting time of task i; it takes a value from the domain {1, 2, …, H}. This immediately ensures that each task starts at exactly one time point.
Constraints:
Respect release date and deadline
ri ≤ starti ≤ di - pi
Tasks cannot overlap:
for all pairs of tasks i and j: (starti + pi < startj) OR (startj + pj < starti)
and that is it!
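For comparison, a rough sketch of this CP model with the CP Optimizer Python API (docplex.cp), using interval variables, which bundle the start time, the size, and the time-window bounds; the single no_overlap global constraint plays the role of all the pairwise disjunctions. The data are made up and CP Optimizer must be installed:

from docplex.cp.model import CpoModel, interval_var, no_overlap

# made-up data: the same three tasks as in the MIP sketch above
r = [0, 2, 1]
d = [6, 9, 8]
p = [3, 2, 4]

mdl = CpoModel(name="cp_scheduling")
# one interval variable per task: start in [r_i, d_i - p_i], fixed size p_i
tasks = [interval_var(start=(r[i], d[i] - p[i]), size=p[i], name="task_%d" % i)
         for i in range(len(p))]
# a single global constraint replaces all the pairwise disjunctions
mdl.add(no_overlap(tasks))

msol = mdl.solve(TimeLimit=10)
if msol:
    for i, tv in enumerate(tasks):
        print("task", i, "starts at", msol.get_var_solution(tv).get_start())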
You could probably say that the structure of CP models and MIP models is the same: decision variables, an objective function, and a set of constraints. Both MIP and CP problems are non-convex and make use of systematic and exhaustive search algorithms.
However, we see the major difference in modeling capacity. With CP we have n variables and one constraint. In MIP we have nm variables and n+m constraints. This way of mapping global constraints to MIP constraints using binary variables is quite generic.
CP and MIP solves problems in a different way. Both use a divide and conquer approach, where the problem to be solved is recursively split into sub problems by fixing values of one variable at a time. The main difference lies in what happens at each node of the resulting problem tree. In MIP one usually solves a linear relaxation of the problem and uses the result to guide search. This is a branch and bound search. In CP, logical inferences based on the combinatorial nature of each global constraint are performed. This is an implicit enumeration search.
Optimization differences:
A constraint programming engine makes decisions on variables and values and, after each decision, performs a set of logical inferences to reduce the available options for the remaining variables' domains. In contrast, a mathematical programming engine, in the context of discrete optimization, uses a combination of relaxations (strengthened by cutting planes) and "branch and bound."
A constraint programming engine proves optimality by showing that no better solution than the current one can be found, while a mathematical programming engine uses a lower-bound proof provided by cuts and linear relaxation.
A constraint programming engine doesn't make assumptions about the mathematical properties of the solution space (convexity, linearity, etc.), while a mathematical programming engine requires that the model fall into a well-defined mathematical category (for instance, Mixed Integer Quadratic Programming (MIQP)).
In deciding how you should define your problem, as MIP or CP, the Google Optimization Tools guide suggests:
If all the constraints for the problem must hold for a solution to be feasible (constraints connected by "and" statements), then MIP is generally faster.
If many of the constraints have the property that just one of them needs to hold for a solution to be feasible (constraints connected by "or" statements), then CP is generally faster.
My 2 cents:
There is no single answer as to which approach you should use to formulate your model and solve the problem. CP will probably work better when the number of variables increases a lot and the constraints are difficult to formulate using linear (in)equalities. If the MIP relaxation is tight, it can give better results; if your lower bound doesn't move enough while traversing your MIP tree, you might want to take stronger MIP formulations or CP into consideration. CP works well when the problem can be represented by global constraints.
Some more reading on MIP and CP:
Mixed-Integer Programming problems have some of the decision variables constrained to integer values (-n, …, 0, …, n) at the optimal solution. This makes it easier to define the problems in terms of a mathematical program. MP focuses on a special class of problems and is useful for solving relaxations or subproblems (vertical structure).
Example of a mathematical model:
Objective: minimize c^T x
Constraints: A x = b (linear constraints)
l ≤ x ≤ u (bound constraints)
some or all xj must take integer values (integrality constraints)
Or the model could be defined by quadratic functions or constraints (MIQP/MIQCP problems):
Objective: minimize x^T Q x + q^T x
Constraints: A x = b (linear constraints)
l ≤ x ≤ u (bound constraints)
x^T Qi x + qi^T x ≤ bi (quadratic constraints)
some or all x must take integer values (integrality constraints)
The most common algorithm used to solve MIP problems is the branch and bound approach.
CP:
CP stems from problems in AI, Operations Research, and Computer Science, and is thus closely affiliated with computer programming.
Problems in this area assign symbolic values to variables that need to satisfy certain constraints.
These symbolic values have a finite domain and can be labelled with integers.
The CP modelling language is more flexible and closer to natural language.
Quoting one of the IBM docs, Constraint Programming is a technology where:
business problems are modeled using a richer modeling language than what is traditionally found in mathematical optimization
problems are solved with a combination of tree search, artificial intelligence and graph theory techniques
The most common global constraint is the "alldifferent" constraint, which ensures that the decision variables assume some permutation (non-repeating ordering) of integer values. For example, if the problem has 5 decision variables, each with domain {1, 2, 3, 4, 5}, alldifferent allows them to take those values in any non-repeating order.
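A tiny illustration with docplex.cp (this assumes its integer_var and all_diff helpers and that CP Optimizer is installed):

from docplex.cp.model import CpoModel, integer_var, all_diff

mdl = CpoModel(name="alldifferent_demo")
# five variables, each with domain {1,...,5}; all_diff forces a permutation of these values
x = [integer_var(1, 5, name="x_%d" % i) for i in range(5)]
mdl.add(all_diff(x))

msol = mdl.solve()
if msol:
    print([msol.get_var_solution(v).get_value() for v in x])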
The answer to this question depends on whether you see MIP and CP as algorithms, as problems, or as scientific fields of study.
E.g., each MIP problem is clearly a CP problem, as the definition of a MIP problem is to find a(n optimal) solution to a set of linear constraints, while the definition of a CP problem is to find a(n optimal) solution to a set of (non-specified) constraints. On the other hand, many important CP problems can straightforwardly be converted to sets of linear constraints, so seeing CP problems through a MIP perspective makes sense as well.
Algorithmically, CP algorithms historically tend to involve more search branching and complex constraint propagation, while MIP algorithms rely heavily on solving the LP relaxation to a problem. There exist hybrid algorithms though (e.g., SCIP, which literally means "Solving Constraint Integer Programs"), and state-of-the-art solvers often borrow techniques from the other side (e.g., no-good learning and backjumps originated in CP, but are now present in MIP solvers as well).
From a scientific field of study point of view, the difference is purely historical: MIP is part of Operations Research, originating at the end of WWII out of a need to optimize large-scale "operations", while CP grew out of logic programming in the field of Artificial Intelligence to model and solve problems declaratively. But there is a good case to be made that both these fields study the same problem. Note that there even is a big shared conference: CPAIOR.
So all in all, I would say MIP and CP are the same in most respects, except on the main techniques used in typical algorithms for each.
LP and MIP are solved using mathematical programming, while there are specific methods to solve constraint programming problems. The following reference is helpful in understanding the differences:
http://ibmdecisionoptimization.github.io/docplex-doc/mp_vs_cp.html

Finding an optimal solution to a system of linear equations in c++

Here's the problem:
I am currently trying to create a control system which is required to find a solution to a series of complex linear equations without a unique solution.
My problem arises because there will only ever be six equations, while there may be upwards of 20 unknowns (usually far more than six). Of course, this will not yield a unique solution through standard Gaussian elimination or by putting them in a matrix and reducing it to reduced row echelon form.
However, I think that I may be able to optimize things further and get a more accurate solution because I know that each of the unknowns cannot have a value smaller than zero or greater than one, but it is free to take on any value in between them.
Of course, I am trying to create code that would find a correct solution, but in the case that there are multiple combinations that yield satisfactory results, I would want to minimize the sum of (value of unknown * efficiency constant) over all unknowns, i.e. the sum of x_i*e_i for i = 1 to n, but finding an accurate solution is of greater priority.
Performance is also important, due to the fact that this algorithm may need to be run several times per second.
So, does anyone have any ideas to help me on implementing this?
Edit: You might just want to stick to linear programming with equality and inequality constraints, but here's an interesting exact solution that does not incorporate the constraint that your unknowns are between 0 and 1.
Here's a powerpoint discussing your problem: http://see.stanford.edu/materials/lsoeldsee263/08-min-norm.pdf
I'll translate your problem into math to make things a bit easier to figure out:
you have a 6x20 matrix A and a vector x with 20 elements. You want to minimize (x^T)e subject to Ax = y. According to the slides, if you were just minimizing the norm of x, then the answer is A^T(AA^T)^(-1)y. I'll take another look at this as soon as I get the chance and see what the solution is to minimizing (x^T)e (i.e. your specific problem).
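A quick numpy sanity check of that least-norm formula (random made-up data; this ignores the e-weighting and the [0, 1] bounds):

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((6, 20))   # 6 equations, 20 unknowns (made-up data)
y = rng.random(6)

# minimum-2-norm solution of A x = y: x = A^T (A A^T)^(-1) y
x = A.T @ np.linalg.solve(A @ A.T, y)

print(np.allclose(A @ x, y))                  # True: the equations are satisfied
print(np.allclose(x, np.linalg.pinv(A) @ y))  # True: matches the pseudoinverse solution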
Edit: I looked in the powerpoint some more and near the end there's a slide entitled "General norm minimization with equality constraints". I am going to switch the notation to match the slide's:
Your problem is that you want to minimize ||Ax-b||, where b = 0 and A is your e vector and x is the 20 unknowns. This is subject to Cx=d. Apparently the answer is:
x = (A^T A)^-1 (A^T b - C^T (C (A^T A)^-1 C^T)^-1 (C (A^T A)^-1 A^T b - d))
It's not pretty, but it's not as bad as you might think. There really aren't that many calculations. For example, (A^T A)^-1 only needs to be calculated once, and then you can reuse the answer. And your matrices aren't that big.
Note that I didn't incorporate the constraint that the elements of x are within [0,1].
It looks like the solution for what I am doing is with Linear Programming. It is starting to come back to me, but if I have other problems I will post them in their own dedicated questions instead of turning this into an encyclopedia.
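For reference, the linear programming route mentioned above could be sketched with scipy.optimize.linprog (made-up data; it minimizes the efficiency-weighted sum subject to the six equations and the [0, 1] bounds):

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
C = rng.random((6, 20))   # the six equations (made-up coefficients)
d = C @ rng.random(20)    # a right-hand side known to be attainable with x in [0, 1]
e = rng.random(20)        # efficiency constants to minimize against

# minimize e.x  subject to  C x = d  and  0 <= x <= 1
res = linprog(c=e, A_eq=C, b_eq=d, bounds=[(0, 1)] * 20)
if res.success:
    print(res.x)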