Nonzero reduced cost: Is the lower or upper bound active? - c++

The reduced cost gives the dual variable corresponding to a box constraint of a variable, as was pointed out in the related answer to this question: LP: positive reduced costs corresponding to positive variables?
How do I know whether the lower or upper bound is the active constraint? Of course, I could check whether the difference between the variable value and its bounds is smaller than some epsilon. However, the choice of such an epsilon is unclear to me, because a model could potentially attempt to fix a variable by setting the lower bound equal to the upper bound. In such a case, no epsilon could unambiguously indicate which of the bounds is the active one.
Does CPLEX provide information about which bound is active in its C++ API? Does any other LP solver do so? Is there another trick to figure out the active bound?
Thank you.

To a large extent you can look at the sign. The rule for reduced costs is:
              Basic   Non-basic at LB   Non-basic at UB
minimization    0          >= 0              <= 0
maximization    0          <= 0              >= 0
Some problems with this:
Not all solvers may follow this (especially when maximizing).
Degeneracy may make things difficult.
Most solvers will give you access to the basis status. E.g., CPLEX has BasisStatus, which gives you exactly the basis status of a variable: Basic, AtLower, AtUpper or FreeOrSuperbasic.
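The sign rule above can be sketched as a small helper. This is a hypothetical function (not part of any solver's API) that applies the table mechanically; the caveats about solver conventions and degeneracy still apply, so treat a near-zero reduced cost as inconclusive:

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: guess which bound is active from the sign of a
// nonzero reduced cost, following the sign table above. 'minimize' picks
// the row of the table. A (near-)zero reduced cost is inconclusive: the
// variable may be basic, or the solution may be degenerate.
std::string likelyActiveBound(double reducedCost, bool minimize,
                              double tol = 1e-9) {
    if (reducedCost > tol)  return minimize ? "lower" : "upper";
    if (reducedCost < -tol) return minimize ? "upper" : "lower";
    return "inconclusive";  // basic or degenerate
}
```

For a robust answer, prefer the solver's basis-status query over this sign heuristic whenever it is available.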

Related

How to optimize nonlinear function with some constraint in c++

I want to find variables in C++ that maximize a given nonlinear formula under a constraint.
The task is to compute the maximum value of the formula below, subject to the given constraint, in C++.
You may also use a library (e.g. NLopt).
formula: ln(1 + a*x + b*y + c*z)
a, b, c are numbers input by the user
x, y, z are the variables to be derived
the constraint is that x, y, z are positive and x + y + z <= 1
This can actually be transformed into a linear optimization problem, because ln is monotonically increasing:
max ln(1 + ax + by + cz)  <-->  max (ax + by + cz)  s.t.  ax + by + cz > -1
So this is a linear optimization problem (with one extra constraint) that you can easily handle with whatever C++ method you like, together with your convex optimization knowledge.
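For this particular feasible set the linearized problem even has a closed-form solution, which makes a nice sanity check (a sketch; a general LP would need a solver such as NLopt or CLP): since x, y, z >= 0 and x + y + z <= 1, the maximum of ax + by + cz is attained either at a vertex with all weight on the largest positive coefficient, or at the origin if all coefficients are non-positive.

```cpp
#include <algorithm>
#include <cmath>

// Sketch: because ln is monotonic, maximizing ln(1 + a*x + b*y + c*z) over
// x, y, z >= 0 with x + y + z <= 1 reduces to maximizing the linear part.
// Over this simplex the linear optimum is max(0, a, b, c): put weight 1 on
// the best positive coefficient, or set x = y = z = 0 otherwise. Note the
// optimum is >= 0, so the domain condition 1 + ax + by + cz > 0 holds there.
double maxFormula(double a, double b, double c) {
    double best = std::max({0.0, a, b, c});  // optimal value of a*x+b*y+c*z
    return std::log(1.0 + best);             // ln at the optimum
}
```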
Reminders for writing good code:
Check the validity of the input values.
Since the input values can be negative, you need to consider this circumstance, which can yield different results.
P.S.
This problem seems to be Off Topic on SO.
If it is your homework, it is for your own good to write the code yourself. Besides, we do not have enough time to write that for you.
This should have been a comment if I had more reputation.

linear programming line removal optimal solution

How would I prove that the constraints in line (35.19) are redundant in this linear program, i.e. that if we remove them from lines (35.17)–(35.20), any optimal solution to the linear program must satisfy x(v) <= 1 for each v ∈ V?
LP
I think I need to use the relaxation version of linear program but am not sure.
Beyond that I am not sure how I would prove this.
You're already looking at a relaxed version of the program, otherwise your variables would be constrained to being members of a set.
Likely you need to consider the dual of this program, which will convert >= constraints into <= constraints.
Your claim is (almost) true assuming that the weights w are non-negative (otherwise, if some w_v is negative, then removing x_v <= 1 will allow x_v to go to infinity, and the problem will be unbounded).
Now, assume that in an optimal solution there exists one x_v = 1 + ε, with ε > 0. If we change this solution to one in which x_v = 1, then the problem is still feasible, and the objective value is no worse than what it used to be before, so it is also optimal.
This proves that there exist optimal solutions in which x_v <= 1 (if bounded optimal solutions exist at all).
Thus, while it is not true that every optimal solution has x_v <= 1, it is true that every finite optimal solution can be converted to a solution with x_v <= 1 without loss of generality, and in this sense the constraints are redundant.
For reasons I do not consider here, regular solvers will most likely return a solution in which x_v <= 1 anyway, because of the way they work (they return basic solutions).
I hope this helps.

Enumerate some extreme points near optimum solution

I am looking for a simple way to obtain many "good" solutions to an LP problem (not a MIP) with CPLEX, and not only (one of the) optimal basic solution(s). By "good" solutions I mean solutions whose objective values are not far from the true optimal value. Such a pool of solutions could help the decision-maker...
More precisely, given a polyhedron Ax <= b with x >= 0 and an objective function z = cx that I want to maximize: after solving the LP, I obtain the optimal value z*. Then I want to enumerate all the extreme points of the polyhedron given by the set of constraints
Ax <= b
cx >= z* - epsilon
x >= 0
where epsilon is a given tolerance.
I know that CPLEX offers a way to generate a solution pool (see here), but it will not work here because that method is for MIPs: it enumerates all the solutions of an IP (or one solution for every given set of fixed integer variables if the problem is a MIP).
An interesting and efficient way is to visit the solutions adjacent to the optimal basic solution, i.e. all the adjacent extreme points: assuming the polyhedron is not degenerate, for each pair of a basic variable x_B and a non-basic variable x_N, I compute the basic solution obtained when x_B leaves the basis and x_N enters the basis. Then I discard the solutions with cx < z* - epsilon, and for the others I repeat the procedure. [I know that I could improve this algorithm, but this is the general idea.]
The routine CPXpivot of the Callable Library could help with this pivoting operation, but I did not find an equivalent in the C++ API (Concert Technology). Does anyone know whether such an equivalent exists, or can you suggest another way to solve my original problem?
Thanks a lot :) !
Rémi L.
There is one interesting way to make this suitable for use with the Cplex solution pool. Use binary variables to encode the current basis, e.g. basis[k] = 0 meaning nonbasic and basis[k] = 1 indicating variable (or row) k is basic. Of course we have sum(k, basis[k]) = m (number of rows). Finally we have x[k] <= basis[k] * upperbound[k] (i.e. if nonbasic then zero -- assuming positive variables). When we add this to the LP model we end up with a MIP and can enumerate (all or some, optimal or near optimal) bases using the Cplex solution pool. See here and here.
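The basis-encoding idea above can be written out as a MIP (a sketch of the reformulation; here U_k denotes a finite upper bound on variable k, m the number of rows, and the binaries β_k cover structural variables and row slacks alike):

```latex
\begin{align*}
\max\ & c^{T} x \\
\text{s.t.}\ & A x \le b \\
& c^{T} x \ge z^{*} - \varepsilon \\
& x_k \le U_k \,\beta_k
   && \text{(nonbasic} \Rightarrow \text{zero, assuming } x \ge 0\text{)} \\
& \textstyle\sum_k \beta_k = m \\
& \beta_k \in \{0,1\}, \quad x \ge 0
\end{align*}
```

Each feasible assignment of the β variables then corresponds to a candidate basis, so the MIP solution pool enumerates (near-)optimal bases of the original LP.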

Another way to calculate double type variables in c++?

Short version of the question: overflow or timeout occurs with the current settings when calculating with large int64_t and double values; is there any way to avoid this?
Test case:
If the only demand is 80,000,000,000, the problem is solved with the correct result. But if it is 800,000,000,000, an incorrect 0 is returned.
If the input has two or more demands (meaning more inequalities need to be calculated), smaller values will also cause incorrect results; e.g., three equal demands of 20,000,000,000 trigger the problem.
I'm using the COIN-OR CLP linear programming solver to solve some network flow problems. I use int64_t to represent link bandwidth, but CLP uses double most of the time and cannot easily be switched to other types.
When the values of the variables are not that large (typically smaller than 10,000,000,000) and the constraints (inequalities) are relatively few, it gives the solution I want. But if either of these factors increases, the tool stops and returns an all-zero solution. I suspect the computation exceeds some internal limit, so the program breaks down at some point (it uses the simplex method).
The inequality is some kind of:
totalFlowSum <= usePercentage * demand
I changed it to
totalFlowSum - usePercentage * demand <= 0
Since totalFlowSum and demand are very large int64_t values and usePercentage is a double, if there are too many constraints like this (several or more), or if the demand is larger than 100,000,000,000, the returned solution will be wrong.
Is there any way to correct this, e.g. by increasing some threshold or avoiding calculations of this magnitude?
Some loss of accuracy is acceptable. One possible solution is to scale the inputs down by a factor of 1,000 and scale the outputs back up by 1,000, but this is kind of naïve and may require too many code modifications in the program.
Update:
I have changed the formulation to
totalFlowSum / demand - usePercentage <= 0
but the problem still exists.
Update 2:
I divided usePercentage by 1000, changing its coefficient from 1 to 0.001, and it worked. But if I also divide totalFlowSum/demand by 1000 at the same time, there is still no result. I don't know why...
I changed the rhs of the inequalities from 0 to 0.1, and the problem was solved! Since the inputs are very large, a 0.1 offset won't affect the solution at all.
I think the reason is that the previous coefficients were badly scaled, so the solver failed to find an exact answer.
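The scaling fix described above can be done systematically. This is a sketch of manual row scaling (an assumption about what helps here, not CLP's own preprocessing): divide every coefficient of a constraint row, and its right-hand side, by the row's largest absolute coefficient so the row is expressed in O(1) magnitudes. With raw coefficients around 1e11 and a typical solver feasibility tolerance around 1e-7, the tolerance is vanishingly small relative to the data, which is exactly the bad scaling described above.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical helper: scale a constraint row (coefficients plus rhs) by
// its largest absolute coefficient, so all entries land near magnitude 1.
// Apply the inverse factor to recover original units from the solution.
void scaleRow(std::vector<double>& coeffs, double& rhs) {
    double maxAbs = 0.0;
    for (double c : coeffs) maxAbs = std::max(maxAbs, std::fabs(c));
    if (maxAbs == 0.0) return;  // all-zero row: nothing to scale
    for (double& c : coeffs) c /= maxAbs;
    rhs /= maxAbs;
}
```

Many solvers (CLP included) also expose automatic scaling options, which are worth trying before hand-rolling this.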

Linear programming - multi-objective optimization. How to build single function

I have an LP problem.
It is similar to an assignment problem for workers.
I have three assignments with constraint scores (constraint 1 is more important than constraint 2):
            Constraint1   Constraint2
assign1         590             5
assign2         585           580
assign3         585           336
My current greedy solver compares assignments by the first constraint; the best one becomes the solution. The solver compares constraint 2 if and only if it has found two assignments with the same score for the previous constraint, and so on.
For the given example, in the first round assign2 and assign3 are chosen because they share the lowest constraint score. After the second round, the solver chooses assign3. So I need a cost function that represents this behavior.
So I expect
assign1_cost > assign2_cost > assign3_cost.
Is it possible to do this?
I believe that I can apply some weighted cost function or something like that.
There are two ways I know of to do this simply:
Assume that you can put an upper bound on each objective function (besides the first). Then you rewrite your objective as (objective_1 * max_2 + objective_2) * max_3 + objective_3, and so on, and minimize that combined objective. For your example, say you know that objective_2 will be less than 1000. You can then solve, minimizing objective_1 * 1000 + objective_2. This may fail with some solvers if your upper bounds are too large.
This method requires n solver iterations, where n is the number of objectives, but it is general: it doesn't require you to know upper bounds beforehand. Solve, minimizing objective_1 and ignoring the other objectives. Add the constraint objective_1 == <value you just got>. Solve, minimizing objective_2 and ignoring the remaining objectives. Repeat until you've gone through every objective function.
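The first (weighted) method can be sketched for the data in the question, assuming every constraint-2 score stays below 1000: combine the scores as score1 * 1000 + score2, so minimizing the combined cost discriminates by constraint 1 first and by constraint 2 only to break ties.

```cpp
// Sketch of the lexicographic weighting from the first method. The bound
// of 1000 on constraint-2 scores is an assumption about this data set;
// if a score could reach the bound, the tie-breaking order would break.
long long combinedCost(long long score1, long long score2) {
    const long long upperBound2 = 1000;  // assumed strict bound on score2
    return score1 * upperBound2 + score2;
}
```

With the table's data this reproduces the greedy solver's ranking: assign1 (590005) costs more than assign2 (585580), which costs more than assign3 (585336), so minimizing the combined cost selects assign3.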