How would I prove that the constraints in line (35.19) of this linear program, lines (35.17)–(35.20), are redundant, in the sense that if we remove them, any optimal solution to the linear program must still satisfy x(v)≤1 for each v∈V?
LP
I think I need to use the relaxed version of the linear program, but I am not sure.
Beyond that, I am not sure how I would prove this.
You're already looking at a relaxed version of the program; otherwise your variables would be constrained to be members of a discrete set (such as {0, 1}).
Likely you need to consider the dual of this program, which will convert >= constraints into <= constraints.
Your claim is (almost) true assuming that the weights w are non-negative (otherwise, if some w_v is negative, then removing x_v <= 1 will allow x_v to go to infinity, and the problem will be unbounded).
Now, assume that in an optimal solution there is some x_v = 1 + ε with ε > 0. If we change this solution to one in which x_v = 1, the solution is still feasible, and the objective value is no worse than before, so it is also optimal.
This proves that there exist optimal solutions in which x_v <= 1 (if bounded optimal solutions exist at all).
Thus, while it is not true that every optimal solution has x_v <= 1, it is true that every finite optimal solution can be converted to a solution with x_v <= 1 without loss of generality, and in this sense the constraints are redundant.
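For concreteness, assuming the covering constraints are the vertex-cover constraints x(u) + x(v) >= 1 of (35.18) (the CLRS formulation; adjust if your program differs), the exchange step can be written as:

\[
\begin{aligned}
&\text{let } \bar{x}(v) = 1,\quad \bar{x}(u) = x(u) \text{ for } u \neq v;\\
&\text{feasibility: for every edge } (u,v)\in E,\ \ \bar{x}(u) + \bar{x}(v) = x(u) + 1 \ge 1,\ \text{and all other constraints are untouched};\\
&\text{objective: } \textstyle\sum_u w(u)\,\bar{x}(u) = \sum_u w(u)\,x(u) - w(v)\,\varepsilon \le \sum_u w(u)\,x(u), \quad \text{since } w(v) \ge 0.
\end{aligned}
\]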
For reasons I do not consider here, regular solvers will most likely return a solution in which x_v <= 1 anyway, because of the way they work (they return basic solutions).
I hope this helps.
I want binary numbers that only have 0's at the beginning or end, for instance:
1111111
01111110
001111111
000111000
but no:
01001
0011101
Do they have a specific name, or a property I can use to get them?
I'm looking for something like linear integer optimization constraints: my solutions must have this form, but I can't think of any condition I can add to ensure that.
Regards,
This is something which is not nice to formulate within mixed-integer programs. Most problems involving this are more suitable for alternative methods (SAT-solving, SMT-solving, Constraint-programming).
It can be done of course, but the solver will have some work as the formulation is non-trivial and introduces a lot of binary-variables (and the basic approach of MIP-solvers won't work amazingly here; bad integrality-gap).
I won't give you a complete solution, but here is the basic idea of how to formulate this; I also indicate how hard and cumbersome it is (there are alternative formulations, actually infinitely many, but nothing much simpler).
The basic idea here is the following:
your binary number is constructed from N binary-variables
let's call them x
you introduce N auxiliary binary-variables l (left)
l[i] == 1 implies: every l[j] with i<j is 0
you introduce N auxiliary binary-variables r (right)
r[i] == 1 implies: every r[j] with i>j is 0
you add the following constraint for each position k:
x[k] == 0 implies: l[i] == 1 for i < k OR r[i] == 1 for i > k
Idea:
if there is a zero somewhere, then everything on its left side or everything on its right side is zero ("or" means: at least one side; it can be both)
To formulate this, you need two more ideas:
A: How to formulate the equality-check?
B: How to formulate the implication?
Remark: a -> b == not a or b (propositional calculus)
(this was wrongly stated earlier and corrected by OP)
These are common in MIP and you will find the solution in many integer-programming books, tutorials and papers. Here is an example (start with indicator-variables).
Another small common formulation:
if a is binary, b is binary:
a OR b is equivalent to: a+b >= 1 (the latter is a linear-expression ready to use for MIP)
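To make the idea above concrete, here is one possible sketch of a linearization (my own variant, using monotone prefix/suffix indicator variables; as the remark below says, check the indices and the math yourself):

\[
\begin{aligned}
& l_{i+1} \le l_i, \qquad r_i \le r_{i+1} && \text{($l$ marks a leading block of zeros, $r$ a trailing block)}\\
& x_i \le 1 - l_i, \qquad x_i \le 1 - r_i && \text{(a position inside a zero block must be 0)}\\
& x_i \ge 1 - l_i - r_i && \text{(a position covered by neither block must be 1)}\\
& x_i,\, l_i,\, r_i \in \{0,1\} && i = 1,\dots,N
\end{aligned}
\]

Because l is non-increasing and r is non-decreasing, the positions where both are 0 form a single contiguous block, which is exactly where the 1's are allowed (and forced) to be; a string like 0011101 then has no feasible l, r.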
Remark: The formulas in my idea-setting above might be wrong with regard to indices (i vs. i-1 vs. i+1) and binary relations (< vs. <=). You will need to do the actual math yourself and just learn from the idea itself!
Remark 2: This kind of constraint is cumbersome in MIP, but more easily formulated within SAT and CP.
I am looking for a simple way to obtain a lot of "good" solutions in an LP problem (not MIP) with CPLEX, and not only (one of the) optimal basic solution(s). By "good" solutions I mean that the corresponding objective values are not far from the true optimal value. Such a pool of solutions could help the decision-maker...
More precisely, given a polyhedron Ax <= b with x >= 0 and an objective function z = cx that I want to maximize: after running the LP, I obtain the optimal value z*. Then I want to enumerate all the extreme points of the polyhedron given by the set of constraints
Ax <= b
cx >= z* - epsilon
x >= 0
where epsilon is a given tolerance.
I know that CPLEX offers a way to generate a solution pool (see here), but it will not work here because this method is for MIP: it enumerates all the solutions of an IP (or one solution for every given set of fixed integer variables if the problem is a MIP).
An interesting, efficient way is to visit the solutions adjacent to the optimal basic solution, i.e. all the adjacent extreme points: if I suppose the polyhedron is not degenerate, for each pair of basic variable x_B and non-basic variable x_N, I compute the basic solution obtained when x_B leaves the basis and x_N enters the basis. Then I throw away the solutions with cx < z* - epsilon, and for the others I repeat the procedure. [I know that I could improve this algorithm, but this is the general idea.]
The routine CPXpivot of the Callable Library could help to do this pivoting operation, but I did not find an equivalent in the C++ API (Concert Technology). Does someone know if such an equivalent exists, or could someone propose another way to answer my original problem?
Thanks a lot :) !
Rémi L.
There is one interesting way to make this suitable for use with the Cplex solution pool. Use binary variables to encode the current basis, e.g. basis[k] = 0 meaning nonbasic and basis[k] = 1 indicating variable (or row) k is basic. Of course we have sum(k, basis[k]) = m (number of rows). Finally we have x[k] <= basis[k] * upperbound[k] (i.e. if nonbasic then zero -- assuming positive variables). When we add this to the LP model we end up with a MIP and can enumerate (all or some, optimal or near optimal) bases using the Cplex solution pool. See here and here.
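Written out, a rough sketch of the resulting MIP (with U_k a valid upper bound on x_k, m the number of rows, and epsilon the tolerance from the question) is:

\[
\begin{aligned}
& Ax \le b, \quad x \ge 0, \quad c^T x \ge z^* - \varepsilon\\
& x_k \le \mathrm{basis}_k \cdot U_k && \text{(a nonbasic variable is forced to 0)}\\
& \textstyle\sum_k \mathrm{basis}_k = m, \qquad \mathrm{basis}_k \in \{0,1\}
\end{aligned}
\]

The Cplex solution pool then enumerates feasible basis patterns; this is only a restatement of the idea above, not a tested model.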
Short version of the question: I get overflow or timeout with the current settings when calculating with large int64_t and double values; is there any way to avoid this?
Test case:
If the only demand is 80,000,000,000, it is solved with the correct result. But if it is 800,000,000,000, an incorrect 0 is returned.
If the input has two or more demands (meaning more inequalities need to be calculated), smaller values will also cause incorrect results; e.g., three equal demands of 20,000,000,000 will cause the problem.
I'm using the COIN-OR CLP linear programming solver to solve some network flow problems. I use int64_t to represent the link bandwidth, but CLP uses double most of the time and cannot easily be switched to other types.
When the values of the variables are not that large (typically smaller than 10,000,000,000) and the constraints (inequalities) are relatively few, it gives the solution I want. But if either of those factors increases, the tool stops and returns a 0-value solution. I think the reason is that the calculation complexity exceeds some maximum, so the program breaks at some point (it uses the LP simplex method).
The inequality is some kind of:
totalFlowSum <= usePercentage * demand
I changed it to
totalFlowSum - usePercentage * demand <= 0
Since totalFlowSum and demand are very large int64_t values and usePercentage is a double, if there are too many constraints like this (several or even more), or if the demand is larger than 100,000,000,000, the returned solution will be wrong.
Is there any way to correct this, like increasing the break threshold or avoiding this magnitude of calculation?
Decreasing the accuracy somewhat is acceptable. One possible solution I have is to make the inputs 1,000 times smaller and the outputs 1,000 times larger, but this is kind of naïve and may require too much code modification in the program.
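As a sketch of that scaling idea (hypothetical helper names, not my actual code): keep the model in scaled units and only convert at the boundaries, so the solver never sees the huge raw values.

#include <cstdint>

// Common scaling factor, chosen so that the coefficients passed to CLP stay in a
// well-conditioned range (e.g. around 1e3-1e6 instead of 1e11).
static const double SCALE = 1e6;

double       toSolverUnits(std::int64_t raw) { return static_cast<double>(raw) / SCALE; }
std::int64_t fromSolverUnits(double scaled)  { return static_cast<std::int64_t>(scaled * SCALE + 0.5); }

// The constraint totalFlowSum - usePercentage * demand <= 0 would then be built from
// toSolverUnits(flow_k) and toSolverUnits(demand), and flow values read back from the
// solution would be converted with fromSolverUnits().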
Update:
I have changed the formulation to
totalFlowSum / demand - usePercentage <= 0
but the problem still exists.
Update 2:
I divided usePercentage by 1000, changing its coefficient from 1 to 0.001, and it worked. But if I also divide totalFlowSum/demand by 1000 at the same time, there is still no result. I don't know why...
I changed the rhs of the inequalities from 0 to 0.1, and the problem is then solved! Since the inputs are very large, a 0.1 offset won't impact the solution at all.
I think the reason is that the previous coefficients were badly scaled, so the solver failed to find an exact answer.
Here's the problem:
I am currently trying to create a control system which is required to find a solution to a series of complex linear equations without a unique solution.
My problem arises because there will only ever be six equations, while there may be upwards of 20 unknowns (usually many more than six unknowns). Of course, this will not yield a unique solution through standard Gaussian elimination or by converting the system to a matrix in reduced row echelon form.
However, I think that I may be able to optimize things further and get a more accurate solution because I know that each of the unknowns cannot have a value smaller than zero or greater than one, but it is free to take on any value in between them.
Of course, I am trying to create code that finds a correct solution, but in case there are multiple combinations that yield satisfactory results, I want to minimize the sum of (value of unknown * efficiency constant) over all unknowns, i.e. Sigma[x_i * e_i] from i = 0 to n; however, finding an accurate solution is the greater priority.
Performance is also important, due to the fact that this algorithm may need to be run several times per second.
So, does anyone have any ideas to help me on implementing this?
Edit: You might just want to stick to linear programming with equality and inequality constraints, but here's an interesting exact solution that does not incorporate the constraint that your unknowns are between 0 and 1.
Here's a powerpoint discussing your problem: http://see.stanford.edu/materials/lsoeldsee263/08-min-norm.pdf
I'll translate your problem into math to make things a bit easier to figure out:
you have a 6x20 matrix A and a vector x with 20 elements. You want to minimize (x^T)e subject to Ax = y. According to the slides, if you were just minimizing the norm of x, then the answer is A^T(AA^T)^(-1)y. I'll take another look at this as soon as I get the chance and see what the solution is to minimizing (x^T)e (i.e. your specific problem).
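A minimal sketch of that formula, assuming the Eigen library (not mentioned in the thread) and using a factorization instead of forming the inverse explicitly:

#include <Eigen/Dense>

// Least-norm solution of A x = y for a wide, full-row-rank A (6 x 20 here):
// x = A^T (A A^T)^(-1) y, computed via an LDL^T solve of the small 6 x 6 system.
Eigen::VectorXd leastNormSolution(const Eigen::MatrixXd& A, const Eigen::VectorXd& y) {
    Eigen::VectorXd z = (A * A.transpose()).ldlt().solve(y);
    return A.transpose() * z;
}

Note that this does not enforce the [0, 1] bounds on the unknowns.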
Edit: I looked in the powerpoint some more and near the end there's a slide entitled "General norm minimization with equality constraints". I am going to switch the notation to match the slide's:
Your problem is that you want to minimize ||Ax-b||, where b = 0 and A is your e vector and x is the 20 unknowns. This is subject to Cx=d. Apparently the answer is:
x = (A^T A)^(-1) (A^T b - C^T (C (A^T A)^(-1) C^T)^(-1) (C (A^T A)^(-1) A^T b - d))
It's not pretty, but it's not as bad as you might think. There really aren't that many calculations. For example, (A^T A)^(-1) only needs to be calculated once, and then you can reuse the answer. And your matrices aren't that big.
Note that I didn't incorporate the constraint that the elements of x are within [0,1].
It looks like the solution for what I am doing is linear programming. It is starting to come back to me, but if I have other problems I will post them in their own dedicated questions instead of turning this one into an encyclopedia.
Sorry if the question is very naive.
I will have to check the below condition in my code
0 < x < y
i.e code similar to if(x > 0 && x < y)
The basic problem at the system level is: currently, for every call (telecom domain terminology), my existing code is hit (many times), so performance is very critical. Now, I need to add a boundary check (at many locations, with a different boundary comparison at each location).
At a normal level of coding, the above comparison would look trivial and without any issue. However, when added to my statistics module (which is called many times), performance will go down.
So I would like to know the best possible way to handle the above scenario (an optimal technique for limit checking). For example, does bit comparison work better than normal comparison, or can both comparisons be evaluated in a shorter time span?
Other Info
x is an unsigned integer (which must be checked to be greater than 0 and less than y).
y is an unsigned integer.
y is non-const and varies for every comparison.
Here time is the constraint compared to space.
Language - C++.
Now, if later I need to change the type of y to a float/double, would there be another way to optimize the check (i.e. will the suggested optimal technique for integers become non-optimal when y is changed to float/double)?
Thanks in advance for any input.
PS: The OSes used are SUSE 10 64-bit x86_64, AIX 5.3 64-bit, and HP-UX 11.1 A 64-bit.
As always, profile first, optimize later. But, given that this is actually an issue, these could be things to look into:
"Unsigned and greater than zero" is the same as "not equal to zero", which is usually about as fast as a comparison gets. So a first optimization would be to do x != 0 && x < y.
Make sure that the comparison that is most likely to fail is the first one, to maximize the gain from short-circuiting.
If possible, use compiler directives to tell the compiler about the most likely code path; this will optimize instruction prefetching etc. E.g. for GCC, look at something like the likely()/unlikely() macros used in the Linux kernel (see the sketch after this list).
I don't think tricks with subtraction and comparison against zero, etc. will be of any gain. If that is the most effective way to do a less-than comparison, you can be sure your compiler already knows about it.
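The compiler-directive idea from above, as the kernel-style macros (GCC/Clang-specific, just a sketch):

// Hint the compiler about the expected truth value of a condition, so it can lay out
// the fast path first (and help static branch prediction).
#define likely(cond)   __builtin_expect(!!(cond), 1)
#define unlikely(cond) __builtin_expect(!!(cond), 0)

// Usage with the check from the question:
// if (likely(x != 0 && x < y)) { /* common path */ }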
This eliminates a compare and branch at the expense of two subtractions; it should be faster:
(x-1) < (y-1)
It works as long as y is guaranteed non-zero.
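For illustration, a sketch of that trick wrapped in a helper (it relies on unsigned wraparound: if x == 0, then x - 1 wraps to the maximum unsigned value, which is never less than y - 1 when y >= 1):

// Single-comparison version of (x > 0 && x < y); requires y != 0 and unsigned operands.
static inline bool inOpenRange(unsigned int x, unsigned int y) {
    return (x - 1u) < (y - 1u);
}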
You probably don't need to change y to a float or a double; you should endeavor to stay in integer for as much as you can. Instead of representing y as seconds, try microseconds or milliseconds (depending on the resolution you need).
Anyway, I suspect you can change
if (x > 0 && x < y)
;
to
if ((unsigned int)x < (unsigned int)y)
;
but that's probably not going to actually speed anything up. Checking against zero is often one or two instructions (depending on ISA) so the read from memory is certainly the bottleneck here.
After you've profiled your code and determined that this is actually where the performance problems are, you could investigate helping the branch predictor, since a lot of time can be wasted there if branches are regularly mispredicted. Different compilers do it differently, but some have an intrinsic like GCC's __builtin_expect(x < y, 1), which tells the compiler to assume the condition is usually true.