I want to solve an ILP for a graph labelling problem using a branch and bound algorithm. My ILP has an objective function and a constraint for every possible pair of nodes from the graph, so I have roughly n^2 / 2 constraints, and they have the following shape:
for two vertices i, j \in V(G), add the constraint:
|X_i - X_j| >= k + 1 - d(i,j)
It's obvious that this isn't linear, so I want to implement my own algorithm that uses binary branch and bound:
start with an empty LP that contains only the objective function
start with an empty branch tree with a root node
then, at every node in the branch tree, create two child nodes:
in the first child node,
add the constraint X_i - X_j >= k + 1 - d(i,j)
and the constraint X_i >= X_j
in the second child node,
add the constraint -(X_i - X_j) >= k + 1 - d(i,j)
and the constraint X_j >= X_i
repeat this step, and possibly bound, until we find the best solution
If I run this branch and bound algorithm and search for the best solution, I would get the best assignment of which variable should be greater than which.
Then I would be left with an LP with plain linear constraints, and from it I would get the optimal integer solution.
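To make the branching step concrete, here is a rough skeleton of the recursion I have in mind (illustrative only; solveRelaxation is a placeholder for solving the current LP relaxation, e.g. with CPLEX, and Constraint just records which of the two branches was taken):

    #include <cstddef>
    #include <initializer_list>
    #include <limits>
    #include <utility>
    #include <vector>

    struct Constraint { int i, j; bool iGreater; };        // iGreater: branch "X_i >= X_j"

    // Placeholder: solve the LP with the currently active branching constraints
    // and return its objective value (a lower bound for this node).
    double solveRelaxation(const std::vector<Constraint>& /*active*/) { return 0.0; }

    double bestKnown = std::numeric_limits<double>::infinity();
    std::vector<std::pair<int, int>> allPairs;              // every vertex pair (i, j)

    void branch(std::vector<Constraint>& active, std::size_t nextPair)
    {
        double bound = solveRelaxation(active);
        if (bound >= bestKnown) return;                     // bound: prune this subtree
        if (nextPair == allPairs.size()) {                  // every pair is ordered: leaf
            bestKnown = bound;
            return;
        }
        auto [i, j] = allPairs[nextPair];
        for (bool iGreater : {true, false}) {               // the two child nodes
            // this stands for adding X_i - X_j >= k + 1 - d(i,j) and X_i >= X_j
            // (or the mirrored pair of constraints when iGreater is false)
            active.push_back({i, j, iGreater});
            branch(active, nextPair + 1);
            active.pop_back();                              // undo before the sibling
        }
    }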
My question now is: what is the best way of doing that in C++ and CPLEX?
Should I always create a new IloModel in CPLEX? Or can I do it all in the same IloEnv?
I know that there are built-in options to customize the branch and bound in CPLEX, but I would like to do it "manually", so that I can also implement a function that selects the "best" next constraint to add.
Thank you very much in advance!
Within CPLEX you also have Constraint Programming, and with OPL, for example, you can directly write
forall(ordered c1, c2 in Cells: distance[c1,c2] > 0) {
  forall(t1 in 1..nbTrans[c1]) {
    forall(t2 in 1..nbTrans[c2])
      abs(freq[<c1,t1>] - freq[<c2,t2>]) >= distance[c1,c2];
  }
}
which is part of a frequency allocation model.
The same can be done in C++.
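If you want to drive the branch and bound yourself with the Concert Technology (C++) API instead, you do not need a fresh IloModel per node: you can keep one IloEnv, one IloModel and one IloCplex object, add the two branching constraints before exploring a child, and remove them again afterwards. A minimal sketch (with an example objective and example sizes n, k and d(i,j); not a complete branch and bound program):

    #include <ilcplex/ilocplex.h>
    ILOSTLBEGIN

    int main()
    {
        IloEnv env;
        IloModel model(env);
        const int n = 5, k = 2;                           // example sizes
        IloNumVarArray x(env, n, 0, n * n, ILOFLOAT);     // the labels X_i
        IloExpr obj(env);
        for (int i = 0; i < n; ++i) obj += x[i];          // example objective
        model.add(IloMinimize(env, obj));
        obj.end();
        IloCplex cplex(model);
        cplex.setOut(env.getNullStream());

        // one branch node: impose X_0 >= X_1 together with the distance constraint
        double dij = 1.0;
        IloConstraint c1 = (x[0] - x[1] >= k + 1 - dij);
        IloConstraint c2 = (x[0] >= x[1]);
        model.add(c1);
        model.add(c2);
        if (cplex.solve())
            env.out() << "bound of this node: " << cplex.getObjValue() << "\n";
        model.remove(c1);      // undo the branching before exploring the sibling
        model.remove(c2);

        env.end();
        return 0;
    }

Removing the constraints restores the parent relaxation, and the solver can usually warm-start the next solve from the previous basis, which is much cheaper than rebuilding a model for every node.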
A construction company has 6 projects; for project $i$ it needs $d_i$ workers. The company has no workers at the beginning of project 1.
Hiring new workers requires a safety course that costs 300, plus 50 more for each new worker.
If there are no new workers there is no course.
Firing a worker does not cost any money, and a worker can't be rehired.
Given that the salary of a worker is 100 per project, formulate a linear programming problem that minimizes the total worker cost.
What I tried:
Let $x_i$ be the number of new workers for project $i$.
Let $y_i$ be the number of old workers remaining from previous projects until project $i$ (all the workers hired - all the workers that were fired)
Let $z_i$ be an indicator such that $z_i =0 \iff x_i>0$
The function I'm trying to solve is:
$\min(\sum_{i=1}^6 150x_i + 300(1-z_i) + 100y_i)$
s.t:
\begin{align}
x_i,y_i,z_i &\ge 0 \\
z_i &\ge 1-x_i \\
y_i + x_i &\ge d_i \\
y_i &\ge y_{i-1} + x_i
\end{align}
Something doesn't feel right to me, mainly because I tried to solve this with MATLAB and it failed.
What did I do wrong? How can I solve this question?
If I see this correctly, you have two small mistakes in your constraints.
The first appears when you use z_i >= 1 - x_i. This allows z_i to take the value 1 all the time, so the extra cost of 300 would never be incurred. You need to upper bound z_i so that it cannot be 1 when x_i > 0. For this you need something called big M: for a sufficiently large M, use z_i <= 1 - x_i/M. This way, when x_i = 0 you can have z_i = 1; otherwise the right hand side is smaller than 1 and, due to integrality, z_i has to be zero. Note that you usually want to choose M as tight as possible, so in your case d_i might be a good choice.
The second small mistake lies in y_i >= y_{i-1} + x_i. With this constraint you can increase y_i beyond y_{i-1} without having to set any x_i. To force the x variables to account for that increase you need to flip the inequality. Additionally, by the way you defined y_i, this inequality should refer to x_{i-1}. Thus you should end up with y_i <= y_{i-1} + x_{i-1}. You also need to take care of the corner case (i.e. y_1 = 0).
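Putting both fixes together, the formulation I would try looks roughly like this (with $M$ a sufficiently large constant, e.g. $M = \max_j d_j$, $z_i$ binary, and your objective kept as you wrote it):
$\min \sum_{i=1}^6 150x_i + 300(1-z_i) + 100y_i$
s.t.
\begin{align}
z_i &\le 1 - \frac{x_i}{M} \\
y_i + x_i &\ge d_i \\
y_i &\le y_{i-1} + x_{i-1} \quad (i \ge 2) \\
y_1 &= 0 \\
x_i, y_i &\ge 0, \quad z_i \in \{0,1\}
\end{align}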
I think with these two changes it should work. Let me know whether it helped you. And if it still doesn't work I might have missed something.
Consider the classic network flow problem where the constraint is that the outflow from a vertex is equal to the sum of the inflows to it. Consider having a more specific constraint where the flow can be split between edges.
I have two questions:
How can I use a decision variable to identify that node j is receiving items from multiple edges?
How can I create another equation to determine the cost (2 units of time per item) of joining x items from different edges in the sink node?
This is a tricky modeling question. Let's take it part by part.
Consider having a more specific constraint where the flow can be split between edges
Here I assume that you have the classic flow conservation modeled with a set of real variables y_ij, so the flow can already be split between two or more arcs.
How can I use a decision variable to identify that node j is receiving items from multiple edges?
You need to create an additional binary variable z_ij that indicates whether arc (i, j) carries any flow, and link it to the flow with a constraint of the form y_ij <= M * z_ij, where M is an upper bound on the flow an arc can carry. This forces z_ij to 1 on every arc that is actually used.
Next, you will need another additional integer variable set, say p_j, together with the constraint sum_i z_ij <= p_j for every node j.
Then p_j will store the number of ingoing arcs of node j that are used to send flow. Since you will try to minimize the cost of joining arcs (I think), the <= is enough: the minimization will push p_j down to the actual count.
How can I create another equation to determine the cost (2 units of time per item) of joining x items from different edges in the sink node?
For this, you can take the value of p_j at the sink node and multiply it by the predefined cost of joining the flow.
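Summing up, the additions could look roughly like this (M is an upper bound on the flow of an arc, t is the sink node, and c is the predefined joining cost; the exact constants depend on your data):

    y_ij <= M * z_ij          for every arc (i, j)
    sum_i z_ij <= p_j         for every node j
    extra objective term:  c * p_t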
Assume I have the following array of objects:
Object 0:
[0]=1.1344
[1]=2.18
...
[N]=1.86
-----------
Object 1 :
[0]=1.1231
[1]=2.16781
...
[N]=1.8765
-------------
Object 2 :
[0]=1.2311
[1]=2.14781
...
[N]=1.5465
--------
Object 17:
[0]=1.31
[1]=2.55
...
[N]=0.75
How can I compare those objects?
You can see that object 0 and object 1 are very similar but object 17 not like any of them.
I would like to have an algorithm that will give me all the similar objects in my array.
You tagged this question with algorithm (and I am not an expert in C++), so let's give pseudocode.
First, you should set a threshold that defines two values as similar when their difference is under it. The second step is to loop over all pairs of elements and check them for similarity.
Consider A to be an array with n objects and m to be the number of fields in each object.
threshold = 0.1
for i in (0, n):
    for j in (i+1, n):
        flag = true
        for k in (0, m):
            if (abs(A[i][k] - A[j][k]) > threshold):
                flag = false   // the difference in this field is above the threshold: not similar
                break          // no need to continue the checks
        if (flag):
            print: elements i and j are similar   // and do whatever you need
Time complexity is O(m * n^2).
Notice that you can use the same idea to sort the objects array: declare a compare function as the maximum difference over the fields and then sort accordingly.
Hope that helps!
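A rough, untested C++ translation of the pseudocode above could look like this (assuming the objects are stored as rows of a std::vector<std::vector<double>>):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Print every pair of objects whose fields all differ by less than `threshold`.
    void printSimilarPairs(const std::vector<std::vector<double>>& a, double threshold = 0.1)
    {
        for (std::size_t i = 0; i < a.size(); ++i) {
            for (std::size_t j = i + 1; j < a.size(); ++j) {
                bool similar = true;
                for (std::size_t k = 0; k < a[i].size(); ++k) {
                    if (std::fabs(a[i][k] - a[j][k]) > threshold) {
                        similar = false;   // one field differs too much
                        break;             // no need to check the remaining fields
                    }
                }
                if (similar)
                    std::printf("objects %zu and %zu are similar\n", i, j);
            }
        }
    }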
Your problem essentially boils down to nearest neighbor search, which is a well researched problem in data mining.
There are different approaches to this problem.
I would suggest deciding first how many similar elements you want, OR setting a given threshold for the similarity. Then you have to iterate through all the vectors and compute a distance function between the query vector and each vector in the database.
I would suggest you use the Euclidean distance in your case, since you have real-valued numerical data.
You can read more about the topic of nearest neighbor search and Euclidean distance here and here. Good luck!
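For instance, a brute force nearest neighbor lookup with Euclidean distance might look roughly like this (plain C++, no external libraries assumed):

    #include <cmath>
    #include <cstddef>
    #include <limits>
    #include <vector>

    // Squared Euclidean distance between two equally sized vectors.
    double squaredDistance(const std::vector<double>& a, const std::vector<double>& b)
    {
        double sum = 0.0;
        for (std::size_t k = 0; k < a.size(); ++k) {
            double diff = a[k] - b[k];
            sum += diff * diff;
        }
        return sum;
    }

    // Index of the vector in `data` closest to the vector at index `query`.
    std::size_t nearestNeighbor(const std::vector<std::vector<double>>& data, std::size_t query)
    {
        std::size_t best = query;
        double bestDist = std::numeric_limits<double>::infinity();
        for (std::size_t i = 0; i < data.size(); ++i) {
            if (i == query) continue;                 // don't compare the query with itself
            double d = squaredDistance(data[i], data[query]);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }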
What you need is a classifier; for your problem there are two algorithms, depending on what you want.
If you need to find which object is most similar to a chosen object m, you can use the nearest neighbor algorithm; if instead you need to find sets of similar objects, you can use the k-means algorithm to find k sets.
First let me explain the problem I'm trying to solve. I'm integrating my code with 3rd party library which does quite complicated financial predictions. For the purposes of this question let's just say I have a blackbox which returns y when I pass in x.
Now, what I need to do is find input (x) for a given output (y). Since I know lowest and highest possible input values I wrote the following algorithm:
define starting input range (minimum input value to maximum input value)
divide the range into two equal parts and find output for a middle value
find which half the output falls into
repeat steps 2 and 3 until range is too small to divide any further
This algorithm does the job nicely, I don't see any problems with it. However, is there a faster way to solve this problem?
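In code, what I am doing is essentially plain bisection (the output grows with the input, which is why the halving works); simplified, it looks like this, where f stands for the black-box call:

    #include <functional>

    // Find x in [lo, hi] with f(x) close to target, assuming f is increasing.
    // `f` stands for the black-box prediction call.
    double findInput(const std::function<double(double)>& f,
                     double lo, double hi, double target, double eps = 1e-9)
    {
        while (hi - lo > eps) {          // stop when the range is too small to split further
            double mid = 0.5 * (lo + hi);
            if (f(mid) < target)
                lo = mid;                // the target lies in the upper half
            else
                hi = mid;                // the target lies in the lower half
        }
        return 0.5 * (lo + hi);
    }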
It sounds like x and y are strongly correlated (i.e. as x increases, so does y), as otherwise your divide and conquer algorithm wouldn't work.
Assuming this is the case, and you could work out a correlation factor, then you might be able to use that factor to interpolate where the expected value should lie within the range, and potentially home in on it more quickly.
Please note that I've not tested this idea at all, but it's something to think about. Possible improvements would be to make the correlationFactor a moving average, or to precompute it based on, say, the deciles between xLow and xHigh.
Also, this assumes that calling f(x) is relatively inexpensive. If it is expensive, then the increased number of calls to f(x) would dwarf any savings. In fact - I'm starting to think this is a stupid idea...
Hopefully the following pseudo-code illustrates what I mean:
DivideAndConquer(xLow, xHigh, correlationFactor, expectedValue)
    // interpolate a guess inside the current range using the correlation factor
    xMid = xLow + (xHigh - xLow) * correlationFactor
    // Add some range checks to make sure that xMid stays strictly between xLow and xHigh!!
    y = f(xMid)
    if (y == expectedValue)
        return xMid
    elseif (y < expectedValue)
        // the answer lies above xMid (y grows with x); interpolate in the upper half
        correlationFactor = (expectedValue - y) / (f(xHigh) - y)
        return DivideAndConquer(xMid, xHigh, correlationFactor, expectedValue)
    else
        // the answer lies below xMid; interpolate in the lower half
        correlationFactor = (expectedValue - f(xLow)) / (y - f(xLow))
        return DivideAndConquer(xLow, xMid, correlationFactor, expectedValue)
I need to partition approximately 50000 points into distinct clusters. There is one requirement: the size of every cluster cannot exceed K. Is there any clustering algorithm that can do this job?
Please note that upper bound, K, of every cluster is the same, say 100.
Most clustering algorithms can be used to create a tree in which the lowest level is just a single element - either because they naturally work "bottom up", by joining pairs of elements and then groups of joined elements, or because - like K-Means - they can be used to repeatedly split groups into smaller groups.
Once you have a tree, you can decide where to split off subtrees to form your clusters of size <= 100. Pruning an existing tree is often quite easy. Suppose that you want to divide an existing tree to minimise the sum of some cost of the clusters you create. You might have:
f(tree_node, list_of_clusters)
{
    cost = infinity;
    if (size of tree below tree_node <= 100)
    {
        cost = cost_function(stuff below tree_node);
    }
    if (tree_node is a leaf)
    {
        // a leaf can only become its own cluster
        list_of_clusters.add(tree_node);
        return cost;
    }
    temp_list = new List();
    cost_children = 0;
    for (child of tree_node)
    {
        cost_children += f(child, temp_list);
    }
    if (cost_children < cost)
    {
        // splitting into the children's clusters is cheaper
        list_of_clusters.add_all(temp_list);
        return cost_children;
    }
    // keeping this whole subtree as one cluster is cheaper (and it fits the size limit)
    list_of_clusters.add(tree_node);
    return cost;
}
One way is to use hierarchical K-means: keep splitting each cluster that is larger than K until all of them are small enough (a sketch of this splitting loop is shown below).
Another (in some sense opposite) approach would be to use hierarchical agglomerative clustering, i.e. a bottom-up approach, and again make sure you don't merge clusters if they would form a new one of size > K.
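A minimal sketch of the top-down splitting idea, assuming a hypothetical twoMeans(points) helper that splits one cluster into two non-empty parts (for example one k-means run with k = 2):

    #include <cstddef>
    #include <utility>
    #include <vector>

    using Point = std::vector<double>;
    using Cluster = std::vector<Point>;

    // Hypothetical helper: split one cluster into two non-empty parts
    // (e.g. with a k-means run where k = 2).
    std::pair<Cluster, Cluster> twoMeans(const Cluster& points);

    // Keep splitting any cluster larger than maxSize until every cluster fits.
    std::vector<Cluster> capClusterSize(Cluster points, std::size_t maxSize)
    {
        std::vector<Cluster> work;
        work.push_back(std::move(points));
        std::vector<Cluster> done;
        while (!work.empty()) {
            Cluster c = std::move(work.back());
            work.pop_back();
            if (c.size() <= maxSize) {
                done.push_back(std::move(c));    // small enough: keep it
            } else {
                auto [a, b] = twoMeans(c);       // too big: split and re-check both halves
                work.push_back(std::move(a));
                work.push_back(std::move(b));
            }
        }
        return done;
    }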
The issue with naive clustering is that you do indeed have to calculate a distance matrix that holds the distance of every member from every other member in the set. It depends on whether you've pre-processed the population, or whether you're amalgamating the clusters into typical individuals and then recalculating the distance matrix again.