I am new to Scilab, and I am planning to use it for linear programming. I googled and found this code:
Aeq = [
2 0 6
0 3 5
];
beq = [8;15;30];
c = [30;50];
[xopt]=karmarkar(Aeq,beq,c);
But it seems there is a problem with karmarkar. Can anyone tell me how to fix it? I tried help karmarkar because I thought it might work the same way as in Matlab, but it didn't.
The parameters Aeq and beq represent linear equality constraints. Each row of Aeq has the coefficients of an equation, and the corresponding row of beq is the right hand side. Therefore, the number of rows in Aeq and beq must be the same.
Also, the number of rows of c must be equal to the number of variables you have, in this case three.
To summarize: when you have m constraints and n variables, the matrices have the following sizes:
Aeq is m by n
beq is m by 1
c is n by 1
Working example:
Aeq = [
2 0 6
0 3 5
]
beq = [8;15]
c = [30;50;70]
[xopt]=karmarkar(Aeq,beq,c)
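If it helps to sanity-check the dimensions outside Scilab, here is the same equality-constrained LP as a minimal Python/scipy sketch (this assumes karmarkar solves the standard form min c'x subject to Aeq*x = beq, x >= 0):

from scipy.optimize import linprog

# Same LP as the working example above (illustrative cross-check only)
Aeq = [[2, 0, 6],
       [0, 3, 5]]    # m = 2 constraints, n = 3 variables
beq = [8, 15]        # length m
c = [30, 50, 70]     # length n
res = linprog(c, A_eq=Aeq, b_eq=beq, bounds=[(0, None)] * 3)
print(res.x)         # optimal x, with x >= 0 enforced via bounds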
I have the following statements for a MILP:
Variables: c (can be 1 or 0), αj (real numbers with 0 <= αj <= 1)
I have a linear system for the αj:
∑vj * αj = 0 (with vj = constants)
∑αj = c
and I have the following logic:
If there exists a solution for c = 1, the formulation should be infeasible
If the only solution is c = 0 (each αj must be 0), the formulation should be feasible
I need some more equations or changes so that the logic above holds.
First idea:
When I use an additional constraint c = 1, the MILP finds a solution for c = 1 and no solution for c = 0. This helps to identify whether c can be 1, but it flips the feasible solution space, since the solver fails when c = 0, which should be the feasible case. Adding the constraint c = 0 will not help either, since it is not enough that c = 0 is one potential solution; it must be the only valid solution.
Second idea:
When I use the objective function max(c), I can conclude that:
IF max(c) = 1 THEN not feasible (or IF max(c) = 0 THEN feasible)
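For concreteness, this second idea might be sketched like this (hypothetical PuLP code; the constants vj are made up for illustration):

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

v = [1.0, -2.0, 1.0]                      # placeholder constants v_j
prob = LpProblem("check_c", LpMaximize)
c = LpVariable("c", cat="Binary")
alpha = [LpVariable(f"alpha{j}", lowBound=0, upBound=1) for j in range(len(v))]
prob += c                                 # objective: max c
prob += lpSum(vj * aj for vj, aj in zip(v, alpha)) == 0   # sum v_j * alpha_j = 0
prob += lpSum(alpha) == c                 # sum alpha_j = c
prob.solve()
print("feasible" if value(c) == 0 else "not feasible")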
However, I don't want to use c in the objective function.
Is there any other possibility to change the formulation so that the logic above holds?
I have two data frames, shares and Log_Returns. I want to multiply the first value of shares with the first column of Log_Returns, the second value with the second column, and so on.
shares
0
0 0.319297
1 99.680703
Log_return
HIGH MID
0 -1.998061 -1.991331
1 -0.014573 -1.981635
2 -2.015117 -1.978619
3 0.028488 -2.000455
I have tried the following code, but it prints the output as a flat series of numbers:
for j in range(Log_return.shape[1]):
    i = j
    for k in range(len(Log_return)):
        print(shares.iloc[i, 0] * Log_return.iloc[k, j])
-0.6379746350243671
-0.004652966683445539
-0.6434205307519527
0.009096189190505598
-198.49724749816215
-197.53080296760479
-197.23014837824016
-199.40677745293362
I would like the result in the following form:
Log_return
HIGH MID
B -0.637975 -198.497247
C -0.004653 -197.530803
D -0.643421 -197.230148
E 0.009096 -199.406777
Using numpy:
import numpy as np
np.multiply(log_return, shares.T)
should give the desired output.
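Alternatively, a pandas-only sketch of the same idea (assuming, as in the posted data, that the rows of shares line up with the columns of Log_return in order):

# A 1-D array broadcasts across the columns of a DataFrame,
# so each column of Log_return is scaled by the matching share.
result = Log_return * shares.iloc[:, 0].to_numpy()
print(result)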
So I am trying to calculate this formula, but the results are strange. The elements are extremely large, so I am not sure where I went wrong. I have attached a photo of the formula:
and here is my code:
* calculating mu_sum and sigma_sum;
T_hat = 180;
mu_sum_first_part = {0,0,0,0};
mu_sum_second_part = {0,0,0,0};
mu_sum = {0,0,0,0};

* calculating mu_sum;
do i = 0 to T_hat;
   term = (T_hat - i)*(B0**i)*a;
   mu_sum_first_part = mu_sum_first_part + term;
end;
do i = 1 to T_hat;
   term = B0**i;
   mu_sum_second_part = mu_sum_second_part + term;
end;
mu_sum = mu_sum_first_part + mu_sum_second_part*zt;
print mu_sum;

* calculating sigma_sum;
term = I(4);
sigma_sum = sigma;
do j = 1 to T_hat;
   term = term + B0**j;
   sigma_sum = sigma_sum + (term*sigma*(term`));
end;
print sigma_sum;
I know this is long but please help!!
The first thing that jumps out at me: the first loop in your mu calculation has one term too many:
do i = 0 to T_hat;
   term = (T_hat - i)*(B0**i)*a;
   mu_sum_first_part = mu_sum_first_part + term;
end;
Should be:
do i = 0 to T_hat-1;
   term = (T_hat - i)*(B0**i)*a;
   mu_sum_first_part = mu_sum_first_part + term;
end;
There is nothing mathematically wrong with your program. When you are raising a matrix to the 180th power, you should not be surprised to see very large or very small values. For example, if you let
B0 = {
0 1 0 0,
0 0 1 0,
0 0 0 1,
0 1 1 1
};
then the elements of B0**180 are O( 1E47 ). If you divide B0 by 2 and raise the result to the 180th power, then the elements are O( 1E-8 ).
Presumably these formulas are intended for matrices B0 that have a special structure, such as ||B0**n|| --> 0 as n --> infinity (equivalently, all eigenvalues of B0 strictly inside the unit circle). Otherwise the power series won't converge. I suggest you double-check that the B0 you are using satisfies the assumptions of the reference.
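You can check that condition numerically. A minimal Python/numpy sketch, assuming your B0 is available as an array:

import numpy as np

# The truncated series only behaves if powers of B0 shrink,
# i.e. the spectral radius of B0 is strictly less than 1.
rho = max(abs(np.linalg.eigvals(B0)))
print("spectral radius of B0:", rho)   # want rho < 1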
You didn't ask about efficiency, but you would be wise to compute the truncated power series by using Horner's method in SAS/IML, rather than explicitly forming powers of B0.
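For illustration, here is the Horner idea as a Python/numpy sketch (the same scheme carries over to SAS/IML; B0, a, and T are placeholders for your inputs):

import numpy as np

def mu_first_part(B0, a, T):
    # Evaluates sum_{i=0}^{T-1} (T - i) * B0**i @ a without forming
    # explicit matrix powers: work from the innermost coefficient outward.
    acc = np.zeros_like(a, dtype=float)
    for coeff in range(1, T + 1):   # coefficients 1, 2, ..., T
        acc = B0 @ acc + coeff * a
    return acc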
I have an optimization problem that I need to solve. It's a binary linear programming problem, so all of the decision variables are equal to 0 or 1. I need certain combinations of these decision variables to add up to either 0 or 2 or more; they cannot sum to 1. I'm struggling with how to accomplish this in PROC OPTMODEL.
Something like this is what I need:
con sum_con: x+y+z~=1;
Unfortunately, this just throws a syntax error... Is there any way to accomplish this?
See below for a linear reformulation. However, you may not need it. In SAS 9.4m2 (SAS/OR 13.2), your expression works as written. You just need to invoke the (experimental) CLP solver:
proc optmodel;
   /* In SAS/OR 13.2 you can use your code directly.
      Just invoke the experimental CLP solver. */
   var x binary, y binary, z binary;
   con sum_con: x+y+z~=1;
   solve with clp / findall;
   print {i in 1 .. _NSOL_} x.sol[i]
         {i in 1 .. _NSOL_} y.sol[i]
         {i in 1 .. _NSOL_} z.sol[i];
produces immediately:
[1] x.SOL y.SOL z.SOL
1 0 0 0
2 0 1 1
3 1 0 1
4 1 1 0
5 1 1 1
In older versions of SAS/OR, you can still call PROC CLP directly, which is not experimental. The syntax for your example will be very similar to PROC OPTMODEL's.
I am sure, however, that your model has other variables and constraints. In that case, remember that no matter how you formulate this, it is still a search space with a hole in the middle, so it can potentially make the solver perform poorly. How poorly is hard to predict; it depends on other features of your model.
If MILP is a better fit for the rest of your model, you can reformulate your constraint as a valid MILP in two steps.
First, add a binary variable that is zero only when the expression is zero:
/* If solve with CLP is not available, you can linearize the disjunction: */
var IsGTZero binary; /* 1 if any variable in the expression is 1 */
con IsGTZeroBoundsExpression: 3 * IsGTZero >= x + y + z;
Then add another constraint that forces the expression to be at least the constant you want (in this case 2) when it is nonzero:
num atLeast init 2;
con ZeroOrAtLeast: x + y + z >= atLeast * IsGTZero;
min f=0; /* Explicit objectives are unnecessary in 13.2 */
solve;
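A quick brute-force sanity check of this linearization (my own, in Python; not part of the original answer):

from itertools import product

# For each binary (x, y, z), some binary IsGTZero = b must satisfy both
# constraints: 3*b >= x+y+z and x+y+z >= 2*b. That holds iff the sum != 1.
for x, y, z in product((0, 1), repeat=3):
    s = x + y + z
    feasible = any(3 * b >= s and s >= 2 * b for b in (0, 1))
    assert feasible == (s != 1)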
The following inequality should work:
(x+y-z)*z + (y+z-x)*x + (x+z-y)*y > -1
It can be generalized to more than three variables, and if you have a large number of variables you should be able to use index expansions to make it easier.
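For what it's worth, a small brute-force check (Python) over all binary assignments confirms that this inequality excludes exactly the assignments that sum to 1:

from itertools import product

for x, y, z in product((0, 1), repeat=3):
    lhs = (x + y - z) * z + (y + z - x) * x + (x + z - y) * y
    assert (lhs > -1) == (x + y + z != 1)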
OK guys, as requested, I will add more info so that you understand why a simple vectorized operation is not possible. It's not easy to explain in a few words, but let's see. I have a huge number of points over a 2D space.
I divide my space into a grid with a given resolution, say, 100 m. The main loop, which I am not sure is mandatory (any alternative is welcome), goes through EACH cell/pixel that contains at least 2 points (right now I am using the quadratcount method from the spatstat package).
Inside this loop, for each of these non-empty cells, I have to find and keep at most 10 Male-Female pairs that are within 3 meters of each other. The 3-meter buffer can be built with the disc function in spatstat, and points falling inside a buffer can be selected with the pnt.in.poly method from the SDMTools package. All this is because pixels have a maximum capacity that cannot be exceeded. Since each cell can contain hundreds or thousands of points, I am trying to find a smart way to use another loop (or a similar method) to:
1) go through each point, one at a time;
2) create a buffer and select points of the opposite sex;
3) save the closest Male-Female (0-1) pair in another data frame (called new_colonies);
4) remove those points from the data frame so that it shrinks and I don't have to consider them anymore;
5) as soon as that new data frame reaches 10 rows, stop everything and go to the next cell (thus skipping all remaining points).
Here is the code that I developed to be run within each cell (right now it takes too long):
head(df,20):
X Y Sex ID
2 583058.2 2882774 1 1
3 582915.6 2883378 0 2
4 582592.8 2883297 1 3
5 582793.0 2883410 1 4
6 582925.7 2883397 1 5
7 582934.2 2883277 0 6
8 582874.7 2883336 0 7
9 583135.9 2882773 1 8
10 582955.5 2883306 1 9
11 583090.2 2883331 0 10
12 582855.3 2883358 1 11
13 582908.9 2883035 1 12
14 582608.8 2883715 0 13
15 582946.7 2883488 1 14
16 582749.8 2883062 0 15
17 582906.4 2883317 0 16
18 582598.9 2883390 0 17
19 582890.2 2883413 0 18
20 582752.8 2883361 0 19
21 582953.1 2883230 1 20
Inside each cell I must run something like the following, according to what I explained above:
new_colonies <- data.frame(ID1=0, ID2=0, X=0, Y=0)  # initialize once, before the loop
for (i in 1:dim(df)[1]) {
  discbuff <- disc(radius, centre=c(df$X[i], df$Y[i]))
  # define the points and polygon
  pnts <- cbind(df$X[-i], df$Y[-i])
  polypnts <- cbind(x = discbuff$bdry[[1]]$x, y = discbuff$bdry[[1]]$y)
  out <- pnt.in.poly(pnts, polypnts)
  out$ID <- df$ID[-i]
  if (any(out$pip == 1)) {
    pnt.inBuffID <- out$ID[which(out$pip == 1)]
    cond <- df$Sex[i] != df$Sex[pnt.inBuffID]
    if (any(cond)) {
      eucdist <- sqrt((df$X[i] - df$X[pnt.inBuffID][cond])^2 +
                      (df$Y[i] - df$Y[pnt.inBuffID][cond])^2)
      IDvect <- pnt.inBuffID[cond]
      new_colonies_temp <- data.frame(
        ID1 = df$ID[i],
        ID2 = IDvect[which(eucdist == min(eucdist))],
        X = (df$X[i] + df$X[pnt.inBuffID][cond][which(eucdist == min(eucdist))]) / 2,
        Y = (df$Y[i] + df$Y[pnt.inBuffID][cond][which(eucdist == min(eucdist))]) / 2)
      new_colonies <- rbind(new_colonies, new_colonies_temp)
      if (dim(new_colonies)[1] == maxdensity) break
    }
  }
}
new_colonies <- new_colonies[-1, ]   # drop the dummy first row
Any help appreciated!
Thanks
Francesco
In your case I wouldn't worry about deleting the points as you go; skipping is the critical thing. I also wouldn't build up a new data.frame piece by piece like you seem to be doing. Both of those things slow you down a lot. A selection vector is much more efficient (perhaps as part of the data.frame, set to FALSE beforehand):
df$sel <- FALSE
Now, as you go through, set df$sel to TRUE for each item you want to keep, and just skip to the next cell when you find your 10. Deleting values as you go will be time-consuming and memory-intensive, as will slowly growing a new data.frame. When you're all done going through them, you can just select your data based on the selection column:
df <- df[ df$sel, ]
(or maybe make a copy of the data.frame at that point)
You also might want to use the dist function to calculate a matrix of distances.
from ?dist
"This function computes and returns the distance matrix computed by using the specified distance measure to compute the distances between the rows of a data matrix."
I'm assuming you are doing something sufficiently complicated that the for-loop is actually required...
So here's one rather simple approach: first just gather the rows to delete (or keep), and then delete the rows afterwards. Typically this will be much faster too since you don't modify the data.frame on each loop iteration.
df <- generateTheDataFrame()
keepRows <- rep(TRUE, nrow(df))
for (i in seq_len(nrow(df))) {
  rows <- findRowsToDelete(df, df[i, ])
  keepRows[rows] <- FALSE
}
# Delete afterwards
df <- df[keepRows, ]
...and if you really need to work on the shrunk data in each iteration, just change the for-loop part to:
for (i in seq_len(nrow(df))) {
  if (keepRows[i]) {
    rows <- findRowsToDelete(df[keepRows, ], df[i, ])
    keepRows[rows] <- FALSE
  }
}
I'm not exactly clear on why you're looping. If you could describe what kind of conditions you're checking there might be a nice vectorized way of doing it.
However, as a very simple fix, have you considered looping through the data frame backwards?
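The point of looping backwards, illustrated on a plain Python list (the same logic applies to data frame rows): deleting an element only shifts the indices after it, so earlier positions stay valid.

# Deleting while iterating forward skips elements; backward is safe.
items = [3, 1, 4, 1, 5, 9]
for i in range(len(items) - 1, -1, -1):
    if items[i] == 1:
        del items[i]     # indices below i are unaffected
print(items)             # -> [3, 4, 5, 9]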